modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (list) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string)
---|---|---|---|---|---|---|---|---|---
jackmedda/q-Taxi-v3
|
jackmedda
| 2023-02-18T18:15:09Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-02-18T17:25:49Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="jackmedda/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
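The `load_from_hub` helper used above is not part of a published package; it is defined in the Deep RL course notebook. A minimal sketch of what it does, assuming the repo stores the Q-table as a pickled dict with an `env_id` key (as the snippet implies):
```python
import pickle

from huggingface_hub import hf_hub_download


def load_from_hub(repo_id: str, filename: str) -> dict:
    """Download and unpickle a Q-learning model dict from the Hugging Face Hub."""
    # hf_hub_download returns the local path of the cached file
    local_path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(local_path, "rb") as f:
        return pickle.load(f)
```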
|
huggingtweets/akhund_bilal1
|
huggingtweets
| 2023-02-18T18:08:01Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-07-31T14:47:23Z |
---
language: en
thumbnail: http://www.huggingtweets.com/akhund_bilal1/1676743676971/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1578119450339016727/-cglkgsP_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Bilal</div>
<div style="text-align: center; font-size: 14px;">@akhund_bilal1</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Bilal.
| Data | Bilal |
| --- | --- |
| Tweets downloaded | 3035 |
| Retweets | 104 |
| Short tweets | 423 |
| Tweets kept | 2508 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/mzrxzfhy/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @akhund_bilal1's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/ttjyf21i) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/ttjyf21i/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
                     model='huggingtweets/akhund_bilal1')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
mahmoud-mohey/a2c-PandaReachDense-v2
|
mahmoud-mohey
| 2023-02-18T17:53:35Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-02-18T17:51:18Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -1.91 +/- 0.38
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of an **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
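Until the author fills in the code above, here is a minimal loading sketch. It assumes the checkpoint was pushed with `huggingface_sb3` under the conventional filename `a2c-PandaReachDense-v2.zip`, and that `panda_gym` is installed to register the environment:
```python
import gym
import panda_gym  # noqa: F401 -- importing registers PandaReachDense-v2 with gym

from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C
from stable_baselines3.common.evaluation import evaluate_policy

# Download the checkpoint from the Hub (the filename is an assumption)
checkpoint = load_from_hub(repo_id="mahmoud-mohey/a2c-PandaReachDense-v2",
                           filename="a2c-PandaReachDense-v2.zip")
model = A2C.load(checkpoint)

env = gym.make("PandaReachDense-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```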
|
huggingtweets/elonmusk-svembu
|
huggingtweets
| 2023-02-18T17:50:26Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-02-18T17:49:20Z |
---
language: en
thumbnail: http://www.huggingtweets.com/elonmusk-svembu/1676742622036/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1590968738358079488/IY9Gx6Ok_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1568853371146338308/w87i8uhE_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Elon Musk & Sridhar Vembu</div>
<div style="text-align: center; font-size: 14px;">@elonmusk-svembu</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Elon Musk & Sridhar Vembu.
| Data | Elon Musk | Sridhar Vembu |
| --- | --- | --- |
| Tweets downloaded | 3193 | 3248 |
| Retweets | 174 | 264 |
| Short tweets | 1048 | 45 |
| Tweets kept | 1971 | 2939 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/4x30aqaf/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @elonmusk-svembu's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/ryim7xj2) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/ryim7xj2/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
                     model='huggingtweets/elonmusk-svembu')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
phd411r1/SajjadAyoubi_xlm-roberta-large-fa-qa_finetune_on_hoshfa_3
|
phd411r1
| 2023-02-18T17:47:38Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"question-answering",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-02-18T17:15:04Z |
---
tags:
- generated_from_trainer
model-index:
- name: SajjadAyoubi_xlm-roberta-large-fa-qa_finetune_on_hoshfa_3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SajjadAyoubi_xlm-roberta-large-fa-qa_finetune_on_hoshfa_3
This model is a fine-tuned version of [SajjadAyoubi/xlm-roberta-large-fa-qa](https://huggingface.co/SajjadAyoubi/xlm-roberta-large-fa-qa) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8894
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a `TrainingArguments` sketch follows the list):
- learning_rate: 2e-05
- train_batch_size: 3
- eval_batch_size: 3
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
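For illustration, a `TrainingArguments` sketch matching the list above; the `output_dir` is a placeholder, and Adam with betas=(0.9,0.999) and epsilon=1e-08 is the Trainer default optimizer:
```python
from transformers import TrainingArguments

# Hypothetical output_dir; the remaining values mirror the listed hyperparameters
training_args = TrainingArguments(
    output_dir="SajjadAyoubi_xlm-roberta-large-fa-qa_finetune_on_hoshfa_3",
    learning_rate=2e-05,
    per_device_train_batch_size=3,
    per_device_eval_batch_size=3,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3,
)
```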
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.4424 | 1.0 | 1500 | 2.0999 |
| 1.8186 | 2.0 | 3000 | 1.2042 |
| 1.2822 | 3.0 | 4500 | 0.8894 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
apatidar0/conversation-summ_longformer_bart_like
|
apatidar0
| 2023-02-18T17:28:50Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"led",
"text2text-generation",
"Summarization",
"generated_from_trainer",
"dataset:samsum",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-02-18T17:21:28Z |
---
license: apache-2.0
tags:
- Summarization
- generated_from_trainer
datasets:
- samsum
model-index:
- name: conversation-summ_longformer_bart_like
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# conversation-summ_longformer_bart_like
This model is a fine-tuned version of [allenai/led-base-16384](https://huggingface.co/allenai/led-base-16384) on the samsum dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:------:|:---------:|
| No log | 1.0 | 143 | 1.9126 | 42.0973 | 16.7856 | 33.784 | 37.7811 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
ArneL2206/ppo-LunarLander-v2
|
ArneL2206
| 2023-02-18T17:21:45Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-12-10T21:25:33Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 295.53 +/- 17.25
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
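Until the author fills in the code above, a minimal loading sketch along the lines of the course notebooks, assuming the conventional `ppo-LunarLander-v2.zip` filename and the pre-0.26 `gym` API that SB3 1.x targets:
```python
import gym

from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint (the filename is an assumption about how it was pushed)
checkpoint = load_from_hub(repo_id="ArneL2206/ppo-LunarLander-v2",
                           filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

# Roll out one greedy episode (gym<0.26 reset/step API)
env = gym.make("LunarLander-v2")
obs = env.reset()
done, total_reward = False, 0.0
while not done:
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, done, info = env.step(action)
    total_reward += reward
print(f"episode return: {total_reward:.1f}")
```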
|
JUNGU/Taxi-v3
|
JUNGU
| 2023-02-18T17:03:33Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-02-18T17:03:24Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.44 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="JUNGU/Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
jackmedda/q-FrozenLake-v1-4x4-noSlippery
|
jackmedda
| 2023-02-18T17:02:31Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-02-18T17:02:27Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="jackmedda/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
JUNGU/q-FrozenLake-v1-4x4-noSlippery
|
JUNGU
| 2023-02-18T17:01:27Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-02-18T17:01:19Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="JUNGU/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
inkoziev/paraphraser
|
inkoziev
| 2023-02-18T16:49:04Z | 25 | 4 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"paraphrasing",
"seq2seq",
"ru",
"dataset:inkoziev/paraphrases",
"license:cc-by-nc-4.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | null | 2023-01-05T09:17:17Z |
---
language: ru
license: cc-by-nc-4.0
tags:
- paraphrasing
- seq2seq
datasets:
- inkoziev/paraphrases
---
## Poetic paraphraser
This is a generative model based on ```sberbank-ai/rugpt3large_based_on_gpt2```, fine-tuned
on the paraphrase dataset [inkoziev/paraphrases](https://huggingface.co/datasets/inkoziev/paraphrases).
It was developed for use in the [generative poetry](https://github.com/Koziev/verslibre) project.
The code for training and using the paraphraser is available in the repository [https://github.com/Koziev/paraphraser](https://github.com/Koziev/paraphraser).
### Paraphrasing caveats
Note that the model is **not intended** for use cases that require especially
careful handling of named entities. Since it causes no particular problems in poetry (in some
usage scenarios it is even desirable) when a paraphrase drops or adds some semantics relative to the source text, the training dataset,
and hence the model built on it, may confuse days of the week or names, add details of its own, and be metaphorical or allegorical.
### Fine-tuning methodology
The training dataset contains negative paraphrase examples, and I use them together with the correct examples during fine-tuning,
feeding them to the classification head of [GPT2DoubleHeadsModel](https://huggingface.co/docs/transformers/model_doc/gpt2#transformers.GPT2DoubleHeadsModel); a rough sketch of this forward pass is given after this section's lists.
The fine-tuning code is available [here](https://github.com/Koziev/paraphraser/blob/main/train_paraphraser_with_gpt2doublehead.py).
This approach turned out better than two alternatives:
1) the default fine-tuning recipe, in which GPT is simply trained on texts consisting of the source text and its paraphrase,
separated by a special token. In this approach the model is also trained on the prompt tokens, which may be undesirable.
2) a variation of the first recipe, in which the prompt tokens (the source text) are excluded from backpropagation by
setting labels=-100 ([code](https://github.com/Koziev/paraphraser/blob/main/finetune_paraphraser_with_prompt_masking.py)).
As the metric for comparing the approaches and for tuning the number of incorrect paraphrase candidates in GPT2DoubleHeadsModel,
I used a combination of:
1) the similarity of the embedding vectors of the source text and the generated paraphrase. The vectors are produced by the
```sberbank-ai/sbert_large_mt_nlu_ru``` model. I did not use the [critic model](https://huggingface.co/inkoziev/sbert_synonymy),
since it was trained on the same dataset;
2) discounting the scores from item 1 by character-level (3-gram) Jaccard similarity. This penalizes word-reordering
paraphrases, verbatim reproduction of the source text, and minor rewrites.
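To make the classification-head idea concrete, here is a rough sketch of a GPT2DoubleHeadsModel forward pass over candidate continuations, adapted from the transformers documentation example rather than from the author's training code (which is linked above). One candidate plays the role of the correct paraphrase and the rest can be negative examples:
```python
import torch
from transformers import AutoTokenizer, GPT2DoubleHeadsModel

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = GPT2DoubleHeadsModel.from_pretrained("gpt2")

# Add a [CLS] token whose hidden state feeds the classification head
tokenizer.add_special_tokens({"cls_token": "[CLS]"})
model.resize_token_embeddings(len(tokenizer))

# Two candidates; index 0 plays the role of the correct paraphrase
choices = ["Hello, my dog is cute [CLS]", "Hello, my cat is cute [CLS]"]
encoded_choices = [tokenizer.encode(s) for s in choices]
cls_positions = [tokens.index(tokenizer.cls_token_id) for tokens in encoded_choices]

input_ids = torch.tensor(encoded_choices).unsqueeze(0)  # (batch=1, choices=2, seq_len)
mc_token_ids = torch.tensor([cls_positions])            # (batch=1, choices=2)

outputs = model(input_ids, mc_token_ids=mc_token_ids, mc_labels=torch.tensor([0]))
mc_loss = outputs.mc_loss  # classification loss over the candidates
```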
### Input format
The model's input is the source text with a ```<s>``` token prepended and a ```<sep>``` token appended, for example:
```
input_text = '<s>Мороз и солнце, день чудесный<sep>'
```
The generated output will contain the text followed by a ```</s>``` token, which marks the end of the sequence.
### Usage example
The following code lets you type a short sentence in the console
and see how the model paraphrases it:
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

device = "cuda" if torch.cuda.is_available() else "cpu"
model_name = "inkoziev/paraphraser"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.to(device)
model.eval()

while True:
    seed = input(':> ').strip()
    encoded_prompt = tokenizer.encode("<s>" + seed + "<sep>", add_special_tokens=False, return_tensors="pt").to(device)
    output_sequences = model.generate(input_ids=encoded_prompt,
                                      max_length=100,
                                      typical_p=0.85,
                                      top_k=0,
                                      top_p=1.0,
                                      do_sample=True,
                                      num_return_sequences=10,
                                      pad_token_id=tokenizer.pad_token_id)
    for o in output_sequences:
        # keep only the part between <sep> and </s>
        text = tokenizer.decode(o.tolist(), clean_up_tokenization_spaces=True)
        text = text[text.index('<sep>') + 5:]
        text = text[: text.find('</s>')]
        print(text)
```
|
JUNGU/ppo-LunarLander-v2
|
JUNGU
| 2023-02-18T16:47:21Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-02-07T22:19:31Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 262.62 +/- 26.46
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
dgodderis/ppo-Huggy
|
dgodderis
| 2023-02-18T16:41:02Z | 12 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-02-18T16:40:56Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
library_name: ml-agents
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
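For example (hypothetical paths and run id; adjust them to your setup):
```
mlagents-learn ./config/ppo/Huggy.yaml --run-id="Huggy" --resume
```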
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Step 1: Write your model_id: dgodderis/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
huggingcats/PPO_LunarLander-v2
|
huggingcats
| 2023-02-18T16:32:18Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-02-18T16:32:05Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 236.96 +/- 76.66
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
jackmedda/ppo-Huggy
|
jackmedda
| 2023-02-18T16:22:58Z | 11 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-02-18T15:01:42Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
library_name: ml-agents
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Step 1: Write your model_id: jackmedda/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
CoreyMorris/lander-delete-me
|
CoreyMorris
| 2023-02-18T16:12:47Z | 0 | 0 | null |
[
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-02-18T16:07:28Z |
---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -105.96 +/- 18.63
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo',
 'seed': 1,
 'torch_deterministic': True,
 'cuda': True,
 'track': False,
 'wandb_project_name': 'cleanRL',
 'wandb_entity': None,
 'capture_video': False,
 'env_id': 'LunarLander-v2',
 'total_timesteps': 100000,
 'learning_rate': 0.00025,
 'num_envs': 16,
 'num_steps': 1024,
 'anneal_lr': True,
 'gamma': 0.999,
 'gae_lambda': 0.98,
 'num_minibatches': 4,
 'update_epochs': 4,
 'norm_adv': True,
 'clip_coef': 0.2,
 'clip_vloss': True,
 'ent_coef': 0.01,
 'vf_coef': 0.5,
 'max_grad_norm': 0.5,
 'target_kl': None,
 'repo_id': 'CoreyMorris/lander-delete-me',
 'batch_size': 16384,
 'minibatch_size': 4096}
```
|
ai-moroz/lazy-ti
|
ai-moroz
| 2023-02-18T15:47:38Z | 0 | 3 | null |
[
"anime",
"textual-inversion",
"embeddings",
"region:us"
] | null | 2023-02-05T13:43:30Z |
---
tags:
- anime
- textual-inversion
- embeddings
---
### Lazy TI dump: textual inversion embeddings host. These embeddings were trained using `stable-textual-inversion-cafe Colab - Lazy Edition` with few images and will most probably only bring pain and headache. <u>Use at your own risk. Good luck.</u>
- Tsukihime
1. <a href="https://huggingface.co/ai-moroz/lazy-ti/resolve/main/dump/ciel-tm.pt">Ciel</a>
<a href="https://huggingface.co/ai-moroz/lazy-ti/blob/main/prev/ciel.png"><img src="https://huggingface.co/ai-moroz/lazy-ti/resolve/main/prev/ciel.png" width="200"></a>
<pre><b>ciel-tm</b>, cross necklace, upper body, palms together, own hands together, looking up, from above, looking at viewer, pixie cut, sitting, moon
Negative prompt: (worst quality, low quality:1.4), bad anatomy, weapon, blush, messy hair
Steps: 30, Sampler: DPM++ SDE, CFG scale: 7, Seed: 2808656065, Size: 512x768, Model hash: 6e430eb514, Model: anything-v4.5-pruned, Denoising strength: 0.6, Clip skip: 2, Hires upscale: 1.4, Hires upscaler: Latent</pre>
- Genshin Impact
1. <a href="https://huggingface.co/ai-moroz/lazy-ti/resolve/main/dump/diona-gi.pt">Diona</a>
<a href="https://huggingface.co/ai-moroz/lazy-ti/blob/main/prev/diona.png"><img src="https://huggingface.co/ai-moroz/lazy-ti/resolve/main/prev/diona.png" width="200"></a>
<pre><b>diona-gi</b>, :3, solo, (standing), nature, birds, floating leaves, perfect fingers, bright, noon,(masterpiece:1.2), best quality, highres, original, perfect lighting, (extremely detailed CG:1.2),(8k:1.1)
Negative prompt: (worst quality, low quality:1.4), bad anatomy, blur
Steps: 20, Sampler: DPM++ SDE Karras, CFG scale: 8, Seed: 3253826251, Size: 512x768, Model hash: 791d67d4, Denoising strength: 0.6, Clip skip: 2, First pass size: 448x640</pre>
1. <a href="https://huggingface.co/ai-moroz/lazy-ti/blob/main/dump/xiao-gi.pt">Xiao</a>
<a href="https://huggingface.co/ai-moroz/lazy-ti/blob/main/prev/xiao.png"><img src="https://huggingface.co/ai-moroz/lazy-ti/resolve/main/prev/xiao.png" width="200"></a>
<pre><b>xiao-gi</b>, 1boy, standing, crossed arm, mature, detailed eyes, short hair, white shirt, top, white top, beads, bead necklace, jewelry, ornament, green hair, forehead mark, diamond, ahoge, short hair, arm tattoo, full covered, tassel, spike, standing, shoulder pad, capri pants, black pants, hakama, cowboy shot, (masterpiece:1,2), best quality, highres, perfect lighting, (8k:1.1), dynamic angle
Negative prompt:(worst quality, low quality:1.4), bad anatomy, text, username, watermark, nude, abs
Steps: 20, Sampler: DPM++ SDE Karras, CFG scale: 10, Seed: 3146507317, Size: 512x768, Model hash: 791d67d4, Denoising strength: 0.6, Clip skip: 2, First pass size: 0x0</pre>
1. <a href="https://huggingface.co/ai-moroz/lazy-ti/blob/main/dump/thoma-gi.pt">Thoma</a>
<a href="https://huggingface.co/ai-moroz/lazy-ti/blob/main/prev/thoma.png"><img src="https://huggingface.co/ai-moroz/lazy-ti/resolve/main/prev/thoma.png" width="200"></a>
<pre><b>thoma-gi</b>, 1boy, solo, blonde, green eyes, low ponytail, military tags, red jacket, crop jacket, black shirt, gloves, tassel, black pants, ((masterpiece)), best quality, highres, vivid, bright
Negative prompt: (worst quality, low quality:1.4), bad anatomy, nsfw, turtleneck, backlight
Steps: 20, Sampler: DPM++ SDE, CFG scale: 6, Seed: 368616108, Size: 512x768, Model hash: f75b19923f, Model: AbyssOrangeMix2_sfw, Clip skip: 2</pre>
- Naruto
1. <a href="https://huggingface.co/ai-moroz/lazy-ti/blob/main/dump/naruko-nrt.pt">Naruko</a> /face only
<a href="https://huggingface.co/ai-moroz/lazy-ti/blob/main/prev/naruko.png"><img src="https://huggingface.co/ai-moroz/lazy-ti/resolve/main/prev/naruko.png" width="200"></a>
<pre><b>naruko-nrt</b>, solo, (orange jacket:1.4), black pants, unzipping, best quality, masterpiece, wood, dynamic angle, contrapposto, indoor, balcony, vivid, leaves,
Negative prompt: (worst quality, low quality:1.4), bad anatomy, extra fingers
Steps: 20, Sampler: Euler a, CFG scale: 8, Seed: 1902740231, Size: 512x768, Model hash: 6e430eb514, Model: anything-v4.5-pruned, Denoising strength: 0.7, Clip skip: 2, Hires upscale: 1.5, Hires upscaler: Latent</pre>
- Misc
1. <a href="https://huggingface.co/ai-moroz/lazy-ti/blob/main/dump/grinteeth.pt">grinteeth</a> /mouth only
<div style='float: right;'><a href="https://huggingface.co/ai-moroz/lazy-ti/blob/main/prev/grinteeth.png"><img src="https://huggingface.co/ai-moroz/lazy-ti/resolve/main/prev/grinteeth.png" width="200"></a></div>
<pre><b>(grinteeth:0.8)</b>, smile, lips, close-up, solo, blue hair, black eyes, master piece, best quality
Negative prompt: worst quality, low quality, ugly, nsfw, blush
Steps: 20, Sampler: Euler a, CFG scale: 7, Seed: 2885651626, Size: 512x512, Model hash: 6e430eb514, Model: anything-v4.5-pruned, Clip skip: 2</pre>
### usage
- Download the file and place it in the `embeddings` folder
- Use the filename in your prompt
#### preview models
- WarriorMama777/AbyssOrangeMix2
- andite/anything-v4.5
|
akgeni/LunarLander-v2
|
akgeni
| 2023-02-18T15:43:19Z | 0 | 0 | null |
[
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-02-18T15:43:12Z |
---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -180.12 +/- 71.93
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
|
Foxasdf/EnglishModel
|
Foxasdf
| 2023-02-18T15:31:31Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-02-17T15:24:26Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: EnglishModel
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# EnglishModel
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1449
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.03
- train_batch_size: 1
- eval_batch_size: 8
- seed: 45
- gradient_accumulation_steps: 8
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 300
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:------------------:|:-----:|:----:|:---------------:|:---:|
| 1905.1503 | 0.98 | 600 | 3.1449 | 1.0 |
| 4586886945979761.0 | 1.96 | 1200 | 3.1449 | 1.0 |
| 4847837820171059.0 | 2.94 | 1800 | 3.1449 | 1.0 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu113
- Datasets 2.9.0
- Tokenizers 0.10.3
|
email4u/sd-class-butterflies-32
|
email4u
| 2023-02-18T15:30:10Z | 5 | 0 |
diffusers
|
[
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] |
unconditional-image-generation
| 2023-02-18T15:21:00Z |
---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute butterflies.
## Usage
```python
from diffusers import DDPMPipeline

pipeline = DDPMPipeline.from_pretrained('email4u/sd-class-butterflies-32')
image = pipeline().images[0]
image
```
|
njrosati/dqn-SpaceInvadersNoFrameskip-v4
|
njrosati
| 2023-02-18T15:12:05Z | 5 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-02-07T03:20:43Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 618.50 +/- 249.78
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga njrosati -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga njrosati -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga njrosati
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 10000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
parsasam/dqn-SpaceInvadersNoFrameskip-v4
|
parsasam
| 2023-02-18T14:55:19Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-02-18T14:54:34Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 681.00 +/- 219.84
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga parsasam -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga parsasam -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga parsasam
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
rdesarz/Q-Learning-Taxi-v3
|
rdesarz
| 2023-02-18T14:46:51Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-02-18T14:46:47Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Q-Learning-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.48 +/- 2.65
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="rdesarz/Q-Learning-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
hpoddar/ppo-Huggy
|
hpoddar
| 2023-02-18T14:46:43Z | 11 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-02-18T14:46:36Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
library_name: ml-agents
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Step 1: Write your model_id: hpoddar/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
rdesarz/q-FrozenLake-v1-4x4-noSlippery
|
rdesarz
| 2023-02-18T14:44:13Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-02-18T14:44:09Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="rdesarz/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
Zhengrui/bert2bert_redditJoke
|
Zhengrui
| 2023-02-18T14:24:20Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"encoder-decoder",
"text2text-generation",
"en",
"dataset:SocialGrep/one-million-reddit-jokes",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-02-18T12:25:14Z |
---
license: apache-2.0
datasets:
- SocialGrep/one-million-reddit-jokes
language:
- en
pipeline_tag: text2text-generation
---
|
wongchaerim/wong_ai
|
wongchaerim
| 2023-02-18T13:58:56Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-02-18T13:06:14Z |
---
license: creativeml-openrail-m
---
|
Taratata/Reinforce-CartPole-v1
|
Taratata
| 2023-02-18T13:48:34Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-02-18T13:47:37Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 480.20 +/- 59.40
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours, check Unit 5 of the Deep Reinforcement Learning Class: https://github.com/huggingface/deep-rl-class/tree/main/unit5
|
ibadrehman/ppo-SnowballTarget
|
ibadrehman
| 2023-02-18T13:35:09Z | 5 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2023-02-18T13:35:04Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
library_name: ml-agents
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget
2. Step 1: Write your model_id: ibadrehman/ppo-SnowballTarget
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
r1ck/v1-ppo-LunarLander-v2
|
r1ck
| 2023-02-18T13:26:04Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-02-18T13:23:38Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 257.13 +/- 18.70
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
Rafe350/rafeadtest2023-model1
|
Rafe350
| 2023-02-18T13:21:04Z | 0 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"text-to-image",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-02-18T13:19:38Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
widget:
- text: rfehx
---
### RafeAdTest2023-Model1 Dreambooth model trained by Rafe350 with the [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training) on the v1-5 base model
You can run your new concept via the `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb). Don't forget to use the concept prompts!
Sample pictures of:
rfehx (use that in your prompt)

|
Alex48/q-FrozenLake-v1-4x4-noSlippery
|
Alex48
| 2023-02-18T13:17:20Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-02-18T13:17:11Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="Alex48/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
hsengiv/en_gram_core_web_trf
|
hsengiv
| 2023-02-18T13:14:07Z | 6 | 5 |
spacy
|
[
"spacy",
"token-classification",
"en",
"model-index",
"region:us"
] |
token-classification
| 2023-02-18T11:29:14Z |
---
tags:
- spacy
- token-classification
language:
- en
model-index:
- name: en_gram_core_web_trf
results:
- task:
name: TAG
type: token-classification
metrics:
- name: TAG (XPOS) Accuracy
type: Accuracy
value: 0.9638991277
library_name: spacy
pipeline_tag: token-classification
---
# **Spacy Tagger-Based Grammatical Error Identifier and Corrector for the English language**
## More info: TBD
| Feature | Description |
| --- | --- |
| **Name** | `en_gram_core_web_trf` |
| **Version** | `0.0.1` |
| **spaCy** | `>=3.3.1,<3.4.0` |
| **Default Pipeline** | `transformer`, `tagger` |
| **Components** | `transformer`, `tagger` |
| **Vectors** | 0 keys, 0 unique vectors (0 dimensions) |
| **Sources** | n/a |
| **License** | n/a |
| **Author** | [hsengivs]() |
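A minimal usage sketch, assuming the pipeline package has been installed locally (e.g. from a wheel built out of this repo):
```python
import spacy

nlp = spacy.load("en_gram_core_web_trf")
doc = nlp("She go to school every day.")
# The tagger emits per-token correction labels (see the label scheme below)
print([(token.text, token.tag_) for token in doc])
```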
### Label Scheme
<details>
<summary>View label scheme (7534 labels for 1 component)</summary>
| Component | Labels |
| --- | --- |
| **`tagger`** | `APPEND_!`, `APPEND_"`, `APPEND_%`, `APPEND_&`, `APPEND_'`, `APPEND_'Loughlin`, `APPEND_'Neal`, `APPEND_'Onofiro`, `APPEND_'d`, `APPEND_'ll`, `APPEND_'m`, `APPEND_'re`, `APPEND_'s`, `APPEND_'t`, `APPEND_'ve`, `APPEND_(`, `APPEND_)`, `APPEND_*`, `APPEND_,`, `APPEND_-`, `APPEND_--`, `APPEND_.`, `APPEND_...`, `APPEND_/`, `APPEND_0-1`, `APPEND_1`, `APPEND_1,115`, `APPEND_1,548`, `APPEND_1-800-507-4774`, `APPEND_1.72`, `APPEND_10,600,000`, `APPEND_11th`, `APPEND_150,000`, `APPEND_16`, `APPEND_1605`, `APPEND_164,000`, `APPEND_1976`, `APPEND_1987`, `APPEND_1988`, `APPEND_1st`, `APPEND_2`, `APPEND_2.0`, `APPEND_20,000`, `APPEND_2000`, `APPEND_2001`, `APPEND_2006`, `APPEND_2007`, `APPEND_2009`, `APPEND_201-689-8031`, `APPEND_2013`, `APPEND_221`, `APPEND_25-19`, `APPEND_27`, `APPEND_28`, `APPEND_3`, `APPEND_3,500`, `APPEND_3-2`, `APPEND_30`, `APPEND_4`, `APPEND_4,000`, `APPEND_400,000`, `APPEND_5`, `APPEND_55`, `APPEND_60`, `APPEND_650,000`, `APPEND_70`, `APPEND_73`, `APPEND_8`, `APPEND_:`, `APPEND_;`, `APPEND_?`, `APPEND_A`, `APPEND_Aberdeen`, `APPEND_About`, `APPEND_Acknowledgement`, `APPEND_Actually`, `APPEND_After`, `APPEND_Alibaba.com`, `APPEND_All`, `APPEND_Alps`, `APPEND_Also`, `APPEND_Although`, `APPEND_America`, `APPEND_American`, `APPEND_Americans`, `APPEND_Among`, `APPEND_An`, `APPEND_And`, `APPEND_Angeles`, `APPEND_Annan`, `APPEND_Annoyed`, `APPEND_Another`, `APPEND_Antonio`, `APPEND_Are`, `APPEND_Argentina`, `APPEND_Arizona`, `APPEND_Arts`, `APPEND_As`, `APPEND_Asia`, `APPEND_Associates`, `APPEND_At`, `APPEND_Aug`, `APPEND_August`, `APPEND_Australia`, `APPEND_Austria`, `APPEND_Ayr`, `APPEND_BANGALORE`, `APPEND_Bahanga`, `APPEND_Bakersfield`, `APPEND_Bank`, `APPEND_Baseball`, `APPEND_Bashir`, `APPEND_Bay`, `APPEND_Because`, `APPEND_Before`, `APPEND_Belichick`, `APPEND_Berlin`, `APPEND_Bernstein`, `APPEND_Besides`, `APPEND_Biden`, `APPEND_Blair`, `APPEND_Blanco`, `APPEND_Boufayed`, `APPEND_Bowker`, `APPEND_Bragg`, `APPEND_Branam`, `APPEND_Britain`, `APPEND_British`, `APPEND_Brons`, `APPEND_Broomielaw`, `APPEND_Brown`, `APPEND_Brunswick`, `APPEND_But`, `APPEND_Buying`, `APPEND_By`, `APPEND_CNN`, `APPEND_Can`, `APPEND_Canada`, `APPEND_Carvajal`, `APPEND_Casualties`, `APPEND_Celtics`, `APPEND_Central`, `APPEND_Chairman`, `APPEND_Chalino`, `APPEND_Child`, `APPEND_China`, `APPEND_Christmas`, `APPEND_Church`, `APPEND_City`, `APPEND_Clinton`, `APPEND_Co.`, `APPEND_Columbia`, `APPEND_CommFirstBank`, `APPEND_Compton`, `APPEND_Congress`, `APPEND_Cooper`, `APPEND_Cornwall`, `APPEND_Corp.`, `APPEND_Could`, `APPEND_Council`, `APPEND_County`, `APPEND_Cow`, `APPEND_Curry`, `APPEND_DJ`, `APPEND_Day`, `APPEND_Dean`, `APPEND_Dec`, `APPEND_Democrats`, `APPEND_Diego`, `APPEND_Do`, `APPEND_Does`, `APPEND_Dozier`, `APPEND_Drogba`, `APPEND_Duke`, `APPEND_During`, `APPEND_EU`, `APPEND_Earle`, `APPEND_Earlier`, `APPEND_Egypt`, `APPEND_Electronics`, `APPEND_End`, `APPEND_England`, `APPEND_English`, `APPEND_Europe`, `APPEND_Even`, `APPEND_Every`, `APPEND_Everything`, `APPEND_Excellent`, `APPEND_Expect`, `APPEND_FARC`, `APPEND_FDIC`, `APPEND_FTC`, `APPEND_Farm`, `APPEND_Feb`, `APPEND_February`, `APPEND_Fields`, `APPEND_First`, `APPEND_Fitting`, `APPEND_Fla`, `APPEND_Florida`, `APPEND_For`, `APPEND_Fortress`, `APPEND_France`, `APPEND_Francisco`, `APPEND_French`, `APPEND_Friday`, `APPEND_From`, `APPEND_Ft`, `APPEND_Fuji`, `APPEND_Furthermore`, `APPEND_GB`, `APPEND_Gatwick`, `APPEND_Gazprom`, `APPEND_Germany`, `APPEND_Gerrard`, `APPEND_Gettelfinger`, `APPEND_Gibson`, `APPEND_Glasgow`, `APPEND_God`, 
`APPEND_Gold`, `APPEND_Good`, `APPEND_Gooden`, `APPEND_Gouvia`, `APPEND_Government`, `APPEND_Grangemouth`, `APPEND_Group`, `APPEND_Gruber`, `APPEND_Guatemala`, `APPEND_Gupta`, `APPEND_HALIFAX`, `APPEND_HILLS`, `APPEND_Hall`, `APPEND_Ham`, `APPEND_Harvard`, `APPEND_Have`, `APPEND_He`, `APPEND_Heath`, `APPEND_Helping`, `APPEND_Her`, `APPEND_Here`, `APPEND_Hirst`, `APPEND_His`, `APPEND_Hokkaido`, `APPEND_Holdings`, `APPEND_Holyrood`, `APPEND_Hope`, `APPEND_Hospital`, `APPEND_How`, `APPEND_However`, `APPEND_Huerta`, `APPEND_Hutong`, `APPEND_I`, `APPEND_II`, `APPEND_Icelandair`, `APPEND_If`, `APPEND_In`, `APPEND_Inc`, `APPEND_Including`, `APPEND_India`, `APPEND_Indiana`, `APPEND_Ingram`, `APPEND_Inside`, `APPEND_Inspiration`, `APPEND_Instead`, `APPEND_International`, `APPEND_Interpol`, `APPEND_Iowa`, `APPEND_Iran`, `APPEND_Is`, `APPEND_Israel`, `APPEND_It`, `APPEND_Its`, `APPEND_Japan`, `APPEND_Japanese`, `APPEND_Jol`, `APPEND_Jones`, `APPEND_July`, `APPEND_Jump`, `APPEND_June`, `APPEND_Just`, `APPEND_Kabul`, `APPEND_Kansas`, `APPEND_Killing`, `APPEND_Kolmarden`, `APPEND_Korea`, `APPEND_Krakowski`, `APPEND_L.P.`, `APPEND_LEJEUNE`, `APPEND_LLC`, `APPEND_LONDON`, `APPEND_LUNENBURG`, `APPEND_Lassnig`, `APPEND_Last`, `APPEND_Legends`, `APPEND_Let`, `APPEND_Life`, `APPEND_Live`, `APPEND_MDC`, `APPEND_MINNETONKA`, `APPEND_Manchurians`, `APPEND_Many`, `APPEND_Maybe`, `APPEND_Mayflower`, `APPEND_Me`, `APPEND_Medunjanin`, `APPEND_Merck`, `APPEND_Mich`, `APPEND_Michelangelo`, `APPEND_Michelle`, `APPEND_Milan`, `APPEND_Monday`, `APPEND_Most`, `APPEND_Much`, `APPEND_My`, `APPEND_N.J.`, `APPEND_NAPA`, `APPEND_NW`, `APPEND_Neither`, `APPEND_New`, `APPEND_Next`, `APPEND_Nintendo`, `APPEND_No`, `APPEND_Nor`, `APPEND_Not`, `APPEND_Nothing`, `APPEND_November`, `APPEND_Now`, `APPEND_OMAHA`, `APPEND_Obama`, `APPEND_Of`, `APPEND_Oh`, `APPEND_On`, `APPEND_One`, `APPEND_Or`, `APPEND_Our`, `APPEND_PICHER`, `APPEND_Pakistan`, `APPEND_Pellegrino`, `APPEND_Penguin.com`, `APPEND_People`, `APPEND_Philippines`, `APPEND_Phoenix`, `APPEND_Pickens`, `APPEND_Piquot`, `APPEND_Piranesi`, `APPEND_Pirates`, `APPEND_Pitt`, `APPEND_Pittsburg`, `APPEND_Polarbit`, `APPEND_Polman`, `APPEND_Poma`, `APPEND_Prize`, `APPEND_Putin`, `APPEND_QC`, `APPEND_RAPPER`, `APPEND_Rather`, `APPEND_Records`, `APPEND_Remember`, `APPEND_Republican`, `APPEND_Research`, `APPEND_Ribéry`, `APPEND_Richard`, `APPEND_Right`, `APPEND_Road`, `APPEND_Robertson`, `APPEND_Rochdale`, `APPEND_Rodriguez`, `APPEND_Rover`, `APPEND_Runner`, `APPEND_Rush`, `APPEND_SHOREVIEW`, `APPEND_Samaritans`, `APPEND_Sanchez`, `APPEND_Saturday`, `APPEND_School`, `APPEND_Schubert`, `APPEND_See`, `APPEND_Senator`, `APPEND_Sept`, `APPEND_September`, `APPEND_Service`, `APPEND_Shah`, `APPEND_She`, `APPEND_Shiites`, `APPEND_Shouts`, `APPEND_Silverbridge`, `APPEND_Simons`, `APPEND_Since`, `APPEND_So`, `APPEND_Some`, `APPEND_Sound`, `APPEND_South`, `APPEND_State`, `APPEND_Still`, `APPEND_Stir`, `APPEND_Stoke-on-Trent`, `APPEND_Strangely`, `APPEND_Stuart`, `APPEND_Sunday`, `APPEND_Swansea`, `APPEND_Systems`, `APPEND_TV`, `APPEND_Tarbett`, `APPEND_Taymor`, `APPEND_Technology`, `APPEND_Thanks`, `APPEND_That`, `APPEND_The`, `APPEND_Their`, `APPEND_Then`, `APPEND_There`, `APPEND_These`, `APPEND_They`, `APPEND_This`, `APPEND_Those`, `APPEND_Though`, `APPEND_Thuram`, `APPEND_Thursday`, `APPEND_Time`, `APPEND_To`, `APPEND_Today`, `APPEND_Tokyo`, `APPEND_Tomorrow`, `APPEND_Tottenham`, `APPEND_Tuesday`, `APPEND_Two`, `APPEND_Tyne`, `APPEND_US`, `APPEND_Under`, `APPEND_Unfortunately`, `APPEND_Union`, 
`APPEND_United`, `APPEND_University`, `APPEND_Until`, `APPEND_Using`, `APPEND_Venezuela`, `APPEND_Virginia`, `APPEND_WASHINGTON`, `APPEND_War`, `APPEND_Washington`, `APPEND_We`, `APPEND_Wednesday`, `APPEND_Well`, `APPEND_What`, `APPEND_When`, `APPEND_Whenever`, `APPEND_Which`, `APPEND_While`, `APPEND_Whitmer`, `APPEND_Who`, `APPEND_Why`, `APPEND_WiFi`, `APPEND_Wiedmeier`, `APPEND_Will`, `APPEND_With`, `APPEND_Worcester`, `APPEND_World`, `APPEND_Would`, `APPEND_Xiao`, `APPEND_Yeah`, `APPEND_Year`, `APPEND_Yes`, `APPEND_Yesterday`, `APPEND_York`, `APPEND_Yorkshire`, `APPEND_You`, `APPEND_Your`, `APPEND_Zabaleta`, `APPEND_[`, `APPEND_]`, `APPEND_a`, `APPEND_ability`, `APPEND_able`, `APPEND_about`, `APPEND_above`, `APPEND_abroad`, `APPEND_accepted`, `APPEND_access`, `APPEND_accidents`, `APPEND_accompanied`, `APPEND_according`, `APPEND_account`, `APPEND_accused`, `APPEND_achieve`, `APPEND_acknowledging`, `APPEND_acquisitions`, `APPEND_across`, `APPEND_actions`, `APPEND_activities`, `APPEND_activity`, `APPEND_actress`, `APPEND_actually`, `APPEND_add`, `APPEND_added`, `APPEND_addition`, `APPEND_administration`, `APPEND_adults`, `APPEND_advantage`, `APPEND_advice`, `APPEND_advocated`, `APPEND_affair`, `APPEND_affect`, `APPEND_affected`, `APPEND_afraid`, `APPEND_after`, `APPEND_afternoon`, `APPEND_again`, `APPEND_against`, `APPEND_age`, `APPEND_aged`, `APPEND_agents`, `APPEND_aggressive`, `APPEND_ago`, `APPEND_agreed`, `APPEND_aims`, `APPEND_aircraft`, `APPEND_alcohol`, `APPEND_all`, `APPEND_allow`, `APPEND_allowed`, `APPEND_allows`, `APPEND_almost`, `APPEND_alone`, `APPEND_along`, `APPEND_already`, `APPEND_also`, `APPEND_alternatives`, `APPEND_although`, `APPEND_always`, `APPEND_am`, `APPEND_among`, `APPEND_amount`, `APPEND_amounted`, `APPEND_an`, `APPEND_and`, `APPEND_animals`, `APPEND_ankles`, `APPEND_another`, `APPEND_answer`, `APPEND_answered`, `APPEND_any`, `APPEND_anymore`, `APPEND_anyone`, `APPEND_anything`, `APPEND_apartment`, `APPEND_appear`, `APPEND_appearance`, `APPEND_appearances`, `APPEND_appeared`, `APPEND_application`, `APPEND_apply`, `APPEND_appraiser`, `APPEND_appreciate`, `APPEND_approach`, `APPEND_approaching`, `APPEND_are`, `APPEND_area`, `APPEND_areas`, `APPEND_around`, `APPEND_arrests`, `APPEND_arrived`, `APPEND_arsenal`, `APPEND_art`, `APPEND_as`, `APPEND_ask`, `APPEND_asked`, `APPEND_asking`, `APPEND_ass`, `APPEND_assists`, `APPEND_at`, `APPEND_attacks`, `APPEND_attend`, `APPEND_attendance`, `APPEND_attended`, `APPEND_audience`, `APPEND_authentic`, `APPEND_autopsy`, `APPEND_available`, `APPEND_avoid`, `APPEND_away`, `APPEND_baby`, `APPEND_back`, `APPEND_backed`, `APPEND_backing`, `APPEND_bad`, `APPEND_ball`, `APPEND_balls`, `APPEND_ban`, `APPEND_band`, `APPEND_bank`, `APPEND_bankers`, `APPEND_banks`, `APPEND_based`, `APPEND_baseman`, `APPEND_basketball`, `APPEND_bathroom`, `APPEND_baths`, `APPEND_be`, `APPEND_bearing`, `APPEND_beautiful`, `APPEND_beavers`, `APPEND_became`, `APPEND_because`, `APPEND_become`, `APPEND_becomes`, `APPEND_becoming`, `APPEND_bed`, `APPEND_been`, `APPEND_beer`, `APPEND_before`, `APPEND_began`, `APPEND_begin`, `APPEND_beginning`, `APPEND_begins`, `APPEND_begun`, `APPEND_behavior`, `APPEND_behind`, `APPEND_being`, `APPEND_believe`, `APPEND_believed`, `APPEND_believes`, `APPEND_below`, `APPEND_benefits`, `APPEND_best`, `APPEND_better`, `APPEND_bettered`, `APPEND_between`, `APPEND_big`, `APPEND_biggest`, `APPEND_billion`, `APPEND_billions`, `APPEND_biota`, `APPEND_birthday`, `APPEND_bit`, `APPEND_bitten`, `APPEND_blatantly`, `APPEND_blessing`, 
`APPEND_blonde`, `APPEND_blood`, `APPEND_board`, `APPEND_bobsled`, `APPEND_bodies`, `APPEND_book`, `APPEND_books`, `APPEND_both`, `APPEND_bought`, `APPEND_boys`, `APPEND_breaches`, `APPEND_break`, `APPEND_breakdown`, `APPEND_breakfast`, `APPEND_bring`, `APPEND_broke`, `APPEND_brothers`, `APPEND_brought`, `APPEND_build`, `APPEND_building`, `APPEND_buildings`, `APPEND_built`, `APPEND_buses`, `APPEND_business`, `APPEND_busy`, `APPEND_but`, `APPEND_buy`, `APPEND_buyers`, `APPEND_buying`, `APPEND_by`, `APPEND_ca`, `APPEND_cake`, `APPEND_call`, `APPEND_called`, `APPEND_calling`, `APPEND_calls`, `APPEND_calm`, `APPEND_came`, `APPEND_campaign`, `APPEND_can`, `APPEND_capable`, `APPEND_capacity`, `APPEND_car`, `APPEND_cards`, `APPEND_care`, `APPEND_career`, `APPEND_carried`, `APPEND_carries`, `APPEND_cars`, `APPEND_case`, `APPEND_cases`, `APPEND_casserole`, `APPEND_caught`, `APPEND_cause`, `APPEND_caused`, `APPEND_causes`, `APPEND_cell`, `APPEND_censors`, `APPEND_cent`, `APPEND_centers`, `APPEND_centuries`, `APPEND_certain`, `APPEND_chairman`, `APPEND_challenged`, `APPEND_champion`, `APPEND_chance`, `APPEND_chances`, `APPEND_change`, `APPEND_changed`, `APPEND_changes`, `APPEND_changing`, `APPEND_chapter`, `APPEND_character`, `APPEND_charities`, `APPEND_chatted`, `APPEND_cheaper`, `APPEND_check`, `APPEND_checked`, `APPEND_cheese`, `APPEND_child`, `APPEND_children`, `APPEND_chills`, `APPEND_choice`, `APPEND_choose`, `APPEND_chose`, `APPEND_circles`, `APPEND_cities`, `APPEND_citizens`, `APPEND_city`, `APPEND_civilians`, `APPEND_claim`, `APPEND_class`, `APPEND_classes`, `APPEND_clean`, `APPEND_clear`, `APPEND_climbed`, `APPEND_cloaks`, `APPEND_close`, `APPEND_closed`, `APPEND_closer`, `APPEND_clothing`, `APPEND_clubs`, `APPEND_co-authored`, `APPEND_coach`, `APPEND_coast`, `APPEND_coasters`, `APPEND_coattails`, `APPEND_coffee`, `APPEND_cold`, `APPEND_college`, `APPEND_columns`, `APPEND_combines`, `APPEND_come`, `APPEND_comes`, `APPEND_coming`, `APPEND_comment`, `APPEND_comments`, `APPEND_commitment`, `APPEND_common`, `APPEND_communication`, `APPEND_communities`, `APPEND_companies`, `APPEND_company`, `APPEND_compared`, `APPEND_competing`, `APPEND_complete`, `APPEND_completed`, `APPEND_completely`, `APPEND_complex`, `APPEND_components`, `APPEND_computing`, `APPEND_concede`, `APPEND_concerned`, `APPEND_concert`, `APPEND_condition`, `APPEND_conditions`, `APPEND_conduct`, `APPEND_conducted`, `APPEND_confidence`, `APPEND_confirmed`, `APPEND_confusion`, `APPEND_connected`, `APPEND_consider`, `APPEND_considered`, `APPEND_considering`, `APPEND_conspiracy`, `APPEND_construction`, `APPEND_consulate`, `APPEND_contact`, `APPEND_contemplative`, `APPEND_content`, `APPEND_continue`, `APPEND_continued`, `APPEND_continues`, `APPEND_continuing`, `APPEND_contract`, `APPEND_contractors`, `APPEND_contributed`, `APPEND_control`, `APPEND_conversation`, `APPEND_cooking`, `APPEND_cooperative`, `APPEND_correct`, `APPEND_correctly`, `APPEND_cost`, `APPEND_costs`, `APPEND_could`, `APPEND_counted`, `APPEND_countries`, `APPEND_country`, `APPEND_county`, `APPEND_couple`, `APPEND_course`, `APPEND_court`, `APPEND_covers`, `APPEND_cranes`, `APPEND_cratered`, `APPEND_craziness`, `APPEND_cream`, `APPEND_create`, `APPEND_created`, `APPEND_criticisms`, `APPEND_crop`, `APPEND_crossing`, `APPEND_culture`, `APPEND_current`, `APPEND_currently`, `APPEND_customers`, `APPEND_cut`, `APPEND_cuts`, `APPEND_daily`, `APPEND_damage`, `APPEND_damaging`, `APPEND_dance`, `APPEND_date`, `APPEND_daughter`, `APPEND_day`, `APPEND_days`, `APPEND_dazzling`, 
`APPEND_deal`, `APPEND_death`, `APPEND_debt`, `APPEND_decade`, `APPEND_decide`, `APPEND_decided`, `APPEND_decision`, `APPEND_decision-making`, `APPEND_declared`, `APPEND_declined`, `APPEND_deep`, `APPEND_defeat`, `APPEND_defence`, `APPEND_defending`, `APPEND_demands`, `APPEND_denounced`, `APPEND_deny`, `APPEND_depends`, `APPEND_depressed`, `APPEND_depressing`, `APPEND_described`, `APPEND_design`, `APPEND_desired`, `APPEND_despite`, `APPEND_destroyed`, `APPEND_destroying`, `APPEND_details`, `APPEND_detention`, `APPEND_develop`, `APPEND_developing`, `APPEND_development`, `APPEND_developments`, `APPEND_devices`, `APPEND_dialogue`, `APPEND_diary`, `APPEND_did`, `APPEND_die`, `APPEND_died`, `APPEND_difference`, `APPEND_different`, `APPEND_differently`, `APPEND_difficult`, `APPEND_difficulties`, `APPEND_dinner`, `APPEND_directors`, `APPEND_disappointed`, `APPEND_disaster`, `APPEND_discovered`, `APPEND_dispatched`, `APPEND_dispute`, `APPEND_distortions`, `APPEND_do`, `APPEND_documents`, `APPEND_does`, `APPEND_doing`, `APPEND_dollars`, `APPEND_dominate`, `APPEND_done`, `APPEND_down`, `APPEND_dozens`, `APPEND_draped`, `APPEND_dream`, `APPEND_drink`, `APPEND_drive`, `APPEND_driving`, `APPEND_drop`, `APPEND_dropped`, `APPEND_drought`, `APPEND_drugs`, `APPEND_drunk`, `APPEND_due`, `APPEND_during`, `APPEND_each`, `APPEND_earlier`, `APPEND_early`, `APPEND_earthquake`, `APPEND_easily`, `APPEND_easing`, `APPEND_east`, `APPEND_easy`, `APPEND_eat`, `APPEND_economic`, `APPEND_economy`, `APPEND_effect`, `APPEND_effectively`, `APPEND_effects`, `APPEND_efforts`, `APPEND_either`, `APPEND_elderly`, `APPEND_elections`, `APPEND_electorate`, `APPEND_else`, `APPEND_email`, `APPEND_emerging`, `APPEND_emissions`, `APPEND_emotional`, `APPEND_end`, `APPEND_endeavour`, `APPEND_ended`, `APPEND_endlessly`, `APPEND_energetic`, `APPEND_energy`, `APPEND_enjoy`, `APPEND_enjoyable`, `APPEND_enjoyed`, `APPEND_enough`, `APPEND_enrolment`, `APPEND_enter`, `APPEND_entirely`, `APPEND_entries`, `APPEND_entry`, `APPEND_environment`, `APPEND_equates`, `APPEND_equipment`, `APPEND_errors`, `APPEND_especially`, `APPEND_etc`, `APPEND_ethanol`, `APPEND_ethnicity`, `APPEND_euros`, `APPEND_evacuation`, `APPEND_even`, `APPEND_evening`, `APPEND_event`, `APPEND_events`, `APPEND_ever`, `APPEND_every`, `APPEND_everybody`, `APPEND_everyone`, `APPEND_everything`, `APPEND_everywhere`, `APPEND_exactly`, `APPEND_exam`, `APPEND_example`, `APPEND_except`, `APPEND_exchange`, `APPEND_exercise`, `APPEND_exhausted`, `APPEND_expect`, `APPEND_expected`, `APPEND_expenses`, `APPEND_experience`, `APPEND_experienced`, `APPEND_experts`, `APPEND_explain`, `APPEND_explains`, `APPEND_explanation`, `APPEND_exploded`, `APPEND_extend`, `APPEND_extreme`, `APPEND_eyes`, `APPEND_face`, `APPEND_fact`, `APPEND_factor`, `APPEND_fags`, `APPEND_failed`, `APPEND_failure`, `APPEND_faith`, `APPEND_fall`, `APPEND_families`, `APPEND_family`, `APPEND_fanboy`, `APPEND_fans`, `APPEND_far`, `APPEND_farmland`, `APPEND_fast`, `APPEND_faster`, `APPEND_fate`, `APPEND_father`, `APPEND_fatigue`, `APPEND_favorite`, `APPEND_favourite`, `APPEND_fazed`, `APPEND_feel`, `APPEND_feeling`, `APPEND_feelings`, `APPEND_feels`, `APPEND_fell`, `APPEND_felt`, `APPEND_ferries`, `APPEND_few`, `APPEND_fewer`, `APPEND_fight`, `APPEND_fighters`, `APPEND_figure`, `APPEND_fill`, `APPEND_filled`, `APPEND_film`, `APPEND_filming`, `APPEND_filmmakers`, `APPEND_films`, `APPEND_finalise`, `APPEND_finally`, `APPEND_find`, `APPEND_finding`, `APPEND_finds`, `APPEND_fine`, `APPEND_finish`, `APPEND_finished`, `APPEND_fire`, 
`APPEND_firm`, `APPEND_first`, `APPEND_fish`, `APPEND_fishing`, `APPEND_five`, `APPEND_flashlights`, `APPEND_flexibility`, `APPEND_floor`, `APPEND_flouted`, `APPEND_fluent`, `APPEND_focus`, `APPEND_focused`, `APPEND_follow`, `APPEND_followed`, `APPEND_following`, `APPEND_food`, `APPEND_for`, `APPEND_forgotten`, `APPEND_form`, `APPEND_forward`, `APPEND_found`, `APPEND_founded`, `APPEND_four`, `APPEND_fourth`, `APPEND_foxes`, `APPEND_fraud`, `APPEND_fraudulently`, `APPEND_fray`, `APPEND_free`, `APPEND_friend`, `APPEND_friends`, `APPEND_from`, `APPEND_front`, `APPEND_ft`, `APPEND_full`, `APPEND_fun`, `APPEND_fund`, `APPEND_fundamentals`, `APPEND_funding`, `APPEND_furnishings`, `APPEND_furniture`, `APPEND_furthering`, `APPEND_future`, `APPEND_gain`, `APPEND_gained`, `APPEND_gains`, `APPEND_game`, `APPEND_games`, `APPEND_gardens`, `APPEND_gave`, `APPEND_get`, `APPEND_gets`, `APPEND_getting`, `APPEND_giggles`, `APPEND_girl`, `APPEND_girlfriend`, `APPEND_girls`, `APPEND_give`, `APPEND_given`, `APPEND_gives`, `APPEND_giving`, `APPEND_go`, `APPEND_goal`, `APPEND_goals`, `APPEND_goes`, `APPEND_going`, `APPEND_golds`, `APPEND_gone`, `APPEND_good`, `APPEND_goods`, `APPEND_got`, `APPEND_gotten`, `APPEND_government`, `APPEND_governments`, `APPEND_graduating`, `APPEND_grandma`, `APPEND_grandmother`, `APPEND_great`, `APPEND_grew`, `APPEND_grinned`, `APPEND_gripe`, `APPEND_group`, `APPEND_groups`, `APPEND_grow`, `APPEND_growing`, `APPEND_grown`, `APPEND_growth`, `APPEND_guess`, `APPEND_guilty`, `APPEND_guzzle`, `APPEND_habits`, `APPEND_had`, `APPEND_half`, `APPEND_hand`, `APPEND_handhelds`, `APPEND_hands-on`, `APPEND_happen`, `APPEND_happened`, `APPEND_happens`, `APPEND_happy`, `APPEND_hard`, `APPEND_harder`, `APPEND_harmful`, `APPEND_has`, `APPEND_have`, `APPEND_having`, `APPEND_he`, `APPEND_healer`, `APPEND_health`, `APPEND_hear`, `APPEND_heard`, `APPEND_hearing`, `APPEND_heart`, `APPEND_heat`, `APPEND_height`, `APPEND_held`, `APPEND_hell`, `APPEND_help`, `APPEND_helped`, `APPEND_helping`, `APPEND_helps`, `APPEND_her`, `APPEND_here`, `APPEND_high`, `APPEND_higher`, `APPEND_highs`, `APPEND_him`, `APPEND_himself`, `APPEND_his`, `APPEND_history`, `APPEND_hit`, `APPEND_hold`, `APPEND_holiday`, `APPEND_holidays`, `APPEND_home`, `APPEND_homer`, `APPEND_homered`, `APPEND_homes`, `APPEND_hometown`, `APPEND_homework`, `APPEND_honor`, `APPEND_hookahs`, `APPEND_hope`, `APPEND_hospital`, `APPEND_hospitals`, `APPEND_hot`, `APPEND_hour`, `APPEND_hours`, `APPEND_house`, `APPEND_how`, `APPEND_however`, `APPEND_humans`, `APPEND_hung`, `APPEND_hurts`, `APPEND_husband`, `APPEND_idea`, `APPEND_ideas`, `APPEND_if`, `APPEND_ignore`, `APPEND_ignored`, `APPEND_illegal`, `APPEND_illustrates`, `APPEND_imagine`, `APPEND_immediately`, `APPEND_important`, `APPEND_improve`, `APPEND_improved`, `APPEND_improvement`, `APPEND_in`, `APPEND_incident`, `APPEND_include`, `APPEND_includes`, `APPEND_including`, `APPEND_income`, `APPEND_increase`, `APPEND_increased`, `APPEND_increasing`, `APPEND_index`, `APPEND_indexed`, `APPEND_india`, `APPEND_indictment`, `APPEND_industry`, `APPEND_influences`, `APPEND_information`, `APPEND_inside`, `APPEND_installing`, `APPEND_instances`, `APPEND_instead`, `APPEND_intend`, `APPEND_intended`, `APPEND_interest`, `APPEND_interested`, `APPEND_interesting`, `APPEND_international`, `APPEND_internet`, `APPEND_into`, `APPEND_introduce`, `APPEND_investment`, `APPEND_investors`, `APPEND_involved`, `APPEND_is`, `APPEND_issues`, `APPEND_it`, `APPEND_its`, `APPEND_itself`, `APPEND_jewellery`, `APPEND_job`, `APPEND_jobs`, 
`APPEND_join`, `APPEND_joined`, `APPEND_joining`, `APPEND_journal`, `APPEND_just`, `APPEND_justice`, `APPEND_keep`, `APPEND_keeping`, `APPEND_keeps`, `APPEND_kept`, `APPEND_keyboards`, `APPEND_killed`, `APPEND_killing`, `APPEND_kind`, `APPEND_kinds`, `APPEND_knew`, `APPEND_know`, `APPEND_known`, `APPEND_knows`, `APPEND_lab`, `APPEND_lack`, `APPEND_language`, `APPEND_languages`, `APPEND_large`, `APPEND_last`, `APPEND_lasted`, `APPEND_late`, `APPEND_lately`, `APPEND_later`, `APPEND_launched`, `APPEND_law`, `APPEND_lawmakers`, `APPEND_laws`, `APPEND_lawyer`, `APPEND_lead`, `APPEND_leaders`, `APPEND_leading`, `APPEND_learn`, `APPEND_learned`, `APPEND_learning`, `APPEND_learnt`, `APPEND_least`, `APPEND_leave`, `APPEND_leaving`, `APPEND_led`, `APPEND_left`, `APPEND_legislatures`, `APPEND_length`, `APPEND_less`, `APPEND_lesson`, `APPEND_let`, `APPEND_letter`, `APPEND_letters`, `APPEND_level`, `APPEND_levels`, `APPEND_lies`, `APPEND_life`, `APPEND_lifestyle`, `APPEND_light`, `APPEND_lighting`, `APPEND_lights`, `APPEND_like`, `APPEND_liked`, `APPEND_likes`, `APPEND_limb`, `APPEND_lines`, `APPEND_list`, `APPEND_listen`, `APPEND_little`, `APPEND_live`, `APPEND_lived`, `APPEND_lives`, `APPEND_living`, `APPEND_loans`, `APPEND_located`, `APPEND_location`, `APPEND_long`, `APPEND_longer`, `APPEND_look`, `APPEND_looked`, `APPEND_looking`, `APPEND_lookout`, `APPEND_looks`, `APPEND_lose`, `APPEND_losing`, `APPEND_loss`, `APPEND_lost`, `APPEND_lot`, `APPEND_love`, `APPEND_low`, `APPEND_lower`, `APPEND_loyal`, `APPEND_luck`, `APPEND_lucky`, `APPEND_lunch`, `APPEND_made`, `APPEND_main`, `APPEND_mainly`, `APPEND_major`, `APPEND_make`, `APPEND_makes`, `APPEND_making`, `APPEND_man`, `APPEND_manufacturers`, `APPEND_many`, `APPEND_marked`, `APPEND_marks`, `APPEND_married`, `APPEND_matchup`, `APPEND_matter`, `APPEND_may`, `APPEND_maybe`, `APPEND_me`, `APPEND_meal`, `APPEND_mean`, `APPEND_meaning`, `APPEND_means`, `APPEND_meant`, `APPEND_meanwhile`, `APPEND_measure`, `APPEND_meat`, `APPEND_medals`, `APPEND_media`, `APPEND_medicine`, `APPEND_meet`, `APPEND_meeting`, `APPEND_meetings`, `APPEND_member`, `APPEND_members`, `APPEND_memories`, `APPEND_men`, `APPEND_mentioned`, `APPEND_meriwether`, `APPEND_message`, `APPEND_met`, `APPEND_methamphetamine`, `APPEND_method`, `APPEND_midst`, `APPEND_might`, `APPEND_migraines`, `APPEND_miles`, `APPEND_milk`, `APPEND_million`, `APPEND_mind`, `APPEND_minds`, `APPEND_mindset`, `APPEND_mine`, `APPEND_minutes`, `APPEND_mirror`, `APPEND_miss`, `APPEND_missed`, `APPEND_missing`, `APPEND_missions`, `APPEND_mistake`, `APPEND_mix`, `APPEND_modules`, `APPEND_moment`, `APPEND_money`, `APPEND_month`, `APPEND_months`, `APPEND_more`, `APPEND_morning`, `APPEND_most`, `APPEND_mother`, `APPEND_move`, `APPEND_moved`, `APPEND_movie`, `APPEND_movies`, `APPEND_moving`, `APPEND_much`, `APPEND_music`, `APPEND_must`, `APPEND_my`, `APPEND_myself`, `APPEND_nailing`, `APPEND_name`, `APPEND_named`, `APPEND_narrating`, `APPEND_nation`, `APPEND_natural`, `APPEND_near`, `APPEND_nearby`, `APPEND_necessary`, `APPEND_need`, `APPEND_needed`, `APPEND_never`, `APPEND_new`, `APPEND_news`, `APPEND_newsgroup`, `APPEND_newspapers`, `APPEND_next`, `APPEND_night`, `APPEND_nightmares`, `APPEND_no`, `APPEND_nobody`, `APPEND_nonsense`, `APPEND_nor`, `APPEND_normal`, `APPEND_not`, `APPEND_notable`, `APPEND_nothing`, `APPEND_notice`, `APPEND_now`, `APPEND_nuclear`, `APPEND_number`, `APPEND_numbers`, `APPEND_objectives`, `APPEND_occurred`, `APPEND_of`, `APPEND_off`, `APPEND_offered`, `APPEND_offers`, `APPEND_office`, 
`APPEND_officer`, `APPEND_officers`, `APPEND_official--Mortazavi--insisted`, `APPEND_officials`, `APPEND_often`, `APPEND_oh`, `APPEND_old`, `APPEND_older`, `APPEND_on`, `APPEND_once`, `APPEND_one`, `APPEND_one-handed`, `APPEND_ones`, `APPEND_online`, `APPEND_only`, `APPEND_onto`, `APPEND_open`, `APPEND_opened`, `APPEND_opening`, `APPEND_opinion`, `APPEND_opinions`, `APPEND_opponents`, `APPEND_opportunity`, `APPEND_or`, `APPEND_orchestrated`, `APPEND_order`, `APPEND_ordered`, `APPEND_organisation`, `APPEND_organization`, `APPEND_other`, `APPEND_others`, `APPEND_ounces`, `APPEND_our`, `APPEND_ourselves`, `APPEND_out`, `APPEND_outage`, `APPEND_output`, `APPEND_outside`, `APPEND_outward`, `APPEND_over`, `APPEND_overseas`, `APPEND_own`, `APPEND_owned`, `APPEND_owns`, `APPEND_paid`, `APPEND_pain`, `APPEND_pair`, `APPEND_paper`, `APPEND_par-4`, `APPEND_parents`, `APPEND_parliament`, `APPEND_part`, `APPEND_particular`, `APPEND_parties`, `APPEND_parts`, `APPEND_party`, `APPEND_pass`, `APPEND_passed`, `APPEND_passes`, `APPEND_passing`, `APPEND_passport`, `APPEND_past`, `APPEND_pasta`, `APPEND_pastures`, `APPEND_patients`, `APPEND_pay`, `APPEND_payer`, `APPEND_peers`, `APPEND_pennies`, `APPEND_pensions`, `APPEND_people`, `APPEND_percent`, `APPEND_perfect`, `APPEND_performance`, `APPEND_perfumes`, `APPEND_period`, `APPEND_person`, `APPEND_persons`, `APPEND_phone`, `APPEND_physical`, `APPEND_pickup`, `APPEND_picture`, `APPEND_pirouettes`, `APPEND_place`, `APPEND_places`, `APPEND_plan`, `APPEND_planned`, `APPEND_planning`, `APPEND_plans`, `APPEND_plant`, `APPEND_play`, `APPEND_played`, `APPEND_players`, `APPEND_playing`, `APPEND_plays`, `APPEND_please`, `APPEND_pleased`, `APPEND_plots`, `APPEND_point`, `APPEND_points`, `APPEND_policies`, `APPEND_policy`, `APPEND_pools`, `APPEND_poor`, `APPEND_popcorn`, `APPEND_popular`, `APPEND_port`, `APPEND_position`, `APPEND_possession`, `APPEND_possible`, `APPEND_post`, `APPEND_posters`, `APPEND_postmortems`, `APPEND_poverty`, `APPEND_power`, `APPEND_powers`, `APPEND_practice`, `APPEND_practices`, `APPEND_practicing`, `APPEND_prefer`, `APPEND_pregnancy`, `APPEND_prepare`, `APPEND_prepared`, `APPEND_present`, `APPEND_president`, `APPEND_press`, `APPEND_pretty`, `APPEND_prevent`, `APPEND_previous`, `APPEND_price`, `APPEND_prices`, `APPEND_prior`, `APPEND_prisoner`, `APPEND_probably`, `APPEND_probation`, `APPEND_problem`, `APPEND_problems`, `APPEND_procedure`, `APPEND_process`, `APPEND_production`, `APPEND_products`, `APPEND_professor`, `APPEND_program`, `APPEND_programs`, `APPEND_projects`, `APPEND_promised`, `APPEND_propagandc`, `APPEND_provide`, `APPEND_provides`, `APPEND_public`, `APPEND_published`, `APPEND_pulled`, `APPEND_punches`, `APPEND_punish`, `APPEND_punished`, `APPEND_pursuit`, `APPEND_put`, `APPEND_qualification`, `APPEND_qualifier`, `APPEND_quality`, `APPEND_quantities`, `APPEND_quarter`, `APPEND_question`, `APPEND_questioned`, `APPEND_questions`, `APPEND_quick`, `APPEND_quickly`, `APPEND_quit`, `APPEND_quite`, `APPEND_rabbis`, `APPEND_race`, `APPEND_racing`, `APPEND_radicals`, `APPEND_raid`, `APPEND_rain`, `APPEND_ran`, `APPEND_rang`, `APPEND_rather`, `APPEND_reach`, `APPEND_reaction`, `APPEND_read`, `APPEND_real`, `APPEND_realize`, `APPEND_realized`, `APPEND_really`, `APPEND_reason`, `APPEND_reasons`, `APPEND_recall`, `APPEND_receipt`, `APPEND_receive`, `APPEND_received`, `APPEND_recent`, `APPEND_recently`, `APPEND_recommendations`, `APPEND_record`, `APPEND_records`, `APPEND_recovered`, `APPEND_reduce`, `APPEND_reference`, `APPEND_references`, 
`APPEND_refused`, `APPEND_regarding`, `APPEND_regulars`, `APPEND_reigning`, `APPEND_related`, `APPEND_released`, `APPEND_reliance`, `APPEND_relieved`, `APPEND_remaining`, `APPEND_remember`, `APPEND_remembered`, `APPEND_report`, `APPEND_reports`, `APPEND_representative`, `APPEND_require`, `APPEND_required`, `APPEND_research`, `APPEND_researchers`, `APPEND_reserves`, `APPEND_resort`, `APPEND_response`, `APPEND_rest`, `APPEND_restaurant`, `APPEND_result`, `APPEND_resulting`, `APPEND_results`, `APPEND_retirement`, `APPEND_return`, `APPEND_returned`, `APPEND_revenue`, `APPEND_review`, `APPEND_reward`, `APPEND_ridiculous`, `APPEND_right`, `APPEND_rights`, `APPEND_rise`, `APPEND_risk`, `APPEND_risks`, `APPEND_rivals`, `APPEND_role`, `APPEND_rollout`, `APPEND_room`, `APPEND_roots`, `APPEND_route`, `APPEND_rules`, `APPEND_run`, `APPEND_running`, `APPEND_s`, `APPEND_sad`, `APPEND_safe`, `APPEND_safest`, `APPEND_said`, `APPEND_sale`, `APPEND_same`, `APPEND_saw`, `APPEND_say`, `APPEND_saying`, `APPEND_says`, `APPEND_scan`, `APPEND_scene`, `APPEND_scenes`, `APPEND_schedule`, `APPEND_scheduled`, `APPEND_scheme`, `APPEND_school`, `APPEND_schools`, `APPEND_score`, `APPEND_scored`, `APPEND_scoring`, `APPEND_scraping`, `APPEND_screws`, `APPEND_scrutinized`, `APPEND_season`, `APPEND_second`, `APPEND_secretly`, `APPEND_sector`, `APPEND_see`, `APPEND_seeing`, `APPEND_seeking`, `APPEND_seem`, `APPEND_seemed`, `APPEND_seems`, `APPEND_seen`, `APPEND_sell`, `APPEND_senators`, `APPEND_send`, `APPEND_sense`, `APPEND_sent`, `APPEND_sentence`, `APPEND_series`, `APPEND_serious`, `APPEND_seriously`, `APPEND_service`, `APPEND_services`, `APPEND_set`, `APPEND_several`, `APPEND_sewage`, `APPEND_share`, `APPEND_shareholders`, `APPEND_shares`, `APPEND_she`, `APPEND_shelter`, `APPEND_shirt`, `APPEND_shop`, `APPEND_shopping`, `APPEND_shops`, `APPEND_short`, `APPEND_shot`, `APPEND_should`, `APPEND_show`, `APPEND_showed`, `APPEND_shown`, `APPEND_shows`, `APPEND_siblings`, `APPEND_sickness`, `APPEND_similar`, `APPEND_since`, `APPEND_singer`, `APPEND_singing`, `APPEND_site`, `APPEND_sites`, `APPEND_sitting`, `APPEND_situation`, `APPEND_size`, `APPEND_skills`, `APPEND_skin`, `APPEND_slept`, `APPEND_slower`, `APPEND_slumped`, `APPEND_slums`, `APPEND_small`, `APPEND_smoking`, `APPEND_snow`, `APPEND_so`, `APPEND_soak`, `APPEND_society`, `APPEND_software`, `APPEND_sold`, `APPEND_solid`, `APPEND_some`, `APPEND_somebody`, `APPEND_someone`, `APPEND_something`, `APPEND_sometimes`, `APPEND_son`, `APPEND_songs`, `APPEND_soon`, `APPEND_sound`, `APPEND_sounds`, `APPEND_spate`, `APPEND_speak`, `APPEND_speaking`, `APPEND_species`, `APPEND_speech`, `APPEND_speed`, `APPEND_spend`, `APPEND_spending`, `APPEND_spent`, `APPEND_spin`, `APPEND_spoke`, `APPEND_spoken`, `APPEND_sponsors`, `APPEND_sponsorships`, `APPEND_sport`, `APPEND_spreads`, `APPEND_spring`, `APPEND_stabbing`, `APPEND_staff`, `APPEND_start`, `APPEND_started`, `APPEND_starting`, `APPEND_starts`, `APPEND_states`, `APPEND_station`, `APPEND_stay`, `APPEND_stayed`, `APPEND_stem`, `APPEND_still`, `APPEND_stimulus`, `APPEND_stop`, `APPEND_stopped`, `APPEND_store`, `APPEND_stores`, `APPEND_stories`, `APPEND_storms`, `APPEND_story`, `APPEND_strategy`, `APPEND_strength`, `APPEND_stress`, `APPEND_strong`, `APPEND_struggle`, `APPEND_student`, `APPEND_students`, `APPEND_studied`, `APPEND_studies`, `APPEND_study`, `APPEND_studying`, `APPEND_stuff`, `APPEND_style`, `APPEND_subject`, `APPEND_subside`, `APPEND_successful`, `APPEND_successor`, `APPEND_such`, `APPEND_suddenly`, `APPEND_suffering`, 
`APPEND_suggestions`, `APPEND_summer`, `APPEND_support`, `APPEND_supporter`, `APPEND_supposed`, `APPEND_sure`, `APPEND_surprised`, `APPEND_surrounding`, `APPEND_survived`, `APPEND_suspended`, `APPEND_suspicion`, `APPEND_swim`, `APPEND_system`, `APPEND_systems`, `APPEND_take`, `APPEND_taken`, `APPEND_takes`, `APPEND_taking`, `APPEND_talk`, `APPEND_talked`, `APPEND_talking`, `APPEND_talks`, `APPEND_taste`, `APPEND_taught`, `APPEND_teach`, `APPEND_teacher`, `APPEND_teachers`, `APPEND_team`, `APPEND_technology`, `APPEND_tell`, `APPEND_telling`, `APPEND_tells`, `APPEND_temperatures`, `APPEND_tend`, `APPEND_tense`, `APPEND_terminations`, `APPEND_terms`, `APPEND_terrible`, `APPEND_test`, `APPEND_testament`, `APPEND_tested`, `APPEND_testify`, `APPEND_tests`, `APPEND_thamespath`, `APPEND_than`, `APPEND_thanks`, `APPEND_that`, `APPEND_the`, `APPEND_their`, `APPEND_them`, `APPEND_themselves`, `APPEND_then`, `APPEND_there`, `APPEND_these`, `APPEND_they`, `APPEND_thing`, `APPEND_things`, `APPEND_think`, `APPEND_thinking`, `APPEND_thinks`, `APPEND_third`, `APPEND_this`, `APPEND_those`, `APPEND_though`, `APPEND_thought`, `APPEND_thoughts`, `APPEND_threat`, `APPEND_threatening`, `APPEND_three`, `APPEND_through`, `APPEND_throughout`, `APPEND_tickets`, `APPEND_tie-up`, `APPEND_ties`, `APPEND_time`, `APPEND_times`, `APPEND_tired`, `APPEND_title`, `APPEND_to`, `APPEND_today`, `APPEND_together`, `APPEND_told`, `APPEND_tomatoes`, `APPEND_tomorrow`, `APPEND_tongue`, `APPEND_tonight`, `APPEND_too`, `APPEND_took`, `APPEND_tools`, `APPEND_topic`, `APPEND_topics`, `APPEND_tornadoes`, `APPEND_totals`, `APPEND_tour`, `APPEND_towards`, `APPEND_town`, `APPEND_track`, `APPEND_trade`, `APPEND_train`, `APPEND_training`, `APPEND_transfer`, `APPEND_transfusions`, `APPEND_travel`, `APPEND_trees`, `APPEND_tried`, `APPEND_trip`, `APPEND_trouble`, `APPEND_troublesome`, `APPEND_trucks`, `APPEND_true`, `APPEND_trumps`, `APPEND_trust`, `APPEND_try`, `APPEND_trying`, `APPEND_turn`, `APPEND_turned`, `APPEND_twice`, `APPEND_twilight`, `APPEND_two`, `APPEND_txtspk`, `APPEND_types`, `APPEND_unable`, `APPEND_under`, `APPEND_underlies`, `APPEND_understand`, `APPEND_understanding`, `APPEND_undertake`, `APPEND_unhurt`, `APPEND_units`, `APPEND_unknown`, `APPEND_until`, `APPEND_unwilling`, `APPEND_up`, `APPEND_upsetting`, `APPEND_urban`, `APPEND_us`, `APPEND_use`, `APPEND_used`, `APPEND_useful`, `APPEND_users`, `APPEND_uses`, `APPEND_using`, `APPEND_usual`, `APPEND_usually`, `APPEND_v.`, `APPEND_value`, `APPEND_various`, `APPEND_vehicles`, `APPEND_venture`, `APPEND_very`, `APPEND_via`, `APPEND_video`, `APPEND_view`, `APPEND_viewers`, `APPEND_vigilant`, `APPEND_violation`, `APPEND_violent`, `APPEND_visit`, `APPEND_visited`, `APPEND_volunteers`, `APPEND_vote`, `APPEND_voted`, `APPEND_waffle`, `APPEND_wait`, `APPEND_waited`, `APPEND_waiting`, `APPEND_wake`, `APPEND_walk`, `APPEND_walked`, `APPEND_walking`, `APPEND_wand`, `APPEND_want`, `APPEND_wanted`, `APPEND_wants`, `APPEND_war`, `APPEND_warm`, `APPEND_warn`, `APPEND_was`, `APPEND_watch`, `APPEND_watched`, `APPEND_watching`, `APPEND_water`, `APPEND_way`, `APPEND_ways`, `APPEND_we`, `APPEND_wealthy`, `APPEND_wearing`, `APPEND_weather`, `APPEND_website`, `APPEND_weddings`, `APPEND_week`, `APPEND_weekend`, `APPEND_weeks`, `APPEND_weight`, `APPEND_well`, `APPEND_went`, `APPEND_were`, `APPEND_what`, `APPEND_when`, `APPEND_whenever`, `APPEND_where`, `APPEND_whether`, `APPEND_which`, `APPEND_while`, `APPEND_whites`, `APPEND_who`, `APPEND_whole`, `APPEND_whom`, `APPEND_whose`, `APPEND_why`, 
`APPEND_will`, `APPEND_willing`, `APPEND_willingness`, `APPEND_win`, `APPEND_winds`, `APPEND_winner`, `APPEND_winning`, `APPEND_wins`, `APPEND_winter`, `APPEND_wish`, `APPEND_with`, `APPEND_within`, `APPEND_without`, `APPEND_witnesses`, `APPEND_wo`, `APPEND_woman`, `APPEND_women`, `APPEND_won`, `APPEND_wonder`, `APPEND_wondered`, `APPEND_words`, `APPEND_wore`, `APPEND_work`, `APPEND_worked`, `APPEND_workers`, `APPEND_workforce`, `APPEND_working`, `APPEND_works`, `APPEND_world`, `APPEND_worried`, `APPEND_worry`, `APPEND_worse`, `APPEND_worship`, `APPEND_would`, `APPEND_write`, `APPEND_writing`, `APPEND_writings`, `APPEND_written`, `APPEND_wrong`, `APPEND_wrote`, `APPEND_year`, `APPEND_years`, `APPEND_yen`, `APPEND_yesterday`, `APPEND_yet`, `APPEND_you`, `APPEND_young`, `APPEND_your`, `APPEND_yourself`, `DELETE`, `KEEP`, `MERGE_SPACE`, `REPLACE_!`, `REPLACE_"`, `REPLACE_&`, `REPLACE_'`, `REPLACE_'d`, `REPLACE_'ll`, `REPLACE_'m`, `REPLACE_'re`, `REPLACE_'s`, `REPLACE_'ve`, `REPLACE_(`, `REPLACE_)`, `REPLACE_*`, `REPLACE_,`, `REPLACE_-`, `REPLACE_.`, `REPLACE_/`, `REPLACE_1`, `REPLACE_1st`, `REPLACE_2`, `REPLACE_3`, `REPLACE_5`, `REPLACE_8`, `REPLACE_:`, `REPLACE_;`, `REPLACE_?`, `REPLACE_A`, `REPLACE_About`, `REPLACE_Actually`, `REPLACE_After`, `REPLACE_All`, `REPLACE_Also`, `REPLACE_Although`, `REPLACE_America`, `REPLACE_American`, `REPLACE_Americans`, `REPLACE_An`, `REPLACE_And`, `REPLACE_Another`, `REPLACE_Anyway`, `REPLACE_Anyways`, `REPLACE_Are`, `REPLACE_As`, `REPLACE_Asian`, `REPLACE_At`, `REPLACE_August`, `REPLACE_Australia`, `REPLACE_Because`, `REPLACE_Before`, `REPLACE_Besides`, `REPLACE_British`, `REPLACE_But`, `REPLACE_By`, `REPLACE_CANCER`, `REPLACE_Can`, `REPLACE_China`, `REPLACE_Chinese`, `REPLACE_Christmas`, `REPLACE_Church`, `REPLACE_City`, `REPLACE_Could`, `REPLACE_Currently`, `REPLACE_Day`, `REPLACE_Did`, `REPLACE_Do`, `REPLACE_Does`, `REPLACE_During`, `REPLACE_English`, `REPLACE_Especially`, `REPLACE_European`, `REPLACE_Even`, `REPLACE_Every`, `REPLACE_Everyone`, `REPLACE_Everything`, `REPLACE_Facebook`, `REPLACE_Finally`, `REPLACE_First`, `REPLACE_Fla`, `REPLACE_For`, `REPLACE_Fortunately`, `REPLACE_French`, `REPLACE_Friday`, `REPLACE_From`, `REPLACE_Furthermore`, `REPLACE_German`, `REPLACE_God`, `REPLACE_Good`, `REPLACE_Have`, `REPLACE_He`, `REPLACE_Hello`, `REPLACE_Her`, `REPLACE_Here`, `REPLACE_Hi`, `REPLACE_His`, `REPLACE_How`, `REPLACE_However`, `REPLACE_I`, `REPLACE_If`, `REPLACE_In`, `REPLACE_Inc`, `REPLACE_Internet`, `REPLACE_Is`, `REPLACE_It`, `REPLACE_Italian`, `REPLACE_Its`, `REPLACE_Japan`, `REPLACE_Japanese`, `REPLACE_July`, `REPLACE_Just`, `REPLACE_Kong`, `REPLACE_Korea`, `REPLACE_Korean`, `REPLACE_Koreans`, `REPLACE_Lang`, `REPLACE_Last`, `REPLACE_Lately`, `REPLACE_Learning`, `REPLACE_Let`, `REPLACE_Life`, `REPLACE_Many`, `REPLACE_Maybe`, `REPLACE_Me`, `REPLACE_Mom`, `REPLACE_Monday`, `REPLACE_Moreover`, `REPLACE_Most`, `REPLACE_My`, `REPLACE_New`, `REPLACE_Next`, `REPLACE_Nice`, `REPLACE_No`, `REPLACE_Not`, `REPLACE_Now`, `REPLACE_Nowadays`, `REPLACE_OK`, `REPLACE_Of`, `REPLACE_Oh`, `REPLACE_On`, `REPLACE_One`, `REPLACE_Or`, `REPLACE_Our`, `REPLACE_People`, `REPLACE_Philippines`, `REPLACE_Please`, `REPLACE_Recently`, `REPLACE_Right`, `REPLACE_Russian`, `REPLACE_Saturday`, `REPLACE_School`, `REPLACE_Second`, `REPLACE_Secondly`, `REPLACE_See`, `REPLACE_Sept`, `REPLACE_September`, `REPLACE_She`, `REPLACE_Since`, `REPLACE_So`, `REPLACE_Some`, `REPLACE_Sometimes`, `REPLACE_South`, `REPLACE_Spanish`, `REPLACE_Spring`, `REPLACE_Starting`, `REPLACE_Summer`, 
`REPLACE_Sunday`, `REPLACE_TV`, `REPLACE_Taiwan`, `REPLACE_Thai`, `REPLACE_Thank`, `REPLACE_Thanks`, `REPLACE_That`, `REPLACE_The`, `REPLACE_Their`, `REPLACE_Then`, `REPLACE_There`, `REPLACE_Therefore`, `REPLACE_These`, `REPLACE_They`, `REPLACE_This`, `REPLACE_Those`, `REPLACE_Though`, `REPLACE_Time`, `REPLACE_To`, `REPLACE_Today`, `REPLACE_Tokyo`, `REPLACE_Tomorrow`, `REPLACE_Twitter`, `REPLACE_Two`, `REPLACE_US`, `REPLACE_Unfortunately`, `REPLACE_University`, `REPLACE_We`, `REPLACE_Wednesday`, `REPLACE_Well`, `REPLACE_What`, `REPLACE_When`, `REPLACE_Whenever`, `REPLACE_Which`, `REPLACE_While`, `REPLACE_Who`, `REPLACE_Why`, `REPLACE_Will`, `REPLACE_With`, `REPLACE_World`, `REPLACE_Would`, `REPLACE_Year`, `REPLACE_Yesterday`, `REPLACE_You`, `REPLACE_Your`, `REPLACE_[`, `REPLACE_]`, `REPLACE_a`, `REPLACE_abdomen`, `REPLACE_abide`, `REPLACE_abided`, `REPLACE_abiding`, `REPLACE_abilities`, `REPLACE_ability`, `REPLACE_able`, `REPLACE_abnormalities`, `REPLACE_abnormality`, `REPLACE_about`, `REPLACE_above`, `REPLACE_abroad`, `REPLACE_absurdity`, `REPLACE_abyss`, `REPLACE_academies`, `REPLACE_academy`, `REPLACE_acceptability`, `REPLACE_accepted`, `REPLACE_accessibility`, `REPLACE_accessories`, `REPLACE_accessory`, `REPLACE_accident`, `REPLACE_accidents`, `REPLACE_acclimated`, `REPLACE_acclimatised`, `REPLACE_accommodation`, `REPLACE_account`, `REPLACE_accountability`, `REPLACE_accounts`, `REPLACE_accuracy`, `REPLACE_achieve`, `REPLACE_acidities`, `REPLACE_acidity`, `REPLACE_across`, `REPLACE_action`, `REPLACE_actions`, `REPLACE_activities`, `REPLACE_activity`, `REPLACE_actress`, `REPLACE_actresses`, `REPLACE_actuality`, `REPLACE_actually`, `REPLACE_acuity`, `REPLACE_added`, `REPLACE_addition`, `REPLACE_address`, `REPLACE_addressed`, `REPLACE_addresses`, `REPLACE_addressing`, `REPLACE_adenoviruses`, `REPLACE_adequacy`, `REPLACE_admired`, `REPLACE_ads`, `REPLACE_adultery`, `REPLACE_adults`, `REPLACE_adversaries`, `REPLACE_adversary`, `REPLACE_adversity`, `REPLACE_advertise`, `REPLACE_advertised`, `REPLACE_advertisement`, `REPLACE_advertises`, `REPLACE_advertising`, `REPLACE_advice`, `REPLACE_advisories`, `REPLACE_advisory`, `REPLACE_advocacy`, `REPLACE_affect`, `REPLACE_affected`, `REPLACE_affecting`, `REPLACE_affects`, `REPLACE_affinities`, `REPLACE_affinity`, `REPLACE_afraid`, `REPLACE_after`, `REPLACE_afternoon`, `REPLACE_again`, `REPLACE_against`, `REPLACE_age`, `REPLACE_aged`, `REPLACE_ageing`, `REPLACE_agencies`, `REPLACE_agency`, `REPLACE_ages`, `REPLACE_aging`, `REPLACE_ago`, `REPLACE_agonies`, `REPLACE_agonised`, `REPLACE_agonising`, `REPLACE_agony`, `REPLACE_agribusiness`, `REPLACE_agribusinesses`, `REPLACE_air`, `REPLACE_airbrushed`, `REPLACE_airman`, `REPLACE_airmen`, `REPLACE_albatross`, `REPLACE_alchemy`, `REPLACE_alcohol`, `REPLACE_alderman`, `REPLACE_algae`, `REPLACE_alias`, `REPLACE_aliases`, `REPLACE_alight`, `REPLACE_align`, `REPLACE_aligned`, `REPLACE_all`, `REPLACE_allegory`, `REPLACE_allergies`, `REPLACE_allergy`, `REPLACE_allow`, `REPLACE_allowed`, `REPLACE_allowing`, `REPLACE_allows`, `REPLACE_almost`, `REPLACE_alone`, `REPLACE_along`, `REPLACE_already`, `REPLACE_also`, `REPLACE_although`, `REPLACE_alumni`, `REPLACE_alumnus`, `REPLACE_always`, `REPLACE_am`, `REPLACE_ambiguities`, `REPLACE_ambiguity`, `REPLACE_amenities`, `REPLACE_amoeba`, `REPLACE_among`, `REPLACE_amount`, `REPLACE_amounted`, `REPLACE_amygdala`, `REPLACE_an`, `REPLACE_analogies`, `REPLACE_analogy`, `REPLACE_analyse`, `REPLACE_analysed`, `REPLACE_analyses`, `REPLACE_analysing`, `REPLACE_analysis`, 
`REPLACE_anaphylaxis`, `REPLACE_anatomy`, `REPLACE_ancestry`, `REPLACE_anchorman`, `REPLACE_anchovies`, `REPLACE_anchovy`, `REPLACE_ancillary`, `REPLACE_and`, `REPLACE_angry`, `REPLACE_animal`, `REPLACE_animals`, `REPLACE_animosity`, `REPLACE_annexe`, `REPLACE_annexed`, `REPLACE_anniversaries`, `REPLACE_anniversary`, `REPLACE_annualised`, `REPLACE_annuities`, `REPLACE_annuity`, `REPLACE_anomaly`, `REPLACE_another`, `REPLACE_answer`, `REPLACE_answered`, `REPLACE_answering`, `REPLACE_answers`, `REPLACE_ante`, `REPLACE_antenna`, `REPLACE_antennas`, `REPLACE_anthologies`, `REPLACE_anthology`, `REPLACE_anthrax`, `REPLACE_anthropology`, `REPLACE_anti-depressants`, `REPLACE_anti-freeze`, `REPLACE_anti-hero`, `REPLACE_anti-inflammatory`, `REPLACE_anti-racism`, `REPLACE_anti-retroviral`, `REPLACE_anti-terrorism`, `REPLACE_anti-virus`, `REPLACE_antibiotic`, `REPLACE_antibiotics`, `REPLACE_antibodies`, `REPLACE_antibody`, `REPLACE_anticlimax`, `REPLACE_anticoagulants`, `REPLACE_antidepressant`, `REPLACE_antidepressants`, `REPLACE_antihero`, `REPLACE_antihistamine`, `REPLACE_antihistamines`, `REPLACE_antioxidants`, `REPLACE_antipathy`, `REPLACE_antiquities`, `REPLACE_antiquity`, `REPLACE_antiretroviral`, `REPLACE_antiterrorism`, `REPLACE_antithesis`, `REPLACE_anxieties`, `REPLACE_anxiety`, `REPLACE_any`, `REPLACE_anybody`, `REPLACE_anymore`, `REPLACE_anyone`, `REPLACE_anything`, `REPLACE_anywhere`, `REPLACE_apartment`, `REPLACE_apologies`, `REPLACE_apologise`, `REPLACE_apologised`, `REPLACE_apologises`, `REPLACE_apologising`, `REPLACE_apology`, `REPLACE_apoplexy`, `REPLACE_apotheosis`, `REPLACE_appalled`, `REPLACE_appalls`, `REPLACE_apparatus`, `REPLACE_appear`, `REPLACE_appeared`, `REPLACE_appears`, `REPLACE_appendectomy`, `REPLACE_appendix`, `REPLACE_appetite`, `REPLACE_appoint`, `REPLACE_appreciate`, `REPLACE_apprised`, `REPLACE_approved`, `REPLACE_aquarium`, `REPLACE_aquariums`, `REPLACE_arc`, `REPLACE_arced`, `REPLACE_archaeology`, `REPLACE_archipelago`, `REPLACE_archive`, `REPLACE_archived`, `REPLACE_archives`, `REPLACE_archiving`, `REPLACE_arcs`, `REPLACE_are`, `REPLACE_area`, `REPLACE_areas`, `REPLACE_aristocracy`, `REPLACE_armies`, `REPLACE_armory`, `REPLACE_armoury`, `REPLACE_army`, `REPLACE_around`, `REPLACE_arrive`, `REPLACE_arrived`, `REPLACE_arses`, `REPLACE_art`, `REPLACE_arteries`, `REPLACE_artery`, `REPLACE_article`, `REPLACE_articles`, `REPLACE_as`, `REPLACE_asbestosis`, `REPLACE_ash`, `REPLACE_ashes`, `REPLACE_ask`, `REPLACE_asked`, `REPLACE_asking`, `REPLACE_asks`, `REPLACE_asleep`, `REPLACE_ass`, `REPLACE_assemblies`, `REPLACE_assembly`, `REPLACE_asses`, `REPLACE_asymmetries`, `REPLACE_asymmetry`, `REPLACE_at`, `REPLACE_ate`, `REPLACE_atherosclerosis`, `REPLACE_atlas`, `REPLACE_atmosphere`, `REPLACE_atomised`, `REPLACE_atria`, `REPLACE_atrium`, `REPLACE_atrocities`, `REPLACE_atrocity`, `REPLACE_attend`, `REPLACE_attended`, `REPLACE_attending`, `REPLACE_audacity`, `REPLACE_audience`, `REPLACE_auguries`, `REPLACE_aunt`, `REPLACE_aunts`, `REPLACE_aura`, `REPLACE_aureus`, `REPLACE_aurorae`, `REPLACE_austerity`, `REPLACE_author`, `REPLACE_authored`, `REPLACE_authorise`, `REPLACE_authorised`, `REPLACE_authorises`, `REPLACE_authorising`, `REPLACE_authorities`, `REPLACE_authority`, `REPLACE_authorization`, `REPLACE_authorizing`, `REPLACE_authors`, `REPLACE_autobiography`, `REPLACE_autonomy`, `REPLACE_autopsies`, `REPLACE_autopsy`, `REPLACE_auxiliary`, `REPLACE_availability`, `REPLACE_available`, `REPLACE_avant-garde`, `REPLACE_averted`, `REPLACE_aviaries`, `REPLACE_avoid`, 
`REPLACE_avoiding`, `REPLACE_awaiting`, `REPLACE_awareness`, `REPLACE_away`, `REPLACE_awe`, `REPLACE_ax`, `REPLACE_axe`, `REPLACE_axed`, `REPLACE_axis`, `REPLACE_back`, `REPLACE_backed`, `REPLACE_backing`, `REPLACE_backlash`, `REPLACE_backlashes`, `REPLACE_backlights`, `REPLACE_backs`, `REPLACE_backsliding`, `REPLACE_bacteria`, `REPLACE_bacterium`, `REPLACE_bad`, `REPLACE_badge`, `REPLACE_badges`, `REPLACE_bakeries`, `REPLACE_bakery`, `REPLACE_balconies`, `REPLACE_balcony`, `REPLACE_balked`, `REPLACE_balks`, `REPLACE_ballroom`, `REPLACE_banalities`, `REPLACE_bank`, `REPLACE_banked`, `REPLACE_banking`, `REPLACE_bankruptcies`, `REPLACE_bankruptcy`, `REPLACE_banks`, `REPLACE_banned`, `REPLACE_baptised`, `REPLACE_barbarity`, `REPLACE_barbecue`, `REPLACE_barbecuing`, `REPLACE_barfly`, `REPLACE_barman`, `REPLACE_barrel`, `REPLACE_barreling`, `REPLACE_barrels`, `REPLACE_base`, `REPLACE_based`, `REPLACE_bases`, `REPLACE_basis`, `REPLACE_basketball`, `REPLACE_bass`, `REPLACE_batsman`, `REPLACE_batsmen`, `REPLACE_batteries`, `REPLACE_battery`, `REPLACE_bayoneted`, `REPLACE_be`, `REPLACE_bear`, `REPLACE_bearing`, `REPLACE_bears`, `REPLACE_beat`, `REPLACE_beaten`, `REPLACE_beauties`, `REPLACE_beautiful`, `REPLACE_beauty`, `REPLACE_became`, `REPLACE_because`, `REPLACE_become`, `REPLACE_becomes`, `REPLACE_becoming`, `REPLACE_bed`, `REPLACE_bedevilling`, `REPLACE_beds`, `REPLACE_beech`, `REPLACE_beeches`, `REPLACE_beef`, `REPLACE_beefed`, `REPLACE_beefing`, `REPLACE_been`, `REPLACE_beer`, `REPLACE_before`, `REPLACE_began`, `REPLACE_begin`, `REPLACE_beginner`, `REPLACE_beginning`, `REPLACE_begins`, `REPLACE_begun`, `REPLACE_behind`, `REPLACE_being`, `REPLACE_belies`, `REPLACE_believe`, `REPLACE_believed`, `REPLACE_believes`, `REPLACE_benchmark`, `REPLACE_benchmarked`, `REPLACE_benchmarking`, `REPLACE_bend`, `REPLACE_bending`, `REPLACE_beneficiaries`, `REPLACE_beneficiary`, `REPLACE_benefit`, `REPLACE_benefited`, `REPLACE_benefiting`, `REPLACE_benefits`, `REPLACE_benefitted`, `REPLACE_benefitting`, `REPLACE_bent`, `REPLACE_bereaved`, `REPLACE_bereft`, `REPLACE_best`, `REPLACE_bet`, `REPLACE_better`, `REPLACE_between`, `REPLACE_bias`, `REPLACE_biased`, `REPLACE_biases`, `REPLACE_biceps`, `REPLACE_bid`, `REPLACE_bidding`, `REPLACE_bide`, `REPLACE_bids`, `REPLACE_big`, `REPLACE_bigamy`, `REPLACE_biggest`, `REPLACE_bigotry`, `REPLACE_bijou`, `REPLACE_billowing`, `REPLACE_binary`, `REPLACE_bind`, `REPLACE_binding`, `REPLACE_biographies`, `REPLACE_biography`, `REPLACE_biology`, `REPLACE_biomass`, `REPLACE_biopsies`, `REPLACE_biopsy`, `REPLACE_biotechnologies`, `REPLACE_biotechnology`, `REPLACE_birthday`, `REPLACE_bit`, `REPLACE_bite`, `REPLACE_bites`, `REPLACE_biting`, `REPLACE_bits`, `REPLACE_bitten`, `REPLACE_black`, `REPLACE_blackberry`, `REPLACE_blacked`, `REPLACE_blackened`, `REPLACE_blackening`, `REPLACE_blackens`, `REPLACE_blacks`, `REPLACE_blasphemy`, `REPLACE_blend`, `REPLACE_blending`, `REPLACE_blends`, `REPLACE_blindness`, `REPLACE_blinkered`, `REPLACE_blue`, `REPLACE_blueberries`, `REPLACE_blueberry`, `REPLACE_bodies`, `REPLACE_body`, `REPLACE_bogey`, `REPLACE_bogeys`, `REPLACE_bogged`, `REPLACE_bogies`, `REPLACE_bolt`, `REPLACE_bolted`, `REPLACE_bolting`, `REPLACE_bolts`, `REPLACE_bomb`, `REPLACE_bonds`, `REPLACE_bonefish`, `REPLACE_bongo`, `REPLACE_bonus`, `REPLACE_bonuses`, `REPLACE_booby`, `REPLACE_book`, `REPLACE_bookmark`, `REPLACE_bookmarking`, `REPLACE_books`, `REPLACE_bookshelf`, `REPLACE_bookshelves`, `REPLACE_bore`, `REPLACE_bored`, `REPLACE_boring`, `REPLACE_born`, `REPLACE_borne`, 
`REPLACE_boss`, `REPLACE_both`, `REPLACE_bottle`, `REPLACE_bottle-fed`, `REPLACE_bottled`, `REPLACE_bottles`, `REPLACE_bottling`, `REPLACE_bought`, `REPLACE_bound`, `REPLACE_boundaries`, `REPLACE_boundary`, `REPLACE_bounded`, `REPLACE_bounties`, `REPLACE_bounty`, `REPLACE_boxes`, `REPLACE_boyfriend`, `REPLACE_boys`, `REPLACE_braided`, `REPLACE_brainstorm`, `REPLACE_brainstormed`, `REPLACE_brainstorming`, `REPLACE_brandies`, `REPLACE_brandy`, `REPLACE_brass`, `REPLACE_bravado`, `REPLACE_break`, `REPLACE_breaking`, `REPLACE_breast-fed`, `REPLACE_breast-feed`, `REPLACE_breast-feeding`, `REPLACE_brethren`, `REPLACE_breweries`, `REPLACE_brewery`, `REPLACE_brightness`, `REPLACE_bring`, `REPLACE_bringing`, `REPLACE_brings`, `REPLACE_broke`, `REPLACE_broken`, `REPLACE_broker`, `REPLACE_brokered`, `REPLACE_brokering`, `REPLACE_brokers`, `REPLACE_brooches`, `REPLACE_brother`, `REPLACE_brother-in-law`, `REPLACE_brothers`, `REPLACE_brought`, `REPLACE_brunch`, `REPLACE_buddies`, `REPLACE_buddy`, `REPLACE_buff`, `REPLACE_buffalo`, `REPLACE_buffed`, `REPLACE_buggies`, `REPLACE_buggy`, `REPLACE_build`, `REPLACE_building`, `REPLACE_buildings`, `REPLACE_built`, `REPLACE_bunny`, `REPLACE_buoyancy`, `REPLACE_buoyed`, `REPLACE_bureau`, `REPLACE_bureaucracies`, `REPLACE_bureaucracy`, `REPLACE_bureaus`, `REPLACE_burglaries`, `REPLACE_burglary`, `REPLACE_burn`, `REPLACE_burned`, `REPLACE_burning`, `REPLACE_burns`, `REPLACE_burnt`, `REPLACE_burr`, `REPLACE_bus`, `REPLACE_buses`, `REPLACE_bushman`, `REPLACE_business`, `REPLACE_businesses`, `REPLACE_businessman`, `REPLACE_businessmen`, `REPLACE_businesswoman`, `REPLACE_busy`, `REPLACE_but`, `REPLACE_butchery`, `REPLACE_butterflies`, `REPLACE_butterfly`, `REPLACE_buy`, `REPLACE_buying`, `REPLACE_buys`, `REPLACE_by`, `REPLACE_by-pass`, `REPLACE_ca`, `REPLACE_cache`, `REPLACE_caches`, `REPLACE_cacophony`, `REPLACE_cacti`, `REPLACE_cactus`, `REPLACE_cadavers`, `REPLACE_calamities`, `REPLACE_calamity`, `REPLACE_calculus`, `REPLACE_calf`, `REPLACE_caliber`, `REPLACE_calibre`, `REPLACE_call`, `REPLACE_called`, `REPLACE_calls`, `REPLACE_calorie`, `REPLACE_calories`, `REPLACE_came`, `REPLACE_camera`, `REPLACE_cameraman`, `REPLACE_cameramen`, `REPLACE_cameras`, `REPLACE_campus`, `REPLACE_campuses`, `REPLACE_can`, `REPLACE_canary`, `REPLACE_cancel`, `REPLACE_canceled`, `REPLACE_canceling`, `REPLACE_cancelled`, `REPLACE_cancelling`, `REPLACE_cancels`, `REPLACE_candidacy`, `REPLACE_canned`, `REPLACE_canning`, `REPLACE_cans`, `REPLACE_cant`, `REPLACE_canvas`, `REPLACE_canvases`, `REPLACE_capabilities`, `REPLACE_capability`, `REPLACE_capacities`, `REPLACE_capacity`, `REPLACE_capita`, `REPLACE_capitalise`, `REPLACE_capitalised`, `REPLACE_capitalising`, `REPLACE_captaincy`, `REPLACE_car`, `REPLACE_caravan`, `REPLACE_carcass`, `REPLACE_carcasses`, `REPLACE_cardiology`, `REPLACE_cards`, `REPLACE_care`, `REPLACE_career`, `REPLACE_careers`, `REPLACE_careful`, `REPLACE_cares`, `REPLACE_cargo`, `REPLACE_caroling`, `REPLACE_carried`, `REPLACE_carryover`, `REPLACE_cars`, `REPLACE_cart`, `REPLACE_carted`, `REPLACE_carting`, `REPLACE_cartography`, `REPLACE_carts`, `REPLACE_case`, `REPLACE_cases`, `REPLACE_casings`, `REPLACE_casualties`, `REPLACE_casualty`, `REPLACE_casuistry`, `REPLACE_catalog`, `REPLACE_cataloged`, `REPLACE_catalysed`, `REPLACE_catch`, `REPLACE_categories`, `REPLACE_category`, `REPLACE_catfish`, `REPLACE_catharsis`, `REPLACE_cats`, `REPLACE_cattleman`, `REPLACE_cattlemen`, `REPLACE_caught`, `REPLACE_causalities`, `REPLACE_causality`, `REPLACE_cause`, `REPLACE_caused`, 
`REPLACE_causes`, `REPLACE_causing`, `REPLACE_cautiously`, `REPLACE_cavalry`, `REPLACE_caveman`, `REPLACE_cavemen`, `REPLACE_cavities`, `REPLACE_cavity`, `REPLACE_ceased`, `REPLACE_celebrities`, `REPLACE_celebrity`, `REPLACE_cemeteries`, `REPLACE_cemetery`, `REPLACE_census`, `REPLACE_censuses`, `REPLACE_center`, `REPLACE_centered`, `REPLACE_centering`, `REPLACE_centers`, `REPLACE_centimeter`, `REPLACE_centimeters`, `REPLACE_centimetre`, `REPLACE_centimetres`, `REPLACE_centrality`, `REPLACE_centuries`, `REPLACE_century`, `REPLACE_ceremonies`, `REPLACE_ceremony`, `REPLACE_certain`, `REPLACE_certainty`, `REPLACE_cervix`, `REPLACE_chains`, `REPLACE_chairman`, `REPLACE_chairmen`, `REPLACE_chairwoman`, `REPLACE_chance`, `REPLACE_chances`, `REPLACE_change`, `REPLACE_changed`, `REPLACE_changeover`, `REPLACE_changes`, `REPLACE_changing`, `REPLACE_channel`, `REPLACE_channeled`, `REPLACE_channeling`, `REPLACE_channelling`, `REPLACE_channels`, `REPLACE_character`, `REPLACE_characterise`, `REPLACE_characterised`, `REPLACE_characterises`, `REPLACE_characters`, `REPLACE_charities`, `REPLACE_charity`, `REPLACE_charred`, `REPLACE_chat`, `REPLACE_chateau`, `REPLACE_chateaux`, `REPLACE_chats`, `REPLACE_chatted`, `REPLACE_chatting`, `REPLACE_cheap`, `REPLACE_cheaper`, `REPLACE_check`, `REPLACE_checked`, `REPLACE_checking`, `REPLACE_cheeses`, `REPLACE_chemistry`, `REPLACE_chemotherapy`, `REPLACE_cherries`, `REPLACE_cherry`, `REPLACE_chicken`, `REPLACE_chickens`, `REPLACE_child`, `REPLACE_children`, `REPLACE_chili`, `REPLACE_chilli`, `REPLACE_chillies`, `REPLACE_chirality`, `REPLACE_chiseled`, `REPLACE_choice`, `REPLACE_choices`, `REPLACE_cholera`, `REPLACE_choose`, `REPLACE_chooses`, `REPLACE_choosing`, `REPLACE_choreography`, `REPLACE_chose`, `REPLACE_chosen`, `REPLACE_chronology`, `REPLACE_chunk`, `REPLACE_chunks`, `REPLACE_churchmen`, `REPLACE_cicadas`, `REPLACE_cilia`, `REPLACE_circuitry`, `REPLACE_circus`, `REPLACE_circuses`, `REPLACE_cirrhosis`, `REPLACE_cities`, `REPLACE_citizenry`, `REPLACE_citizens`, `REPLACE_citrus`, `REPLACE_city`, `REPLACE_civilised`, `REPLACE_clad`, `REPLACE_clamor`, `REPLACE_clamoring`, `REPLACE_clarity`, `REPLACE_class`, `REPLACE_classes`, `REPLACE_classmates`, `REPLACE_clean`, `REPLACE_clear`, `REPLACE_cleft`, `REPLACE_clergy`, `REPLACE_clergyman`, `REPLACE_clergymen`, `REPLACE_clitoris`, `REPLACE_clone`, `REPLACE_cloned`, `REPLACE_clones`, `REPLACE_cloning`, `REPLACE_close`, `REPLACE_closed`, `REPLACE_clothes`, `REPLACE_clothing`, `REPLACE_cm`, `REPLACE_co-author`, `REPLACE_co-authored`, `REPLACE_co-authors`, `REPLACE_co-habiting`, `REPLACE_co-operate`, `REPLACE_co-operated`, `REPLACE_co-operating`, `REPLACE_co-opt`, `REPLACE_co-opted`, `REPLACE_co-ordinate`, `REPLACE_co-ordinated`, `REPLACE_co-ordinates`, `REPLACE_co-ordinating`, `REPLACE_co-produced`, `REPLACE_co-sponsor`, `REPLACE_co-sponsored`, `REPLACE_co-sponsors`, `REPLACE_co-star`, `REPLACE_co-starred`, `REPLACE_co-stars`, `REPLACE_coach`, `REPLACE_coached`, `REPLACE_coaches`, `REPLACE_coaching`, `REPLACE_coast`, `REPLACE_coat`, `REPLACE_cockroach`, `REPLACE_cockroaches`, `REPLACE_code`, `REPLACE_coffee`, `REPLACE_cognoscenti`, `REPLACE_cola`, `REPLACE_cold`, `REPLACE_colleagues`, `REPLACE_college`, `REPLACE_colon`, `REPLACE_colonies`, `REPLACE_colonised`, `REPLACE_colonoscopies`, `REPLACE_colonoscopy`, `REPLACE_colony`, `REPLACE_color`, `REPLACE_colored`, `REPLACE_coloring`, `REPLACE_colors`, `REPLACE_colossus`, `REPLACE_coma`, `REPLACE_combat`, `REPLACE_combating`, `REPLACE_combats`, `REPLACE_combatting`, 
`REPLACE_come`, `REPLACE_comedies`, `REPLACE_comedy`, `REPLACE_comes`, `REPLACE_comfortable`, `REPLACE_coming`, `REPLACE_coming-of-age`, `REPLACE_commando`, `REPLACE_commandos`, `REPLACE_comment`, `REPLACE_commentaries`, `REPLACE_commentary`, `REPLACE_comments`, `REPLACE_commercialising`, `REPLACE_commodities`, `REPLACE_commodity`, `REPLACE_common`, `REPLACE_commonalities`, `REPLACE_commonality`, `REPLACE_communicate`, `REPLACE_communicating`, `REPLACE_communication`, `REPLACE_communications`, `REPLACE_communities`, `REPLACE_community`, `REPLACE_comorbidities`, `REPLACE_companies`, `REPLACE_company`, `REPLACE_compared`, `REPLACE_comparison`, `REPLACE_compatibility`, `REPLACE_competency`, `REPLACE_complete`, `REPLACE_completed`, `REPLACE_completely`, `REPLACE_complex`, `REPLACE_complexes`, `REPLACE_complexities`, `REPLACE_complexity`, `REPLACE_composite`, `REPLACE_composites`, `REPLACE_computer`, `REPLACE_computerised`, `REPLACE_computers`, `REPLACE_con`, `REPLACE_concerned`, `REPLACE_concerning`, `REPLACE_concerns`, `REPLACE_concert`, `REPLACE_concerto`, `REPLACE_concerts`, `REPLACE_conditions`, `REPLACE_conductivity`, `REPLACE_confectionary`, `REPLACE_confectionery`, `REPLACE_confidence`, `REPLACE_confidentiality`, `REPLACE_conformity`, `REPLACE_confused`, `REPLACE_congress`, `REPLACE_congresses`, `REPLACE_congressman`, `REPLACE_congressmen`, `REPLACE_connectivity`, `REPLACE_conned`, `REPLACE_conquistadores`, `REPLACE_consciousness`, `REPLACE_consensus`, `REPLACE_conservatory`, `REPLACE_consider`, `REPLACE_considered`, `REPLACE_considering`, `REPLACE_considers`, `REPLACE_consisted`, `REPLACE_consistency`, `REPLACE_consists`, `REPLACE_consortia`, `REPLACE_consortium`, `REPLACE_conspiracies`, `REPLACE_conspiracy`, `REPLACE_constancy`, `REPLACE_constituencies`, `REPLACE_constituency`, `REPLACE_consultancies`, `REPLACE_consultancy`, `REPLACE_contact`, `REPLACE_contemporaries`, `REPLACE_contemporary`, `REPLACE_content`, `REPLACE_contingencies`, `REPLACE_contingency`, `REPLACE_continue`, `REPLACE_continued`, `REPLACE_continues`, `REPLACE_continuing`, `REPLACE_continuity`, `REPLACE_continuum`, `REPLACE_contradictory`, `REPLACE_contrary`, `REPLACE_control`, `REPLACE_controlled`, `REPLACE_controlling`, `REPLACE_controls`, `REPLACE_controversies`, `REPLACE_controversy`, `REPLACE_convenience`, `REPLACE_convenient`, `REPLACE_conversation`, `REPLACE_conversations`, `REPLACE_convexity`, `REPLACE_conviviality`, `REPLACE_cook`, `REPLACE_cooked`, `REPLACE_cookery`, `REPLACE_cookies`, `REPLACE_cooking`, `REPLACE_cooky`, `REPLACE_coppiced`, `REPLACE_corpus`, `REPLACE_corrals`, `REPLACE_correct`, `REPLACE_corrected`, `REPLACE_correcting`, `REPLACE_corrections`, `REPLACE_correctly`, `REPLACE_cortex`, `REPLACE_cost`, `REPLACE_costing`, `REPLACE_costs`, `REPLACE_cosy`, `REPLACE_could`, `REPLACE_counsel`, `REPLACE_counseling`, `REPLACE_counselling`, `REPLACE_counsels`, `REPLACE_counter-attack`, `REPLACE_counter-attacked`, `REPLACE_counter-attacking`, `REPLACE_counter-insurgency`, `REPLACE_counterinsurgency`, `REPLACE_counties`, `REPLACE_countries`, `REPLACE_country`, `REPLACE_countryman`, `REPLACE_countrymen`, `REPLACE_county`, `REPLACE_couple`, `REPLACE_couples`, `REPLACE_course`, `REPLACE_courses`, `REPLACE_court-martial`, `REPLACE_court-martialed`, `REPLACE_courtesies`, `REPLACE_courtesy`, `REPLACE_covered`, `REPLACE_covers`, `REPLACE_craftsman`, `REPLACE_cramps`, `REPLACE_cranberries`, `REPLACE_cranberry`, `REPLACE_crash-landed`, `REPLACE_crawfish`, `REPLACE_crayfish`, `REPLACE_cream`, `REPLACE_create`, 
`REPLACE_created`, `REPLACE_creates`, `REPLACE_creating`, `REPLACE_credibility`, `REPLACE_crematoria`, `REPLACE_crest`, `REPLACE_crested`, `REPLACE_cresting`, `REPLACE_crests`, `REPLACE_crew`, `REPLACE_crewman`, `REPLACE_crewmen`, `REPLACE_crises`, `REPLACE_crisis`, `REPLACE_criteria`, `REPLACE_criticise`, `REPLACE_criticised`, `REPLACE_criticising`, `REPLACE_critique`, `REPLACE_critiques`, `REPLACE_cronies`, `REPLACE_crony`, `REPLACE_cross-checking`, `REPLACE_crossed`, `REPLACE_crowd`, `REPLACE_crowded`, `REPLACE_crows`, `REPLACE_crucifix`, `REPLACE_crucifixes`, `REPLACE_cruelties`, `REPLACE_cruelty`, `REPLACE_crus`, `REPLACE_crystallise`, `REPLACE_cue`, `REPLACE_cues`, `REPLACE_cul-de-sac`, `REPLACE_culpa`, `REPLACE_culture`, `REPLACE_cultures`, `REPLACE_curated`, `REPLACE_curiosity`, `REPLACE_currencies`, `REPLACE_currency`, `REPLACE_current`, `REPLACE_currently`, `REPLACE_curricula`, `REPLACE_curriculum`, `REPLACE_curried`, `REPLACE_curry`, `REPLACE_curse`, `REPLACE_cursed`, `REPLACE_curses`, `REPLACE_cursing`, `REPLACE_custody`, `REPLACE_customer`, `REPLACE_customers`, `REPLACE_customised`, `REPLACE_customises`, `REPLACE_cut`, `REPLACE_cuts`, `REPLACE_cutting`, `REPLACE_cypress`, `REPLACE_daddy`, `REPLACE_dailies`, `REPLACE_daily`, `REPLACE_dairy`, `REPLACE_dairymen`, `REPLACE_dais`, `REPLACE_daisies`, `REPLACE_dance`, `REPLACE_dancing`, `REPLACE_dare`, `REPLACE_dared`, `REPLACE_dares`, `REPLACE_daring`, `REPLACE_data`, `REPLACE_date`, `REPLACE_dates`, `REPLACE_dating`, `REPLACE_daughter`, `REPLACE_daughter-in-law`, `REPLACE_daughters`, `REPLACE_day`, `REPLACE_days`, `REPLACE_de-emphasized`, `REPLACE_de-icing`, `REPLACE_deal`, `REPLACE_dealt`, `REPLACE_decentralised`, `REPLACE_decide`, `REPLACE_decided`, `REPLACE_decision`, `REPLACE_decisions`, `REPLACE_decommission`, `REPLACE_decommissioned`, `REPLACE_decommissioning`, `REPLACE_deconstruct`, `REPLACE_deconstructed`, `REPLACE_decoupling`, `REPLACE_decriminalise`, `REPLACE_decriminalised`, `REPLACE_defence`, `REPLACE_defences`, `REPLACE_defense`, `REPLACE_defenses`, `REPLACE_deficiencies`, `REPLACE_deficiency`, `REPLACE_definitely`, `REPLACE_deformities`, `REPLACE_defuse`, `REPLACE_defused`, `REPLACE_defusing`, `REPLACE_degrees`, `REPLACE_deities`, `REPLACE_deity`, `REPLACE_delegate`, `REPLACE_delegated`, `REPLACE_delegates`, `REPLACE_delicacy`, `REPLACE_delicious`, `REPLACE_delinquencies`, `REPLACE_delinquency`, `REPLACE_delirium`, `REPLACE_delist`, `REPLACE_delisted`, `REPLACE_delisting`, `REPLACE_deliveries`, `REPLACE_delivery`, `REPLACE_demands`, `REPLACE_demilitarised`, `REPLACE_democracies`, `REPLACE_democracy`, `REPLACE_democratised`, `REPLACE_demonise`, `REPLACE_demonising`, `REPLACE_demoralised`, `REPLACE_densities`, `REPLACE_density`, `REPLACE_deoxygenated`, `REPLACE_depend`, `REPLACE_dependency`, `REPLACE_depends`, `REPLACE_depositary`, `REPLACE_depository`, `REPLACE_depressed`, `REPLACE_deputies`, `REPLACE_deputise`, `REPLACE_deputy`, `REPLACE_derby`, `REPLACE_deregulating`, `REPLACE_dermatology`, `REPLACE_dervish`, `REPLACE_describe`, `REPLACE_described`, `REPLACE_deseeded`, `REPLACE_deselect`, `REPLACE_design`, `REPLACE_desirability`, `REPLACE_despite`, `REPLACE_destabilise`, `REPLACE_destabilised`, `REPLACE_destabilising`, `REPLACE_destabilize`, `REPLACE_destabilized`, `REPLACE_destabilizing`, `REPLACE_destinies`, `REPLACE_destiny`, `REPLACE_details`, `REPLACE_detox`, `REPLACE_develop`, `REPLACE_developed`, `REPLACE_developing`, `REPLACE_development`, `REPLACE_developments`, `REPLACE_devil`, `REPLACE_dexterity`, 
`REPLACE_diagnosis`, `REPLACE_diagramed`, `REPLACE_diagrams`, `REPLACE_dial`, `REPLACE_dialed`, `REPLACE_dialing`, `REPLACE_dialling`, `REPLACE_dialysis`, `REPLACE_diaries`, `REPLACE_diary`, `REPLACE_dice`, `REPLACE_dictionaries`, `REPLACE_dictionary`, `REPLACE_dictum`, `REPLACE_did`, `REPLACE_die`, `REPLACE_died`, `REPLACE_dies`, `REPLACE_dietary`, `REPLACE_difference`, `REPLACE_differences`, `REPLACE_different`, `REPLACE_difficult`, `REPLACE_difficulties`, `REPLACE_difficulty`, `REPLACE_digitise`, `REPLACE_digitised`, `REPLACE_dignitaries`, `REPLACE_dignitary`, `REPLACE_dignity`, `REPLACE_dilemma`, `REPLACE_dilemmas`, `REPLACE_dinghies`, `REPLACE_dinghy`, `REPLACE_dinner`, `REPLACE_director-general`, `REPLACE_directory`, `REPLACE_disabilities`, `REPLACE_disability`, `REPLACE_disabled`, `REPLACE_disaggregated`, `REPLACE_disappointed`, `REPLACE_disappointing`, `REPLACE_discolored`, `REPLACE_discourtesy`, `REPLACE_discovered`, `REPLACE_discoveries`, `REPLACE_discovery`, `REPLACE_discrepancies`, `REPLACE_discrepancy`, `REPLACE_disemboweling`, `REPLACE_dish`, `REPLACE_disharmony`, `REPLACE_dishes`, `REPLACE_disheveled`, `REPLACE_dishonesty`, `REPLACE_disloyalty`, `REPLACE_disorientate`, `REPLACE_disoriented`, `REPLACE_disorienting`, `REPLACE_disparities`, `REPLACE_disparity`, `REPLACE_dispensaries`, `REPLACE_disproven`, `REPLACE_disputes`, `REPLACE_distillery`, `REPLACE_ditto`, `REPLACE_diva`, `REPLACE_divas`, `REPLACE_dive`, `REPLACE_diversity`, `REPLACE_diverticula`, `REPLACE_do`, `REPLACE_doctor`, `REPLACE_documentaries`, `REPLACE_documentary`, `REPLACE_does`, `REPLACE_doggie`, `REPLACE_doggy`, `REPLACE_dogma`, `REPLACE_dogs`, `REPLACE_doing`, `REPLACE_dollar`, `REPLACE_dollars`, `REPLACE_domesticate`, `REPLACE_domesticated`, `REPLACE_domesticity`, `REPLACE_domiciles`, `REPLACE_domino`, `REPLACE_dominoes`, `REPLACE_don`, `REPLACE_done`, `REPLACE_dormitories`, `REPLACE_dormitory`, `REPLACE_dormouse`, `REPLACE_double`, `REPLACE_down`, `REPLACE_download`, `REPLACE_downloaded`, `REPLACE_downloading`, `REPLACE_downloads`, `REPLACE_downsize`, `REPLACE_downsized`, `REPLACE_downsizing`, `REPLACE_draft`, `REPLACE_drafted`, `REPLACE_drafting`, `REPLACE_drafts`, `REPLACE_draftsman`, `REPLACE_drag`, `REPLACE_dragged`, `REPLACE_dragging`, `REPLACE_drags`, `REPLACE_drama`, `REPLACE_dramas`, `REPLACE_drank`, `REPLACE_drapery`, `REPLACE_draughtsman`, `REPLACE_draw`, `REPLACE_dream`, `REPLACE_dreamed`, `REPLACE_dreaming`, `REPLACE_dreams`, `REPLACE_dreamt`, `REPLACE_drew`, `REPLACE_drink`, `REPLACE_drinking`, `REPLACE_drinks`, `REPLACE_drive`, `REPLACE_driven`, `REPLACE_drives`, `REPLACE_driving`, `REPLACE_drop`, `REPLACE_dropped`, `REPLACE_drove`, `REPLACE_drowned`, `REPLACE_drunk`, `REPLACE_duality`, `REPLACE_due`, `REPLACE_duel`, `REPLACE_dueled`, `REPLACE_dueling`, `REPLACE_duelling`, `REPLACE_duplex`, `REPLACE_duplicity`, `REPLACE_durability`, `REPLACE_during`, `REPLACE_duties`, `REPLACE_duty`, `REPLACE_dwarf`, `REPLACE_dwarfed`, `REPLACE_dwell`, `REPLACE_dying`, `REPLACE_dynasties`, `REPLACE_dynasty`, `REPLACE_dysentery`, `REPLACE_dystrophy`, `REPLACE_e`, `REPLACE_e-mail`, `REPLACE_e-mailed`, `REPLACE_e-mailing`, `REPLACE_e-mails`, `REPLACE_each`, `REPLACE_earlier`, `REPLACE_early`, `REPLACE_earthquake`, `REPLACE_earthquakes`, `REPLACE_easier`, `REPLACE_easily`, `REPLACE_easy`, `REPLACE_eat`, `REPLACE_eaten`, `REPLACE_eatery`, `REPLACE_eating`, `REPLACE_eccentricities`, `REPLACE_echoed`, `REPLACE_echoes`, `REPLACE_echoing`, `REPLACE_ecology`, `REPLACE_economic`, `REPLACE_economies`, 
`REPLACE_economise`, `REPLACE_economy`, `REPLACE_ecstasy`, `REPLACE_eczema`, `REPLACE_edge`, `REPLACE_effect`, `REPLACE_effectiveness`, `REPLACE_effects`, `REPLACE_efficacies`, `REPLACE_efficacy`, `REPLACE_efficiencies`, `REPLACE_efficiency`, `REPLACE_effigies`, `REPLACE_effigy`, `REPLACE_effort`, `REPLACE_efforts`, `REPLACE_eighties`, `REPLACE_either`, `REPLACE_elderberry`, `REPLACE_elderly`, `REPLACE_electricity`, `REPLACE_elegy`, `REPLACE_elf`, `REPLACE_eligibility`, `REPLACE_else`, `REPLACE_email`, `REPLACE_emailed`, `REPLACE_emailing`, `REPLACE_emails`, `REPLACE_embarrassed`, `REPLACE_embarrassing`, `REPLACE_embassies`, `REPLACE_embassy`, `REPLACE_embed`, `REPLACE_embedded`, `REPLACE_embroideries`, `REPLACE_embroidery`, `REPLACE_embryo`, `REPLACE_embryology`, `REPLACE_embryos`, `REPLACE_emergencies`, `REPLACE_emergency`, `REPLACE_emotions`, `REPLACE_emphasis`, `REPLACE_emphasise`, `REPLACE_emphasised`, `REPLACE_emphasising`, `REPLACE_emphysema`, `REPLACE_employability`, `REPLACE_employees`, `REPLACE_empress`, `REPLACE_enamel`, `REPLACE_encapsulated`, `REPLACE_encapsulates`, `REPLACE_encapsulating`, `REPLACE_encased`, `REPLACE_enclosed`, `REPLACE_encrusted`, `REPLACE_encumbered`, `REPLACE_end`, `REPLACE_endeavor`, `REPLACE_endeavors`, `REPLACE_ended`, `REPLACE_endocrinology`, `REPLACE_endorse`, `REPLACE_endorsed`, `REPLACE_endorses`, `REPLACE_endorsing`, `REPLACE_ends`, `REPLACE_enemies`, `REPLACE_enemy`, `REPLACE_energies`, `REPLACE_energised`, `REPLACE_energising`, `REPLACE_energy`, `REPLACE_english`, `REPLACE_engulf`, `REPLACE_engulfed`, `REPLACE_engulfing`, `REPLACE_engulfs`, `REPLACE_enjoy`, `REPLACE_enjoyable`, `REPLACE_enjoyed`, `REPLACE_enjoying`, `REPLACE_enmity`, `REPLACE_enough`, `REPLACE_enquiries`, `REPLACE_enquiry`, `REPLACE_enrolled`, `REPLACE_enrolls`, `REPLACE_enshrined`, `REPLACE_enshrines`, `REPLACE_enshrining`, `REPLACE_ensnared`, `REPLACE_ensure`, `REPLACE_ensured`, `REPLACE_ensures`, `REPLACE_ensuring`, `REPLACE_enter`, `REPLACE_entered`, `REPLACE_enterovirus`, `REPLACE_enters`, `REPLACE_enthralled`, `REPLACE_enthralling`, `REPLACE_entirety`, `REPLACE_entities`, `REPLACE_entity`, `REPLACE_entomology`, `REPLACE_entreaties`, `REPLACE_entrench`, `REPLACE_entrenched`, `REPLACE_entries`, `REPLACE_entrust`, `REPLACE_entrusted`, `REPLACE_entry`, `REPLACE_environment`, `REPLACE_epidemiology`, `REPLACE_epilepsy`, `REPLACE_epiphany`, `REPLACE_epitomised`, `REPLACE_epoxy`, `REPLACE_equal`, `REPLACE_equaled`, `REPLACE_equalised`, `REPLACE_equalising`, `REPLACE_equalities`, `REPLACE_equality`, `REPLACE_equalled`, `REPLACE_equals`, `REPLACE_equilibrium`, `REPLACE_equinox`, `REPLACE_equipment`, `REPLACE_equities`, `REPLACE_equity`, `REPLACE_errors`, `REPLACE_escape`, `REPLACE_escaped`, `REPLACE_escapes`, `REPLACE_escaping`, `REPLACE_esophagus`, `REPLACE_especially`, `REPLACE_estuary`, `REPLACE_etc`, `REPLACE_eternity`, `REPLACE_ethnicities`, `REPLACE_ethnicity`, `REPLACE_etymology`, `REPLACE_eucalyptus`, `REPLACE_eulogies`, `REPLACE_eulogy`, `REPLACE_euro`, `REPLACE_euthanasia`, `REPLACE_euthanized`, `REPLACE_even`, `REPLACE_evening`, `REPLACE_event`, `REPLACE_events`, `REPLACE_eventuality`, `REPLACE_ever`, `REPLACE_every`, `REPLACE_everybody`, `REPLACE_everyday`, `REPLACE_everyone`, `REPLACE_everything`, `REPLACE_everywhere`, `REPLACE_ex`, `REPLACE_ex-wife`, `REPLACE_exam`, `REPLACE_example`, `REPLACE_exams`, `REPLACE_except`, `REPLACE_excess`, `REPLACE_excesses`, `REPLACE_exchange`, `REPLACE_exchanges`, `REPLACE_excitability`, `REPLACE_excited`, `REPLACE_exciting`, 
`REPLACE_exercise`, `REPLACE_exercises`, `REPLACE_exes`, `REPLACE_exodus`, `REPLACE_exorcised`, `REPLACE_expect`, `REPLACE_expectancies`, `REPLACE_expectancy`, `REPLACE_expected`, `REPLACE_expects`, `REPLACE_expediencies`, `REPLACE_expediency`, `REPLACE_expensive`, `REPLACE_experience`, `REPLACE_experienced`, `REPLACE_experiences`, `REPLACE_experiencing`, `REPLACE_explain`, `REPLACE_express`, `REPLACE_expressed`, `REPLACE_expressions`, `REPLACE_extolled`, `REPLACE_extra`, `REPLACE_extremely`, `REPLACE_extremities`, `REPLACE_extremity`, `REPLACE_eye`, `REPLACE_eyed`, `REPLACE_eyeglass`, `REPLACE_eyeing`, `REPLACE_eyelash`, `REPLACE_eyelashes`, `REPLACE_eyes`, `REPLACE_eyewitness`, `REPLACE_eyewitnesses`, `REPLACE_face`, `REPLACE_faced`, `REPLACE_faces`, `REPLACE_facet`, `REPLACE_facia`, `REPLACE_facilities`, `REPLACE_facility`, `REPLACE_facing`, `REPLACE_fact`, `REPLACE_factor`, `REPLACE_factories`, `REPLACE_factory`, `REPLACE_facts`, `REPLACE_faculty`, `REPLACE_fade`, `REPLACE_faded`, `REPLACE_faeces`, `REPLACE_failed`, `REPLACE_failings`, `REPLACE_fair`, `REPLACE_fairy`, `REPLACE_faithfully`, `REPLACE_fall`, `REPLACE_fallen`, `REPLACE_falling`, `REPLACE_falls`, `REPLACE_falsities`, `REPLACE_familiarise`, `REPLACE_familiarising`, `REPLACE_familiarity`, `REPLACE_families`, `REPLACE_family`, `REPLACE_famous`, `REPLACE_fan`, `REPLACE_fanny`, `REPLACE_fans`, `REPLACE_fantasies`, `REPLACE_fantasised`, `REPLACE_fantasy`, `REPLACE_far`, `REPLACE_fast`, `REPLACE_faster`, `REPLACE_fatalities`, `REPLACE_fatality`, `REPLACE_father`, `REPLACE_father-in-law`, `REPLACE_fatty`, `REPLACE_fauna`, `REPLACE_favor`, `REPLACE_favored`, `REPLACE_favoring`, `REPLACE_favorite`, `REPLACE_favors`, `REPLACE_favourite`, `REPLACE_fax`, `REPLACE_fealty`, `REPLACE_feasibility`, `REPLACE_feces`, `REPLACE_fed`, `REPLACE_fee`, `REPLACE_feed`, `REPLACE_feeding`, `REPLACE_feeds`, `REPLACE_feel`, `REPLACE_feeling`, `REPLACE_feelings`, `REPLACE_feels`, `REPLACE_feet`, `REPLACE_fell`, `REPLACE_felled`, `REPLACE_felonies`, `REPLACE_felony`, `REPLACE_felt`, `REPLACE_femininity`, `REPLACE_ferocity`, `REPLACE_fertilised`, `REPLACE_fertilises`, `REPLACE_fertility`, `REPLACE_festivities`, `REPLACE_fetishes`, `REPLACE_fetus`, `REPLACE_fetuses`, `REPLACE_few`, `REPLACE_fewer`, `REPLACE_fiasco`, `REPLACE_fiberglass`, `REPLACE_fibreglass`, `REPLACE_fibrosis`, `REPLACE_fidelity`, `REPLACE_fight`, `REPLACE_filings`, `REPLACE_fill`, `REPLACE_filled`, `REPLACE_fillies`, `REPLACE_filly`, `REPLACE_film`, `REPLACE_finalise`, `REPLACE_finalised`, `REPLACE_finality`, `REPLACE_finally`, `REPLACE_finch`, `REPLACE_find`, `REPLACE_finding`, `REPLACE_findings`, `REPLACE_fined`, `REPLACE_finish`, `REPLACE_finished`, `REPLACE_fire`, `REPLACE_firefly`, `REPLACE_fireman`, `REPLACE_firemen`, `REPLACE_fireworks`, `REPLACE_firmness`, `REPLACE_first`, `REPLACE_fish`, `REPLACE_fished`, `REPLACE_fisheries`, `REPLACE_fisherman`, `REPLACE_fishermen`, `REPLACE_fishery`, `REPLACE_fishing`, `REPLACE_fitness`, `REPLACE_five`, `REPLACE_flattered`, `REPLACE_flattery`, `REPLACE_flew`, `REPLACE_flex`, `REPLACE_flexed`, `REPLACE_flexibility`, `REPLACE_flexing`, `REPLACE_flied`, `REPLACE_flies`, `REPLACE_flippancy`, `REPLACE_floor`, `REPLACE_floppies`, `REPLACE_floppy`, `REPLACE_flora`, `REPLACE_floss`, `REPLACE_flowers`, `REPLACE_flown`, `REPLACE_fluency`, `REPLACE_fluidity`, `REPLACE_fly`, `REPLACE_flying`, `REPLACE_focus`, `REPLACE_focused`, `REPLACE_focuses`, `REPLACE_focusing`, `REPLACE_focussed`, `REPLACE_focussing`, `REPLACE_foetus`, `REPLACE_foetuses`, 
`REPLACE_follow`, `REPLACE_follow-up`, `REPLACE_followed`, `REPLACE_following`, `REPLACE_follows`, `REPLACE_folly`, `REPLACE_food`, `REPLACE_foods`, `REPLACE_foot`, `REPLACE_footing`, `REPLACE_for`, `REPLACE_forbade`, `REPLACE_forbid`, `REPLACE_forbidden`, `REPLACE_forbidding`, `REPLACE_force`, `REPLACE_force-feeding`, `REPLACE_forced`, `REPLACE_forego`, `REPLACE_foregoing`, `REPLACE_foregone`, `REPLACE_foreign`, `REPLACE_foreigners`, `REPLACE_forestry`, `REPLACE_forgery`, `REPLACE_forget`, `REPLACE_forgot`, `REPLACE_forgotten`, `REPLACE_form`, `REPLACE_formalised`, `REPLACE_formality`, `REPLACE_formed`, `REPLACE_forming`, `REPLACE_forms`, `REPLACE_formula`, `REPLACE_formulae`, `REPLACE_formulas`, `REPLACE_forum`, `REPLACE_forums`, `REPLACE_forward`, `REPLACE_fossilised`, `REPLACE_found`, `REPLACE_founded`, `REPLACE_founding`, `REPLACE_four`, `REPLACE_fracas`, `REPLACE_fragilities`, `REPLACE_fragility`, `REPLACE_frailty`, `REPLACE_fraternity`, `REPLACE_free`, `REPLACE_frequencies`, `REPLACE_frequency`, `REPLACE_frescoes`, `REPLACE_freshman`, `REPLACE_freshmen`, `REPLACE_freshness`, `REPLACE_friend`, `REPLACE_friends`, `REPLACE_from`, `REPLACE_front`, `REPLACE_fronts`, `REPLACE_frugality`, `REPLACE_fruit`, `REPLACE_fuel`, `REPLACE_fueled`, `REPLACE_fueling`, `REPLACE_fuelled`, `REPLACE_fuelling`, `REPLACE_fuels`, `REPLACE_fulfil`, `REPLACE_fulfilled`, `REPLACE_fulfilling`, `REPLACE_full`, `REPLACE_fun`, `REPLACE_functionality`, `REPLACE_funds`, `REPLACE_fundus`, `REPLACE_fungi`, `REPLACE_fungus`, `REPLACE_funnel`, `REPLACE_funneled`, `REPLACE_funneling`, `REPLACE_funnels`, `REPLACE_funny`, `REPLACE_furies`, `REPLACE_further`, `REPLACE_fury`, `REPLACE_fuse`, `REPLACE_fused`, `REPLACE_fuses`, `REPLACE_fusing`, `REPLACE_future`, `REPLACE_futures`, `REPLACE_gadfly`, `REPLACE_gain`, `REPLACE_gained`, `REPLACE_gaining`, `REPLACE_galaxies`, `REPLACE_galaxy`, `REPLACE_galleries`, `REPLACE_gallery`, `REPLACE_gallows`, `REPLACE_galvanise`, `REPLACE_galvanising`, `REPLACE_game`, `REPLACE_games`, `REPLACE_gas`, `REPLACE_gases`, `REPLACE_gassing`, `REPLACE_gastrectomy`, `REPLACE_gastroenteritis`, `REPLACE_gate`, `REPLACE_gave`, `REPLACE_gearbox`, `REPLACE_gearboxes`, `REPLACE_geese`, `REPLACE_gel`, `REPLACE_gender`, `REPLACE_genders`, `REPLACE_genealogy`, `REPLACE_generalised`, `REPLACE_generalities`, `REPLACE_generosity`, `REPLACE_genesis`, `REPLACE_genius`, `REPLACE_geniuses`, `REPLACE_genotypes`, `REPLACE_genotyping`, `REPLACE_gentleman`, `REPLACE_gentlemen`, `REPLACE_gentry`, `REPLACE_genus`, `REPLACE_geography`, `REPLACE_geology`, `REPLACE_geometries`, `REPLACE_geometry`, `REPLACE_ger`, `REPLACE_get`, `REPLACE_gets`, `REPLACE_getting`, `REPLACE_ghetto`, `REPLACE_ghettoes`, `REPLACE_gift`, `REPLACE_gilded`, `REPLACE_gipsies`, `REPLACE_gipsy`, `REPLACE_girl`, `REPLACE_girlfriend`, `REPLACE_girlfriends`, `REPLACE_girls`, `REPLACE_give`, `REPLACE_giveaway`, `REPLACE_giveaways`, `REPLACE_given`, `REPLACE_gives`, `REPLACE_giving`, `REPLACE_glad`, `REPLACE_glamorise`, `REPLACE_glioblastoma`, `REPLACE_globalised`, `REPLACE_globalized`, `REPLACE_glossary`, `REPLACE_glossy`, `REPLACE_glue`, `REPLACE_glued`, `REPLACE_glues`, `REPLACE_gnaw`, `REPLACE_go`, `REPLACE_goal`, `REPLACE_goals`, `REPLACE_godchildren`, `REPLACE_goddess`, `REPLACE_goes`, `REPLACE_going`, `REPLACE_goldfinch`, `REPLACE_goldfish`, `REPLACE_golf`, `REPLACE_golfed`, `REPLACE_golfing`, `REPLACE_gone`, `REPLACE_good`, `REPLACE_goodie`, `REPLACE_goodies`, `REPLACE_goose`, `REPLACE_gossip`, `REPLACE_got`, `REPLACE_gotten`, `REPLACE_government`, 
`REPLACE_grade`, `REPLACE_grades`, `REPLACE_graduate`, `REPLACE_graduated`, `REPLACE_graffiti`, `REPLACE_grammar`, `REPLACE_granary`, `REPLACE_grandchild`, `REPLACE_grandchildren`, `REPLACE_grandfather`, `REPLACE_grandfathered`, `REPLACE_grandmother`, `REPLACE_granny`, `REPLACE_grateful`, `REPLACE_gratuity`, `REPLACE_gravel`, `REPLACE_gravity`, `REPLACE_gravy`, `REPLACE_gray`, `REPLACE_graying`, `REPLACE_grays`, `REPLACE_great`, `REPLACE_greet`, `REPLACE_grew`, `REPLACE_grip`, `REPLACE_gripped`, `REPLACE_gripping`, `REPLACE_grips`, `REPLACE_grizzly`, `REPLACE_groceries`, `REPLACE_grocery`, `REPLACE_groundsman`, `REPLACE_group`, `REPLACE_groups`, `REPLACE_grovel`, `REPLACE_groveling`, `REPLACE_grow`, `REPLACE_growing`, `REPLACE_grown`, `REPLACE_guardsman`, `REPLACE_guardsmen`, `REPLACE_guess`, `REPLACE_guidelines`, `REPLACE_guitar`, `REPLACE_guitars`, `REPLACE_gunman`, `REPLACE_gunmen`, `REPLACE_guns`, `REPLACE_guys`, `REPLACE_gymnasium`, `REPLACE_gypsy`, `REPLACE_habits`, `REPLACE_had`, `REPLACE_haemodialysis`, `REPLACE_haemorrhage`, `REPLACE_hair`, `REPLACE_hairbrush`, `REPLACE_hajj`, `REPLACE_half`, `REPLACE_halo`, `REPLACE_halt`, `REPLACE_halted`, `REPLACE_halting`, `REPLACE_halve`, `REPLACE_halved`, `REPLACE_halves`, `REPLACE_halving`, `REPLACE_hammered`, `REPLACE_hand`, `REPLACE_hand-picked`, `REPLACE_handcrafted`, `REPLACE_handful`, `REPLACE_hang`, `REPLACE_hanged`, `REPLACE_hanging`, `REPLACE_hangs`, `REPLACE_happen`, `REPLACE_happened`, `REPLACE_happening`, `REPLACE_happens`, `REPLACE_happiness`, `REPLACE_happy`, `REPLACE_harbor`, `REPLACE_harbored`, `REPLACE_harboring`, `REPLACE_harbors`, `REPLACE_hard`, `REPLACE_harder`, `REPLACE_harkened`, `REPLACE_harkening`, `REPLACE_harmonies`, `REPLACE_harmonise`, `REPLACE_harmony`, `REPLACE_has`, `REPLACE_hat`, `REPLACE_haunches`, `REPLACE_have`, `REPLACE_having`, `REPLACE_he`, `REPLACE_head`, `REPLACE_headdresses`, `REPLACE_headed`, `REPLACE_headhunted`, `REPLACE_headmistress`, `REPLACE_headquartered`, `REPLACE_headquarters`, `REPLACE_health`, `REPLACE_healthy`, `REPLACE_hear`, `REPLACE_heard`, `REPLACE_hearing`, `REPLACE_hearings`, `REPLACE_hears`, `REPLACE_heart`, `REPLACE_hearts`, `REPLACE_heat`, `REPLACE_heated`, `REPLACE_heating`, `REPLACE_heats`, `REPLACE_heaved`, `REPLACE_heaves`, `REPLACE_heavies`, `REPLACE_heavily`, `REPLACE_heavy`, `REPLACE_hegemony`, `REPLACE_heiress`, `REPLACE_held`, `REPLACE_helmsman`, `REPLACE_help`, `REPLACE_helped`, `REPLACE_helping`, `REPLACE_helps`, `REPLACE_henchmen`, `REPLACE_henry`, `REPLACE_her`, `REPLACE_here`, `REPLACE_heresy`, `REPLACE_hernia`, `REPLACE_hero`, `REPLACE_heroes`, `REPLACE_hesitancy`, `REPLACE_hiatus`, `REPLACE_hierarchies`, `REPLACE_hierarchy`, `REPLACE_high`, `REPLACE_high-frequency`, `REPLACE_higher`, `REPLACE_hillbilly`, `REPLACE_him`, `REPLACE_himself`, `REPLACE_hip`, `REPLACE_hippie`, `REPLACE_hippo`, `REPLACE_hippocampus`, `REPLACE_hippopotamus`, `REPLACE_hippos`, `REPLACE_his`, `REPLACE_histories`, `REPLACE_history`, `REPLACE_hit`, `REPLACE_hits`, `REPLACE_hitting`, `REPLACE_hobbies`, `REPLACE_hobby`, `REPLACE_hold`, `REPLACE_holding`, `REPLACE_holiday`, `REPLACE_holidays`, `REPLACE_holly`, `REPLACE_holography`, `REPLACE_hols`, `REPLACE_home`, `REPLACE_homes`, `REPLACE_hometown`, `REPLACE_homework`, `REPLACE_homosexuality`, `REPLACE_honk`, `REPLACE_honked`, `REPLACE_honking`, `REPLACE_honks`, `REPLACE_hoof`, `REPLACE_hooves`, `REPLACE_hope`, `REPLACE_hopes`, `REPLACE_horse-riding`, `REPLACE_horseracing`, `REPLACE_horseshoe`, `REPLACE_hospitalised`, `REPLACE_hostess`, 
`REPLACE_hostesses`, `REPLACE_hostilities`, `REPLACE_hostility`, `REPLACE_hot`, `REPLACE_hotdog`, `REPLACE_hotdogs`, `REPLACE_hour`, `REPLACE_hourglass`, `REPLACE_hours`, `REPLACE_house`, `REPLACE_houses`, `REPLACE_housewife`, `REPLACE_housewives`, `REPLACE_housing`, `REPLACE_how`, `REPLACE_however`, `REPLACE_human`, `REPLACE_humanities`, `REPLACE_humanity`, `REPLACE_humans`, `REPLACE_humidity`, `REPLACE_hundreds`, `REPLACE_hung`, `REPLACE_hunker`, `REPLACE_hunkered`, `REPLACE_hunkering`, `REPLACE_hurly-burly`, `REPLACE_hurt`, `REPLACE_husband`, `REPLACE_husbandry`, `REPLACE_huskies`, `REPLACE_husky`, `REPLACE_hussy`, `REPLACE_hydrotherapy`, `REPLACE_hype`, `REPLACE_hyped`, `REPLACE_hyperactivity`, `REPLACE_hypersensitivity`, `REPLACE_hyphenate`, `REPLACE_hyphenated`, `REPLACE_hyping`, `REPLACE_hypnosis`, `REPLACE_hypochondria`, `REPLACE_hypocrisies`, `REPLACE_hypocrisy`, `REPLACE_hypotheses`, `REPLACE_hypothesis`, `REPLACE_iPhone`, `REPLACE_ice`, `REPLACE_ice-skating`, `REPLACE_icebreakers`, `REPLACE_idea`, `REPLACE_ideas`, `REPLACE_identities`, `REPLACE_identity`, `REPLACE_ideologies`, `REPLACE_ideology`, `REPLACE_idiocy`, `REPLACE_if`, `REPLACE_ignominy`, `REPLACE_illegalities`, `REPLACE_illegality`, `REPLACE_illness`, `REPLACE_illnesses`, `REPLACE_imagery`, `REPLACE_imagine`, `REPLACE_imagines`, `REPLACE_immaturity`, `REPLACE_immediacy`, `REPLACE_immediately`, `REPLACE_immobilised`, `REPLACE_immobility`, `REPLACE_immortalised`, `REPLACE_immortalising`, `REPLACE_immunity`, `REPLACE_immunodeficiency`, `REPLACE_immunohistochemistry`, `REPLACE_immunology`, `REPLACE_immunotherapy`, `REPLACE_impact`, `REPLACE_imperiled`, `REPLACE_impetus`, `REPLACE_important`, `REPLACE_impossibility`, `REPLACE_improprieties`, `REPLACE_impropriety`, `REPLACE_improve`, `REPLACE_improved`, `REPLACE_improving`, `REPLACE_in`, `REPLACE_inability`, `REPLACE_inactivity`, `REPLACE_inadequacies`, `REPLACE_inadequacy`, `REPLACE_incapacity`, `REPLACE_incendiary`, `REPLACE_inch`, `REPLACE_inched`, `REPLACE_inches`, `REPLACE_inching`, `REPLACE_include`, `REPLACE_included`, `REPLACE_includes`, `REPLACE_including`, `REPLACE_incompatibilities`, `REPLACE_incongruity`, `REPLACE_inconsistencies`, `REPLACE_inconsistency`, `REPLACE_increase`, `REPLACE_increased`, `REPLACE_increases`, `REPLACE_increasing`, `REPLACE_increments`, `REPLACE_indecency`, `REPLACE_indemnity`, `REPLACE_index`, `REPLACE_indexes`, `REPLACE_indices`, `REPLACE_indignity`, `REPLACE_industrialised`, `REPLACE_industries`, `REPLACE_industry`, `REPLACE_inefficiencies`, `REPLACE_inefficiency`, `REPLACE_inequalities`, `REPLACE_inequality`, `REPLACE_inequities`, `REPLACE_inequity`, `REPLACE_inevitability`, `REPLACE_infancy`, `REPLACE_infantry`, `REPLACE_infantrymen`, `REPLACE_infertility`, `REPLACE_infidelities`, `REPLACE_infidelity`, `REPLACE_infinity`, `REPLACE_infirmary`, `REPLACE_influx`, `REPLACE_influxes`, `REPLACE_informality`, `REPLACE_information`, `REPLACE_ingredient`, `REPLACE_ingredients`, `REPLACE_iniquities`, `REPLACE_initial`, `REPLACE_injuries`, `REPLACE_injury`, `REPLACE_inlaid`, `REPLACE_innuendo`, `REPLACE_input`, `REPLACE_inputs`, `REPLACE_inquiries`, `REPLACE_inquiry`, `REPLACE_insanity`, `REPLACE_insecurities`, `REPLACE_insecurity`, `REPLACE_insensitivity`, `REPLACE_inside`, `REPLACE_insincerity`, `REPLACE_insisting`, `REPLACE_insolvency`, `REPLACE_instability`, `REPLACE_install`, `REPLACE_installed`, `REPLACE_installing`, `REPLACE_installs`, `REPLACE_instead`, `REPLACE_instil`, `REPLACE_institutionalised`, `REPLACE_instrument`, 
`REPLACE_instruments`, `REPLACE_insufficiency`, `REPLACE_insurgency`, `REPLACE_integrity`, `REPLACE_intensity`, `REPLACE_interactivity`, `REPLACE_interconnected`, `REPLACE_interest`, `REPLACE_interested`, `REPLACE_interesting`, `REPLACE_interests`, `REPLACE_interface`, `REPLACE_interfaces`, `REPLACE_interfacing`, `REPLACE_intermediary`, `REPLACE_international`, `REPLACE_internet`, `REPLACE_interweaves`, `REPLACE_interwoven`, `REPLACE_intimacy`, `REPLACE_into`, `REPLACE_intricacies`, `REPLACE_introduce`, `REPLACE_introduced`, `REPLACE_invented`, `REPLACE_invited`, `REPLACE_iris`, `REPLACE_ironies`, `REPLACE_irony`, `REPLACE_irregularities`, `REPLACE_is`, `REPLACE_issue`, `REPLACE_issued`, `REPLACE_issues`, `REPLACE_it`, `REPLACE_item`, `REPLACE_itineraries`, `REPLACE_itinerary`, `REPLACE_its`, `REPLACE_ivory`, `REPLACE_jackass`, `REPLACE_jealousy`, `REPLACE_jellyfish`, `REPLACE_jeopardise`, `REPLACE_jeopardised`, `REPLACE_jet`, `REPLACE_jetty`, `REPLACE_job`, `REPLACE_jobs`, `REPLACE_join`, `REPLACE_joined`, `REPLACE_joining`, `REPLACE_jostle`, `REPLACE_jostled`, `REPLACE_jostling`, `REPLACE_journal`, `REPLACE_journals`, `REPLACE_journeyman`, `REPLACE_judge`, `REPLACE_judiciary`, `REPLACE_juice`, `REPLACE_juices`, `REPLACE_juicing`, `REPLACE_jukebox`, `REPLACE_junkie`, `REPLACE_juries`, `REPLACE_jury`, `REPLACE_just`, `REPLACE_keep`, `REPLACE_keeping`, `REPLACE_keeps`, `REPLACE_kennel`, `REPLACE_kennels`, `REPLACE_kept`, `REPLACE_kernel`, `REPLACE_kernels`, `REPLACE_kibbutz`, `REPLACE_kibbutzes`, `REPLACE_kick-off`, `REPLACE_kickoff`, `REPLACE_kidnap`, `REPLACE_kidnapped`, `REPLACE_kidnapping`, `REPLACE_kidnappings`, `REPLACE_kidney`, `REPLACE_killed`, `REPLACE_killings`, `REPLACE_kilograms`, `REPLACE_kilometer`, `REPLACE_kilometers`, `REPLACE_kilometre`, `REPLACE_kilometres`, `REPLACE_kind`, `REPLACE_kindness`, `REPLACE_kinds`, `REPLACE_kitties`, `REPLACE_kitty`, `REPLACE_knew`, `REPLACE_knife`, `REPLACE_knifing`, `REPLACE_knives`, `REPLACE_knocked`, `REPLACE_know`, `REPLACE_knowledge`, `REPLACE_known`, `REPLACE_knows`, `REPLACE_krona`, `REPLACE_krone`, `REPLACE_kroner`, `REPLACE_kronor`, `REPLACE_label`, `REPLACE_labeled`, `REPLACE_labeling`, `REPLACE_labelled`, `REPLACE_labels`, `REPLACE_labor`, `REPLACE_laboratories`, `REPLACE_laboratory`, `REPLACE_labored`, `REPLACE_lack`, `REPLACE_lactobacilli`, `REPLACE_ladies`, `REPLACE_lady`, `REPLACE_laid`, `REPLACE_lain`, `REPLACE_landlady`, `REPLACE_landmass`, `REPLACE_landmasses`, `REPLACE_language`, `REPLACE_languages`, `REPLACE_larceny`, `REPLACE_large`, `REPLACE_larvae`, `REPLACE_last`, `REPLACE_lasting`, `REPLACE_late`, `REPLACE_lately`, `REPLACE_later`, `REPLACE_latex`, `REPLACE_laughed`, `REPLACE_laundry`, `REPLACE_lavatories`, `REPLACE_lavatory`, `REPLACE_law`, `REPLACE_laws`, `REPLACE_lay`, `REPLACE_layered`, `REPLACE_laying`, `REPLACE_layman`, `REPLACE_laymen`, `REPLACE_lead`, `REPLACE_leading`, `REPLACE_leaf`, `REPLACE_leaflet`, `REPLACE_leafleting`, `REPLACE_leaflets`, `REPLACE_lean`, `REPLACE_leaned`, `REPLACE_leaning`, `REPLACE_leap`, `REPLACE_leap-frogged`, `REPLACE_leaped`, `REPLACE_leaping`, `REPLACE_leaps`, `REPLACE_leapt`, `REPLACE_learn`, `REPLACE_learned`, `REPLACE_learning`, `REPLACE_learnt`, `REPLACE_lease`, `REPLACE_least`, `REPLACE_leave`, `REPLACE_leaves`, `REPLACE_leaving`, `REPLACE_led`, `REPLACE_leech`, `REPLACE_leeching`, `REPLACE_left`, `REPLACE_lefties`, `REPLACE_lefty`, `REPLACE_legacies`, `REPLACE_legacy`, `REPLACE_legalised`, `REPLACE_legalising`, `REPLACE_legality`, `REPLACE_legitimise`, `REPLACE_legitimises`, 
`REPLACE_lei`, `REPLACE_lending`, `REPLACE_lens`, `REPLACE_lenses`, `REPLACE_leprosy`, `REPLACE_less`, `REPLACE_lesson`, `REPLACE_lessons`, `REPLACE_let`, `REPLACE_lethargy`, `REPLACE_lets`, `REPLACE_leu`, `REPLACE_level`, `REPLACE_leveled`, `REPLACE_levelled`, `REPLACE_levelling`, `REPLACE_levels`, `REPLACE_leverage`, `REPLACE_leveraged`, `REPLACE_leverages`, `REPLACE_leveraging`, `REPLACE_lexicon`, `REPLACE_liabilities`, `REPLACE_liability`, `REPLACE_libel`, `REPLACE_libeling`, `REPLACE_liberalised`, `REPLACE_liberals`, `REPLACE_liberties`, `REPLACE_liberty`, `REPLACE_libido`, `REPLACE_libraries`, `REPLACE_library`, `REPLACE_lice`, `REPLACE_license`, `REPLACE_lie`, `REPLACE_lied`, `REPLACE_lies`, `REPLACE_life`, `REPLACE_life-expectancy`, `REPLACE_lifestyle`, `REPLACE_lift`, `REPLACE_light`, `REPLACE_lighted`, `REPLACE_lighting`, `REPLACE_lightness`, `REPLACE_lights`, `REPLACE_like`, `REPLACE_liked`, `REPLACE_likeness`, `REPLACE_likenesses`, `REPLACE_likes`, `REPLACE_liking`, `REPLACE_lily`, `REPLACE_line`, `REPLACE_linearity`, `REPLACE_lineman`, `REPLACE_linemen`, `REPLACE_lines`, `REPLACE_linesman`, `REPLACE_liquor`, `REPLACE_liquors`, `REPLACE_lira`, `REPLACE_list`, `REPLACE_listed`, `REPLACE_listen`, `REPLACE_listened`, `REPLACE_listening`, `REPLACE_lit`, `REPLACE_litany`, `REPLACE_literacy`, `REPLACE_literate`, `REPLACE_little`, `REPLACE_liturgies`, `REPLACE_liturgy`, `REPLACE_live`, `REPLACE_lived`, `REPLACE_lives`, `REPLACE_living`, `REPLACE_loaded`, `REPLACE_loaf`, `REPLACE_loan`, `REPLACE_loans`, `REPLACE_loaves`, `REPLACE_localities`, `REPLACE_locality`, `REPLACE_located`, `REPLACE_loci`, `REPLACE_lonely`, `REPLACE_long`, `REPLACE_longer`, `REPLACE_longevity`, `REPLACE_look`, `REPLACE_looked`, `REPLACE_looking`, `REPLACE_looks`, `REPLACE_loony`, `REPLACE_lorries`, `REPLACE_lorry`, `REPLACE_lose`, `REPLACE_losing`, `REPLACE_loss`, `REPLACE_losses`, `REPLACE_lost`, `REPLACE_lot`, `REPLACE_lots`, `REPLACE_lottery`, `REPLACE_lotus`, `REPLACE_loud`, `REPLACE_love`, `REPLACE_loved`, `REPLACE_lovely`, `REPLACE_loves`, `REPLACE_low`, `REPLACE_low-down`, `REPLACE_lower`, `REPLACE_lowered`, `REPLACE_loyalties`, `REPLACE_loyalty`, `REPLACE_luck`, `REPLACE_lucky`, `REPLACE_luminaries`, `REPLACE_lunacies`, `REPLACE_lunacy`, `REPLACE_lunch`, `REPLACE_lure`, `REPLACE_lured`, `REPLACE_lures`, `REPLACE_luring`, `REPLACE_luxuries`, `REPLACE_luxury`, `REPLACE_lying`, `REPLACE_lymphadenopathy`, `REPLACE_lymphoma`, `REPLACE_lynx`, `REPLACE_lyrics`, `REPLACE_ma`, `REPLACE_machinery`, `REPLACE_made`, `REPLACE_madman`, `REPLACE_madmen`, `REPLACE_madness`, `REPLACE_magazine`, `REPLACE_magazines`, `REPLACE_magistracy`, `REPLACE_magnetised`, `REPLACE_mailbox`, `REPLACE_mailboxes`, `REPLACE_mailman`, `REPLACE_main`, `REPLACE_mainstream`, `REPLACE_major`, `REPLACE_majorities`, `REPLACE_majority`, `REPLACE_make`, `REPLACE_makes`, `REPLACE_making`, `REPLACE_mama`, `REPLACE_mammography`, `REPLACE_man`, `REPLACE_mango`, `REPLACE_mangoes`, `REPLACE_manned`, `REPLACE_mans`, `REPLACE_many`, `REPLACE_march`, `REPLACE_mares`, `REPLACE_marginalise`, `REPLACE_marginalised`, `REPLACE_market`, `REPLACE_married`, `REPLACE_marsh`, `REPLACE_marshal`, `REPLACE_marshaled`, `REPLACE_marshaling`, `REPLACE_marshalled`, `REPLACE_marshals`, `REPLACE_marshes`, `REPLACE_marvel`, `REPLACE_marveled`, `REPLACE_marvels`, `REPLACE_mas`, `REPLACE_masculinity`, `REPLACE_mass-produced`, `REPLACE_mastectomies`, `REPLACE_mastectomy`, `REPLACE_matches`, `REPLACE_materialise`, `REPLACE_materialised`, `REPLACE_maternity`, `REPLACE_matrix`, 
`REPLACE_matter`, `REPLACE_mattress`, `REPLACE_mattresses`, `REPLACE_maturities`, `REPLACE_maturity`, `REPLACE_mausoleum`, `REPLACE_max`, `REPLACE_maximise`, `REPLACE_maximum`, `REPLACE_may`, `REPLACE_maybe`, `REPLACE_mayoralties`, `REPLACE_mayoralty`, `REPLACE_me`, `REPLACE_meals`, `REPLACE_mean`, `REPLACE_meaning`, `REPLACE_means`, `REPLACE_meant`, `REPLACE_meat`, `REPLACE_mechanised`, `REPLACE_media`, `REPLACE_medicine`, `REPLACE_mediocrity`, `REPLACE_medium`, `REPLACE_mediums`, `REPLACE_meet`, `REPLACE_meeting`, `REPLACE_meetings`, `REPLACE_meets`, `REPLACE_meiosis`, `REPLACE_melanoma`, `REPLACE_melodies`, `REPLACE_melody`, `REPLACE_member`, `REPLACE_members`, `REPLACE_memorabilia`, `REPLACE_memorandum`, `REPLACE_memorandums`, `REPLACE_memories`, `REPLACE_memory`, `REPLACE_men`, `REPLACE_meningioma`, `REPLACE_meniscus`, `REPLACE_mentality`, `REPLACE_mentioned`, `REPLACE_mentions`, `REPLACE_mentor`, `REPLACE_mentored`, `REPLACE_mentoring`, `REPLACE_mentors`, `REPLACE_mercenaries`, `REPLACE_mercenary`, `REPLACE_merchandise`, `REPLACE_merchandising`, `REPLACE_mercury`, `REPLACE_mercy`, `REPLACE_mesh`, `REPLACE_mesmerising`, `REPLACE_mesothelioma`, `REPLACE_message`, `REPLACE_messages`, `REPLACE_messaging`, `REPLACE_mestizo`, `REPLACE_met`, `REPLACE_metadata`, `REPLACE_metamorphosis`, `REPLACE_meteorology`, `REPLACE_meter`, `REPLACE_meters`, `REPLACE_method`, `REPLACE_methodologies`, `REPLACE_methodology`, `REPLACE_methods`, `REPLACE_metre`, `REPLACE_metres`, `REPLACE_metrology`, `REPLACE_metropolis`, `REPLACE_mice`, `REPLACE_microbreweries`, `REPLACE_microscopy`, `REPLACE_microwave`, `REPLACE_middle`, `REPLACE_middleman`, `REPLACE_middlemen`, `REPLACE_midwife`, `REPLACE_midwives`, `REPLACE_might`, `REPLACE_milieu`, `REPLACE_milieux`, `REPLACE_militaries`, `REPLACE_military`, `REPLACE_militiaman`, `REPLACE_militiamen`, `REPLACE_millennia`, `REPLACE_millennium`, `REPLACE_millenniums`, `REPLACE_millimeter`, `REPLACE_millimetres`, `REPLACE_million`, `REPLACE_mimic`, `REPLACE_mimics`, `REPLACE_mind`, `REPLACE_minds`, `REPLACE_mine`, `REPLACE_mines`, `REPLACE_mini-series`, `REPLACE_miniaturised`, `REPLACE_minibus`, `REPLACE_minibuses`, `REPLACE_minimise`, `REPLACE_minimises`, `REPLACE_minimum`, `REPLACE_minimums`, `REPLACE_miniseries`, `REPLACE_ministries`, `REPLACE_ministry`, `REPLACE_minor`, `REPLACE_minorities`, `REPLACE_minority`, `REPLACE_minus`, `REPLACE_minuses`, `REPLACE_minute`, `REPLACE_minutes`, `REPLACE_minutiae`, `REPLACE_minx`, `REPLACE_misclassifying`, `REPLACE_misdiagnosed`, `REPLACE_miseries`, `REPLACE_misery`, `REPLACE_mishmash`, `REPLACE_mislabeled`, `REPLACE_miss`, `REPLACE_missed`, `REPLACE_missing`, `REPLACE_missionaries`, `REPLACE_missionary`, `REPLACE_misspelled`, `REPLACE_mistake`, `REPLACE_mistakes`, `REPLACE_mistress`, `REPLACE_mistresses`, `REPLACE_mix-up`, `REPLACE_mix-ups`, `REPLACE_mobilise`, `REPLACE_mobilised`, `REPLACE_mobilising`, `REPLACE_mobility`, `REPLACE_mockeries`, `REPLACE_mockery`, `REPLACE_model`, `REPLACE_modeled`, `REPLACE_modeling`, `REPLACE_modelled`, `REPLACE_modelling`, `REPLACE_models`, `REPLACE_modernise`, `REPLACE_modernised`, `REPLACE_modernising`, `REPLACE_modes`, `REPLACE_mold`, `REPLACE_moment`, `REPLACE_momentum`, `REPLACE_monarchy`, `REPLACE_monastery`, `REPLACE_monetise`, `REPLACE_money`, `REPLACE_monopolies`, `REPLACE_monopolising`, `REPLACE_monopoly`, `REPLACE_monstrosities`, `REPLACE_monstrosity`, `REPLACE_month`, `REPLACE_monthlies`, `REPLACE_monthly`, `REPLACE_months`, `REPLACE_morale`, `REPLACE_moralising`, `REPLACE_morality`, 
`REPLACE_morals`, `REPLACE_morass`, `REPLACE_moratorium`, `REPLACE_more`, `REPLACE_morning`, `REPLACE_morphed`, `REPLACE_morphogenesis`, `REPLACE_mortalities`, `REPLACE_mortality`, `REPLACE_mortuary`, `REPLACE_mosquito`, `REPLACE_mosquitoes`, `REPLACE_mosquitos`, `REPLACE_moss`, `REPLACE_mosses`, `REPLACE_most`, `REPLACE_mother`, `REPLACE_mother-in-law`, `REPLACE_mother-to-be`, `REPLACE_mothering`, `REPLACE_mothers-to-be`, `REPLACE_motorcycle`, `REPLACE_motorcycles`, `REPLACE_motorcycling`, `REPLACE_motto`, `REPLACE_mountains`, `REPLACE_mouse`, `REPLACE_mouthwash`, `REPLACE_move`, `REPLACE_moved`, `REPLACE_movie`, `REPLACE_movies`, `REPLACE_moving`, `REPLACE_much`, `REPLACE_mulberry`, `REPLACE_multi-tasking`, `REPLACE_multiplex`, `REPLACE_multiplexes`, `REPLACE_multitasking`, `REPLACE_mummies`, `REPLACE_mummy`, `REPLACE_municipalities`, `REPLACE_municipality`, `REPLACE_music`, `REPLACE_must`, `REPLACE_my`, `REPLACE_mycelium`, `REPLACE_myself`, `REPLACE_mysteries`, `REPLACE_mystery`, `REPLACE_mythologies`, `REPLACE_mythology`, `REPLACE_myxomatosis`, `REPLACE_name`, `REPLACE_named`, `REPLACE_names`, `REPLACE_nannies`, `REPLACE_nanny`, `REPLACE_nanotechnology`, `REPLACE_nappies`, `REPLACE_nation`, `REPLACE_nationalised`, `REPLACE_nationalities`, `REPLACE_nationality`, `REPLACE_native`, `REPLACE_natives`, `REPLACE_natural`, `REPLACE_naturalised`, `REPLACE_naturalising`, `REPLACE_navies`, `REPLACE_navy`, `REPLACE_near`, `REPLACE_nearby`, `REPLACE_nebula`, `REPLACE_necessary`, `REPLACE_necessities`, `REPLACE_necessity`, `REPLACE_necropolis`, `REPLACE_necropsy`, `REPLACE_need`, `REPLACE_needed`, `REPLACE_needing`, `REPLACE_needs`, `REPLACE_negative`, `REPLACE_negatives`, `REPLACE_negativity`, `REPLACE_neighbor`, `REPLACE_neighborhood`, `REPLACE_neighborhoods`, `REPLACE_neighboring`, `REPLACE_neighbors`, `REPLACE_neighbour`, `REPLACE_neighbourhood`, `REPLACE_neighbourhoods`, `REPLACE_neighbouring`, `REPLACE_neighbours`, `REPLACE_nemesis`, `REPLACE_nervous`, `REPLACE_neurobiology`, `REPLACE_neurology`, `REPLACE_neuroses`, `REPLACE_neurosurgery`, `REPLACE_neutralise`, `REPLACE_never`, `REPLACE_new`, `REPLACE_news`, `REPLACE_newsman`, `REPLACE_newspaper`, `REPLACE_newspaperman`, `REPLACE_newspapers`, `REPLACE_next`, `REPLACE_nexus`, `REPLACE_nice`, `REPLACE_niceties`, `REPLACE_nicety`, `REPLACE_night`, `REPLACE_nightdress`, `REPLACE_nine`, `REPLACE_no`, `REPLACE_nobility`, `REPLACE_nobleman`, `REPLACE_nobody`, `REPLACE_noodles`, `REPLACE_nor`, `REPLACE_normal`, `REPLACE_normalcy`, `REPLACE_normalise`, `REPLACE_normalises`, `REPLACE_normality`, `REPLACE_nostrum`, `REPLACE_not`, `REPLACE_nothing`, `REPLACE_notice`, `REPLACE_noticed`, `REPLACE_novella`, `REPLACE_novels`, `REPLACE_novelties`, `REPLACE_novelty`, `REPLACE_now`, `REPLACE_nowadays`, `REPLACE_nuclear`, `REPLACE_nuclei`, `REPLACE_numb`, `REPLACE_number`, `REPLACE_numbered`, `REPLACE_numbers`, `REPLACE_nurseries`, `REPLACE_nursery`, `REPLACE_oarsman`, `REPLACE_oarsmen`, `REPLACE_oases`, `REPLACE_oasis`, `REPLACE_obesity`, `REPLACE_obituaries`, `REPLACE_obituary`, `REPLACE_objectivity`, `REPLACE_obscenities`, `REPLACE_obscenity`, `REPLACE_obscurity`, `REPLACE_observatories`, `REPLACE_observatory`, `REPLACE_occupancies`, `REPLACE_occupancy`, `REPLACE_occur`, `REPLACE_occured`, `REPLACE_occuring`, `REPLACE_occurred`, `REPLACE_occurring`, `REPLACE_occurs`, `REPLACE_octopi`, `REPLACE_octopus`, `REPLACE_oddities`, `REPLACE_oddity`, `REPLACE_oedema`, `REPLACE_of`, `REPLACE_off`, `REPLACE_off-load`, `REPLACE_offered`, `REPLACE_offerings`, 
`REPLACE_offers`, `REPLACE_office`, `REPLACE_offices`, `REPLACE_offset`, `REPLACE_offsetting`, `REPLACE_offshoot`, `REPLACE_offshoots`, `REPLACE_offspring`, `REPLACE_often`, `REPLACE_oilman`, `REPLACE_okay`, `REPLACE_old`, `REPLACE_older`, `REPLACE_ombudsman`, `REPLACE_ombudsmen`, `REPLACE_on`, `REPLACE_once`, `REPLACE_oncology`, `REPLACE_one`, `REPLACE_ones`, `REPLACE_online`, `REPLACE_only`, `REPLACE_onto`, `REPLACE_open`, `REPLACE_opened`, `REPLACE_opening`, `REPLACE_opera`, `REPLACE_opinion`, `REPLACE_opinions`, `REPLACE_opportunities`, `REPLACE_opportunity`, `REPLACE_optimise`, `REPLACE_optimum`, `REPLACE_option`, `REPLACE_optioned`, `REPLACE_options`, `REPLACE_opus`, `REPLACE_or`, `REPLACE_order`, `REPLACE_ordered`, `REPLACE_orderly`, `REPLACE_organise`, `REPLACE_organised`, `REPLACE_organising`, `REPLACE_orgy`, `REPLACE_orthodoxy`, `REPLACE_osteoporosis`, `REPLACE_ostriches`, `REPLACE_other`, `REPLACE_others`, `REPLACE_our`, `REPLACE_ourselves`, `REPLACE_out`, `REPLACE_out-competing`, `REPLACE_outbid`, `REPLACE_outcrops`, `REPLACE_output`, `REPLACE_outside`, `REPLACE_outsource`, `REPLACE_outsourced`, `REPLACE_outsourcing`, `REPLACE_ovaries`, `REPLACE_over`, `REPLACE_over-fishing`, `REPLACE_over-react`, `REPLACE_over-riding`, `REPLACE_over-stretched`, `REPLACE_overbooking`, `REPLACE_overextended`, `REPLACE_overfishing`, `REPLACE_overlaid`, `REPLACE_overseas`, `REPLACE_overspend`, `REPLACE_overstressed`, `REPLACE_overstretched`, `REPLACE_own`, `REPLACE_owned`, `REPLACE_owns`, `REPLACE_oxen`, `REPLACE_oxymoron`, `REPLACE_paddies`, `REPLACE_paddy`, `REPLACE_paid`, `REPLACE_pain`, `REPLACE_pair`, `REPLACE_paired`, `REPLACE_pal`, `REPLACE_pale`, `REPLACE_pall`, `REPLACE_pals`, `REPLACE_palsy`, `REPLACE_pancreas`, `REPLACE_panoply`, `REPLACE_panties`, `REPLACE_pantries`, `REPLACE_pantry`, `REPLACE_pants`, `REPLACE_papacy`, `REPLACE_paper`, `REPLACE_papillomavirus`, `REPLACE_papyrus`, `REPLACE_paradigm`, `REPLACE_paradigms`, `REPLACE_paradox`, `REPLACE_paradoxes`, `REPLACE_paragliding`, `REPLACE_parallel`, `REPLACE_parallels`, `REPLACE_paralyse`, `REPLACE_paralysed`, `REPLACE_paralysing`, `REPLACE_paralysis`, `REPLACE_parcel`, `REPLACE_parcels`, `REPLACE_parentheses`, `REPLACE_parents`, `REPLACE_parents-in-law`, `REPLACE_parish`, `REPLACE_parishes`, `REPLACE_parity`, `REPLACE_parodies`, `REPLACE_parody`, `REPLACE_pars`, `REPLACE_part`, `REPLACE_parted`, `REPLACE_particular`, `REPLACE_partied`, `REPLACE_parties`, `REPLACE_parts`, `REPLACE_party`, `REPLACE_partying`, `REPLACE_pass`, `REPLACE_passed`, `REPLACE_passersby`, `REPLACE_passes`, `REPLACE_passing`, `REPLACE_past`, `REPLACE_pasta`, `REPLACE_paste`, `REPLACE_pastor`, `REPLACE_pastors`, `REPLACE_pastries`, `REPLACE_pastry`, `REPLACE_pasty`, `REPLACE_paternity`, `REPLACE_pathology`, `REPLACE_patient`, `REPLACE_patients`, `REPLACE_patina`, `REPLACE_patriarchy`, `REPLACE_patrimony`, `REPLACE_patrolmen`, `REPLACE_patroness`, `REPLACE_patronised`, `REPLACE_pattern`, `REPLACE_patterned`, `REPLACE_patterns`, `REPLACE_patties`, `REPLACE_patty`, `REPLACE_paucity`, `REPLACE_paunch`, `REPLACE_pavement`, `REPLACE_pay`, `REPLACE_paying`, `REPLACE_pays`, `REPLACE_peculiarities`, `REPLACE_pedal`, `REPLACE_pedaled`, `REPLACE_pedalling`, `REPLACE_pedals`, `REPLACE_pelvis`, `REPLACE_penalised`, `REPLACE_penalises`, `REPLACE_penalising`, `REPLACE_penalties`, `REPLACE_penalty`, `REPLACE_pence`, `REPLACE_penciled`, `REPLACE_penciling`, `REPLACE_pendulum`, `REPLACE_penis`, `REPLACE_penises`, `REPLACE_penitentiary`, `REPLACE_pennies`, `REPLACE_penny`, 
`REPLACE_people`, `REPLACE_perfect`, `REPLACE_performance`, `REPLACE_performances`, `REPLACE_period`, `REPLACE_periphery`, `REPLACE_perjury`, `REPLACE_person`, `REPLACE_persona`, `REPLACE_personalise`, `REPLACE_personalised`, `REPLACE_personalities`, `REPLACE_personality`, `REPLACE_personas`, `REPLACE_persons`, `REPLACE_pertussis`, `REPLACE_phalanx`, `REPLACE_pharmacies`, `REPLACE_pharmacy`, `REPLACE_phenomena`, `REPLACE_phenomenon`, `REPLACE_philanthropy`, `REPLACE_philology`, `REPLACE_philosophies`, `REPLACE_philosophy`, `REPLACE_phone`, `REPLACE_phones`, `REPLACE_photo`, `REPLACE_photography`, `REPLACE_photos`, `REPLACE_photosynthesis`, `REPLACE_phrase`, `REPLACE_phrases`, `REPLACE_phylum`, `REPLACE_physiology`, `REPLACE_physiotherapy`, `REPLACE_pick`, `REPLACE_picnic`, `REPLACE_picnics`, `REPLACE_pics`, `REPLACE_picture`, `REPLACE_pictures`, `REPLACE_piety`, `REPLACE_piggy`, `REPLACE_piggyback`, `REPLACE_piggybacking`, `REPLACE_pin-point`, `REPLACE_pinky`, `REPLACE_piracy`, `REPLACE_pituitary`, `REPLACE_place`, `REPLACE_placebo`, `REPLACE_placed`, `REPLACE_places`, `REPLACE_plan`, `REPLACE_planned`, `REPLACE_planning`, `REPLACE_plans`, `REPLACE_planted`, `REPLACE_plants`, `REPLACE_plateau`, `REPLACE_plateaued`, `REPLACE_play`, `REPLACE_play-off`, `REPLACE_play-offs`, `REPLACE_played`, `REPLACE_player`, `REPLACE_players`, `REPLACE_playing`, `REPLACE_playoff`, `REPLACE_playoffs`, `REPLACE_plays`, `REPLACE_plead`, `REPLACE_pleaded`, `REPLACE_pleading`, `REPLACE_pleasantries`, `REPLACE_please`, `REPLACE_pleased`, `REPLACE_pled`, `REPLACE_plenary`, `REPLACE_plenipotentiary`, `REPLACE_plotting`, `REPLACE_plug`, `REPLACE_plugged`, `REPLACE_plugging`, `REPLACE_plugs`, `REPLACE_plus`, `REPLACE_pluses`, `REPLACE_pneumonia`, `REPLACE_pneumoniae`, `REPLACE_podium`, `REPLACE_point`, `REPLACE_pointed`, `REPLACE_points`, `REPLACE_polarised`, `REPLACE_policeman`, `REPLACE_policemen`, `REPLACE_policewoman`, `REPLACE_policewomen`, `REPLACE_policies`, `REPLACE_policy`, `REPLACE_policy-makers`, `REPLACE_policy-making`, `REPLACE_policyholder`, `REPLACE_policyholders`, `REPLACE_policymaker`, `REPLACE_policymakers`, `REPLACE_policymaking`, `REPLACE_politicise`, `REPLACE_politicised`, `REPLACE_pomposity`, `REPLACE_ponies`, `REPLACE_pony`, `REPLACE_pooch`, `REPLACE_pooches`, `REPLACE_pool`, `REPLACE_poor`, `REPLACE_poppies`, `REPLACE_poppy`, `REPLACE_popular`, `REPLACE_popularised`, `REPLACE_popularising`, `REPLACE_popularity`, `REPLACE_porch`, `REPLACE_porches`, `REPLACE_portico`, `REPLACE_possibilities`, `REPLACE_possibility`, `REPLACE_possible`, `REPLACE_post`, `REPLACE_posted`, `REPLACE_posterity`, `REPLACE_postman`, `REPLACE_postmarked`, `REPLACE_potato`, `REPLACE_potatoes`, `REPLACE_potency`, `REPLACE_pottery`, `REPLACE_potty`, `REPLACE_poultry`, `REPLACE_practicalities`, `REPLACE_practice`, `REPLACE_practiced`, `REPLACE_practices`, `REPLACE_practicing`, `REPLACE_practise`, `REPLACE_pre-date`, `REPLACE_pre-dated`, `REPLACE_pre-dates`, `REPLACE_pre-determined`, `REPLACE_pre-empt`, `REPLACE_pre-empted`, `REPLACE_pre-empting`, `REPLACE_pre-existing`, `REPLACE_pre-filled`, `REPLACE_pre-loaded`, `REPLACE_pre-ordered`, `REPLACE_pre-orders`, `REPLACE_pre-paid`, `REPLACE_pre-printed`, `REPLACE_prefer`, `REPLACE_pregnancies`, `REPLACE_pregnancy`, `REPLACE_preheat`, `REPLACE_preliminary`, `REPLACE_premier`, `REPLACE_premiere`, `REPLACE_premiered`, `REPLACE_premieres`, `REPLACE_premiering`, `REPLACE_preorders`, `REPLACE_prep`, `REPLACE_prepare`, `REPLACE_prepared`, `REPLACE_preparing`, `REPLACE_prepping`, 
`REPLACE_preps`, `REPLACE_present`, `REPLACE_presented`, `REPLACE_presents`, `REPLACE_presidencies`, `REPLACE_presidency`, `REPLACE_president-elect`, `REPLACE_pressurised`, `REPLACE_pretty`, `REPLACE_prevail`, `REPLACE_prevent`, `REPLACE_preview`, `REPLACE_previewed`, `REPLACE_previews`, `REPLACE_previous`, `REPLACE_price`, `REPLACE_priced`, `REPLACE_prices`, `REPLACE_priestess`, `REPLACE_primaries`, `REPLACE_primary`, `REPLACE_princess`, `REPLACE_princesses`, `REPLACE_principality`, `REPLACE_priorities`, `REPLACE_prioritise`, `REPLACE_prioritised`, `REPLACE_prioritising`, `REPLACE_prioritize`, `REPLACE_prioritizing`, `REPLACE_priority`, `REPLACE_privatise`, `REPLACE_privatised`, `REPLACE_privatising`, `REPLACE_privy`, `REPLACE_prized`, `REPLACE_probabilities`, `REPLACE_probability`, `REPLACE_probably`, `REPLACE_problem`, `REPLACE_problems`, `REPLACE_proceedings`, `REPLACE_prodigy`, `REPLACE_product`, `REPLACE_productivity`, `REPLACE_products`, `REPLACE_profanities`, `REPLACE_profanity`, `REPLACE_proficiency`, `REPLACE_profile`, `REPLACE_profiled`, `REPLACE_profiles`, `REPLACE_profiling`, `REPLACE_profitability`, `REPLACE_profundity`, `REPLACE_prognoses`, `REPLACE_prognosis`, `REPLACE_program`, `REPLACE_programing`, `REPLACE_programmes`, `REPLACE_programming`, `REPLACE_programs`, `REPLACE_promised`, `REPLACE_promontory`, `REPLACE_properties`, `REPLACE_property`, `REPLACE_prophecy`, `REPLACE_proprietary`, `REPLACE_proselytising`, `REPLACE_prospectus`, `REPLACE_prosthesis`, `REPLACE_prototype`, `REPLACE_prototypes`, `REPLACE_provide`, `REPLACE_provides`, `REPLACE_proxies`, `REPLACE_proximity`, `REPLACE_proxy`, `REPLACE_psoriasis`, `REPLACE_psychiatry`, `REPLACE_psychoanalysis`, `REPLACE_psychology`, `REPLACE_psychosis`, `REPLACE_puberty`, `REPLACE_pubis`, `REPLACE_public`, `REPLACE_publicise`, `REPLACE_publicised`, `REPLACE_pungency`, `REPLACE_puppies`, `REPLACE_puppy`, `REPLACE_purgatory`, `REPLACE_purity`, `REPLACE_purpose`, `REPLACE_pus`, `REPLACE_pussy`, `REPLACE_put`, `REPLACE_putamen`, `REPLACE_puts`, `REPLACE_putt`, `REPLACE_putts`, `REPLACE_pygmies`, `REPLACE_quackery`, `REPLACE_qualities`, `REPLACE_quality`, `REPLACE_quandary`, `REPLACE_quantities`, `REPLACE_quantity`, `REPLACE_quantum`, `REPLACE_quarrel`, `REPLACE_quarreled`, `REPLACE_quarterly`, `REPLACE_question`, `REPLACE_questions`, `REPLACE_queue`, `REPLACE_queued`, `REPLACE_queues`, `REPLACE_queuing`, `REPLACE_quickly`, `REPLACE_quiet`, `REPLACE_quis`, `REPLACE_quit`, `REPLACE_quite`, `REPLACE_rabbit`, `REPLACE_rabbits`, `REPLACE_rack`, `REPLACE_racked`, `REPLACE_racking`, `REPLACE_racks`, `REPLACE_radicalised`, `REPLACE_radicalized`, `REPLACE_radio-frequency`, `REPLACE_radiofrequency`, `REPLACE_radiology`, `REPLACE_radiotherapy`, `REPLACE_radius`, `REPLACE_rail`, `REPLACE_railings`, `REPLACE_railroad`, `REPLACE_railroads`, `REPLACE_rain`, `REPLACE_rained`, `REPLACE_raining`, `REPLACE_rains`, `REPLACE_rainy`, `REPLACE_ran`, `REPLACE_rang`, `REPLACE_rankings`, `REPLACE_rape`, `REPLACE_raped`, `REPLACE_rapidity`, `REPLACE_rarefied`, `REPLACE_rarity`, `REPLACE_raspberries`, `REPLACE_raspberry`, `REPLACE_ratchet`, `REPLACE_ratcheted`, `REPLACE_ratcheting`, `REPLACE_ratchets`, `REPLACE_rated`, `REPLACE_rather`, `REPLACE_ratings`, `REPLACE_rationalised`, `REPLACE_rationality`, `REPLACE_re-acquired`, `REPLACE_re-acquiring`, `REPLACE_re-arrange`, `REPLACE_re-arranged`, `REPLACE_re-balance`, `REPLACE_re-build`, `REPLACE_re-defined`, `REPLACE_re-directing`, `REPLACE_re-elect`, `REPLACE_re-elected`, `REPLACE_re-emerge`, 
`REPLACE_re-emerging`, `REPLACE_re-emphasised`, `REPLACE_re-enact`, `REPLACE_re-enacted`, `REPLACE_re-enacting`, `REPLACE_re-engage`, `REPLACE_re-engineered`, `REPLACE_re-engineering`, `REPLACE_re-enter`, `REPLACE_re-entered`, `REPLACE_re-entry`, `REPLACE_re-establish`, `REPLACE_re-establishing`, `REPLACE_re-evaluate`, `REPLACE_re-evaluated`, `REPLACE_re-examine`, `REPLACE_re-examined`, `REPLACE_re-examining`, `REPLACE_re-export`, `REPLACE_re-install`, `REPLACE_re-introduce`, `REPLACE_re-launched`, `REPLACE_re-offending`, `REPLACE_re-open`, `REPLACE_re-opened`, `REPLACE_re-opening`, `REPLACE_re-opens`, `REPLACE_re-route`, `REPLACE_re-routing`, `REPLACE_re-shape`, `REPLACE_re-start`, `REPLACE_re-starting`, `REPLACE_re-used`, `REPLACE_re-visits`, `REPLACE_re-write`, `REPLACE_re-writing`, `REPLACE_re-written`, `REPLACE_reach`, `REPLACE_reached`, `REPLACE_reaching`, `REPLACE_reactionary`, `REPLACE_read`, `REPLACE_readiness`, `REPLACE_reading`, `REPLACE_ready`, `REPLACE_real`, `REPLACE_realise`, `REPLACE_realised`, `REPLACE_realising`, `REPLACE_realities`, `REPLACE_reality`, `REPLACE_realize`, `REPLACE_realized`, `REPLACE_reallocated`, `REPLACE_really`, `REPLACE_reapply`, `REPLACE_reappoint`, `REPLACE_reappointed`, `REPLACE_reason`, `REPLACE_reasoning`, `REPLACE_reasons`, `REPLACE_reassembled`, `REPLACE_reassess`, `REPLACE_reassessing`, `REPLACE_reawakening`, `REPLACE_rebalanced`, `REPLACE_rebalancing`, `REPLACE_rebooked`, `REPLACE_reboot`, `REPLACE_rebound`, `REPLACE_rebounded`, `REPLACE_rebounding`, `REPLACE_rebounds`, `REPLACE_recap`, `REPLACE_recapping`, `REPLACE_receive`, `REPLACE_received`, `REPLACE_receiving`, `REPLACE_recent`, `REPLACE_recently`, `REPLACE_recharge`, `REPLACE_recharging`, `REPLACE_rechristened`, `REPLACE_recirculated`, `REPLACE_recognise`, `REPLACE_recognised`, `REPLACE_recognises`, `REPLACE_recognising`, `REPLACE_recommend`, `REPLACE_recommended`, `REPLACE_reconfigure`, `REPLACE_reconfigured`, `REPLACE_reconfirmed`, `REPLACE_reconquer`, `REPLACE_reconvene`, `REPLACE_record`, `REPLACE_recorded`, `REPLACE_records`, `REPLACE_recoveries`, `REPLACE_recovery`, `REPLACE_recross`, `REPLACE_redecorate`, `REPLACE_redecorating`, `REPLACE_rededicated`, `REPLACE_redeveloped`, `REPLACE_rediscovered`, `REPLACE_rediscovering`, `REPLACE_rediscovery`, `REPLACE_redraw`, `REPLACE_redrawn`, `REPLACE_reduce`, `REPLACE_redundancies`, `REPLACE_redundancy`, `REPLACE_refashioning`, `REPLACE_refer`, `REPLACE_referenda`, `REPLACE_referendum`, `REPLACE_referred`, `REPLACE_referring`, `REPLACE_refers`, `REPLACE_refineries`, `REPLACE_refinery`, `REPLACE_reflex`, `REPLACE_reflexes`, `REPLACE_refocus`, `REPLACE_reformulate`, `REPLACE_reframe`, `REPLACE_reframing`, `REPLACE_refuel`, `REPLACE_refueling`, `REPLACE_refuelling`, `REPLACE_regard`, `REPLACE_regarded`, `REPLACE_regarding`, `REPLACE_registered`, `REPLACE_registries`, `REPLACE_registry`, `REPLACE_regret`, `REPLACE_regularity`, `REPLACE_reintegrate`, `REPLACE_reinterpreted`, `REPLACE_reinterpreting`, `REPLACE_related`, `REPLACE_relationship`, `REPLACE_relationships`, `REPLACE_relax`, `REPLACE_relaxed`, `REPLACE_relaxing`, `REPLACE_relay`, `REPLACE_relayed`, `REPLACE_relays`, `REPLACE_release`, `REPLACE_released`, `REPLACE_relent`, `REPLACE_reliability`, `REPLACE_relieved`, `REPLACE_reloaded`, `REPLACE_reloading`, `REPLACE_remastered`, `REPLACE_remember`, `REPLACE_remembered`, `REPLACE_reminded`, `REPLACE_reminds`, `REPLACE_remix`, `REPLACE_remodel`, `REPLACE_remodeled`, `REPLACE_remodeling`, `REPLACE_remodelled`, `REPLACE_renationalised`, 
`REPLACE_reneged`, `REPLACE_rented`, `REPLACE_reoffend`, `REPLACE_reoffended`, `REPLACE_reoffending`, `REPLACE_reorder`, `REPLACE_reorganise`, `REPLACE_reorganised`, `REPLACE_repertory`, `REPLACE_replanted`, `REPLACE_reply`, `REPLACE_report`, `REPLACE_reported`, `REPLACE_reports`, `REPLACE_repositories`, `REPLACE_repository`, `REPLACE_representation`, `REPLACE_representations`, `REPLACE_reprocessing`, `REPLACE_reprogrammed`, `REPLACE_reprogramming`, `REPLACE_republished`, `REPLACE_republishing`, `REPLACE_require`, `REPLACE_required`, `REPLACE_requires`, `REPLACE_research`, `REPLACE_resent`, `REPLACE_residency`, `REPLACE_resisted`, `REPLACE_resize`, `REPLACE_resource`, `REPLACE_resources`, `REPLACE_responsibilities`, `REPLACE_responsibility`, `REPLACE_rest`, `REPLACE_restart`, `REPLACE_restarted`, `REPLACE_restarting`, `REPLACE_restaurant`, `REPLACE_restaurants`, `REPLACE_restyled`, `REPLACE_resubmit`, `REPLACE_resubmitted`, `REPLACE_result`, `REPLACE_resulted`, `REPLACE_results`, `REPLACE_retested`, `REPLACE_retesting`, `REPLACE_retina`, `REPLACE_retrain`, `REPLACE_retraining`, `REPLACE_retroviruses`, `REPLACE_retune`, `REPLACE_retuning`, `REPLACE_return`, `REPLACE_returned`, `REPLACE_returning`, `REPLACE_returns`, `REPLACE_reused`, `REPLACE_reusing`, `REPLACE_revel`, `REPLACE_reveled`, `REPLACE_revelling`, `REPLACE_revelries`, `REPLACE_revelry`, `REPLACE_revitalised`, `REPLACE_revolutionaries`, `REPLACE_revolutionary`, `REPLACE_revolutionised`, `REPLACE_rex`, `REPLACE_rhesus`, `REPLACE_rhinovirus`, `REPLACE_rhyme`, `REPLACE_rhymes`, `REPLACE_rice`, `REPLACE_richness`, `REPLACE_ricocheted`, `REPLACE_ricocheting`, `REPLACE_rifleman`, `REPLACE_right`, `REPLACE_rights`, `REPLACE_rigidities`, `REPLACE_rigidity`, `REPLACE_ring`, `REPLACE_ringed`, `REPLACE_ringing`, `REPLACE_rings`, `REPLACE_rising`, `REPLACE_risk`, `REPLACE_rival`, `REPLACE_rivaled`, `REPLACE_rivalled`, `REPLACE_rivalries`, `REPLACE_rivalry`, `REPLACE_rivals`, `REPLACE_roaches`, `REPLACE_road`, `REPLACE_roads`, `REPLACE_roared`, `REPLACE_robberies`, `REPLACE_robbery`, `REPLACE_robustness`, `REPLACE_rode`, `REPLACE_roof`, `REPLACE_roofed`, `REPLACE_roofs`, `REPLACE_rookery`, `REPLACE_room`, `REPLACE_roomful`, `REPLACE_rooms`, `REPLACE_rope`, `REPLACE_roped`, `REPLACE_ropes`, `REPLACE_rosary`, `REPLACE_rose`, `REPLACE_rotator`, `REPLACE_roughness`, `REPLACE_route`, `REPLACE_routs`, `REPLACE_roving`, `REPLACE_royalties`, `REPLACE_royalty`, `REPLACE_rubies`, `REPLACE_ruby`, `REPLACE_ruckus`, `REPLACE_ruled`, `REPLACE_rules`, `REPLACE_rumor`, `REPLACE_rumored`, `REPLACE_rumors`, `REPLACE_run`, `REPLACE_runner-up`, `REPLACE_runners-up`, `REPLACE_running`, `REPLACE_runs`, `REPLACE_s`, `REPLACE_sad`, `REPLACE_safe`, `REPLACE_said`, `REPLACE_salaries`, `REPLACE_salary`, `REPLACE_sales`, `REPLACE_salesman`, `REPLACE_salesmen`, `REPLACE_salespeople`, `REPLACE_salmonella`, `REPLACE_salvo`, `REPLACE_salvos`, `REPLACE_same`, `REPLACE_sanctuaries`, `REPLACE_sanctuary`, `REPLACE_sang`, `REPLACE_sank`, `REPLACE_sarcoidosis`, `REPLACE_sarcoma`, `REPLACE_sat`, `REPLACE_satiety`, `REPLACE_satirised`, `REPLACE_satisfied`, `REPLACE_satisfies`, `REPLACE_sauce`, `REPLACE_savagery`, `REPLACE_save`, `REPLACE_saved`, `REPLACE_saves`, `REPLACE_savor`, `REPLACE_savored`, `REPLACE_savors`, `REPLACE_savoury`, `REPLACE_saw`, `REPLACE_say`, `REPLACE_saying`, `REPLACE_says`, `REPLACE_scam`, `REPLACE_scammed`, `REPLACE_scamming`, `REPLACE_scams`, `REPLACE_scapegoat`, `REPLACE_scapegoating`, `REPLACE_scapegoats`, `REPLACE_scarcity`, `REPLACE_scared`, 
`REPLACE_scares`, `REPLACE_scarf`, `REPLACE_scarves`, `REPLACE_scary`, `REPLACE_scene`, `REPLACE_scenery`, `REPLACE_schedule`, `REPLACE_scheduled`, `REPLACE_schmaltz`, `REPLACE_school`, `REPLACE_schoolchildren`, `REPLACE_schools`, `REPLACE_sclerosis`, `REPLACE_scope`, `REPLACE_scopes`, `REPLACE_score`, `REPLACE_scored`, `REPLACE_scoring`, `REPLACE_scrutinise`, `REPLACE_scrutinised`, `REPLACE_scrutinising`, `REPLACE_scrutiny`, `REPLACE_seamen`, `REPLACE_seared`, `REPLACE_searing`, `REPLACE_season`, `REPLACE_seasonality`, `REPLACE_second`, `REPLACE_second-guess`, `REPLACE_second-guessing`, `REPLACE_secondary`, `REPLACE_seconds`, `REPLACE_secretaries`, `REPLACE_secretary`, `REPLACE_secretary-general`, `REPLACE_secrets`, `REPLACE_sector`, `REPLACE_securities`, `REPLACE_security`, `REPLACE_seductress`, `REPLACE_see`, `REPLACE_seeing`, `REPLACE_seem`, `REPLACE_seemed`, `REPLACE_seems`, `REPLACE_seen`, `REPLACE_sees`, `REPLACE_self`, `REPLACE_self-described`, `REPLACE_self-destruct`, `REPLACE_self-destructed`, `REPLACE_self-harm`, `REPLACE_self-identified`, `REPLACE_self-selected`, `REPLACE_self-sustaining`, `REPLACE_sell`, `REPLACE_selling`, `REPLACE_sells`, `REPLACE_selves`, `REPLACE_seminaries`, `REPLACE_seminary`, `REPLACE_seminomas`, `REPLACE_send`, `REPLACE_seniority`, `REPLACE_sensationalised`, `REPLACE_sense`, `REPLACE_sensibilities`, `REPLACE_sensibility`, `REPLACE_sensitivities`, `REPLACE_sensitivity`, `REPLACE_sent`, `REPLACE_sentence`, `REPLACE_sentenced`, `REPLACE_sentences`, `REPLACE_sentries`, `REPLACE_sentry`, `REPLACE_sepia`, `REPLACE_sepsis`, `REPLACE_sequence`, `REPLACE_sequenced`, `REPLACE_sequences`, `REPLACE_sequencing`, `REPLACE_serendipity`, `REPLACE_series`, `REPLACE_serious`, `REPLACE_seriously`, `REPLACE_serum`, `REPLACE_serve`, `REPLACE_service`, `REPLACE_serviceman`, `REPLACE_servicemen`, `REPLACE_services`, `REPLACE_set`, `REPLACE_sets`, `REPLACE_setting`, `REPLACE_settings`, `REPLACE_seventies`, `REPLACE_seventy`, `REPLACE_several`, `REPLACE_severity`, `REPLACE_sex`, `REPLACE_sexuality`, `REPLACE_shack`, `REPLACE_shacks`, `REPLACE_shadow`, `REPLACE_shanty`, `REPLACE_share`, `REPLACE_sharing`, `REPLACE_she`, `REPLACE_shelf`, `REPLACE_shellfish`, `REPLACE_shelves`, `REPLACE_sherry`, `REPLACE_shine`, `REPLACE_shined`, `REPLACE_shines`, `REPLACE_shining`, `REPLACE_shipping`, `REPLACE_shocked`, `REPLACE_shoe`, `REPLACE_shoehorn`, `REPLACE_shoes`, `REPLACE_shooting`, `REPLACE_shop`, `REPLACE_shoplifting`, `REPLACE_shopping`, `REPLACE_shops`, `REPLACE_short`, `REPLACE_short-change`, `REPLACE_short-circuiting`, `REPLACE_shortlist`, `REPLACE_shortlisted`, `REPLACE_shortlisting`, `REPLACE_shot`, `REPLACE_shots`, `REPLACE_should`, `REPLACE_shovel`, `REPLACE_shoveling`, `REPLACE_shovelling`, `REPLACE_shovels`, `REPLACE_show`, `REPLACE_showed`, `REPLACE_showing`, `REPLACE_showman`, `REPLACE_shown`, `REPLACE_shows`, `REPLACE_shrank`, `REPLACE_shrine`, `REPLACE_shrink`, `REPLACE_shrinking`, `REPLACE_shriveled`, `REPLACE_shrunk`, `REPLACE_sic`, `REPLACE_sick`, `REPLACE_sickness`, `REPLACE_side`, `REPLACE_side-stepping`, `REPLACE_sightings`, `REPLACE_sign`, `REPLACE_signal`, `REPLACE_signaled`, `REPLACE_signaling`, `REPLACE_signalled`, `REPLACE_signalling`, `REPLACE_signalman`, `REPLACE_signals`, `REPLACE_signatories`, `REPLACE_signatory`, `REPLACE_signed`, `REPLACE_signing`, `REPLACE_signs`, `REPLACE_silly`, `REPLACE_similar`, `REPLACE_similarities`, `REPLACE_similarity`, `REPLACE_simple`, `REPLACE_simplicity`, `REPLACE_since`, `REPLACE_sincerely`, `REPLACE_sing`, 
`REPLACE_singing`, `REPLACE_sings`, `REPLACE_sink`, `REPLACE_sinking`, `REPLACE_sinks`, `REPLACE_sinuses`, `REPLACE_siphon`, `REPLACE_siphoned`, `REPLACE_siphoning`, `REPLACE_sister-in-law`, `REPLACE_sit`, `REPLACE_site`, `REPLACE_sites`, `REPLACE_sitting`, `REPLACE_situation`, `REPLACE_situations`, `REPLACE_skew`, `REPLACE_skewed`, `REPLACE_ski`, `REPLACE_skies`, `REPLACE_skiing`, `REPLACE_skills`, `REPLACE_skis`, `REPLACE_sky`, `REPLACE_slackness`, `REPLACE_sledges`, `REPLACE_sleep`, `REPLACE_sleeping`, `REPLACE_sleepy`, `REPLACE_slept`, `REPLACE_slim`, `REPLACE_slimming`, `REPLACE_slow`, `REPLACE_slurry`, `REPLACE_smack`, `REPLACE_small`, `REPLACE_smoldered`, `REPLACE_smoldering`, `REPLACE_smooth`, `REPLACE_smoothes`, `REPLACE_snag`, `REPLACE_snagged`, `REPLACE_snagging`, `REPLACE_sneak`, `REPLACE_sneaked`, `REPLACE_sneaking`, `REPLACE_snobbery`, `REPLACE_snorkeling`, `REPLACE_snorkelling`, `REPLACE_snow`, `REPLACE_snowball`, `REPLACE_snowballed`, `REPLACE_snowballing`, `REPLACE_snowballs`, `REPLACE_snowboarding`, `REPLACE_snowman`, `REPLACE_so`, `REPLACE_soapbox`, `REPLACE_socialise`, `REPLACE_socialised`, `REPLACE_socialising`, `REPLACE_societies`, `REPLACE_society`, `REPLACE_sociology`, `REPLACE_sock`, `REPLACE_socks`, `REPLACE_software`, `REPLACE_solarium`, `REPLACE_sold`, `REPLACE_sole`, `REPLACE_solidarity`, `REPLACE_soliloquy`, `REPLACE_solo`, `REPLACE_solve`, `REPLACE_some`, `REPLACE_someone`, `REPLACE_somersaulting`, `REPLACE_something`, `REPLACE_sometimes`, `REPLACE_son`, `REPLACE_son-in-law`, `REPLACE_song`, `REPLACE_songs`, `REPLACE_sons`, `REPLACE_soon`, `REPLACE_sorceress`, `REPLACE_sorcery`, `REPLACE_sorority`, `REPLACE_sound`, `REPLACE_sounds`, `REPLACE_source`, `REPLACE_sourced`, `REPLACE_sources`, `REPLACE_sourcing`, `REPLACE_space`, `REPLACE_spasticity`, `REPLACE_spat`, `REPLACE_spats`, `REPLACE_speak`, `REPLACE_speakers`, `REPLACE_speaking`, `REPLACE_speaks`, `REPLACE_special`, `REPLACE_specialised`, `REPLACE_specialises`, `REPLACE_specialising`, `REPLACE_specialities`, `REPLACE_speciality`, `REPLACE_specialties`, `REPLACE_specialty`, `REPLACE_spectrometry`, `REPLACE_spectroscopy`, `REPLACE_spectrum`, `REPLACE_sped`, `REPLACE_speech`, `REPLACE_speeches`, `REPLACE_speed`, `REPLACE_speeded`, `REPLACE_speeding`, `REPLACE_speeds`, `REPLACE_spell`, `REPLACE_spelled`, `REPLACE_spelling`, `REPLACE_spend`, `REPLACE_spending`, `REPLACE_spent`, `REPLACE_sphinx`, `REPLACE_spill`, `REPLACE_spilled`, `REPLACE_spilling`, `REPLACE_spills`, `REPLACE_spiral`, `REPLACE_spiraled`, `REPLACE_spiraling`, `REPLACE_spiralled`, `REPLACE_spiralling`, `REPLACE_spirit`, `REPLACE_spirited`, `REPLACE_spirits`, `REPLACE_spit`, `REPLACE_spitting`, `REPLACE_split`, `REPLACE_spoil`, `REPLACE_spoiled`, `REPLACE_spoiling`, `REPLACE_spoils`, `REPLACE_spoke`, `REPLACE_spoken`, `REPLACE_spokesman`, `REPLACE_spokesmen`, `REPLACE_sport`, `REPLACE_sports`, `REPLACE_sportsman`, `REPLACE_sportsmen`, `REPLACE_sportswoman`, `REPLACE_spotlight`, `REPLACE_spotlighted`, `REPLACE_spotlights`, `REPLACE_sprang`, `REPLACE_spring`, `REPLACE_springing`, `REPLACE_springs`, `REPLACE_sprung`, `REPLACE_stabilise`, `REPLACE_stabilised`, `REPLACE_stabilises`, `REPLACE_stabilising`, `REPLACE_stability`, `REPLACE_stadium`, `REPLACE_stadiums`, `REPLACE_staff`, `REPLACE_staffed`, `REPLACE_staffing`, `REPLACE_staffs`, `REPLACE_stagecoach`, `REPLACE_staid`, `REPLACE_stamina`, `REPLACE_stand-off`, `REPLACE_standardised`, `REPLACE_standby`, `REPLACE_standoff`, `REPLACE_standoffs`, `REPLACE_star`, `REPLACE_stars`, `REPLACE_start`, 
`REPLACE_start-up`, `REPLACE_start-ups`, `REPLACE_started`, `REPLACE_starting`, `REPLACE_starts`, `REPLACE_startup`, `REPLACE_startups`, `REPLACE_statesman`, `REPLACE_statesmen`, `REPLACE_status`, `REPLACE_statuses`, `REPLACE_stave`, `REPLACE_stay`, `REPLACE_stayed`, `REPLACE_staying`, `REPLACE_stays`, `REPLACE_stench`, `REPLACE_stencils`, `REPLACE_stepchildren`, `REPLACE_stewardesses`, `REPLACE_stiffness`, `REPLACE_stigma`, `REPLACE_stilettos`, `REPLACE_still`, `REPLACE_stimulus`, `REPLACE_stink`, `REPLACE_stinking`, `REPLACE_stinks`, `REPLACE_stirred`, `REPLACE_stolen`, `REPLACE_stomachs`, `REPLACE_stood`, `REPLACE_stop`, `REPLACE_stopped`, `REPLACE_store`, `REPLACE_stores`, `REPLACE_stories`, `REPLACE_story`, `REPLACE_stove`, `REPLACE_strange`, `REPLACE_strategies`, `REPLACE_strategizing`, `REPLACE_strategy`, `REPLACE_strawberries`, `REPLACE_stress`, `REPLACE_stressed`, `REPLACE_stricken`, `REPLACE_strides`, `REPLACE_striding`, `REPLACE_strike`, `REPLACE_strikes`, `REPLACE_striking`, `REPLACE_stringency`, `REPLACE_strive`, `REPLACE_strives`, `REPLACE_striving`, `REPLACE_strode`, `REPLACE_stroke`, `REPLACE_strong`, `REPLACE_strongman`, `REPLACE_struck`, `REPLACE_stucco`, `REPLACE_student`, `REPLACE_students`, `REPLACE_studied`, `REPLACE_studies`, `REPLACE_study`, `REPLACE_studying`, `REPLACE_stuff`, `REPLACE_stupidity`, `REPLACE_style`, `REPLACE_stymie`, `REPLACE_subject`, `REPLACE_subjects`, `REPLACE_submerge`, `REPLACE_submerged`, `REPLACE_submerging`, `REPLACE_subsidiaries`, `REPLACE_subsidiary`, `REPLACE_subsidies`, `REPLACE_subsidise`, `REPLACE_subsidised`, `REPLACE_subsidises`, `REPLACE_subsidy`, `REPLACE_subtitles`, `REPLACE_subtleties`, `REPLACE_subtlety`, `REPLACE_succeed`, `REPLACE_success`, `REPLACE_successes`, `REPLACE_successful`, `REPLACE_such`, `REPLACE_suddenly`, `REPLACE_suffering`, `REPLACE_suggestions`, `REPLACE_suitability`, `REPLACE_summa`, `REPLACE_summaries`, `REPLACE_summary`, `REPLACE_summer`, `REPLACE_summon`, `REPLACE_summoned`, `REPLACE_summonses`, `REPLACE_sun`, `REPLACE_sung`, `REPLACE_sunglasses`, `REPLACE_sunk`, `REPLACE_superiority`, `REPLACE_support`, `REPLACE_supported`, `REPLACE_supporting`, `REPLACE_supposed`, `REPLACE_sure`, `REPLACE_surely`, `REPLACE_surfing`, `REPLACE_surged`, `REPLACE_surgeries`, `REPLACE_surgery`, `REPLACE_surplus`, `REPLACE_surpluses`, `REPLACE_surprise`, `REPLACE_surprised`, `REPLACE_susceptibility`, `REPLACE_sushi`, `REPLACE_swamped`, `REPLACE_swan`, `REPLACE_swathe`, `REPLACE_swathed`, `REPLACE_swathes`, `REPLACE_sweat`, `REPLACE_sweated`, `REPLACE_sweating`, `REPLACE_sweats`, `REPLACE_sweetness`, `REPLACE_swim`, `REPLACE_swimming`, `REPLACE_swing`, `REPLACE_swinging`, `REPLACE_switchover`, `REPLACE_swordfish`, `REPLACE_swung`, `REPLACE_syllabus`, `REPLACE_symbiosis`, `REPLACE_symbolised`, `REPLACE_symmetry`, `REPLACE_sympathies`, `REPLACE_sympathise`, `REPLACE_sympathised`, `REPLACE_sympathy`, `REPLACE_symphony`, `REPLACE_synchronise`, `REPLACE_synchronised`, `REPLACE_synergies`, `REPLACE_synergy`, `REPLACE_synthesize`, `REPLACE_synthesized`, `REPLACE_syringe`, `REPLACE_syringes`, `REPLACE_system`, `REPLACE_systems`, `REPLACE_t`, `REPLACE_taboo`, `REPLACE_taboos`, `REPLACE_tail`, `REPLACE_tailed`, `REPLACE_tailgating`, `REPLACE_tailing`, `REPLACE_tails`, `REPLACE_take`, `REPLACE_taken`, `REPLACE_takes`, `REPLACE_taking`, `REPLACE_talk`, `REPLACE_talked`, `REPLACE_talking`, `REPLACE_talks`, `REPLACE_tangoed`, `REPLACE_tank`, `REPLACE_tanked`, `REPLACE_tanker`, `REPLACE_tanks`, `REPLACE_tantalising`, `REPLACE_tapestries`, 
`REPLACE_tapestry`, `REPLACE_tar`, `REPLACE_target`, `REPLACE_targeted`, `REPLACE_targeting`, `REPLACE_targets`, `REPLACE_targetted`, `REPLACE_targetting`, `REPLACE_task`, `REPLACE_tasks`, `REPLACE_taste`, `REPLACE_tasted`, `REPLACE_tastes`, `REPLACE_tasting`, `REPLACE_taught`, `REPLACE_taxes`, `REPLACE_taxi`, `REPLACE_taxis`, `REPLACE_teach`, `REPLACE_teacher`, `REPLACE_teachers`, `REPLACE_teaches`, `REPLACE_teaching`, `REPLACE_team`, `REPLACE_teams`, `REPLACE_teargas`, `REPLACE_technicality`, `REPLACE_technologies`, `REPLACE_technology`, `REPLACE_teddies`, `REPLACE_teddy`, `REPLACE_tee`, `REPLACE_teeth`, `REPLACE_teleconference`, `REPLACE_telemetry`, `REPLACE_teleported`, `REPLACE_tell`, `REPLACE_telling`, `REPLACE_tells`, `REPLACE_telly`, `REPLACE_temperature`, `REPLACE_temperatures`, `REPLACE_template`, `REPLACE_templates`, `REPLACE_tempo`, `REPLACE_temporary`, `REPLACE_tempos`, `REPLACE_tenancies`, `REPLACE_tenancy`, `REPLACE_tendencies`, `REPLACE_tendency`, `REPLACE_tension`, `REPLACE_tensions`, `REPLACE_tent`, `REPLACE_tenure`, `REPLACE_tenured`, `REPLACE_term`, `REPLACE_terms`, `REPLACE_terra`, `REPLACE_terrible`, `REPLACE_territories`, `REPLACE_territory`, `REPLACE_terrorised`, `REPLACE_terrorising`, `REPLACE_test`, `REPLACE_test-driving`, `REPLACE_testimony`, `REPLACE_tests`, `REPLACE_tetanus`, `REPLACE_textbook`, `REPLACE_textbooks`, `REPLACE_texture`, `REPLACE_textured`, `REPLACE_textures`, `REPLACE_than`, `REPLACE_thank`, `REPLACE_thanks`, `REPLACE_that`, `REPLACE_the`, `REPLACE_their`, `REPLACE_them`, `REPLACE_themes`, `REPLACE_themselves`, `REPLACE_then`, `REPLACE_theology`, `REPLACE_theories`, `REPLACE_theory`, `REPLACE_therapies`, `REPLACE_therapy`, `REPLACE_there`, `REPLACE_these`, `REPLACE_thesis`, `REPLACE_they`, `REPLACE_thickness`, `REPLACE_thief`, `REPLACE_thieves`, `REPLACE_thing`, `REPLACE_things`, `REPLACE_think`, `REPLACE_thinking`, `REPLACE_thinks`, `REPLACE_third`, `REPLACE_this`, `REPLACE_those`, `REPLACE_though`, `REPLACE_thought`, `REPLACE_thoughts`, `REPLACE_threaten`, `REPLACE_three`, `REPLACE_thrive`, `REPLACE_thriving`, `REPLACE_thrombosis`, `REPLACE_through`, `REPLACE_throughout`, `REPLACE_thrush`, `REPLACE_thrushes`, `REPLACE_tibia`, `REPLACE_tickets`, `REPLACE_tie`, `REPLACE_tied`, `REPLACE_ties`, `REPLACE_tightness`, `REPLACE_time`, `REPLACE_time-scale`, `REPLACE_times`, `REPLACE_timescale`, `REPLACE_timescales`, `REPLACE_timetable`, `REPLACE_timetables`, `REPLACE_timetabling`, `REPLACE_timidity`, `REPLACE_timing`, `REPLACE_tinge`, `REPLACE_tired`, `REPLACE_tires`, `REPLACE_title`, `REPLACE_to`, `REPLACE_tobacco`, `REPLACE_today`, `REPLACE_together`, `REPLACE_toiletries`, `REPLACE_told`, `REPLACE_tolerability`, `REPLACE_tomato`, `REPLACE_tomatoes`, `REPLACE_tomorrow`, `REPLACE_tonight`, `REPLACE_too`, `REPLACE_took`, `REPLACE_tool`, `REPLACE_toolbox`, `REPLACE_tooth`, `REPLACE_toothbrush`, `REPLACE_top`, `REPLACE_topic`, `REPLACE_topics`, `REPLACE_topography`, `REPLACE_tories`, `REPLACE_tornado`, `REPLACE_tornadoes`, `REPLACE_torpedoed`, `REPLACE_torso`, `REPLACE_torsos`, `REPLACE_total`, `REPLACE_totaled`, `REPLACE_totaling`, `REPLACE_totality`, `REPLACE_totalled`, `REPLACE_totalling`, `REPLACE_totals`, `REPLACE_touch-screen`, `REPLACE_touchscreen`, `REPLACE_touchscreens`, `REPLACE_toughness`, `REPLACE_tourist`, `REPLACE_tourists`, `REPLACE_towards`, `REPLACE_towel`, `REPLACE_towels`, `REPLACE_tower`, `REPLACE_towering`, `REPLACE_towers`, `REPLACE_townspeople`, `REPLACE_toxicity`, `REPLACE_toxicology`, `REPLACE_toy`, `REPLACE_toying`, 
`REPLACE_toys`, `REPLACE_trade-off`, `REPLACE_trade-offs`, `REPLACE_tradeoff`, `REPLACE_tradeoffs`, `REPLACE_tradesmen`, `REPLACE_traditional`, `REPLACE_tragedies`, `REPLACE_tragedy`, `REPLACE_tragicomedy`, `REPLACE_train`, `REPLACE_trained`, `REPLACE_training`, `REPLACE_trains`, `REPLACE_trajectory`, `REPLACE_transition`, `REPLACE_transitioning`, `REPLACE_transitions`, `REPLACE_transparency`, `REPLACE_transport`, `REPLACE_trauma`, `REPLACE_traumas`, `REPLACE_traumatised`, `REPLACE_travel`, `REPLACE_traveled`, `REPLACE_traveling`, `REPLACE_travelled`, `REPLACE_travelling`, `REPLACE_travels`, `REPLACE_treachery`, `REPLACE_treasuries`, `REPLACE_treasury`, `REPLACE_treaties`, `REPLACE_treaty`, `REPLACE_trees`, `REPLACE_triage`, `REPLACE_trial`, `REPLACE_trials`, `REPLACE_tribesman`, `REPLACE_tribesmen`, `REPLACE_tributaries`, `REPLACE_tributary`, `REPLACE_triceps`, `REPLACE_tried`, `REPLACE_tries`, `REPLACE_trilogy`, `REPLACE_trim`, `REPLACE_trip`, `REPLACE_trips`, `REPLACE_trivia`, `REPLACE_trophies`, `REPLACE_trophy`, `REPLACE_trouble`, `REPLACE_troubled`, `REPLACE_troubleshoot`, `REPLACE_troubleshooting`, `REPLACE_truancy`, `REPLACE_true`, `REPLACE_truth`, `REPLACE_try`, `REPLACE_trying`, `REPLACE_tsunami`, `REPLACE_tuber`, `REPLACE_tuberculosis`, `REPLACE_tug-of-war`, `REPLACE_tummy`, `REPLACE_tunnel`, `REPLACE_tunnels`, `REPLACE_turf`, `REPLACE_turn`, `REPLACE_turned`, `REPLACE_turns`, `REPLACE_tuxedo`, `REPLACE_twice`, `REPLACE_two`, `REPLACE_two-time`, `REPLACE_two-times`, `REPLACE_tying`, `REPLACE_tympanum`, `REPLACE_type`, `REPLACE_types`, `REPLACE_typhoon`, `REPLACE_typography`, `REPLACE_tyranny`, `REPLACE_ultimatum`, `REPLACE_unable`, `REPLACE_under`, `REPLACE_underrepresented`, `REPLACE_understand`, `REPLACE_understanding`, `REPLACE_understands`, `REPLACE_understood`, `REPLACE_underused`, `REPLACE_underwhelming`, `REPLACE_uniqueness`, `REPLACE_units`, `REPLACE_unity`, `REPLACE_universality`, `REPLACE_universities`, `REPLACE_university`, `REPLACE_unravel`, `REPLACE_unraveled`, `REPLACE_unraveling`, `REPLACE_unravelled`, `REPLACE_unravelling`, `REPLACE_unrealised`, `REPLACE_unstuck`, `REPLACE_until`, `REPLACE_up`, `REPLACE_update`, `REPLACE_updated`, `REPLACE_updates`, `REPLACE_updating`, `REPLACE_upload`, `REPLACE_uploaded`, `REPLACE_uploading`, `REPLACE_upped`, `REPLACE_upping`, `REPLACE_uprated`, `REPLACE_ups`, `REPLACE_urgency`, `REPLACE_us`, `REPLACE_use`, `REPLACE_used`, `REPLACE_useful`, `REPLACE_usefulness`, `REPLACE_user`, `REPLACE_users`, `REPLACE_uses`, `REPLACE_using`, `REPLACE_usual`, `REPLACE_usually`, `REPLACE_uterus`, `REPLACE_utilise`, `REPLACE_utilities`, `REPLACE_utility`, `REPLACE_vacancies`, `REPLACE_vacancy`, `REPLACE_vacation`, `REPLACE_vacuum`, `REPLACE_vacuums`, `REPLACE_validity`, `REPLACE_valuable`, `REPLACE_van`, `REPLACE_vandalised`, `REPLACE_vanity`, `REPLACE_vapor`, `REPLACE_vapors`, `REPLACE_variability`, `REPLACE_varieties`, `REPLACE_variety`, `REPLACE_various`, `REPLACE_varsity`, `REPLACE_vegetables`, `REPLACE_velocity`, `REPLACE_verities`, `REPLACE_verity`, `REPLACE_versatility`, `REPLACE_version`, `REPLACE_versions`, `REPLACE_very`, `REPLACE_via`, `REPLACE_viability`, `REPLACE_vicinity`, `REPLACE_victimised`, `REPLACE_victories`, `REPLACE_victory`, `REPLACE_video`, `REPLACE_videography`, `REPLACE_videos`, `REPLACE_videotape`, `REPLACE_videotaped`, `REPLACE_videotapes`, `REPLACE_videotaping`, `REPLACE_view`, `REPLACE_viewed`, `REPLACE_views`, `REPLACE_villainy`, `REPLACE_virtuosi`, `REPLACE_virtuoso`, `REPLACE_virus`, `REPLACE_viruses`, 
`REPLACE_viscera`, `REPLACE_visibility`, `REPLACE_visionaries`, `REPLACE_visionary`, `REPLACE_visit`, `REPLACE_visited`, `REPLACE_visiting`, `REPLACE_visualised`, `REPLACE_vitality`, `REPLACE_vocabulary`, `REPLACE_volatility`, `REPLACE_volcano`, `REPLACE_volcanoes`, `REPLACE_voluntary`, `REPLACE_vortex`, `REPLACE_vortexes`, `REPLACE_voters`, `REPLACE_vox`, `REPLACE_vulgarity`, `REPLACE_vulnerabilities`, `REPLACE_vulnerability`, `REPLACE_wait`, `REPLACE_waited`, `REPLACE_waiting`, `REPLACE_waitress`, `REPLACE_waitresses`, `REPLACE_waits`, `REPLACE_wake`, `REPLACE_waking`, `REPLACE_walk`, `REPLACE_walked`, `REPLACE_walking`, `REPLACE_wallabies`, `REPLACE_wallaby`, `REPLACE_walrus`, `REPLACE_wander`, `REPLACE_want`, `REPLACE_wanted`, `REPLACE_wanting`, `REPLACE_wants`, `REPLACE_war`, `REPLACE_warehouseman`, `REPLACE_warm`, `REPLACE_warnings`, `REPLACE_warranties`, `REPLACE_warranty`, `REPLACE_warring`, `REPLACE_wars`, `REPLACE_was`, `REPLACE_wash-outs`, `REPLACE_washout`, `REPLACE_waste`, `REPLACE_watch`, `REPLACE_watched`, `REPLACE_watching`, `REPLACE_water`, `REPLACE_waterman`, `REPLACE_watermen`, `REPLACE_way`, `REPLACE_ways`, `REPLACE_we`, `REPLACE_weakness`, `REPLACE_weaknesses`, `REPLACE_wear`, `REPLACE_wearing`, `REPLACE_weather`, `REPLACE_weatherman`, `REPLACE_weathermen`, `REPLACE_weaved`, `REPLACE_weaving`, `REPLACE_website`, `REPLACE_websites`, `REPLACE_wee`, `REPLACE_weed`, `REPLACE_week`, `REPLACE_weekdays`, `REPLACE_weekend`, `REPLACE_weekends`, `REPLACE_weekly`, `REPLACE_weeks`, `REPLACE_weight`, `REPLACE_weird`, `REPLACE_well`, `REPLACE_went`, `REPLACE_were`, `REPLACE_werewolf`, `REPLACE_what`, `REPLACE_when`, `REPLACE_whenever`, `REPLACE_where`, `REPLACE_whether`, `REPLACE_which`, `REPLACE_while`, `REPLACE_whimsy`, `REPLACE_whingeing`, `REPLACE_whisky`, `REPLACE_whiz`, `REPLACE_whizzes`, `REPLACE_who`, `REPLACE_whole`, `REPLACE_whom`, `REPLACE_whoosh`, `REPLACE_whose`, `REPLACE_why`, `REPLACE_wife`, `REPLACE_wilderness`, `REPLACE_will`, `REPLACE_willed`, `REPLACE_willing`, `REPLACE_wills`, `REPLACE_win`, `REPLACE_wind`, `REPLACE_winding`, `REPLACE_winds`, `REPLACE_wineries`, `REPLACE_winery`, `REPLACE_winner`, `REPLACE_winning`, `REPLACE_wins`, `REPLACE_winter`, `REPLACE_wise`, `REPLACE_wised`, `REPLACE_wish`, `REPLACE_with`, `REPLACE_within`, `REPLACE_without`, `REPLACE_wives`, `REPLACE_woke`, `REPLACE_wolf`, `REPLACE_wolves`, `REPLACE_woman`, `REPLACE_women`, `REPLACE_won`, `REPLACE_wonder`, `REPLACE_wondered`, `REPLACE_wonderful`, `REPLACE_wondering`, `REPLACE_word`, `REPLACE_words`, `REPLACE_wore`, `REPLACE_work`, `REPLACE_work-life`, `REPLACE_worked`, `REPLACE_workers`, `REPLACE_working`, `REPLACE_workman`, `REPLACE_workmen`, `REPLACE_works`, `REPLACE_world`, `REPLACE_worried`, `REPLACE_worries`, `REPLACE_worry`, `REPLACE_worrying`, `REPLACE_worse`, `REPLACE_worship`, `REPLACE_worshiping`, `REPLACE_worshipping`, `REPLACE_worthy`, `REPLACE_would`, `REPLACE_wound`, `REPLACE_wounded`, `REPLACE_wounding`, `REPLACE_wounds`, `REPLACE_woven`, `REPLACE_wrap`, `REPLACE_wrapped`, `REPLACE_wrapping`, `REPLACE_write`, `REPLACE_writing`, `REPLACE_written`, `REPLACE_wrong`, `REPLACE_wrote`, `REPLACE_yachtsman`, `REPLACE_year`, `REPLACE_years`, `REPLACE_yen`, `REPLACE_yesterday`, `REPLACE_yet`, `REPLACE_you`, `REPLACE_young`, `REPLACE_younger`, `REPLACE_your`, `REPLACE_yourself`, `REPLACE_zig-zag`, `REPLACE_zig-zagged`, `REPLACE_zoology`, `REPLACE_ensnare`, `REPLACE_enigma`, `APPEND_Ministry`, `REPLACE_iron`, `APPEND_Pasquesi`, `REPLACE_deconstructing`, `APPEND_Estimates`, 
`APPEND_applications`, `REPLACE_under-performed`, `REPLACE_urbanised`, `APPEND_appreciated`, `APPEND_Ritz-Carltons`, `REPLACE_cornea`, `APPEND_murder`, `REPLACE_hobo`, `APPEND_needs`, `APPEND_west`, `APPEND_Lunt`, `REPLACE_under-reporting`, `REPLACE_sentencing`, `APPEND_IDs`, `REPLACE_commercialise`, `REPLACE_tailgate`, `APPEND_collaboration`, `APPEND_23-44`, `APPEND_nice`, `APPEND_capital`, `APPEND_voices`, `APPEND_Moreover`, `APPEND_attractive`, `REPLACE_reticulum`, `APPEND_reminded`, `REPLACE_airbrush`, `REPLACE_festival`, `REPLACE_tax`, `APPEND_Messi`, `REPLACE_shares`, `APPEND_critical`, `REPLACE_blended`, `REPLACE_instilled`, `APPEND_tie`, `APPEND_ahead`, `APPEND_Recently`, `REPLACE_corona`, `APPEND_mouth`, `APPEND_enthusiastic`, `REPLACE_knelt`, `REPLACE_oxygenates`, `REPLACE_floodlights`, `REPLACE_marshalling`, `APPEND_2012`, `REPLACE_nationalise`, `REPLACE_levity`, `REPLACE_mahogany`, `REPLACE_plow`, `APPEND_requires`, `REPLACE_insignia`, `REPLACE_dervishes`, `REPLACE_diagnoses`, `REPLACE_marksman`, `REPLACE_thrived`, `REPLACE_endoscopy`, `REPLACE_loses`, `APPEND_markets`, `APPEND_rode`, `REPLACE_pooh-poohed`, `APPEND_Greenford`, `REPLACE_mainstreaming`, `APPEND_district`, `REPLACE_crosses`, `APPEND_heading`, `APPEND_Wilson`, `APPEND_unveiling`, `APPEND_GP`, `REPLACE_uptake`, `APPEND_Davis`, `APPEND_restored`, `APPEND_550`, `REPLACE_jackasses`, `REPLACE_fallacy`, `REPLACE_dive-bomb`, `REPLACE_toothbrushes`, `APPEND_'Neill`, `APPEND_flying`, `REPLACE_estuaries`, `APPEND_Ranch`, `REPLACE_satirising`, `REPLACE_barbecued`, `REPLACE_inhomogeneities`, `REPLACE_gnawed`, `REPLACE_soil`, `REPLACE_workings`, `REPLACE_plurality`, `APPEND_synchronization`, `REPLACE_self-organizing`, `REPLACE_acclimate`, `APPEND_forged`, `APPEND_eliminate`, `APPEND_P`, `APPEND_t`, `APPEND_holes`, `APPEND_breezy`, `APPEND_statement`, `REPLACE_workbench`, `APPEND_culprits`, `APPEND_Board`, `APPEND_treatment`, `REPLACE_neuroblastoma`, `APPEND_preparing`, `REPLACE_digitising`, `REPLACE_slimmed`, `REPLACE_co-authoring`, `APPEND_police`, `APPEND_'Artaix`, `REPLACE_swans`, `REPLACE_kibbutzim`, `REPLACE_traps`, `REPLACE_wash`, `APPEND_forces`, `APPEND_ready`, `REPLACE_constabulary`, `REPLACE_overused`, `REPLACE_crowds`, `REPLACE_cleared`, `REPLACE_sledging`, `REPLACE_recommending`, `APPEND_Bondi`, `REPLACE_libidos`, `REPLACE_libretto`, `REPLACE_reintegrating`, `APPEND_shock`, `APPEND_airlines`, `REPLACE_aligning`, `APPEND_extends`, `APPEND_House`, `APPEND_seal`, `APPEND_Bureau`, `APPEND_air`, `REPLACE_radicalising`, `TRANSFORM_AGREEMENT_PLURAL`, `TRANSFORM_AGREEMENT_SINGULAR`, `TRANSFORM_CASE_CAPITAL`, `TRANSFORM_CASE_CAPITAL_1`, `TRANSFORM_CASE_LOWER`, `TRANSFORM_CASE_UPPER`, `TRANSFORM_CASE_UPPER_-1`, `TRANSFORM_VERB_VBD_VB`, `TRANSFORM_VERB_VBD_VBG`, `TRANSFORM_VERB_VBD_VBN`, `TRANSFORM_VERB_VBD_VBZ`, `TRANSFORM_VERB_VBG_VB`, `TRANSFORM_VERB_VBG_VBD`, `TRANSFORM_VERB_VBG_VBN`, `TRANSFORM_VERB_VBG_VBZ`, `TRANSFORM_VERB_VBN_VB`, `TRANSFORM_VERB_VBN_VBD`, `TRANSFORM_VERB_VBN_VBG`, `TRANSFORM_VERB_VBN_VBZ`, `TRANSFORM_VERB_VBZ_VB`, `TRANSFORM_VERB_VBZ_VBD`, `TRANSFORM_VERB_VBZ_VBG`, `TRANSFORM_VERB_VBZ_VBN`, `TRANSFORM_VERB_VB_VBD`, `TRANSFORM_VERB_VB_VBG`, `TRANSFORM_VERB_VB_VBN`, `TRANSFORM_VERB_VB_VBZ` |
</details>
### Accuracy
| Type | Score |
| --- | --- |
| `TAG_ACC` | 96.39 |
|
coltranetorres/wav2vec2-base-finetuned-ks
|
coltranetorres
| 2023-02-18T13:08:32Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"dataset:superb",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2023-02-16T02:52:17Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- superb
metrics:
- accuracy
model-index:
- name: wav2vec2-base-finetuned-ks
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-finetuned-ks
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the superb dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1761
- Accuracy: 0.6209
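A minimal inference sketch (an assumption, not from the card: this presumes the standard `transformers` audio-classification head, and `keyword.wav` is a placeholder for a 16 kHz mono clip):
```python
from transformers import pipeline

# "keyword.wav" is a hypothetical input file; the base model expects 16 kHz mono audio.
classifier = pipeline("audio-classification", model="coltranetorres/wav2vec2-base-finetuned-ks")
print(classifier("keyword.wav"))
```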
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.2158 | 1.0 | 40 | 2.1761 | 0.6209 |
| 2.1251 | 2.0 | 80 | 2.1767 | 0.6209 |
| 2.1362 | 3.0 | 120 | 2.1850 | 0.6209 |
| 0.0 | 4.0 | 160 | nan | 0.0384 |
| 0.0 | 5.0 | 200 | nan | 0.0384 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.13.1+cu116
- Datasets 1.14.0
- Tokenizers 0.10.3
|
ZhihongDeng/a2c-AntBulletEnv-v0
|
ZhihongDeng
| 2023-02-18T13:03:45Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-02-18T13:02:40Z |
---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 1559.04 +/- 71.15
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of an **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; check the repo's file list):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# load_from_hub returns a local path to the downloaded checkpoint.
checkpoint = load_from_hub(repo_id="ZhihongDeng/a2c-AntBulletEnv-v0", filename="a2c-AntBulletEnv-v0.zip")
model = A2C.load(checkpoint)
```
|
shirshakach/function-arg-swap-model-148k-files-365k-samples
|
shirshakach
| 2023-02-18T12:44:45Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-02-07T08:58:21Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
model-index:
- name: function-arg-swap-model-148k-files-365k-samples
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# function-arg-swap-model-148k-files-365k-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4822
- Accuracy: 0.8850
- Precision: 0.8850
- Recall: 0.8842
- F1 score: 0.8846
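A hedged usage sketch (assumptions, not documented on the card: the classifier flags swapped function arguments in raw source snippets, and the label names must be inspected from the output):
```python
from transformers import pipeline

# The input snippet is a made-up example; check the returned label names yourself.
detector = pipeline("text-classification", model="shirshakach/function-arg-swap-model-148k-files-365k-samples")
print(detector("memcpy(src, dst, n)"))
```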
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
mtlulka/ppo-CartPole-v1
|
mtlulka
| 2023-02-18T12:41:44Z | 0 | 0 | null |
[
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-02-18T12:41:36Z |
---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -208.74 +/- 108.02
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo-CartPole-v1',
 'seed': 1,
 'torch_deterministic': True,
 'cuda': True,
 'track': False,
 'wandb_project_name': 'cleanRL',
 'wandb_entity': None,
 'capture_video': False,
 'env_id': 'LunarLander-v2',
 'total_timesteps': 50000,
 'learning_rate': 0.001,
 'num_envs': 8,
 'num_steps': 1024,
 'anneal_lr': True,
 'gae': True,
 'gamma': 0.9,
 'gae_lambda': 0.95,
 'num_minibatches': 4,
 'update_epochs': 10,
 'norm_adv': True,
 'clip_coef': 0.2,
 'clip_vloss': True,
 'ent_coef': 0.0,
 'vf_coef': 0.5,
 'max_grad_norm': 0.5,
 'target_kl': None,
 'repo_id': 'mtlulka/ppo-CartPole-v1',
 'batch_size': 8192,
 'minibatch_size': 2048}
```
|
hpoddar/ppo-LunarLander-v2-hbCourse
|
hpoddar
| 2023-02-18T12:22:33Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-02-18T12:22:04Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 223.89 +/- 19.11
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; check the repo's file list):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# load_from_hub returns a local path to the downloaded checkpoint.
checkpoint = load_from_hub(repo_id="hpoddar/ppo-LunarLander-v2-hbCourse", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
jondister/Soccer_JD
|
jondister
| 2023-02-18T12:22:29Z | 4 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] |
reinforcement-learning
| 2023-02-18T12:22:20Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
library_name: ml-agents
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Write your model_id: jondister/Soccer_JD
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
AntiSquid/Reinforce-Cartpole-v1
|
AntiSquid
| 2023-02-18T11:36:35Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-02-18T11:36:26Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Cartpole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
Lxn3r/tortoise-tts-v2
|
Lxn3r
| 2023-02-18T11:24:09Z | 0 | 0 | null |
[
"arxiv:2102.12092",
"arxiv:2102.09672",
"arxiv:2106.07889",
"region:us"
] | null | 2023-02-18T11:06:08Z |
# TorToiSe
Tortoise is a text-to-speech program built with the following priorities:
1. Strong multi-voice capabilities.
2. Highly realistic prosody and intonation.
This repo contains all the code needed to run Tortoise TTS in inference mode.
### New features
#### v2.1; 2022/5/2
- Added ability to produce totally random voices.
- Added ability to download voice conditioning latent via a script, and then use a user-provided conditioning latent.
- Added ability to use your own pretrained models.
- Refactored directory structures.
- Performance improvements & bug fixes.
## What's in a name?
I'm naming my speech-related repos after Mojave desert flora and fauna. Tortoise is a bit tongue-in-cheek: this model
is insanely slow. It leverages both an autoregressive decoder **and** a diffusion decoder, both of which are known for their low
sampling rates. On a K80, expect to generate a medium-sized sentence every 2 minutes.
## Demos
See [this page](http://nonint.com/static/tortoise_v2_examples.html) for a large list of example outputs.
## Usage guide
### Colab
Colab is the easiest way to try this out. I've put together a notebook you can use here:
https://colab.research.google.com/drive/1wVVqUPqwiDBUVeWWOUNglpGhU3hg_cbR?usp=sharing
### Installation
If you want to use this on your own computer, you must have an NVIDIA GPU. First, install pytorch using these
instructions: [https://pytorch.org/get-started/locally/](https://pytorch.org/get-started/locally/)
Then:
```shell
git clone https://github.com/neonbjb/tortoise-tts.git
cd tortoise-tts
python setup.py install
```
### do_tts.py
This script allows you to speak a single phrase with one or more voices.
```shell
python tortoise/do_tts.py --text "I'm going to speak this" --voice random --preset fast
```
### read.py
This script provides tools for reading large amounts of text.
```shell
python tortoise/read.py --textfile <your text to be read> --voice random
```
This will break up the textfile into sentences, and then convert them to speech one at a time. It will output a series
of spoken clips as they are generated. Once all the clips are generated, it will combine them into a single file and
output that as well.
Sometimes Tortoise screws up an output. You can re-generate any bad clips by re-running `read.py` with the --regenerate
argument.
### API
Tortoise can be used programmatically, like so (the imports are assumptions based on the repo's module layout):
```python
from tortoise import api
from tortoise.utils import audio

# clips_paths: your own list of reference WAV paths
reference_clips = [audio.load_audio(p, 22050) for p in clips_paths]
tts = api.TextToSpeech()
pcm_audio = tts.tts_with_preset("your text here", reference_clips, preset='fast')
```
## Voice customization guide
Tortoise was specifically trained to be a multi-speaker model. It accomplishes this by consulting reference clips.
These reference clips are recordings of a speaker that you provide to guide speech generation. These clips are used to determine many properties of the output, such as the pitch and tone of the voice, speaking speed, and even speaking defects like a lisp or stuttering. The reference clip is also used to determine non-voice related aspects of the audio output like volume, background noise, recording quality and reverb.
### Random voice
I've included a feature which randomly generates a voice. These voices don't actually exist and will be random every time you run
it. The results are quite fascinating and I recommend you play around with it!
You can use the random voice by passing in 'random' as the voice name. Tortoise will take care of the rest.
For those in the ML space: this is created by projecting a random vector onto the voice conditioning latent space.
### Provided voices
This repo comes with several pre-packaged voices. You will be familiar with many of them. :)
Most of the provided voices were not found in the training set. Experimentally, it seems that voices from the training set
produce more realistic outputs than those outside of the training set. Any voice prepended with "train" came from the
training set.
### Adding a new voice
To add new voices to Tortoise, you will need to do the following:
1. Gather audio clips of your speaker(s). Good sources are YouTube interviews (you can use youtube-dl to fetch the audio), audiobooks or podcasts. Guidelines for good clips are in the next section.
2. Cut your clips into ~10 second segments. You want at least 3 clips. More is better, but I only experimented with up to 5 in my testing.
3. Save the clips as WAV files in floating-point format with a 22,050 Hz sample rate.
4. Create a subdirectory in voices/
5. Put your clips in that subdirectory.
6. Run tortoise utilities with --voice=<your_subdirectory_name>.
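The equivalent through the Python API, as a sketch (the voice name `myvoice` and the file layout are placeholders):
```python
import glob

from tortoise import api
from tortoise.utils.audio import load_audio

# Load the 22,050 Hz clips saved in steps 3-5 above.
clips = [load_audio(p, 22050) for p in glob.glob("voices/myvoice/*.wav")]
tts = api.TextToSpeech()
pcm_audio = tts.tts_with_preset("Testing my new voice.", clips, preset='fast')
```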
### Picking good reference clips
As mentioned above, your reference clips have a profound impact on the output of Tortoise. Following are some tips for picking
good clips:
1. Avoid clips with background music, noise or reverb. These clips were removed from the training dataset. Tortoise is unlikely to do well with them.
2. Avoid speeches. These generally have distortion caused by the amplification system.
3. Avoid clips from phone calls.
4. Avoid clips that have excessive stuttering, stammering or words like "uh" or "like" in them.
5. Try to find clips that are spoken in the way you wish your output to sound. For example, if you want to hear your target voice read an audiobook, try to find clips of them reading a book.
6. The text being spoken in the clips does not matter, but diverse text does seem to perform better.
## Advanced Usage
### Generation settings
Tortoise is primarily an autoregressive decoder model combined with a diffusion model. Both of these have a lot of knobs
that can be turned that I've abstracted away for the sake of ease of use. I did this by generating thousands of clips using
various permutations of the settings and using a metric for voice realism and intelligibility to measure their effects. I've
set the defaults to the best overall settings I was able to find. For specific use-cases, it might be effective to play with
these settings (and it's very likely that I missed something!)
These settings are not available in the normal scripts packaged with Tortoise. They are available, however, in the API. See
```api.tts``` for a full list.
### Prompt engineering
Some people have discovered that it is possible to do prompt engineering with Tortoise! For example, you can evoke emotion
by including things like "I am really sad," before your text. I've built an automated redaction system that you can use to
take advantage of this. It works by attempting to redact any text in the prompt surrounded by brackets. For example, the
prompt "\[I am really sad,\] Please feed me." will only speak the words "Please feed me" (with a sad tonality).
### Playing with the voice latent
Tortoise ingests reference clips by feeding them individually through a small submodel that produces a point latent,
then taking the mean of all of the produced latents. The experimentation I have done has indicated that these point latents
are quite expressive, affecting everything from tone to speaking rate to speech abnormalities.
This lends itself to some neat tricks. For example, you can feed two different voices to Tortoise and it will output
what it thinks the "average" of those two voices sounds like.
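A sketch of that trick (the packaged voice names are assumptions; pooling both voices' clips lets Tortoise average their conditioning latents):
```python
from tortoise import api
from tortoise.utils.audio import load_voice

tts = api.TextToSpeech()
clips_a, _ = load_voice('tom')    # assumed packaged voices
clips_b, _ = load_voice('angie')
pcm_audio = tts.tts_with_preset("Hello there.", clips_a + clips_b, preset='fast')
```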
#### Generating conditioning latents from voices
Use the script `get_conditioning_latents.py` to extract conditioning latents for a voice you have installed. This script
will dump the latents to a .pth pickle file. The file will contain a single tuple, (autoregressive_latent, diffusion_latent).
Alternatively, use api.TextToSpeech.get_conditioning_latents() to fetch the latents.
#### Using raw conditioning latents to generate speech
After you've played with them, you can use them to generate speech by creating a subdirectory in voices/ with a single
".pth" file containing the pickled conditioning latents as a tuple (autoregressive_latent, diffusion_latent).
### Send me feedback!
Probabilistic models like Tortoise are best thought of as an "augmented search" - in this case, through the space of possible
utterances of a specific string of text. The impact of community involvement in perusing these spaces (such as is being done with
GPT-3 or CLIP) has really surprised me. If you find something neat that you can do with Tortoise that isn't documented here,
please report it to me! I would be glad to publish it to this page.
## Tortoise-detect
Out of concerns that this model might be misused, I've built a classifier that tells the likelihood that an audio clip
came from Tortoise.
This classifier can be run on any computer; usage is as follows:
```commandline
python tortoise/is_this_from_tortoise.py --clip=<path_to_suspicious_audio_file>
```
This model has 100% accuracy on the contents of the results/ and voices/ folders in this repo. Still, treat this classifier
as a "strong signal". Classifiers can be fooled and it is likewise not impossible for this classifier to exhibit false
positives.
## Model architecture
Tortoise TTS is inspired by OpenAI's DALLE, applied to speech data and using a better decoder. It is made up of 5 separate
models that work together. I've assembled a write-up of the system architecture here:
[https://nonint.com/2022/04/25/tortoise-architectural-design-doc/](https://nonint.com/2022/04/25/tortoise-architectural-design-doc/)
## Training
These models were trained on my "homelab" server with 8 RTX 3090s over the course of several months. They were trained on a dataset consisting of
~50k hours of speech data, most of which was transcribed by [ocotillo](http://www.github.com/neonbjb/ocotillo). Training was done on my own
[DLAS](https://github.com/neonbjb/DL-Art-School) trainer.
I currently do not have plans to release the training configurations or methodology. See the next section.
## Ethical Considerations
Tortoise v2 works considerably better than I had planned. When I began hearing some of the outputs of the last few versions, I began
wondering whether or not I had an ethically unsound project on my hands. The ways in which a voice-cloning text-to-speech system
could be misused are many. It doesn't take much creativity to think up how.
After some thought, I have decided to go forward with releasing this. Following are the reasons for this choice:
1. It is primarily good at reading books and speaking poetry. Other forms of speech do not work well.
2. It was trained on a dataset which does not have the voices of public figures. While it will attempt to mimic these voices if they are provided as references, it does not do so in such a way that most humans would be fooled.
3. The above points could likely be resolved by scaling up the model and the dataset. For this reason, I am currently withholding details on how I trained the model, pending community feedback.
4. I am releasing a separate classifier model which will tell you whether a given audio clip was generated by Tortoise or not. See `tortoise-detect` above.
5. If I, a tinkerer with a BS in computer science and a ~$15k computer, can build this, then any motivated corporation or state can as well. I would prefer that it be in the open and everyone know the kinds of things ML can do.
### Diversity
The diversity expressed by ML models is strongly tied to the datasets they were trained on.
Tortoise was trained primarily on a dataset consisting of audiobooks. I made no effort to
balance diversity in this dataset. For this reason, Tortoise will be particularly poor at generating the voices of minorities
or of people who speak with strong accents.
## Looking forward
Tortoise v2 is about as good as I think I can do in the TTS world with the resources I have access to. A phenomenon that happens when
training very large models is that as parameter count increases, the communication bandwidth needed to support distributed training
of the model increases multiplicatively. On enterprise-grade hardware, this is not an issue: GPUs are attached together with
exceptionally wide buses that can accommodate this bandwidth. I cannot afford enterprise hardware, though, so I am stuck.
I want to mention here
that I think Tortoise could be a **lot** better. The three major components of Tortoise are either vanilla Transformer Encoder stacks
or Decoder stacks. Both of these types of models have a rich experimental history with scaling in the NLP realm. I see no reason
to believe that the same is not true of TTS.
The largest model in Tortoise v2 is considerably smaller than GPT-2 large. It is 20x smaller than the original DALLE transformer.
Imagine what a TTS model trained at or near GPT-3 or DALLE scale could achieve.
If you are an ethical organization with computational resources to spare that is interested in seeing what this model could do
if properly scaled out, please reach out to me! I would love to collaborate on this.
## Acknowledgements
This project has garnered more praise than I expected. I am standing on the shoulders of giants, though, and I want to
credit a few of the amazing folks in the community that have helped make this happen:
- Hugging Face, who wrote the GPT model and the generate API used by Tortoise, and who hosts the model weights.
- [Ramesh et al](https://arxiv.org/pdf/2102.12092.pdf) who authored the DALLE paper, which is the inspiration behind Tortoise.
- [Nichol and Dhariwal](https://arxiv.org/pdf/2102.09672.pdf) who authored the (revision of) the code that drives the diffusion model.
- [Jang et al](https://arxiv.org/pdf/2106.07889.pdf) who developed and open-sourced univnet, the vocoder this repo uses.
- [lucidrains](https://github.com/lucidrains) who writes awesome open source pytorch models, many of which are used here.
- [Patrick von Platen](https://huggingface.co/patrickvonplaten) whose guides on setting up wav2vec were invaluable to building my dataset.
## Notice
Tortoise was built entirely by me using my own hardware. My employer was not involved in any facet of Tortoise's development.
If you use this repo or the ideas therein for your research, please cite it! A BibTeX entry can be found in the right pane on GitHub.
|
ZhihongDeng/ppo-SnowballTarget
|
ZhihongDeng
| 2023-02-18T11:19:09Z | 2 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2023-02-18T11:19:03Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
library_name: ml-agents
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget
2. Write your model_id: ZhihongDeng/ppo-SnowballTarget
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
parsasam/q-Taxi-v3
|
parsasam
| 2023-02-18T11:13:37Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-02-18T11:07:05Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="parsasam/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
parsasam/q-FrozenLake-v1-4x4-noSlippery
|
parsasam
| 2023-02-18T11:05:56Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-02-18T11:05:53Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="parsasam/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
AnAmbitiousMonk/ppo-LunarLander-v11
|
AnAmbitiousMonk
| 2023-02-18T10:54:50Z | 4 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-02-18T10:54:25Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 278.66 +/- 18.45
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; check the repo's file list):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# load_from_hub returns a local path to the downloaded checkpoint.
checkpoint = load_from_hub(repo_id="AnAmbitiousMonk/ppo-LunarLander-v11", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
LarryAIDraw/any_chikmix_likeaom2sfw
|
LarryAIDraw
| 2023-02-18T10:50:19Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-02-18T09:52:22Z |
---
license: creativeml-openrail-m
---
|
Yelinz/q-Taxi-v3-v1
|
Yelinz
| 2023-02-18T10:44:37Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-02-18T10:10:50Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.706732347314747
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
Evaluating with Gymnasium gives a weird score: 814.92 +/- 28.41.
## Usage
```python
model = load_from_hub(repo_id="Yelinz/q-Taxi-v3-v1", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
pchelaEb/t5-russian-spell
|
pchelaEb
| 2023-02-18T10:34:42Z | 14 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-02-16T14:52:27Z |
---
tags:
- generated_from_trainer
model-index:
- name: t5-russian-spell
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-russian-spell
This model is a fine-tuned version of [UrukHan/t5-russian-summarization](https://huggingface.co/UrukHan/t5-russian-summarization) on an unknown dataset.
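A hedged usage sketch (the task, Russian spelling correction, is inferred from the model name; the sample sentence is a made-up misspelling):
```python
from transformers import pipeline

# "как дила" is a deliberate misspelling of "как дела".
speller = pipeline("text2text-generation", model="pchelaEb/t5-russian-spell")
print(speller("Привет, как дила?"))
```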
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
pixelbangbang/esg-bank-setfit-v1
|
pixelbangbang
| 2023-02-18T10:28:33Z | 3 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"mpnet",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] |
text-classification
| 2023-02-18T10:28:09Z |
---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# esg-bank-setfit-v1
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("pixelbangbang/esg-bank-setfit-v1")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
KoRiF/ppo-Huggy
|
KoRiF
| 2023-02-18T10:12:06Z | 6 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-01-22T13:55:51Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
library_name: ml-agents
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Write your model_id: KoRiF/ppo-Huggy
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
mili7522/a2c-AntBulletEnv-v0
|
mili7522
| 2023-02-18T09:38:45Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-02-18T09:37:37Z |
---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 1673.64 +/- 163.88
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of an **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; check the repo's file list):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# load_from_hub returns a local path to the downloaded checkpoint.
checkpoint = load_from_hub(repo_id="mili7522/a2c-AntBulletEnv-v0", filename="a2c-AntBulletEnv-v0.zip")
model = A2C.load(checkpoint)
```
|
Ragav/wav2vec2-tk
|
Ragav
| 2023-02-18T09:32:39Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-02-17T14:03:37Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
metrics:
- wer
model-index:
- name: wav2vec2-tk
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: common_voice
type: common_voice
config: tr
split: test
args: tr
metrics:
- name: Wer
type: wer
value: 0.6686753140639363
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-tk
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6088
- Wer: 0.6687
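A minimal transcription sketch (`sample.wav` is a placeholder; the card's config suggests 16 kHz Turkish speech):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="Ragav/wav2vec2-tk")
print(asr("sample.wav"))  # returns {"text": ...}
```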
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 2.0702 | 0.29 | 1000 | 1.4170 | 0.9376 |
| 1.1914 | 0.58 | 2000 | 1.0082 | 0.8331 |
| 0.8249 | 0.86 | 3000 | 0.6088 | 0.6687 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.10.0+cu113
- Datasets 1.18.3
- Tokenizers 0.13.2
|
Honza/a2c-AntBulletEnv-v0
|
Honza
| 2023-02-18T09:31:02Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-02-16T21:29:10Z |
---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 958.87 +/- 297.83
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of an **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; check the repo's file list):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# load_from_hub returns a local path to the downloaded checkpoint.
checkpoint = load_from_hub(repo_id="Honza/a2c-AntBulletEnv-v0", filename="a2c-AntBulletEnv-v0.zip")
model = A2C.load(checkpoint)
```
|
koankoan/lilcyborg
|
koankoan
| 2023-02-18T09:26:34Z | 3 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"text-to-image",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-02-18T09:24:56Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
widget:
- text: lilcyborg
---
### neonz cyborgs Dreambooth model trained by koankoan using the [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training) with the v1-5 base model
You can run your new concept via the `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb). Don't forget to use the concept prompts!
Sample pictures of:
lilcyborg (use that in your prompt)

|
lnros/LunarLander-v2
|
lnros
| 2023-02-18T09:25:10Z | 0 | 0 | null |
[
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-02-18T08:36:17Z |
---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 85.37 +/- 72.33
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo',
 'seed': 1,
 'torch_deterministic': True,
 'cuda': True,
 'track': False,
 'wandb_project_name': 'cleanRL',
 'wandb_entity': None,
 'capture_video': False,
 'env_id': 'LunarLander-v2',
 'total_timesteps': 500000,
 'learning_rate': 0.00025,
 'num_envs': 4,
 'num_steps': 128,
 'anneal_lr': True,
 'gae': True,
 'gamma': 0.99,
 'gae_lambda': 0.95,
 'num_minibatches': 8,
 'update_epochs': 4,
 'norm_adv': True,
 'clip_coef': 0.3,
 'clip_vloss': True,
 'ent_coef': 0.05,
 'vf_coef': 0.5,
 'max_grad_norm': 0.5,
 'target_kl': None,
 'repo_id': 'lnros/LunarLander-v2',
 'batch_size': 512,
 'minibatch_size': 64}
```
|
jiaoqsh/mbart-large-50-finetuned-stocks-event-all
|
jiaoqsh
| 2023-02-18T09:06:19Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"summarization",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
summarization
| 2023-02-18T08:37:54Z |
---
license: mit
tags:
- summarization
- generated_from_trainer
metrics:
- rouge
model-index:
- name: mbart-large-50-finetuned-stocks-event-all
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-large-50-finetuned-stocks-event-all
This model is a fine-tuned version of [facebook/mbart-large-50](https://huggingface.co/facebook/mbart-large-50) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5518
- Rouge1: 0.5383
- Rouge2: 0.4868
- Rougel: 0.5387
- Rougelsum: 0.5362
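A hedged usage sketch (an assumption from the model name: the input should be a stock-event news passage):
```python
from transformers import pipeline

# The input string is a placeholder for a real news passage.
summarizer = pipeline("summarization", model="jiaoqsh/mbart-large-50-finetuned-stocks-event-all")
print(summarizer("Replace this with a stock-event news passage.", max_length=64))
```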
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|
| 2.2097 | 1.0 | 97 | 0.5821 | 0.5174 | 0.4646 | 0.5137 | 0.5111 |
| 0.5315 | 2.0 | 194 | 0.4826 | 0.5169 | 0.4709 | 0.5186 | 0.5168 |
| 0.3602 | 3.0 | 291 | 0.4677 | 0.5319 | 0.4811 | 0.5344 | 0.5304 |
| 0.2639 | 4.0 | 388 | 0.4724 | 0.5319 | 0.4750 | 0.5335 | 0.5318 |
| 0.1715 | 5.0 | 485 | 0.4504 | 0.5331 | 0.4790 | 0.5337 | 0.5323 |
| 0.1136 | 6.0 | 582 | 0.4894 | 0.5321 | 0.4886 | 0.5324 | 0.5295 |
| 0.0618 | 7.0 | 679 | 0.5445 | 0.5456 | 0.4959 | 0.5473 | 0.5438 |
| 0.0347 | 8.0 | 776 | 0.5518 | 0.5383 | 0.4868 | 0.5387 | 0.5362 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
nakanolab/dqn-SpaceInvadersNoFrameskip
|
nakanolab
| 2023-02-18T08:57:25Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-02-18T08:56:42Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 643.50 +/- 248.29
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga nakanolab -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga nakanolab -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga nakanolab
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
LarryAIDraw/aom2nsfwVtubers31_v10
|
LarryAIDraw
| 2023-02-18T08:27:26Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-02-18T07:21:00Z |
---
license: creativeml-openrail-m
---
|
muhammadravi251001/tmp_trainer
|
muhammadravi251001
| 2023-02-18T07:52:02Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"feature-extraction",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2023-02-18T07:51:02Z |
---
tags:
- generated_from_trainer
model-index:
- name: tmp_trainer
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tmp_trainer
This model is a fine-tuned version of [julien-c/EsperBERTo-small](https://huggingface.co/julien-c/EsperBERTo-small) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Tokenizers 0.13.2
|
WRobinW/bert-finetuned-squad
|
WRobinW
| 2023-02-18T07:39:21Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-02-18T05:07:56Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bert-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-squad
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad dataset.
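A minimal extractive-QA sketch with this checkpoint (the question/context pair is made up):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="WRobinW/bert-finetuned-squad")
result = qa(question="What was the model fine-tuned on?",
            context="The model was fine-tuned on the SQuAD dataset.")
print(result["answer"], result["score"])
```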
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
LarryAIDraw/characterChisato_v10
|
LarryAIDraw
| 2023-02-18T07:12:38Z | 0 | 1 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-02-17T11:46:31Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/8473/lycoris-recoil-nishikigi-chisato
|
LarryAIDraw/genshinImpactNoelle_nV1
|
LarryAIDraw
| 2023-02-18T07:11:59Z | 0 | 1 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-02-17T12:01:51Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/9071/genshin-impact-noelle
|
junjuice0/GHIBA
|
junjuice0
| 2023-02-18T06:59:49Z | 454 | 4 |
diffusers
|
[
"diffusers",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-02-17T16:05:57Z |
---
license: creativeml-openrail-m
---
|
AnAmbitiousMonk/ppo-LunarLander-v8
|
AnAmbitiousMonk
| 2023-02-18T06:54:33Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-02-18T06:54:13Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 281.14 +/- 20.70
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub and load the policy
checkpoint = load_from_hub(repo_id="AnAmbitiousMonk/ppo-LunarLander-v8", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
nakanolab/q-FrozenLake-v1-4x4-noSlippery
|
nakanolab
| 2023-02-18T06:17:01Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-02-18T06:16:58Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym

# `load_from_hub` is the download helper defined in the Deep RL course notebook (assumed to be in scope)
model = load_from_hub(repo_id="nakanolab/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
Brhnglc/Suviii
|
Brhnglc
| 2023-02-18T06:16:04Z | 31 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] |
reinforcement-learning
| 2023-02-18T06:06:47Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
library_name: ml-agents
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Write your model_id: Brhnglc/Suviii
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
0RisingStar0/FantaStel
|
0RisingStar0
| 2023-02-18T06:02:25Z | 9 | 11 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-02-18T04:14:43Z |
---
license: creativeml-openrail-m
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
---
<p align="center"><img src="https://huggingface.co/0RisingStar0/FantaStel/resolve/main/1.png">
<img src="https://huggingface.co/0RisingStar0/FantaStel/resolve/main/%EB%B0%B0%EA%B2%BD3.png"></p>
<center><b>FantaStel</b></center>
U-Net mixed model <b>specialized for fantasy landscape backgrounds (especially floating islands).</b>
Pastel-style art for backgrounds.
<b>FP16 Pruned version</b> (no EMA).
(Quality differences may appear in very fine details, such as building textures.)
<b>Recommended prompts:</b>
Positive: (masterpiece, best quality, excellent quality), ((1girl, solo, cowboy shot)), (fantasy landscape, fictional landscape)
Negative: EasyNegative, cowboy, cowboy hat, cowboy western, cowboy boots, cowboy hat, fat, moss, phone, man, pedestrians, extras, border, outside border, white border, watermark, logo, signature
(EasyNegative is a negative embedding: https://huggingface.co/datasets/gsdf/EasyNegative)
<b>Recommended settings:</b>
Sampler : DPM++ 2M Karras
Sampling steps : 24
Resolution : 768x512
CFG Scale : 9.5
<b>Upscaling is a must!</b> Otherwise, you won't get the intended results.
Upscaler : Latent (nearest)
Hires steps : 0
Denoise : 0.6
Upscale 2x
<b>Recommended VAEs:</b>
orangemix.vae.pt
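A minimal diffusers sketch applying these settings (assuming the repo loads with `StableDiffusionPipeline`; the webui hires-upscale pass is not reproduced here):
```python
import torch
from diffusers import StableDiffusionPipeline, DPMSolverMultistepScheduler

pipe = StableDiffusionPipeline.from_pretrained("0RisingStar0/FantaStel", torch_dtype=torch.float16).to("cuda")
# DPM++ 2M Karras = multistep DPM-Solver with Karras sigmas
pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config, use_karras_sigmas=True)
image = pipe(
    "(masterpiece, best quality, excellent quality), 1girl, solo, cowboy shot, fantasy landscape, floating islands",
    negative_prompt="watermark, logo, signature, border",  # the EasyNegative embedding must be loaded separately
    num_inference_steps=24, guidance_scale=9.5, width=768, height=512,
).images[0]
image.save("fantastel.png")
```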
<b>Mixed models:</b>
AikimiXPv1.0, colormixed, Counterfeit V2.0, Counterfeit V2.5, Dreamlike Diffusion 1.0, HighRiseMixV2, mouseymix-lignepatsel, powercolorV1, RoboeticInkPunkDreamShaperChromaV5
(Thanks to everyone who made the above models!)
Feel free to give feedback; I'll try to work with it.
|
musika/musika-grateful-dead-barton-hall
|
musika
| 2023-02-18T05:52:49Z | 0 | 0 | null |
[
"audio",
"music",
"generation",
"tensorflow",
"arxiv:2208.08706",
"license:mit",
"region:us"
] | null | 2023-02-18T05:36:07Z |
---
license: mit
thumbnail: "https://iscale.iheart.com/catalog/album/46707655"
tags:
- audio
- music
- generation
- tensorflow
---
# Musika Model: musika-grateful-dead-barton-hall
## Model provided by: benwakefield
Pretrained model for the [Musika system](https://github.com/marcoppasini/musika) for fast infinite waveform music generation.
Introduced in [this paper](https://arxiv.org/abs/2208.08706).
Trained on the [Cornell 5/8/77](https://en.wikipedia.org/wiki/Cornell_5/8/77) show performed by the Grateful Dead.
## How to use
You can generate music from this model using the notebook available [here](https://colab.research.google.com/drive/1HJWliBXPi-Xlx3gY8cjFI5-xaZgrTD7r).
### Model description
This pretrained GAN system consists of a ResNet-style generator and discriminator. During training, stability is controlled by adapting the strength of gradient penalty regularization on-the-fly. The gradient penalty weighting term is contained in *switch.npy*. The generator is conditioned on a latent coordinate system to produce samples of arbitrary length. The latent representations produced by the generator are then passed to a decoder which converts them into waveform audio.
The generator has a context window of about 12 seconds of audio.
|
Genresell/wav2vec2-base-finetuned-ks
|
Genresell
| 2023-02-18T05:35:35Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"dataset:superb",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2023-02-18T02:43:15Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- superb
metrics:
- accuracy
model-index:
- name: wav2vec2-base-finetuned-ks
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-finetuned-ks
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the superb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0812
- Accuracy: 0.9821
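A minimal keyword-spotting inference sketch (the audio path is a placeholder; any 16 kHz mono clip of a spoken command works):
```python
from transformers import pipeline

# Placeholder path -- a 16 kHz mono WAV clip of a spoken keyword
classifier = pipeline("audio-classification", model="Genresell/wav2vec2-base-finetuned-ks")
print(classifier("speech_command.wav"))
```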
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4499 | 1.0 | 798 | 0.2799 | 0.9719 |
| 0.2284 | 2.0 | 1596 | 0.1266 | 0.9773 |
| 0.1911 | 3.0 | 2394 | 0.0990 | 0.9793 |
| 0.1759 | 4.0 | 3192 | 0.0867 | 0.9807 |
| 0.1119 | 5.0 | 3990 | 0.0812 | 0.9821 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 1.14.0
- Tokenizers 0.13.2
|
huggingtweets/can63616e
|
huggingtweets
| 2023-02-18T05:18:30Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-02-18T05:17:09Z |
---
language: en
thumbnail: http://www.huggingtweets.com/can63616e/1676697505234/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1600361508818542594/4YXltA2t_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">can/john</div>
<div style="text-align: center; font-size: 14px;">@can63616e</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from can/john.
| Data | can/john |
| --- | --- |
| Tweets downloaded | 1623 |
| Retweets | 649 |
| Short tweets | 201 |
| Tweets kept | 773 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/5ws83vu0/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @can63616e's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/wmz2gxeg) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/wmz2gxeg/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/can63616e')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
mafwalter/question_v_statement
|
mafwalter
| 2023-02-18T05:15:37Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-02-18T05:11:24Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: question_v_statement
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# question_v_statement
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0092
- Accuracy: 0.998
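A minimal inference sketch (the label names returned depend on the checkpoint's config and are not documented here):
```python
from transformers import pipeline

clf = pipeline("text-classification", model="mafwalter/question_v_statement")
print(clf(["Is the market open today", "The market opens at nine."]))
```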
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 250 | 0.0121 | 0.997 |
| 0.0593 | 2.0 | 500 | 0.0092 | 0.998 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
mingdinghan/dqn-SpaceInvadersNoFrameskip-v4
|
mingdinghan
| 2023-02-18T05:13:28Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-02-18T05:12:44Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 556.00 +/- 186.24
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga mingdinghan -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga mingdinghan -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga mingdinghan
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
andrew234/bert-finetuned-squad
|
andrew234
| 2023-02-18T04:51:20Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-02-18T04:27:34Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bert-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-squad
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu117
- Datasets 2.9.0
- Tokenizers 0.13.2
|
pnparam/PNP_dys_asr_960h
|
pnparam
| 2023-02-18T03:56:27Z | 3 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-02-12T11:41:18Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: PNP_dys_asr_960h
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# PNP_dys_asr_960h
This model is a fine-tuned version of [facebook/wav2vec2-large-960h-lv60-self](https://huggingface.co/facebook/wav2vec2-large-960h-lv60-self) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7169
- Wer: 1.4123
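A minimal transcription sketch (the audio path is a placeholder; wav2vec2 expects 16 kHz input):
```python
from transformers import pipeline

# Placeholder path -- a 16 kHz mono WAV file
asr = pipeline("automatic-speech-recognition", model="pnparam/PNP_dys_asr_960h")
print(asr("dysarthric_sample.wav")["text"])
```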
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 7
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 9.4668 | 1.63 | 500 | 2.6987 | 1.0226 |
| 2.0533 | 3.26 | 1000 | 1.0528 | 2.4236 |
| 0.4828 | 4.89 | 1500 | 0.7560 | 1.3358 |
| 0.1604 | 6.51 | 2000 | 0.7169 | 1.4123 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.13.1+cu116
- Datasets 1.18.3
- Tokenizers 0.13.2
|
whatlurks/mj-gs
|
whatlurks
| 2023-02-18T03:26:50Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-02-18T03:26:24Z |
---
license: creativeml-openrail-m
---
|
strongwar/bert-finetuned-squad
|
strongwar
| 2023-02-18T03:14:15Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-02-17T22:54:57Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bert-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-squad
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
Foxify52/MultiMix
|
Foxify52
| 2023-02-18T03:09:17Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-02-16T22:30:15Z |
---
license: creativeml-openrail-m
---
|
ramelol/ppo-LunarLander-v2
|
ramelol
| 2023-02-18T01:03:38Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-02-08T22:23:41Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 277.12 +/- 10.25
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub and load the policy
checkpoint = load_from_hub(repo_id="ramelol/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
Gholamreza/tinybert-finetuned-squad
|
Gholamreza
| 2023-02-18T00:48:43Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:mit",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-02-18T00:27:20Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: tinybert-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tinybert-finetuned-squad
This model is a fine-tuned version of [prajjwal1/bert-tiny](https://huggingface.co/prajjwal1/bert-tiny) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
Fine-tuning took about 8 minutes on Google Colab. Evaluation results:
{'exact_match': 32.04351939451277, 'f1': 44.36583937955441}
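These numbers match the output format of the SQuAD metric in the `evaluate` library; a toy sketch of how such a pair is computed (the prediction/reference below are made up):
```python
import evaluate

# Toy prediction/reference pair in the SQuAD metric's expected format
squad_metric = evaluate.load("squad")
predictions = [{"id": "1", "prediction_text": "Denver Broncos"}]
references = [{"id": "1", "answers": {"text": ["Denver Broncos"], "answer_start": [177]}}]
print(squad_metric.compute(predictions=predictions, references=references))
# -> {'exact_match': 100.0, 'f1': 100.0}
```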
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
mohamedlamine/t5-small-finetuned-agri
|
mohamedlamine
| 2023-02-18T00:11:54Z | 16 | 0 |
transformers
|
[
"transformers",
"tf",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-02-17T23:53:24Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: mohamedlamine/t5-small-finetuned-agri
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# mohamedlamine/t5-small-finetuned-agri
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 2.9614
- Validation Loss: 2.7196
- Train Rouge1: 31.6785
- Train Rouge2: 14.8289
- Train Rougel: 26.6598
- Train Rougelsum: 26.5468
- Train Gen Len: 16.8987
- Epoch: 0
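Because the checkpoint ships TensorFlow weights (trained via Keras callbacks), a minimal TF inference sketch (the `summarize:` task prefix is an assumption carried over from T5 conventions; the input is made up):
```python
from transformers import AutoTokenizer, TFAutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("mohamedlamine/t5-small-finetuned-agri")
model = TFAutoModelForSeq2SeqLM.from_pretrained("mohamedlamine/t5-small-finetuned-agri")
inputs = tokenizer("summarize: Crop rotation improves soil fertility and reduces pest pressure.",
                   return_tensors="tf")
output_ids = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```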
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Rouge1 | Train Rouge2 | Train Rougel | Train Rougelsum | Train Gen Len | Epoch |
|:----------:|:---------------:|:------------:|:------------:|:------------:|:---------------:|:-------------:|:-----:|
| 2.9614 | 2.7196 | 31.6785 | 14.8289 | 26.6598 | 26.5468 | 16.8987 | 0 |
### Framework versions
- Transformers 4.26.1
- TensorFlow 2.11.0
- Datasets 2.9.0
- Tokenizers 0.13.2
|
rodrfons/ppo-LunarLander-v2
|
rodrfons
| 2023-02-17T23:48:57Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-02-17T23:27:41Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 260.91 +/- 19.33
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub and load the policy
checkpoint = load_from_hub(repo_id="rodrfons/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
Allayte/Disentangled-BETA-VAE-TMNIST
|
Allayte
| 2023-02-17T23:27:17Z | 0 | 0 |
pythae
|
[
"pythae",
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-02-17T23:27:15Z |
---
language: en
tags:
- pythae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with pythae. It can be downloaded or reloaded using the method `load_from_hf_hub`
```python
>>> from pythae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
tatakof/ppo-Huggy
|
tatakof
| 2023-02-17T23:23:18Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-02-17T23:23:12Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
library_name: ml-agents
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Write your model_id: tatakof/ppo-Huggy
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
bobobert4/sac-PandaReachDense-v2
|
bobobert4
| 2023-02-17T23:22:38Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-02-17T22:21:08Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: SAC
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -0.35 +/- 0.10
name: mean_reward
verified: false
---
# **SAC** Agent playing **PandaReachDense-v2**
This is a trained model of a **SAC** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption):
```python
from stable_baselines3 import SAC
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub and load the policy
checkpoint = load_from_hub(repo_id="bobobert4/sac-PandaReachDense-v2", filename="sac-PandaReachDense-v2.zip")
model = SAC.load(checkpoint)
```
|
jcramirezpr/Taxi-v3
|
jcramirezpr
| 2023-02-17T23:02:29Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-02-17T23:02:24Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym

# `load_from_hub` is the download helper defined in the Deep RL course notebook (assumed to be in scope)
model = load_from_hub(repo_id="jcramirezpr/Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
ArtYac/ppo-Pyramids
|
ArtYac
| 2023-02-17T22:59:05Z | 3 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] |
reinforcement-learning
| 2023-02-17T22:58:59Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
library_name: ml-agents
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Write your model_id: ArtYac/ppo-Pyramids
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
pfunk/CartPole-v1-DQN_baseline_VIDEO-seed1
|
pfunk
| 2023-02-17T22:56:30Z | 0 | 0 |
cleanrl
|
[
"cleanrl",
"tensorboard",
"CartPole-v1",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-02-17T22:28:56Z |
---
tags:
- CartPole-v1
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 244.30 +/- 65.98
name: mean_reward
verified: false
---
# (CleanRL) **DQN** Agent Playing **CartPole-v1**
This is a trained model of a DQN agent playing CartPole-v1.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/DQN_baseline_VIDEO.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[DQN_baseline_VIDEO]"
python -m cleanrl_utils.enjoy --exp-name DQN_baseline_VIDEO --env-id CartPole-v1
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/pfunk/CartPole-v1-DQN_baseline_VIDEO-seed1/raw/main/dqn.py
curl -OL https://huggingface.co/pfunk/CartPole-v1-DQN_baseline_VIDEO-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/pfunk/CartPole-v1-DQN_baseline_VIDEO-seed1/raw/main/poetry.lock
poetry install --all-extras
python dqn.py --exp-name DQN_baseline_VIDEO --track --wandb-entity pfunk --wandb-project-name dqpn --save-model true --upload-model true --hf-entity pfunk --env-id CartPole-v1 --seed 1 --total-timesteps 100000
```
# Hyperparameters
```python
{'batch_size': 128,
'buffer_size': 10000,
'capture_video': False,
'cuda': True,
'end_e': 0.05,
'env_id': 'CartPole-v1',
'exp_name': 'DQN_baseline_VIDEO',
'exploration_fraction': 0.5,
'gamma': 0.99,
'hf_entity': 'pfunk',
'learning_rate': 0.00025,
'learning_starts': 10000,
'save_model': True,
'seed': 1,
'start_e': 1,
'target_network_frequency': 500,
'tau': 1.0,
'torch_deterministic': True,
'total_timesteps': 100000,
'track': True,
'train_frequency': 10,
'upload_model': True,
'wandb_entity': 'pfunk',
'wandb_project_name': 'dqpn'}
```
|
eLarry/Taxi-v3
|
eLarry
| 2023-02-17T22:49:43Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-02-02T02:51:24Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.54 +/- 2.73
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym

# `load_from_hub` is the download helper defined in the Deep RL course notebook (assumed to be in scope)
model = load_from_hub(repo_id="eLarry/Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
eLarry/q-FrozenLake-v1-4x4-noSlippery
|
eLarry
| 2023-02-17T22:47:59Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-02-01T16:50:03Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym

# `load_from_hub` is the download helper defined in the Deep RL course notebook (assumed to be in scope)
model = load_from_hub(repo_id="eLarry/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
parsasam/ppo-Huggy
|
parsasam
| 2023-02-17T22:41:54Z | 1 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-02-17T22:41:47Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
library_name: ml-agents
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Write your model_id: parsasam/ppo-Huggy
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
icelab/cosmicroberta
|
icelab
| 2023-02-17T22:25:28Z | 4 | 1 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"fill-mask",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-06-19T21:26:52Z |
---
license: mit
widget:
- text: "The closest planet to earth is <mask>."
- text: "Electrical power is stored on a spacecraft with <mask>."
---
### CosmicRoBERTa
This model is a further pre-trained version of RoBERTa for space science on a domain-specific corpus, which includes abstracts from the NTRS library, abstracts from SCOPUS, ECSS requirements, and other sources from this domain.
This amounts to a pre-training corpus of around 75 million words.
The model performs slightly better on a subset (60% of the total dataset) of the CR task presented in our paper [SpaceTransformers: Language Modeling for Space Systems](https://ieeexplore.ieee.org/document/9548078).
| | RoBERTa | CosmicRoBERTa | SpaceRoBERTa |
|-----------------------------------------------|----------------|---------------------|---------------------|
| Parameter | 0.475 | 0.515 | 0.485 |
| GN&C | 0.488 | 0.609 | 0.602 |
| System engineering | 0.523 | 0.559 | 0.555 |
| Propulsion | 0.403 | 0.521 | 0.465 |
| Project Scope | 0.493 | 0.541 | 0.497 |
| OBDH | 0.717 | 0.789 | 0.794 |
| Thermal | 0.432 | 0.509 | 0.491 |
| Quality control | 0.686 | 0.704 | 0.678 |
| Telecom. | 0.360 | 0.614 | 0.557 |
| Measurement | 0.833 | 0.849 | 0.858 |
| Structure & Mechanism | 0.489 | 0.581 | 0.566 |
| Space Environment | 0.543 | 0.681 | 0.605 |
| Cleanliness | 0.616 | 0.621 | 0.651 |
| Project Organisation / Documentation | 0.355 | 0.427 | 0.429 |
| Power | 0.638 | 0.735 | 0.661 |
| Safety / Risk (Control) | 0.647 | 0.727 | 0.676 |
| Materials / EEEs | 0.585 | 0.642 | 0.639 |
| Nonconformity | 0.365 | 0.333 | 0.419 |
| weighted | 0.584 | 0.652 (+7%) | 0.633 (+5%) |
| Valid. Loss | 0.605 | 0.505 | 0.542 |
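A minimal fill-mask sketch using one of the widget prompts above:
```python
from transformers import pipeline

fill = pipeline("fill-mask", model="icelab/cosmicroberta")
for candidate in fill("Electrical power is stored on a spacecraft with <mask>."):
    print(f"{candidate['token_str']!r}: {candidate['score']:.3f}")
```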
### BibTeX entry and citation info
```
@ARTICLE{
9548078,
author={Berquand, Audrey and Darm, Paul and Riccardi, Annalisa},
journal={IEEE Access},
title={SpaceTransformers: Language Modeling for Space Systems},
year={2021},
volume={9},
number={},
pages={133111-133122},
doi={10.1109/ACCESS.2021.3115659}
}
```
|
z4x/ppo-LunarLander-v2-CleanRL
|
z4x
| 2023-02-17T22:15:49Z | 0 | 0 | null |
[
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-02-17T22:15:43Z |
---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -125.60 +/- 77.76
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo',
 'seed': 1,
 'torch_deterministic': True,
 'cuda': True,
 'track': False,
 'wandb_project_name': 'cleanRL',
 'wandb_entity': None,
 'capture_video': False,
 'env_id': 'LunarLander-v2',
 'total_timesteps': 50000,
 'learning_rate': 0.00025,
 'num_envs': 4,
 'num_steps': 128,
 'anneal_lr': True,
 'gae': True,
 'gamma': 0.99,
 'gae_lambda': 0.95,
 'num_minibatches': 4,
 'update_epochs': 4,
 'norm_adv': True,
 'clip_coef': 0.2,
 'clip_vloss': True,
 'ent_coef': 0.01,
 'vf_coef': 0.5,
 'max_grad_norm': 0.5,
 'target_kl': None,
 'repo_id': 'z4x/ppo-LunarLander-v2-CleanRL',
 'batch_size': 512,
 'minibatch_size': 128}
```
|
ArtYac/ppo-SnowballTarget
|
ArtYac
| 2023-02-17T22:09:42Z | 6 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2023-02-17T22:09:36Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
library_name: ml-agents
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget
2. Write your model_id: ArtYac/ppo-SnowballTarget
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
coddiw0mple/ppo-LunarLander-v2
|
coddiw0mple
| 2023-02-17T22:06:33Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-02-17T20:24:52Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 248.60 +/- 16.77
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub and load the policy
checkpoint = load_from_hub(repo_id="coddiw0mple/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
hulkster/airobots-robot
|
hulkster
| 2023-02-17T21:59:28Z | 1 | 0 |
diffusers
|
[
"diffusers",
"pytorch",
"stable-diffusion",
"text-to-image",
"diffusion-models-class",
"dreambooth-hackathon",
"animal",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-02-17T21:48:07Z |
---
license: creativeml-openrail-m
tags:
- pytorch
- diffusers
- stable-diffusion
- text-to-image
- diffusion-models-class
- dreambooth-hackathon
- animal
widget:
- text: a photo of airobots robot in the Acropolis
---
# DreamBooth model for the airobots concept trained by hulkster on the hulkster/airobotics dataset.
This is a Stable Diffusion model fine-tuned on the airobots concept with DreamBooth. It can be used by modifying the `instance_prompt`: **a photo of airobots robot**
This model was created as part of the DreamBooth Hackathon 🔥. Visit the [organisation page](https://huggingface.co/dreambooth-hackathon) for instructions on how to take part!
## Description
This is a Stable Diffusion model fine-tuned on `robot` images for the animal theme.
## Usage
```python
from diffusers import StableDiffusionPipeline
pipeline = StableDiffusionPipeline.from_pretrained('hulkster/airobots-robot')
image = pipeline().images[0]
image
```

|
shaaaanya/taxi-v3-v2
|
shaaaanya
| 2023-02-17T21:47:42Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-02-17T21:46:51Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: taxi-v3-v2
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.58 +/- 2.69
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym

# `load_from_hub` is the download helper defined in the Deep RL course notebook (assumed to be in scope)
model = load_from_hub(repo_id="shaaaanya/taxi-v3-v2", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
Yiliang/bert-finetuned-squad
|
Yiliang
| 2023-02-17T21:38:57Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-02-17T19:19:06Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bert-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-squad
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
menoua/ML-Agents-SnowballTarget
|
menoua
| 2023-02-17T21:38:14Z | 6 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2023-02-17T21:38:08Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
library_name: ml-agents
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget
2. Write your model_id: menoua/ML-Agents-SnowballTarget
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
Cornegru/Unit1-Lander
|
Cornegru
| 2023-02-17T21:37:41Z | 4 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-02-17T21:04:04Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 223.70 +/- 73.03
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub and load the policy
checkpoint = load_from_hub(repo_id="Cornegru/Unit1-Lander", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
jslowik/PPO-LunarLander-v2
|
jslowik
| 2023-02-17T21:30:09Z | 4 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-02-17T21:29:43Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 238.75 +/- 22.75
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub and load the policy
checkpoint = load_from_hub(repo_id="jslowik/PPO-LunarLander-v2", filename="PPO-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|