Column schema (type and observed range): modelId: string (5–139 chars); author: string (2–42 chars); last_modified: timestamp[us, UTC] (2020-02-15 11:33:14 to 2025-09-12 12:31:00); downloads: int64 (0 to 223M); likes: int64 (0 to 11.7k); library_name: string (555 classes); tags: list (1 to 4.05k items); pipeline_tag: string (55 classes); createdAt: timestamp[us, UTC] (2022-03-02 23:29:04 to 2025-09-12 12:28:53); card: string (11 to 1.01M chars).

| modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card |
|---|---|---|---|---|---|---|---|---|---|
| liyingjian/ppo-LunarLander-v2 | liyingjian | 2023-07-06T07:38:40Z | 0 | 0 | stable-baselines3 | ["stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us"] | reinforcement-learning | 2023-07-06T06:36:49Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 258.29 +/- 21.11
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is assumed to follow huggingface_sb3's usual `<algo>-<env>.zip` naming):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Filename assumed to follow the standard "<algo>-<env>.zip" convention.
checkpoint = load_from_hub("liyingjian/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
| Vtmpas/ppo-LunarLander-v2 | Vtmpas | 2023-07-06T07:36:16Z | 1 | 0 | stable-baselines3 | ["stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us"] | reinforcement-learning | 2023-07-06T07:35:49Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 240.43 +/- 16.07
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is assumed to follow huggingface_sb3's usual `<algo>-<env>.zip` naming):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Filename assumed to follow the standard "<algo>-<env>.zip" convention.
checkpoint = load_from_hub("Vtmpas/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
| Abinaya/opt-1.3b-lora-summary | Abinaya | 2023-07-06T07:35:05Z | 3 | 0 | peft | ["peft", "region:us"] | null | 2023-07-06T06:35:55Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
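For reference, a minimal sketch of how the same settings could be expressed with `transformers`' `BitsAndBytesConfig` when loading the base model (assuming recent `transformers` and `bitsandbytes` releases and a CUDA-capable GPU):
```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Mirror the quantization settings listed above.
bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
    load_in_4bit=False,
    llm_int8_threshold=6.0,
    llm_int8_skip_modules=None,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
    bnb_4bit_quant_type="fp4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float32,
)

base_model = AutoModelForCausalLM.from_pretrained(
    "facebook/opt-1.3b", quantization_config=bnb_config
)
```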
### Framework versions
- PEFT 0.4.0.dev0
The adapter can be loaded on top of the base model as follows:
```
import torch
from peft import PeftModel, PeftConfig
from transformers import AutoModelForCausalLM, AutoTokenizer

peft_model_id = "Abinaya/opt-1.3b-lora-summary"
config = PeftConfig.from_pretrained(peft_model_id)

# Load the base model and tokenizer, then attach the LoRA adapter
model = AutoModelForCausalLM.from_pretrained("facebook/opt-1.3b")
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)
model = PeftModel.from_pretrained(model, peft_model_id)
```
## For inference to get a summary
```
batch = tokenizer("Natural language processing is an interdisciplinary subfield of linguistics, computer science, and artificial intelligence concerned with the interactions between computers and human language, in particular how to program computers to process and analyze large amounts of natural language data", return_tensors='pt')
with torch.cuda.amp.autocast():
output_tokens = model.generate(**batch, max_new_tokens=50)
print('\n\n', tokenizer.decode(output_tokens[0], skip_special_tokens=True))
```
|
| Word2vec/nlpl_224 | Word2vec | 2023-07-06T07:31:46Z | 0 | 0 | null | ["word2vec", "ukr", "dataset:Ukrainian_CoNLL17_corpus", "license:cc-by-4.0", "region:us"] | null | 2023-07-05T08:02:16Z |
---
language: ukr
license: cc-by-4.0
tags:
- word2vec
datasets: Ukrainian_CoNLL17_corpus
---
## Information
A word2vec model trained by Andrey Kutuzov (andreku@ifi.uio.no) on a vocabulary of size 99884 corresponding to 299668196 tokens from the dataset `Ukrainian_CoNLL17_corpus`.
The model was trained with lemmatization and POS tagging, using the Gensim Continuous Bag-of-Words algorithm with a window of 10 and a dimension of 200.
## How to use?
```
from gensim.models import KeyedVectors
from huggingface_hub import hf_hub_download
model = KeyedVectors.load_word2vec_format(hf_hub_download(repo_id="Word2vec/nlpl_224", filename="model.bin"), binary=True, unicode_errors="ignore")
```
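Once loaded, the vectors can be queried through the standard gensim `KeyedVectors` interface; a brief sketch (the query word is taken from the model's own vocabulary rather than hard-coded, since entries in this model are lemmatized and POS-tagged):
```python
# A minimal sketch of querying the loaded KeyedVectors (gensim >= 4.x API).
word = model.index_to_key[0]             # pick an arbitrary in-vocabulary entry
print(model.vector_size)                 # 200 for this model
print(model.most_similar(word, topn=5))  # nearest neighbours by cosine similarity
```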
## Citation
Fares, Murhaf; Kutuzov, Andrei; Oepen, Stephan & Velldal, Erik (2017). Word vectors, reuse, and replicability: Towards a community repository of large-text resources, In Jörg Tiedemann (ed.), Proceedings of the 21st Nordic Conference on Computational Linguistics, NoDaLiDa, 22-24 May 2017. Linköping University Electronic Press. ISBN 978-91-7685-601-7
This archive is part of the NLPL Word Vectors Repository (http://vectors.nlpl.eu/repository/), version 2.0, published on Friday, December 27, 2019.
Please see the file 'meta.json' in this archive and the overall repository metadata file http://vectors.nlpl.eu/repository/20.json for additional information.
The life-time identifier for this model is: http://vectors.nlpl.eu/repository/20/224.zip
|
| Word2vec/nlpl_208 | Word2vec | 2023-07-06T07:30:26Z | 0 | 0 | null | ["word2vec", "pol", "dataset:Polish_CommonCrawl_Dump_of_December_2019", "license:cc-by-4.0", "region:us"] | null | 2023-07-05T08:25:40Z |
---
language: pol
license: cc-by-4.0
tags:
- word2vec
datasets: Polish_CommonCrawl_Dump_of_December_2019
---
## Information
A word2vec model trained by Krzysztof Wolk (kwolk@pja.edu.pl) on a vocabulary of size 35193029 corresponding to 32565035188 tokens from the dataset `Polish_CommonCrawl_Dump_of_December_2019`.
The model was trained with no lemmatization and POS tagging, using the Gensim Continuous Skipgram algorithm with a window of 5 and a dimension of 100.
## How to use?
```
from gensim.models import KeyedVectors
from huggingface_hub import hf_hub_download
model = KeyedVectors.load_word2vec_format(hf_hub_download(repo_id="Word2vec/nlpl_208", filename="model.bin"), binary=True, unicode_errors="ignore")
```
## Citation
Fares, Murhaf; Kutuzov, Andrei; Oepen, Stephan & Velldal, Erik (2017). Word vectors, reuse, and replicability: Towards a community repository of large-text resources, In Jörg Tiedemann (ed.), Proceedings of the 21st Nordic Conference on Computational Linguistics, NoDaLiDa, 22-24 May 2017. Linköping University Electronic Press. ISBN 978-91-7685-601-7
This archive is part of the NLPL Word Vectors Repository (http://vectors.nlpl.eu/repository/), version 2.0, published on Friday, December 27, 2019.
Please see the file 'meta.json' in this archive and the overall repository metadata file http://vectors.nlpl.eu/repository/20.json for additional information.
The life-time identifier for this model is: http://vectors.nlpl.eu/repository/20/208.zip
|
| Word2vec/nlpl_206 | Word2vec | 2023-07-06T07:29:52Z | 0 | 0 | null | ["word2vec", "pol", "dataset:Polish_CommonCrawl_Dump_of_December_2019", "license:cc-by-4.0", "region:us"] | null | 2023-07-05T08:09:12Z |
---
language: pol
license: cc-by-4.0
tags:
- word2vec
datasets: Polish_CommonCrawl_Dump_of_December_2019
---
## Information
A word2vec model trained by Krzysztof Wolk (kwolk@pja.edu.pl) on a vocabulary of size 4885806 corresponding to 32565035188 tokens from the dataset `Polish_CommonCrawl_Dump_of_December_2019`.
The model was trained with no lemmatization and POS tagging, using the fastText Skipgram algorithm with a window of 5 and a dimension of 100.
## How to use?
```
from gensim.models import KeyedVectors
from huggingface_hub import hf_hub_download
model = KeyedVectors.load_word2vec_format(hf_hub_download(repo_id="Word2vec/nlpl_206", filename="model.bin"), binary=True, unicode_errors="ignore")
```
## Citation
Fares, Murhaf; Kutuzov, Andrei; Oepen, Stephan & Velldal, Erik (2017). Word vectors, reuse, and replicability: Towards a community repository of large-text resources, In Jörg Tiedemann (ed.), Proceedings of the 21st Nordic Conference on Computational Linguistics, NoDaLiDa, 22-24 May 2017. Linköping University Electronic Press. ISBN 978-91-7685-601-7
This archive is part of the NLPL Word Vectors Repository (http://vectors.nlpl.eu/repository/), version 2.0, published on Friday, December 27, 2019.
Please see the file 'meta.json' in this archive and the overall repository metadata file http://vectors.nlpl.eu/repository/20.json for additional information.
The life-time identifier for this model is: http://vectors.nlpl.eu/repository/20/206.zip
|
| Word2vec/nlpl_205 | Word2vec | 2023-07-06T07:29:34Z | 0 | 0 | null | ["word2vec", "pol", "dataset:Polish_CommonCrawl_Dump_of_December_2019", "license:cc-by-4.0", "region:us"] | null | 2023-07-05T08:04:52Z |
---
language: pol
license: cc-by-4.0
tags:
- word2vec
datasets: Polish_CommonCrawl_Dump_of_December_2019
---
## Information
A word2vec model trained by Krzysztof Wolk (kwolk@pja.edu.pl) on a vocabulary of size 4885806 corresponding to 32565035188 tokens from the dataset `Polish_CommonCrawl_Dump_of_December_2019`.
The model was trained with no lemmatization and POS tagging, using the fastText Continuous Bag-of-Words algorithm with a window of 5 and a dimension of 100.
## How to use?
```
from gensim.models import KeyedVectors
from huggingface_hub import hf_hub_download
model = KeyedVectors.load_word2vec_format(hf_hub_download(repo_id="Word2vec/nlpl_205", filename="model.bin"), binary=True, unicode_errors="ignore")
```
## Citation
Fares, Murhaf; Kutuzov, Andrei; Oepen, Stephan & Velldal, Erik (2017). Word vectors, reuse, and replicability: Towards a community repository of large-text resources, In Jörg Tiedemann (ed.), Proceedings of the 21st Nordic Conference on Computational Linguistics, NoDaLiDa, 22-24 May 2017. Linköping University Electronic Press. ISBN 978-91-7685-601-7
This archive is part of the NLPL Word Vectors Repository (http://vectors.nlpl.eu/repository/), version 2.0, published on Friday, December 27, 2019.
Please see the file 'meta.json' in this archive and the overall repository metadata file http://vectors.nlpl.eu/repository/20.json for additional information.
The life-time identifier for this model is: http://vectors.nlpl.eu/repository/20/205.zip
|
| NTQAI/pedestrian_age_recognition | NTQAI | 2023-07-06T07:28:59Z | 110,387 | 3 | transformers | ["transformers", "pytorch", "safetensors", "beit", "image-classification", "vision", "generated_from_trainer", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us"] | image-classification | 2023-01-09T03:36:33Z |
---
license: apache-2.0
tags:
- image-classification
- vision
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: pedestrian_age_recognition_local
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.8073394495412844
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pedestrian_age_recognition_local
This model is a fine-tuned version of [microsoft/beit-base-patch16-224-pt22k-ft22k](https://huggingface.co/microsoft/beit-base-patch16-224-pt22k-ft22k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5004
- Accuracy: 0.8073
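A minimal inference sketch with the `transformers` image-classification pipeline (the image path below is only a placeholder):
```python
from transformers import pipeline

# Labels are taken from the fine-tuned model's config at load time.
classifier = pipeline("image-classification", model="NTQAI/pedestrian_age_recognition")
print(classifier("path/to/pedestrian.jpg"))  # placeholder path to a local image
```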
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 1337
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.8849 | 1.0 | 2008 | 0.7939 | 0.6807 |
| 0.9836 | 2.0 | 4016 | 0.6694 | 0.7336 |
| 0.8128 | 3.0 | 6024 | 0.5768 | 0.7668 |
| 0.7611 | 4.0 | 8032 | 0.5541 | 0.7833 |
| 0.6441 | 5.0 | 10040 | 0.5473 | 0.7773 |
| 0.5696 | 6.0 | 12048 | 0.5187 | 0.7971 |
| 0.6925 | 7.0 | 14056 | 0.5082 | 0.8038 |
| 0.5711 | 8.0 | 16064 | 0.5092 | 0.8098 |
| 0.7741 | 9.0 | 18072 | 0.5026 | 0.8020 |
| 0.5269 | 10.0 | 20080 | 0.5004 | 0.8073 |
### Framework versions
- Transformers 4.24.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
### Contact information
For personal communication related to this project, please contact Nha Nguyen Van (nha282@gmail.com).
|
| Word2vec/nlpl_200 | Word2vec | 2023-07-06T07:28:57Z | 0 | 0 | null | ["word2vec", "eng", "dataset:English_Wikipedia_Dump_of_October_2019", "license:cc-by-4.0", "region:us"] | null | 2023-07-05T07:56:11Z |
---
language: eng
license: cc-by-4.0
tags:
- word2vec
datasets: English_Wikipedia_Dump_of_October_2019
---
## Information
A word2vec model trained by Andrey Kutuzov (andreku@ifi.uio.no) on a vocabulary of size 249212 corresponding to 3530685741 tokens from the dataset `English_Wikipedia_Dump_of_October_2019`.
The model was trained with lemmatization and POS tagging, using the Gensim Continuous Skipgram algorithm with a window of 3 and a dimension of 300.
## How to use?
```
from gensim.models import KeyedVectors
from huggingface_hub import hf_hub_download
model = KeyedVectors.load_word2vec_format(hf_hub_download(repo_id="Word2vec/nlpl_200", filename="model.bin"), binary=True, unicode_errors="ignore")
```
## Citation
Fares, Murhaf; Kutuzov, Andrei; Oepen, Stephan & Velldal, Erik (2017). Word vectors, reuse, and replicability: Towards a community repository of large-text resources, In Jörg Tiedemann (ed.), Proceedings of the 21st Nordic Conference on Computational Linguistics, NoDaLiDa, 22-24 May 2017. Linköping University Electronic Press. ISBN 978-91-7685-601-7
This archive is part of the NLPL Word Vectors Repository (http://vectors.nlpl.eu/repository/), version 2.0, published on Friday, December 27, 2019.
Please see the file 'meta.json' in this archive and the overall repository metadata file http://vectors.nlpl.eu/repository/20.json for additional information.
The life-time identifier for this model is: http://vectors.nlpl.eu/repository/20/200.zip
|
| Word2vec/nlpl_184 | Word2vec | 2023-07-06T07:28:01Z | 0 | 0 | null | ["word2vec", "rus", "dataset:Russian_News", "license:cc-by-4.0", "region:us"] | null | 2023-07-05T07:55:10Z |
---
language: rus
license: cc-by-4.0
tags:
- word2vec
datasets: Russian_News
---
## Information
A word2vec model trained by Andrey Kutuzov (andreku@ifi.uio.no) on a vocabulary of size 249318 corresponding to 2550000000 tokens from the dataset `Russian_News`.
The model was trained with lemmatization and POS tagging, using the Gensim Continuous Skipgram algorithm with a window of 5 and a dimension of 300.
## How to use?
```
from gensim.models import KeyedVectors
from huggingface_hub import hf_hub_download
model = KeyedVectors.load_word2vec_format(hf_hub_download(repo_id="Word2vec/nlpl_184", filename="model.bin"), binary=True, unicode_errors="ignore")
```
## Citation
Fares, Murhaf; Kutuzov, Andrei; Oepen, Stephan & Velldal, Erik (2017). Word vectors, reuse, and replicability: Towards a community repository of large-text resources, In Jörg Tiedemann (ed.), Proceedings of the 21st Nordic Conference on Computational Linguistics, NoDaLiDa, 22-24 May 2017. Linköping University Electronic Press. ISBN 978-91-7685-601-7
This archive is part of the NLPL Word Vectors Repository (http://vectors.nlpl.eu/repository/), version 2.0, published on Friday, December 27, 2019.
Please see the file 'meta.json' in this archive and the overall repository metadata file http://vectors.nlpl.eu/repository/20.json for additional information.
The life-time identifier for this model is: http://vectors.nlpl.eu/repository/20/184.zip
|
| Word2vec/nlpl_183 | Word2vec | 2023-07-06T07:27:39Z | 0 | 0 | null | ["word2vec", "rus", "dataset:Russian_National_Corpus", "license:cc-by-4.0", "region:us"] | null | 2023-07-05T07:54:53Z |
---
language: rus
license: cc-by-4.0
tags:
- word2vec
datasets: Russian_National_Corpus
---
## Information
A word2vec model trained by Andrey Kutuzov (andreku@ifi.uio.no) on a vocabulary of size 248118 corresponding to 270000000 tokens from the dataset `Russian_National_Corpus`.
The model was trained with lemmatization and POS tagging, using the Gensim Continuous Skipgram algorithm with a window of 5 and a dimension of 300.
## How to use?
```
from gensim.models import KeyedVectors
from huggingface_hub import hf_hub_download
model = KeyedVectors.load_word2vec_format(hf_hub_download(repo_id="Word2vec/nlpl_183", filename="model.bin"), binary=True, unicode_errors="ignore")
```
## Citation
Fares, Murhaf; Kutuzov, Andrei; Oepen, Stephan & Velldal, Erik (2017). Word vectors, reuse, and replicability: Towards a community repository of large-text resources, In Jörg Tiedemann (ed.), Proceedings of the 21st Nordic Conference on Computational Linguistics, NoDaLiDa, 22-24 May 2017. Linköping University Electronic Press. ISBN 978-91-7685-601-7
This archive is part of the NLPL Word Vectors Repository (http://vectors.nlpl.eu/repository/), version 2.0, published on Friday, December 27, 2019.
Please see the file 'meta.json' in this archive and the overall repository metadata file http://vectors.nlpl.eu/repository/20.json for additional information.
The life-time identifier for this model is: http://vectors.nlpl.eu/repository/20/183.zip
|
| Word2vec/nlpl_182 | Word2vec | 2023-07-06T07:27:18Z | 0 | 0 | null | ["word2vec", "rus", "dataset:Russian_National_Corpus", "license:cc-by-4.0", "region:us"] | null | 2023-07-05T07:54:36Z |
---
language: rus
license: cc-by-4.0
tags:
- word2vec
datasets: Russian_National_Corpus
---
## Information
A word2vec model trained by Andrey Kutuzov (andreku@ifi.uio.no) on a vocabulary of size 248978 corresponding to 270000000 tokens from the dataset `Russian_National_Corpus`.
The model was trained with lemmatization and POS tagging, using the Gensim Continuous Skipgram algorithm with a window of 2 and a dimension of 300.
## How to use?
```
from gensim.models import KeyedVectors
from huggingface_hub import hf_hub_download
model = KeyedVectors.load_word2vec_format(hf_hub_download(repo_id="Word2vec/nlpl_182", filename="model.bin"), binary=True, unicode_errors="ignore")
```
## Citation
Fares, Murhaf; Kutuzov, Andrei; Oepen, Stephan & Velldal, Erik (2017). Word vectors, reuse, and replicability: Towards a community repository of large-text resources, In Jörg Tiedemann (ed.), Proceedings of the 21st Nordic Conference on Computational Linguistics, NoDaLiDa, 22-24 May 2017. Linköping University Electronic Press. ISBN 978-91-7685-601-7
This archive is part of the NLPL Word Vectors Repository (http://vectors.nlpl.eu/repository/), version 2.0, published on Friday, December 27, 2019.
Please see the file 'meta.json' in this archive and the overall repository metadata file http://vectors.nlpl.eu/repository/20.json for additional information.
The life-time identifier for this model is: http://vectors.nlpl.eu/repository/20/182.zip
|
| Word2vec/nlpl_180 | Word2vec | 2023-07-06T07:27:01Z | 0 | 0 | null | ["word2vec", "rus", "dataset:Russian_National_Corpus", "license:cc-by-4.0", "region:us"] | null | 2023-07-05T07:54:19Z |
---
language: rus
license: cc-by-4.0
tags:
- word2vec
datasets: Russian_National_Corpus
---
## Information
A word2vec model trained by Andrey Kutuzov (andreku@ifi.uio.no) on a vocabulary of size 189193 corresponding to 270000000 tokens from the dataset `Russian_National_Corpus`.
The model was trained with lemmatization and POS tagging, using the Gensim Continuous Bag-of-Words algorithm with a window of 20 and a dimension of 300.
## How to use?
```
from gensim.models import KeyedVectors
from huggingface_hub import hf_hub_download
model = KeyedVectors.load_word2vec_format(hf_hub_download(repo_id="Word2vec/nlpl_180", filename="model.bin"), binary=True, unicode_errors="ignore")
```
## Citation
Fares, Murhaf; Kutuzov, Andrei; Oepen, Stephan & Velldal, Erik (2017). Word vectors, reuse, and replicability: Towards a community repository of large-text resources, In Jörg Tiedemann (ed.), Proceedings of the 21st Nordic Conference on Computational Linguistics, NoDaLiDa, 22-24 May 2017. Linköping University Electronic Press. ISBN 978-91-7685-601-7
This archive is part of the NLPL Word Vectors Repository (http://vectors.nlpl.eu/repository/), version 2.0, published on Friday, December 27, 2019.
Please see the file 'meta.json' in this archive and the overall repository metadata file http://vectors.nlpl.eu/repository/20.json for additional information.
The life-time identifier for this model is: http://vectors.nlpl.eu/repository/20/180.zip
|
| Bugsys0302/m416 | Bugsys0302 | 2023-07-06T07:16:46Z | 0 | 0 | null | ["license:creativeml-openrail-m", "region:us"] | null | 2023-07-06T07:06:10Z |
---
license: creativeml-openrail-m
---
|
| Bugsys0302/beltbr | Bugsys0302 | 2023-07-06T06:59:17Z | 0 | 0 | null | ["license:creativeml-openrail-m", "region:us"] | null | 2023-07-06T06:57:43Z |
---
license: creativeml-openrail-m
---
|
| guaguale/path-to-save-model | guaguale | 2023-07-06T06:50:20Z | 0 | 0 | diffusers | ["diffusers", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "lora", "base_model:runwayml/stable-diffusion-v1-5", "base_model:adapter:runwayml/stable-diffusion-v1-5", "license:creativeml-openrail-m", "region:us"] | text-to-image | 2023-07-05T09:49:11Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: a photo of sks dog
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - guaguale/path-to-save-model
These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5. The weights were trained with [DreamBooth](https://dreambooth.github.io/) on the instance prompt "a photo of sks dog". Example images are shown below.




LoRA for the text encoder was enabled: False.
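A minimal inference sketch with `diffusers` (assuming a CUDA GPU; `load_lora_weights` resolves the adapter files in this repository automatically):
```python
import torch
from diffusers import StableDiffusionPipeline

# Load the base model, then attach the DreamBooth LoRA weights.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("guaguale/path-to-save-model")

image = pipe("a photo of sks dog").images[0]
image.save("sks_dog.png")
```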
|
| IliyanGochev/whisper-small-bg | IliyanGochev | 2023-07-06T06:50:12Z | 18 | 0 | transformers | ["transformers", "pytorch", "whisper", "automatic-speech-recognition", "whisper-event", "generated_from_trainer", "bg", "dataset:mozilla-foundation/common_voice_13_0", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2023-07-05T08:04:03Z |
---
language:
- bg
license: apache-2.0
tags:
- whisper-event
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_13_0
metrics:
- wer
model-index:
- name: whisper-small-bg
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: mozilla-foundation/common_voice_13_0 bg
type: mozilla-foundation/common_voice_13_0
config: bg
split: test
args: bg
metrics:
- name: Wer
type: wer
value: 44.67291341315287
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-small-bg
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the mozilla-foundation/common_voice_13_0 bg dataset.
It achieves the following results on the evaluation set:
- Loss: 9.0612
- Wer: 44.6729
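A minimal transcription sketch using the `transformers` ASR pipeline (the audio path is only a placeholder; ffmpeg is needed for decoding):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="IliyanGochev/whisper-small-bg")
print(asr("sample_bg.wav")["text"])  # placeholder path to a Bulgarian audio clip
```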
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 4.9319 | 6.76 | 1000 | 10.0774 | 73.9892 |
| 2.6116 | 13.51 | 2000 | 11.4089 | 67.0484 |
| 0.9607 | 20.27 | 3000 | 11.8266 | 60.9448 |
| 0.3464 | 27.03 | 4000 | 9.9500 | 52.1213 |
| 0.0122 | 33.78 | 5000 | 9.0612 | 44.6729 |
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
| Bugsys0302/fmmstrb | Bugsys0302 | 2023-07-06T06:46:46Z | 0 | 0 | null | ["license:creativeml-openrail-m", "region:us"] | null | 2023-07-06T06:40:45Z |
---
license: creativeml-openrail-m
---
|
| JennnDexter/pokemon-lora | JennnDexter | 2023-07-06T06:44:42Z | 2 | 0 | diffusers | ["diffusers", "tensorboard", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "lora", "base_model:runwayml/stable-diffusion-v1-5", "base_model:adapter:runwayml/stable-diffusion-v1-5", "license:creativeml-openrail-m", "region:us"] | text-to-image | 2023-06-12T06:24:16Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA text2image fine-tuning - JennnDexter/pokemon-lora
These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5, fine-tuned on the lambdalabs/pokemon-blip-captions dataset. Example images are shown below.




|
| NasimB/gpt2-concat-aochildes-16plus6k | NasimB | 2023-07-06T06:39:38Z | 5 | 0 | transformers | ["transformers", "pytorch", "gpt2", "text-generation", "generated_from_trainer", "dataset:generator", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2023-07-06T04:47:18Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: gpt2-concat-aochildes-16plus6k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-concat-aochildes-16plus6k
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1978
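A minimal generation sketch with the `transformers` text-generation pipeline (the prompt is only illustrative):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="NasimB/gpt2-concat-aochildes-16plus6k")
print(generator("Once upon a time", max_new_tokens=30)[0]["generated_text"])
```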
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.7265 | 0.3 | 500 | 5.6481 |
| 5.3801 | 0.59 | 1000 | 5.2065 |
| 5.0346 | 0.89 | 1500 | 4.9518 |
| 4.7589 | 1.19 | 2000 | 4.8123 |
| 4.6003 | 1.48 | 2500 | 4.6915 |
| 4.4941 | 1.78 | 3000 | 4.5806 |
| 4.3447 | 2.07 | 3500 | 4.5155 |
| 4.1761 | 2.37 | 4000 | 4.4640 |
| 4.1351 | 2.67 | 4500 | 4.4014 |
| 4.1043 | 2.96 | 5000 | 4.3576 |
| 3.8639 | 3.26 | 5500 | 4.3597 |
| 3.8432 | 3.56 | 6000 | 4.3266 |
| 3.8118 | 3.85 | 6500 | 4.2913 |
| 3.6736 | 4.15 | 7000 | 4.2957 |
| 3.5472 | 4.45 | 7500 | 4.2920 |
| 3.5398 | 4.74 | 8000 | 4.2794 |
| 3.507 | 5.04 | 8500 | 4.2806 |
| 3.3499 | 5.33 | 9000 | 4.2855 |
| 3.3504 | 5.63 | 9500 | 4.2851 |
| 3.3498 | 5.93 | 10000 | 4.2849 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
| hchung1017/aihub_012_streaming_conformer | hchung1017 | 2023-07-06T06:22:30Z | 0 | 0 | espnet | ["espnet", "audio", "automatic-speech-recognition", "ko", "dataset:aihub_012", "arxiv:1804.00015", "license:cc-by-4.0", "region:us"] | automatic-speech-recognition | 2023-07-06T06:22:07Z |
---
tags:
- espnet
- audio
- automatic-speech-recognition
language: ko
datasets:
- aihub_012
license: cc-by-4.0
---
## ESPnet2 ASR model
### `hchung1017/aihub_012_streaming_conformer`
This model was trained by hchung1017 using the aihub_012 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
Follow the [ESPnet installation instructions](https://espnet.github.io/espnet/installation.html)
if you haven't done that already.
```bash
cd espnet
git checkout f4d7fead71e2a99541a8d3d66d6e00a33d9e82df
pip install -e .
cd egs2/aihub_012/asr1
./run.sh --skip_data_prep false --skip_train true --download_model hchung1017/aihub_012_streaming_conformer
```
<!-- Generated by scripts/utils/show_asr_result.sh -->
# RESULTS
## Environments
- date: `Wed Jul 5 15:19:05 KST 2023`
- python version: `3.8.16 (default, Mar 2 2023, 03:21:46) [GCC 11.2.0]`
- espnet version: `espnet 202304`
- pytorch version: `pytorch 1.13.1`
- Git hash: `f4d7fead71e2a99541a8d3d66d6e00a33d9e82df`
- Commit date: `Wed May 24 14:58:35 2023 -0400`
## exp/asr_train_asr_streaming_conformer_raw_ko_bpe5000_sp/decode_asr_streaming_asr_model_valid.acc.ave
### WER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|org/dev|797676|3794053|89.7|9.1|1.2|1.4|11.8|28.9|
### CER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|org/dev|797676|17636048|94.8|3.0|2.2|1.6|6.8|28.9|
### TER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|org/dev|797676|4325914|88.1|8.2|3.7|1.5|13.4|28.9|
## ASR config
<details><summary>expand</summary>
```
config: conf/train_asr_streaming_conformer.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/asr_train_asr_streaming_conformer_raw_ko_bpe5000_sp
ngpu: 1
seed: 0
num_workers: 1
num_att_plot: 0
dist_backend: nccl
dist_init_method: env://
dist_world_size: 8
dist_rank: 0
local_rank: 0
dist_master_addr: localhost
dist_master_port: 51405
dist_launcher: null
multiprocessing_distributed: true
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 50
patience: null
val_scheduler_criterion:
- valid
- acc
early_stopping_criterion:
- valid
- cer_ctc
- min
best_model_criterion:
- - valid
- acc
- max
keep_nbest_models: 10
nbest_averaging_interval: 0
grad_clip: 5
grad_clip_type: 2.0
grad_noise: false
accum_grad: 1
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
use_matplotlib: true
use_tensorboard: true
create_graph_in_tensorboard: false
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param: []
num_iters_per_epoch: null
batch_size: 20
valid_batch_size: null
batch_bins: 25000000
valid_batch_bins: null
train_shape_file:
- exp/asr_stats_raw_ko_bpe5000_sp/train/speech_shape
- exp/asr_stats_raw_ko_bpe5000_sp/train/text_shape.bpe
valid_shape_file:
- exp/asr_stats_raw_ko_bpe5000_sp/valid/speech_shape
- exp/asr_stats_raw_ko_bpe5000_sp/valid/text_shape.bpe
batch_type: numel
valid_batch_type: null
fold_length:
- 51200
- 150
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
chunk_excluded_key_prefixes: []
train_data_path_and_name_and_type:
- - /data/dump/aihub_012/raw/train_sp/wav.scp
- speech
- sound
- - /data/dump/aihub_012/raw/train_sp/text
- text
- text
valid_data_path_and_name_and_type:
- - /data/dump/aihub_012/raw/dev/wav.scp
- speech
- sound
- - /data/dump/aihub_012/raw/dev/text
- text
- text
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
exclude_weight_decay: false
exclude_weight_decay_conf: {}
optim: adam
optim_conf:
lr: 0.003
scheduler: warmuplr
scheduler_conf:
warmup_steps: 30000
token_list:
- <blank>
- <unk>
- ▁I
- ▁YOU
- ''''
- S
- ▁WHAT
- ▁A
- ▁IT
- ▁TO
- ▁IS
- ▁THE
- ▁ARE
- ▁CAN
- ▁OKAY
- ▁YES
- ▁DO
- ▁THAT
- ▁SEE
- T
- ▁HE
- ▁HOW
- ▁ME
- ▁HAVE
- ▁MY
- ▁GOOD
- ▁REALLY
- ▁SO
- ▁FOR
- ▁AM
- ▁SURE
- ▁OH
- ▁GO
- ▁WHY
- ▁NO
- ▁YOUR
- ▁RIGHT
- ▁HELP
- ’
- ▁DON
- ▁NOT
- ▁HI
- ▁HERE
- ▁DID
- ▁LIKE
- ▁AND
- ▁TOO
- ▁SHE
- ▁THIS
- ▁HELLO
- M
- ▁KNOW
- ▁WANT
- RE
- ▁NEED
- ▁WILL
- ▁ABOUT
- ▁THERE
- ▁LET
- ▁OF
- ▁IN
- ▁BE
- ▁BUT
- ▁THINK
- ▁SOMETHING
- ▁LOOK
- ▁NOW
- ▁NICE
- ▁THEN
- ▁
- ▁WE
- ▁GREAT
- ▁THANK
- ▁WITH
- ▁TELL
- ▁PROBLEM
- ▁HER
- ▁GOING
- ▁WAS
- ▁DOING
- ▁ASK
- ▁THANKS
- ▁HEY
- ▁BACK
- ▁WRONG
- ▁THEY
- ▁ON
- ▁HIM
- ▁UP
- ▁AT
- LL
- ▁WELL
- ▁GET
- ▁WHERE
- VERY
- ▁SOME
- ▁PEOPLE
- ▁ALL
- ▁MEAN
- ▁PLEASE
- ▁TIME
- ▁WHO
- ▁GOT
- ▁WELCOME
- ▁MAKE
- ▁COME
- ▁MEET
- ▁NEW
- ▁LOT
- ▁MOM
- ▁SAID
- ▁SHOULD
- ▁HAPPY
- ▁HIS
- ▁BUSY
- ▁BYE
- ▁QUESTION
- ▁SAY
- ▁TAKE
- ▁MORE
- ▁SORRY
- ▁IDEA
- ▁OUT
- ▁FINE
- ▁PLAY
- ▁ANY
- ▁AGAIN
- ▁BECAUSE
- ▁FROM
- ▁AN
- ▁WHEN
- ▁TRY
- ▁HAS
- ▁TODAY
- ▁READY
- ▁HOPE
- ▁GIVE
- ▁BIG
- ▁FRIEND
- ▁WRITE
- ▁EAT
- ▁ONE
- ▁BAD
- ▁MUCH
- ▁SOON
- ▁MANY
- ED
- ▁THEM
- ▁ANGRY
- ▁LATER
- ING
- ▁MAYBE
- ▁DAD
- ▁FIND
- ▁DOWN
- ▁WORRY
- ▁SHOW
- ▁COURSE
- ▁DAY
- ▁SOUNDS
- ▁DOES
- ▁STRANGE
- ▁TALK
- ▁FUN
- ▁REMEMBER
- ▁ANYTHING
- ▁BUY
- ▁LETTER
- ▁JUST
- ▁MADE
- ▁READ
- ▁CANNOT
- ▁WANTS
- ▁WOW
- ▁DIDN
- ▁IF
- ▁GLAD
- ▁WAY
- ▁MUST
- ▁SCHOOL
- ▁BOOK
- ▁LOOKING
- ▁TOLD
- ▁NAME
- ▁HEAR
- ▁TOY
- ▁TRUE
- ▁TEACHER
- ▁US
- ▁WORK
- ▁TWO
- ▁SONG
- ▁HARD
- ▁LOVE
- ▁THINGS
- ▁SING
- ▁BETTER
- ▁HOME
- ▁LINKER
- ▁UNDERSTAND
- ▁LOOKS
- ▁KIND
- ▁HOUSE
- LUE
- ▁DRESS
- ▁BY
- ▁BEST
- ▁LONG
- ▁NEWS
- ▁WENT
- ▁HAPPENED
- ▁OLD
- ▁KEEP
- ▁NEXT
- ▁CHECK
- D
- ▁SPECIAL
- ▁USE
- ▁LIKES
- ▁EVERYTHING
- ▁FEEL
- ▁ROBOT
- ▁SAD
- ▁PLEASURE
- ▁JOE
- ▁COOL
- ▁TOMORROW
- ▁LUCK
- ▁DOESN
- ▁BOX
- ▁AROUND
- ▁HOMEWORK
- ▁ALWAYS
- ▁MORGAN
- ▁PUT
- ▁THESE
- ▁GAVE
- ▁HEARD
- ▁WAIT
- ▁PRESENT
- ▁SOMEONE
- ▁PARTY
- ▁BIRTHDAY
- ▁RANDY
- ▁FRIENDS
- ▁MONEY
- ▁DONE
- ▁CAR
- ▁COFFEE
- ▁MUSIC
- ▁BEN
- ▁BEEN
- ▁STILL
- ▁GREEN
- ▁STAR
- ▁PERSON
- ▁WERE
- ▁STORY
- ▁ELSE
- ▁IDEAS
- ▁TOGETHER
- ▁MILK
- ▁WOULD
- ▁SOUND
- ▁THAN
- ▁TALKED
- ▁EVERY
- ▁NEEDS
- ▁SAW
- ▁HAIR
- ▁CHANGE
- ▁WORRIED
- ▁EASY
- ▁FOOD
- ▁DOG
- VE
- ▁CONCERT
- ▁MAKING
- ▁MONSTER
- ▁BOY
- ▁PHOTO
- ▁SCARY
- ▁RED
- ▁BROTHER
- ▁FIRST
- ▁DANCE
- ▁BEFORE
- ▁PRETTY
- ▁DRINK
- ▁WISH
- ▁HARRY
- ▁CALM
- ▁CAT
- ▁WEAR
- ▁BLUE
- ▁MESSAGE
- ▁TRUST
- ▁ONLY
- ▁HAD
- ▁THREE
- ▁AWAY
- ▁MIND
- ▁MAKES
- ▁GRANDMOTHER
- ▁WATCH
- ▁EMMA
- ▁AMY
- ▁TIRED
- ▁CLASS
- ▁MAN
- ▁DAN
- ▁COULD
- ▁BRING
- ▁SMALL
- ▁ANYWAY
- ▁OUR
- ▁ROOM
- ▁AFTER
- ▁BELIEVE
- ▁BOOKS
- ▁TEN
- ▁DEVILMON
- ▁JOB
- ▁OVER
- ▁COMING
- ▁STOP
- ▁FUNNY
- ▁DIANA
- ▁TOYS
- ▁FAST
- ▁MORNING
- ▁NUMBER
- ▁NOTHING
- ▁TOWN
- ▁OPEN
- ▁OTHER
- ▁PHONE
- ▁CARE
- ▁LEAVE
- ▁CONTEST
- ▁WOODY
- ▁THINKING
- Y
- ▁ANOTHER
- A
- ▁ENGLISH
- ▁SICK
- ▁BRAVE
- ▁TROY
- ▁EATING
- ▁SLEEP
- ▁THEIR
- ▁SELL
- ▁DELICIOUS
- ▁OFF
- ▁WATER
- ▁PICTURE
- ▁CAME
- ▁EVERYONE
- ▁PAPER
- ▁PARK
- ▁PAINT
- ▁SHOP
- ▁CREAM
- ▁TV
- ▁BOUGHT
- ▁CAREFUL
- ▁ROBBY
- ▁FOUND
- ▁STONE
- ▁SISTER
- ▁HURRY
- ▁BAG
- ▁WAKE
- ▁SYRUP
- ▁DRAW
- ▁ENERGY
- ▁SHOES
- ▁IMPORTANT
- ▁NEVER
- ▁LISTEN
- ▁WON
- ▁DOOR
- ▁POP
- ▁LAST
- ▁DIFFERENT
- ▁FISH
- ▁SAVE
- ▁HEALTHY
- ▁UNCLE
- ▁NIGHT
- UCH
- ▁PLACE
- ▁DARK
- ▁GUESS
- ▁LATE
- ▁PIE
- N
- ▁PRACTICE
- ▁MONICA
- ▁ANYONE
- ▁READING
- ▁COLOR
- ▁SALLY
- ▁BLACK
- ▁MOVIE
- ▁TROUBLE
- ▁COLD
- ▁STUDY
- ▁LITTLE
- ▁WHITE
- ▁CHEER
- ▁SCARED
- ▁POSTER
- ▁TALKING
- ▁TEACH
- ▁WALK
- ▁CAKE
- ▁INTO
- ▁FIGHT
- ▁ALREADY
- ▁SLEEPY
- ▁STRONG
- ▁OLIVIA
- ▁CALL
- ▁WROTE
- ▁ICE
- ▁OR
- ▁SCOTT
- ▁LIBRARY
- ▁NANCY
- ▁LUMY
- ▁HAT
- ▁YET
- ▁ALEX
- ▁SHORT
- ▁CLOTHES
- ▁YESTERDAY
- ▁FAVORITE
- ▁SWEET
- ▁FIVE
- ▁HOLD
- ▁LUNCH
- ▁PLAYING
- ▁GARY
- ▁HANDS
- ▁LEFT
- ▁ASKED
- ▁CHEESE
- ▁FACE
- ▁BORROW
- ▁SPEAK
- ▁INTERESTING
- ▁MAY
- ▁BEAR
- ▁SIGN
- ▁SHADOW
- ▁FLOWERS
- ▁PINO
- ▁ERIN
- ▁FOREST
- ▁GAME
- ▁MR
- ▁WANTED
- ▁RUN
- ▁SPELL
- ▁PEN
- ▁SHOPPING
- ▁COOK
- ▁DAYS
- ▁BED
- ▁BEAUTIFUL
- ▁MUSEUM
- ▁CLEAN
- ▁REST
- ▁SAME
- ▁DOCTOR
- ▁YOURSELF
- ▁DINNER
- ▁DANGEROUS
- ▁SECRET
- ▁STORE
- ▁TREE
- ▁MIGHT
- ▁MAYOR
- ▁CHARLIE
- ▁PIZZA
- ▁FOUR
- ▁SIR
- ▁SEEN
- ▁TURN
- ▁ENJOY
- ▁CLARA
- ▁ANYTIME
- ▁LIVE
- ▁LOST
- ▁SANDRA
- ▁DURING
- ▁MYSELF
- ▁TALL
- ▁MINE
- ▁CHOOSE
- ▁TOOK
- ▁WAITING
- ▁S
- ▁SUNNY
- ▁SINGING
- ▁ACADEMY
- ▁AHEAD
- ▁HURT
- ▁CLOCK
- ▁PAINTING
- ▁RAN
- ▁ALONE
- ▁USED
- ▁PLAN
- ▁THEATER
- ▁HAND
- ▁WEEK
- ▁CATCH
- ▁SEND
- ▁CUBE
- ▁ERIC
- ▁WOOD
- ▁HOT
- ▁DEVILMONS
- ▁FREE
- ▁STAY
- ▁PROMISE
- ▁RULE
- ▁HUNGRY
- ▁WORKING
- ▁HAPPEN
- ▁VIKI
- ▁FAMILY
- ▁CHICKEN
- ▁FORGET
- ▁YELLOW
- ▁BROWN
- ▁VACATION
- ▁KELLY
- ▁JACK
- ▁SINGER
- ▁HAMMER
- ▁SAYS
- ▁TRAIN
- ▁FIX
- ▁CUTE
- ▁EVEN
- ▁SANTA
- ▁SLEEPING
- ▁BUS
- ▁BARBECUE
- ▁AGREE
- ▁COULDN
- ▁MISS
- E
- ▁GRACE
- ▁TRASH
- ▁BABY
- ▁LUMA
- ▁CHILDREN
- ▁EXCUSE
- ▁DPOP
- ▁OUTSIDE
- ▁ORDER
- ▁MATTER
- ▁RIDE
- ▁SUMMER
- ▁CLOSE
- ▁MOVE
- ▁JUICE
- ▁TOUCH
- ▁CARD
- ▁THOSE
- ▁HAIRSTYLE
- ▁RICH
- ▁BREAK
- ▁ANYMORE
- ▁TRIP
- ▁EYES
- ▁LEARN
- IC
- ▁YOUNGER
- ▁SMELLS
- ▁CHRIS
- ▁ITEMS
- ▁STONES
- ▁CUT
- ▁STUDENT
- ▁CALLED
- ▁SHINE
- ▁ATE
- ▁PERFECT
- ▁BETIA
- ▁MOVING
- LY
- ▁FIRE
- ▁D
- ▁CHRISTMAS
- ▁RUNNING
- ▁LINE
- ▁JACKET
- ▁WHICH
- ▁GIFT
- ▁SMILE
- ▁WEARING
- ▁STELLA
- ▁SEVEN
- ▁ANSWER
- ▁YEAR
- ▁MOST
- ▁WENDY
- RA
- ▁BALL
- ▁THING
- ▁FIFTY
- ▁YOUNG
- ▁FRONT
- ▁LIKED
- ▁WINDOW
- ▁BEING
- ▁RICE
- ▁HOBBY
- ▁BRUCE
- ▁ALVIN
- ▁CHAIR
- ▁ELEVEN
- ▁INTERVIEW
- ▁TRUMPET
- ▁DRAWING
- ▁WHILE
- ▁HAV
- ▁NEWSPAPER
- ▁WRITING
- ▁FRUIT
- ▁BEHIND
- ▁EVENT
- ▁HAVEN
- ▁BELLOW
- ▁YEARS
- ▁DIV
- ▁VICTORIA
- ▁SENT
- ▁STYLE
- ▁LUNA
- ▁AUNT
- ▁DREAM
- ▁PICTURES
- ▁LEO
- ▁QUESTIONS
- ▁PRICE
- ▁APPLE
- ▁SCHEDULE
- ▁TABLE
- ▁PLANT
- ▁BELL
- ▁SUSAN
- ▁SHIRT
- ▁GRANDFATHER
- ▁EXPENSIVE
- ▁GUYS
- ▁THOUGHT
- ▁OSCAR
- ▁TIMES
- ▁ACTUALLY
- ▁CHANCE
- ▁PAY
- ▁WASH
- ▁JUGGLING
- ▁JULIA
- ▁MAKEUP
- ▁PIANO
- ▁GOES
- ▁QUIZ
- ▁OFTEN
- ▁THIRTY
- ▁SMART
- ▁WEEKEND
- ▁CHOCOLATE
- ▁BATHROOM
- ▁CANDY
- ▁SPEECH
- ▁FEELING
- ▁RADIO
- ▁HECTOR
- ▁KNOWS
- ▁GRANDMA
- ▁SEEM
- ER
- ▁START
- ▁PENCIL
- ▁SUNDAY
- ▁WORD
- ▁MOUSE
- ▁PLAYGROUND
- ▁BREAD
- ▁MAGIC
- ▁CD
- ▁BROKEN
- ▁COLIN
- ▁DIRTY
- ▁MOTHER
- ▁DESK
- ▁BORING
- ▁SOUP
- ▁ONCE
- ▁WORKED
- ▁COUNT
- ▁EXCITED
- ▁PARADE
- ▁GUITAR
- ▁PM
- ▁FINISH
- ▁BLOCK
- ▁FISHING
- ▁VOICE
- ▁ROGER
- ▁WORKS
- ▁PLAYER
- ▁GLASSES
- ▁LAB
- ▁SIGH
- ▁LOVES
- ▁MODEL
- ▁EXERCISE
- ▁O
- ▁POINT
- ▁SWIMMING
- ▁MARKET
- ▁NOTE
- ▁SECOND
- ▁LUCKY
- ▁BROKE
- ▁CAVE
- ▁SHALL
- ▁KID
- ▁HANG
- ▁MICHAEL
- ▁DANCING
- ▁COM
- ▁MASK
- TING
- ▁KYLE
- ▁FRIDAY
- ▁MELOD
- ▁DOUGLAS
- ▁ENOUGH
- ▁LEARNED
- ▁ALICE
- ▁NEWSPAPERS
- ▁NEAR
- ▁GIRL
- ▁LAURA
- ▁BANK
- ▁ORANGE
- ▁HEART
- ▁SNACKS
- ▁BANANA
- ▁AFRAID
- ▁NOISE
- ▁AARON
- ▁SIDE
- ▁POSSIBLE
- ▁ISN
- ▁UPSET
- ▁KATHY
- ▁ENTER
- ▁STATUE
- ▁FAVOR
- ▁CAPSULE
- ▁CLUB
- ▁BORED
- ▁STREET
- ▁FAR
- ▁BROUGHT
- ▁HENRY
- ▁BRIAN
- ▁FLOOR
- ▁RECORD
- ▁SUN
- ▁BORN
- ▁GONE
- ▁ELEPHANT
- ▁FATHER
- ▁BEAT
- ▁MISTAKE
- NY
- ▁MEGAN
- ▁JIN
- ▁CARL
- ▁FACTORY
- ▁HORSE
- ▁STANLEY
- ▁WIN
- ▁AFTERNOON
- ▁LIVED
- ▁HIGH
- ▁LEAVING
- ▁MINUTES
- ▁WALL
- ▁SURPRISE
- ▁DAVID
- ▁TWENTY
- ▁BIRD
- ▁NICK
- ▁REASON
- ▁OWN
- ▁STEVE
- ▁LADY
- ▁COMES
- ▁STATION
- ▁DOLL
- ▁JADE
- ▁STAND
- ▁FAMOUS
- ▁PLAYED
- ▁TSHIRT
- ▁HUEY
- ▁SEA
- ▁SIX
- ▁REPORT
- ▁POPULAR
- ▁PICK
- ▁TONY
- ▁TINA
- ▁KIDS
- ▁WEATHER
- ▁TREES
- ▁TIFFANY
- ▁WONDERFUL
- ▁RING
- ▁SOMEWHERE
- ▁LIGHT
- ▁NOSE
- ▁AUDREY
- ▁CAMERA
- ▁GARDEN
- ▁SOCCER
- ▁PIG
- ▁FRESH
- ▁NOBODY
- ▁AMANDA
- ▁SURPRISED
- ▁STOPPED
- ▁CITY
- ▁KOREAN
- ▁HISTORY
- ▁STUDENTS
- ▁COOKING
- L
- ▁LOUD
- ▁LOSE
- ▁PINK
- ▁LIE
- ▁CRAYONS
- ▁HEALTH
- ▁HANDWRITING
- ▁JOIN
- ▁THROW
- ▁INFORMATION
- ▁DIFFICULT
- ▁SOMETIMES
- ▁BIKE
- ▁WOMAN
- ▁FLOWER
- ▁WORDS
- ▁GHOST
- ▁RICKY
- R
- ▁TEETH
- ▁SAYING
- ▁PIECE
- ▁DR
- ▁CHANGED
- ▁SIT
- ▁ARTICLE
- ▁ARM
- ▁BECOME
- ▁MONKEY
- ▁YEAH
- ▁JUDY
- ▁FOLLOW
- ▁ALSO
- ▁GAMES
- ▁BAND
- ▁COMPUTER
- ▁ANDRE
- ▁EATS
- ▁MATH
- ▁EXACTLY
- ▁ART
- ▁JUMP
- ▁FOODS
- ▁PRESENTS
- ▁RABBIT
- ▁SMELL
- ▁HEAVY
- ▁SWIM
- ▁RICHARD
- ▁GRASS
- ▁BOTHER
- ▁PANTS
- ES
- ▁ALMOST
- ▁HELPING
- ▁ZOO
- ▁SHOULDN
- ▁FAN
- ▁EGGS
- ▁ELLA
- ▁RESTAURANT
- ▁CHIPS
- ▁BIGGER
- ▁MONDAY
- ▁CATS
- ▁STUDYING
- ▁TONIGHT
- ▁BRADY
- ▁SERIOUS
- ▁FORGOT
- ▁VISIT
- ▁BUILDING
- ▁SET
- ▁HANDSOME
- ▁CLAUS
- ▁RALPH
- ▁COMPANY
- ▁SEAT
- ▁ANDREW
- ▁WITHOUT
- EN
- ▁MEAT
- ▁BOARD
- ▁CLASSES
- ▁FLY
- ▁BIT
- ▁ANGELA
- ▁POLICE
- ▁BET
- ▁FINISHED
- ▁EITHER
- ▁SKY
- ▁POLIA
- ▁EIGHT
- ▁AMAZING
- ▁INSIDE
- ▁SATURDAY
- ▁DINOSAUR
- ▁DEVERYTHING
- ▁BRUSH
- ▁VIVIEN
- ▁BREAKFAST
- ▁QUICKLY
- ▁HEAD
- ▁CAROL
- ▁EACH
- ▁BANANAS
- ▁JAZZ
- ▁OWEN
- ▁LEAVES
- ▁HELPED
- ▁WINTER
- ▁REAL
- ▁TRUTH
- ▁RIVER
- ▁ROAD
- ▁ANNA
- ▁INTERESTED
- ▁EVERYBODY
- ▁HIMSELF
- ▁TAKES
- ▁LADDER
- ▁BOTH
- ▁CLASSROOM
- ▁STUDIED
- ▁HALL
- MAS
- ▁STARTED
- ▁THO
- ▁REFUND
- ▁EARLY
- ▁MARK
- ▁TRIED
- ▁CRY
- ▁CUP
- ▁DEAL
- ▁LEGS
- ▁PARTNER
- ▁NINE
- ▁MONTH
- ▁CRYSTAL
- ▁MRS
- ▁WHOM
- ▁QUIET
- ▁TICKET
- ▁TRYING
- ▁JELLY
- ▁TEST
- ▁OFFICE
- ▁BICYCLE
- ▁HOSPITAL
- ▁POOL
- ▁DOGS
- ▁LIVES
- ▁NOISY
- ▁TASTE
- ▁FEET
- ▁PASTA
- ▁HANS
- AL
- ▁PAST
- ▁PRIZE
- ▁KEY
- ▁COUPON
- ▁TIMMY
- ▁AREN
- ▁MEMO
- ▁TEACHE
- ▁PRACTICING
- ▁ANIMAL
- ▁MOUTH
- ▁WORLD
- ▁UNDER
- ▁WATCHING
- ▁FELL
- ▁DRIVE
- ▁BEACH
- ▁CLEAR
- ▁JOKES
- ▁GAVIN
- ▁ADD
- CLOCK
- ▁HELPER
- ▁JULIE
- ▁WEIRD
- ▁SINCE
- ▁MILLER
- ▁TIE
- ▁FRUITS
- ▁HOUR
- ▁ANIMALS
- ▁TWICE
- ▁WARM
- ▁LARGE
- ▁UNTI
- ▁JAMES
- ▁DOLLARS
- ▁STORIES
- ▁MEAL
- ▁APPLES
- ▁CRYING
- ▁DIET
- ▁HEADPHONES
- ▁MEMORI
- ▁COMPLIMENT
- ▁TRIANGLE
- ▁DIARY
- ▁TOWER
- ▁EYE
- ▁SALE
- ▁BUILT
- ▁CARROT
- ▁ORDERED
- ▁ITEM
- ▁SLOW
- ▁NAOMI
- ▁TUESDAY
- ▁SENSE
- ▁PARENTS
- ▁GIV
- ▁BUSINESS
- ▁EVER
- ▁TYLER
- ▁FORWARD
- ▁CELL
- ▁SHUT
- ▁COAT
- ▁PRINCE
- ▁HATE
- ▁PUPPET
- ▁FULL
- ▁WOULDN
- ▁TERRIBLE
- ▁CARDS
- ▁MAP
- ▁STAMP
- ▁SNACK
- ▁SNOW
- ▁RUBY
- ▁SLOWLY
- ▁EDDY
- ▁EASILY
- ▁LAZY
- ▁BLOCKS
- ▁EARS
- ▁COLORS
- ▁TTEOKBOKKI
- ▁CAREFULLY
- ▁MARRIED
- ▁VILLAGE
- ▁HEADACHE
- ▁MOUNTAIN
- ▁PETER
- ▁FAT
- ▁MARRY
- WEEN
- ▁RYAN
- ▁DISHES
- ▁JIM
- ▁FIELD
- ▁CINDY
- ▁FEW
- ▁STARS
- ▁UMBRELLA
- ▁GROW
- ▁FROG
- ▁RULER
- ▁BASKETBALL
- ▁PART
- ▁ORLANDO
- ▁CORRECT
- ▁GRANDPA
- ▁ADVICE
- ▁ARMS
- SE
- ▁PHOTOS
- ▁KICKBOARD
- ▁JACOB
- ▁DANGER
- ▁BOOTS
- ▁GIANT
- ▁BATH
- ▁VISITOR
- ▁PROMISED
- ▁SNAKE
- ▁GLASS
- ▁RAISE
- ▁SPICY
- ▁TURNED
- ▁MEETING
- ▁VIOLIN
- ▁MINUTE
- ▁DAISY
- ▁BUTTON
- ▁OTHERS
- ▁DELIVERY
- ▁WASN
- ▁JOGGING
- ▁SOFA
- ▁FINGERS
- ▁NICOLE
- ▁TALLER
- ▁RUNS
- ▁BENJAMIN
- ▁GOLD
- ▁LUCAS
- ▁SNOWMAN
- ▁LOVED
- ▁SANDWICH
- ▁STRAIGHT
- ▁AGAINST
- ▁BALLOONS
- ▁KEPT
- ▁CLOSED
- ▁PENS
- ▁MAX
- ▁LEG
- ▁FILL
- ▁QUIT
- ▁ANYBODY
- ▁JEFF
- ▁ANN
- ▁EVAN
- ▁MISSED
- ▁TAEKWONDO
- ▁JOY
- ▁PUSH
- ▁WOODWARD
- ▁ROSS
- ▁LISA
- ▁PULL
- ▁NECTAR
- ▁VASE
- ▁RABBITS
- ▁BOW
- ▁BUGS
- ▁SAFE
- GETTING
- ▁CASH
- ▁LAMP
- ▁DOLLS
- ▁YUMMY
- ▁MEDICINE
- ▁SPORTS
- ▁ENDS
- ▁BASEBALL
- ▁THROUGH
- ▁CENTER
- ▁FIGHTER
- ERS
- ▁PACKAGE
- ▁WORMS
- ▁SHAPE
- ▁DISAPPOINTED
- ▁PHILLIP
- ▁DINOSAURS
- ▁SALAD
- ▁HAMBURGER
- ▁COOKIES
- ▁PASS
- ▁CHEAP
- ▁STAGE
- ▁COLORED
- ▁TYPE
- ▁EVENING
- ▁CRIED
- ▁SHOWER
- ▁WALLET
- ▁FIFTEEN
- ▁HERO
- ▁USUALLY
- ▁GATE
- ▁TEAM
- ▁PLANE
- ▁DRESSES
- ▁SOLD
- ▁CRAYON
- LE
- ▁HIDE
- ▁BODY
- ▁MEN
- ▁HAIRSTYLES
- ▁BOAT
- ▁WONDER
- ▁RAIN
- ▁FEELS
- ▁NERVOUS
- ▁CHILD
- ▁MIRROR
- ▁BUG
- ▁LONGER
- ▁LOUIS
- ▁AIR
- ▁STOMACHACHE
- ▁ASKING
- ▁OWNER
- ▁KNEW
- ▁BELT
- I
- ▁MAGAZINE
- ▁HOP
- ▁SUGAR
- ▁END
- ▁TAKING
- ▁LIGHTS
- ▁EMPTY
- ▁PUPPY
- ▁DUCK
- ▁SUPERMARKET
- ▁APARTMENT
- ▁ADDRESS
- ▁MACHINE
- ▁JASON
- ▁CARRY
- ▁DRY
- ▁EXCITING
- ▁BOTTLE
- ▁RIDING
- ▁CHARCOAL
- ▁TRAVIS
- ▁UGLY
- ▁CAUGHT
- ▁PROBAB
- ▁PROJECT
- ▁LISTENING
- ▁JUGGLE
- ▁ROPE
- ▁BILL
- ▁HOURS
- ▁MOLLY
- ▁SOPHIE
- ▁WEARS
- ▁LIFE
- ▁CAFE
- ▁HURTS
- ▁RELAX
- ▁TED
- ▁COPY
- ▁COTTON
- ▁ALONG
- ▁OFFER
- ▁DATE
- ▁LI
- ▁YOUTUBE
- ▁JOKE
- ▁BARREL
- ▁DIED
- ▁SINGS
- ▁SEVERAL
- ▁TALENT
- ▁CARTER
- ▁PASSWORD
- ▁CASE
- ▁SCISSORS
- ▁YORK
- ▁FANTASTIC
- ▁CLOUDY
- ▁ROUND
- ▁BUILD
- ▁PRINCESS
- ▁RAINY
- ▁GRAPES
- ▁SKIRT
- ▁LION
- ▁FASTER
- ▁FASHION
- ▁AD
- ▁EXPLAIN
- ▁DOCK
- ▁MATCH
- ▁BOMB
- ▁STADIUM
- ▁WOODS
- ▁FALL
- ▁MAD
- ▁TRUCK
- ▁STEP
- ▁ANSWERS
- ▁KIDDING
- ▁MOON
- ▁BEAN
- ▁PICKED
- ▁LESSON
- ▁KNOWN
- ▁HAPPENING
- ▁BLUEBERRIES
- ▁SANDWICHES
- ▁BUTTER
- ▁BEDROOM
- ▁ABOVE
- ▁LEGO
- ▁HELENA
- ▁FOOTPRINT
- ▁SHIP
- ▁TAP
- ▁HILL
- ▁CHURCH
- ▁GOODBYE
- ▁LEMON
- ▁HUNDRED
- ▁COWARD
- ▁ARRIVED
- ▁WATERMELON
- ▁BOXES
- ▁FINALLY
- ▁MAIN
- ▁KEVIN
- BINGO
- ▁BONES
- ▁SPOKE
- ▁DONUTS
- ▁HENNA
- ▁LETTERS
- ▁PAM
- ▁LESS
- ▁WEDDING
- ▁POCKET
- ▁SHY
- ▁NOWHERE
- ▁MIC
- ▁NAMES
- ▁SONGS
- MED
- ▁DECIDED
- ▁KITCHEN
- ▁SHINING
- ▁LOVELY
- ▁SEASON
- ▁STEAK
- ▁DRUM
- ▁TEDDY
- ▁SHINY
- ▁GIRLS
- ▁AUDITION
- ▁ACTING
- ▁NECK
- ▁ROSA
- ▁SNEAKERS
- ▁SHOE
- ▁QUITE
- ▁HOTEL
- ▁LEATHER
- ▁WIND
- ▁COUSIN
- ▁JANET
- ▁ONIONS
- ▁DEAD
- ▁PROUD
- ▁PET
- ▁HELPFUL
- ▁TOILET
- ▁FORTY
- ▁JAKE
- ▁BUTTERFLY
- ▁KICK
- ▁BIRDS
- ▁ABROAD
- ▁TEA
- ▁STARTS
- ▁MEALS
- ▁AIRSHIPS
- ▁SOFT
- ▁MATT
- ▁BLANKET
- ▁WINDY
- ▁PLAYS
- ▁COVER
- ▁WEIGHT
- ▁PURPLE
- ▁HIDING
- ▁TAGS
- ▁F
- ▁WHATEVER
- ▁AIRSHIP
- ▁LIVING
- ▁MAT
- ▁KINDERGARTEN
- ▁POND
- ▁LAUNDRY
- O
- ▁NOTEBOOK
- ▁HELEN
- ▁SWEATER
- ▁TEACHING
- ▁FAULT
- ▁SQUARE
- ▁HONEST
- ▁LOUDER
- CAME
- ▁3
- ▁DROP
- ▁GUY
- ▁GIRLFRIEND
- ▁RAINING
- ▁SPIDER
- ▁FLYER
- ▁WATCHED
- ▁B
- ▁LOW
- ▁COUSINS
- ▁OLDER
- DY
- ▁ROCK
- ▁MOMENT
- ▁SHEET
- ▁LAUGH
- ▁BLUEBERRY
- ▁NEIGHBORHOOD
- ▁GRADE
- ▁STICKER
- ▁OPENING
- ▁ALRIGHT
- ▁OFFICER
- ▁PI
- ▁WEDNESDAY
- ▁BITE
- ▁CONTINUE
- TIME
- ▁SAIN
- ▁COSTUME
- ▁MOVED
- ▁BOOKCASE
- ▁DENTIST
- ▁STOPS
- ▁SAM
- ▁APRIL
- ▁THIRSTY
- ▁MOOD
- ▁PEA
- ▁ENTRY
- ▁SERVICE
- ▁ABLE
- ▁FRIED
- ▁W
- ▁FLASH
- ▁KATRINA
- ▁REPAIR
- ▁TI
- ▁GIMBAP
- NDA
- ▁ANNIVERSARY
- ▁NAMED
- ▁WRITTEN
- ▁CUSTOMERS
- ▁COLLECT
- ▁BONGOS
- ▁EGG
- ▁BAT
- ▁RIBS
- ▁SAT
- ▁RETURN
- LIGHT
- BACK
- CA
- NESS
- ▁FACES
- ▁CALLING
- ▁HOLIDAY
- ▁HOLE
- ▁MILLION
- ▁DELIVER
- ▁10
- ▁TAXI
- ▁HASN
- ▁MINDS
- ▁DONALD
- ▁MISTAKES
- ▁SPRING
- ▁MENTION
- ▁NEITHER
- ▁TOWEL
- ▁BEANS
- ▁WILLIAM
- ▁BRIGHT
- ▁STOMACH
- ▁CANDIES
- ▁BURGERS
- ▁FEAR
- ▁DECIDE
- ▁FEVER
- ▁FANS
- ▁STUDIO
- ▁LIAR
- ▁BREAKING
- ▁SLEPT
- ▁TAIL
- ▁BURGER
- ▁MOVIES
- ▁SMOKE
- ▁DANIEL
- ▁WAITER
- ▁PENCILS
- ▁CROSS
- ▁KOREA
- ▁GUARD
- ▁LEARNING
- ▁SUBWAY
- ▁CARS
- ▁SKIP
- ▁MIX
- ▁JEANS
- ▁LIST
- ▁POST
- ▁TRAVEL
- ▁BORROWED
- ▁AWESOME
- ▁RECORDER
- ▁FLOUR
- ▁COW
- ▁CAMPING
- ▁DRIVING
- ▁FELT
- ▁WINNER
- ▁CHARACTER
- ▁BALLOON
- ▁RIDDLE
- W
- FUL
- ▁NECKLACE
- ▁GLOVES
- ▁CHANGING
- ▁CRACKED
- ▁DROPPED
- ▁ROBERT
- ▁BAKERY
- ▁GRILL
- ▁INVITED
- ▁LAND
- ▁PORK
- ▁TELEPHONE
- ▁SKI
- ▁GUEST
- ▁AMBER
- ▁SHARP
- ▁KITE
- ▁DELI
- ▁MART
- ANNA
- ▁CIRCLE
- ▁FLYING
- ▁SHAKE
- ▁DANCER
- ▁POLICEMAN
- ▁DESSERT
- ▁SHOCK
- ▁BLOOD
- ▁MENU
- ▁BUMP
- ▁NOVEL
- ▁SKIN
- ▁SHOULDERS
- ▁MICHELLE
- ▁CROSSED
- ▁TICKETS
- ▁DRANK
- ▁OUTFIT
- ▁LAKE
- ▁PAINTER
- ▁ALIEN
- ▁RAINBOW
- ▁WORE
- ▁BAR
- ▁BROTHERS
- ▁DISH
- ▁SIMILAR
- ▁DISPLAY
- ▁GIRAFFE
- ▁FANCY
- ▁THIEF
- ▁HALLWAY
- ▁WAVE
- ▁CARROTS
- PE
- ▁ELDER
- ▁SOMEBODY
- ▁TRAFFIC
- ▁ACTOR
- ▁RUMORS
- ▁CHOSE
- ▁CAUS
- ▁DRESSED
- ▁ROSE
- ▁LYING
- ▁PANDA
- ▁PEAR
- ▁SUGGEST
- ▁DECISION
- ▁NOISES
- ▁TAKEN
- ▁GARLIC
- ▁CHINESE
- ▁ITCHY
- ▁SWORD
- ▁WAITED
- ▁NONE
- ▁SIZE
- ▁ACCEPT
- ▁CAPTAIN
- ▁GRAY
- ▁IDOL
- ▁SMALLER
- ▁USUAL
- ▁THOUSAND
- ▁LONELY
- ▁RETURNED
- ▁JENNY
- ▁PRACTICED
- ▁NEEDED
- ▁PAIN
- ▁RAP
- ▁THIN
- ▁EVERYWHERE
- ▁SUIT
- ▁BUSH
- ▁SON
- ▁COMPLIMENTS
- ▁FAILED
- ▁RUG
- ▁PAID
- ▁MANGO
- ▁BOYFRIEND
- ▁SCARF
- ELA
- ▁CROWD
- ▁ONLINE
- ▁GREW
- ▁SOCKS
- ▁SEAGULLS
- ▁USING
- ▁MELTED
- ▁OIL
- ▁ADULTS
- ▁KATE
- ▁WHISTLING
- ▁PRAY
- ▁POOR
- ▁SAUCE
- ▁PACKED
- ▁HATS
- ▁BUYING
- ▁AGO
- ▁SCIENCE
- ▁TUNNEL
- ▁DRESSING
- ▁MISSING
- ▁FESTIVAL
- ▁THURSDAY
- ▁PAIR
- ▁SITTING
- ▁SUITCASE
- ▁SHAPES
- ▁WILLY
- ▁HUGE
- ▁SHOUTED
- EVER
- ▁FAIR
- ▁TASTES
- ▁CAFETERIA
- ▁BINGO
- ▁BEGINS
- ▁DOLLAR
- ▁GRILLING
- ▁ALIVE
- ▁DINO
- ▁LIFT
- ▁TOP
- ION
- ▁STUFF
- ▁FROZEN
- ▁ACROSS
- ▁SEOUL
- ▁FRIES
- ▁TAUGHT
- ▁VIDEO
- ▁CREDIT
- ▁HAPPENS
- ▁RACE
- ▁TOUR
- ▁SPAGHETTI
- ▁SWING
- ▁INVITATION
- ▁COUNTRYSIDE
- ▁STAIRS
- ▁HIGHER
- ▁RANGER
- BAG
- ▁PULLED
- ▁LIPSTICK
- ▁VALLEY
- ▁NAP
- ▁FUTURE
- ▁SILENT
- ▁SPEAKER
- ▁GIVEN
- ▁JUMPING
- ▁AUTUMN
- ▁HOLDING
- ▁BOB
- ▁PLANNING
- ▁SUPPOSE
- ▁CLUES
- ▁ANSWERED
- ▁STICK
- ▁WASHED
- ▁CURLY
- ▁RUINED
- ▁SMILING
- ▁UNHAPPY
- ▁KIMBAP
- ▁CAUSE
- ▁CHUNKMONS
- ▁REPEAT
- STOOD
- ▁8
- ▁SHEEP
- ▁LOUDLY
- ▁SLIDE
- ▁KING
- ▁LIME
- ▁SKATING
- ▁SERVE
- ▁SAND
- ▁POWER
- ▁MUSICIANS
- ▁RESTROOM
- ▁SOMEDAY
- ▁GYM
- ▁GOD
- ▁COOKIE
- ▁NUMBERS
- ▁WARNING
- ▁CLASSMATE
- ▁COMPLAIN
- ▁LAUGHED
- ▁BEES
- ▁SAFELY
- ▁DESIGNER
- ▁ORANGES
- B
- ▁RETURNS
- ▁SPEAKING
- ▁GINA
- ▁MARTI
- ▁FEELINGS
- MAN
- ▁TULIP
- ▁BAZAAR
- ▁EMAIL
- ▁STRAWBERRY
- ▁PRESS
- ▁SALT
- ▁PHEW
- ▁COWS
- ▁ENTRANCE
- ▁LEAF
- ▁PAN
- ▁SOUR
- ▁DISEASE
- ▁OPENED
- ▁LUGGAGE
- ▁SWIMSUIT
- ▁PASSED
- ▁ALISON
- ▁SHOVELS
- ▁SENTENCES
- ▁GROUND
- ▁STAYING
- ▁SALES
- ▁JAM
- ▁WRAP
- ▁LATELY
- ▁SHRIMP
- ▁TWELVE
- ▁CHEAPER
- ▁CHECKING
- ▁SEAWEED
- ▁LO
- ▁TURTLES
- ▁DNN
- ▁WHE
- ▁ACT
- ▁LIZARD
- ▁SUCCEED
- ▁STRING
- ▁BASKET
- ▁HINT
- ▁VEGETABLES
- ▁FOOL
- ▁SHOT
- ▁ADULT
- ▁GREG
- ▁TASTY
- ▁FARM
- ▁LIPS
- ▁STARFISH
- ▁NAILS
- C
- ▁FR
- ▁TEARS
- ▁SUPERSTAR
- ▁CLEANS
- ▁HEAT
- ▁SILLY
- ▁WIG
- ▁BELLA
- WOKE
- ▁5
- ▁BOYS
- IVA
- ▁IMAGINE
- ▁LAUGHING
- ▁WASHING
- ▁FLAT
- ▁STICKERS
- ▁PRETTIER
- ▁KILL
- ▁FLIGHT
- ▁WOMEN
- ▁MOMMY
- ▁CAMP
- ▁MEMBERS
- ▁CUSTOMER
- ▁E
- ▁SINGERS
- 'ON'
- ▁CONTROL
- ▁TIGER
- ▁ZEBRA
- ▁IMPOSSIBLE
- ▁CONSOLE
- ▁CLUE
- ▁FOLD
- ▁BEE
- ▁ANDY
- ▁SEATS
- ▁POUND
- ▁SANG
- ▁DIAMOND
- ▁BATS
- ▁ARTIST
- ▁BABIES
- ▁GARAGE
- ▁INSTEAD
- ▁OLDFASHION
- ▁GIFTS
- ▁RODE
- BIG
- ▁MOUNTAINS
- ▁THUNDER
- ▁DONKEY
- ▁PIGEON
- ROOM
- ▁WORSE
- ▁HAMBURGERS
- ▁ERASER
- ▁TAMBOURINE
- ▁BREATH
- ▁ANNOYED
- ▁HALLOWEEN
- ▁KNOCK
- ▁STUPID
- ▁BANDAGE
- ▁PINEAPPLE
- OUT
- ▁SALTY
- ▁POTATO
- ▁MILES
- ▁COMMENT
- ▁TREATED
- ▁EAR
- ▁SLEDDING
- ▁VIOLET
- ▁BOTTLES
- ▁BRILLIANT
- ▁AUNTIE
- ▁SPEND
- ▁REACH
- ▁PAYING
- ▁APOLOGIZE
- ▁CORNER
- ▁FORGIVE
- ▁RELIEF
- ▁BEHAVE
- ▁DIE
- ▁PRETTIEST
- ▁H
- ▁HEN
- ▁POUR
- ▁NEEDLE
- ▁WORRIES
- ▁LARGER
- ▁CRAZY
- TYFIVE
- ▁DISCOUNT
- ▁HEADED
- ▁TWENTYFIVE
- ▁SOMETIME
- ▁REPORTER
- ▁FEED
- ▁KIMCHI
- ▁TENNIS
- ▁DOLPHIN
- ▁SUNGLASSES
- ▁THREW
- ▁COUNTRY
- ▁HUSBAND
- ▁JAPAN
- ▁TOMATOES
- ▁OK
- ▁POET
- ▁LUKE
- ▁LEND
- ▁LOWER
- ▁SHOVEL
- ▁AMERICA
- ▁BLOSSOMS
- OH
- K
- ▁SAFETY
- TALK
- ▁ASLEEP
- ▁MINER
- ▁PERIOD
- ▁STORYBOOK
- ▁BOWLS
- ▁DOUBT
- ▁MEMORY
- ▁SKINNY
- ▁EARTHQUAKE
- ▁2
- ▁BALLS
- ▁POTATOES
- ▁TROUSERS
- ▁WAR
- ▁FUR
- ▁RUMOR
- ▁CONGRATULATIONS
- ▁EASYGOING
- ▁NURSE
- ▁FLIES
- ▁GROWING
- ▁SMILES
- ▁CHOICE
- ▁ERASE
- ▁COMFORTABLE
- ▁GUIDE
- ▁PE
- ▁CLEVER
- ▁PEACE
- ▁AFTERSCHOOL
- ▁SOAP
- ▁POPCORN
- ▁SUNBLOCK
- ▁INVITE
- ▁AWAKE
- ▁FEMALE
- ▁HIKING
- ▁FOLLOWED
- ▁BUMPER
- ▁FILLED
- ▁HIPPO
- ▁COMEDIAN
- ▁SILK
- ▁COST
- IES
- ▁AWFUL
- ▁SIBLING
- ▁PIES
- ▁BURNING
- ▁CRASH
- ZIPPED
- ▁SPACE
- ▁LYRICS
- ▁HANDMADE
- ▁PER
- ▁ROUGH
- ▁THROWING
- ▁STATIONERY
- ▁WORM
- ▁PAGE
- ▁CLASSMATES
- ▁EXAM
- ▁FINAL
- ▁BLOW
- ▁CHINA
- U
- TH
- ▁BATTER
- ▁HONEY
- ▁MISTAKEN
- ▁DEPARTMENT
- GREAT
- ▁SHIRTS
- ▁COMPETITION
- ▁YOGURT
- MBER
- ▁DRINKS
- ▁WOLF
- ▁ISLAND
- ▁GROCER
- ▁SHARON
- ▁BREATHE
- ▁ANNOYING
- ▁LIED
- ▁SPA
- ▁KANGAROOS
- ▁ALIKE
- ▁PENGUIN
- ▁BRIGHTCOLORED
- ▁4
- ▁MESSAGES
- ▁INVENTION
- ▁WIPE
- BIRD
- ▁PRECIOUS
- ▁FLEW
- ▁CH
- ▁APART
- ▁MIDNIGHT
- ▁SPEN
- ▁SHELLS
- ▁GIN
- ▁NATURAL
- ▁THIRD
- ▁BADLY
- ▁PLATES
- ▁JOSHUA
- ▁MIDDLE
- ▁SWEAT
- ▁TOES
- ▁TIP
- ▁TEASE
- ▁BOOKSHOP
- ▁COUGHING
- ▁GUN
- ▁WASTE
- UMOR
- AR
- ▁SPREAD
- ▁GOAT
- ▁SPROUTS
- ▁BALLET
- ▁SNAKES
- ▁SCRATCHED
- ▁AMONG
- DANGER
- KGO
- NISH
- ▁FEE
- ▁JANE
- ▁TEMPER
- ▁CROWDED
- ▁BONO
- ▁CHEF
- ▁SAMPLE
- ▁LIONS
- ▁RULES
- ▁DREW
- ▁WORTH
- ▁MAGICIAN
- ▁GLUE
- ▁TOUGH
- ▁TOUCHE
- ▁TUNA
- ▁BAKE
- ▁LAUGHTER
- ▁HALF
- ▁HELMET
- ▁UH
- ▁COPIES
- ▁DIFFERENCE
- ▁FORK
- ▁STARTING
- ▁CRIES
- ▁SPROUT
- SNOW
- ▁SCARE
- ▁DRUMS
- ▁PHANTOPIA
- ▁VOUCHER
- ▁FARMER
- ▁CHANGES
- ▁SPILL
- AN
- ▁COMPLETELY
- ▁PRACTICES
- CHAIR
- ▁MISSE
- ▁RACHEL
- ▁SEEK
- EST
- ▁SISTERS
- ▁BLAME
- ▁PACK
- ▁BOIL
- ▁REQUEST
- ▁SH
- ▁WIRE
- ▁POT
- ▁ONION
- ▁CLOSER
- ▁MICE
- ▁SCRATCH
- ▁DUCKS
- THANK
- ▁RECEIVE
- ▁CABBAGE
- ▁SEEDS
- ▁JEJU
- ▁SUDDENLY
- RAY
- ▁KIWI
- ▁POWDER
- ERRY
- ▁MESSY
- ▁RID
- ▁CHAMPION
- ▁ARGUE
- ▁RECIPE
- ▁MICROPHONE
- ▁SCOLDED
- TRY
- ▁STRONGER
- ▁EXPECT
- ▁WEEKS
- AKER
- ▁JUMPED
- ▁RAINS
- ▁OREPHIA
- ▁PIGS
- LOSING
- ▁PRAYING
- ▁DUE
- ▁SOUTH
- ▁PUNCH
- ▁CREATIVE
- ▁FINISHING
- ▁HARMONI
- ▁CLOWN
- ▁SALON
- ▁SINK
- H
- ▁TOOL
- ▁ALARM
- VISION
- GY
- ▁FAIL
- ▁DRAWER
- ▁HAIRBAND
- ▁X
- ▁ARTICLES
- ▁DEEP
- ▁EARLIER
- ▁EXTRA
- ▁DOWNTOWN
- ▁LEFTHAND
- PTER
- ▁NOODLES
- ▁CONSIDER
- ▁ACCOUNT
- ▁DEER
- ▁SEAN
- RABBITS
- TY
- ▁CREAMS
- ▁LUCY
- ▁BOUN
- ▁HORNS
- EMENT
- ▁NOON
- ▁SMILED
- ▁NINETEEN
- ▁TURNS
- ▁MUFFLER
- ▁ROAR
- ▁HARDLY
- ▁SPELLED
- ▁SPOTS
- ▁SHORTS
- ▁JUMPS
- ▁RECENTLY
- ▁STOLEN
- ▁WITHIN
- ▁ENGLAND
- ▁PENDANT
- ▁MARY
- ▁AMUS
- ▁SERIOUSLY
- ▁FALLS
- ▁SPOONS
- ▁SAVED
- ▁STOLE
- ▁STUCK
- ▁G
- ▁DUMPLINGS
- ▁GERMAN
- ▁PLACES
- ▁OCARINA
- ▁QUEENSTEIN
- ▁BRANDON
- ▁DWARFS
- ▁TOFU
- ▁SPRAY
- PARD
- ▁CROSSING
- ▁PIGEONS
- ▁NOTICE
- CE
- LTY
- ▁BASEMENT
- ▁TABLET
- ▁COUPONS
- ▁PROGRAM
- ▁SOCK
- ▁GUI
- ▁NUT
- ▁OLIVE
- ▁PREFER
- ▁MUSHROOM
- ▁FIGHTING
- ▁DENERGY
- ▁STORAGE
- ▁POLITE
- IST
- ▁KICKBOARDS
- GAGE
- ▁DROWN
- ▁MANAGE
- ▁DRIVER
- P
- ▁WEEKENDS
- ▁SHOULDER
- ▁MUD
- ▁SEVENTY
- ALLY
- ▁POSTCARD
- ▁PIECES
- ▁HICCUPS
- ▁CHARACTERS
- ▁CLEANING
- ▁DIS
- ▁JG
- ▁JOSEPH
- ▁TITLE
- ▁CDS
- ▁BOSTON
- ▁BRACELET
- ▁PERMISSION
- ▁STEW
- ▁RAT
- ▁SKATE
- ▁CHEST
- ▁FOOT
- ▁CLIMB
- ▁AUDIENCE
- ▁DUFAR
- ▁GRANDPARENTS
- ▁FIT
- ▁TOUCHING
- ▁ELEPHANTS
- ▁TSHIRTS
- ▁APPOINTMENT
- ▁FOREVER
- ▁STARVING
- ▁LESSONS
- ▁COUPLE
- ▁TOTO
- ▁DRINKING
- ▁ARRIVE
- ▁GREE
- ▁SPOT
- ▁HELD
- ▁EARTH
- ▁DAUGHTER
- ▁SLICE
- ▁CASTLE
- ▁FEEDING
- ▁COVERED
- ▁FAM
- ▁AGE
- ▁AUSTIN
- ▁DEAR
- ▁NATI
- ▁CELEBRATE
- ▁MEATBALLS
- ▁STRETCH
- ▁SOLVE
- ▁USEFUL
- ▁SCAR
- DDING
- ▁ALLERG
- ▁RINGING
- ▁SAILING
- ▁SNOWING
- ▁LATEST
- ▁LIES
- ▁ACADEMIES
- ▁MUSICIAN
- ▁STA
- ▁FROGS
- ▁STOMP
- ▁KEYBOARD
- ▁FAIRY
- ▁CLAP
- ▁HAM
- ▁TOWARDS
- ▁RESERVATIONS
- ▁SHOUT
- SORRY
- ▁PUPPIES
- ▁WEAK
- ▁ORIGINAL
- ▁RESPECT
- ▁TABLES
- ▁COMPUTERS
- ▁TOWELS
- ▁CRAFTSMEN
- ▁ELE
- ▁REPAIRED
- ▁PRINT
- ▁BLOOM
- ▁WISELY
- ▁SCOLD
- ▁TWINKL
- ▁CANCEL
- ▁KIM
- ▁STAINED
- ▁LAP
- ▁DRI
- ▁SHARK
- ▁KANGAROO
- MENTARY
- THEY
- ▁DALLAS
- ▁SEESAW
- ▁WHISPER
- CAL
- ▁DWARF
- ▁SUNDAYS
- ALK
- ▁DOUBLE
- ▁SHAKING
- ▁PREPAR
- ▁YOYO
- ▁SKILLS
- ▁OCTOPUS
- ▁INSTRUMENTS
- ▁MAIL
- ▁ALIENS
- ▁JESSI
- ▁CHERRY
- ▁INCONVENIENCE
- ▁CERTAIN
- ▁BEEF
- CON
- 'OFF'
- ▁GATHERED
- ▁PRODUCTS
- CONVENIENCE
- ▁RESTAURANTS
- ▁MONKEYS
- ▁FIGURE
- ▁QUICK
- ▁GAIN
- ▁PENALTY
- ▁INLINE
- ▁INTRODUCE
- ▁OVERSLEPT
- ▁POL
- ▁HOWEVER
- ▁GORILLA
- ▁MEMBER
- ▁PLU
- ▁ANGER
- ▁AQUARIUM
- ▁GAS
- ELY
- ▁TIES
- ▁PUNISHED
- ▁CUCUMBERS
- ▁TINY
- ▁RISE
- ▁GHOSTS
- ▁WIFE
- MOND
- ▁RARE
- ▁BARN
- ▁SMELLY
- GAN
- ▁REASONS
- ▁BURNED
- ▁ANNOUNCE
- ▁CAPSULES
- ▁PICNIC
- ▁GLOVE
- FF
- RANCE
- ▁TREAT
- ▁JOG
- ▁BULLS
- ▁JJAKGUNG
- ▁PROVE
- ▁BAGS
- ▁RUDOLPH
- ▁MC
- ▁TRICKS
- RIOR
- ”
- ▁HAPPILY
- ▁REMIND
- ▁DIVER
- BE
- ▁HATES
- ▁SPOON
- ▁SIZES
- ▁THROAT
- ▁UN
- CRAFTS
- ▁BRIDGE
- ▁CONFUSED
- DONALD
- KEEPER
- ▁SIBLINGS
- ▁DENNIS
- ▁EMBARRASSED
- ▁PATRICK
- DWARFS
- ▁PREGNANT
- ▁VOTE
- ▁WHIPPED
- ▁10000
- ▁SUPPORT
- ▁TOOTH
- ▁STANDING
- ▁CLOSET
- ▁NEEDLES
- ▁SWEEP
- ▁RAISED
- ▁PEE
- ▁CONTACT
- ▁JEALOUS
- ▁SURVEY
- BOX
- ▁CROSSWALK
- ▁WALKING
- ▁SOP
- ▁SITE
- ▁OWE
- ▁FOURTEEN
- ▁PLANTING
- ▁CHANNELS
- ▁WIGGL
- ▁OURSELVES
- ▁SCENE
- ▁BAS
- ▁LETTUCE
- ▁NICKNAME
- ▁GRABB
- ▁ELEVATOR
- ▁COP
- ▁FALLING
- ▁DESERVE
- ▁FILM
- ▁SOPHOMORE
- ▁WOUND
- ▁PROTEST
- ▁PEACHES
- ▁CHILL
- ▁COURT
- ▁ROOF
- ▁CHARGE
- ▁FINGER
- ▁HANBOK
- ▁TAPDANCE
- ▁JAPANESE
- ▁MELON
- ▁BATTLE
- ▁LEAS
- ▁PARTS
- BATHING
- ▁CRUNCHY
- ▁PAUL
- ▁WHISTLE
- ▁CAKES
- ▁HEAL
- ▁SHELL
- ▁GUM
- ▁CARPENTER
- ▁HEAVILY
- ▁N
- ▁LEMONS
- ▁HARDER
- ▁ROW
- ▁STEAM
- ▁STUDIES
- ▁LOTTERY
- ▁BITTER
- ▁MOW
- ▁EATEN
- ▁SPORT
- ▁SHORTER
- ▁STEAL
- ▁GRADUATE
- ▁PUZZLE
- ▁CEREMONY
- ▁RAINCOAT
- ▁KISS
- HAP
- WAY
- ▁DEPART
- ▁LANGUAGE
- ▁BITTEN
- ▁BUSAN
- ▁L
- ▁TIGHT
- ▁BELOW
- ▁PERFECTLY
- KE
- ▁NATURE
- ▁MISUNDERST
- ▁CLOUD
- ▁DRAG
- ▁CARTOON
- ▁COCONUT
- ▁GOLF
- ▁THIRTEEN
- ▁DYING
- ▁PETE
- ▁MALL
- ▁BIN
- ICAL
- ▁ALIB
- ▁BREEZE
- ▁FRENCH
- ▁DATING
- ROW
- ▁WATERING
- ARD
- ▁DESERT
- ▁PRAISE
- ▁INTERNET
- ▁STRICT
- ▁MOSQUITOES
- TLE
- ▁SKILL
- ▁BEHAV
- ▁KTX
- ▁LONDON
- ▁TASTING
- ▁VAN
- ▁COUGHED
- ▁NICELY
- ▁HARM
- ▁BOOKSHELF
- ▁CRICKET
- ▁EDGE
- ▁PILLOW
- ▁RECTANGLE
- ▁STRESS
- ▁FOOTBALL
- ▁LAW
- ▁CHOPSTICKS
- WHAT
- ▁TWINS
- ▁AUSTRALIA
- ▁LAMB
- ▁MAYO
- ▁DESIGN
- ▁BLEW
- ▁GLORY
- ▁ROCKCLIMBING
- ▁DUTY
- ▁ENTERTAINMENT
- ▁THEMSELVES
- ▁YOG
- ▁BUCKET
- ▁BIRTH
- ▁FALSE
- ▁PATTERN
- ▁THREAD
- ▁SOLDIER
- ▁BATTERY
- ▁KNEES
- ▁HEADS
- ▁DELIVERED
- ROUTE
- ▁SIMPLE
- ▁WATERFALL
- ▁SWITCH
- ▁EFFORT
- ▁UNUSUAL
- ▁SLIPPED
- ▁REG
- ▁SUITS
- ▁CHANNEL
- ▁MINI
- ▁PLASTIC
- ▁RECOMMEND
- ▁RUBBER
- ▁THANKFUL
- ▁ROLL
- ▁SOLV
- ▁CLAPS
- ▁BUD
- ▁CINEMA
- ▁SHELF
- ▁LOSS
- ▁WOMANS
- ▁CANADA
- ▁EXPRESS
- ▁SHARING
- ▁LOOSEN
- ▁CHOCO
- ▁RUNNY
- ▁REPL
- ▁BOWL
- ▁FULLY
- ▁SOMEHOW
- ▁UNIQUE
- ▁CARES
- ▁NOODLE
- ▁JETLAG
- ▁LAPTOP
- ▁TOOTHPASTE
- ▁JON
- ▁AIRPORT
- ▁JOO
- YER
- ▁CAP
- ▁HOLLY
- ▁JOHNSON
- ▁ZERO
- ▁LEADER
- ▁OX
- ▁SQUEEZE
- PY
- GET
- ▁FIN
- ▁ZIP
- ▁SEPTEMBER
- ▁TEMPERATURE
- THIRTY
- ▁GOODLOOKING
- ▁GUAR
- ANTEE
- ▁LOG
- ▁WILD
- ▁BOOTH
- ▁PEPPERS
- ▁FORGOTTEN
- BALL
- ▁AB
- CALORIE
- ▁POLICY
- ICO
- ▁INCLUDED
- ▁LIGHTEN
- ▁BLAMED
- ▁LONGTIME
- OOD
- ▁JEAN
- ▁DECK
- ▁MANNER
- ALTH
- ▁PERSONALLY
- TRUCK
- PT
- ▁GUT
- ▁CRASHED
- ▁FLO
- ▁REACT
- ▁ABSENT
- KYO
- ▁BLUSH
- ▁DONATE
- DOCK
- ▁COMPLAINING
- ▁DESCRI
- ▁GEORG
- ▁RECOVER
- ▁WALNUT
- ▁LUNG
- ▁BUDDY
- ENSE
- ▁PASSES
- ▁PLUM
- HALF
- ▁SE
- ▁TURTLE
- ▁FRANC
- ▁KOALA
- ▁TURKEY
- ▁CARPET
- ▁ANYWHERE
- ▁R
- ▁SKIING
- ▁FOCUS
- ▁HARV
- ▁JANUARY
- ▁PRESIDENT
- ▁TWENTYONE
- ▁WRESTLE
- ▁CANCER
- ▁CHEATING
- ▁HOMEMADE
- ▁WEEKDAY
- ▁K
- THER
- ▁DREAMS
- ▁APPRECIATE
- ▁BRAIN
- ▁SAUSAGES
- SOMETHING
- GAR
- ▁SMOOTH
- ▁SLIM
- ▁FENCE
- JURY
- LIES
- ▁SPIDERS
- EADLINE
- EVEREST
- ▁SCORES
- ▁JOKING
- ▁REJECT
- ▁STEPMOTHER
- ▁CRIM
- ▁DIGGING
- ▁QUEEN
- ▁MALE
- ▁SNORES
- ▁EXPLAINED
- ▁HOUSEWORK
- ▁BEDTIME
- BEAT
- WORKING
- ▁SMELLING
- ▁GRAPE
- ▁INSTRUCTIONS
- ▁SUNSCREEN
- ▁WORKDAY
- ▁HOLES
- ATER
- UP
- RIDA
- ▁VINE
- ▁HERSELF
- ▁NIGHTMARE
- ▁SNAP
- ▁INSU
- ▁BURNS
- GIV
- ▁MOUNT
- ▁NEGATIVE
- ▁ADVANTAGE
- ▁DIFFICULTIES
- ▁7
- ▁REMAINS
- CHECK
- ▁TRAVELING
- ▁IMAGIN
- G
- ▁BENNY
- ▁JOHN
- ▁ATHLET
- ▁COOPE
- ▁DICTIONARY
- ▁HAPPINESS
- ▁RAPPER
- ▁SLIPPERY
- ▁SUNRISE
- ▁TAPDANCING
- ORABLE
- ▁NOTICING
- ▁WAITLIST
- ▁CUCUMBER
- FTH
- ▁GUESTS
- ▁COLLEGE
- ▁STOCK
- HH
- ▁TALE
- POP
- ▁MEXIC
- ▁FREEZER
- ▁REFUSE
- ▁SWIMMER
- ▁THOUGHTFUL
- DIVING
- WORKED
- ▁COURAGE
- ▁ERRANDS
- ▁LISTENED
- ▁GRUM
- ▁WEB
- ▁TWEL
- GED
- ▁CABIN
- ▁REHEARSAL
- ▁SKETCHBOOK
- ▁DAYCARE
- ▁PARTIES
- OBBY
- ▁SEAL
- WHERE
- ▁ROSES
- INE
- ▁ACCIDENT
- ▁PERSONALITY
- ▁SPECIFIC
- ▁RINGS
- ▁BLOOMED
- ▁AW
- YARD
- ▁ENTERED
- ▁BELLY
- ▁FUNNIER
- ▁NARROWMINDED
- USY
- ▁JOURNAL
- ▁JER
- ▁PRICES
- BREAK
- ▁BILLS
- SOLUT
- ▁11
- ▁REFILL
- ▁BAKED
- ▁ALPHABET
- CONNECTED
- ▁GOATS
- ▁WASHE
- ▁CHOP
- PHLE
- ▁NONSENSE
- ▁WADDL
- ▁PETS
- ▁DECORATE
- LUSH
- ▁FORGETTING
- ▁EMILY
- ▁BICYCLES
- ▁SHOWN
- ▁BUCK
- ▁BAIT
- ▁100
- ▁MOVER
- ▁HEL
- ▁WINNING
- ▁ROCKET
- ▁FANG
- ▁CA
- ▁DEPRESS
- ▁BEAUTY
- ▁DAILY
- ▁ENGINEER
- ▁MUFFIN
- ▁WRITER
- ▁OPINIONS
- ▁TRACKS
- ▁PAUSE
- ▁PUZZLED
- URE
- SEY
- ▁WRAPS
- ▁SOCIAL
- ▁GRADES
- ▁WARMLY
- ▁YOYOS
- ▁CHEW
- ▁BULGOGI
- ▁BARKING
- ▁SENTENCE
- ▁THOUGH
- ▁POO
- ALIAN
- ▁EVE
- ICED
- ▁RAIS
- ▁DISTURB
- ▁ITSELF
- ▁ORIGAMI
- ▁TISSUE
- ▁JOHNNY
- ▁BURN
- ▁COOKS
- ▁CANDLE
- ▁OBVIOUS
- ▁SANDPAPER
- ▁SUPPLIES
- ▁CHEWY
- ATIONS
- ▁FLAVOR
- ▁KIWIS
- ▁MASTER
- ▁YELLING
- ▁CUPS
- ▁BL
- LAINE
- ▁STIMULAT
- ▁TIRES
- ▁PRETEND
- ▁CLEANED
- ▁RUSSIA
- ▁FRECKLES
- ▁FART
- ▁CHEETAH
- ▁RUDE
- ▁TRAINS
- ▁LOTTE
- ▁PAGES
- ▁POSTCARDS
- ▁KEYS
- ME
- ▁BOOKSTORE
- ▁HOST
- ▁SHORTCUT
- ▁SHOOTS
- ▁OPINION
- ▁APRON
- ▁COPIED
- LLOWED
- ▁STICKY
- ▁PREPARE
- ▁HEADQUARTERS
- ▁REPAIRS
- ▁WHALE
- ▁POOP
- ▁RESEMBLE
- ▁SHARE
- ▁LOLL
- ▁EXERCISES
- ▁PROGRAMS
- ▁BLINK
- ▁FLAG
- ▁LAY
- ▁FASTEST
- ▁SNEEZE
- ▁ENDED
- J
- ▁MARKER
- HER
- ▁ASSISTANT
- ▁CURRY
- ▁PURSE
- ▁SLIPPERS
- ▁UNDERSTANDING
- ▁PIT
- ▁INDOOR
- ▁CROWN
- ▁CURIOUS
- ▁SYSTEM
- ▁CABLE
- ▁MOSQUITO
- ▁PHARMACY
- ▁EVERLAND
- ▁WINDOWS
- ▁BOOGER
- ▁TIRING
- ▁PAPERS
- ▁PEANUT
- ▁PARDON
- ▁AH
- ▁FOX
- ▁RESELL
- ▁RESULT
- ▁TWIST
- ▁SLED
- ▁TALLEST
- ▁RIBBONS
- ▁RECEI
- ▁SQUIRREL
- ▁CUTLET
- ▁HEIGHT
- ▁HURTING
- ▁TRAP
- ▁WRAPPER
- ITED
- ▁FRIGHTENED
- ▁PATIENT
- ▁CANCELED
- ▁SHELVE
- ▁NET
- OOPS
- ▁MESS
- ▁MERRY
- ▁PLATE
- ▁COMPLAINT
- ▁SITUATION
- ▁PARIS
- ▁STRAW
- ▁DIVIDE
- ▁GOAL
- ▁SHRIMPS
- X
- SPECIAL
- GOTTEN
- F
- ▁COLLECTED
- ▁AFFORD
- ▁HUNG
- ▁CHAMBER
- ▁AIRPLANE
- ▁CHA
- ▁WALLS
- ▁REGULAR
- ▁EXPERIENCE
- ▁PILOT
- ▁250
- ▁LEMONADE
- ▁FURTHER
- ▁RAC
- IN
- ▁SWALLOW
- ▁CLOSING
- ▁CLASSROOMS
- ACK
- ▁RENT
- ▁ADS
- ▁TENTH
- ▁FRY
- ▁HOTDOG
- ▁ANGEL
- ▁PEACH
- ▁HIDDEN
- ▁GOOSE
- ▁SMALLEST
- ▁ROCKS
- ▁COOKED
- ▁CORN
- ▁SIGNS
- ▁ANXIOUS
- ▁LIGHTNING
- ▁SNOWBALL
- ▁BESIDE
- ▁ANTS
- ▁ALLOWANCE
- ▁COUNTRIES
- ▁POUCH
- ▁SLIP
- ▁POEM
- ▁RAMEN
- ▁ROLLING
- ▁PATIENTS
- ▁SCREEN
- ▁PRESENTATION
- ▁CAST
- ▁FLUTE
- ▁HU
- ▁ZEBRAS
- ▁COMPARE
- ▁WIDE
- ▁FORSYTHIA
- ▁SENIOR
- ▁DONATED
- ▁FACTS
- RD
- ▁FOG
- ▁ROLE
- ▁PEARS
- ▁BUTTONS
- COME
- ▁HAIRCUT
- ONDE
- ▁ENV
- ▁CHASED
- THE
- '4'
- ▁TRACK
- ▁STRANGER
- ASOL
- ▁CHIN
- ▁PUBLI
- ▁DUN
- ▁JUNE
- ▁20
- ▁DOUGHNUT
- ▁DADDY
- PORT
- ▁EMBARRASSING
- ▁UNCOMFORTABLE
- ▁FOREHEAD
- ▁RELATIVES
- ▁DOODLE
- ▁GENTLEMAN
- ▁TAPE
- ▁BANKER
- ▁ACTRESS
- ▁SORT
- ▁REDESIGN
- ▁GRADERS
- ▁KICKING
- ▁LA
- UK
- ▁BARBECUING
- ▁BULLY
- RATE
- ▁JUN
- ▁KOREANS
- ▁CORPORATION
- ▁HEAVIE
- ▁IMPROVE
- ▁OCEAN
- ▁LG
- ▁LAYER
- ▁BRIGHTLY
- ▁CRABS
- ▁PAR
- ▁BLANK
- ▁CALENDAR
- ▁CROCODILE
- ▁SALARY
- ▁CHUSEOK
- ▁CUTEST
- ▁NOR
- ▁MYSTER
- ▁BEND
- ▁INCLUDE
- ▁EXCELLENT
- ▁PAINFUL
- ▁SKEWERS
- ▁CHEERING
- SIZE
- BELT
- RCH
- ▁PLEASANT
- ▁PATH
- ▁QUALITY
- ▁STINGS
- ▁REPAIRING
- ▁DELAY
- ▁RIDES
- ▁ELSA
- ▁SECURITY
- ▁TWENTIETH
- ▁PC
- AH
- ▁NOTES
- RAL
- ▁NORMAL
- ▁DIRECT
- ▁CENT
- ▁APOLOGY
- ▁GARBAGE
- ▁GEE
- ▁WATCHES
- ▁SCISSOR
- ▁CULT
- ▁ECONOMY
- ▁SEASHELL
- ▁HA
- ▁HORSES
- ▁WHEELS
- BYE
- ▁HABIT
- ▁VI
- OOKIE
- ▁BAKING
- ▁CHERISH
- ▁JESUS
- ▁KLEA
- ▁PARTICIPATE
- ▁NICER
- ▁LISTING
- ▁SUPP
- IELD
- ▁CRISPY
- ▁EYESIGHT
- ▁TWITCH
- ▁WORST
- ▁GREETING
- ▁DRYER
- ▁LINES
- ▁DEPRESSED
- RENT
- ▁ROLLS
- LAND
- ▁DOCUMENT
- ▁COCKROACH
- ▁TAX
- ▁LIBER
- ▁FRIGHT
- ▁GARDENVIEW
- ▁JAR
- ▁ONESELF
- ▁PELICAN
- ▁RUSH
- ▁BAKER
- ▁EXPLODED
- ▁CARNATIONS
- ▁BUBBLES
- ▁BREAKS
- ▁EUROPE
- ▁EXCHANGE
- ▁SMASH
- ▁TORONTO
- ▁CEO
- ▁BLEEDING
- ▁IMAGINED
- ▁KIL
- ▁POU
- ▁TAB
- ▁CRUS
- OGRAMS
- ▁ALASKA
- ▁FROWNED
- MAIL
- TWINKL
- ▁SINGLE
- ▁INVENT
- ▁ROD
- ▁EMERGENCY
- PORTER
- ▁COMB
- ▁HUG
- TI
- '...'
- SMITH
- ▁AVOID
- ▁JJAKKUNG
- ▁MATERIALS
- ▁LOSES
- ▁LU
- INA
- FREE
- ▁SERV
- ▁FLU
- ▁REEL
- ▁BACKPACK
- ▁REPRINT
- ▁SIXTEEN
- ▁ZENA
- ROL
- ▁AWARD
- ▁TENK
- ▁NETWORK
- ▁WORKER
- ▁REDUCE
- GUE
- ▁PROTECT
- ▁CONCERN
- ▁CRIMINAL
- ▁FIREFIGHTER
- ▁INCHEON
- ▁SUWON
- ▁VIEWER
- OVER
- ▁ELEVATORS
- OR
- ▁IMPRESSED
- ▁SHAME
- ▁STRAP
- ▁YIELD
- ▁WARNED
- ▁HANDOUT
- ▁LUNCHTIME
- URY
- IED
- AY
- WIFE
- GUN
- ▁ISSUE
- RRIE
- ▁SANDCASTLE
- ▁FIGURES
- ▁LOV
- ▁POKE
- ▁FREESTYLE
- ▁CHAIN
- ▁EVERYDAY
- OK
- ALY
- ▁RATING
- ▁SPIT
- ▁SAIL
- ▁AMBULANCE
- ▁ENORMOUS
- ▁SELFCONT
- ▁MEMORIZED
- ▁GIRAFFES
- ▁SNOWS
- ▁PLANTS
- ▁LEAD
- ▁EXHIBITION
- ▁FOUGHT
- ▁MARBLE
- 'YES'
- ▁PICKE
- ▁WRONGLY
- ▁HURR
- ▁CONVERSATION
- ▁DETAIL
- ▁WORRYING
- ▁SAVING
- ▁TU
- ▁SECRETLY
- AWAY
- ▁GROWS
- ▁CONTRA
- ▁SCRAMBLE
- BES
- ▁PROMISES
- ▁CHAIRS
- ▁GOGGLES
- ▁OTHERWISE
- ▁VICTOR
- ▁THORNS
- ▁WORTHWHILE
- ▁HIPPOS
- ▁TRICK
- ▁OBSERVATORY
- ▁SHAMPOO
- ▁COKE
- ▁DRAMA
- ▁DELAYED
- ▁GUTS
- ▁AZALEA
- ▁WRAPP
- TIE
- HEAD
- ▁BIGGEST
- ▁ENEMIES
- ▁PUMPKIN
- ▁DOCUMENTARY
- ▁ATOPY
- ▁COUGH
- ▁TOUCHED
- ▁AWARDS
- EWER
- VER
- ▁BEARS
- ▁CACTUS
- ▁LOCK
- ▁LIT
- ▁SKETCH
- ZEN
- ▁DRAGG
- ▁SQUEEZED
- ▁SCOT
- SHY
- ▁CALCULAT
- ▁APPEARED
- ▁RAINED
- ▁WINGS
- ▁CLOTH
- ▁DIG
- ▁DONGSENG
- ▁SPONGE
- ▁STUBBORN
- ▁WAIST
- ▁FLE
- ▁TAG
- CH
- ▁CR
- ▁UMBRELLAS
- ▁TOOTHBRUSH
- ▁POCKETS
- ▁PAJAMA
- ▁HALLA
- ▁GATHER
- ▁BOSS
- ▁DETERGENT
- ▁DOCUMENTS
- ▁GENEROUS
- ▁TOTAL
- ▁CURTAIN
- ▁PUDD
- ▁THICK
- NSIBLE
- ▁HOLIDAYS
- ▁TICKLES
- FLAVORED
- ▁COVID
- ▁GIFTWRAP
- ▁BLINKING
- ▁JUNG
- HOK
- LEANING
- ▁IDOLS
- ▁DRO
- ▁FOUNTAIN
- ▁PHYSIC
- ▁PRESCRIPTION
- ▁LATTE
- ▁TONGUE
- ▁NA
- WORLD
- ▁SURGERY
- ADLINE
- ▁STUFFY
- ▁WAFFLES
- ▁15
- ▁LOGO
- ▁SHORTCUTS
- ▁RESPECTED
- ▁INVENTIONS
- ▁ARTISTS
- RAFFI
- ▁FOSSIL
- ▁GOLDCREST
- ▁MALTESE
- UGGING
- ▁BUCKWHEAT
- ▁PROFESS
- ▁SQUID
- ▁CORRECTION
- IT
- LOOKING
- ▁GENIUS
- ▁WHALES
- ▁OPPA
- ▁DONKEYS
- ▁ELECTRIC
- ▁FAKE
- ▁JUNIOR
- ▁MEDAL
- ▁SONGPYEON
- ▁MO
- ▁LOCKED
- ▁MEMORIZE
- ▁DIZZY
- ▁CAMELS
- ▁Y
- ▁CARING
- ▁PERFORMANCE
- ▁ERRAND
- ▁STRIPE
- ▁SIL
- ▁REDESIGNED
- ▁TIPS
- SCRIPT
- ▁BISCUIT
- ▁TORN
- ▁BRUSHE
- ▁STREETS
- ▁RELIEVED
- ▁HOPS
- ESSER
- ▁INSTRUMENT
- ▁ADVANCE
- ▁GESTURE
- ▁MUGWORT
- ▁PROMOT
- ▁PIN
- ▁SHAD
- IONAL
- '72'
- ▁HEAVEN
- ▁SLOPE
- ▁HAIRDR
- YOU
- ▁OWNERS
- ▁PLANS
- ▁SUNFLOWERS
- ▁CHIMNEY
- ▁HIPHOP
- ▁FOURTH
- ▁C
- ▁COUNTS
- ▁BARK
- SCOPE
- ▁ATOPIC
- ▁DEATH
- ▁FORMALLY
- ▁TWIN
- ▁QUIETLY
- ▁TEAS
- ▁MIN
- ▁CE
- ▁DEPENDS
- ▁TRANSFERRED
- ▁HANDY
- ▁CLEARLY
- CHOCO
- ▁HOTDOGS
- ▁FROWN
- ▁RUB
- ▁PERFORM
- ▁ATTRACT
- ▁DUST
- ▁REVIEW
- ▁SIGNBOARD
- ▁ENDURE
- ▁RIDD
- CKED
- ▁CIRCLES
- ▁AIRPLANES
- ▁MI
- GING
- Q
- ▁YURI
- ▁30
- ▁OFFICERS
- ▁ALMONDS
- ▁SOLVED
- ▁WEREN
- ▁ALBUM
- ▁UNDERGROUND
- ▁WRINKLES
- IL
- ▁TALES
- SOKCHO
- ▁GROCERIES
- ▁RECEIV
- ▁BARE
- ▁PEEL
- ▁COCKROACHES
- ▁DEEPLY
- ▁STATIONS
- ▁DANCED
- ▁CHUBBY
- ▁SATURDAYS
- ▁WING
- ▁CRAFTSMAN
- ▁OCCASION
- ▁WINE
- ▁TELE
- ▁BLUETOOTH
- ▁DISAPPEARED
- ▁SUBM
- ▁FARTED
- ▁PREPARED
- LIST
- ▁CONDITION
- ▁PORTRAIT
- '23'
- ▁POINTS
- ▁TAMBOURINES
- ▁TEND
- ▁SELFISH
- ▁SUBJECT
- RUPTE
- ▁LICKING
- ▁WATERMELONS
- ▁DIES
- ▁BLOWING
- ▁SOIL
- NIFE
- ▁BLAND
- ▁RECYCLED
- ▁SIXTY
- ▁LENGTH
- ILING
- ▁SURVIVED
- ▁HABITS
- WANT
- ▁GRAND
- ▁SAVORY
- ▁APPLAUSE
- ▁APPLY
- ▁MEANER
- ▁DISEASES
- ▁FRUSTRATING
- ▁NOTIFICATION
- ▁CHEOMSEONGDAE
- ▁BADGE
- ▁ABOARD
- ▁DISNEYLAND
- ▁LEE
- ▁SHARPEN
- ▁KETTLES
- ▁HERESY
- ▁CRAM
- ▁BRONZE
- ▁HARSH
- ▁EBS
- ▁GREY
- ▁POSE
- ▁PICKLES
- ▁LEN
- ▁TIGERS
- ARY
- ▁CLAR
- ▁EDUCATION
- ▁NEIGH
- ▁ADDITION
- ▁REASONABLE
- ▁DUMPING
- ▁SPACES
- ▁LIGHTER
- ▁SPELLING
- Z
- ▁CATCHING
- ▁LEVEL
- ▁UPSTAIRS
- ▁RINK
- ▁HANDLE
- AVING
- ▁BOWED
- ▁BEAUTIFULLY
- ▁FARTS
- ▁BOLT
- ▁FAMILIAR
- BBLE
- DO
- ▁FILE
- ▁TREATMENT
- ▁PASTOR
- ▁EEK
- ▁BLOOMING
- CIAL
- TRAINED
- ▁APPEAR
- ▁KNEE
- ▁WHEEL
- RIAN
- ▁ATTEND
- ▁CONFESS
- ▁DVD
- ▁WITNESS
- ▁BATMAN
- ID
- ▁BANGS
- ▁YARD
- ▁LOTION
- ▁RECYCLE
- ▁PRI
- ▁BURDEN
- ▁SCRA
- ▁VEGETA
- ▁TOENAILS
- SUALLY
- ▁YAM
- FORD
- ▁FORMAL
- ▁POK
- ▁FROZE
- ▁MULTIPLICATION
- ▁SEJONG
- ▁TRIES
- ▁SUNSHINE
- ▁HERBS
- ▁STRIPES
- ▁CLIMBING
- ▁SKIPP
- FFE
- ▁DAMAGE
- ▁RIDICULOUS
- ▁QUACK
- ▁PINNOCHIO
- SIDE
- ▁STANDARD
- ▁TRADITION
- GIANT
- ▁YELL
- ▁SUPER
- ▁OVERREACT
- ▁PERFUME
- ▁UNDERCOOK
- BEC
- ▁MAPS
- ▁PARTNERS
- ▁SPINACH
- ▁TTEOKGUK
- ▁JAJANGMYEON
- ▁DIRECTLY
- VATE
- STEE
- ▁MOUSES
- ▁SNOWED
- ▁IGNORE
- GIFT
- ▁LOCKER
- ▁SURVIV
- ▁P
- BBLES
- DAIRY
- ▁TOOLS
- STAR
- LING
- ▁BB
- ▁ACCESSORIES
- ▁NINTENDO
- ▁BIBIMBAP
- ▁DERMATITIS
- ▁ANNOUNCED
- ▁LICK
- ▁AZALEAS
- ▁PEPPER
- VAS
- ▁BODIES
- ▁EXPAND
- PED
- FLOWING
- ▁MIXED
- ▁GROUP
- ▁SAUSAGE
- ▁CEREAL
- ▁EASIEST
- ▁OVERSLEEP
- ▁SATISF
- ▁150
- ▁BAY
- ▁DIP
- UN
- AK
- ▁COINS
- ▁SURPRISES
- ▁WAK
- OL
- ▁EVILDOING
- ▁EYEBROWS
- ▁HEADBAND
- ▁KETCHUP
- ▁PROPERLY
- ▁STRAWBERRIES
- ▁UNFORTUNATE
- ITY
- LIKE
- ONG
- ▁WISHES
- ▁CONSTRUCTION
- ▁RESEARCH
- ▁RIPPED
- ▁FOREIGNERS
- ▁SANDALS
- ▁GOLDEN
- ▁PERFORMANCES
- ▁STEALING
- HA
- ▁SPARE
- ▁KPOP
- ▁LEASH
- ▁TIGHTLY
- CM
- ▁COMME
- ▁500
- ▁ANCHOVIES
- ▁BANKBOOK
- ▁COVIDNINETEEN
- ▁DEFINIT
- ▁UPRIGHT
- ▁MISSION
- BAL
- PHONES
- HO
- ▁GENERAL
- ▁OVEN
- ▁MARCH
- V
- HU
- ▁GROWN
- ▁BROADCAST
- ▁GANGWONDO
- ▁REFRESHING
- ▁DICE
- ▁RACK
- ▁PERM
- ▁SUITCASES
- ▁16
- ▁ENVELOPE
- ▁HOOKED
- ▁ROOT
- ▁TEXT
- ▁CAGE
- GO
- ▁MUS
- ▁DOUGHNUTS
- ▁WASTING
- ▁BETIAN
- ▁PRESENTING
- ▁BRUISE
- ▁ALOUD
- ▁AUDITORIUM
- ▁BTS
- PLE
- RAISED
- MOTION
- ▁GENTLE
- ONIA
- ▁EASIER
- ▁FONDUE
- ▁SEASICK
- ▁VR
- ▁DOLPHINS
- ▁MATCHES
- UR
- ACHE
- ▁CICADAS
- ▁LEAN
- ▁REPORTS
- YING
- ▁CLOUDS
- ▁WOLVES
- ▁HEEL
- ▁FRESHMAN
- ▁SCREAMED
- ▁RELATIVE
- ARIN
- ▁BUR
- ▁PASTE
- ▁FRIENDLY
- ABLE
- ▁VISITING
- ▁INVIT
- ▁LOUDSPEAKERS
- ▁NNN
- ▁OINTMENT
- ▁SWAN
- CLES
- ▁GARDENING
- ▁HICCUP
- IM
- '0'
- ND
- BA
- ▁JULY
- ▁SEMESTER
- ▁SUSHI
- ▁UNIVERSE
- ▁TOSUN
- ▁PILLS
- ▁TAN
- ▁NEAT
- ▁FEATHER
- ▁ANNEX
- ▁PENGO
- ▁SICKNESS
- ▁CANDLES
- LO
- ▁SCRUB
- ▁SHOOT
- ▁TH
- ▁CRACK
- PLAIN
- ▁FRIDGE
- ▁ANSWERING
- ▁INDOORS
- ▁APOLOGIZED
- ▁COMEDIANS
- ▁WOR
- ▁SPIN
- ▁DRACULA
- ▁DRAGONFLIES
- ▁EXTINGUISHER
- ▁GRADUATION
- ▁LADIES
- ▁EX
- ▁PLANNED
- ▁50
- ▁MILLIONS
- ▁TANGERINES
- ▁DRAWN
- ▁CLEANER
- ▁DECORATIONS
- ▁SPI
- ▁VARI
- ▁DRAGONFLY
- ▁SCENT
- ▁GAYAGEUM
- ▁CL
- ▁MONTHS
- ▁PAJAMAS
- ▁RESTING
- ISE
- ▁BADGES
- WORK
- KY
- ▁ADORES
- ▁COLA
- ▁MOTOR
- ▁PRODUCE
- ▁THOROUGHLY
- ▁VOWELS
- ▁COMMON
- PING
- ▁SUNFLOWER
- ▁FOLDING
- ▁DECORAT
- '8'
- ▁SCREAM
- ▁CONNECT
- ▁AUGUST
- ▁PURPOSE
- ▁PIAN
- ▁CHIMNEYS
- ▁MONDAYS
- JU
- ▁BEETLE
- ▁PEED
- ▁INTEREST
- ▁BAN
- ▁SNOR
- ▁MA
- ▁SEW
- ▁COIN
- ▁HAN
- ▁ALPHABETS
- ▁TONKATSU
- ▁HOPEFULLY
- ▁ICECREAM
- ▁REGULARLY
- ▁GALBI
- ▁CHAS
- ▁REALIZE
- ▁WORKERS
- ▁BOATS
- ▁INTERRUPT
- ▁SUBTRACT
- ▁ORGANIZING
- ▁HISTORIC
- ▁POTTER
- ATION
- ▁CHARGER
- ▁BAL
- ▁SUNLIGHT
- ▁DYE
- ▁SHOELACES
- ▁EVENLY
- RY
- '30'
- BIKE
- ▁CRAWL
- ▁CHOOS
- ▁ROBBINS
- ▁SHOOK
- ▁SPLASH
- ASKIN
- ▁UNTIE
- YMP
- ▁STING
- IOUS
- ▁PA
- ▁CAROLS
- ▁SUDDEN
- ▁MACKEREL
- ▁NOSEBLEED
- ▁SCREW
- ▁HANOK
- TOMS
- ▁STRA
- DAY
- ▁RIBBON
- MILKY
- BEAN
- ▁TOMATO
- ▁NATIONAL
- ▁SPRITE
- ▁PANIX
- ▁WISE
- ZED
- ▁CHEWING
- ▁FOOTS
- ▁SHAKES
- ADA
- 'NO'
- ▁DIFFERENTLY
- SLEEVE
- ▁930
- ▁GYEONGJU
- ▁RAPUNZEL
- ▁ROMANTIC
- ▁FARTHER
- ▁CAPE
- IER
- ETY
- ▁HARDEST
- ▁TURNING
- ▁3000
- GENEROUS
- ▁BOO
- ▁ATTENTION
- ▁DWARVES
- ▁HAKNYEON
- ▁OUTDOOR
- ▁RESORT
- ▁SWOLLEN
- ▁PINCH
- ▁PURE
- STER
- ▁GRAB
- ▁BIO
- ▁HURRICANE
- ▁JUDGE
- ▁LANE
- ▁OINK
- ▁SPRAINED
- ▁THIEVES
- ▁TRAPPED
- BIL
- ▁RANCH
- ▁TWENTYTH
- ▁ANNE
- OLD
- NIGHT
- ▁HEIGHTS
- ▁BRICK
- ▁GRATEFUL
- ▁VITAMIN
- ▁HAMSTER
- ▁USELESS
- ▁INVENTOR
- ▁ULSAN
- ▁PRETENDING
- ▁PANDAS
- GGING
- UL
- AG
- COMING
- ▁HUNT
- ▁REMOVE
- ▁OCTOBER
- ▁SEPARATE
- ▁YAWN
- ▁PALE
- ▁UM
- ▁FLOATING
- ▁CO
- HAVE
- ▁SNOWY
- ▁SHOELACE
- GRAPHY
- ▁MELT
- ▁FISHBONE
- UG
- ▁CHIL
- ▁POOPED
- ▁YUT
- ▁PILL
- '0000'
- ▁SURVIVE
- ▁EXAMIN
- ▁TRU
- ▁BACKGROUND
- ▁BEGINNING
- ▁MACARONS
- ▁SURFING
- ▁VERANDA
- ▁ASSEMBLE
- ▁HANGUL
- ▁REACTION
- ▁DAUGHTERS
- MENT
- QUET
- RMALLY
- ANG
- ▁LID
- ▁RESERVATION
- SOON
- ▁FLIP
- CAN
- ▁JUICY
- ▁KINGDOM
- ▁SOCIETY
- ▁TADPOLE
- ▁JAMSIL
- ▁WI
- ▁GRADUATED
- ▁PRE
- ▁SCRATCHING
- ▁PO
- ▁APPEARS
- ILY
- FAT
- FOOD
- ▁DISAPPEAR
- ▁FAINT
- ▁FLOAT
- ▁RUBB
- ▁TRANSFER
- ▁COMFORT
- ▁BALLERINA
- ▁DESCRIPTION
- ▁GENTLY
- ▁HAPPIER
- ▁RINGTONE
- ▁ARGUING
- ▁CONDITIONER
- PM
- IET
- CU
- ▁EARTHQUAKES
- ▁CHICK
- ▁TR
- ▁TYPHOON
- ▁BUNS
- ▁RUNNER
- NDC
- ▁WAH
- ▁JELL
- ENDY
- ▁COMMU
- ▁FARMS
- ▁SLEEVES
- ▁BEETLES
- LOW
- ▁MEATBALL
- ALKIE
- ▁MAGNIF
- ▁CONNIE
- ▁NEIGHBOR
- ▁OPERA
- ▁PINOCCHIO
- ▁SHOEMAKER
- ▁CRAFT
- ▁ONESIX
- ▁FLOW
- WD
- HOO
- ▁PRESENTATIONS
- ▁CHIP
- ITE
- ▁ANIMAT
- ▁DUB
- ▁FLOOD
- ▁KAKAO
- ▁RESU
- ▁UNBELIEVABLE
- ▁GRIN
- ▁HEALTHIER
- ▁SIXTH
- ▁CHOSEN
- ▁LOSER
- ▁BLED
- REALLY
- ▁IGNOR
- ▁PRODUCT
- RIST
- ▁DISCOURAGED
- ▁DODGE
- ▁FORECAST
- ▁OWL
- ▁TREASURE
- ▁UNIFORM
- ▁LOCAT
- ▁TUBE
- DON
- ▁FOLDED
- ▁WEIGH
- ▁RUIN
- ▁CRUSH
- ▁PARAD
- ▁OBESE
- ▁ORGANIZE
- ▁PRINCIPAL
- ▁RATTLING
- ▁RESERVE
- ▁RHYM
- ▁SIP
- ▁UNDERWATER
- ▁TAEG
- ▁TRAVELLING
- ▁STACK
- ▁RI
- ▁BUNDLES
- YEAR
- SAME
- AND
- ▁CHEESECAKE
- ▁EPISODE
- ▁FAMILIES
- ▁FIFTH
- ▁RHINITIS
- ▁SAUNA
- NCHES
- ▁EXCE
- TIQUE
- ▁COMBO
- ▁STRINGS
- ▁COLORFUL
- ▁FLOWS
- ▁COOLEST
- ▁OPPAS
- ATING
- ATE
- ▁MELTS
- ▁CHOPSTICK
- ▁BRANCH
- ▁FRUSTRATED
- ▁GREASY
- ▁EXIST
- ▁WAVING
- ▁APP
- ▁SODA
- ▁FALLEN
- ▁PRO
- SHAPED
- NG
- ▁CONNECTED
- ▁12
- ▁BANDAID
- ▁DISTANCE
- ▁DRAIN
- ▁MEASURE
- ▁TEMPLE
- ▁WORKBOOK
- ▁EIGHTAM
- ▁WARN
- ▁BURNT
- BOARD
- ▁DE
- IFF
- RTH
- ▁MUSHROOMS
- ▁POWERFUL
- STICK
- ▁VOUCHERS
- ▁BLEED
- ▁BRAID
- ▁CREPE
- ▁HAWKING
- ▁FLAM
- ▁SCORE
- ▁RELEASED
- ▁TICKLED
- BU
- FISH
- ATIVE
- CLUSI
- ▁CLINIC
- ▁CROOKED
- ▁RELAY
- ▁SCOOTER
- ▁SEBASTIAN
- ▁SUFFER
- ▁TEENAGER
- ▁BATHHOUSE
- ▁WRIST
- ▁BAKERIES
- ▁BRANCHES
- ▁SAMYUKGU
- ▁SCU
- ENDER
- ▁INGREDIENTS
- ▁INVENTED
- ▁BOWING
- SSES
- WAR
- ▁PRESSED
- ▁SQUEEZ
- SIGNED
- WON
- ▁70
- ▁APPROACH
- ▁CHAPPED
- ▁DUMB
- ▁FREEZING
- ▁MAGNIFIER
- ENTIAL
- IE
- ▁CLOSELY
- ▁DIAPERS
- OUS
- ▁DIRT
- ▁CENTIMETER
- ▁FLOWERPOT
- ▁FOAM
- ▁POLITIC
- ▁PORRIDGE
- ▁PEDIATRICIAN
- ▁FIREWORKS
- ▁TROUBLEMAKER
- ▁PILLAR
- ▁EVACUATE
- ▁SILLA
- EUK
- ANDING
- ▁FAINTED
- ERMAN
- ▁SEAGULL
- ▁CHICKS
- ▁SWEATING
- INGO
- PAPER
- ▁AGREED
- ▁CLAPP
- VA
- ▁STRENGTH
- SOONGSIL
- ‘
- ▁CONVENIENT
- ▁DECEMBER
- ▁FORTUNATELY
- ▁FURNITURE
- ▁HAGWON
- ▁LOUNGE
- ▁MOKDONG
- ▁PALM
- ▁SPRINKLE
- ▁STIRFR
- RUNK
- ▁ANKLE
- ▁SELF
- ▁SEVENTH
- LESS
- ▁DIVING
- ADE
- ▁RANG
- SHINY
- WITH
- ▁BRAVELY
- ▁BADMINTON
- ▁BULGUKSA
- ▁KARAOKE
- ▁ADMIT
- ▁GINGER
- ▁LAID
- ▁SNOWBOARD
- ▁HOPPING
- ▁UDO
- ▁BULGING
- ▁CARP
- ▁FACT
- ▁GROUPS
- ▁ENTERING
- ▁RIP
- ▁MAR
- LOCK
- ▁JE
- ▁ADMISSION
- ▁CHRYSANTHEMUM
- ▁DIARIES
- ▁DISPOSABLE
- ▁LOACH
- ▁PARROT
- ▁SCULPTURE
- ▁TERRIF
- ▁VOLUME
- ▁REPRESENTATIVE
- ▁MEOW
- ▁CHEEK
- ▁JEJUDO
- ▁HARMFUL
- ▁BRUISED
- ▁MINERAL
- AINT
- ▁EDIT
- WARDS
- HY
- ▁VIEW
- ▁EXACT
- ROUGHT
- OCKPAPERSCISSORS
- ▁CHESTNUT
- ▁HAWAII
- ▁PIMPLES
- ▁REMOTE
- ▁SOLUTION
- ▁COMPETE
- ▁SOFTLY
- ▁BUNDLE
- ▁LIP
- ▁GRADER
- WOO
- RIS
- STORY
- DAYS
- COLORED
- FOR
- ▁COLLAPSE
- ▁STEPP
- ▁BRILL
- RSELVES
- ▁ACCORDING
- ▁BACON
- ▁BAEK
- ▁BUTTERFLIES
- ▁COSMOS
- ▁CYCLING
- ▁DISTRICT
- ▁ESTATE
- ▁HUMID
- ▁MERMAID
- ▁PAPRIKA
- ▁PHONICS
- ▁BELONG
- ▁YUKJANG
- ▁ANIMATION
- ▁FLIPP
- ▁DUMPLING
- ▁BLOSSOM
- UNG
- ▁EXPLORE
- ▁INSECTS
- ▁JI
- HEART
- GHTS
- ▁ASTRONAUT
- ▁BELLHAMMER
- ▁LICENSE
- ▁NEPTUNE
- ▁OPPOS
- ▁REFRIGERATOR
- ▁STONEBUSH
- ▁1000
- ▁APPLI
- ▁SUBTRACTION
- ▁HOOD
- ▁WIDER
- ▁BROOM
- ▁UNIVERSITY
- ▁PRINCESSES
- ▁MINT
- ▁PARENT
- ▁PEEING
- ▁ADORE
- DONG
- ▁SP
- ANCE
- ▁EXPLOR
- TTEOKBOKKI
- WHEEL
- ▁ABANDONED
- ▁CALLUSES
- ▁COSMETICS
- ▁LADYBUG
- ▁MARIA
- ▁PRONUNCIATION
- ▁BOUQUET
- ▁SOGGY
- ▁LEFTOVERS
- ▁MIKE
- ▁TANK
- ▁SPAC
- ▁FRAME
- MADE
- IVAL
- ▁YE
- ▁GATHERING
- IAN
- ▁KITTENS
- IBLE
- ▁ABBREVIAT
- ▁CHAPAGETTI
- ▁ENGINES
- ▁EQUIPMENT
- ▁INTERSECTION
- ▁SANITIZER
- ▁DOKDO
- ▁GENERATOR
- ▁MEDIUM
- ▁BALANCE
- ▁CHART
- ▁TELEVISION
- ▁JAJANG
- ▁LOLLY
- ▁PHOTOGRAPH
- ORD
- ▁KKA
- ▁SOLES
- ▁BALM
- ▁DECORATION
- ▁THORN
- ▁ARMY
- ▁YU
- EEK
- NK
- BOY
- LENGTH
- TONY
- HEN
- ▁RELEASE
- ▁LOOSE
- ▁COMPLETE
- KYOCHON
- ▁ARCADE
- ▁BRIM
- ▁CORONA
- ▁CRANE
- ▁CUPCAKE
- ▁KITCHENWARE
- ▁LULLABY
- ▁MODER
- ▁MUSKET
- ▁OBEDIEN
- ▁PIKACHU
- ▁PROVERBS
- ▁SALMON
- ▁YUKGAEJANG
- ▁TANNED
- ▁VILLA
- ▁DIRECTIONS
- ▁CLAY
- ▁ADMIR
- ▁DIRECTOR
- ▁DAMAGED
- ▁BURST
- ▁TOPIC
- ▁DOODLED
- ▁COMPAR
- ▁BUBBLE
- ▁HO
- ▁KISSE
- ▁JO
- ▁BLOATED
- ▁CONSONANTS
- ▁DOWNLOAD
- ▁ELBOW
- ▁FUNNIEST
- ▁PORORO
- ▁SLOTS
- ▁VACUUM
- ▁BOTTOM
- ▁MANDELA
- ▁IMSIL
- ▁VIP
- ▁TOMMY
- EATURE
- ▁PINE
- ▁EIGHTTHIRTY
- ▁HIDEANDSEEK
- ▁COLLAPSED
- ▁UNDERSTOOD
- ▁CRUSHED
- ▁TRI
- OF
- ▁DI
- ▁CARNATION
- ORY
- NAILS
- LENT
- ▁PUBLISH
- PLACE
- ▁CLIP
- ILLA
- ▁SUNSHIN
- ▁ACTUAL
- ▁SUCCESS
- COCK
- ▁60
- ▁BENEFITS
- ▁CLAW
- ▁HAUNT
- ▁LIBRARIES
- ▁LOTTERIA
- ▁MERCURY
- ▁MITTEN
- ▁SWAM
- ▁ROTTEN
- ▁SERVANT
- DENTAL
- ▁LEGEND
- ▁ROT
- ▁PRICKED
- ▁230
- ▁TUB
- ▁WINK
- ▁HUNTER
- ▁SCREAMING
- ▁FINALE
- ▁SOAPY
- ▁REDESIGNING
- NNA
- ▁DIAPER
- ▁BANG
- IK
- CHAN
- TIER
- ▁MOR
- ▁METERS
- ▁HUGG
- DAE
- FTER
- CHO
- SHIP
- EITHER
- CTIVE
- ▁KI
- ▁RU
- ▁BRAND
- ▁AMOUNT
- ▁EXPLANATION
- ▁HAIRPIN
- ▁HORRIBLE
- ▁INTERIOR
- ▁LANDSLIDE
- ▁NEVERTHELESS
- ▁PERSIMMON
- ▁POSTPONE
- ▁SCIENTIST
- ▁SLACK
- ▁STORM
- ▁STREAM
- ▁SURPRISING
- ▁URGENT
- ▁ZOMBIE
- ▁STOOL
- ▁LOAD
- NAMBU
- ▁ANNOUNCEMENT
- IKES
- GRAN
- ▁ABC
- ▁COMPLE
- ▁FASCINATING
- ▁REMOVED
- ▁CRAWLING
- ▁INTERRUPTING
- RELLA
- RAGE
- ▁PEELING
- ▁HUMANS
- ▁MON
- ▁BEGIN
- ▁VEGETABLE
- ▁SLEEVE
- GLE
- ▁THA
- ISH
- TRAINER
- '7'
- ROAD
- DRIVER
- ▁PRETEN
- ▁ALLOW
- UZZLE
- ▁DEMONSTRAT
- ▁STIR
- ▁BROC
- ▁CARCASON
- ▁EQUALLY
- ▁EXPERIMENT
- ▁HESITAT
- ▁SPINNING
- ▁MENTOR
- ▁ABBREVIATION
- ▁RASHES
- ▁ASSEMBLING
- ▁DUNG
- MEMOR
- ▁PEACEFUL
- ▁HARDENS
- OSU
- SSUED
- ▁FRECKLE
- TIOUS
- ▁REALIZ
- ▁SQUA
- LIFE
- THINK
- ▁BIK
- ▁KNIT
- ZZA
- ▁ALITTLE
- ▁BAREFOOT
- ▁CONCENTRATE
- ▁DALGONA
- ▁GUIDEBOOK
- ▁KIDZANIA
- ▁PALACE
- ▁ROSHEN
- ▁TEXTBOOK
- ▁TUNAKIMBAP
- OTTEOK
- ▁830
- ▁HOSE
- ITIES
- NIX
- ▁FIFTEENCM
- ▁IMAGE
- ▁CHEESEKIMBAP
- ▁HOTTER
- ▁PATT
- ▁CLIPPE
- ▁FOXES
- EAGLE
- ▁QUE
- NDING
- ▁DETER
- AP
- YEO
- UED
- ▁PAI
- ▁EXCITEDLY
- ▁WAVED
- ▁BUL
- BUT
- ▁METER
- KIMBAP
- HAND
- WATCHING
- ▁CONVERS
- ▁FLICK
- ▁PEDIATRIC
- NAMENT
- REIGN
- ▁BIKINI
- ▁BUCKWHEATCREPE
- ▁JENGA
- ▁LAUNCH
- ▁OPTICIAN
- ▁PIGTAIL
- ▁SIMON
- ▁SUBSCRIBE
- ▁TICKLISH
- NELS
- ▁PINWHEEL
- INATED
- ▁DRUG
- ▁ONESIXCM
- ▁EIGHTH
- ▁SMARTEST
- ▁HUNTING
- ▁PIL
- UMMY
- ITION
- UNNI
- ▁SU
- ▁POWERFULL
- ▁WAFFLE
- DIA
- ▁TICK
- EIGHT
- PICKED
- FIFTY
- WENT
- ▁BOT
- ▁REPRESENT
- OKKI
- ▁COCOA
- ▁CUSHION
- ▁FARTHEST
- ▁PENTAGON
- ▁SLIDING
- ▁SWEAR
- ▁MOLD
- ▁BBOY
- ▁80
- ▁WATERPROOF
- ▁RAIL
- ▁CREATED
- ▁CHIRPING
- ▁SEARCH
- SEOK
- ▁TOAST
- ▁BETRAYE
- JOR
- ▁NI
- ZI
- ▁SLAMM
- ▁GU
- ▁NAG
- ▁SERVED
- UFFY
- ▁INSECT
- ▁ZIPPE
- LP
- YEONG
- ESSION
- IPPED
- ▁CELEBRAT
- ▁CHANG
- '50'
- POST
- ENTI
- ▁DISAPPOINT
- ▁QU
- ▁FOREIGN
- ▁POSSIB
- ▁CONGRATULAT
- ADOW
- ▁TAE
- CAFÉ
- ▁COURIER
- ▁DAEJEON
- ▁DOWNSTAIRS
- ▁EXPER
- ▁PREFERENCE
- ▁LACT
- ▁OCCUR
- ORIENT
- ▁SPACIOUS
- INARY
- ▁KNITTING
- ▁LIBERTY
- VILLE
- RB
- ▁BARKED
- DAN
- ▁TIN
- ATOR
- ▁PHO
- RIED
- ▁JINDA
- OUND
- HOE
- ▁STRETCHE
- ▁SNEEZ
- EVI
- QUALITY
- MOM
- ▁BLIND
- HYEON
- ECTION
- ROKE
- ▁ANCHOVY
- ▁ASHAMED
- ▁COASTER
- ▁CONFUSING
- ▁CYCLIST
- ▁DANDELION
- ▁FIREFLIES
- ▁HYUNG
- ▁KNOWLEDGE
- ▁NARACULA
- ▁SCAB
- ▁VOCABULARY
- ▁CONFIDENT
- ▁RELAT
- ▁FOOLISH
- ▁NINEAM
- ▁ZO
- ▁BOU
- ▁FLATTERED
- ▁BLINDING
- ▁SKATER
- ▁ROLLER
- ▁FIRM
- COTT
- NURI
- ▁WARMER
- ▁LONGEST
- ▁TICKLE
- ▁AMERICAN
- GI
- AGGED
- CHARGE
- TODAY
- ▁CREATE
- UMPING
- JJAEK
- ▁BEGINNER
- ▁CLICKING
- ▁CORRIDORS
- ▁DAZZLING
- ▁DERMATOLOGIST
- ▁DILIGENT
- ▁FEBRUARY
- ▁FISHBOWL
- ▁GARAETTEOK
- ▁GARGLE
- ▁INJURED
- ▁MANTISES
- ▁NAKSEONGDAE
- ▁ROAST
- ▁SNITCH
- ▁SLIMMER
- ▁DISCHARGE
- ▁SOAKED
- ▁SELECTED
- ▁VICE
- ▁INFECT
- ▁CONTAINER
- ▁NEATLY
- ▁STARSHAPED
- LOTTEWORLD
- ▁SUPPLEMENT
- ▁EIGHTTH
- ISTERS
- ▁TICKL
- ▁STRAIGHTEN
- ▁SKINN
- RANGE
- ▁TANGERINE
- ▁STO
- PREPARED
- SPROUT
- TWELVE
- TONIGHT
- ▁RECOGNI
- VAN
- BEEN
- ▁EXPLODE
- ▁CHUBB
- ANGGU
- ▁SAVI
- ▁950
- ▁ADJUST
- ▁CASTANETS
- ▁FAITH
- ▁GONGJU
- ▁GRAIN
- ▁GROSS
- ▁JUPITER
- ▁MAGPIE
- ▁SAIPAN
- ▁SKULL
- ▁SPARROW
- ▁VACCINATED
- ▁VIGOROUSLY
- ▁AUTOMATIC
- ▁NEARBY
- SEVENTEEN
- ▁TWENTI
- ▁NIKE
- ▁SEORA
- DATORS
- ▁PONG
- ▁730
- ▁SCARIER
- ▁TRUNK
- ▁BETRAYER
- ▁CHEESEGIMBAP
- ONGDAE
- ▁SEVERE
- ▁SPOONFUL
- CTATION
- ▁WITCH
- ▁LIMIT
- ▁EATTTEOKBOKKI
- GEOUS
- ▁CRAWLED
- ▁SUC
- AVED
- AGE
- ▁KITTEN
- ▁SKEWER
- IZED
- ▁TEAR
- WAVE
- ▁RACI
- ▁CONTAIN
- ▁TRO
- ▁GUGUDAN
- ▁GEPPET
- ▁PHARMACI
- MULGUK
- PPAK
- SAMJANG
- ▁ACORN
- ▁APPETITE
- ▁BRUNCH
- ▁BUMMER
- ▁DIARRHEA
- ▁FLAP
- ▁GERMS
- ▁GWANSUN
- ▁HOMETOWN
- ▁KILOMETERS
- ▁MARRIAGE
- ▁PRANKS
- ▁RADISH
- '5'
- ′
- 수
- '2'
- ́
- 子
- 예
- 요
- '3'
- É
- '6'
- '9'
- “
- .
- '1'
- 단
- <sos/eos>
init: null
input_size: null
ctc_conf:
dropout_rate: 0.0
ctc_type: builtin
reduce: true
ignore_nan_grad: null
zero_infinity: true
joint_net_conf: null
use_preprocessor: true
token_type: bpe
bpemodel: data/ko_token_list/bpe_unigram5000/bpe.model
non_linguistic_symbols: null
cleaner: null
g2p: null
speech_volume_normalize: null
rir_scp: null
rir_apply_prob: 1.0
noise_scp: null
noise_apply_prob: 1.0
noise_db_range: '13_15'
short_noise_thres: 0.5
aux_ctc_tasks: []
frontend: default
frontend_conf:
fs: 16k
specaug: specaug
specaug_conf:
apply_time_warp: true
time_warp_window: 5
time_warp_mode: bicubic
apply_freq_mask: true
freq_mask_width_range:
- 0
- 30
num_freq_mask: 2
apply_time_mask: true
time_mask_width_range:
- 0
- 40
num_time_mask: 2
normalize: global_mvn
normalize_conf:
stats_file: exp/asr_stats_raw_ko_bpe5000_sp/train/feats_stats.npz
model: espnet
model_conf:
ctc_weight: 0.3
lsm_weight: 0.1
length_normalized_loss: false
preencoder: null
preencoder_conf: {}
encoder: contextual_block_conformer
encoder_conf:
output_size: 256
attention_heads: 4
linear_units: 2048
num_blocks: 12
dropout_rate: 0.1
positional_dropout_rate: 0.1
attention_dropout_rate: 0.0
input_layer: conv2d
normalize_before: true
activation_type: swish
macaron_style: true
use_cnn_module: true
cnn_module_kernel: 15
block_size: 40
hop_size: 16
look_ahead: 16
init_average: true
ctx_pos_enc: true
postencoder: null
postencoder_conf: {}
decoder: transformer
decoder_conf:
attention_heads: 4
linear_units: 2048
num_blocks: 6
dropout_rate: 0.1
positional_dropout_rate: 0.1
self_attention_dropout_rate: 0.0
src_attention_dropout_rate: 0.0
preprocessor: default
preprocessor_conf: {}
required:
- output_dir
- token_list
version: '202304'
distributed: true
```
</details>
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
Waterhorse/chessgpt-chat-v1
|
Waterhorse
| 2023-07-06T06:20:40Z | 124 | 10 |
transformers
|
[
"transformers",
"pytorch",
"gpt_neox",
"text-generation",
"en",
"dataset:Waterhorse/chess_data",
"dataset:anon8231489123/ShareGPT_Vicuna_unfiltered",
"dataset:OpenAssistant/oasst1",
"dataset:vicgalle/alpaca-gpt4",
"arxiv:2306.09200",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-03T21:18:08Z |
---
license: apache-2.0
language:
- en
datasets:
- Waterhorse/chess_data
- anon8231489123/ShareGPT_Vicuna_unfiltered
- OpenAssistant/oasst1
- vicgalle/alpaca-gpt4
---
# Chessgpt-Chat-v1
Chessgpt-Chat-v1 is the SFT-tuned (supervised fine-tuned) version of Chessgpt-Base-v1.
- Base Model: [Chessgpt-base-v1](https://huggingface.co/Waterhorse/chessgpt-base-v1)
- Chat Version: [Chessgpt-chat-v1](https://huggingface.co/Waterhorse/chessgpt-chat-v1)
Also, we are actively working on the development of the next-generation model, ChessGPT-V2. We welcome any contribution, especially on chess-related datasets. For related matters, please contact xidong.feng.20@ucl.ac.uk.
## Model Details
- **Model type**: Language Model
- **Language(s)**: English
- **License**: Apache 2.0
- **Model Description**: A 2.8B-parameter language model pretrained on chess data.
## GPU Inference
This requires a GPU with 8GB memory.
```python
import torch
import transformers
from transformers import AutoTokenizer, AutoModelForCausalLM
MIN_TRANSFORMERS_VERSION = '4.25.1'
# check transformers version
assert transformers.__version__ >= MIN_TRANSFORMERS_VERSION, f'Please upgrade transformers to version {MIN_TRANSFORMERS_VERSION} or higher.'
# init
tokenizer = AutoTokenizer.from_pretrained("Waterhorse/chessgpt-chat-v1")
model = AutoModelForCausalLM.from_pretrained("Waterhorse/chessgpt-chat-v1", torch_dtype=torch.float16)
model = model.to('cuda:0')
# infer
# Conversation between two
prompt = "A friendly, helpful chat between some humans.<|endoftext|>Human 0: 1.e4 c5, what is the name of this opening?<|endoftext|>Human 1:"
# Conversation between more than two
#prompt = "A friendly, helpful chat between some humans.<|endoftext|>Human 0: 1.e4 c5, what is the name of this opening?<|endoftext|>Human 1: Sicilian defense.<|endoftext|>Human 2:"
inputs = tokenizer(prompt, return_tensors='pt').to(model.device)
input_length = inputs.input_ids.shape[1]
outputs = model.generate(
**inputs, max_new_tokens=128, do_sample=True, temperature=0.7, top_p=0.7, top_k=50, return_dict_in_generate=True,
)
token = outputs.sequences[0, input_length:]
output_str = tokenizer.decode(token)
print(output_str)
```
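The prompt is plain text in which turns are labelled `Human i` and separated by the `<|endoftext|>` token, as in the two examples above. A small helper for assembling such prompts could look like this (the function is illustrative and not part of the repository):
```python
def build_chess_prompt(turns, system="A friendly, helpful chat between some humans."):
    # Each turn is labelled "Human i"; turns are joined with the <|endoftext|> token,
    # matching the single- and multi-speaker prompts shown above.
    parts = [system]
    for i, text in enumerate(turns):
        parts.append(f"Human {i}: {text}")
    prompt = "<|endoftext|>".join(parts)
    # End with the label of the speaker the model should continue as.
    return prompt + f"<|endoftext|>Human {len(turns)}:"

prompt = build_chess_prompt(["1.e4 c5, what is the name of this opening?"])
```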
# Uses
Excluded uses are described below.
### Direct Use
`chessgpt-chat-v1` is mainly intended for research on large language models, especially research on policy learning and language modeling.
#### Out-of-Scope Use
`chessgpt-chat-v1` is a language model trained on chess-related data and may not perform well for use cases beyond the chess domain.
#### Bias, Risks, and Limitations
Just as with any language model, chessgpt-chat-v1 carries inherent limitations that necessitate careful consideration. Specifically, it may occasionally generate responses that are irrelevant or incorrect, particularly when tasked with interpreting complex or ambiguous queries. Additionally, given that its training is rooted in online data, the model may inadvertently reflect and perpetuate common online stereotypes and biases.
# Evaluation
Please refer to our [paper](https://arxiv.org/abs/2306.09200) and [code](https://github.com/waterhorse1/ChessGPT) for benchmark results.
# Citation Information
```bash
@article{feng2023chessgpt,
title={ChessGPT: Bridging Policy Learning and Language Modeling},
author={Feng, Xidong and Luo, Yicheng and Wang, Ziyan and Tang, Hongrui and Yang, Mengyue and Shao, Kun and Mguni, David and Du, Yali and Wang, Jun},
journal={arXiv preprint arXiv:2306.09200},
year={2023}
}
```
|
Waterhorse/chessgpt-base-v1
|
Waterhorse
| 2023-07-06T06:19:40Z | 83 | 6 |
transformers
|
[
"transformers",
"pytorch",
"gpt_neox",
"text-generation",
"en",
"dataset:Waterhorse/chess_data",
"arxiv:2306.09200",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-02T22:03:14Z |
---
license: apache-2.0
language:
- en
datasets:
- Waterhorse/chess_data
---
# Chessgpt-Base-3B-v1
Chessgpt-Base-v1 is the base model of Chessgpt.
- Base Model: [Chessgpt-base-v1](https://huggingface.co/Waterhorse/chessgpt-base-v1)
- Chat Version: [chessgpt-chat-v1](https://huggingface.co/Waterhorse/chessgpt-chat-v1)
Also, we are actively working on the development of the next-generation model, ChessGPT-V2. We welcome any contribution, especially on chess-related datasets. For related matters, please contact xidong.feng.20@ucl.ac.uk.
## Model Details
- **Model type**: Language Model
- **Language(s)**: English
- **License**: Apache 2.0
- **Model Description**: A 2.8B-parameter language model pretrained on chess data.
## GPU Inference
This requires a GPU with 8GB memory.
```python
import torch
import transformers
from transformers import AutoTokenizer, AutoModelForCausalLM
MIN_TRANSFORMERS_VERSION = '4.25.1'
# check transformers version
assert transformers.__version__ >= MIN_TRANSFORMERS_VERSION, f'Please upgrade transformers to version {MIN_TRANSFORMERS_VERSION} or higher.'
# init
tokenizer = AutoTokenizer.from_pretrained("Waterhorse/chessgpt-base-v1")
model = AutoModelForCausalLM.from_pretrained("Waterhorse/chessgpt-base-v1", torch_dtype=torch.float16)
model = model.to('cuda:0')
# infer
# Conversation between two
prompt = "Q: 1.e4 c5, what is the name of this opening?A:"
inputs = tokenizer(prompt, return_tensors='pt').to(model.device)
input_length = inputs.input_ids.shape[1]
outputs = model.generate(
**inputs, max_new_tokens=128, do_sample=True, temperature=0.7, top_p=0.7, top_k=50, return_dict_in_generate=True,
)
token = outputs.sequences[0, input_length:]
output_str = tokenizer.decode(token)
print(output_str)
```
# Uses
Excluded uses are described below.
### Direct Use
`chessgpt-base-v1` is mainly intended for research on large language models, especially research on policy learning and language modeling.
#### Out-of-Scope Use
`chessgpt-base-v1` is a language model trained on chess-related data and may not perform well for use cases beyond the chess domain.
#### Bias, Risks, and Limitations
Just as with any language model, chessgpt-base-v1 carries inherent limitations that necessitate careful consideration. Specifically, it may occasionally generate responses that are irrelevant or incorrect, particularly when tasked with interpreting complex or ambiguous queries. Additionally, given that its training is rooted in online data, the model may inadvertently reflect and perpetuate common online stereotypes and biases.
# Evaluation
Please refer to our [paper](https://arxiv.org/abs/2306.09200) and [code](https://github.com/waterhorse1/ChessGPT) for benchmark results.
# Citation Information
```bash
@article{feng2023chessgpt,
title={ChessGPT: Bridging Policy Learning and Language Modeling},
author={Feng, Xidong and Luo, Yicheng and Wang, Ziyan and Tang, Hongrui and Yang, Mengyue and Shao, Kun and Mguni, David and Du, Yali and Wang, Jun},
journal={arXiv preprint arXiv:2306.09200},
year={2023}
}
```
|
LarryAIDraw/sakurako
|
LarryAIDraw
| 2023-07-06T06:00:57Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-06T05:27:47Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/100652/sakurako-busujima-grand-blue
|
aroot/eng-guj-simcse_random
|
aroot
| 2023-07-06T05:52:22Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-07-06T05:29:24Z |
---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: eng-guj-simcse_random
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eng-guj-simcse_random
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2895
- Bleu: 2.6173
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
nolanaatama/nkbllcfrmgtvrvcv2275pchsnltrx
|
nolanaatama
| 2023-07-06T05:50:38Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-06T05:46:52Z |
---
license: creativeml-openrail-m
---
|
nolanaatama/3drndrngstyl
|
nolanaatama
| 2023-07-06T05:37:10Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-06T05:19:33Z |
---
license: creativeml-openrail-m
---
|
Ryukijano/whisper-small-dv
|
Ryukijano
| 2023-07-06T05:36:17Z | 78 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"dataset:mozilla-foundation/common_voice_13_0",
"license:mit",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-07-05T06:25:50Z |
---
license: mit
datasets:
- mozilla-foundation/common_voice_13_0
metrics:
- wer
---
---
# Whisper Small DV Model

## Model Description
The `whisper-small-dv` model is an advanced Automatic Speech Recognition (ASR) model, trained on the extensive [Mozilla Common Voice 13.0](https://commonvoice.mozilla.org/en/datasets) dataset. This model is capable of transcribing spoken language into written text with high accuracy, making it a valuable tool for a wide range of applications, from transcription services to voice assistants.
## Training
The model was trained using the PyTorch framework and the Transformers library. Training metrics and visualizations can be viewed on TensorBoard.
## Performance
The model's performance was evaluated on a held-out test set. The evaluation metrics and results can be found in the "Eval Results" section.
## Usage
The model can be used for any ASR task. To use the model, you can load it using the Transformers library:
```python
import torch
import librosa
from transformers import WhisperProcessor, WhisperForConditionalGeneration

# Load the model and processor (this checkpoint is a Whisper model, so the Whisper classes are used)
processor = WhisperProcessor.from_pretrained("Ryukijano/whisper-small-dv")
model = WhisperForConditionalGeneration.from_pretrained("Ryukijano/whisper-small-dv")

# Use the model for ASR: load 16 kHz audio and generate a transcription
audio, _ = librosa.load("path_to_audio_file", sr=16000)
inputs = processor(audio, sampling_rate=16000, return_tensors="pt")
predicted_ids = model.generate(inputs.input_features)
transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True)[0]
```
## License
This model is released under the MIT license.
---
|
eigenscribe/etzHayim
|
eigenscribe
| 2023-07-06T05:34:59Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-06T05:33:49Z |
---
license: creativeml-openrail-m
---
|
mazeinmouse/a2c-PandaReachDense-v2
|
mazeinmouse
| 2023-07-06T05:32:52Z | 2 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-06T05:29:58Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -2.88 +/- 0.45
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of a **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
insub/distilbert-base-uncased-finetuned-imdb
|
insub
| 2023-07-06T05:22:05Z | 124 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"fill-mask",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-07-06T05:17:00Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: distilbert-base-uncased-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4721
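Because this is a masked-language model, a quick way to try the checkpoint is the `fill-mask` pipeline (a minimal sketch; the example sentence is arbitrary):

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="insub/distilbert-base-uncased-finetuned-imdb")
for pred in fill_mask("This movie was absolutely [MASK]."):
    print(f"{pred['token_str']:>12} {pred['score']:.3f}")
```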
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.7086 | 1.0 | 157 | 2.4897 |
| 2.5796 | 2.0 | 314 | 2.4230 |
| 2.5269 | 3.0 | 471 | 2.4354 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
aroot/eng-fra-simcse_random
|
aroot
| 2023-07-06T05:13:07Z | 110 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-07-06T04:53:15Z |
---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: eng-fra-simcse_random
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eng-fra-simcse_random
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1475
- Bleu: 31.8135
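Since the base checkpoint is mBART-50, inference can follow the standard many-to-many translation recipe (a sketch that assumes the fine-tuned repository keeps the mBART-50 tokenizer and language codes):

```python
from transformers import MBartForConditionalGeneration, MBart50TokenizerFast

tokenizer = MBart50TokenizerFast.from_pretrained("aroot/eng-fra-simcse_random", src_lang="en_XX")
model = MBartForConditionalGeneration.from_pretrained("aroot/eng-fra-simcse_random")

inputs = tokenizer("The weather is nice today.", return_tensors="pt")
# Force French as the target language, following the mBART-50 convention.
generated = model.generate(**inputs, forced_bos_token_id=tokenizer.lang_code_to_id["fr_XX"])
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```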
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
ashmitg/model_lora
|
ashmitg
| 2023-07-06T05:11:34Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-04T22:28:40Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
tuanio/WhisperCTC
|
tuanio
| 2023-07-06T05:06:09Z | 0 | 1 | null |
[
"summarization",
"dataset:mozilla-foundation/common_voice_13_0",
"arxiv:1910.09700",
"region:us"
] |
summarization
| 2023-07-06T04:55:16Z |
---
datasets:
- mozilla-foundation/common_voice_13_0
metrics:
- wer
pipeline_tag: summarization
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
```python
import torch.nn as nn
from torch import Tensor
from transformers.models.whisper.modeling_whisper import WhisperEncoder


class WhisperCTC(nn.Module):
def __init__(
self,
encoder_id: str = "tuanio/whisper-encoder.tiny.en",
dropout: float = 0.1,
vocab_size: int = 47,
):
super().__init__()
self.encoder = WhisperEncoder.from_pretrained(encoder_id)
print("Freezing Whisper Encoder...")
self.encoder._freeze_parameters()
print("Freezed!")
self.lm_head = nn.Sequential(
nn.SiLU(),
nn.Dropout(dropout),
nn.Linear(self.encoder.config.d_model, vocab_size),
)
nn.init.kaiming_uniform_(
self.lm_head[-1].weight, mode="fan_in", nonlinearity="relu"
)
def forward(self, feat: Tensor, attn_mask: Tensor):
enc = self.encoder(
input_features=feat, attention_mask=attn_mask
).last_hidden_state
logits = self.lm_head(enc)
log_probs = nn.functional.log_softmax(logits, dim=-1)
return log_probs
```
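Given the class above, a minimal forward-pass sketch looks like the following (it assumes the encoder repository ships a Whisper feature extractor; names and shapes are illustrative, not taken from the card):

```python
import torch
from transformers import WhisperFeatureExtractor

# Assumption: the encoder repo includes a preprocessor config for the feature extractor.
feature_extractor = WhisperFeatureExtractor.from_pretrained("tuanio/whisper-encoder.tiny.en")
model = WhisperCTC(encoder_id="tuanio/whisper-encoder.tiny.en", vocab_size=47).eval()

waveform = torch.randn(16000)  # one second of dummy 16 kHz audio
features = feature_extractor(waveform.numpy(), sampling_rate=16000, return_tensors="pt")

with torch.no_grad():
    # WhisperEncoder does not use the attention mask, so None is acceptable here.
    log_probs = model(features.input_features, None)
print(log_probs.shape)  # (batch, frames, vocab_size) log-probabilities, ready for CTC decoding
```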
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
- IndictTTS: https://www.kaggle.com/datasets/tuannguyenvananh/indictts-english
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
```yaml
data_cfg:
dataset:
processor:
feat_extractor_id: ${model_cfg.model.encoder_id}
tokenizer_id: ${model_cfg.tokenizer_id}
path:
base:
indict_tts: ../IndicTTS
cv: ../
train:
- train_data/indict_tts_train.jsonl
# - train_data/cv_train.jsonl
test:
- train_data/indict_tts_test.jsonl
# - train_data/cv_test.jsonl
dev:
- train_data/indict_tts_dev.jsonl
# - train_data/cv_dev.jsonl
dataloader:
batch_size: 46
num_workers: 8
pin_memory: True
model_cfg:
tokenizer_id: tuanio/wav2vec2-phoneme-ipa-ctc
model:
dropout: 0.1
encoder_id: tuanio/whisper-encoder.medium.en
optim:
lr: 1.25e-05
betas: [0.9, 0.998]
weight_decay: 0.01
scheduler:
name: linear
total_steps: -1
warmup_ratio: 0.05
interval: step
frequency: 1
trainer_cfg:
log:
wandb: True
logger_wandb:
project: aped_indian-lish
name: whisper-medium-indict-tts-only-from-epoch1
log_model: all
arguments:
accelerator: gpu
devices: -1
max_epochs: 10
log_every_n_steps: 1
enable_checkpointing: True
accumulate_grad_batches: 2
inference_mode: True
gradient_clip_val: 5.0
check_val_every_n_epoch: 1
val_check_interval: null
experiment_cfg:
train: True
valid: True
test: True
ckpt:
resume_ckpt: True
ckpt_path: ckpt/medium.epoch3.ckpt
```
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
squeeze-ai-lab/sq-xgen-7b-8k-base-w3-s45
|
squeeze-ai-lab
| 2023-07-06T04:46:32Z | 0 | 0 | null |
[
"arxiv:2306.07629",
"region:us"
] | null | 2023-07-06T03:46:53Z |
**SqueezeLLM** is a post-training quantization framework that incorporates a new method called Dense-and-Sparse Quantization to enable efficient LLM serving.
**TLDR:** Deploying LLMs is difficult due to their large memory size. This can be addressed with reduced precision quantization.
But a naive method hurts performance. We address this with a new Dense-and-Sparse Quantization method.
Dense-and-Sparse splits weight matrices into two components: a dense component that can be heavily quantized without affecting model performance, and a sparse part that preserves the sensitive and outlier entries of the weight matrices. With this approach, we are able to serve larger models with a smaller memory footprint, the same latency, and higher accuracy and quality.
For more details please check out our [paper](https://arxiv.org/pdf/2306.07629.pdf).
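To make the idea concrete, here is a purely illustrative sketch of the dense/sparse split (this is not the SqueezeLLM implementation; the function name and threshold rule are assumptions):

```python
import torch

# Keep the small fraction of largest-magnitude "outlier" weights in a sparse matrix
# at full precision, and quantize only the remaining dense part to low bitwidth.
def dense_sparse_split(weight: torch.Tensor, sparsity: float = 0.0045):
    k = max(1, int(weight.numel() * sparsity))
    threshold = weight.abs().flatten().topk(k).values.min()
    sparse_mask = weight.abs() >= threshold
    sparse_part = (weight * sparse_mask).to_sparse()   # outliers kept in full precision
    dense_part = weight * (~sparse_mask)                # this part goes through 3-bit quantization
    return dense_part, sparse_part

W = torch.randn(4096, 4096)
dense, sparse = dense_sparse_split(W)
print(sparse._nnz() / W.numel())  # roughly the 0.45% sparsity level quoted above
```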
## Model description
3-bit XGen-7B Base model with 8K sequence length quantized using SqueezeLLM.
More details on the quantization method can be found in the [paper](https://arxiv.org/pdf/2306.07629.pdf).
More detailed model descriptions can be found in the [link](https://huggingface.co/Salesforce/xgen-7b-8k-base).
* **Base Model:** [XGen-7B-8K-Base](https://huggingface.co/Salesforce/xgen-7b-8k-base) (by Salesforce AI Research)
* **Bitwidth:** 3-bit
* **Sparsity Level:** 0.45%
## Links
* **Paper**: [https://arxiv.org/pdf/2306.07629.pdf](https://arxiv.org/pdf/2306.07629.pdf)
* **Code**: [https://github.com/SqueezeAILab/SqueezeLLM](https://github.com/SqueezeAILab/SqueezeLLM)
---
license: other
---
|
mazeinmouse/a2c-AntBulletEnv-v0
|
mazeinmouse
| 2023-07-06T04:34:47Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-06T04:33:37Z |
---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 1651.08 +/- 126.30
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of a **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
NasimB/gpt2-concat-cbt-rarity-2k-p3k
|
NasimB
| 2023-07-06T04:28:43Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-06T02:13:04Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: gpt2-concat-cbt-rarity-2k-p3k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-concat-cbt-rarity-2k-p3k
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0083
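A minimal way to sample from the checkpoint is the `text-generation` pipeline (a sketch; the prompt is arbitrary):

```python
from transformers import pipeline

generator = pipeline("text-generation", model="NasimB/gpt2-concat-cbt-rarity-2k-p3k")
print(generator("Once upon a time", max_new_tokens=40)[0]["generated_text"])
```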
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 7
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.7186 | 0.29 | 500 | 5.6281 |
| 5.3685 | 0.58 | 1000 | 5.1947 |
| 5.0278 | 0.87 | 1500 | 4.9465 |
| 4.7459 | 1.17 | 2000 | 4.8014 |
| 4.5838 | 1.46 | 2500 | 4.6757 |
| 4.4777 | 1.75 | 3000 | 4.5664 |
| 4.3633 | 2.04 | 3500 | 4.4935 |
| 4.1601 | 2.33 | 4000 | 4.4512 |
| 4.1388 | 2.62 | 4500 | 4.3967 |
| 4.1004 | 2.91 | 5000 | 4.3434 |
| 3.9085 | 3.21 | 5500 | 4.3385 |
| 3.8559 | 3.5 | 6000 | 4.3100 |
| 3.8409 | 3.79 | 6500 | 4.2772 |
| 3.7507 | 4.08 | 7000 | 4.2758 |
| 3.5677 | 4.37 | 7500 | 4.2717 |
| 3.5771 | 4.66 | 8000 | 4.2566 |
| 3.5653 | 4.95 | 8500 | 4.2354 |
| 3.3565 | 5.24 | 9000 | 4.2632 |
| 3.3184 | 5.54 | 9500 | 4.2598 |
| 3.3222 | 5.83 | 10000 | 4.2510 |
| 3.2596 | 6.12 | 10500 | 4.2621 |
| 3.1718 | 6.41 | 11000 | 4.2643 |
| 3.1656 | 6.7 | 11500 | 4.2647 |
| 3.1666 | 6.99 | 12000 | 4.2645 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
omnitron/PPO-Huggy
|
omnitron
| 2023-07-06T04:23:24Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-07-06T04:22:59Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: omnitron/PPO-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
ocisd4/openllama-zh-7B
|
ocisd4
| 2023-07-06T04:13:52Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-06T03:46:10Z |
```python
import torch
from transformers import LlamaTokenizer, LlamaForCausalLM
import transformers
tokenizer = LlamaTokenizer.from_pretrained(
'ocisd4/openllama-zh',
add_bos_token=False,
add_eos_token=False,
use_auth_token=True,
use_fast=False)
model = LlamaForCausalLM.from_pretrained('ocisd4/openllama-zh', device_map='auto',use_auth_token=True)
prompt = '關於華碩的傳說'
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
generation_output = model.generate(
input_ids=input_ids, max_new_tokens=256,
do_sample=True, top_k=40, top_p=0.95, temperature=0.7, repetition_penalty=1.08,
)
print(tokenizer.decode(generation_output[0]))
```
This is a 7B pretrained model, trained from the OpenLLaMA pretrained weights, with a context size of 2048.
**We will keep updating with new models.**
|
lovelyxs/PPO-LunarLander-v2
|
lovelyxs
| 2023-07-06T04:11:32Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-06T03:54:28Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 265.53 +/- 16.26
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
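Since the usage section above is still a TODO, here is a minimal, hedged loading sketch. It assumes the standard `huggingface_sb3` workflow; the checkpoint filename is a guess and may differ in this repository.
```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Download the checkpoint; the filename below is an assumption, check the repo files.
checkpoint = load_from_hub(repo_id="lovelyxs/PPO-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

# Quick sanity evaluation (LunarLander-v2 requires gymnasium[box2d]).
env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```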
|
NiscR/Reinforce-1
|
NiscR
| 2023-07-06T03:45:26Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-06T03:45:16Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 491.60 +/- 25.20
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
zhundred/ppo-LunarLander-v2
|
zhundred
| 2023-07-06T03:38:13Z | 6 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-06T03:37:29Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 254.86 +/- 20.77
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
Sandrro/text_to_subfunction_v6
|
Sandrro
| 2023-07-06T03:24:24Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-05T20:05:18Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: text_to_subfunction_v6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# text_to_subfunction_v6
This model is a fine-tuned version of [cointegrated/rubert-tiny2](https://huggingface.co/cointegrated/rubert-tiny2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2720
- F1: 0.4415
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 3.5055 | 1.0 | 4365 | 3.4067 | 0.1639 |
| 2.5598 | 2.0 | 8730 | 2.6935 | 0.2833 |
| 2.1499 | 3.0 | 13095 | 2.3594 | 0.3420 |
| 1.6575 | 4.0 | 17460 | 2.2243 | 0.3921 |
| 1.2463 | 5.0 | 21825 | 2.1722 | 0.4105 |
| 0.9624 | 6.0 | 26190 | 2.1955 | 0.4341 |
| 0.7407 | 7.0 | 30555 | 2.2434 | 0.4449 |
| 0.5608 | 8.0 | 34920 | 2.3604 | 0.4329 |
| 0.4233 | 9.0 | 39285 | 2.4747 | 0.4361 |
| 0.2433 | 10.0 | 43650 | 2.5562 | 0.4404 |
| 0.2154 | 11.0 | 48015 | 2.6678 | 0.4374 |
| 0.1811 | 12.0 | 52380 | 2.8158 | 0.4341 |
| 0.1374 | 13.0 | 56745 | 2.9037 | 0.4425 |
| 0.1406 | 14.0 | 61110 | 3.0182 | 0.4366 |
| 0.1135 | 15.0 | 65475 | 3.0941 | 0.4440 |
| 0.0992 | 16.0 | 69840 | 3.1516 | 0.4437 |
| 0.1159 | 17.0 | 74205 | 3.2001 | 0.4418 |
| 0.0809 | 18.0 | 78570 | 3.2489 | 0.4373 |
| 0.1035 | 19.0 | 82935 | 3.2650 | 0.4407 |
| 0.0558 | 20.0 | 87300 | 3.2720 | 0.4415 |
### Framework versions
- Transformers 4.27.1
- Pytorch 2.1.0.dev20230414+cu117
- Datasets 2.9.0
- Tokenizers 0.13.3
|
MWaleed/q-Taxi-v3
|
MWaleed
| 2023-07-06T03:23:27Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-06T03:23:24Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
import gymnasium as gym  # or `import gym`, depending on your setup
# load_from_hub is the helper defined in the Deep RL Course notebook
# (it downloads and unpickles the Q-table dictionary from the Hub).
model = load_from_hub(repo_id="MWaleed/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
BaoKien/deberta-base-finetuned-squad-v2
|
BaoKien
| 2023-07-06T03:22:36Z | 13 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"deberta",
"question-answering",
"generated_from_trainer",
"dataset:squad_v2",
"license:mit",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-07-06T01:19:43Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- squad_v2
model-index:
- name: deberta-base-finetuned-squad-v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-base-finetuned-squad-v2
This model is a fine-tuned version of [microsoft/deberta-base](https://huggingface.co/microsoft/deberta-base) on the squad_v2 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9221
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.753 | 1.0 | 8238 | 0.7286 |
| 0.5378 | 2.0 | 16476 | 0.7578 |
| 0.3881 | 3.0 | 24714 | 0.9221 |
### Performance
- 'exact': 81.84115219405373
- 'f1': 85.19125695340612
- 'total': 11873
- 'HasAns_exact': 80.24628879892038
- 'HasAns_f1': 86.95610556811602
- 'HasAns_total': 5928
- 'NoAns_exact': 83.43145500420522
- 'NoAns_f1': 83.43145500420522
- 'NoAns_total': 5945
- 'best_exact': 81.84115219405373
- 'best_exact_thresh': 0.9994916319847107
- 'best_f1': 85.19125695340657
- 'best_f1_thresh': 0.9994916319847107
- 'total_time_in_seconds': 294.34524957099984
- 'samples_per_second': 40.33698528277447
- 'latency_in_seconds': 0.024791143735450168
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
KJan05/rl-CartPole-v1-unit4
|
KJan05
| 2023-07-06T03:21:57Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-06T03:21:45Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: rl-CartPole-v1-unit4
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
AngelaBoadway/DustinBates
|
AngelaBoadway
| 2023-07-06T03:19:17Z | 0 | 1 |
transformers
|
[
"transformers",
"en",
"dataset:AngelaBoadway/DustinBates",
"doi:10.57967/hf/0859",
"endpoints_compatible",
"region:us"
] | null | 2023-07-06T01:00:15Z |
---
language:
- en
datasets:
- AngelaBoadway/DustinBates
library_name: transformers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
D U S T I N B A T E S
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** Angela Boadway
- **Language(s) (NLP):** English
|
squeeze-ai-lab/sq-xgen-7b-8k-inst-w4-s0
|
squeeze-ai-lab
| 2023-07-06T03:15:32Z | 0 | 1 | null |
[
"arxiv:2306.07629",
"region:us"
] | null | 2023-07-05T23:33:19Z |
**SqueezeLLM** is a post-training quantization framework that incorporates a new method called Dense-and-Sparse Quantization to enable efficient LLM serving.
**TLDR:** Deploying LLMs is difficult due to their large memory size. This can be addressed with reduced precision quantization.
But a naive method hurts performance. We address this with a new Dense-and-Sparse Quantization method.
Dense-and-Sparse splits weight matrices into two components: a dense component that can be heavily quantized without affecting model performance,
and a sparse part that preserves the sensitive and outlier parts of the weight matrices. With this approach,
we are able to serve larger models with a smaller memory footprint, the same latency, and higher accuracy and quality.
For more details please check out our [paper](https://arxiv.org/pdf/2306.07629.pdf).
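The card does not ship loading code (SqueezeLLM checkpoints need the project's own runtime, linked below), but the Dense-and-Sparse idea itself is easy to illustrate. The toy sketch below is only a conceptual illustration: it splits a weight matrix into a sparse full-precision outlier part and a uniformly quantized dense part, whereas SqueezeLLM uses sensitivity-based non-uniform codebooks and optimized kernels.
```python
import torch

def dense_and_sparse_split(weight: torch.Tensor, outlier_fraction: float = 0.005, n_bits: int = 4):
    """Toy illustration of Dense-and-Sparse decomposition (not the SqueezeLLM kernels)."""
    flat = weight.abs().flatten()
    k = max(1, int(outlier_fraction * flat.numel()))
    threshold = flat.topk(k).values.min()                # magnitude cutoff for "outliers"
    outlier_mask = weight.abs() >= threshold
    sparse_part = (weight * outlier_mask).to_sparse()    # few entries, kept in full precision
    dense_part = weight * (~outlier_mask)
    # Naive uniform quantization of the dense component to 2**n_bits - 1 levels.
    qmax = (2 ** n_bits - 1) // 2
    scale = dense_part.abs().max() / qmax + 1e-12
    dense_dequant = torch.clamp((dense_part / scale).round(), -qmax, qmax) * scale
    return dense_dequant, sparse_part

w = torch.randn(256, 256)
dense_dequant, sparse_part = dense_and_sparse_split(w)
reconstruction = dense_dequant + sparse_part.to_dense()
print("mean abs error:", (w - reconstruction).abs().mean().item())
```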
## Model description
4-bit XGen-7B instruction-tuned model (i.e. a model finetuned on public-domain instructional data) with 8K sequence length, quantized using SqueezeLLM.
More details on the quantization method can be found in the [paper](https://arxiv.org/pdf/2306.07629.pdf).
More detailed model descriptions can be found in the [link](https://huggingface.co/Salesforce/xgen-7b-8k-inst).
* **Base Model:** [XGen-7B-8K-Inst](https://huggingface.co/Salesforce/xgen-7b-8k-inst) (by Salesforce AI Research)
* **Bitwidth:** 4-bit
* **Sparsity Level:** 0% (dense-only)
## Links
* **Paper**: [https://arxiv.org/pdf/2306.07629.pdf](https://arxiv.org/pdf/2306.07629.pdf)
* **Code**: [https://github.com/SqueezeAILab/SqueezeLLM](https://github.com/SqueezeAILab/SqueezeLLM)
---
license: other
---
|
squeeze-ai-lab/sq-xgen-7b-8k-base-w4-s0
|
squeeze-ai-lab
| 2023-07-06T03:14:48Z | 0 | 0 | null |
[
"arxiv:2306.07629",
"region:us"
] | null | 2023-07-05T23:31:51Z |
**SqueezeLLM** is a post-training quantization framework that incorporates a new method called Dense-and-Sparse Quantization to enable efficient LLM serving.
**TLDR:** Deploying LLMs is difficult due to their large memory size. This can be addressed with reduced precision quantization.
But a naive method hurts performance. We address this with a new Dense-and-Sparse Quantization method.
Dense-and-Sparse splits weight matrices into two components: a dense component that can be heavily quantized without affecting model performance,
and a sparse part that preserves the sensitive and outlier parts of the weight matrices. With this approach,
we are able to serve larger models with a smaller memory footprint, the same latency, and higher accuracy and quality.
For more details please check out our [paper](https://arxiv.org/pdf/2306.07629.pdf).
## Model description
4-bit XGen-7B Base model with 8K sequence length quantized using SqueezeLLM.
More details on the quantization method can be found in the [paper](https://arxiv.org/pdf/2306.07629.pdf).
More detailed model descriptions can be found in the [link](https://huggingface.co/Salesforce/xgen-7b-8k-base).
* **Base Model:** [XGen-7B-8K-Base](https://huggingface.co/Salesforce/xgen-7b-8k-base) (by Salesforce AI Research)
* **Bitwidth:** 4-bit
* **Sparsity Level:** 0% (dense-only)
## Links
* **Paper**: [https://arxiv.org/pdf/2306.07629.pdf](https://arxiv.org/pdf/2306.07629.pdf)
* **Code**: [https://github.com/SqueezeAILab/SqueezeLLM](https://github.com/SqueezeAILab/SqueezeLLM)
---
license: other
---
|
h2oai/h2ogpt-research-oasst1-llama-65b
|
h2oai
| 2023-07-06T03:11:31Z | 1,502 | 9 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"gpt",
"llm",
"large language model",
"open-source",
"en",
"dataset:h2oai/openassistant_oasst1_h2ogpt_graded",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2023-05-13T18:11:13Z |
---
license: other
language:
- en
library_name: transformers
inference: false
thumbnail: https://h2o.ai/etc.clientlibs/h2o/clientlibs/clientlib-site/resources/images/favicon.ico
tags:
- gpt
- llm
- large language model
- open-source
datasets:
- h2oai/openassistant_oasst1_h2ogpt_graded
---
# h2oGPT Model Card
## Summary
H2O.ai's `h2ogpt-research-oasst1-llama-65b` is a 65 billion parameter instruction-following large language model (NOT licensed for commercial use).
- Base model: [decapoda-research/llama-65b-hf](https://huggingface.co/decapoda-research/llama-65b-hf)
- Fine-tuning dataset: [h2oai/openassistant_oasst1_h2ogpt_graded](https://huggingface.co/datasets/h2oai/openassistant_oasst1_h2ogpt_graded)
- Data-prep and fine-tuning code: [H2O.ai GitHub](https://github.com/h2oai/h2ogpt)
- Training logs: [zip](https://huggingface.co/h2oai/h2ogpt-research-oasst1-llama-65b/blob/main/llama-65b-hf.h2oaiopenassistant_oasst1_h2ogpt_graded.1_epochs.113510499324f0f007cbec9d9f1f8091441f2469.3.zip)
## Chatbot
- Run your own chatbot: [H2O.ai GitHub](https://github.com/h2oai/h2ogpt)
[](https://github.com/h2oai/h2ogpt)
## Usage
To use the model with the `transformers` library on a machine with GPUs, first make sure you have the following libraries installed.
```bash
pip install transformers==4.29.2
pip install accelerate==0.19.0
pip install torch==2.0.1
pip install einops==0.6.1
```
```python
import torch
from transformers import pipeline, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("h2oai/h2ogpt-research-oasst1-llama-65b", padding_side="left")
generate_text = pipeline(model="h2oai/h2ogpt-research-oasst1-llama-65b", tokenizer=tokenizer, torch_dtype=torch.bfloat16, trust_remote_code=True, device_map="auto", prompt_type="human_bot")
res = generate_text("Why is drinking water so healthy?", max_new_tokens=100)
print(res[0]["generated_text"])
```
Alternatively, if you prefer to not use `trust_remote_code=True` you can download [h2oai_pipeline.py](https://huggingface.co/h2oai/h2ogpt-research-oasst1-llama-65b/blob/main/h2oai_pipeline.py),
store it alongside your notebook, and construct the pipeline yourself from the loaded model and tokenizer:
```python
import torch
from h2oai_pipeline import H2OTextGenerationPipeline
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("h2oai/h2ogpt-research-oasst1-llama-65b", padding_side="left")
model = AutoModelForCausalLM.from_pretrained("h2oai/h2ogpt-research-oasst1-llama-65b", torch_dtype=torch.bfloat16, device_map="auto")
generate_text = H2OTextGenerationPipeline(model=model, tokenizer=tokenizer, prompt_type="human_bot")
res = generate_text("Why is drinking water so healthy?", max_new_tokens=100)
print(res[0]["generated_text"])
```
## Model Architecture
```
LlamaForCausalLM(
(model): LlamaModel(
(embed_tokens): Embedding(32000, 8192, padding_idx=31999)
(layers): ModuleList(
(0-79): 80 x LlamaDecoderLayer(
(self_attn): LlamaAttention(
(q_proj): Linear(in_features=8192, out_features=8192, bias=False)
(k_proj): Linear(in_features=8192, out_features=8192, bias=False)
(v_proj): Linear(in_features=8192, out_features=8192, bias=False)
(o_proj): Linear(in_features=8192, out_features=8192, bias=False)
(rotary_emb): LlamaRotaryEmbedding()
)
(mlp): LlamaMLP(
(gate_proj): Linear(in_features=8192, out_features=22016, bias=False)
(down_proj): Linear(in_features=22016, out_features=8192, bias=False)
(up_proj): Linear(in_features=8192, out_features=22016, bias=False)
(act_fn): SiLUActivation()
)
(input_layernorm): LlamaRMSNorm()
(post_attention_layernorm): LlamaRMSNorm()
)
)
(norm): LlamaRMSNorm()
)
(lm_head): Linear(in_features=8192, out_features=32000, bias=False)
)
```
## Model Configuration
```json
LlamaConfig {
"_name_or_path": "h2oai/h2ogpt-research-oasst1-llama-65b",
"architectures": [
"LlamaForCausalLM"
],
"bos_token_id": 0,
"custom_pipelines": {
"text-generation": {
"impl": "h2oai_pipeline.H2OTextGenerationPipeline",
"pt": "AutoModelForCausalLM"
}
},
"eos_token_id": 1,
"hidden_act": "silu",
"hidden_size": 8192,
"initializer_range": 0.02,
"intermediate_size": 22016,
"max_position_embeddings": 2048,
"max_sequence_length": 2048,
"model_type": "llama",
"num_attention_heads": 64,
"num_hidden_layers": 80,
"pad_token_id": -1,
"rms_norm_eps": 1e-05,
"tie_word_embeddings": false,
"torch_dtype": "float16",
"transformers_version": "4.30.1",
"use_cache": true,
"vocab_size": 32000
}
```
## Model Validation
Model validation results using [EleutherAI lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness).
TBD
## Disclaimer
Please read this disclaimer carefully before using the large language model provided in this repository. Your use of the model signifies your agreement to the following terms and conditions.
- Biases and Offensiveness: The large language model is trained on a diverse range of internet text data, which may contain biased, racist, offensive, or otherwise inappropriate content. By using this model, you acknowledge and accept that the generated content may sometimes exhibit biases or produce content that is offensive or inappropriate. The developers of this repository do not endorse, support, or promote any such content or viewpoints.
- Limitations: The large language model is an AI-based tool and not a human. It may produce incorrect, nonsensical, or irrelevant responses. It is the user's responsibility to critically evaluate the generated content and use it at their discretion.
- Use at Your Own Risk: Users of this large language model must assume full responsibility for any consequences that may arise from their use of the tool. The developers and contributors of this repository shall not be held liable for any damages, losses, or harm resulting from the use or misuse of the provided model.
- Ethical Considerations: Users are encouraged to use the large language model responsibly and ethically. By using this model, you agree not to use it for purposes that promote hate speech, discrimination, harassment, or any form of illegal or harmful activities.
- Reporting Issues: If you encounter any biased, offensive, or otherwise inappropriate content generated by the large language model, please report it to the repository maintainers through the provided channels. Your feedback will help improve the model and mitigate potential issues.
- Changes to this Disclaimer: The developers of this repository reserve the right to modify or update this disclaimer at any time without prior notice. It is the user's responsibility to periodically review the disclaimer to stay informed about any changes.
By using the large language model provided in this repository, you agree to accept and comply with the terms and conditions outlined in this disclaimer. If you do not agree with any part of this disclaimer, you should refrain from using the model and any content generated by it.
|
EDfai/furry_lora_collections_self_made
|
EDfai
| 2023-07-06T03:08:16Z | 0 | 1 | null |
[
"image-generation",
"furry",
"region:us"
] | null | 2023-06-17T08:40:20Z |
---
tags:
- image-generation
- furry
---
# Model Summary
Note that all of the characters here are furry characters<br>
All of the LoRA models here were trained on top of the fluffyrock model; the example images were generated with the indigo furry mix series of models (v30, v35)<br>
Character models made so far:<br>
* ECHO: Flynn, Chase, Jenna, Carl, TJ, Sydney, Kudzu<br>
* TSR: Yao<br>
* UTAU: Aro, Oyupo, Laru<br>
# Model-specific trigger words
The parentheses after each trigger word show a concrete example prompt combination<br>
ECHO:
* Flynn: flynnboi (furry flynnboi ((reptile)) ((lizard)) gila anthro male,black Mohawk, solo, detailed beautiful green eyes,(detailed black scalie scales))<br>
* Carl: carlhen (furry carlhen goat anthro mature male, horn, beanie, solo, beard , detailed beautiful green eyes, male focus, (detailed brown fluffy fur))<br>
* Chase: chasehunter (furry chasehunter otter anthro mature male, goatee, solo, detailed beautiful orange eyes, (detailed brown fluffy fur))<br>
* Jenna: jennabg (furry jennabg fox anthro mature female, solo, detailed beautiful blue eyes, female focus, (detailed yellow fur))<br>
* TJ: tjgoodboi (furry tjgoodboi lynx anthro male, solo, blue eyes, male focus, grey and white body,)<br>
* Sydney: sydneybs (furry sydneybs otter anthro mature male, cap, solo, detailed beautiful blue eyes, male focus, (detailed brown fluffy fur))<br>
* Kudzu: kudzu (solo, kudzu, raccoon, anthro, male, black_eyes)<br>
TSR:
* Yao: tigeryao (furry tigeryao tiger anthro mature male, solo, stripes, detailed beautiful black eyes, (detailed white yellow fur))<br>
UTAU:
* Laru: mineraru (mineraru, dragon, ((bald)), earless, solo, blue eyes, blue body, blue skin, blue scalie scales)<br>
* Oyupo: oyupo (furry ((oyupo)) [[tiger]] anthro mature male, solo, (((white eyebrows))) , detailed beautiful brown eyes, (detailed yellow fluffy fur))<br>
* Aro: wolfaro (furry wolfaro anthro wolf mature male, solo, detailed beautiful green eyes, (detailed brown white fur))<br>
# Model notes
To invoke a character LoRA model fairly accurately with positive prompts, preserving the character's key features while keeping good generalization, the author offers some personal observations here, listed from top to bottom in order of importance:
* Species type (species, furry, anthro, etc.)
* Fur/skin color
* Eye color
* Key features (e.g. a dragon's horns, white eyebrows; if they are hard to describe, leave them out)
* The model-specific trigger word
Interestingly, the first three items seem to cover most of a character model's features; giving only these three may already produce a very similar character.<br>
Even if a furry character LoRA model was not trained with these items as tags, I still recommend describing these aspects when invoking such a model.<br>
The models the author made are slightly overfitted and are not very easy to invoke. If you have trouble bringing out a character, refer to the prompts of the example images in that character's folder.<br>
Note for the minelaru (水音来流) model: it is best to follow the prompts from the author's example images, otherwise the character is hard to bring out<br>
# Model preview (ID-style portraits)











|
aroot/eng-fra-wsample.43a
|
aroot
| 2023-07-06T03:04:57Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-07-06T02:44:51Z |
---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: eng-fra-wsample.43a
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eng-fra-wsample.43a
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1186
- Bleu: 32.9991
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
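The card does not include inference code; here is a minimal, hedged translation sketch. It assumes the fine-tuned checkpoint keeps the standard mBART-50 language codes (`en_XX` to `fr_XX`), which is a guess based on the base model.
```python
from transformers import MBartForConditionalGeneration, MBart50TokenizerFast

model = MBartForConditionalGeneration.from_pretrained("aroot/eng-fra-wsample.43a")
tokenizer = MBart50TokenizerFast.from_pretrained("aroot/eng-fra-wsample.43a")

# English -> French, following the mBART-50 convention of forcing the target language token.
tokenizer.src_lang = "en_XX"
inputs = tokenizer("The weather is nice today.", return_tensors="pt")
generated = model.generate(
    **inputs,
    forced_bos_token_id=tokenizer.lang_code_to_id["fr_XX"],
    max_new_tokens=64,
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```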
|
google/umt5-small
|
google
| 2023-07-06T02:31:38Z | 9,128 | 21 |
transformers
|
[
"transformers",
"pytorch",
"text2text-generation",
"multilingual",
"af",
"am",
"ar",
"az",
"be",
"bg",
"bn",
"ca",
"ceb",
"co",
"cs",
"cy",
"da",
"de",
"el",
"en",
"eo",
"es",
"et",
"eu",
"fa",
"fi",
"fil",
"fr",
"fy",
"ga",
"gd",
"gl",
"gu",
"ha",
"haw",
"hi",
"hmn",
"ht",
"hu",
"hy",
"ig",
"is",
"it",
"iw",
"ja",
"jv",
"ka",
"kk",
"km",
"kn",
"ko",
"ku",
"ky",
"la",
"lb",
"lo",
"lt",
"lv",
"mg",
"mi",
"mk",
"ml",
"mn",
"mr",
"ms",
"mt",
"my",
"ne",
"nl",
"no",
"ny",
"pa",
"pl",
"ps",
"pt",
"ro",
"ru",
"sd",
"si",
"sk",
"sl",
"sm",
"sn",
"so",
"sq",
"sr",
"st",
"su",
"sv",
"sw",
"ta",
"te",
"tg",
"th",
"tr",
"uk",
"und",
"ur",
"uz",
"vi",
"xh",
"yi",
"yo",
"zh",
"zu",
"dataset:mc4",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-07-02T01:48:53Z |
---
language:
- multilingual
- af
- am
- ar
- az
- be
- bg
- bn
- ca
- ceb
- co
- cs
- cy
- da
- de
- el
- en
- eo
- es
- et
- eu
- fa
- fi
- fil
- fr
- fy
- ga
- gd
- gl
- gu
- ha
- haw
- hi
- hmn
- ht
- hu
- hy
- ig
- is
- it
- iw
- ja
- jv
- ka
- kk
- km
- kn
- ko
- ku
- ky
- la
- lb
- lo
- lt
- lv
- mg
- mi
- mk
- ml
- mn
- mr
- ms
- mt
- my
- ne
- nl
- no
- ny
- pa
- pl
- ps
- pt
- ro
- ru
- sd
- si
- sk
- sl
- sm
- sn
- so
- sq
- sr
- st
- su
- sv
- sw
- ta
- te
- tg
- th
- tr
- uk
- und
- ur
- uz
- vi
- xh
- yi
- yo
- zh
- zu
datasets:
- mc4
license: apache-2.0
---
[Google's UMT5](https://github.com/google-research/multilingual-t5)
UMT5 is pretrained on an updated version of the [mC4](https://www.tensorflow.org/datasets/catalog/c4#c4multilingual) corpus, covering 107 languages:
Afrikaans, Albanian, Amharic, Arabic, Armenian, Azerbaijani, Basque, Belarusian, Bengali, Bulgarian, Burmese, Catalan, Cebuano, Chichewa, Chinese, Corsican, Czech, Danish, Dutch, English, Esperanto, Estonian, Filipino, Finnish, French, Galician, Georgian, German, Greek, Gujarati, Haitian Creole, Hausa, Hawaiian, Hebrew, Hindi, Hmong, Hungarian, Icelandic, Igbo, Indonesian, Irish, Italian, Japanese, Javanese, Kannada, Kazakh, Khmer, Korean, Kurdish, Kyrgyz, Lao, Latin, Latvian, Lithuanian, Luxembourgish, Macedonian, Malagasy, Malay, Malayalam, Maltese, Maori, Marathi, Mongolian, Nepali, Norwegian, Pashto, Persian, Polish, Portuguese, Punjabi, Romanian, Russian, Samoan, Scottish Gaelic, Serbian, Shona, Sindhi, Sinhala, Slovak, Slovenian, Somali, Sotho, Spanish, Sundanese, Swahili, Swedish, Tajik, Tamil, Telugu, Thai, Turkish, Ukrainian, Urdu, Uzbek, Vietnamese, Welsh, West Frisian, Xhosa, Yiddish, Yoruba, Zulu.
**Note**: UMT5 was only pre-trained on mC4 excluding any supervised training. Therefore, this model has to be fine-tuned before it is usable on a downstream task.
Pretraining Dataset: [mC4](https://www.tensorflow.org/datasets/catalog/c4#c4multilingual)
Other Community Checkpoints: [here](https://huggingface.co/models?search=umt5)
Paper: [UniMax, Fairer and More Effective Language Sampling for Large-Scale Multilingual Pretraining](https://openreview.net/forum?id=kXwdL1cWOAi)
Authors: *by Hyung Won Chung, Xavier Garcia, Adam Roberts, Yi Tay, Orhan Firat, Sharan Narang, Noah Constant*
## Abstract
*Pretrained multilingual large language models have typically used heuristic temperature-based sampling to balance between different languages. However previous work has not systematically evaluated the efficacy of different pretraining language distributions across model scales. In this paper, we propose a new sampling method, UniMax, that delivers more uniform coverage of head languages while mitigating overfitting on tail languages by explicitly capping the number of repeats over each language's corpus. We perform an extensive series of ablations testing a range of sampling strategies on a suite of multilingual benchmarks, while varying model scale. We find that UniMax outperforms standard temperature-based sampling, and the benefits persist as scale increases. As part of our contribution, we release: (i) an improved and refreshed mC4 multilingual corpus consisting of 29 trillion characters across 107 languages, and (ii) a suite of pretrained umT5 model checkpoints trained with UniMax sampling.*
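Since the checkpoint needs fine-tuning before use, here is a minimal sketch of loading it and computing a seq2seq training loss; the text pair is purely illustrative and a recent `transformers` release with UMT5 support is assumed.
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("google/umt5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("google/umt5-small")

# Hypothetical supervised pair for a downstream fine-tuning task.
batch = tokenizer(
    "UMT5 is a multilingual T5 variant pretrained with UniMax language sampling.",
    text_target="UMT5: multilingual T5 pretrained with UniMax.",
    return_tensors="pt",
)
loss = model(**batch).loss  # labels are included because text_target was passed
print(float(loss))
```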
|
saintzeno/a2c-AntBulletEnv-v0
|
saintzeno
| 2023-07-06T02:12:44Z | 2 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-06T01:49:03Z |
---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 1201.73 +/- 71.71
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of a **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
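As with the other SB3 cards, the usage section is a TODO. A hedged loading sketch follows: the checkpoint filename is a guess, `AntBulletEnv-v0` needs `pybullet` installed, and the repository may also contain VecNormalize statistics that should be loaded for full evaluation quality.
```python
import gym
import pybullet_envs  # noqa: F401  registers AntBulletEnv-v0 with gym (requires pybullet)
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C
from stable_baselines3.common.evaluation import evaluate_policy

# Filename is an assumption; check the repository's file list.
checkpoint = load_from_hub(repo_id="saintzeno/a2c-AntBulletEnv-v0", filename="a2c-AntBulletEnv-v0.zip")
model = A2C.load(checkpoint)

env = gym.make("AntBulletEnv-v0")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```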
|
chunlongniu/SantaTrialsCoder
|
chunlongniu
| 2023-07-06T01:59:40Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-06T01:55:36Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
### Framework versions
- PEFT 0.4.0.dev0
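The card only lists the quantization config; below is a minimal, hedged sketch for loading the adapter. The base model is not stated in the card, so it is read from the adapter's `PeftConfig` rather than hard-coded.
```python
from peft import PeftConfig, PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

adapter_id = "chunlongniu/SantaTrialsCoder"
config = PeftConfig.from_pretrained(adapter_id)

# Load the (unspecified) base model in 8-bit, matching the training config above.
base_model = AutoModelForCausalLM.from_pretrained(
    config.base_model_name_or_path,
    load_in_8bit=True,
    device_map="auto",
    trust_remote_code=True,  # may be required depending on the base architecture
)
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path, trust_remote_code=True)
model = PeftModel.from_pretrained(base_model, adapter_id)
```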
|
AAOBA/ppo-SnowballTarget
|
AAOBA
| 2023-07-06T01:24:49Z | 10 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2023-07-06T01:24:28Z |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: AAOBA/ppo-SnowballTarget
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
jsjung00/ppo-LunarLander-v2
|
jsjung00
| 2023-07-06T01:20:51Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-06T01:20:07Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -636.93 +/- 286.95
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
YIMMYCRUZ/vit-model-ojas
|
YIMMYCRUZ
| 2023-07-06T01:14:59Z | 72 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"image-segmentation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-segmentation
| 2023-07-05T03:17:25Z |
---
license: apache-2.0
tags:
- image-segmentation
- generated_from_trainer
metrics:
- accuracy
widget:
- src: https://i.ibb.co/NL52HmG/sana.png
example_title: Healthy
- src: https://i.ibb.co/P44CL1q/marchita.png
example_title: Bean Rust
model-index:
- name: vit-model-ojas
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-model-ojas
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0099
- Accuracy: 1.0
## Model description
The model segments images of plant leaves so you can tell whether they are healthy or withered.
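A minimal inference sketch follows, assuming the checkpoint exposes the standard image-classification head; the image URL is one of the widget examples above.
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="YIMMYCRUZ/vit-model-ojas")
print(classifier("https://i.ibb.co/NL52HmG/sana.png"))  # healthy-leaf example from the widget
```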
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1457 | 3.85 | 500 | 0.0099 | 1.0 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Tokenizers 0.13.3
|
anujsahani01/finetuned_AI4Bharat_mr_en
|
anujsahani01
| 2023-07-06T01:08:18Z | 112 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"mbart",
"text2text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-07-05T15:52:30Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: finetuned_AI4Bharat_mr_en
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_AI4Bharat_mr_en
This model is a fine-tuned version of [ai4bharat/indic-bert](https://huggingface.co/ai4bharat/indic-bert) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- training_steps: 8000
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
anujsahani01/finetuned_Mbart_mr_en
|
anujsahani01
| 2023-07-06T01:08:06Z | 120 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"mbart",
"text2text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-07-05T17:34:56Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: finetuned_Mbart_mr_en
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_Mbart_mr_en
This model is a fine-tuned version of [facebook/mbart-large-50](https://huggingface.co/facebook/mbart-large-50) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- training_steps: 10000
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
chaudha7/LLMs
|
chaudha7
| 2023-07-06T00:51:44Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-05-17T21:15:35Z |
### Model Description
This is a fine-tuned Bloom-7b model. It has been trained on a dummy dataset for question answering purposes. It is not very useful for the general public.
I wanted to get an idea of the hugging face model and dataset pipeline.
Do check out https://huggingface.co/chaudha7/DiaryFlow
- **Developed by:** Aashay Chaudhari
|
chaudha7/DiaryFlow
|
chaudha7
| 2023-07-06T00:49:49Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-07-06T00:43:32Z |
### Model Description
This is a fine-tuned Bloom-7b model. It was a demo project meant to bring some fun to the otherwise serious and fast-moving world of LLM use cases.
The model has been trained on a custom ChatGPT-created dataset (https://huggingface.co/datasets/chaudha7/Diary-Entry-To-Rap) that turns diary entries into short rap songs with a motivational ending. Just something fun for the users.
### Developed by: Aashay Chaudhari
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
### Results
### Input:
Dear Diary,
Oh boy, today was such an incredible day! I woke up with a burst of excitement in my heart because it was my birthday! Double digits, baby! Mom and Dad surprised
me with a brand new bike, just like the one I've been dreaming about for ages. We went for a ride in the park, and I felt like I was flying!
My friends came over for a super fun party, and we played games, danced, and laughed until our tummies hurt. Grandma baked the most delicious chocolate cake,
and I blew out all the candles with one big breath. The best part? I got some awesome presents, like a shiny diary (just like you, Diary!), art supplies, and
a book about adventures. I can't wait to read it tonight!
Tonight, as I snuggle under my cozy blanket, I can't help but feel so grateful for the love and joy that surrounds me. Being 10 feels amazing, Diary.
I can't wait to see what other incredible adventures await me this year!
Until tomorrow,
Emily
### Output:
(Verse)
Today's my birthday,
feeling so alive,
Double digits, yo,
I'm ready to strive.
Got a brand-new bike,
flying through the air,
Riding in the park,
without a single care.
Friends came over,
we had a blast,
Playing games, dancing,
memories that'll last.
Grandma's chocolate cake,
candles all aglow,
Blowing them out,
a wish ready to grow.
Shiny diary, art supplies, and books,
Unleashing my creativity, no limits, no hooks.
Grateful for love and joy, can't help but smile,
Ten years old, embracing life's every mile.
(Motivational Ending)
No matter the age, dreams light up the way,
Believe in yourself, make the most of today.
Adventures await, with each step you take,
Keep shining, keep growing, there's no limit to what you'll make.
|
CalamitousVisibility/UK_Energy_Industry_Complaints_Identifier_ver1
|
CalamitousVisibility
| 2023-07-06T00:28:38Z | 109 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-05T22:24:13Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: UK_Energy_Industry_Complaints_Identifier_ver1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# UK_Energy_Industry_Complaints_Identifier_ver1
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on a balanced dataset consisting of 17,620
publicly available customer reviews of various domestic energy suppliers in the United Kingdom.
It achieves the following results on the evaluation set:
- Loss: 0.3369
- Accuracy: 0.9561
- F1: [0.95594347 0.95621041]
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.1
- Datasets 2.13.1
- Tokenizers 0.11.0
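The card does not include inference code; a minimal sketch using the standard text-classification pipeline is shown below. The label names are not documented, so the raw `LABEL_0`/`LABEL_1` outputs may need to be mapped by hand.
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="CalamitousVisibility/UK_Energy_Industry_Complaints_Identifier_ver1",
)
print(classifier("I have been waiting three months for my refund and nobody replies to my emails."))
```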
|
NasimB/gpt2-concat-gutenberg-2p2k-1k
|
NasimB
| 2023-07-06T00:20:57Z | 22 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-05T22:18:14Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: gpt2-concat-gutenberg-2p2k-1k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-concat-gutenberg-2p2k-1k
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0101
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 7
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.7263 | 0.29 | 500 | 5.6343 |
| 5.3716 | 0.58 | 1000 | 5.2005 |
| 5.0162 | 0.88 | 1500 | 4.9564 |
| 4.7483 | 1.17 | 2000 | 4.8083 |
| 4.5898 | 1.46 | 2500 | 4.6842 |
| 4.484 | 1.75 | 3000 | 4.5777 |
| 4.3681 | 2.04 | 3500 | 4.4955 |
| 4.1667 | 2.33 | 4000 | 4.4513 |
| 4.139 | 2.63 | 4500 | 4.3991 |
| 4.1109 | 2.92 | 5000 | 4.3502 |
| 3.9085 | 3.21 | 5500 | 4.3470 |
| 3.8598 | 3.5 | 6000 | 4.3167 |
| 3.8525 | 3.79 | 6500 | 4.2818 |
| 3.7503 | 4.08 | 7000 | 4.2851 |
| 3.5747 | 4.38 | 7500 | 4.2769 |
| 3.5782 | 4.67 | 8000 | 4.2592 |
| 3.5679 | 4.96 | 8500 | 4.2398 |
| 3.3474 | 5.25 | 9000 | 4.2678 |
| 3.3278 | 5.54 | 9500 | 4.2623 |
| 3.3307 | 5.83 | 10000 | 4.2571 |
| 3.2522 | 6.13 | 10500 | 4.2674 |
| 3.1738 | 6.42 | 11000 | 4.2697 |
| 3.1687 | 6.71 | 11500 | 4.2692 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
Lucas-lab/distilbert-base-uncased-finetuned-cola
|
Lucas-lab
| 2023-07-06T00:13:07Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-02T20:28:15Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Lucas-lab/distilbert-base-uncased-finetuned-cola
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Lucas-lab/distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1848
- Validation Loss: 0.5885
- Train Matthews Correlation: 0.5019
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1602, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Matthews Correlation | Epoch |
|:----------:|:---------------:|:--------------------------:|:-----:|
| 0.5153 | 0.4879 | 0.4331 | 0 |
| 0.3121 | 0.5405 | 0.4874 | 1 |
| 0.1848 | 0.5885 | 0.5019 | 2 |
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.12.0
- Datasets 2.13.1
- Tokenizers 0.13.3
|
hameersiddique/question_answer_model
|
hameersiddique
| 2023-07-05T23:40:18Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-07-05T18:05:53Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: question_answer_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# question_answer_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7276
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 250 | 2.4246 |
| 2.7406 | 2.0 | 500 | 1.7882 |
| 2.7406 | 3.0 | 750 | 1.7276 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1
- Datasets 2.13.1
- Tokenizers 0.13.3
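A minimal extractive-QA sketch, assuming the standard question-answering pipeline works with this DistilBERT checkpoint; the question and context are illustrative.
```python
from transformers import pipeline

qa = pipeline("question-answering", model="hameersiddique/question_answer_model")
result = qa(
    question="What dataset was the model fine-tuned on?",
    context="This DistilBERT checkpoint was fine-tuned on the SQuAD dataset for extractive question answering.",
)
print(result["answer"], result["score"])
```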
|
ahmedALM1221/swinv2-large-patch4-window12to16-192to256-22kto1k-ft-finetuned-eurosat-50
|
ahmedALM1221
| 2023-07-05T23:21:55Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"swinv2",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-07-04T18:45:38Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: swinv2-large-patch4-window12to16-192to256-22kto1k-ft-finetuned-eurosat-50
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: Augmented-Final
split: train
args: Augmented-Final
metrics:
- name: Accuracy
type: accuracy
value: 0.9753340184994861
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swinv2-large-patch4-window12to16-192to256-22kto1k-ft-finetuned-eurosat-50
This model is a fine-tuned version of [microsoft/swinv2-large-patch4-window12to16-192to256-22kto1k-ft](https://huggingface.co/microsoft/swinv2-large-patch4-window12to16-192to256-22kto1k-ft) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0909
- Accuracy: 0.9753
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.9
- num_epochs: 12
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.0236 | 1.0 | 122 | 1.9878 | 0.1305 |
| 1.88 | 2.0 | 244 | 1.7957 | 0.2867 |
| 1.5421 | 3.0 | 366 | 1.3813 | 0.5149 |
| 0.9489 | 4.0 | 488 | 0.9015 | 0.7030 |
| 0.8734 | 5.0 | 610 | 0.6616 | 0.7667 |
| 0.6562 | 6.0 | 732 | 0.5095 | 0.8140 |
| 0.5788 | 7.0 | 854 | 0.4036 | 0.8520 |
| 0.6737 | 8.0 | 976 | 0.3157 | 0.8921 |
| 0.4687 | 9.0 | 1098 | 0.2146 | 0.9281 |
| 0.3775 | 10.0 | 1220 | 0.2020 | 0.9353 |
| 0.3226 | 11.0 | 1342 | 0.1549 | 0.9558 |
| 0.2452 | 12.0 | 1464 | 0.0909 | 0.9753 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
osiria/bert-italian-uncased-ner
|
osiria
| 2023-07-05T23:20:34Z | 626 | 2 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"bert",
"token-classification",
"it",
"arxiv:1810.04805",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-06-03T10:44:00Z |
---
license: apache-2.0
language:
- it
widget:
- text: "mi chiamo marco rossi, vivo a roma e lavoro per l'agenzia spaziale italiana"
example_title: "Example 1"
---
--------------------------------------------------------------------------------------------------
<body>
<span class="vertical-text" style="background-color:lightgreen;border-radius: 3px;padding: 3px;"> </span>
<br>
<span class="vertical-text" style="background-color:orange;border-radius: 3px;padding: 3px;"> Task: Named Entity Recognition</span>
<br>
<span class="vertical-text" style="background-color:lightblue;border-radius: 3px;padding: 3px;"> Model: BERT</span>
<br>
<span class="vertical-text" style="background-color:tomato;border-radius: 3px;padding: 3px;"> Lang: IT</span>
<br>
<span class="vertical-text" style="background-color:lightgrey;border-radius: 3px;padding: 3px;"> Type: Uncased</span>
<br>
<span class="vertical-text" style="background-color:#CF9FFF;border-radius: 3px;padding: 3px;"> </span>
</body>
--------------------------------------------------------------------------------------------------
<h3>Model description</h3>
This is a <b>BERT</b> <b>[1]</b> uncased model for the <b>Italian</b> language, fine-tuned for <b>Named Entity Recognition</b> (<b>Person</b>, <b>Location</b>, <b>Organization</b> and <b>Miscellanea</b> classes) on the [WikiNER](https://figshare.com/articles/dataset/Learning_multilingual_named_entity_recognition_from_Wikipedia/5462500) dataset <b>[2]</b>, using the uncased <b>BERT-ITALIAN</b> ([bert-base-italian-uncased](https://huggingface.co/osiria/bert-base-italian-uncased)) as a pre-trained model.
This is an uncased, base size BERT model. If you are looking for a cased model, you can refer to: https://huggingface.co/osiria/bert-italian-cased-ner
<h3>Training and Performances</h3>
The model is trained to perform entity recognition over 4 classes: <b>PER</b> (persons), <b>LOC</b> (locations), <b>ORG</b> (organizations), <b>MISC</b> (miscellanea, mainly events, products and services). It has been fine-tuned for Named Entity Recognition, using the WikiNER Italian dataset plus an additional custom dataset of manually annotated Wikipedia paragraphs.
The WikiNER dataset has been split into 102,352 training instances and 25,588 test instances, and the model has been trained for 1 epoch with a constant learning rate of 1e-5.
The performances on the test set are reported in the following table:
| Recall | Precision | F1 |
| ------ | ------ | ------ |
| 90.10 | 90.56 | 90.32 |
The metrics have been computed at the token level and then macro-averaged over the 4 classes.
Then, since WikiNER is an automatically annotated (silver standard) dataset, which sometimes contains imperfect annotations, an additional fine-tuning on ~3,500 manually annotated paragraphs has been performed.
<h3>Quick usage</h3>
```python
from transformers import BertTokenizerFast, BertForTokenClassification
tokenizer = BertTokenizerFast.from_pretrained("osiria/bert-italian-uncased-ner")
model = BertForTokenClassification.from_pretrained("osiria/bert-italian-uncased-ner")
from transformers import pipeline
ner = pipeline("ner", model = model, tokenizer = tokenizer, aggregation_strategy="first")
ner("mi chiamo marco rossi, vivo a roma e lavoro per l'agenzia spaziale italiana nella missione prisma")
[{'entity_group': 'PER',
'score': 0.9984422,
'word': 'marco rossi',
'start': 10,
'end': 21},
{'entity_group': 'LOC',
'score': 0.9976732,
'word': 'roma',
'start': 30,
'end': 34},
{'entity_group': 'ORG',
'score': 0.99747753,
'word': 'agenzia spaziale italiana',
'start': 50,
'end': 75},
{'entity_group': 'MISC',
'score': 0.96949625,
'word': 'prisma',
'start': 91,
'end': 97}]
```
You can also try the model online using this web app: https://huggingface.co/spaces/osiria/bert-italian-uncased-ner
<h3>References</h3>
[1] https://arxiv.org/abs/1810.04805
[2] https://www.sciencedirect.com/science/article/pii/S0004370212000276
<h3>Limitations</h3>
This model is mainly trained on Wikipedia, so it's particularly suitable for natively digital text from the world wide web, written in a correct and fluent form (like wikis, web pages, news, etc.). However, it may show limitations when it comes to chaotic text, containing errors and slang expressions
(like social media posts) or when it comes to domain-specific text (like medical, financial or legal content).
<h3>License</h3>
The model is released under <b>Apache-2.0</b> license
|
asenella/mmnist_MoPoEconfig_resnet_seed_0_ratio_0_c
|
asenella
| 2023-07-05T23:16:00Z | 0 | 0 | null |
[
"multivae",
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-06-04T21:11:40Z |
---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
mpetrikov/q-FrozenLake-v1-4x4-noSlippery
|
mpetrikov
| 2023-07-05T22:50:25Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-05T22:50:23Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="mpetrikov/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
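Note that `load_from_hub` is the small helper from the course notebook rather than a library import. A minimal sketch of that helper and of a greedy rollout, assuming the pickle holds a dict with `"qtable"` and `"env_id"` keys and the older gym step/reset API:
```python
import pickle
import gym
import numpy as np
from huggingface_hub import hf_hub_download

def load_from_hub(repo_id, filename):
    # download the pickled model dict from the Hub and unpickle it
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)

model = load_from_hub(repo_id="mpetrikov/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
env = gym.make(model["env_id"], is_slippery=False)

state = env.reset()
total_reward, done = 0.0, False
while not done:
    action = int(np.argmax(model["qtable"][state]))  # act greedily w.r.t. the learned Q-table
    state, reward, done, _ = env.step(action)
    total_reward += reward
print(total_reward)
```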
|
hopkins/eng-mya-simcse.near2.4440
|
hopkins
| 2023-07-05T22:49:46Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-07-05T22:28:28Z |
---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: eng-mya-simcse.near2.4440
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eng-mya-simcse.near2.4440
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8502
- Bleu: 4.8797
## Model description
More information needed
## Intended uses & limitations
More information needed
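Pending proper documentation, a minimal translation sketch following the base mbart-50-many-to-many API; the language codes `en_XX` (English) and `my_MM` (Burmese) are assumptions inherited from the base model:
```python
from transformers import MBartForConditionalGeneration, MBart50TokenizerFast

model = MBartForConditionalGeneration.from_pretrained("hopkins/eng-mya-simcse.near2.4440")
tokenizer = MBart50TokenizerFast.from_pretrained("hopkins/eng-mya-simcse.near2.4440")

tokenizer.src_lang = "en_XX"  # source: English
encoded = tokenizer("The weather is nice today.", return_tensors="pt")
generated = model.generate(
    **encoded,
    forced_bos_token_id=tokenizer.lang_code_to_id["my_MM"],  # target: Burmese (assumed code)
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```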
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
hopkins/eng-mya-simcse.dev2.4440
|
hopkins
| 2023-07-05T22:46:19Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-07-05T22:24:42Z |
---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: eng-mya-simcse.dev2.4440
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eng-mya-simcse.dev2.4440
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8287
- Bleu: 4.8012
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
joydragon/Reinforce-Pixelcopter-PLE-v3
|
joydragon
| 2023-07-05T22:39:10Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-05T22:39:08Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter-PLE-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 20.10 +/- 15.66
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
TheSupremeTaco/Taxi-v3
|
TheSupremeTaco
| 2023-07-05T22:11:34Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-05T22:11:31Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="TheSupremeTaco/Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
LiviaQi/trained_model
|
LiviaQi
| 2023-07-05T22:10:22Z | 188 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"detr",
"object-detection",
"generated_from_trainer",
"dataset:imagefolder",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
object-detection
| 2023-07-05T21:06:55Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
model-index:
- name: trained_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# trained_model
This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on the imagefolder dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
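Pending proper documentation, an illustrative inference sketch using the standard DETR API; whether an image processor was saved with this checkpoint is an assumption (fall back to `facebook/detr-resnet-50`'s processor if not):
```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, DetrForObjectDetection

processor = AutoImageProcessor.from_pretrained("LiviaQi/trained_model")  # may need facebook/detr-resnet-50 instead
model = DetrForObjectDetection.from_pretrained("LiviaQi/trained_model")

image = Image.open("example.jpg")  # hypothetical input image
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# keep detections above a confidence threshold
target_sizes = torch.tensor([image.size[::-1]])
results = processor.post_process_object_detection(outputs, threshold=0.5, target_sizes=target_sizes)[0]
for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
    print(model.config.id2label[label.item()], round(score.item(), 3), [round(v, 1) for v in box.tolist()])
```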
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 500
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
hopkins/eng-guj-simcse.near2.4440
|
hopkins
| 2023-07-05T22:07:37Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-07-05T21:47:31Z |
---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: eng-guj-simcse.near2.4440
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eng-guj-simcse.near2.4440
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2452
- Bleu: 2.9768
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
newconew/speecht5_finetuned_voxpopuli_nl
|
newconew
| 2023-07-05T21:55:25Z | 80 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"speecht5",
"text-to-audio",
"generated_from_trainer",
"dataset:voxpopuli",
"base_model:microsoft/speecht5_tts",
"base_model:finetune:microsoft/speecht5_tts",
"license:mit",
"endpoints_compatible",
"region:us"
] |
text-to-audio
| 2023-07-05T19:33:24Z |
---
license: mit
base_model: microsoft/speecht5_tts
tags:
- generated_from_trainer
datasets:
- voxpopuli
model-index:
- name: speecht5_finetuned_voxpopuli_nl
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# speecht5_finetuned_voxpopuli_nl
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the voxpopuli dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4612
## Model description
More information needed
## Intended uses & limitations
More information needed
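Pending proper documentation, a minimal text-to-speech sketch following the standard SpeechT5 API; the HiFi-GAN vocoder and the x-vector speaker-embedding source are assumptions, not part of this card:
```python
import torch
import soundfile as sf
from datasets import load_dataset
from transformers import SpeechT5Processor, SpeechT5ForTextToSpeech, SpeechT5HifiGan

processor = SpeechT5Processor.from_pretrained("newconew/speecht5_finetuned_voxpopuli_nl")
model = SpeechT5ForTextToSpeech.from_pretrained("newconew/speecht5_finetuned_voxpopuli_nl")
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

# speaker embedding taken from the CMU Arctic x-vectors dataset (an arbitrary choice)
embeddings = load_dataset("Matthijs/cmu-arctic-xvectors", split="validation")
speaker_embedding = torch.tensor(embeddings[7306]["xvector"]).unsqueeze(0)

inputs = processor(text="Hallo, dit is een Nederlandse testzin.", return_tensors="pt")
speech = model.generate_speech(inputs["input_ids"], speaker_embedding, vocoder=vocoder)
sf.write("speech.wav", speech.numpy(), samplerate=16000)
```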
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.5194 | 4.3 | 1000 | 0.4806 |
| 0.494 | 8.61 | 2000 | 0.4670 |
| 0.4929 | 12.91 | 3000 | 0.4642 |
| 0.4914 | 17.21 | 4000 | 0.4612 |
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
hopkins/eng-fra-simcse.near2.4440
|
hopkins
| 2023-07-05T21:32:35Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-07-05T21:12:42Z |
---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: eng-fra-simcse.near2.4440
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eng-fra-simcse.near2.4440
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1372
- Bleu: 33.0232
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
jeffboudier/vision-transformers-spain-or-italy-fan
|
jeffboudier
| 2023-07-05T21:29:05Z | 296 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-03-02T23:29:05Z |
---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: vision-transformers--spain-or-italy-fan
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.5666666626930237
---
# vision-transformers--spain-or-italy-fan
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
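A quick way to try it locally (hedged sketch; the image path is a placeholder):
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="jeffboudier/vision-transformers-spain-or-italy-fan")
print(classifier("fan_photo.jpg"))  # hypothetical local image of a soccer fan
```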
## Example Images
#### italy soccer fan

#### spain soccer fan

|
cleandata/whisper-small-dv
|
cleandata
| 2023-07-05T21:27:43Z | 79 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"dv",
"dataset:mozilla-foundation/common_voice_13_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-07-05T20:25:03Z |
---
language:
- dv
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_13_0
metrics:
- wer
model-index:
- name: Whisper Small Dv - local
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 13
type: mozilla-foundation/common_voice_13_0
config: dv
split: test
args: dv
metrics:
- name: Wer
type: wer
value: 13.245470668011267
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Dv
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 13 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1680
- Wer Ortho: 62.1074
- Wer: 13.2455
## Model description
More information needed
## Intended uses & limitations
More information needed
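As a rough illustration (not from the card), transcription via the ASR pipeline:
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="cleandata/whisper-small-dv")
print(asr("sample_dhivehi.wav")["text"])  # hypothetical 16 kHz audio file in Dhivehi
```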
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 50
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:-------:|
| 0.1233 | 1.63 | 500 | 0.1680 | 62.1074 | 13.2455 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
KevinQuijano/model
|
KevinQuijano
| 2023-07-05T21:12:27Z | 1 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"base_model:CompVis/stable-diffusion-v1-4",
"base_model:finetune:CompVis/stable-diffusion-v1-4",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-07-05T14:32:19Z |
---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-4
instance_prompt: a photo of sks dog
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - KevinQuijano/model
This is a dreambooth model derived from CompVis/stable-diffusion-v1-4. The weights were trained on a photo of sks dog using [DreamBooth](https://dreambooth.github.io/).
You can find some example images in the following.
DreamBooth for the text encoder was enabled: False.
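A minimal generation sketch with diffusers, using the instance prompt from the card metadata (precision and prompt wording are assumptions):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("KevinQuijano/model", torch_dtype=torch.float16).to("cuda")
image = pipe("a photo of sks dog in a bucket", num_inference_steps=50, guidance_scale=7.5).images[0]
image.save("sks_dog.png")
```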
|
joydragon/Reinforce-Pixelcopter-PLE-v2
|
joydragon
| 2023-07-05T20:50:19Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-05T20:50:15Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter-PLE-v2
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 33.00 +/- 28.73
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
choward/csv
|
choward
| 2023-07-05T20:46:13Z | 0 | 0 | null |
[
"text-generation",
"region:us"
] |
text-generation
| 2023-07-05T20:42:22Z |
---
pipeline_tag: text-generation
---
|
Gaborandi/MedBERT-breastcancer
|
Gaborandi
| 2023-07-05T20:41:38Z | 54 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"bert",
"fill-mask",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-12-31T18:51:41Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: MedBERT-breastcancer
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MedBERT-breastcancer
This model is a fine-tuned version of [Charangan/MedBERT](https://huggingface.co/Charangan/MedBERT) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9742
## Model description
More information needed
## Intended uses & limitations
More information needed
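An illustrative masked-language-modelling call (the example sentence is hypothetical):
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="Gaborandi/MedBERT-breastcancer")
for pred in fill_mask("The patient was diagnosed with [MASK] cancer."):
    print(pred["token_str"], round(pred["score"], 3))
```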
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| No log | 1.0 | 12263 | 1.0881 |
| No log | 2.0 | 24526 | 1.0259 |
| No log | 3.0 | 36789 | 0.9937 |
| No log | 4.0 | 49052 | 0.9831 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.8.0
- Datasets 2.2.2
- Tokenizers 0.13.2
|
egarciamartin/poca-SoccerTwos
|
egarciamartin
| 2023-07-05T20:40:50Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SoccerTwos",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] |
reinforcement-learning
| 2023-07-05T20:40:07Z |
---
library_name: ml-agents
tags:
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: egarciamartin/poca-SoccerTwos
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
dhiruHF/falcon7b-FT-DocQA-v2
|
dhiruHF
| 2023-07-05T20:39:12Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-05T20:39:10Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.4.0.dev0
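For context, the flags above map onto `BitsAndBytesConfig` roughly as follows when loading the adapter; the base model id (`tiiuae/falcon-7b`) is an assumption, since the card does not state it:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

base = AutoModelForCausalLM.from_pretrained(
    "tiiuae/falcon-7b",              # assumed base model
    quantization_config=bnb_config,
    trust_remote_code=True,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-7b")
model = PeftModel.from_pretrained(base, "dhiruHF/falcon7b-FT-DocQA-v2")
```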
|
SaffalPoosh/falcon_7B_instruct_safetensors
|
SaffalPoosh
| 2023-07-05T20:27:23Z | 16 | 0 |
transformers
|
[
"transformers",
"safetensors",
"RefinedWebModel",
"text-generation",
"custom_code",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-05T20:13:30Z |
Converted to safetensors using the oobabooga conversion script, in order to test the TGI (Text Generation Inference) LLM inference engine.
|
durdana/alpaca7B-lora
|
durdana
| 2023-07-05T20:25:35Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-05T20:25:31Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
cjohlmacher/q-FrozenLake-v1-4x4-noSlippery
|
cjohlmacher
| 2023-07-05T20:20:56Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-05T20:20:53Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="cjohlmacher/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
jcm-art/hf_image_classification_tuning_pipeline
|
jcm-art
| 2023-07-05T20:14:07Z | 191 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:food101",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-07-05T19:35:08Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- food101
metrics:
- accuracy
model-index:
- name: hf_image_classification_tuning_pipeline
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: food101
type: food101
config: default
split: train[:5000]
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.903
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hf_image_classification_tuning_pipeline
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the food101 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5764
- Accuracy: 0.903
## Model description
More information needed
## Intended uses & limitations
More information needed
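Pending proper documentation, an illustrative single-image inference sketch (the input image is a placeholder):
```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

processor = AutoImageProcessor.from_pretrained("jcm-art/hf_image_classification_tuning_pipeline")
model = AutoModelForImageClassification.from_pretrained("jcm-art/hf_image_classification_tuning_pipeline")

image = Image.open("dish.jpg")  # hypothetical food photo
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
predicted = logits.argmax(-1).item()
print(model.config.id2label[predicted])
```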
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.7113 | 0.99 | 62 | 2.4840 | 0.849 |
| 1.8024 | 2.0 | 125 | 1.7298 | 0.891 |
| 1.5532 | 2.98 | 186 | 1.5764 | 0.903 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1
- Datasets 2.13.1
- Tokenizers 0.13.3
|
jordyvl/EElayoutlmv3_jordyvl_rvl_cdip_100_examples_per_class_2023-07-05_weighted
|
jordyvl
| 2023-07-05T20:02:58Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"layoutlmv3",
"text-classification",
"generated_from_trainer",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-05T17:53:13Z |
---
license: cc-by-nc-sa-4.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: EElayoutlmv3_jordyvl_rvl_cdip_100_examples_per_class_2023-07-05_weighted
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# EElayoutlmv3_jordyvl_rvl_cdip_100_examples_per_class_2023-07-05_weighted
This model is a fine-tuned version of [microsoft/layoutlmv3-base](https://huggingface.co/microsoft/layoutlmv3-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0783
- Accuracy: 0.71
- Exit 0 Accuracy: 0.115
- Exit 1 Accuracy: 0.1575
- Exit 2 Accuracy: 0.185
- Exit 3 Accuracy: 0.0875
- Exit 4 Accuracy: 0.0625
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 12
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 24
- total_train_batch_size: 288
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 60
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Exit 0 Accuracy | Exit 1 Accuracy | Exit 2 Accuracy | Exit 3 Accuracy | Exit 4 Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------------:|:---------------:|:---------------:|:---------------:|:---------------:|
| No log | 0.72 | 2 | 2.7602 | 0.1125 | 0.0925 | 0.0675 | 0.0875 | 0.0625 | 0.0625 |
| No log | 1.72 | 4 | 2.7309 | 0.115 | 0.1175 | 0.0675 | 0.1075 | 0.0625 | 0.0625 |
| No log | 2.72 | 6 | 2.6967 | 0.1325 | 0.095 | 0.06 | 0.1175 | 0.0625 | 0.0625 |
| No log | 3.72 | 8 | 2.6631 | 0.17 | 0.085 | 0.0575 | 0.1275 | 0.0625 | 0.0625 |
| No log | 4.72 | 10 | 2.6242 | 0.205 | 0.085 | 0.0575 | 0.1225 | 0.0625 | 0.0625 |
| No log | 5.72 | 12 | 2.5736 | 0.2175 | 0.0875 | 0.0825 | 0.12 | 0.0625 | 0.0625 |
| No log | 6.72 | 14 | 2.5410 | 0.215 | 0.09 | 0.08 | 0.12 | 0.0625 | 0.0625 |
| No log | 7.72 | 16 | 2.5229 | 0.2325 | 0.1 | 0.0925 | 0.13 | 0.0625 | 0.0625 |
| No log | 8.72 | 18 | 2.4841 | 0.2525 | 0.1 | 0.1 | 0.1325 | 0.0625 | 0.0625 |
| No log | 9.72 | 20 | 2.4382 | 0.29 | 0.1 | 0.1025 | 0.1325 | 0.0625 | 0.0625 |
| No log | 10.72 | 22 | 2.3823 | 0.3 | 0.1 | 0.1275 | 0.1325 | 0.0625 | 0.0625 |
| No log | 11.72 | 24 | 2.3389 | 0.3275 | 0.1 | 0.1175 | 0.1225 | 0.0625 | 0.0625 |
| No log | 12.72 | 26 | 2.3002 | 0.35 | 0.0975 | 0.12 | 0.1225 | 0.0625 | 0.0625 |
| No log | 13.72 | 28 | 2.2421 | 0.36 | 0.0975 | 0.125 | 0.1275 | 0.0625 | 0.0625 |
| No log | 14.72 | 30 | 2.2026 | 0.3575 | 0.1025 | 0.13 | 0.125 | 0.0625 | 0.0625 |
| No log | 15.72 | 32 | 2.1712 | 0.375 | 0.105 | 0.1375 | 0.125 | 0.0625 | 0.0625 |
| No log | 16.72 | 34 | 2.0999 | 0.4075 | 0.1 | 0.145 | 0.125 | 0.0625 | 0.0625 |
| No log | 17.72 | 36 | 2.0414 | 0.4225 | 0.1025 | 0.145 | 0.1275 | 0.0625 | 0.0625 |
| No log | 18.72 | 38 | 1.9981 | 0.4375 | 0.0975 | 0.1425 | 0.13 | 0.0625 | 0.0625 |
| No log | 19.72 | 40 | 1.9369 | 0.4575 | 0.1025 | 0.14 | 0.1425 | 0.0625 | 0.0625 |
| No log | 20.72 | 42 | 1.8903 | 0.4975 | 0.1025 | 0.14 | 0.145 | 0.0625 | 0.0625 |
| No log | 21.72 | 44 | 1.8242 | 0.525 | 0.1025 | 0.1425 | 0.15 | 0.0625 | 0.0625 |
| No log | 22.72 | 46 | 1.7520 | 0.5325 | 0.11 | 0.1475 | 0.1475 | 0.0625 | 0.0625 |
| No log | 23.72 | 48 | 1.7203 | 0.5525 | 0.1125 | 0.1475 | 0.1525 | 0.0625 | 0.0625 |
| No log | 24.72 | 50 | 1.6753 | 0.565 | 0.1125 | 0.1475 | 0.155 | 0.0625 | 0.0625 |
| No log | 25.72 | 52 | 1.6245 | 0.575 | 0.1125 | 0.1475 | 0.155 | 0.0625 | 0.0625 |
| No log | 26.72 | 54 | 1.5832 | 0.61 | 0.11 | 0.15 | 0.1525 | 0.0625 | 0.0625 |
| No log | 27.72 | 56 | 1.5404 | 0.61 | 0.11 | 0.1475 | 0.155 | 0.0625 | 0.0625 |
| No log | 28.72 | 58 | 1.4958 | 0.6125 | 0.11 | 0.1475 | 0.1575 | 0.0625 | 0.0625 |
| No log | 29.72 | 60 | 1.4613 | 0.6325 | 0.11 | 0.1475 | 0.1575 | 0.0625 | 0.0625 |
| No log | 30.72 | 62 | 1.4479 | 0.63 | 0.11 | 0.1525 | 0.16 | 0.0625 | 0.0625 |
| No log | 31.72 | 64 | 1.4101 | 0.64 | 0.1125 | 0.1525 | 0.165 | 0.0625 | 0.0625 |
| No log | 32.72 | 66 | 1.3699 | 0.655 | 0.1125 | 0.1525 | 0.1675 | 0.0625 | 0.0625 |
| No log | 33.72 | 68 | 1.3427 | 0.6725 | 0.115 | 0.1525 | 0.165 | 0.0625 | 0.0625 |
| No log | 34.72 | 70 | 1.3161 | 0.6825 | 0.115 | 0.1525 | 0.1625 | 0.0625 | 0.0625 |
| No log | 35.72 | 72 | 1.2896 | 0.7025 | 0.115 | 0.1525 | 0.1675 | 0.0625 | 0.0625 |
| No log | 36.72 | 74 | 1.2720 | 0.705 | 0.11 | 0.1525 | 0.185 | 0.0625 | 0.0625 |
| No log | 37.72 | 76 | 1.2471 | 0.71 | 0.11 | 0.1525 | 0.1775 | 0.0625 | 0.0625 |
| No log | 38.72 | 78 | 1.2307 | 0.71 | 0.11 | 0.155 | 0.1775 | 0.0625 | 0.0625 |
| No log | 39.72 | 80 | 1.2174 | 0.7175 | 0.1125 | 0.155 | 0.1825 | 0.0625 | 0.0625 |
| No log | 40.72 | 82 | 1.1991 | 0.705 | 0.1125 | 0.1525 | 0.1775 | 0.0625 | 0.0625 |
| No log | 41.72 | 84 | 1.1867 | 0.71 | 0.1175 | 0.1525 | 0.18 | 0.065 | 0.0625 |
| No log | 42.72 | 86 | 1.1764 | 0.7025 | 0.115 | 0.1525 | 0.18 | 0.0675 | 0.0625 |
| No log | 43.72 | 88 | 1.1601 | 0.715 | 0.115 | 0.1525 | 0.1825 | 0.0725 | 0.0625 |
| No log | 44.72 | 90 | 1.1410 | 0.7175 | 0.115 | 0.1525 | 0.18 | 0.075 | 0.0625 |
| No log | 45.72 | 92 | 1.1408 | 0.71 | 0.115 | 0.155 | 0.1825 | 0.075 | 0.0625 |
| No log | 46.72 | 94 | 1.1443 | 0.7075 | 0.115 | 0.155 | 0.1825 | 0.0775 | 0.0625 |
| No log | 47.72 | 96 | 1.1364 | 0.705 | 0.115 | 0.155 | 0.1775 | 0.0825 | 0.0625 |
| No log | 48.72 | 98 | 1.1251 | 0.71 | 0.115 | 0.155 | 0.175 | 0.085 | 0.0625 |
| No log | 49.72 | 100 | 1.1113 | 0.7175 | 0.115 | 0.155 | 0.1775 | 0.085 | 0.0625 |
| No log | 50.72 | 102 | 1.1040 | 0.7175 | 0.115 | 0.155 | 0.18 | 0.0875 | 0.0625 |
| No log | 51.72 | 104 | 1.0972 | 0.715 | 0.115 | 0.155 | 0.18 | 0.0875 | 0.0625 |
| No log | 52.72 | 106 | 1.0938 | 0.7175 | 0.115 | 0.1575 | 0.1825 | 0.0875 | 0.0625 |
| No log | 53.72 | 108 | 1.0931 | 0.71 | 0.115 | 0.1575 | 0.185 | 0.0875 | 0.0625 |
| No log | 54.72 | 110 | 1.0887 | 0.7075 | 0.115 | 0.1575 | 0.185 | 0.0875 | 0.0625 |
| No log | 55.72 | 112 | 1.0865 | 0.7125 | 0.115 | 0.1575 | 0.1875 | 0.0875 | 0.0625 |
| No log | 56.72 | 114 | 1.0828 | 0.7125 | 0.115 | 0.1575 | 0.1875 | 0.0875 | 0.0625 |
| No log | 57.72 | 116 | 1.0801 | 0.7075 | 0.115 | 0.1575 | 0.1875 | 0.0875 | 0.0625 |
| No log | 58.72 | 118 | 1.0786 | 0.7125 | 0.115 | 0.1575 | 0.1875 | 0.0875 | 0.0625 |
| No log | 59.72 | 120 | 1.0783 | 0.71 | 0.115 | 0.1575 | 0.185 | 0.0875 | 0.0625 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1.post200
- Datasets 2.9.0
- Tokenizers 0.13.2
|
pszemraj/gpt2-medium-vaguely-human-dialogue
|
pszemraj
| 2023-07-05T19:57:49Z | 15 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"gpt2",
"text-generation",
"gpt",
"en",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language:
- en
tags:
- text-generation
- gpt2
- gpt
license: mit
widget:
- text: |+
Do you like my new haircut?
person beta:
example_title: haircut
- text: |+
I love to learn new things.. are you willing to teach me something?
person beta:
example_title: teaching
- text: |+
What's your favorite animal? Mine is the dog?
person beta:
example_title: favorite
- text: |+
how much does it cost?
person beta:
example_title: money
inference:
parameters:
min_length: 2
max_length: 64
length_penalty: 0.6
no_repeat_ngram_size: 3
do_sample: true
top_p: 0.85
top_k: 10
repetition_penalty: 2.1
pipeline_tag: text-generation
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pszemraj/gpt2-medium-vaguely-human-dialogue
This model is a fine-tuned version of [gpt2-medium](https://huggingface.co/gpt2-medium) on a parsed version of Wizard of Wikipedia. Because the batch size was so large, it learned a general understanding of words that make sense together but does not specifically respond to anything - sort of like an alien learning to imitate human words to convince others that it is human.
It achieves the following results on the evaluation set:
- Loss: 4.3281
## Model description
- a decent example of what happens when your batch size is too large and the global optimum does not reflect specific prompts / use cases.
## Intended uses & limitations
- there are no intended uses
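If you still want to poke at it, a minimal sampling sketch reusing the inference parameters from the widget configuration above:
```python
from transformers import pipeline

generator = pipeline("text-generation", model="pszemraj/gpt2-medium-vaguely-human-dialogue")
prompt = "Do you like my new haircut?\nperson beta:\n"
out = generator(
    prompt,
    max_length=64,
    do_sample=True,
    top_p=0.85,
    top_k=10,
    no_repeat_ngram_size=3,
    repetition_penalty=2.1,
)
print(out[0]["generated_text"])
```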
## Training and evaluation data
- a parsed version of the wizard of Wikipedia dataset
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 34.991 | 1.0 | 837 | 14.8359 |
| 12.2881 | 2.0 | 1674 | 9.375 |
| 8.5071 | 3.0 | 2511 | 7.2148 |
| 7.6031 | 4.0 | 3348 | 6.1758 |
| 6.4808 | 5.0 | 4185 | 5.5820 |
| 5.8562 | 6.0 | 5022 | 5.0977 |
| 5.6094 | 7.0 | 5859 | 4.8203 |
| 5.2591 | 8.0 | 6696 | 4.5977 |
| 5.0031 | 9.0 | 7533 | 4.4219 |
| 4.8837 | 10.0 | 8370 | 4.3281 |
### Framework versions
- Transformers 4.16.1
- Pytorch 1.10.0+cu111
- Tokenizers 0.11.0
|
Damirchik/ppo-LunarLander-v2
|
Damirchik
| 2023-07-05T19:55:14Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-05T19:54:55Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 251.94 +/- 25.23
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
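A minimal sketch of loading and evaluating the checkpoint (the filename inside the repo is an assumption; it usually matches the repo name):
```python
import gymnasium as gym  # or `import gym`, depending on your SB3 version
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

checkpoint = load_from_hub(repo_id="Damirchik/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```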
|
AWolters/ByT5_DutchSpellingNormalization
|
AWolters
| 2023-07-05T19:53:42Z | 62 | 0 |
transformers
|
[
"transformers",
"tf",
"t5",
"text2text-generation",
"text2text generation",
"spelling normalization",
"19th-century Dutch",
"nl",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-07-01T16:11:47Z |
---
language:
- nl
tags:
- text2text generation
- spelling normalization
- 19th-century Dutch
license: apache-2.0
---
# 19th Century Dutch Spelling Normalization
This repository contains a pretrained and finetuned model of the original __google/ByT5-small__.
This model has been pretrained and finetuned for the task of 19th-century Dutch spelling normalization.
We first pretrained the model with 2 million sentences from Dutch historical novels.
Afterward, we finetuned the model with a 10k dataset consisting of 19th-century Dutch sentences;
these sentences were automatically annotated by a rule-based system built for 19th-century Dutch spelling normalization (van Cranenburgh and van Noord, 2022).
The finetuned model is only available in the TensorFlow format, but can be converted for use in a PyTorch environment.
The pretrained-only weights are available in PyTorch format in the directory __Pretrained_ByT5__; note that this model has to be finetuned before use.
The train and validation sets used for finetuning are available in the main repository.
For further information about the model, please see the [GitHub](https://github.com/Awolters123/Master-Thesis) repository.
## How to use:
```python
from transformers import AutoTokenizer, TFT5ForConditionalGeneration
tokenizer = AutoTokenizer.from_pretrained('AWolters/ByT5_DutchSpellingNormalization')
model = TFT5ForConditionalGeneration.from_pretrained('AWolters/ByT5_DutchSpellingNormalization')
text = 'De menschen waren aan het werk.'
tokenized = tokenizer(text, return_tensors='tf')
prediction = model.generate(input_ids=tokenized['input_ids'],
attention_mask=tokenized['attention_mask'],
max_new_tokens=100)
print(tokenizer.decode(prediction[0], skip_special_tokens=True))
```
## Setup:
The model has been finetuned with the following hyperparameter values:
_Learning rate_: 5e-5
_Batch size_: 32
_Optimizer_: AdamW
_Epochs_: 30, with early stopping
To further finetune the model, use the __T5Trainer.py__ script.
|
khushpreet/eyedisease
|
khushpreet
| 2023-07-05T19:51:05Z | 0 | 0 |
keras
|
[
"keras",
"tf-keras",
"medical",
"image-classification",
"arxiv:1910.09700",
"region:us"
] |
image-classification
| 2023-07-05T19:48:02Z |
---
metrics:
- accuracy
library_name: keras
pipeline_tag: image-classification
tags:
- medical
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
sebasvaron/my_awesome_model
|
sebasvaron
| 2023-07-05T19:50:19Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-05T19:45:00Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: my_awesome_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
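For illustration, a minimal sentiment-classification call (since the card reports no evaluation results, the returned labels may still be the generic LABEL_0/LABEL_1):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="sebasvaron/my_awesome_model")
print(classifier("This movie was surprisingly good."))
```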
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
rsilg/dqn-SpaceInvadersNoFrameskip-v4
|
rsilg
| 2023-07-05T19:40:58Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-05T19:40:29Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 541.50 +/- 118.85
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga rsilg -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga rsilg -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga rsilg
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
joydragon/Reinforce-Pixelcopter-PLE-v0
|
joydragon
| 2023-07-05T19:14:01Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-05T18:30:19Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 20.40 +/- 19.70
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
maubers/emily_yeppers
|
maubers
| 2023-07-05T19:08:47Z | 13 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt_neo",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-05T17:28:43Z |
## Overview
This repository contains Emily Yeppers, a GPT-Neo-based bot who likes to talk about very inappropriate things and how vital they are to the existence of our species (technically the truth). The bot streams new content from specified subreddits and responds when certain target phrases are detected in comments and submissions, or when it is mentioned or directly replied to.
She is designed to function as a Reddit bot. See the Github page for more information. She WILL generate inappropriate content, as she was trained on comments posted in inappropriate subreddits.
## Setup and Installation (for Reddit)
See https://github.com/maubers/emily_yeppers
|
sd-concepts-library/ahx-beta-4a5b307
|
sd-concepts-library
| 2023-07-05T18:57:32Z | 0 | 0 | null |
[
"license:mit",
"region:us"
] | null | 2023-07-05T18:57:29Z |
---
license: mit
---
### ahx-beta-4a5b307 on Stable Diffusion
This is the `<ahx-beta-4a5b307>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as a `style`:









|