| modelId (string, length 5 to 139) | author (string, length 2 to 42) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-09-05 12:28:30) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 539 classes) | tags (list, length 1 to 4.05k) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-09-05 12:28:13) | card (string, length 11 to 1.01M) |
|---|---|---|---|---|---|---|---|---|---|
susnato/distilhubert-finetuned-gtzan
|
susnato
| 2023-07-06T12:40:52Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"hubert",
"audio-classification",
"generated_from_trainer",
"dataset:marsyas/gtzan",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2023-07-01T16:32:36Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
metrics:
- accuracy
model-index:
- name: distilhubert-finetuned-gtzan
results:
- task:
name: Audio Classification
type: audio-classification
dataset:
name: GTZAN
type: marsyas/gtzan
config: all
split: train
args: all
metrics:
- name: Accuracy
type: accuracy
value: 0.87
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilhubert-finetuned-gtzan
This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the GTZAN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5542
- Accuracy: 0.87
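As a quick usage reference, the model can be loaded with the `audio-classification` pipeline. This is a minimal sketch only; the audio path is a placeholder and the repo id is taken from this card.
```python
from transformers import pipeline

# Load the fine-tuned checkpoint from the Hub.
classifier = pipeline("audio-classification", model="susnato/distilhubert-finetuned-gtzan")

# "song.wav" is a placeholder path for any music clip similar to GTZAN audio.
predictions = classifier("song.wav")
print(predictions)  # list of {"label": ..., "score": ...} dicts, one per genre
```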
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 12
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.0241 | 1.0 | 112 | 1.9155 | 0.4 |
| 1.5443 | 2.0 | 225 | 1.2937 | 0.65 |
| 1.1134 | 3.0 | 337 | 0.9665 | 0.71 |
| 0.7215 | 4.0 | 450 | 0.8201 | 0.74 |
| 0.4679 | 5.0 | 562 | 0.7616 | 0.75 |
| 0.3626 | 6.0 | 675 | 0.5217 | 0.85 |
| 0.1775 | 7.0 | 787 | 0.6748 | 0.81 |
| 0.1642 | 8.0 | 900 | 0.5287 | 0.86 |
| 0.0772 | 9.0 | 1012 | 0.5632 | 0.84 |
| 0.0478 | 10.0 | 1125 | 0.5576 | 0.85 |
| 0.0662 | 11.0 | 1237 | 0.5455 | 0.88 |
| 0.0446 | 11.95 | 1344 | 0.5542 | 0.87 |
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 1.13.1
- Datasets 2.8.0
- Tokenizers 0.13.2
|
s8sesche/unsuitablePreTrainedModel_finetuned_model_petOrNot_improved
|
s8sesche
| 2023-07-06T12:36:15Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-06T12:31:32Z |
---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: unsuitablePreTrainedModel_finetuned_model_petOrNot_improved
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# unsuitablePreTrainedModel_finetuned_model_petOrNot_improved
This model is a fine-tuned version of [shahrukhx01/question-vs-statement-classifier](https://huggingface.co/shahrukhx01/question-vs-statement-classifier) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2258
- Accuracy: 0.9667
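As a quick usage reference, a minimal sketch with the `text-classification` pipeline; the example sentence is invented and the label names come from the checkpoint's config.
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="s8sesche/unsuitablePreTrainedModel_finetuned_model_petOrNot_improved",
)

# Hypothetical input; the card does not document the label meanings.
print(classifier("I adopted a small dog from the shelter last week."))
```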
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 88 | 0.3550 | 0.8333 |
| No log | 2.0 | 176 | 0.2258 | 0.9667 |
| No log | 3.0 | 264 | 0.2445 | 0.9667 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
oliverguhr/spelling-correction-german-base
|
oliverguhr
| 2023-07-06T12:27:25Z | 872 | 15 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-05-24T07:11:52Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bart-base-spelling-de
results: []
widget:
- text: "correct: ein dransformer isd ein mthode mit der ein compuder eine volge von zeichn übersetz"
example_title: "1"
- text: "correct: Dresten ist di Landeshaubtstadt des Freistaats Saksens und die zweid größte stadt des landel"
example_title: "2"
---
## Model description
This is a proof-of-concept spelling correction model for German. The model should fix your typos and punctuation.
If you would like to participate in the development or run your own experiments, have a look at [the GitHub repo](https://github.com/oliverguhr/spelling).
Model Input:
> ein dransformer isd ein mthode mit der ein compuder eine volge von zeichn in eine andrere folge von zeichen übersetzn kann dies kan zb genutzt werdne um text von einer spracge in eine andrere zu übersetzen
Model Output:
> Ein Transformer ist eine Methode, mit der ein Computer eine Folge von Zeichen in eine andere Folge von Zeichen übersetzen kann dies kann z.B. genutzt werden, um Texte von einer Sprache in eine andere zu übersetzen.
## Intended uses & limitations
This is a work in progress; be aware that the model can produce artefacts.
You can test the model using the pipeline interface:
```python
from transformers import pipeline
fix_spelling = pipeline("text2text-generation",model="oliverguhr/spelling-correction-german-base")
print(fix_spelling("correct: das idst ein neuZr test",max_length=256))
```
|
Khushnur/t5-small-end2end-questions-generation_test
|
Khushnur
| 2023-07-06T12:22:17Z | 164 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:eli5_cleaned_datav3_60k",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-07-06T12:16:28Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- eli5_cleaned_datav3_60k
model-index:
- name: t5-small-end2end-questions-generation_test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-end2end-questions-generation_test
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the eli5_cleaned_datav3_60k dataset.
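As a quick usage reference, a minimal generation sketch with the `text2text-generation` pipeline. The plain-context input format and the generation settings are assumptions; the card does not document the expected prompt format.
```python
from transformers import pipeline

generator = pipeline(
    "text2text-generation",
    model="Khushnur/t5-small-end2end-questions-generation_test",
)

context = "The Eiffel Tower was completed in 1889 and is located in Paris."
# max_length and num_beams are illustrative values only.
print(generator(context, max_length=64, num_beams=4))
```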
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 32
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
iulusoy/de_test_pipeline
|
iulusoy
| 2023-07-06T12:18:19Z | 3 | 0 |
spacy
|
[
"spacy",
"de",
"license:mit",
"region:us"
] | null | 2023-03-24T09:52:13Z |
---
tags:
- spacy
language:
- de
license: mit
model-index:
- name: de_test_pipeline
results: []
---
A test pipeline for ModelManager testing purposes
| Feature | Description |
| --- | --- |
| **Name** | `de_test_pipeline` |
| **Version** | `0.1.0` |
| **spaCy** | `>=3.5.3,<3.6.0` |
| **Default Pipeline** | `tok2vec`, `spancat` |
| **Components** | `tok2vec`, `spancat` |
| **Vectors** | 0 keys, 0 unique vectors (0 dimensions) |
| **Sources** | n/a |
| **License** | `MIT` |
| **Author** | [Liam Keegan](https://github.com/ssciwr/moralization) |
### Label Scheme
<details>
<summary>View label scheme (24 labels for 1 components)</summary>
| Component | Labels |
| --- | --- |
| **`spancat`** | `Moralisierung explizit`, `Care`, `Institution`, `Forderer:in`, `Keine Moralisierung`, `Darstellung`, `Individuum`, `Neutral`, `Fairness`, `soziale Gruppe`, `Cheating`, `Appell`, `explizit`, `Moralisierung`, `Adresassat:in`, `Own Group`, `Moralisierung interpretativ`, `Benefizient:in`, `Other Group`, `Menschen`, `Oppression`, `Liberty`, `Harm`, `Kein Bezug` |
</details>
### Accuracy
| Type | Score |
| --- | --- |
| `SPANS_SC_F` | 0.04 |
| `SPANS_SC_P` | 0.02 |
| `SPANS_SC_R` | 36.36 |
| `TOK2VEC_LOSS` | 50495.68 |
| `SPANCAT_LOSS` | 1149148.34 |
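For completeness, a minimal loading sketch. It assumes the packaged pipeline wheel from this repository has been installed (spaCy pipelines on the Hub ship as installable packages) and that the `spancat` component writes to spaCy's default `sc` spans key.
```python
import spacy

# Assumes the pipeline package from this repo is installed (see the repo's files
# for the wheel); the package name matches the feature table above.
nlp = spacy.load("de_test_pipeline")

doc = nlp("Das ist ein Beispielsatz für die Annotation.")
# spancat predictions are stored in doc.spans under the spans key ("sc" by default).
for span in doc.spans.get("sc", []):
    print(span.text, span.label_)
```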
|
Vtmpas/q-FrozenLake-v1-4x4-noSlippery
|
Vtmpas
| 2023-07-06T12:10:55Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-06T12:10:53Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gymnasium as gym  # or `import gym`, depending on your setup

# `load_from_hub` is the pickle-loading helper defined in the Deep RL Course notebooks.
model = load_from_hub(repo_id="Vtmpas/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
Sekiraw/cart-pole
|
Sekiraw
| 2023-07-06T12:05:18Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-06T12:01:38Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: cart-pole
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
maxkskhor/ppo-Pyramids
|
maxkskhor
| 2023-07-06T11:59:01Z | 2 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] |
reinforcement-learning
| 2023-07-06T11:58:54Z |
---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: maxkskhor/ppo-Pyramids
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
NasimB/gpt2-concat-aochildes-len-16k-punc-dot
|
NasimB
| 2023-07-06T11:58:51Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-06T10:05:29Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: gpt2-concat-aochildes-len-16k-punc-dot
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-concat-aochildes-len-16k-punc-dot
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1868
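As a quick usage reference, a minimal sampling sketch with the `text-generation` pipeline; the prompt and generation settings are arbitrary.
```python
from transformers import pipeline

generator = pipeline("text-generation", model="NasimB/gpt2-concat-aochildes-len-16k-punc-dot")

# Prompt and length are illustrative only.
print(generator("Once upon a time", max_new_tokens=40, do_sample=True)[0]["generated_text"])
```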
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.7347 | 0.29 | 500 | 5.6594 |
| 5.3783 | 0.59 | 1000 | 5.2121 |
| 5.0252 | 0.88 | 1500 | 4.9610 |
| 4.7546 | 1.18 | 2000 | 4.8238 |
| 4.5897 | 1.47 | 2500 | 4.6965 |
| 4.4789 | 1.77 | 3000 | 4.5879 |
| 4.3473 | 2.06 | 3500 | 4.5156 |
| 4.1614 | 2.35 | 4000 | 4.4620 |
| 4.1298 | 2.65 | 4500 | 4.4035 |
| 4.0926 | 2.94 | 5000 | 4.3498 |
| 3.873 | 3.24 | 5500 | 4.3486 |
| 3.8259 | 3.53 | 6000 | 4.3189 |
| 3.809 | 3.83 | 6500 | 4.2819 |
| 3.6844 | 4.12 | 7000 | 4.2885 |
| 3.5391 | 4.41 | 7500 | 4.2779 |
| 3.5315 | 4.71 | 8000 | 4.2655 |
| 3.5178 | 5.0 | 8500 | 4.2534 |
| 3.3396 | 5.3 | 9000 | 4.2694 |
| 3.3435 | 5.59 | 9500 | 4.2672 |
| 3.3344 | 5.89 | 10000 | 4.2660 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
Bugsys0302/undressav05
|
Bugsys0302
| 2023-07-06T11:56:51Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-06T11:52:15Z |
---
license: creativeml-openrail-m
---
|
anujsahani01/finetuned_AI4Bharat_en_mr
|
anujsahani01
| 2023-07-06T11:55:30Z | 108 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"mbart",
"text2text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-07-06T01:54:41Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: finetuned_AI4Bharat_en_mr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_AI4Bharat_en_mr
This model is a fine-tuned version of [ai4bharat/indic-bert](https://huggingface.co/ai4bharat/indic-bert) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- training_steps: 8000
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
HeshamMamdouh/arabart-v2-sum-fine-tuned
|
HeshamMamdouh
| 2023-07-06T11:48:04Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"mbart",
"text2text-generation",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-07-06T11:47:33Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: arabart-v2-sum-fine-tuned
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# arabart-v2-sum-fine-tuned
This model is a fine-tuned version of [abdalrahmanshahrour/AraBART-summ](https://huggingface.co/abdalrahmanshahrour/AraBART-summ) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 2.0702
- Validation Loss: 2.6760
- Epoch: 19
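Since the repository carries the `tf` tag, a hedged inference sketch with the TensorFlow classes is shown below; the Arabic input text and generation settings are placeholders.
```python
from transformers import AutoTokenizer, TFAutoModelForSeq2SeqLM

model_id = "HeshamMamdouh/arabart-v2-sum-fine-tuned"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForSeq2SeqLM.from_pretrained(model_id)

# Placeholder: replace with the Arabic text you want to summarize.
text = "..."
inputs = tokenizer(text, return_tensors="tf", truncation=True)
summary_ids = model.generate(**inputs, max_length=128, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```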
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 2.9033 | 2.5054 | 0 |
| 2.8269 | 2.5495 | 1 |
| 2.7380 | 2.5277 | 2 |
| 2.6357 | 2.5637 | 3 |
| 2.6693 | 2.5888 | 4 |
| 2.6560 | 2.5696 | 5 |
| 2.5431 | 2.5702 | 6 |
| 2.6064 | 2.5789 | 7 |
| 2.5252 | 2.5570 | 8 |
| 2.4553 | 2.6162 | 9 |
| 2.4159 | 2.5925 | 10 |
| 2.4123 | 2.6065 | 11 |
| 2.3180 | 2.6371 | 12 |
| 2.2778 | 2.6259 | 13 |
| 2.2570 | 2.6121 | 14 |
| 2.1909 | 2.6489 | 15 |
| 2.1705 | 2.6390 | 16 |
| 2.1008 | 2.6628 | 17 |
| 2.0684 | 2.6929 | 18 |
| 2.0702 | 2.6760 | 19 |
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.11.0
- Datasets 2.13.1
- Tokenizers 0.12.1
|
jordyvl/EElayoutlmv3_jordyvl_rvl_cdip_100_examples_per_class_2023-07-06_ent
|
jordyvl
| 2023-07-06T11:45:36Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"layoutlmv3",
"text-classification",
"generated_from_trainer",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-06T09:34:33Z |
---
license: cc-by-nc-sa-4.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: EElayoutlmv3_jordyvl_rvl_cdip_100_examples_per_class_2023-07-06_ent
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# EElayoutlmv3_jordyvl_rvl_cdip_100_examples_per_class_2023-07-06_ent
This model is a fine-tuned version of [microsoft/layoutlmv3-base](https://huggingface.co/microsoft/layoutlmv3-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3493
- Accuracy: 0.645
- Exit 0 Accuracy: 0.1125
- Exit 1 Accuracy: 0.155
- Exit 2 Accuracy: 0.3775
- Exit 3 Accuracy: 0.5225
- Exit 4 Accuracy: 0.5875
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 12
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 24
- total_train_batch_size: 288
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 60
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Exit 0 Accuracy | Exit 1 Accuracy | Exit 2 Accuracy | Exit 3 Accuracy | Exit 4 Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------------:|:---------------:|:---------------:|:---------------:|:---------------:|
| No log | 0.72 | 2 | 2.7604 | 0.1075 | 0.09 | 0.0675 | 0.1075 | 0.0625 | 0.0625 |
| No log | 1.72 | 4 | 2.7329 | 0.1125 | 0.0725 | 0.065 | 0.13 | 0.0625 | 0.0625 |
| No log | 2.72 | 6 | 2.6989 | 0.1325 | 0.08 | 0.06 | 0.1375 | 0.0625 | 0.0625 |
| No log | 3.72 | 8 | 2.6608 | 0.17 | 0.08 | 0.0575 | 0.1375 | 0.0625 | 0.0625 |
| No log | 4.72 | 10 | 2.6201 | 0.19 | 0.09 | 0.0525 | 0.1175 | 0.0625 | 0.0625 |
| No log | 5.72 | 12 | 2.5813 | 0.2175 | 0.095 | 0.0825 | 0.1125 | 0.0675 | 0.0625 |
| No log | 6.72 | 14 | 2.5503 | 0.215 | 0.0925 | 0.08 | 0.12 | 0.0825 | 0.0625 |
| No log | 7.72 | 16 | 2.5289 | 0.23 | 0.09 | 0.0925 | 0.15 | 0.1025 | 0.0625 |
| No log | 8.72 | 18 | 2.5344 | 0.245 | 0.0975 | 0.1 | 0.165 | 0.105 | 0.0675 |
| No log | 9.72 | 20 | 2.5533 | 0.265 | 0.1 | 0.0975 | 0.185 | 0.09 | 0.1025 |
| No log | 10.72 | 22 | 2.4567 | 0.29 | 0.0975 | 0.13 | 0.2 | 0.1 | 0.095 |
| No log | 11.72 | 24 | 2.3982 | 0.3 | 0.1 | 0.12 | 0.205 | 0.1125 | 0.09 |
| No log | 12.72 | 26 | 2.3722 | 0.3075 | 0.1025 | 0.1175 | 0.195 | 0.13 | 0.0825 |
| No log | 13.72 | 28 | 2.3546 | 0.31 | 0.105 | 0.1225 | 0.1825 | 0.1425 | 0.085 |
| No log | 14.72 | 30 | 2.3287 | 0.315 | 0.11 | 0.125 | 0.195 | 0.1775 | 0.095 |
| No log | 15.72 | 32 | 2.2970 | 0.32 | 0.1075 | 0.13 | 0.2175 | 0.2275 | 0.1 |
| No log | 16.72 | 34 | 2.2763 | 0.325 | 0.1075 | 0.14 | 0.225 | 0.2375 | 0.1075 |
| No log | 17.72 | 36 | 2.3456 | 0.3075 | 0.105 | 0.14 | 0.2375 | 0.18 | 0.1275 |
| No log | 18.72 | 38 | 2.3160 | 0.325 | 0.115 | 0.14 | 0.24 | 0.175 | 0.16 |
| No log | 19.72 | 40 | 2.2257 | 0.33 | 0.1225 | 0.14 | 0.245 | 0.225 | 0.17 |
| No log | 20.72 | 42 | 2.1769 | 0.355 | 0.125 | 0.1425 | 0.26 | 0.2725 | 0.135 |
| No log | 21.72 | 44 | 2.1449 | 0.355 | 0.125 | 0.14 | 0.2725 | 0.3125 | 0.1175 |
| No log | 22.72 | 46 | 2.1200 | 0.3675 | 0.125 | 0.1425 | 0.27 | 0.3125 | 0.115 |
| No log | 23.72 | 48 | 2.0995 | 0.3725 | 0.1225 | 0.1425 | 0.2625 | 0.31 | 0.115 |
| No log | 24.72 | 50 | 2.0769 | 0.3825 | 0.12 | 0.1425 | 0.2725 | 0.3375 | 0.1125 |
| No log | 25.72 | 52 | 2.0473 | 0.3975 | 0.115 | 0.14 | 0.285 | 0.335 | 0.1325 |
| No log | 26.72 | 54 | 2.0094 | 0.4075 | 0.115 | 0.14 | 0.2925 | 0.3075 | 0.1525 |
| No log | 27.72 | 56 | 1.9660 | 0.435 | 0.1175 | 0.14 | 0.29 | 0.2725 | 0.21 |
| No log | 28.72 | 58 | 1.9271 | 0.46 | 0.11 | 0.1425 | 0.3025 | 0.27 | 0.235 |
| No log | 29.72 | 60 | 1.8910 | 0.4825 | 0.11 | 0.145 | 0.305 | 0.27 | 0.2525 |
| No log | 30.72 | 62 | 1.8619 | 0.475 | 0.11 | 0.1475 | 0.3 | 0.2875 | 0.27 |
| No log | 31.72 | 64 | 1.8215 | 0.5025 | 0.11 | 0.15 | 0.3025 | 0.305 | 0.325 |
| No log | 32.72 | 66 | 1.7845 | 0.52 | 0.1125 | 0.15 | 0.3175 | 0.3225 | 0.3625 |
| No log | 33.72 | 68 | 1.7509 | 0.5375 | 0.1125 | 0.15 | 0.325 | 0.3525 | 0.3975 |
| No log | 34.72 | 70 | 1.7237 | 0.545 | 0.1075 | 0.15 | 0.3325 | 0.365 | 0.4275 |
| No log | 35.72 | 72 | 1.6970 | 0.555 | 0.11 | 0.15 | 0.3275 | 0.4 | 0.4475 |
| No log | 36.72 | 74 | 1.6512 | 0.57 | 0.1075 | 0.15 | 0.3225 | 0.4125 | 0.465 |
| No log | 37.72 | 76 | 1.6212 | 0.5875 | 0.11 | 0.1525 | 0.3375 | 0.42 | 0.4775 |
| No log | 38.72 | 78 | 1.5995 | 0.595 | 0.1125 | 0.15 | 0.34 | 0.4275 | 0.4975 |
| No log | 39.72 | 80 | 1.5713 | 0.5925 | 0.115 | 0.15 | 0.35 | 0.4375 | 0.525 |
| No log | 40.72 | 82 | 1.5551 | 0.5875 | 0.115 | 0.15 | 0.3525 | 0.4375 | 0.5325 |
| No log | 41.72 | 84 | 1.5276 | 0.59 | 0.115 | 0.15 | 0.35 | 0.4575 | 0.5425 |
| No log | 42.72 | 86 | 1.5050 | 0.5925 | 0.115 | 0.15 | 0.355 | 0.46 | 0.5425 |
| No log | 43.72 | 88 | 1.4871 | 0.595 | 0.1125 | 0.1525 | 0.3625 | 0.47 | 0.5625 |
| No log | 44.72 | 90 | 1.4712 | 0.5975 | 0.1125 | 0.1525 | 0.3675 | 0.4775 | 0.5525 |
| No log | 45.72 | 92 | 1.4615 | 0.5975 | 0.1125 | 0.155 | 0.365 | 0.4825 | 0.555 |
| No log | 46.72 | 94 | 1.4449 | 0.6075 | 0.1125 | 0.155 | 0.3625 | 0.4875 | 0.5575 |
| No log | 47.72 | 96 | 1.4273 | 0.6175 | 0.1125 | 0.155 | 0.365 | 0.5025 | 0.565 |
| No log | 48.72 | 98 | 1.4127 | 0.6225 | 0.1125 | 0.155 | 0.365 | 0.505 | 0.5725 |
| No log | 49.72 | 100 | 1.4005 | 0.63 | 0.1125 | 0.155 | 0.3675 | 0.5125 | 0.575 |
| No log | 50.72 | 102 | 1.3925 | 0.625 | 0.1125 | 0.155 | 0.37 | 0.5125 | 0.5725 |
| No log | 51.72 | 104 | 1.3847 | 0.6325 | 0.1125 | 0.155 | 0.38 | 0.5175 | 0.57 |
| No log | 52.72 | 106 | 1.3772 | 0.64 | 0.1125 | 0.155 | 0.38 | 0.515 | 0.57 |
| No log | 53.72 | 108 | 1.3679 | 0.6425 | 0.1125 | 0.155 | 0.3775 | 0.52 | 0.5825 |
| No log | 54.72 | 110 | 1.3595 | 0.6475 | 0.1125 | 0.155 | 0.3775 | 0.525 | 0.5825 |
| No log | 55.72 | 112 | 1.3544 | 0.6425 | 0.1125 | 0.155 | 0.3775 | 0.5225 | 0.58 |
| No log | 56.72 | 114 | 1.3515 | 0.6425 | 0.1125 | 0.155 | 0.375 | 0.52 | 0.5875 |
| No log | 57.72 | 116 | 1.3500 | 0.6425 | 0.1125 | 0.155 | 0.3775 | 0.52 | 0.5925 |
| No log | 58.72 | 118 | 1.3495 | 0.6425 | 0.1125 | 0.155 | 0.3775 | 0.5225 | 0.59 |
| No log | 59.72 | 120 | 1.3493 | 0.645 | 0.1125 | 0.155 | 0.3775 | 0.5225 | 0.5875 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1.post200
- Datasets 2.9.0
- Tokenizers 0.13.2
|
Tidzo/falcon-7b-esg-instruct
|
Tidzo
| 2023-07-06T11:40:34Z | 36 | 0 |
peft
|
[
"peft",
"climate",
"text-generation",
"en",
"dataset:tiiuae/falcon-refinedweb",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-22T04:42:23Z |
---
library_name: peft
license: apache-2.0
tags:
- climate
language:
- en
datasets:
- tiiuae/falcon-refinedweb
pipeline_tag: text-generation
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
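A hedged loading sketch that mirrors the quantization settings above is given below. The base model (`tiiuae/falcon-7b`) is an assumption, since the card does not name it, and `trust_remote_code=True` may or may not be needed depending on your `transformers` version.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

# Mirror the bitsandbytes settings listed above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)

base_id = "tiiuae/falcon-7b"  # assumed base model; not stated in this card
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id,
    quantization_config=bnb_config,
    device_map="auto",
    trust_remote_code=True,  # may be required for Falcon on older transformers versions
)

# Attach the PEFT adapter from this repository.
model = PeftModel.from_pretrained(base, "Tidzo/falcon-7b-esg-instruct")
```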
### Framework versions
- PEFT 0.4.0.dev0
|
vineetsharma/speecht5_finetuned_voxpopuli_nl
|
vineetsharma
| 2023-07-06T11:38:52Z | 75 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"speecht5",
"text-to-audio",
"generated_from_trainer",
"dataset:voxpopuli",
"license:mit",
"endpoints_compatible",
"region:us"
] |
text-to-audio
| 2023-07-06T08:55:10Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- voxpopuli
model-index:
- name: speecht5_finetuned_voxpopuli_nl
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# speecht5_finetuned_voxpopuli_nl
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the voxpopuli dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4609
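A hedged synthesis sketch following the usual SpeechT5 recipe (processor + model + HiFi-GAN vocoder + an x-vector speaker embedding). The speaker-embedding dataset, the embedding index and the Dutch example sentence are assumptions, not part of this card.
```python
import torch
import soundfile as sf
from datasets import load_dataset
from transformers import SpeechT5ForTextToSpeech, SpeechT5HifiGan, SpeechT5Processor

repo = "vineetsharma/speecht5_finetuned_voxpopuli_nl"
processor = SpeechT5Processor.from_pretrained(repo)
model = SpeechT5ForTextToSpeech.from_pretrained(repo)
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

# Any 512-dim x-vector works as a speaker embedding; this dataset and index are
# the ones commonly used in the SpeechT5 examples.
embeddings = load_dataset("Matthijs/cmu-arctic-xvectors", split="validation")
speaker_embedding = torch.tensor(embeddings[7306]["xvector"]).unsqueeze(0)

inputs = processor(text="Hallo, dit is een testzin.", return_tensors="pt")
speech = model.generate_speech(inputs["input_ids"], speaker_embedding, vocoder=vocoder)
sf.write("speech.wav", speech.numpy(), samplerate=16000)
```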
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.5219 | 4.3 | 1000 | 0.4787 |
| 0.5047 | 8.61 | 2000 | 0.4660 |
| 0.4922 | 12.91 | 3000 | 0.4621 |
| 0.4898 | 17.21 | 4000 | 0.4609 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Binaryy/llama_travel_test
|
Binaryy
| 2023-07-06T11:38:27Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-06T11:37:12Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.4.0.dev0
|
Bugsys0302/undbob
|
Bugsys0302
| 2023-07-06T11:36:05Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-06T11:26:03Z |
---
license: creativeml-openrail-m
---
|
papahawk/gpt2-1.5b
|
papahawk
| 2023-07-06T11:19:11Z | 206 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"jax",
"tflite",
"rust",
"onnx",
"safetensors",
"gpt2",
"text-generation",
"pyTtorch",
"tensorflow",
"en",
"dataset:gpt-2-output-dataset",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-05T22:17:24Z |
---
language:
- en
tags:
- text-generation
- pyTtorch
- tensorflow
- transformers
datasets:
- gpt-2-output-dataset
license: mit
---
<h1 style='text-align: center '>GPT2-1.5b LLM</h1>
<h2 style='text-align: center '><em>Fork of OpenAI/GPT2-1.5b</em> </h2>
<h3 style='text-align: center '>Model Card</h3>
<img src="https://alt-web.xyz/images/rainbow.png" alt="Rainbow Solutions" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
# gpt2-1.5b
Code and models from the paper ["Language Models are Unsupervised Multitask Learners"](https://d4mucfpksywv.cloudfront.net/better-language-models/language-models.pdf).
You can read about GPT-2 and its staged release in our [original blog post](https://blog.openai.com/better-language-models/), [6 month follow-up post](https://openai.com/blog/gpt-2-6-month-follow-up/), and [final post](https://www.openai.com/blog/gpt-2-1-5b-release/).
We have also [released a dataset](https://github.com/openai/gpt-2-output-dataset) for researchers to study their behaviors.
<sup>*</sup> *Note that our original parameter counts were wrong due to an error (in our previous blog posts and paper). Thus you may have seen small referred to as 117M and medium referred to as 345M.*
## Usage
This repository is meant to be a starting point for researchers and engineers to experiment with GPT-2.
For basic information, see our [model card](./model_card.md).
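For a quick start from this mirror with the `transformers` library, a minimal sketch (the repo id is taken from this card; prompt and sampling settings are arbitrary):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("papahawk/gpt2-1.5b")
model = AutoModelForCausalLM.from_pretrained("papahawk/gpt2-1.5b")

inputs = tokenizer("The meaning of life is", return_tensors="pt")
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=30, do_sample=True, top_p=0.95)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```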
### Some caveats
- GPT-2 models' robustness and worst case behaviors are not well-understood. As with any machine-learned model, carefully evaluate GPT-2 for your use case, especially if used without fine-tuning or in safety-critical applications where reliability is important.
- The dataset our GPT-2 models were trained on contains many texts with [biases](https://twitter.com/TomerUllman/status/1101485289720242177) and factual inaccuracies, and thus GPT-2 models are likely to be biased and inaccurate as well.
- To avoid having samples mistaken as human-written, we recommend clearly labeling samples as synthetic before wide dissemination. Our models are often incoherent or inaccurate in subtle ways, which takes more than a quick read for a human to notice.
### Work with us
Please [let us know](mailto:languagequestions@openai.com) if you’re doing interesting research with or working on applications of GPT-2! We’re especially interested in hearing from and potentially working with those who are studying
- Potential malicious use cases and defenses against them (e.g. the detectability of synthetic text)
- The extent of problematic content (e.g. bias) being baked into the models and effective mitigations
## Development
See [DEVELOPERS.md](./DEVELOPERS.md)
## Contributors
See [CONTRIBUTORS.md](./CONTRIBUTORS.md)
## Citation
Please use the following bibtex entry:
```
@article{radford2019language,
title={Language Models are Unsupervised Multitask Learners},
author={Radford, Alec and Wu, Jeff and Child, Rewon and Luan, David and Amodei, Dario and Sutskever, Ilya},
year={2019}
}
```
## Future work
We may release code for evaluating the models on various benchmarks.
We are still considering release of the larger models.
## License
[Modified MIT](./LICENSE)
|
m-aliabbas1/dqn-SpaceInvadersNoFrameskip-v4
|
m-aliabbas1
| 2023-07-06T11:17:35Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-06T11:16:52Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 807.50 +/- 374.85
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga m-aliabbas1 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga m-aliabbas1 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga m-aliabbas1
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 10000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
ddoc/pta2
|
ddoc
| 2023-07-06T11:16:48Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-07-06T11:16:15Z |
# stable-diffusion-webui-prompt-travel
Travel between prompts in the latent space to make a pseudo-animation; an extension script for AUTOMATIC1111/stable-diffusion-webui.
----
<p align="left">
<a href="https://github.com/Kahsolt/stable-diffusion-webui-prompt-travel/commits"><img alt="Last Commit" src="https://img.shields.io/github/last-commit/Kahsolt/stable-diffusion-webui-prompt-travel"></a>
<a href="https://github.com/Kahsolt/stable-diffusion-webui-prompt-travel/issues"><img alt="GitHub issues" src="https://img.shields.io/github/issues/Kahsolt/stable-diffusion-webui-prompt-travel"></a>
<a href="https://github.com/Kahsolt/stable-diffusion-webui-prompt-travel/stargazers"><img alt="GitHub stars" src="https://img.shields.io/github/stars/Kahsolt/stable-diffusion-webui-prompt-travel"></a>
<a href="https://github.com/Kahsolt/stable-diffusion-webui-prompt-travel/network"><img alt="GitHub forks" src="https://img.shields.io/github/forks/Kahsolt/stable-diffusion-webui-prompt-travel"></a>
<img alt="Language" src="https://img.shields.io/github/languages/top/Kahsolt/stable-diffusion-webui-prompt-travel">
<img alt="License" src="https://img.shields.io/github/license/Kahsolt/stable-diffusion-webui-prompt-travel">
<br/>
</p>

Try interpolating on the hidden vectors of the conditioning prompt to make a seemingly-continuous image sequence, or let's say a pseudo-animation. 😀
Not only prompts! We also support non-prompt conditions, read => [README_ext.md](README_ext.md) ~
⚠ We have set up a QQ chat group for extension feedback (616795645, 赤狐屿); any suggestions, discussions and bug reports are highly welcome!!
ℹ To be honest, I think this could be used to make fairy-tale picture books for slides <del>or even doujinshi</del>…
ℹ A smart workflow: first manually search for two good-looking images (differing only in the prompt), then try to travel between them :lolipop:
⚠ Remember to turn "Always save all generated images" on in the settings tab, otherwise "upscaling" and "saving intermediate images" will not work.
### Change Log
⚪ Compatibility
The latest version `v3.0` is synced & tested with:
- [AUTOMATIC1111/stable-diffusion-webui](https://github.com/AUTOMATIC1111/stable-diffusion-webui): version `v1.4.0`, tag [v1.4.0](https://github.com/AUTOMATIC1111/stable-diffusion-webui/releases/tag/v1.4.0)
- [Mikubill/sd-webui-controlnet](https://github.com/Mikubill/sd-webui-controlnet): version `v1.1.229`, commit [eceeec7a7e](https://github.com/Mikubill/sd-webui-controlnet/commit/eceeec7a7e856867de56e26cae9f3e1076480344)
⚪ Features
- 2023/07/05: `v3.0` re-impl core with sd-webui `v1.4.0` callbacks; this new implementation will be slower, but more compatible with other extensions
- 2023/04/13: `v2.7` add RIFE to controlnet-travel, skip fusion (experimental)
- 2023/03/31: `v2.6` add a tkinter [GUI](#run-each-time) for postprocess toolchain
- 2023/03/30: `v2.5` add controlnet-travel script (experimental), interpolating between hint conditions **instead of prompts**, thx for the code base from [sd-webui-controlnet](https://github.com/Mikubill/sd-webui-controlnet)
- 2023/02/14: `v2.3` integrate basic function of [depth-image-io](https://github.com/AnonymousCervine/depth-image-io-for-SDWebui) for depth2img models
- 2023/01/27: `v2.2` add 'slerp' linear interpolation method
- 2023/01/22: `v2.1` add experimental 'replace' mode again, it's not smooth interpolation
- 2023/01/20: `v2.0` add an optional external [post-processing pipeline](#post-processing-pipeline) to greatly boost smoothness, great thanks to [Real-ESRGAN](https://github.com/xinntao/Real-ESRGAN) and [RIFE](https://github.com/nihui/rife-ncnn-vulkan)!!
- 2023/01/16: `v1.5` add upscale options (issue #12); add 'embryo' genesis, reproducing idea of [stable-diffusion-animation](https://replicate.com/andreasjansson/stable-diffusion-animation) except [FILM](https://github.com/google-research/frame-interpolation) support (issue #11)
- 2023/01/12: `v1.4` remove 'replace' & 'grad' mode support, due to webui's code change
- 2022/12/11: `v1.3` work in a more 'successive' way, idea borrowed from [deforum](https://github.com/deforum-art/deforum-for-automatic1111-webui) ('genesis' option)
- 2022/11/14: `v1.2` walk by substituting token embedding ('replace' mode)
- 2022/11/13: `v1.1` walk by optimizing condition ('grad' mode)
- 2022/11/10: `v1.0` interpolate linearly on condition/uncondition ('linear' mode)
⚪ Fixups
- 2023/07/05: sync sd-webui-controlnet to `v1.1.229`
- 2023/04/30: update controlnet core to `v1.1.116`
- 2023/03/29: `v2.4` bug fixes on script hook, now working correctly with extra networks & [sd-webui-controlnet](https://github.com/Mikubill/sd-webui-controlnet)
- 2023/01/31: keep up with webui's updates, (issue #14: `ImportError: cannot import name 'single_sample_to_image'`)
- 2023/01/28: keep up with webui's updates, extra-networks rework
- 2023/01/16: `v1.5` apply zero padding when condition length mismatch (issue #10: `RuntimeError: The size of tensor a (77) must match the size of tensor b (154) at non-singleton dimension 0`), typo in demo filename
- 2023/01/12: `v1.4` keep up with webui's updates (issue #9: `AttributeError: 'FrozenCLIPEmbedderWithCustomWords' object has no attribute 'process_text'`)
- 2022/12/13: `#bdd8bed` fixup no working when negative prompt is left empty (issue #6: `neg_prompts[-1] IndexError: List index out of range`)
- 2022/11/27: `v1.2-fix2` keep up with webui's updates (error `ImportError: FrozenCLIPEmbedderWithCustomWords`)
- 2022/11/20: `v1.2-fix1` keep up with webui's updates (error `AttributeError: p.all_negative_prompts[0]`)
⚠ this script will probably NOT support the schedule syntax (i.e.: `[prompt:prompt:number]`), because interpolating on mutable conditions requires sampler-level tracing, which is hard to maintain :(
⚠ this script will probably NOT work together with `hires.fix` due to an inner conceptual/logical conflict around `denoising_strength`; you can alternatively perform batch-upscale then batch-img2img.
### How it works?
- input **multiple lines** in the prompt/negative-prompt box, each line is called a **stage**
- generate images one by one, interpolating from one stage towards the next (batch configs are ignored)
- gradually change the digested inputs between prompts
- freeze all other settings (`steps`, `sampler`, `cfg factor`, `seed`, etc.)
- note that only the major `seed` will be forcibly fixed throughout the process; you can still set `subseed = -1` to allow more variance
- export a video!
- follow [post-processing pipeline](#post-processing-pipeline) to get much better result 👌
⚪ Txt2Img
| sampler \ genesis | fixed | successive | embryo |
| :-: | :-: | :-: | :-: |
| Euler a |  |  |  |
| DDIM |  |  |  |
⚪ Img2Img
| sampler \ genesis | fixed | successive | embryo |
| :-: | :-: | :-: | :-: |
| Euler a |  |  |  |
| DDIM |  |  |  |
post-processing pipeline (case `i2i-f-ddim`):
| w/o. post-processing | w/. post-processing |
| :-: | :-: |
|  |  |
other stuff:
| reference image for img2img | embryo image decoded <br/> case `i2i-e-euler_a` with `embryo_step=8` |
| :-: | :-: |
|  |  |
⚪ ControlNet support
| prompt-travel with ControlNet (depth) | controlnet-travel (depth) |
| :-: | :-: |
|  |  |
Example above run configure:
```text
Prompt:
(((masterpiece))), highres, ((boy)), child, cat ears, white hair, red eyes, yellow bell, red cloak, barefoot, angel, [flying], egyptian
((masterpiece)), highres, ((girl)), loli, cat ears, light blue hair, red eyes, magical wand, barefoot, [running]
Negative prompt:
(((nsfw))), ugly,duplicate,morbid,mutilated,tranny,trans,trannsexual,mutation,deformed,long neck,bad anatomy,bad proportions,extra arms,extra legs, disfigured,more than 2 nipples,malformed,mutated,hermaphrodite,out of frame,extra limbs,missing arms,missing legs,poorly drawn hands,poorty drawn face,mutation,poorly drawn,long body,multiple breasts,cloned face,gross proportions, mutated hands,bad hands,bad feet,long neck,missing limb,malformed limbs,malformed hands,fused fingers,too many fingers,extra fingers,missing fingers,extra digit,fewer digits,mutated hands and fingers,lowres,text,error,cropped,worst quality,low quality,normal quality,jpeg artifacts,signature,watermark,username,blurry,text font ufemale focus, poorly drawn, deformed, poorly drawn face, (extra leg:1.3), (extra fingers:1.2),out of frame
Steps: 15
CFG scale: 7
Clip skip: 1
Seed: 114514
Size: 512 x 512
Model hash: animefull-final-pruned.ckpt
Hypernet: (this is my secret :)
```
### Options
- prompt: (list of strings)
- negative prompt: (list of strings)
- input multiple lines of prompt text
- we call each line of prompt a stage; usually you need at least 2 lines of text to start a travel
- if len(positive_prompts) != len(negative_prompts), the shorter one's last item will be repeated to match the longer one
- mode: (categorical)
- `linear`: linear interpolation on condition/uncondition of CLIP output
- `replace`: gradually replace of CLIP output
- replace_dim: (categorical)
- `token`: per token-wise vector
- `channel`: per channel-wise vector
- `random`: per point-wise element
- replace_order: (categorical)
- `similiar`: from the most similar first (L1 distance)
- `different`: from the most different first
- `random`: just randomly
- `embryo`: pre-denoise a few steps, then hatch a set of images from the common embryo by linear interpolation
- steps: (int, list of int)
- number of images to interpolate between two stages
- if int, constant number of travel steps
- if list of int, length should match `len(stages)-1`, separate by comma, e.g.: `12, 24, 36`
- genesis: (categorical), the prior for each image frame
- `fixed`: starts from pure noise in txt2img pipeline, or from the same ref-image given in img2img pipeline
- `successive`: starts from the last generated image (this will force the txt2img run to actually be img2img from the 2nd frame on)
- `embryo`: starts from the same half-denoised image, see [=> How does it work?](https://replicate.com/andreasjansson/stable-diffusion-animation#readme)
- (experimental) it only processes 2 lines of prompts, and does not interpolate on negative_prompt :(
- genesis_extra_params
- denoise_strength: (float), denoise strength in img2img pipelines (for `successive`)
- embryo_step: (int or float), steps to hatch the common embryo (for `embryo`)
- if >= 1, taken as step count
- if < 1, taken as ratio of total step
- video_*
- fps: (float), FPS of video, set `0` to disable file saving
- fmt: (categorical), export video file format
- pad: (int), repeat beginning/ending frames, giving an in/out time
- pick: (string), cherry pick frames by [python slice syntax](https://www.pythoncentral.io/how-to-slice-listsarrays-and-tuples-in-python) before padding (e.g.: set `::2` to get only even frames, set `:-1` to drop last frame)
### Installation
The easiest way to install it is to:
1. Go to the "Extensions" tab in the webui, switch to the "Install from URL" tab
2. Paste https://github.com/Kahsolt/stable-diffusion-webui-prompt-travel.git into "URL for extension's git repository" and click install
3. (Optional) You will need to restart the webui for dependencies to be installed or you won't be able to generate video files
Manual install:
1. Copy this repo folder to the 'extensions' folder of https://github.com/AUTOMATIC1111/stable-diffusion-webui
2. (Optional) Restart the webui
### Post-processing pipeline
You are still two steps away from a really smooth and high-resolution animation, namely image **super-resolution** & video **frame interpolation** (see `third-party tools` below).
⚠ Media data processing is intrinsically resource-exhausting, and it's also not webui's job, hence we separated it out. 😃
#### setup once
⚪ auto install (Windows)
- run `cd tools & install.cmd`
- troubleshooting
- if you get any file system access errors like `Access denied.`, try running it again until you see `Done!` without errors 😂
- if you get SSL errors about `curl schannel ... Unknown error ... certificate.`, the downloader does not work due to some SSL security reasons; just install manually instead...
- you will have four components: [Busybox](https://frippery.org/busybox/), [Real-ESRGAN](https://github.com/xinntao/Real-ESRGAN-ncnn-vulkan), [RIFE](https://github.com/nihui/rife-ncnn-vulkan) and [FFmpeg](https://ffmpeg.org/) installed under the [tools](tools) folder
⚪ manually install (Windows/Linux/Mac)
ℹ Understand the `tools` folder layout first => [tools/README.txt](tools/README.txt)
ℹ If you indeed wanna put the tools elsewhere, modify paths in [tools/link.cmd](tools/link.cmd) and run `cd tools & link.cmd` 😉
For Windows:
- download [Busybox](https://frippery.org/busybox/)
- download [Real-ESRGAN](https://github.com/xinntao/Real-ESRGAN/releases) (e.g.: `realesrgan-ncnn-vulkan-20220424-windows.zip`)
- (optional) download any separate model checkpoints you are interested in (e.g.: `realesr-animevideov3.pth`)
- download [rife-ncnn-vulkan](https://github.com/nihui/rife-ncnn-vulkan/releases) bundle (e.g.: `rife-ncnn-vulkan-20221029-windows.zip `)
- download [FFmpeg](https://ffmpeg.org/download.html) binary (e.g.: `ffmpeg-release-full-shared.7z` or `ffmpeg-git-full.7z`)
For Linux/Mac:
- download [Real-ESRGAN](https://github.com/xinntao/Real-ESRGAN/releases) and [rife-ncnn-vulkan](https://github.com/nihui/rife-ncnn-vulkan/releases), put them according to the `tools` folder layout, manually apply `chmod 755` to the executables
- `ffmpeg` can be easily found in your app store or package manager, e.g. `apt install ffmpeg`; there is NO need to link it under the `tools` folder
#### run each time
⚪ tkinter GUI (Windows/Linux/Mac)

For Windows:
- run `manager.cmd`, to start webui's python venv
- run the [DOSKEY](https://learn.microsoft.com/en-us/windows-server/administration/windows-commands/doskey) `install` (only setup once)
- run the DOSKEY `run`
For Linux/Mac:
- run `../../venv/Scripts/activate`, to start webui's python venv
- run `pip install -r requirements.txt` (only setup once)
- run `python manager.py`
ℹ find the usage help message in the right-click pop-up menu~
⚪ <del> cmd script (Windows) - deprecated </del>
- check params in [postprocess-config.cmd](postprocess-config.cmd)
- pick one way to start 😃
- run `postprocess.cmd path/to/<image_folder>` from command line
- drag & drop any image folder over `postprocess.cmd` icon
- once processing is finished, the explorer will be auto-launched to locate the generated file named `synth.mp4`
### Related Projects
⚪ extensions that inspired this repo
- sd-webui-controlnet (various image conditions): [https://github.com/Mikubill/sd-webui-controlnet](https://github.com/Mikubill/sd-webui-controlnet)
- depth-image-io (custom depth2img): [https://github.com/AnonymousCervine/depth-image-io-for-SDWebui](https://github.com/AnonymousCervine/depth-image-io-for-SDWebui)
- animator (img2img): [https://github.com/Animator-Anon/animator_extension](https://github.com/Animator-Anon/animator_extension)
- sd-webui-riffusion (music gen): [https://github.com/enlyth/sd-webui-riffusion](https://github.com/enlyth/sd-webui-riffusion)
- sd-animation (half denoise + FILM):
- Github: [https://github.com/andreasjansson/cog-stable-diffusion](https://github.com/andreasjansson/cog-stable-diffusion)
- Replicate: [https://replicate.com/andreasjansson/stable-diffusion-animation](https://replicate.com/andreasjansson/stable-diffusion-animation)
- deforum (img2img + depth model): [https://github.com/deforum-art/deforum-for-automatic1111-webui](https://github.com/deforum-art/deforum-for-automatic1111-webui)
- seed-travel (varying seed): [https://github.com/yownas/seed_travel](https://github.com/yownas/seed_travel)
⚪ third-party tools
- image super-resolution
- ESRGAN:
- ESRGAN: [https://github.com/xinntao/ESRGAN](https://github.com/xinntao/ESRGAN)
- Real-ESRGAN: [https://github.com/xinntao/Real-ESRGAN](https://github.com/xinntao/Real-ESRGAN)
- Real-ESRGAN-ncnn-vulkan (recommended): [https://github.com/xinntao/Real-ESRGAN-ncnn-vulkan](https://github.com/xinntao/Real-ESRGAN-ncnn-vulkan)
- video frame interpolation
- FILM (recommended): [https://github.com/google-research/frame-interpolation](https://github.com/google-research/frame-interpolation)
- RIFE:
- ECCV2022-RIFE: [https://github.com/megvii-research/ECCV2022-RIFE](https://github.com/megvii-research/ECCV2022-RIFE)
- rife-ncnn-vulkan (recommended): [https://github.com/nihui/rife-ncnn-vulkan](https://github.com/nihui/rife-ncnn-vulkan)
- Squirrel-RIFE: [https://github.com/Justin62628/Squirrel-RIFE](https://github.com/Justin62628/Squirrel-RIFE)
- Practical-RIFE: [https://github.com/hzwer/Practical-RIFE](https://github.com/hzwer/Practical-RIFE)
- GNU tool-kits
- BusyBox: [https://www.busybox.net/](https://www.busybox.net/)
- BusyBox for Windows: [https://frippery.org/busybox/](https://frippery.org/busybox/)
- FFmpeg: [https://ffmpeg.org/](https://ffmpeg.org/)
⚪ my other experimental toy extensions
- vid2vid (video2video) [https://github.com/Kahsolt/stable-diffusion-webui-vid2vid](https://github.com/Kahsolt/stable-diffusion-webui-vid2vid)
- hires-fix-progressive (a progressive version of hires.fix): [https://github.com/Kahsolt/stable-diffusion-webui-hires-fix-progressive](https://github.com/Kahsolt/stable-diffusion-webui-hires-fix-progressive)
- sonar (k_diffuison samplers): [https://github.com/Kahsolt/stable-diffusion-webui-sonar](https://github.com/Kahsolt/stable-diffusion-webui-sonar)
- size-travel (kind of X-Y plot on image size): [https://github.com/Kahsolt/stable-diffusion-webui-size-travel](https://github.com/Kahsolt/stable-diffusion-webui-size-travel)
----
by Armit
2022/11/10
|
maxkskhor/ppo-SnowballTarget
|
maxkskhor
| 2023-07-06T11:09:41Z | 5 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2023-07-06T11:09:35Z |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: maxkskhor/ppo-SnowballTarget
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
asenella/mmnist_JNFconfig_resnet_seed_0_ratio_0_c
|
asenella
| 2023-07-06T11:06:29Z | 0 | 0 | null |
[
"multivae",
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-06-04T20:51:08Z |
---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
Zain6699/intent-classifier-establish_credibility
|
Zain6699
| 2023-07-06T11:03:55Z | 103 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-06T11:02:29Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: intent-classifier-establish_credibility
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# intent-classifier-establish_credibility
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0714
- Accuracy: 0.9854
- F1: 0.9581
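As a quick usage reference, a hedged inference sketch with the raw model classes; the example sentence is invented and the label mapping comes from the checkpoint's config.
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "Zain6699/intent-classifier-establish_credibility"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("I have helped over 200 clients scale their outreach.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

pred = logits.argmax(dim=-1).item()
print(model.config.id2label[pred])
```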
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Zain6699/intent-classifier-common_ground
|
Zain6699
| 2023-07-06T11:02:19Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-06T11:00:57Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: intent-classifier-common_ground
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# intent-classifier-common_ground
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0360
- Accuracy: 0.9938
- F1: 0.9825
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
arham061/finance-alpaca
|
arham061
| 2023-07-06T11:01:10Z | 134 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-06T10:26:35Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: finance-alpaca
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finance-alpaca
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown dataset.
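A minimal inference sketch with the `transformers` text-generation pipeline (the prompt is only an illustrative placeholder):
```python
from transformers import pipeline

# Generate a continuation with the fine-tuned distilgpt2 checkpoint.
generator = pipeline("text-generation", model="arham061/finance-alpaca")
print(generator("What is compound interest?", max_new_tokens=60)[0]["generated_text"])
```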
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
linlinlin/peft-fine-tuning
|
linlinlin
| 2023-07-06T10:54:57Z | 0 | 0 | null |
[
"pytorch",
"tensorboard",
"generated_from_trainer",
"license:apache-2.0",
"region:us"
] | null | 2023-07-06T10:31:06Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: peft-fine-tuning
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# peft-fine-tuning
This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on an unknown dataset.
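A minimal loading sketch, assuming the repo holds a PEFT adapter saved on top of `google/flan-t5-base` as stated above:
```python
from peft import PeftModel
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Load the base model, then attach the adapter weights from this repo.
base = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-base")
model = PeftModel.from_pretrained(base, "linlinlin/peft-fine-tuning")
tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-base")
```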
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 50
### Training results
### Framework versions
- Transformers 4.27.2
- Pytorch 2.0.1+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
|
linlinlin/full-fine-tuning
|
linlinlin
| 2023-07-06T10:53:14Z | 180 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-07-06T10:22:57Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: full-fine-tuning
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# full-fine-tuning
This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on an unknown dataset.
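A minimal inference sketch with the `transformers` text2text pipeline (the prompt is a placeholder; the training task is not documented in this card):
```python
from transformers import pipeline

# Run the fully fine-tuned Flan-T5 checkpoint end to end.
pipe = pipeline("text2text-generation", model="linlinlin/full-fine-tuning")
print(pipe("Summarize: The quick brown fox jumps over the lazy dog.")[0]["generated_text"])
```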
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 50
### Training results
### Framework versions
- Transformers 4.27.2
- Pytorch 2.0.1+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
|
Bugsys0302/trbrma
|
Bugsys0302
| 2023-07-06T10:46:39Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-06T10:41:02Z |
---
license: creativeml-openrail-m
---
|
mpetrikov/dqn-unit3-SpaceInvadersNoFrameskip-v4
|
mpetrikov
| 2023-07-06T10:45:49Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-06T10:45:14Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 568.00 +/- 121.35
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga mpetrikov -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga mpetrikov -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga mpetrikov
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
Hardi13/Revier
|
Hardi13
| 2023-07-06T10:44:53Z | 0 | 0 | null |
[
"license:bigscience-openrail-m",
"region:us"
] | null | 2023-07-06T10:44:53Z |
---
license: bigscience-openrail-m
---
|
Norod78/TinyStories-3M-val-Hebrew
|
Norod78
| 2023-07-06T10:42:58Z | 120 | 2 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"gpt_neo",
"text-generation",
"generated_from_trainer",
"he",
"dataset:Norod78/TinyStoriesV2-GPT4-valid_heb-lineByLine-EoT",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-28T05:30:06Z |
---
tags:
- generated_from_trainer
model-index:
- name: TinyStories-3M-val-Hebrew
results: []
license: mit
language:
- he
datasets:
- Norod78/TinyStoriesV2-GPT4-valid_heb-lineByLine-EoT
widget:
- text: היה פעם
- text: פעם אחת
- text: החתול שלך מאוד חמוד ו
pipeline_tag: text-generation
---
# TinyStories-3M-val-Hebrew
This model is trained upon [Norod78/TinyStoriesV2-GPT4-valid_heb-lineByLine-EoT](https://huggingface.co/datasets/Norod78/TinyStoriesV2-GPT4-valid_heb-lineByLine-EoT)
Dataset is a machine translation of [TinyStoriesV2-GPT4-valid.txt](https://huggingface.co/datasets/roneneldan/TinyStories/blob/main/TinyStoriesV2-GPT4-valid.txt) by [roneneldan](https://huggingface.co/roneneldan)
Translation was done using [this](https://huggingface.co/datasets/Norod78/TinyStoriesV2-GPT4-valid_heb-lineByLine-EoT/blob/main/translate_file_2.py) script
The original [Dataset](https://huggingface.co/datasets/roneneldan/TinyStories) contains synthetically generated (by GPT-3.5 and GPT-4) short stories that only use a small vocabulary.
## Model description
A very, very small model (8M params) trained on a very small dataset.
A [sample inference script](https://huggingface.co/Norod78/TinyStories-3M-val-Hebrew/blob/main/TinyStories-3M-val-Hebrew-inference.py) is available
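Independently of that script, a minimal inference sketch with the `transformers` pipeline (the prompt is one of the widget examples above):
```python
from transformers import pipeline

# Generate a short Hebrew story continuation.
generator = pipeline("text-generation", model="Norod78/TinyStories-3M-val-Hebrew")
print(generator("היה פעם", max_new_tokens=60)[0]["generated_text"])
```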
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 24
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 500
- num_epochs: 300.0
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.0
- Datasets 2.13.1
- Tokenizers 0.13.3
### Parameter calculation
```
def gpt_params(seq_len, vocab_size, d_model, num_heads, num_layers):
""" Given GPT config calculate total number of parameters """
ffw_size = 4*d_model # in GPT the number of intermediate features is always 4*d_model
# token and position embeddings
embeddings = d_model * vocab_size + d_model * seq_len
# transformer blocks
attention = 3*d_model**2 + 3*d_model # weights and biases
attproj = d_model**2 + d_model
ffw = d_model*(ffw_size) + ffw_size
ffwproj = ffw_size*d_model + d_model
layernorms = 2*2*d_model
# dense
ln_f = 2*d_model
dense = d_model*vocab_size # note: no bias here
# note: embeddings are not included in the param count!
total_params = num_layers*(attention + attproj + ffw + ffwproj + layernorms) + ln_f + dense
return total_params
#gpt2 = dict(seq_len = 1024, vocab_size = 50257, d_model = 768, num_heads = 12, num_layers = 12)
gpt2 = dict(seq_len = 256, vocab_size = 50259, d_model = 128, num_heads = 16, num_layers = 8)
result = gpt_params(**gpt2)/1e6
print(result) #Prints 8.019584
```
|
qwopqwop/danbooru-llama-qlora
|
qwopqwop
| 2023-07-06T10:38:48Z | 0 | 4 | null |
[
"license:mit",
"region:us"
] | null | 2023-07-06T10:25:19Z |
---
license: mit
---
Training code: https://github.com/qwopqwop200/llama-danbooru-qlora
|
cardiffnlp/twitter-roberta-base-hate-multiclass-latest
|
cardiffnlp
| 2023-07-06T10:37:08Z | 136 | 2 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"roberta",
"text-classification",
"en",
"arxiv:2307.01680",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-09T22:40:52Z |
---
model-index:
- name: twitter-roberta-base-hate-multiclass-latest
results: []
language:
- en
pipeline_tag: text-classification
---
# cardiffnlp/twitter-roberta-base-hate-multiclass-latest
This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base-2022-154m](https://huggingface.co/cardiffnlp/twitter-roberta-base-2022-154m) for multiclass hate-speech classification. A combination of 13 different hate-speech datasets in the English language was used to fine-tune the model.
## Classes available
```
{
"sexism": 0,
"racism": 1,
"disability": 2,
"sexual_orientation": 3,
"religion": 4,
"other": 5,
"not_hate":6
}
```
## The following metrics are achieved
* Accuracy: 0.9419
* Macro-F1: 0.5752
* Weighted-F1: 0.9390
### Usage
Install tweetnlp via pip.
```shell
pip install tweetnlp
```
Load the model in python.
```python
import tweetnlp
model = tweetnlp.Classifier("cardiffnlp/twitter-roberta-base-hate-multiclass-latest")
model.predict('Women are trash 2.')
>> {'label': 'sexism'}
model.predict('@user dear mongoloid respect sentiments & belief refrain totalitarianism. @user')
>> {'label': 'disability'}
```
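Alternatively, a minimal sketch with the plain `transformers` pipeline (the example tweet is taken from above; the label strings come from the mapping stored in the model config):
```python
from transformers import pipeline

# Same model through the standard text-classification pipeline.
classifier = pipeline(
    "text-classification",
    model="cardiffnlp/twitter-roberta-base-hate-multiclass-latest",
)
print(classifier("Women are trash 2."))
```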
### Model based on:
```
@misc{antypas2023robust,
title={Robust Hate Speech Detection in Social Media: A Cross-Dataset Empirical Evaluation},
author={Dimosthenis Antypas and Jose Camacho-Collados},
year={2023},
eprint={2307.01680},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
soduhh/marian-finetuned-kde4-en-to-fr
|
soduhh
| 2023-07-06T10:26:33Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"marian",
"text2text-generation",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-07-05T14:32:51Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: soduhh/marian-finetuned-kde4-en-to-fr
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# soduhh/marian-finetuned-kde4-en-to-fr
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.6854
- Validation Loss: 0.8044
- Epoch: 2
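A minimal inference sketch with the `transformers` translation pipeline, assuming TensorFlow is installed since this repo appears to ship Keras/TF weights (the sentence is a made-up example):
```python
from transformers import pipeline

# English -> French translation with the fine-tuned Marian checkpoint (TF weights).
translator = pipeline(
    "translation_en_to_fr",
    model="soduhh/marian-finetuned-kde4-en-to-fr",
    framework="tf",
)
print(translator("Open the file menu and select Save As.")[0]["translation_text"])
```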
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': 17733, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 1.0627 | 0.8795 | 0 |
| 0.7968 | 0.8213 | 1 |
| 0.6854 | 0.8044 | 2 |
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.12.0
- Datasets 2.13.1
- Tokenizers 0.13.3
|
blanchefort/rubert-base-cased-sentiment-mokoron
|
blanchefort
| 2023-07-06T09:56:44Z | 129 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"jax",
"safetensors",
"bert",
"text-classification",
"sentiment",
"ru",
"dataset:RuTweetCorp",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
language:
- ru
tags:
- sentiment
- text-classification
datasets:
- RuTweetCorp
---
# RuBERT for Sentiment Analysis of Tweets
This is a [DeepPavlov/rubert-base-cased-conversational](https://huggingface.co/DeepPavlov/rubert-base-cased-conversational) model trained on [RuTweetCorp](https://study.mokoron.com/).
## Labels
0: POSITIVE
1: NEGATIVE
## How to use
```python
import torch
from transformers import AutoModelForSequenceClassification
from transformers import BertTokenizerFast
tokenizer = BertTokenizerFast.from_pretrained('blanchefort/rubert-base-cased-sentiment-mokoron')
model = AutoModelForSequenceClassification.from_pretrained('blanchefort/rubert-base-cased-sentiment-mokoron', return_dict=True)
@torch.no_grad()
def predict(text):
inputs = tokenizer(text, max_length=512, padding=True, truncation=True, return_tensors='pt')
outputs = model(**inputs)
predicted = torch.nn.functional.softmax(outputs.logits, dim=1)
predicted = torch.argmax(predicted, dim=1).numpy()
return predicted
```
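For example (a sketch; the sample sentence is arbitrary, and the indices follow the label list above):
```python
# 0 = POSITIVE, 1 = NEGATIVE, per the label list above.
labels = ["POSITIVE", "NEGATIVE"]
print([labels[i] for i in predict("Какой замечательный день!")])
```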
## Dataset used for model training
**[RuTweetCorp](https://study.mokoron.com/)**
> Rubtsova Yu. Automatic construction and analysis of a corpus of short texts (microblog posts) for the task of developing and training a sentiment classifier // Knowledge Engineering and Semantic Web Technologies. – 2012. – Vol. 1. – pp. 109–116.
|
ketong3906/opus-mt-en-zh-finetuned-eng-to-chn
|
ketong3906
| 2023-07-06T09:53:13Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-07-06T09:50:14Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: opus-mt-en-zh-finetuned-eng-to-chn
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-en-zh-finetuned-eng-to-chn
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-zh](https://huggingface.co/Helsinki-NLP/opus-mt-en-zh) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| No log | 1.0 | 1 | 6.2769 | 0.8101 | 73.625 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
ddmlproject/cassianatuzzi
|
ddmlproject
| 2023-07-06T09:48:26Z | 30 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-07-06T09:44:16Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### cassianatuzzi Dreambooth model trained by ddmlproject with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:

.jpg)
.jpg)
.jpg)
.jpeg)
.jpg)
.jpg)
.jpg)
.jpeg)
.jpg)
.jpg)
.jpg)
|
arham061/codeparrot-ds
|
arham061
| 2023-07-06T09:47:41Z | 127 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-06T09:36:58Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: codeparrot-ds
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# codeparrot-ds
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
nolanaatama/mrdcrvcv2400pchscrckdfl
|
nolanaatama
| 2023-07-06T09:40:59Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-06T09:37:34Z |
---
license: creativeml-openrail-m
---
|
RogerB/KinyaBERT-small-finetuned-kintweetsB
|
RogerB
| 2023-07-06T09:33:58Z | 115 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-07-06T09:26:55Z |
---
tags:
- generated_from_trainer
model-index:
- name: KinyaBERT-small-finetuned-kintweetsB
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# KinyaBERT-small-finetuned-kintweetsB
This model is a fine-tuned version of [jean-paul/KinyaBERT-small](https://huggingface.co/jean-paul/KinyaBERT-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.8000
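A minimal usage sketch with the `transformers` fill-mask pipeline (the input is only a placeholder; supply a Kinyarwanda sentence containing the standard BERT `[MASK]` token):
```python
from transformers import pipeline

# Masked-token prediction with the fine-tuned KinyaBERT checkpoint.
fill = pipeline("fill-mask", model="RogerB/KinyaBERT-small-finetuned-kintweetsB")
print(fill("Your Kinyarwanda sentence with a [MASK] token."))
```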
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 4.3312 | 1.0 | 900 | 3.9289 |
| 4.0017 | 2.0 | 1800 | 3.8163 |
| 3.8861 | 3.0 | 2700 | 3.7473 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
happyduck/qlora-koalpaca-polyglot-12.8b-50step
|
happyduck
| 2023-07-06T09:32:52Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-06T09:32:46Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.4.0.dev0
|
irfan62622/q-FrozenLake-v1-4x4-noSlippery
|
irfan62622
| 2023-07-06T09:29:01Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-06T09:28:58Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="irfan62622/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
TheBloke/CAMEL-33B-Combined-Data-SuperHOT-8K-GGML
|
TheBloke
| 2023-07-06T09:26:48Z | 0 | 4 | null |
[
"license:other",
"region:us"
] | null | 2023-07-06T09:14:38Z |
---
inference: false
license: other
---
<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<!-- header end -->
# CAMEL AI's CAMEL 33B Combined Data GGML
These files are GGML format model files for [CAMEL AI's CAMEL 33B Combined Data](https://huggingface.co/camel-ai/CAMEL-33B-Combined-Data).
These are SuperHOT GGMLs with an increased context length. SuperHOT is a new system that employs RoPE to expand context beyond what was originally possible for a model. It was discovered and developed by [kaiokendev](https://huggingface.co/kaiokendev).
In order to use the increased context length, you can presently use:
* [KoboldCpp](https://github.com/LostRuins/koboldcpp) - [release 1.33](https://github.com/LostRuins/koboldcpp/releases/tag/v1.33) or later.
Support is also expected to come to llama.cpp, however work is still being done to find the optimal implementation.
To use the increased context with KoboldCpp, simply use `--contextsize` to set the desired context, eg `--contextsize 4096` or `--contextsize 8192`.
**NOTE**: Increased context length is an area seeing rapid developments and improvements. It is quite possible that these models may be superseded by new developments in the coming days. If that's the case, I will remove them, or update this README as appropriate.
## Repositories available
* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/CAMEL-33B-Combined-Data-SuperHOT-8K-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU inference](https://huggingface.co/TheBloke/CAMEL-33B-Combined-Data-SuperHOT-8K-GGML)
* [Unquantised SuperHOT fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/CAMEL-33B-Combined-Data-SuperHOT-8K-fp16)
* [Unquantised base fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/camel-ai/CAMEL-33B-Combined-Data)
<!-- compatibility_ggml start -->
## Compatibility
These GGMLs will work with any llama.cpp-compatible GGML client that supports k-quants.
However the increased context length won't work without specific support. See the note in the introduction for details on using increased context.
## Explanation of the new k-quant methods
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
* GGML_TYPE_Q8_K - "type-0" 8-bit quantization. Only used for quantizing intermediate results. The difference to the existing Q8_0 is that the block size is 256. All 2-6 bit dot products are implemented for this quantization type.
Refer to the Provided Files table below to see what files use which methods, and how.
<!-- compatibility_ggml end -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| CAMEL-33B-combined-data-superhot-8k.ggmlv3.q2_K.bin | q2_K | 2 | 13.71 GB | 16.21 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.vw and feed_forward.w2 tensors, GGML_TYPE_Q2_K for the other tensors. |
| CAMEL-33B-combined-data-superhot-8k.ggmlv3.q3_K_L.bin | q3_K_L | 3 | 17.28 GB | 19.78 GB | New k-quant method. Uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| CAMEL-33B-combined-data-superhot-8k.ggmlv3.q3_K_M.bin | q3_K_M | 3 | 15.72 GB | 18.22 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| CAMEL-33B-combined-data-superhot-8k.ggmlv3.q3_K_S.bin | q3_K_S | 3 | 14.06 GB | 16.56 GB | New k-quant method. Uses GGML_TYPE_Q3_K for all tensors |
| CAMEL-33B-combined-data-superhot-8k.ggmlv3.q4_K_M.bin | q4_K_M | 4 | 19.62 GB | 22.12 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K |
| CAMEL-33B-combined-data-superhot-8k.ggmlv3.q4_K_S.bin | q4_K_S | 4 | 18.36 GB | 20.86 GB | New k-quant method. Uses GGML_TYPE_Q4_K for all tensors |
| CAMEL-33B-combined-data-superhot-8k.ggmlv3.q5_K_M.bin | q5_K_M | 5 | 23.05 GB | 25.55 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K |
| CAMEL-33B-combined-data-superhot-8k.ggmlv3.q5_K_S.bin | q5_K_S | 5 | 22.40 GB | 24.90 GB | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors |
| CAMEL-33B-combined-data-superhot-8k.ggmlv3.q6_K.bin | q6_K | 6 | 26.69 GB | 29.19 GB | New k-quant method. Uses GGML_TYPE_Q8_K - 6-bit quantization - for all tensors |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
## How to run in `koboldcpp`
On Linux I use the following command line to launch the KoboldCpp UI with CUDA acceleration and a context size of 4096:
```
python ./koboldcpp.py --stream --unbantokens --threads 8 --usecublas --contextsize 4096 --gpulayers 100 CAMEL-33B-combined-data-superhot-8k.ggmlv3.q4_K_M.bin
```
Change `--gpulayers 100` to the number of layers you want/are able to offload to the GPU. Remove it if you don't have GPU acceleration.
For OpenCL acceleration, change `--usecublas` to `--useclblast 0 0`. You may need to change the second `0` to `1` if you have both an iGPU and a discrete GPU.
<!-- footer start -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov.
**Patreon special mentions**: zynix, ya boyyy, Trenton Dambrowitz, Imad Khwaja, Alps Aficionado, chris gileta, John Detwiler, Willem Michiel, RoA, Mano Prime, Rainer Wilmers, Fred von Graf, Matthew Berman, Ghost , Nathan LeClaire, Iucharbius , Ai Maven, Illia Dulskyi, Joseph William Delisle, Space Cruiser, Lone Striker, Karl Bernard, Eugene Pentland, Greatston Gnanesh, Jonathan Leane, Randy H, Pierre Kircher, Willian Hasse, Stephen Murray, Alex , terasurfer , Edmond Seymore, Oscar Rangel, Luke Pendergrass, Asp the Wyvern, Junyu Yang, David Flickinger, Luke, Spiking Neurons AB, subjectnull, Pyrater, Nikolai Manek, senxiiz, Ajan Kanaga, Johann-Peter Hartmann, Artur Olbinski, Kevin Schuppel, Derek Yates, Kalila, K, Talal Aujan, Khalefa Al-Ahmad, Gabriel Puliatti, John Villwock, WelcomeToTheClub, Daniel P. Andersen, Preetika Verma, Deep Realms, Fen Risland, trip7s trip, webtim, Sean Connelly, Michael Levine, Chris McCloskey, biorpg, vamX, Viktor Bowallius, Cory Kujawski.
Thank you to all my generous patrons and donaters!
<!-- footer end -->
# Original model card: Kaio Ken's SuperHOT 8K
### SuperHOT Prototype 2 w/ 8K Context
This is a second prototype of SuperHOT, this time 30B with 8K context and no RLHF, using the same technique described in [the github blog](https://kaiokendev.github.io/til#extending-context-to-8k).
Tests have shown that the model does indeed leverage the extended context at 8K.
You will need to **use either the monkeypatch** or, if you are already using the monkeypatch, **change the scaling factor to 0.25 and the maximum sequence length to 8192**
#### Looking for Merged & Quantized Models?
- 30B 4-bit CUDA: [tmpupload/superhot-30b-8k-4bit-safetensors](https://huggingface.co/tmpupload/superhot-30b-8k-4bit-safetensors)
- 30B 4-bit CUDA 128g: [tmpupload/superhot-30b-8k-4bit-128g-safetensors](https://huggingface.co/tmpupload/superhot-30b-8k-4bit-128g-safetensors)
#### Training Details
I trained the LoRA with the following configuration:
- 1200 samples (~400 samples over 2048 sequence length)
- learning rate of 3e-4
- 3 epochs
- The exported modules are:
- q_proj
- k_proj
- v_proj
- o_proj
- no bias
- Rank = 4
- Alpha = 8
- no dropout
- weight decay of 0.1
- AdamW beta1 of 0.9 and beta2 0.99, epsilon of 1e-5
- Trained on 4-bit base model
# Original model card: CAMEL AI's CAMEL 33B Combined Data
CAMEL-33B-Combined-Data is a chat large language model obtained by finetuning LLaMA-33B model on a total of 229K conversations collected through our CAMEL framework, 100K English public conversations from ShareGPT that can be found here, and 52K instructions from Alpaca dataset that can be found here. We evaluate our model offline using EleutherAI's language model evaluation harness used by Huggingface's Open LLM Benchmark. CAMEL-33B scores an average of 64.2.
Regarding the prompt format, we follow the same prompt as LMSYS's [FastChat](https://github.com/lm-sys/FastChat/tree/main) Vicuna-13B-1.1 conversation template. It assumes a conversation between a user and an AI assistant separated by a `</s>` at the end of every role message. More details can be found [here](https://github.com/lm-sys/FastChat/blob/daa2b9abe20597ebf34dc5df164d450456610c74/fastchat/conversation.py#LL247C1-L247C1).
|
squeeze-ai-lab/sq-opt-13b-w3-s0
|
squeeze-ai-lab
| 2023-07-06T09:25:37Z | 0 | 0 | null |
[
"arxiv:2306.07629",
"arxiv:2205.01068",
"region:us"
] | null | 2023-07-06T08:38:24Z |
**SqueezeLLM** is a post-training quantization framework that incorporates a new method called Dense-and-Sparse Quantization to enable efficient LLM serving.
**TLDR:** Deploying LLMs is difficult due to their large memory size. This can be addressed with reduced precision quantization.
But a naive method hurts performance. We address this with a new Dense-and-Sparse Quantization method.
Dense-and-Sparse splits weight matrices into two components: a dense component that can be heavily quantized without affecting model performance,
as well as a sparse part that preserves sensitive and outlier parts of the weight matrices. With this approach,
we are able to serve larger models with a smaller memory footprint, the same latency, and yet higher accuracy and quality.
For more details please check out our [paper](https://arxiv.org/pdf/2306.07629.pdf).
## Model description
3-bit quantized OPT 13B model using SqueezeLLM. More details can be found in the [paper](https://arxiv.org/pdf/2306.07629.pdf).
* **Base Model:** [OPT 13B](https://arxiv.org/abs/2205.01068)
* **Bitwidth:** 3-bit
* **Sparsity Level:** 0% (dense-only)
## Links
* **Paper**: [https://arxiv.org/pdf/2306.07629.pdf](https://arxiv.org/pdf/2306.07629.pdf)
* **Code**: [https://github.com/SqueezeAILab/SqueezeLLM](https://github.com/SqueezeAILab/SqueezeLLM)
---
license: other
---
|
ireneli1024/bigbird-pegasus-large-pubmed-plos-finetuned
|
ireneli1024
| 2023-07-06T09:18:37Z | 88 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bigbird_pegasus",
"text2text-generation",
"text-generation-inference",
"en",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-07-05T05:58:53Z |
---
license: other
language:
- en
metrics:
- rouge
tags:
- text-generation-inference
---
This is a fine-tuned model based on the [google/bigbird-pegasus-large-pubmed](https://huggingface.co/google/bigbird-pegasus-large-pubmed) model.
The data is from BioLaySumm 2023 [shared task 1](https://biolaysumm.org/#data).
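A minimal usage sketch with the `transformers` summarization pipeline (the abstract below is a made-up placeholder):
```python
from transformers import pipeline

# Lay summarisation of a biomedical abstract with the fine-tuned BigBird-Pegasus checkpoint.
summarizer = pipeline(
    "summarization",
    model="ireneli1024/bigbird-pegasus-large-pubmed-plos-finetuned",
)
abstract = "Alzheimer's disease is a progressive neurodegenerative disorder that affects memory and cognition."
print(summarizer(abstract, max_length=128, min_length=32)[0]["summary_text"])
```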
|
zblaaa/t5-base-finetuned-ner_docred_symbole
|
zblaaa
| 2023-07-06T09:18:16Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-07-06T07:45:19Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: t5-base-finetuned-ner_docred_symbole
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-base-finetuned-ner_docred_symbole
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.30.0.dev0
- Pytorch 2.1.0.dev20230611+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Sekiraw/space_invaders
|
Sekiraw
| 2023-07-06T09:16:19Z | 2 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-05T12:58:30Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 251.50 +/- 28.46
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Sekiraw -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Sekiraw -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga Sekiraw
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 200000),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
megagonlabs/pilota_scud2query
|
megagonlabs
| 2023-07-06T09:12:07Z | 0 | 0 | null |
[
"t5",
"text2text-generation",
"pilota",
"ja",
"license:apache-2.0",
"region:us"
] |
text2text-generation
| 2023-06-27T09:20:44Z |
---
language: ja
tags:
- t5
- text2text-generation
- pilota
license: apache-2.0
---
# Pilota model for scud2query
A model for [Pilota](https://github.com/megagonlabs/pilota) trained with <https://github.com/megagonlabs/scud2query>.
- ``scud``
- Fine-tuned model of [t5-base-japanese-web (with Byte-fallback, 8K)](https://huggingface.co/megagonlabs/t5-base-japanese-web-8k)
- The original model is distributed in [the Apache License 2.0](https://www.apache.org/licenses/LICENSE-2.0)
- ``scorer``
- Fine-tuned model of [LINE DistilBERT Japanese](https://huggingface.co/line-corporation/line-distilbert-base-japanese)
- The original model is distributed in [the Apache License 2.0](https://www.apache.org/licenses/LICENSE-2.0)
## Usage
1. Install [Pilota](https://github.com/megagonlabs/pilota)
2. Prepare inputs
- Command
```bash
echo -e '部屋に冷蔵庫があると良い。レンタカーサービスがあるホテルを【customer】が希望する。' | python -m pilota.convert.plain2request | tee input.jsonl
```
- Output
```json
{"context":null,"utterance":"部屋に冷蔵庫があると良い。レンタカーサービスがあるホテルを【customer】が希望する。","sentences":null,"meta":{}}
```
3. Feed it to Pilota
- Command
```console
pilota -m megagonlabs/pilota_scud2query --batch_size 1 --outlen 60 --nbest 1 --beam 5 < input.jsonl
```
- Output (Formatted by ``jq .``)
```json
[
{
"scuds_nbest": [
[
"部屋に冷蔵庫がある。"
]
],
"original_ranks": [
0
],
"scores": [
0.9769772589206696
],
"scores_detail": [
{
"OK": 0.9232575297355652,
"incorrect_none": 0.0034886503126472235,
"lack": 0.023772092536091805,
"limited": 0.013821585103869438,
"untruth": 0.04332486167550087
}
],
"sentence": "部屋に冷蔵庫があると良い。"
},
{
"scuds_nbest": [
[
"レンタカーサービスがあるホテルだ。"
]
],
"original_ranks": [
0
],
"scores": [
0.9876023113727569
],
"scores_detail": [
{
"OK": 0.9586743712425232,
"incorrect_none": 0.004059707745909691,
"lack": 0.0024317132774740458,
"limited": 0.007630097679793835,
"untruth": 0.04025880992412567
}
],
"sentence": "レンタカーサービスがあるホテルを【customer】が希望する。"
}
]
```
## License
Apache License 2.0
|
Abzu/mpt-30b-instruct-q8
|
Abzu
| 2023-07-06T09:11:11Z | 20 | 5 |
transformers
|
[
"transformers",
"safetensors",
"mpt",
"text-generation",
"Composer",
"MosaicML",
"llm-foundry",
"custom_code",
"arxiv:2205.14135",
"arxiv:2108.12409",
"license:cc-by-sa-3.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"region:us"
] |
text-generation
| 2023-06-30T07:59:31Z |
---
license: cc-by-sa-3.0
datasets:
- competition_math
- conceptofmind/cot_submix_original/cot_gsm8k
- knkarthick/dialogsum
- mosaicml/dolly_hhrlhf
- duorc
- tau/scrolls/qasper
- emozilla/quality
- scrolls/summ_screen_fd
- spider
tags:
- Composer
- MosaicML
- llm-foundry
inference: false
---
# MosaicML's MPT-30B-Instruct 8-bit
These files are .safetensors format model files for [MosaicML's MPT-30B-Instruct](https://huggingface.co/mosaicml/mpt-30b-instruct).
## How to convert
```python
import time

import torch
import transformers

# Load the model
name = 'mosaicml/mpt-30b-instruct'
config = transformers.AutoConfig.from_pretrained(name, trust_remote_code=True)
config.attn_config['attn_impl'] = 'triton' # change this to use triton-based FlashAttention
config.init_device = 'cuda:0' # For fast initialization directly on GPU!
start_time = time.time()
model = transformers.AutoModelForCausalLM.from_pretrained(
name,
config=config,
torch_dtype=torch.bfloat16, # Load model weights in bfloat16
trust_remote_code=True,
load_in_8bit=True
)
# Filter the non-tensor items
def filter_dict(dictionary):
filtered_dict = {key: value for key, value in dictionary.items() if "weight_format" not in key}
return filtered_dict
new_state_dict = filter_dict(model.state_dict())
# Save the 8-bit model
model.save_pretrained('mpt-30b-instruct-8bits', state_dict=new_state_dict, safe_serialization=True)
```
## How to use
```python
import transformers

# Load the 8-bit model saved in the previous step
model = transformers.AutoModelForCausalLM.from_pretrained(
'mpt-30b-instruct-8bits',
trust_remote_code=True,
)
```
## Prompt template
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction
{prompt}
### Response
```
# MPT-30B-Instruct
MPT-30B-Instruct is a model for short-form instruction following.
It is built by finetuning [MPT-30B](https://huggingface.co/mosaicml/mpt-30b) on [Dolly HHRLHF](https://huggingface.co/datasets/mosaicml/dolly_hhrlhf) derived from the [Databricks Dolly-15k](https://huggingface.co/datasets/databricks/databricks-dolly-15k) and the [Anthropic Helpful and Harmless (HH-RLHF)](https://huggingface.co/datasets/Anthropic/hh-rlhf) datasets. It is also trained on [Competition Math](https://huggingface.co/datasets/competition_math), [Duorc](https://huggingface.co/datasets/duorc), [CoT GSM8k](https://huggingface.co/datasets/conceptofmind/cot_submix_original), [Qasper](https://huggingface.co/datasets/allenai/qasper), [Quality](https://huggingface.co/datasets/emozilla/quality), [Summ Screen FD](https://huggingface.co/datasets/tau/scrolls) and [Spider](https://huggingface.co/datasets/spider).
* License: _CC-By-SA-3.0_
This model was trained by [MosaicML](https://www.mosaicml.com) and follows a modified decoder-only transformer architecture.
## Model Date
June 22, 2023
## Model License
CC-By-SA-3.0
## Documentation
* [Blog post: Raising the bar for open-source foundation models](https://www.mosaicml.com/blog/mpt-30b)
* [Codebase (mosaicml/llm-foundry repo)](https://github.com/mosaicml/llm-foundry/)
* Questions: Feel free to contact us via the [MosaicML Community Slack](https://mosaicml.me/slack)!
### Example Question/Instruction
**Bespokenizer46**
> I need to convince my friend, Phyllis, that she should train a custom LLM for her Fortune 500 company using the MosaicML Platform.
> Please write an email that explains why MosaicML's emphasis on cutting edge methodology, data privacy, and efficiency are so important.
> End the email with a friendly inquiry about Phyllis's family.
**MPT-30B-Instruct**:
> Phyllis -
> I know you're always looking for ways to improve efficiency at work and I think I've found a solution that you'll be very interested in.
> LLMs are only as good as the data they're trained on and the training process can be very time consuming and resource intensive. MosaicML uses cutting edge methodology to efficiently train high quality models without compromising data privacy.
> They also provide tools to easily connect to and use the model in your daily workflow.
> I think you'd really enjoy speaking with their founder, we can set up a call if you're interested.
> Also, I know it's been a tough year for your family, how are things?
> Best,
> Your Friend
## How to Use
Note: This model requires that `trust_remote_code=True` be passed to the `from_pretrained` method. This is because we use a custom model architecture that is not yet part of the `transformers` package.
It includes options for many training efficiency features such as [FlashAttention (Dao et al. 2022)](https://arxiv.org/pdf/2205.14135.pdf), [ALiBi](https://arxiv.org/abs/2108.12409), QK LayerNorm, and more.
```python
import transformers
model = transformers.AutoModelForCausalLM.from_pretrained(
'mosaicml/mpt-30b-instruct',
trust_remote_code=True
)
```
To use the optimized [triton implementation](https://github.com/openai/triton) of FlashAttention, you can load the model on GPU (`cuda:0`) with `attn_impl='triton'` and with `bfloat16` precision:
```python
import torch
import transformers
name = 'mosaicml/mpt-30b-instruct'
config = transformers.AutoConfig.from_pretrained(name, trust_remote_code=True)
config.attn_config['attn_impl'] = 'triton' # change this to use triton-based FlashAttention
config.init_device = 'cuda:0' # For fast initialization directly on GPU!
model = transformers.AutoModelForCausalLM.from_pretrained(
name,
config=config,
torch_dtype=torch.bfloat16, # Load model weights in bfloat16
trust_remote_code=True
)
```
The model was trained initially on a sequence length of 2048. An additional pre-training phase was included for sequence length adaptation to 8192. However, ALiBi further enables users to increase the maximum sequence length during finetuning and/or inference. For example:
```python
import transformers
name = 'mosaicml/mpt-30b-instruct'
config = transformers.AutoConfig.from_pretrained(name, trust_remote_code=True)
config.max_seq_len = 16384 # (input + output) tokens can now be up to 16384
model = transformers.AutoModelForCausalLM.from_pretrained(
name,
config=config,
trust_remote_code=True
)
```
This model was trained with the MPT-30B tokenizer which is based on the [EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) tokenizer and includes additional padding and eos tokens.
```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained('mosaicml/mpt-30b')
```
The model can then be used, for example, within a text-generation pipeline.
Note: when running Torch modules in lower precision, it is best practice to use the [torch.autocast context manager](https://pytorch.org/docs/stable/amp.html).
```python
from transformers import pipeline
with torch.autocast('cuda', dtype=torch.bfloat16):
inputs = tokenizer('Here is a recipe for vegan banana bread:\n', return_tensors="pt").to('cuda')
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
# or using the HF pipeline
pipe = pipeline('text-generation', model=model, tokenizer=tokenizer, device='cuda:0')
with torch.autocast('cuda', dtype=torch.bfloat16):
print(
pipe('Here is a recipe for vegan banana bread:\n',
max_new_tokens=100,
do_sample=True,
use_cache=True))
```
### Formatting
This model was trained on data formatted as follows:
```python
def format_prompt(instruction):
template = "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n###Instruction\n{instruction}\n\n### Response\n"
return template.format(instruction=instruction)
example = "Tell me a funny joke.\nDon't make it too funny though."
fmt_ex = format_prompt(instruction=example)
```
In the above example, `fmt_ex` is ready to be tokenized and sent through the model.
## Model Description
The architecture is a modification of a standard decoder-only transformer.
The model has been modified from a standard transformer in the following ways:
* It uses [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf)
* It uses [ALiBi (Attention with Linear Biases)](https://arxiv.org/abs/2108.12409) and does not use positional embeddings
* It does not use biases
| Hyperparameter | Value |
|----------------|-------|
| n_parameters | 29.95B |
| n_layers | 48 |
| n_heads | 64 |
| d_model | 7168 |
| vocab size | 50432 |
| sequence length | 8192 |
## Data Mix
The model was trained on the following data mix:
| Data Source | Number of Tokens in Source | Proportion |
|-------------|----------------------------|------------|
| competition_math | 1.6 M | 3.66% |
| cot_gsm8k | 3.36 M | 7.67% |
| dialogsum | 0.1 M | 0.23% |
| dolly_hhrlhf | 5.89 M | 13.43% |
| duorc | 7.8 M | 17.80% |
| qasper | 8.72 M | 19.90% |
| quality | 11.29 M | 25.78% |
| scrolls/summ_screen_fd | 4.97 M | 11.33% |
| spider | 0.089 M | 0.20% |
## PreTraining Data
For more details on the pretraining process, see [MPT-30B](https://huggingface.co/mosaicml/mpt-30b).
The data was tokenized using the [EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) tokenizer.
### Training Configuration
This model was trained on 72 A100 40GB GPUs for 8 hours using the [MosaicML Platform](https://www.mosaicml.com/platform).
The model was trained with sharded data parallelism using [FSDP](https://pytorch.org/docs/stable/fsdp.html) and used the AdamW optimizer.
## Limitations and Biases
_The following language is modified from [EleutherAI's GPT-NeoX-20B](https://huggingface.co/EleutherAI/gpt-neox-20b)_
MPT-30B-Instruct can produce factually incorrect output, and should not be relied on to produce factually accurate information.
MPT-30B-Instruct was trained on various public datasets.
While great efforts have been taken to clean the pretraining data, it is possible that this model could generate lewd, biased or otherwise offensive outputs.
## Acknowledgements
This model was finetuned by Sam Havens, Alex Trott, and the MosaicML NLP team
## MosaicML Platform
If you're interested in [training](https://www.mosaicml.com/training) and [deploying](https://www.mosaicml.com/inference) your own MPT or LLMs on the MosaicML Platform, [sign up here](https://forms.mosaicml.com/demo?utm_source=huggingface&utm_medium=referral&utm_campaign=mpt-30b).
## Disclaimer
The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes.
## Citation
Please cite this model using the following format:
```
@online{MosaicML2023Introducing,
author = {MosaicML NLP Team},
title = {Introducing MPT-30B: Raising the bar
for open-source foundation models},
year = {2023},
url = {www.mosaicml.com/blog/mpt-30b},
note = {Accessed: 2023-06-22},
urldate = {2023-06-22}
}
```
|
squeeze-ai-lab/sq-opt-30b-w4-s0
|
squeeze-ai-lab
| 2023-07-06T09:01:30Z | 0 | 0 | null |
[
"arxiv:2306.07629",
"arxiv:2205.01068",
"region:us"
] | null | 2023-07-06T09:00:24Z |
---
{}
---
**SqueezeLLM** is a post-training quantization framework that incorporates a new method called Dense-and-Sparse Quantization to enable efficient LLM serving.
**TLDR:** Deploying LLMs is difficult due to their large memory size. This can be addressed with reduced precision quantization.
But a naive method hurts performance. We address this with a new Dense-and-Sparse Quantization method.
Dense-and-Sparse splits weight matrices into two components: a dense component that can be heavily quantized without affecting model performance,
as well as a sparse part that preserves sensitive and outlier parts of the weight matrices. With this approach,
we are able to serve larger models with a smaller memory footprint, the same latency, and yet higher accuracy and quality.
For more details please check out our [paper](https://arxiv.org/pdf/2306.07629.pdf).
## Model description
4-bit quantized OPT 30B model using SqueezeLLM. More details can be found in the [paper](https://arxiv.org/pdf/2306.07629.pdf).
* **Base Model:** [OPT 30B](https://arxiv.org/abs/2205.01068)
* **Bitwidth:** 4-bit
* **Sparsity Level:** 0% (dense-only)
## Links
* **Paper**: [https://arxiv.org/pdf/2306.07629.pdf](https://arxiv.org/pdf/2306.07629.pdf)
* **Code**: [https://github.com/SqueezeAILab/SqueezeLLM](https://github.com/SqueezeAILab/SqueezeLLM)
---
license: other
---
|
squeeze-ai-lab/sq-opt-2.7b-w3-s0
|
squeeze-ai-lab
| 2023-07-06T08:58:57Z | 0 | 0 | null |
[
"arxiv:2306.07629",
"arxiv:2205.01068",
"region:us"
] | null | 2023-07-06T08:28:00Z |
**SqueezeLLM** is a post-training quantization framework that incorporates a new method called Dense-and-Sparse Quantization to enable efficient LLM serving.
**TLDR:** Deploying LLMs is difficult due to their large memory size. This can be addressed with reduced precision quantization.
But a naive method hurts performance. We address this with a new Dense-and-Sparse Quantization method.
Dense-and-Sparse splits weight matrices into two components: a dense component that can be heavily quantized without affecting model performance,
and a sparse part that preserves the sensitive and outlier entries of the weight matrices. With this approach,
we are able to serve larger models with a smaller memory footprint, the same latency, and yet higher accuracy and quality.
For more details please check out our [paper](https://arxiv.org/pdf/2306.07629.pdf).
## Model description
3-bit quantized OPT 2.7B model using SqueezeLLM. More details can be found in the [paper](https://arxiv.org/pdf/2306.07629.pdf).
* **Base Model:** [OPT 2.7B](https://arxiv.org/abs/2205.01068)
* **Bitwidth:** 3-bit
* **Sparsity Level:** 0% (dense-only)
## Links
* **Paper**: [https://arxiv.org/pdf/2306.07629.pdf](https://arxiv.org/pdf/2306.07629.pdf)
* **Code**: [https://github.com/SqueezeAILab/SqueezeLLM](https://github.com/SqueezeAILab/SqueezeLLM)
---
license: other
---
|
squeeze-ai-lab/sq-opt-1.3b-w3-s0
|
squeeze-ai-lab
| 2023-07-06T08:58:23Z | 0 | 0 | null |
[
"arxiv:2306.07629",
"arxiv:2205.01068",
"region:us"
] | null | 2023-07-06T08:27:48Z |
**SqueezeLLM** is a post-training quantization framework that incorporates a new method called Dense-and-Sparse Quantization to enable efficient LLM serving.
**TLDR:** Deploying LLMs is difficult due to their large memory size. This can be addressed with reduced precision quantization.
But a naive method hurts performance. We address this with a new Dense-and-Sparse Quantization method.
Dense-and-Sparse splits weight matrices into two components: a dense component that can be heavily quantized without affecting model performance,
and a sparse part that preserves the sensitive and outlier entries of the weight matrices. With this approach,
we are able to serve larger models with a smaller memory footprint, the same latency, and yet higher accuracy and quality.
For more details please check out our [paper](https://arxiv.org/pdf/2306.07629.pdf).
## Model description
3-bit quantized OPT 1.3B model using SqueezeLLM. More details can be found in the [paper](https://arxiv.org/pdf/2306.07629.pdf).
* **Base Model:** [OPT 1.3B](https://arxiv.org/abs/2205.01068)
* **Bitwidth:** 3-bit
* **Sparsity Level:** 0% (dense-only)
## Links
* **Paper**: [https://arxiv.org/pdf/2306.07629.pdf](https://arxiv.org/pdf/2306.07629.pdf)
* **Code**: [https://github.com/SqueezeAILab/SqueezeLLM](https://github.com/SqueezeAILab/SqueezeLLM)
---
license: other
---
|
Vtmpas/ppo-Huggy
|
Vtmpas
| 2023-07-06T08:54:55Z | 9 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-07-06T08:54:42Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: Vtmpas/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
viceisi/identify-my-cat
|
viceisi
| 2023-07-06T08:54:29Z | 0 | 0 |
fastai
|
[
"fastai",
"region:us"
] | null | 2023-06-28T15:18:19Z |
---
tags:
- fastai
---
# Amazing!
🥳 Congratulations on hosting your fastai model on the Hugging Face Hub!
# Some next steps
1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))!
2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)).
3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)!
Greetings fellow fastlearner 🤝! Don't forget to delete this content from your model card.
---
# Model card
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
|
smaciu/bee-wings-classifier
|
smaciu
| 2023-07-06T08:32:55Z | 0 | 0 |
fastai
|
[
"fastai",
"region:us"
] | null | 2023-06-24T10:25:38Z |
---
tags:
- fastai
---
# Amazing!
🥳 Congratulations on hosting your fastai model on the Hugging Face Hub!
# Some next steps
1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))!
2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)).
3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)!
Greetings fellow fastlearner 🤝! Don't forget to delete this content from your model card.
---
# Model card
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
|
m-aliabbas1/q-FrozenLake-v1-4x4-noSlippery
|
m-aliabbas1
| 2023-07-06T08:31:44Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-06T08:31:42Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="m-aliabbas1/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
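A short rollout sketch to watch the greedy policy, assuming the pickle stores the environment id and Q-table under the `env_id` and `qtable` keys used in the course notebook (adjust the key names if your file differs):
```python
import gymnasium as gym

# Greedy rollout with the loaded Q-table (key names are an assumption, see above).
env = gym.make(model["env_id"], is_slippery=False, render_mode="human")
state, _ = env.reset()
done = False
while not done:
    action = int(model["qtable"][state].argmax())   # pick the highest-value action
    state, reward, terminated, truncated, _ = env.step(action)
    done = terminated or truncated
env.close()
```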
|
fadliaulawi/dummy-model
|
fadliaulawi
| 2023-07-06T08:25:22Z | 59 | 0 |
transformers
|
[
"transformers",
"tf",
"camembert",
"fill-mask",
"generated_from_keras_callback",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-07-06T07:56:53Z |
---
license: mit
tags:
- generated_from_keras_callback
model-index:
- name: dummy-model
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# dummy-model
This model is a fine-tuned version of [camembert-base](https://huggingface.co/camembert-base) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.12.0
- Datasets 2.13.1
- Tokenizers 0.13.3
|
aronmal/Reinforce-CartpoleMLP
|
aronmal
| 2023-07-06T07:53:32Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-06T07:53:23Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-MLP
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 464.00 +/- 91.98
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours, check out Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
Technotech/opt-125m-4bit-128g
|
Technotech
| 2023-07-06T07:51:47Z | 5 | 1 |
transformers
|
[
"transformers",
"opt",
"text-generation",
"en",
"arxiv:2205.01068",
"arxiv:2005.14165",
"license:other",
"autotrain_compatible",
"region:us"
] |
text-generation
| 2023-06-12T08:04:01Z |
---
language: en
inference: false
tags:
- text-generation
- opt
license: other
commercial: false
---
## OPT-125m-4bit-128g
OPT 125M, quantised to 4bit using AutoGPTQ, with groupsize 128g, no act order.
Good for testing AutoGPTQ with a small model download.
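A minimal loading sketch (this assumes a recent `auto-gptq` release; exact argument names can differ between versions, so treat it as a starting point rather than the definitive recipe):
```python
from auto_gptq import AutoGPTQForCausalLM
from transformers import AutoTokenizer

repo = "Technotech/opt-125m-4bit-128g"
tokenizer = AutoTokenizer.from_pretrained(repo)
# Load the 4-bit GPTQ checkpoint onto the first GPU.
model = AutoGPTQForCausalLM.from_quantized(repo, device="cuda:0")

inputs = tokenizer("Hello, my name is", return_tensors="pt").to("cuda:0")
output = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```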
# Original Model Card
# OPT : Open Pre-trained Transformer Language Models
OPT was first introduced in [Open Pre-trained Transformer Language Models](https://arxiv.org/abs/2205.01068) and first released in [metaseq's repository](https://github.com/facebookresearch/metaseq) on May 3rd 2022 by Meta AI.
**Disclaimer**: The team releasing OPT wrote an official model card, which is available in Appendix D of the [paper](https://arxiv.org/pdf/2205.01068.pdf).
Content from **this** model card has been written by the Hugging Face team.
## Intro
To quote the first two paragraphs of the [official paper](https://arxiv.org/abs/2205.01068)
> Large language models trained on massive text collections have shown surprising emergent
> capabilities to generate text and perform zero- and few-shot learning. While in some cases the public
> can interact with these models through paid APIs, full model access is currently limited to only a
> few highly resourced labs. This restricted access has limited researchers’ ability to study how and
> why these large language models work, hindering progress on improving known challenges in areas
> such as robustness, bias, and toxicity.
> We present Open Pretrained Transformers (OPT), a suite of decoder-only pre-trained transformers ranging from 125M
> to 175B parameters, which we aim to fully and responsibly share with interested researchers. We train the OPT models to roughly match
> the performance and sizes of the GPT-3 class of models, while also applying the latest best practices in data
> collection and efficient training. Our aim in developing this suite of OPT models is to enable reproducible and responsible research at scale, and
> to bring more voices to the table in studying the impact of these LLMs. Definitions of risk, harm, bias, and toxicity, etc., should be articulated by the
> collective research community as a whole, which is only possible when models are available for study.
## Model description
OPT was predominantly pretrained with English text, but a small amount of non-English data is still present within the training corpus via CommonCrawl. The model was pretrained using a causal language modeling (CLM) objective.
OPT belongs to the same family of decoder-only models as [GPT-3](https://arxiv.org/abs/2005.14165). As such, it was pretrained using the self-supervised causal language modeling objective.
For evaluation, OPT follows [GPT-3](https://arxiv.org/abs/2005.14165) by using their prompts and overall experimental setup. For more details, please read
the [official paper](https://arxiv.org/abs/2205.01068).
## Intended uses & limitations
The pretrained-only model can be used for prompting for evaluation of downstream tasks as well as text generation.
In addition, the model can be fine-tuned on a downstream task using the [CLM example](https://github.com/huggingface/transformers/tree/main/examples/pytorch/language-modeling). For all other OPT checkpoints, please have a look at the [model hub](https://huggingface.co/models?filter=opt).
### How to use
You can use this model directly with a pipeline for text generation.
```python
>>> from transformers import pipeline
>>> generator = pipeline('text-generation', model="facebook/opt-125m")
>>> generator("Hello, I'm am conscious and")
[{'generated_text': 'Hello, I am conscious and aware of the fact that I am a woman. I am aware of'}]
```
By default, generation is deterministic. In order to use top-k sampling, please set `do_sample` to `True`.
```python
>>> from transformers import pipeline, set_seed
>>> set_seed(32)
>>> generator = pipeline('text-generation', model="facebook/opt-125m", do_sample=True)
>>> generator("Hello, I'm am conscious and")
[{'generated_text': 'Hello, I am conscious and active member of the Khaosan Group, a private, self'}]
```
### Limitations and bias
As mentioned in Meta AI's model card, given that the training data used for this model contains a lot of
unfiltered content from the internet, which is far from neutral, the model is strongly biased:
> Like other large language models for which the diversity (or lack thereof) of training
> data induces downstream impact on the quality of our model, OPT-175B has limitations in terms
> of bias and safety. OPT-175B can also have quality issues in terms of generation diversity and
> hallucination. In general, OPT-175B is not immune from the plethora of issues that plague modern
> large language models.
This bias will also affect all fine-tuned versions of this model.
## Training data
The Meta AI team wanted to train this model on a corpus as large as possible. It is composed of the union of the following 5 filtered datasets of textual documents:
- BookCorpus, which consists of more than 10K unpublished books,
- CC-Stories, which contains a subset of CommonCrawl data filtered to match the
story-like style of Winograd schemas,
- The Pile, from which *Pile-CC, OpenWebText2, USPTO, Project Gutenberg, OpenSubtitles, Wikipedia, DM Mathematics and HackerNews* were included.
- Pushshift.io Reddit dataset that was developed in Baumgartner et al. (2020) and processed in
Roller et al. (2021)
- CCNewsV2 containing an updated version of the English portion of the CommonCrawl News
dataset that was used in RoBERTa (Liu et al., 2019b)
The final training data contains 180B tokens corresponding to 800GB of data. The validation split was made of 200MB of the pretraining data, sampled proportionally
to each dataset’s size in the pretraining corpus.
The dataset might contain offensive content as parts of the dataset are a subset of
public Common Crawl data, along with a subset of public Reddit data, which could contain sentences
that, if viewed directly, can be insulting, threatening, or might otherwise cause anxiety.
### Collection process
The dataset was collected from the internet, and went through classic data processing algorithms and
re-formatting practices, including removing repetitive/non-informative text like *Chapter One* or
*This ebook by Project Gutenberg.*
## Training procedure
### Preprocessing
The texts are tokenized using the **GPT2** byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a
vocabulary size of 50272. The inputs are sequences of 2048 consecutive tokens.
The 175B model was trained on 992 *80GB A100 GPUs*. The training duration was roughly 33 days of continuous training.
### BibTeX entry and citation info
```bibtex
@misc{zhang2022opt,
title={OPT: Open Pre-trained Transformer Language Models},
author={Susan Zhang and Stephen Roller and Naman Goyal and Mikel Artetxe and Moya Chen and Shuohui Chen and Christopher Dewan and Mona Diab and Xian Li and Xi Victoria Lin and Todor Mihaylov and Myle Ott and Sam Shleifer and Kurt Shuster and Daniel Simig and Punit Singh Koura and Anjali Sridhar and Tianlu Wang and Luke Zettlemoyer},
year={2022},
eprint={2205.01068},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
xian79/Reinforce-CartPole-v1
|
xian79
| 2023-07-06T07:51:38Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-06T07:51:27Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours, check out Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
Technotech/RedPajama-Base-3B-4bit-128g
|
Technotech
| 2023-07-06T07:49:49Z | 5 | 0 |
transformers
|
[
"transformers",
"gpt_neox",
"text-generation",
"gptq",
"en",
"dataset:togethercomputer/RedPajama-Data-1T",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-12T09:18:42Z |
---
license: apache-2.0
language:
- en
datasets:
- togethercomputer/RedPajama-Data-1T
tags:
- gptq
---
## RedPajama-Base-3B-4bit-128g
RedPajama 3B, quantised to 4bit with groupsize of 128, no act order.
# Original Model Card
# RedPajama-INCITE-Base-3B-v1
RedPajama-INCITE-Base-3B-v1 was developed by Together and leaders from the open-source AI community including Ontocord.ai, ETH DS3Lab, AAI CERC, Université de Montréal, MILA - Québec AI Institute, Stanford Center for Research on Foundation Models (CRFM), Stanford Hazy Research research group and LAION.
The training was done on 3,072 V100 GPUs provided as part of the INCITE 2023 project on Scalable Foundation Models for Transferrable Generalist AI, awarded to MILA, LAION, and EleutherAI in fall 2022, with support from the Oak Ridge Leadership Computing Facility (OLCF) and INCITE program.
- Base Model: [RedPajama-INCITE-Base-3B-v1](https://huggingface.co/togethercomputer/RedPajama-INCITE-Base-3B-v1)
- Instruction-tuned Version: [RedPajama-INCITE-Instruct-3B-v1](https://huggingface.co/togethercomputer/RedPajama-INCITE-Instruct-3B-v1)
- Chat Version: [RedPajama-INCITE-Chat-3B-v1](https://huggingface.co/togethercomputer/RedPajama-INCITE-Chat-3B-v1)
## Model Details
- **Developed by**: Together Computer.
- **Model type**: Language Model
- **Language(s)**: English
- **License**: Apache 2.0
- **Model Description**: A 2.8B parameter pretrained language model.
# Quick Start
Please note that the model requires `transformers` version >= 4.25.1.
## GPU Inference
This requires a GPU with 8GB memory.
```python
import torch
import transformers
from transformers import AutoTokenizer, AutoModelForCausalLM
MIN_TRANSFORMERS_VERSION = '4.25.1'
# check transformers version
assert transformers.__version__ >= MIN_TRANSFORMERS_VERSION, f'Please upgrade transformers to version {MIN_TRANSFORMERS_VERSION} or higher.'
# init
tokenizer = AutoTokenizer.from_pretrained("togethercomputer/RedPajama-INCITE-Base-3B-v1")
model = AutoModelForCausalLM.from_pretrained("togethercomputer/RedPajama-INCITE-Base-3B-v1", torch_dtype=torch.float16)
model = model.to('cuda:0')
# infer
prompt = "Alan Turing is"
inputs = tokenizer(prompt, return_tensors='pt').to(model.device)
input_length = inputs.input_ids.shape[1]
outputs = model.generate(
**inputs, max_new_tokens=128, do_sample=True, temperature=0.7, top_p=0.7, top_k=50, return_dict_in_generate=True,
)
token = outputs.sequences[0, input_length:]
output_str = tokenizer.decode(token)
print(output_str)
"""
a name that has been synonymous with the computer age since the 1950s. The British mathematician, logician, and cryptanalyst is widely regarded as the father of modern computing. His contributions to the development of the modern computer and the theory of computation have had a profound impact on the world we live in today.
Turing’s contributions to the development of the modern computer were made in the 1940s and 1950s. He is most famous for his work on the Turing machine, a theoretical model of a computing machine that was able to perform all the mathematical operations of a computer. Turing’s work on the...
"""
```
## GPU Inference in Int8
To run inference with int8, please ensure you have installed accelerate and bitsandbytes. You can install them with the following command:
```bash
pip install accelerate
pip install bitsandbytes
```
Then you can run inference with int8 as follows:
```python
import torch
import transformers
from transformers import AutoTokenizer, AutoModelForCausalLM
MIN_TRANSFORMERS_VERSION = '4.25.1'
# check transformers version
assert transformers.__version__ >= MIN_TRANSFORMERS_VERSION, f'Please upgrade transformers to version {MIN_TRANSFORMERS_VERSION} or higher.'
# init
tokenizer = AutoTokenizer.from_pretrained("togethercomputer/RedPajama-INCITE-Base-3B-v1")
model = AutoModelForCausalLM.from_pretrained("togethercomputer/RedPajama-INCITE-Base-3B-v1", device_map='auto', torch_dtype=torch.float16, load_in_8bit=True)
# infer
prompt = "Alan Turing is"
inputs = tokenizer(prompt, return_tensors='pt').to(model.device)
input_length = inputs.input_ids.shape[1]
outputs = model.generate(
**inputs, max_new_tokens=128, do_sample=True, temperature=0.7, top_p=0.7, top_k=50, return_dict_in_generate=True
)
token = outputs.sequences[0, input_length:]
output_str = tokenizer.decode(token)
print(output_str)
"""
the man who cracked the Enigma code during World War II, and who was later convicted of homosexual acts. He was a brilliant mathematician, and a visionary who foresaw the computer age....
"""
```
## CPU Inference
You can run inference on CPU as follows:
```python
import torch
import transformers
from transformers import AutoTokenizer, AutoModelForCausalLM
MIN_TRANSFORMERS_VERSION = '4.25.1'
# check transformers version
assert transformers.__version__ >= MIN_TRANSFORMERS_VERSION, f'Please upgrade transformers to version {MIN_TRANSFORMERS_VERSION} or higher.'
# init
tokenizer = AutoTokenizer.from_pretrained("togethercomputer/RedPajama-INCITE-Base-3B-v1")
model = AutoModelForCausalLM.from_pretrained("togethercomputer/RedPajama-INCITE-Base-3B-v1", torch_dtype=torch.bfloat16)
# infer
prompt = "Alan Turing is"
inputs = tokenizer(prompt, return_tensors='pt').to(model.device)
input_length = inputs.input_ids.shape[1]
outputs = model.generate(
**inputs, max_new_tokens=128, do_sample=True, temperature=0.7, top_p=0.7, top_k=50, return_dict_in_generate=True
)
token = outputs.sequences[0, input_length:]
output_str = tokenizer.decode(token)
print(output_str)
"""
a name that is synonymous with the history of computer science. As the man who invented the Turing machine, the mathematical model that defines the limits of what can be computed, Turing is credited with the invention of the modern computer. Turing was also a mathematician and logician, and his work in these fields led to the development of the field of artificial intelligence...
"""
```
Please note that since `LayerNormKernelImpl` is not implemented in fp16 for CPU, we use `bfloat16` for CPU inference.
# Uses
Excluded uses are described below.
### Misuse, Malicious Use, and Out-of-Scope Use
It is the responsibility of the end user to ensure that the model is used in a responsible and ethical manner.
#### Out-of-Scope Use
`RedPajama-INCITE-Base-3B-v1` is a language model and may not perform well for other use cases outside of its intended scope.
For example, it may not be suitable for use in safety-critical applications or for making decisions that have a significant impact on individuals or society.
It is important to consider the limitations of the model and to only use it for its intended purpose.
#### Misuse and Malicious Use
`RedPajama-INCITE-Base-3B-v1` is designed for language modeling.
Misuse of the model, such as using it to engage in illegal or unethical activities, is strictly prohibited and goes against the principles of the project.
Using the model to generate content that is cruel to individuals is a misuse of this model. This includes, but is not limited to:
- Generating fake news, misinformation, or propaganda
- Promoting hate speech, discrimination, or violence against individuals or groups
- Impersonating individuals or organizations without their consent
- Engaging in cyberbullying or harassment
- Defamatory content
- Spamming or scamming
- Sharing confidential or sensitive information without proper authorization
- Violating the terms of use of the model or the data used to train it
- Creating automated bots for malicious purposes such as spreading malware, phishing scams, or spamming
## Limitations
`RedPajama-INCITE-Base-3B-v1`, like other language models, has limitations that should be taken into consideration.
For example, the model may not always provide accurate or relevant answers, particularly for questions that are complex, ambiguous, or outside of its training data.
We therefore welcome contributions from individuals and organizations, and encourage collaboration towards creating a more robust and inclusive chatbot.
## Training
**Training Data**
Please refer to [togethercomputer/RedPajama-Data-1T](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T)
**Training Procedure**
- **Hardware:** 256 nodes of 6xV100 (IBM Power9), on the OLCF Summit cluster
- **Optimizer:** Apex FusedAdam
- **Parallelism:** Pipeline parallel 6, tensor parallel 2
- **Gradient Accumulations**: 8 (global batch size 4M tokens)
- **Num of Tokens:** 800B Tokens
- **Learning rate:** 0.00016
## Benchmark
Please refer to our [blog post](https://together.xyz) for benchmark results.
## Community
Join us on [Together Discord](https://discord.gg/6ZVDU8tTD4)
|
zhundred/ppo-Huggy
|
zhundred
| 2023-07-06T07:46:14Z | 33 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-07-06T07:42:52Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: zhundred/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
atrytone/MIReAD-Neuro-Contrastive
|
atrytone
| 2023-07-06T07:40:38Z | 1 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2023-07-06T07:38:47Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 480 with parameters:
```
{'batch_size': 4, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.TripletLoss.TripletLoss` with parameters:
```
{'distance_metric': 'TripletDistanceMetric.EUCLIDEAN', 'triplet_margin': 5}
```
Parameters of the fit()-Method:
```
{
"epochs": 3,
"evaluation_steps": 100,
"evaluator": "sentence_transformers.evaluation.TripletEvaluator.TripletEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 10000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
liyingjian/ppo-LunarLander-v2
|
liyingjian
| 2023-07-06T07:38:40Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-06T06:36:49Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 258.29 +/- 21.11
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
Vtmpas/ppo-LunarLander-v2
|
Vtmpas
| 2023-07-06T07:36:16Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-06T07:35:49Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 240.43 +/- 16.07
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
Word2vec/nlpl_207
|
Word2vec
| 2023-07-06T07:30:10Z | 0 | 0 | null |
[
"word2vec",
"pol",
"dataset:Polish_CommonCrawl_Dump_of_December_2019",
"license:cc-by-4.0",
"region:us"
] | null | 2023-07-05T09:08:03Z |
---
language: pol
license: cc-by-4.0
tags:
- word2vec
datasets: Polish_CommonCrawl_Dump_of_December_2019
---
## Information
A word2vec model trained by Krzysztof Wolk (kwolk@pja.edu.pl) on a vocabulary of size 35193029 corresponding to 32565035188 tokens from the dataset `Polish_CommonCrawl_Dump_of_December_2019`.
The model is trained with the following properties: no lemmatization and postag, using the Gensim Continuous Bag-of-Words algorithm with a window of 5 and a dimension of 100.
## How to use?
```
from gensim.models import KeyedVectors
from huggingface_hub import hf_hub_download
model = KeyedVectors.load_word2vec_format(hf_hub_download(repo_id="Word2vec/nlpl_207", filename="model.bin"), binary=True, unicode_errors="ignore")
```
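Once loaded, the `KeyedVectors` object can be queried directly. A small sketch (the query words below are only placeholders and must be replaced with tokens that actually occur in this model's Polish vocabulary):
```
print(model.most_similar("kot", topn=5))   # nearest neighbours of an example word
print(model.similarity("kot", "pies"))     # cosine similarity between two example words
```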
## Citation
Fares, Murhaf; Kutuzov, Andrei; Oepen, Stephan & Velldal, Erik (2017). Word vectors, reuse, and replicability: Towards a community repository of large-text resources, In Jörg Tiedemann (ed.), Proceedings of the 21st Nordic Conference on Computational Linguistics, NoDaLiDa, 22-24 May 2017. Linköping University Electronic Press. ISBN 978-91-7685-601-7
This archive is part of the NLPL Word Vectors Repository (http://vectors.nlpl.eu/repository/), version 2.0, published on Friday, December 27, 2019.
Please see the file 'meta.json' in this archive and the overall repository metadata file http://vectors.nlpl.eu/repository/20.json for additional information.
The life-time identifier for this model is: http://vectors.nlpl.eu/repository/20/207.zip
|
Word2vec/nlpl_206
|
Word2vec
| 2023-07-06T07:29:52Z | 0 | 0 | null |
[
"word2vec",
"pol",
"dataset:Polish_CommonCrawl_Dump_of_December_2019",
"license:cc-by-4.0",
"region:us"
] | null | 2023-07-05T08:09:12Z |
---
language: pol
license: cc-by-4.0
tags:
- word2vec
datasets: Polish_CommonCrawl_Dump_of_December_2019
---
## Information
A word2vec model trained by Krzysztof Wolk (kwolk@pja.edu.pl) on a vocabulary of size 4885806 corresponding to 32565035188 tokens from the dataset `Polish_CommonCrawl_Dump_of_December_2019`.
The model is trained with the following properties: no lemmatization and postag, using the fastText Skipgram algorithm with a window of 5 and a dimension of 100.
## How to use?
```
from gensim.models import KeyedVectors
from huggingface_hub import hf_hub_download
model = KeyedVectors.load_word2vec_format(hf_hub_download(repo_id="Word2vec/nlpl_206", filename="model.bin"), binary=True, unicode_errors="ignore")
```
## Citation
Fares, Murhaf; Kutuzov, Andrei; Oepen, Stephan & Velldal, Erik (2017). Word vectors, reuse, and replicability: Towards a community repository of large-text resources, In Jörg Tiedemann (ed.), Proceedings of the 21st Nordic Conference on Computational Linguistics, NoDaLiDa, 22-24 May 2017. Linköping University Electronic Press. ISBN 978-91-7685-601-7
This archive is part of the NLPL Word Vectors Repository (http://vectors.nlpl.eu/repository/), version 2.0, published on Friday, December 27, 2019.
Please see the file 'meta.json' in this archive and the overall repository metadata file http://vectors.nlpl.eu/repository/20.json for additional information.
The life-time identifier for this model is: http://vectors.nlpl.eu/repository/20/206.zip
|
Word2vec/nlpl_204
|
Word2vec
| 2023-07-06T07:29:15Z | 0 | 0 | null |
[
"word2vec",
"rus",
"dataset:Russian_National_Corpus",
"license:cc-by-4.0",
"region:us"
] | null | 2023-07-05T07:56:31Z |
---
language: rus
license: cc-by-4.0
tags:
- word2vec
datasets: Russian_National_Corpus
---
## Information
A word2vec model trained by Andrey Kutuzov (andreku@ifi.uio.no) on a vocabulary of size 998459 corresponding to 270000000 tokens from the dataset `Russian_National_Corpus`.
The model is trained with the following properties: lemmatization and postag, using the Gensim Continuous Bag-of-Words algorithm with a window of 2 and a dimension of 300.
## How to use?
```
from gensim.models import KeyedVectors
from huggingface_hub import hf_hub_download
model = KeyedVectors.load_word2vec_format(hf_hub_download(repo_id="Word2vec/nlpl_204", filename="model.bin"), binary=True, unicode_errors="ignore")
```
## Citation
Fares, Murhaf; Kutuzov, Andrei; Oepen, Stephan & Velldal, Erik (2017). Word vectors, reuse, and replicability: Towards a community repository of large-text resources, In Jörg Tiedemann (ed.), Proceedings of the 21st Nordic Conference on Computational Linguistics, NoDaLiDa, 22-24 May 2017. Linköping University Electronic Press. ISBN 978-91-7685-601-7
This archive is part of the NLPL Word Vectors Repository (http://vectors.nlpl.eu/repository/), version 2.0, published on Friday, December 27, 2019.
Please see the file 'meta.json' in this archive and the overall repository metadata file http://vectors.nlpl.eu/repository/20.json for additional information.
The life-time identifier for this model is: http://vectors.nlpl.eu/repository/20/204.zip
|
NTQAI/pedestrian_age_recognition
|
NTQAI
| 2023-07-06T07:28:59Z | 110,387 | 3 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"beit",
"image-classification",
"vision",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-01-09T03:36:33Z |
---
license: apache-2.0
tags:
- image-classification
- vision
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: pedestrian_age_recognition_local
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.8073394495412844
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pedestrian_age_recognition_local
This model is a fine-tuned version of [microsoft/beit-base-patch16-224-pt22k-ft22k](https://huggingface.co/microsoft/beit-base-patch16-224-pt22k-ft22k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5004
- Accuracy: 0.8073
## Model description
More information needed
## Intended uses & limitations
More information needed
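A minimal usage sketch, assuming the standard 🤗 Transformers image-classification pipeline (the image path below is a placeholder for a pedestrian crop):
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="NTQAI/pedestrian_age_recognition")
# Accepts a local path or URL; returns a list of {label, score} dicts from the model config.
print(classifier("pedestrian.jpg"))
```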
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 1337
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.8849 | 1.0 | 2008 | 0.7939 | 0.6807 |
| 0.9836 | 2.0 | 4016 | 0.6694 | 0.7336 |
| 0.8128 | 3.0 | 6024 | 0.5768 | 0.7668 |
| 0.7611 | 4.0 | 8032 | 0.5541 | 0.7833 |
| 0.6441 | 5.0 | 10040 | 0.5473 | 0.7773 |
| 0.5696 | 6.0 | 12048 | 0.5187 | 0.7971 |
| 0.6925 | 7.0 | 14056 | 0.5082 | 0.8038 |
| 0.5711 | 8.0 | 16064 | 0.5092 | 0.8098 |
| 0.7741 | 9.0 | 18072 | 0.5026 | 0.8020 |
| 0.5269 | 10.0 | 20080 | 0.5004 | 0.8073 |
### Framework versions
- Transformers 4.24.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
### Contact information
For personal communication related to this project, please contact Nha Nguyen Van (nha282@gmail.com).
|
Word2vec/nlpl_186
|
Word2vec
| 2023-07-06T07:28:40Z | 0 | 0 | null |
[
"word2vec",
"rus",
"dataset:Taiga_corpus",
"license:cc-by-4.0",
"region:us"
] | null | 2023-07-05T07:55:53Z |
---
language: rus
license: cc-by-4.0
tags:
- word2vec
datasets: Taiga_corpus
---
## Information
A word2vec model trained by Andrey Kutuzov (andreku@ifi.uio.no) on a vocabulary of size 249946 corresponding to 4867000000 tokens from the dataset `Taiga_corpus`.
The model is trained with the following properties: lemmatization and postag, using the Gensim Continuous Skipgram algorithm with a window of 5 and a dimension of 300.
## How to use?
```
from gensim.models import KeyedVectors
from huggingface_hub import hf_hub_download
model = KeyedVectors.load_word2vec_format(hf_hub_download(repo_id="Word2vec/nlpl_186", filename="model.bin"), binary=True, unicode_errors="ignore")
```
## Citation
Fares, Murhaf; Kutuzov, Andrei; Oepen, Stephan & Velldal, Erik (2017). Word vectors, reuse, and replicability: Towards a community repository of large-text resources, In Jörg Tiedemann (ed.), Proceedings of the 21st Nordic Conference on Computational Linguistics, NoDaLiDa, 22-24 May 2017. Linköping University Electronic Press. ISBN 978-91-7685-601-7
This archive is part of the NLPL Word Vectors Repository (http://vectors.nlpl.eu/repository/), version 2.0, published on Friday, December 27, 2019.
Please see the file 'meta.json' in this archive and the overall repository metadata file http://vectors.nlpl.eu/repository/20.json for additional information.
The life-time identifier for this model is: http://vectors.nlpl.eu/repository/20/186.zip
|
afaan00733/refference_filtering
|
afaan00733
| 2023-07-06T07:28:03Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-06T07:15:25Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: refference_filtering
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# refference_filtering
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3518
- Accuracy: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 2 | 0.6560 | 0.8947 |
| No log | 2.0 | 4 | 0.6103 | 1.0 |
| No log | 3.0 | 6 | 0.5545 | 1.0 |
| No log | 4.0 | 8 | 0.4951 | 0.9474 |
| No log | 5.0 | 10 | 0.4457 | 1.0 |
| No log | 6.0 | 12 | 0.4127 | 1.0 |
| No log | 7.0 | 14 | 0.3894 | 1.0 |
| No log | 8.0 | 16 | 0.3705 | 1.0 |
| No log | 9.0 | 18 | 0.3577 | 1.0 |
| No log | 10.0 | 20 | 0.3518 | 1.0 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Word2vec/nlpl_182
|
Word2vec
| 2023-07-06T07:27:18Z | 0 | 0 | null |
[
"word2vec",
"rus",
"dataset:Russian_National_Corpus",
"license:cc-by-4.0",
"region:us"
] | null | 2023-07-05T07:54:36Z |
---
language: rus
license: cc-by-4.0
tags:
- word2vec
datasets: Russian_National_Corpus
---
## Information
A word2vec model trained by Andrey Kutuzov (andreku@ifi.uio.no) on a vocabulary of size 248978 corresponding to 270000000 tokens from the dataset `Russian_National_Corpus`.
The model is trained with the following properties: lemmatization and postag, using the Gensim Continuous Skipgram algorithm with a window of 2 and a dimension of 300.
## How to use?
```
from gensim.models import KeyedVectors
from huggingface_hub import hf_hub_download
model = KeyedVectors.load_word2vec_format(hf_hub_download(repo_id="Word2vec/nlpl_182", filename="model.bin"), binary=True, unicode_errors="ignore")
```
## Citation
Fares, Murhaf; Kutuzov, Andrei; Oepen, Stephan & Velldal, Erik (2017). Word vectors, reuse, and replicability: Towards a community repository of large-text resources, In Jörg Tiedemann (ed.), Proceedings of the 21st Nordic Conference on Computational Linguistics, NoDaLiDa, 22-24 May 2017. Linköping University Electronic Press. ISBN 978-91-7685-601-7
This archive is part of the NLPL Word Vectors Repository (http://vectors.nlpl.eu/repository/), version 2.0, published on Friday, December 27, 2019.
Please see the file 'meta.json' in this archive and the overall repository metadata file http://vectors.nlpl.eu/repository/20.json for additional information.
The life-time identifier for this model is: http://vectors.nlpl.eu/repository/20/182.zip
|
digiplay/Zevinemix_v1.0
|
digiplay
| 2023-07-06T07:24:33Z | 255 | 3 |
diffusers
|
[
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-07-06T04:38:41Z |
---
license: other
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---
https://civitai.com/models/103015?modelVersionId=110251
Sample image I made :


Original Author's DEMO images :




|
Bugsys0302/m416
|
Bugsys0302
| 2023-07-06T07:16:46Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-06T07:06:10Z |
---
license: creativeml-openrail-m
---
|
atiiisham988/distilbert-base-uncased-finetuned-imdb
|
atiiisham988
| 2023-07-06T07:01:14Z | 115 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"fill-mask",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-07-06T06:57:35Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: distilbert-base-uncased-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4721
## Model description
More information needed
## Intended uses & limitations
More information needed
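A minimal usage sketch, assuming the standard fill-mask pipeline (DistilBERT uses `[MASK]` as its mask token):
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="atiiisham988/distilbert-base-uncased-finetuned-imdb")
# Returns the top candidate tokens and scores for the masked position.
print(fill_mask("This movie was absolutely [MASK]."))
```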
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.7086 | 1.0 | 157 | 2.4897 |
| 2.5796 | 2.0 | 314 | 2.4230 |
| 2.5269 | 3.0 | 471 | 2.4354 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Bugsys0302/beltbr
|
Bugsys0302
| 2023-07-06T06:59:17Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-06T06:57:43Z |
---
license: creativeml-openrail-m
---
|
afaan00733/my_awesome_model
|
afaan00733
| 2023-07-06T06:56:30Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-04T21:18:08Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: my_awesome_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6546
- Accuracy: 0.4737
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 2 | 0.6732 | 0.4737 |
| No log | 2.0 | 4 | 0.6546 | 0.4737 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1
- Datasets 2.13.1
- Tokenizers 0.13.3
|
IliyanGochev/whisper-small-bg
|
IliyanGochev
| 2023-07-06T06:50:12Z | 18 | 0 |
transformers
|
[
"transformers",
"pytorch",
"whisper",
"automatic-speech-recognition",
"whisper-event",
"generated_from_trainer",
"bg",
"dataset:mozilla-foundation/common_voice_13_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-07-05T08:04:03Z |
---
language:
- bg
license: apache-2.0
tags:
- whisper-event
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_13_0
metrics:
- wer
model-index:
- name: whisper-small-bg
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: mozilla-foundation/common_voice_13_0 bg
type: mozilla-foundation/common_voice_13_0
config: bg
split: test
args: bg
metrics:
- name: Wer
type: wer
value: 44.67291341315287
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-small-bg
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the mozilla-foundation/common_voice_13_0 bg dataset.
It achieves the following results on the evaluation set:
- Loss: 9.0612
- Wer: 44.6729
## Model description
More information needed
## Intended uses & limitations
More information needed
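A minimal usage sketch, assuming the standard automatic-speech-recognition pipeline (the audio path below is a placeholder for a Bulgarian recording):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="IliyanGochev/whisper-small-bg")
# Transcribes the given audio file and returns a dict with the decoded text.
print(asr("sample_bg.wav")["text"])
```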
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 4.9319 | 6.76 | 1000 | 10.0774 | 73.9892 |
| 2.6116 | 13.51 | 2000 | 11.4089 | 67.0484 |
| 0.9607 | 20.27 | 3000 | 11.8266 | 60.9448 |
| 0.3464 | 27.03 | 4000 | 9.9500 | 52.1213 |
| 0.0122 | 33.78 | 5000 | 9.0612 | 44.6729 |
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Bugsys0302/fmmstrb
|
Bugsys0302
| 2023-07-06T06:46:46Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-06T06:40:45Z |
---
license: creativeml-openrail-m
---
|
JennnDexter/pokemon-lora
|
JennnDexter
| 2023-07-06T06:44:42Z | 2 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-06-12T06:24:16Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA text2image fine-tuning - JennnDexter/pokemon-lora
These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5. The weights were fine-tuned on the lambdalabs/pokemon-blip-captions dataset. You can find some example images below.




|
aroot/eng-mya-simcse_central
|
aroot
| 2023-07-06T06:36:12Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-07-06T06:14:05Z |
---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: eng-mya-simcse_central
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eng-mya-simcse_central
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8980
- Bleu: 4.1973
## Model description
More information needed
## Intended uses & limitations
More information needed
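A minimal usage sketch, assuming the mBART-50 language codes `en_XX` (English) and `my_MM` (Burmese) and the standard translation pipeline (argument handling may vary with the transformers version):
```python
from transformers import pipeline

translator = pipeline(
    "translation",
    model="aroot/eng-mya-simcse_central",
    src_lang="en_XX",   # assumed mBART-50 source code for English
    tgt_lang="my_MM",   # assumed mBART-50 target code for Burmese
)
print(translator("The weather is nice today."))
```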
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
hchung1017/aihub_012_streaming_conformer
|
hchung1017
| 2023-07-06T06:22:30Z | 0 | 0 |
espnet
|
[
"espnet",
"audio",
"automatic-speech-recognition",
"ko",
"dataset:aihub_012",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] |
automatic-speech-recognition
| 2023-07-06T06:22:07Z |
---
tags:
- espnet
- audio
- automatic-speech-recognition
language: ko
datasets:
- aihub_012
license: cc-by-4.0
---
## ESPnet2 ASR model
### `hchung1017/aihub_012_streaming_conformer`
This model was trained by hchung1017 using the aihub_012 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
Follow the [ESPnet installation instructions](https://espnet.github.io/espnet/installation.html)
if you haven't done that already.
```bash
cd espnet
git checkout f4d7fead71e2a99541a8d3d66d6e00a33d9e82df
pip install -e .
cd egs2/aihub_012/asr1
./run.sh --skip_data_prep false --skip_train true --download_model hchung1017/aihub_012_streaming_conformer
```
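For a quick check outside the recipe, a minimal Python inference sketch with the ESPnet2 API is shown below (not from the original card); the WAV path is a placeholder, and chunkwise streaming decoding would instead use `espnet2.bin.asr_inference_streaming.Speech2TextStreaming`:
```python
import soundfile
from espnet2.bin.asr_inference import Speech2Text

# Downloads and builds the model from this repo (requires espnet and espnet_model_zoo).
speech2text = Speech2Text.from_pretrained("hchung1017/aihub_012_streaming_conformer")

speech, rate = soundfile.read("sample.wav")  # placeholder path; 16 kHz mono audio
nbests = speech2text(speech)
text, tokens, token_ids, hyp = nbests[0]
print(text)
```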
<!-- Generated by scripts/utils/show_asr_result.sh -->
# RESULTS
## Environments
- date: `Wed Jul 5 15:19:05 KST 2023`
- python version: `3.8.16 (default, Mar 2 2023, 03:21:46) [GCC 11.2.0]`
- espnet version: `espnet 202304`
- pytorch version: `pytorch 1.13.1`
- Git hash: `f4d7fead71e2a99541a8d3d66d6e00a33d9e82df`
- Commit date: `Wed May 24 14:58:35 2023 -0400`
## exp/asr_train_asr_streaming_conformer_raw_ko_bpe5000_sp/decode_asr_streaming_asr_model_valid.acc.ave
### WER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|org/dev|797676|3794053|89.7|9.1|1.2|1.4|11.8|28.9|
### CER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|org/dev|797676|17636048|94.8|3.0|2.2|1.6|6.8|28.9|
### TER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|org/dev|797676|4325914|88.1|8.2|3.7|1.5|13.4|28.9|
## ASR config
<details><summary>expand</summary>
```
config: conf/train_asr_streaming_conformer.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/asr_train_asr_streaming_conformer_raw_ko_bpe5000_sp
ngpu: 1
seed: 0
num_workers: 1
num_att_plot: 0
dist_backend: nccl
dist_init_method: env://
dist_world_size: 8
dist_rank: 0
local_rank: 0
dist_master_addr: localhost
dist_master_port: 51405
dist_launcher: null
multiprocessing_distributed: true
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 50
patience: null
val_scheduler_criterion:
- valid
- acc
early_stopping_criterion:
- valid
- cer_ctc
- min
best_model_criterion:
- - valid
- acc
- max
keep_nbest_models: 10
nbest_averaging_interval: 0
grad_clip: 5
grad_clip_type: 2.0
grad_noise: false
accum_grad: 1
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
use_matplotlib: true
use_tensorboard: true
create_graph_in_tensorboard: false
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param: []
num_iters_per_epoch: null
batch_size: 20
valid_batch_size: null
batch_bins: 25000000
valid_batch_bins: null
train_shape_file:
- exp/asr_stats_raw_ko_bpe5000_sp/train/speech_shape
- exp/asr_stats_raw_ko_bpe5000_sp/train/text_shape.bpe
valid_shape_file:
- exp/asr_stats_raw_ko_bpe5000_sp/valid/speech_shape
- exp/asr_stats_raw_ko_bpe5000_sp/valid/text_shape.bpe
batch_type: numel
valid_batch_type: null
fold_length:
- 51200
- 150
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
chunk_excluded_key_prefixes: []
train_data_path_and_name_and_type:
- - /data/dump/aihub_012/raw/train_sp/wav.scp
- speech
- sound
- - /data/dump/aihub_012/raw/train_sp/text
- text
- text
valid_data_path_and_name_and_type:
- - /data/dump/aihub_012/raw/dev/wav.scp
- speech
- sound
- - /data/dump/aihub_012/raw/dev/text
- text
- text
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
exclude_weight_decay: false
exclude_weight_decay_conf: {}
optim: adam
optim_conf:
lr: 0.003
scheduler: warmuplr
scheduler_conf:
warmup_steps: 30000
token_list:
- <blank>
- <unk>
- ▁I
- ▁YOU
- ''''
- S
- ▁WHAT
- ▁A
- ▁IT
- ▁TO
- ▁IS
- ▁THE
- ▁ARE
- ▁CAN
- ▁OKAY
- ▁YES
- ▁DO
- ▁THAT
- ▁SEE
- T
- ▁HE
- ▁HOW
- ▁ME
- ▁HAVE
- ▁MY
- ▁GOOD
- ▁REALLY
- ▁SO
- ▁FOR
- ▁AM
- ▁SURE
- ▁OH
- ▁GO
- ▁WHY
- ▁NO
- ▁YOUR
- ▁RIGHT
- ▁HELP
- ’
- ▁DON
- ▁NOT
- ▁HI
- ▁HERE
- ▁DID
- ▁LIKE
- ▁AND
- ▁TOO
- ▁SHE
- ▁THIS
- ▁HELLO
- M
- ▁KNOW
- ▁WANT
- RE
- ▁NEED
- ▁WILL
- ▁ABOUT
- ▁THERE
- ▁LET
- ▁OF
- ▁IN
- ▁BE
- ▁BUT
- ▁THINK
- ▁SOMETHING
- ▁LOOK
- ▁NOW
- ▁NICE
- ▁THEN
- ▁
- ▁WE
- ▁GREAT
- ▁THANK
- ▁WITH
- ▁TELL
- ▁PROBLEM
- ▁HER
- ▁GOING
- ▁WAS
- ▁DOING
- ▁ASK
- ▁THANKS
- ▁HEY
- ▁BACK
- ▁WRONG
- ▁THEY
- ▁ON
- ▁HIM
- ▁UP
- ▁AT
- LL
- ▁WELL
- ▁GET
- ▁WHERE
- VERY
- ▁SOME
- ▁PEOPLE
- ▁ALL
- ▁MEAN
- ▁PLEASE
- ▁TIME
- ▁WHO
- ▁GOT
- ▁WELCOME
- ▁MAKE
- ▁COME
- ▁MEET
- ▁NEW
- ▁LOT
- ▁MOM
- ▁SAID
- ▁SHOULD
- ▁HAPPY
- ▁HIS
- ▁BUSY
- ▁BYE
- ▁QUESTION
- ▁SAY
- ▁TAKE
- ▁MORE
- ▁SORRY
- ▁IDEA
- ▁OUT
- ▁FINE
- ▁PLAY
- ▁ANY
- ▁AGAIN
- ▁BECAUSE
- ▁FROM
- ▁AN
- ▁WHEN
- ▁TRY
- ▁HAS
- ▁TODAY
- ▁READY
- ▁HOPE
- ▁GIVE
- ▁BIG
- ▁FRIEND
- ▁WRITE
- ▁EAT
- ▁ONE
- ▁BAD
- ▁MUCH
- ▁SOON
- ▁MANY
- ED
- ▁THEM
- ▁ANGRY
- ▁LATER
- ING
- ▁MAYBE
- ▁DAD
- ▁FIND
- ▁DOWN
- ▁WORRY
- ▁SHOW
- ▁COURSE
- ▁DAY
- ▁SOUNDS
- ▁DOES
- ▁STRANGE
- ▁TALK
- ▁FUN
- ▁REMEMBER
- ▁ANYTHING
- ▁BUY
- ▁LETTER
- ▁JUST
- ▁MADE
- ▁READ
- ▁CANNOT
- ▁WANTS
- ▁WOW
- ▁DIDN
- ▁IF
- ▁GLAD
- ▁WAY
- ▁MUST
- ▁SCHOOL
- ▁BOOK
- ▁LOOKING
- ▁TOLD
- ▁NAME
- ▁HEAR
- ▁TOY
- ▁TRUE
- ▁TEACHER
- ▁US
- ▁WORK
- ▁TWO
- ▁SONG
- ▁HARD
- ▁LOVE
- ▁THINGS
- ▁SING
- ▁BETTER
- ▁HOME
- ▁LINKER
- ▁UNDERSTAND
- ▁LOOKS
- ▁KIND
- ▁HOUSE
- LUE
- ▁DRESS
- ▁BY
- ▁BEST
- ▁LONG
- ▁NEWS
- ▁WENT
- ▁HAPPENED
- ▁OLD
- ▁KEEP
- ▁NEXT
- ▁CHECK
- D
- ▁SPECIAL
- ▁USE
- ▁LIKES
- ▁EVERYTHING
- ▁FEEL
- ▁ROBOT
- ▁SAD
- ▁PLEASURE
- ▁JOE
- ▁COOL
- ▁TOMORROW
- ▁LUCK
- ▁DOESN
- ▁BOX
- ▁AROUND
- ▁HOMEWORK
- ▁ALWAYS
- ▁MORGAN
- ▁PUT
- ▁THESE
- ▁GAVE
- ▁HEARD
- ▁WAIT
- ▁PRESENT
- ▁SOMEONE
- ▁PARTY
- ▁BIRTHDAY
- ▁RANDY
- ▁FRIENDS
- ▁MONEY
- ▁DONE
- ▁CAR
- ▁COFFEE
- ▁MUSIC
- ▁BEN
- ▁BEEN
- ▁STILL
- ▁GREEN
- ▁STAR
- ▁PERSON
- ▁WERE
- ▁STORY
- ▁ELSE
- ▁IDEAS
- ▁TOGETHER
- ▁MILK
- ▁WOULD
- ▁SOUND
- ▁THAN
- ▁TALKED
- ▁EVERY
- ▁NEEDS
- ▁SAW
- ▁HAIR
- ▁CHANGE
- ▁WORRIED
- ▁EASY
- ▁FOOD
- ▁DOG
- VE
- ▁CONCERT
- ▁MAKING
- ▁MONSTER
- ▁BOY
- ▁PHOTO
- ▁SCARY
- ▁RED
- ▁BROTHER
- ▁FIRST
- ▁DANCE
- ▁BEFORE
- ▁PRETTY
- ▁DRINK
- ▁WISH
- ▁HARRY
- ▁CALM
- ▁CAT
- ▁WEAR
- ▁BLUE
- ▁MESSAGE
- ▁TRUST
- ▁ONLY
- ▁HAD
- ▁THREE
- ▁AWAY
- ▁MIND
- ▁MAKES
- ▁GRANDMOTHER
- ▁WATCH
- ▁EMMA
- ▁AMY
- ▁TIRED
- ▁CLASS
- ▁MAN
- ▁DAN
- ▁COULD
- ▁BRING
- ▁SMALL
- ▁ANYWAY
- ▁OUR
- ▁ROOM
- ▁AFTER
- ▁BELIEVE
- ▁BOOKS
- ▁TEN
- ▁DEVILMON
- ▁JOB
- ▁OVER
- ▁COMING
- ▁STOP
- ▁FUNNY
- ▁DIANA
- ▁TOYS
- ▁FAST
- ▁MORNING
- ▁NUMBER
- ▁NOTHING
- ▁TOWN
- ▁OPEN
- ▁OTHER
- ▁PHONE
- ▁CARE
- ▁LEAVE
- ▁CONTEST
- ▁WOODY
- ▁THINKING
- Y
- ▁ANOTHER
- A
- ▁ENGLISH
- ▁SICK
- ▁BRAVE
- ▁TROY
- ▁EATING
- ▁SLEEP
- ▁THEIR
- ▁SELL
- ▁DELICIOUS
- ▁OFF
- ▁WATER
- ▁PICTURE
- ▁CAME
- ▁EVERYONE
- ▁PAPER
- ▁PARK
- ▁PAINT
- ▁SHOP
- ▁CREAM
- ▁TV
- ▁BOUGHT
- ▁CAREFUL
- ▁ROBBY
- ▁FOUND
- ▁STONE
- ▁SISTER
- ▁HURRY
- ▁BAG
- ▁WAKE
- ▁SYRUP
- ▁DRAW
- ▁ENERGY
- ▁SHOES
- ▁IMPORTANT
- ▁NEVER
- ▁LISTEN
- ▁WON
- ▁DOOR
- ▁POP
- ▁LAST
- ▁DIFFERENT
- ▁FISH
- ▁SAVE
- ▁HEALTHY
- ▁UNCLE
- ▁NIGHT
- UCH
- ▁PLACE
- ▁DARK
- ▁GUESS
- ▁LATE
- ▁PIE
- N
- ▁PRACTICE
- ▁MONICA
- ▁ANYONE
- ▁READING
- ▁COLOR
- ▁SALLY
- ▁BLACK
- ▁MOVIE
- ▁TROUBLE
- ▁COLD
- ▁STUDY
- ▁LITTLE
- ▁WHITE
- ▁CHEER
- ▁SCARED
- ▁POSTER
- ▁TALKING
- ▁TEACH
- ▁WALK
- ▁CAKE
- ▁INTO
- ▁FIGHT
- ▁ALREADY
- ▁SLEEPY
- ▁STRONG
- ▁OLIVIA
- ▁CALL
- ▁WROTE
- ▁ICE
- ▁OR
- ▁SCOTT
- ▁LIBRARY
- ▁NANCY
- ▁LUMY
- ▁HAT
- ▁YET
- ▁ALEX
- ▁SHORT
- ▁CLOTHES
- ▁YESTERDAY
- ▁FAVORITE
- ▁SWEET
- ▁FIVE
- ▁HOLD
- ▁LUNCH
- ▁PLAYING
- ▁GARY
- ▁HANDS
- ▁LEFT
- ▁ASKED
- ▁CHEESE
- ▁FACE
- ▁BORROW
- ▁SPEAK
- ▁INTERESTING
- ▁MAY
- ▁BEAR
- ▁SIGN
- ▁SHADOW
- ▁FLOWERS
- ▁PINO
- ▁ERIN
- ▁FOREST
- ▁GAME
- ▁MR
- ▁WANTED
- ▁RUN
- ▁SPELL
- ▁PEN
- ▁SHOPPING
- ▁COOK
- ▁DAYS
- ▁BED
- ▁BEAUTIFUL
- ▁MUSEUM
- ▁CLEAN
- ▁REST
- ▁SAME
- ▁DOCTOR
- ▁YOURSELF
- ▁DINNER
- ▁DANGEROUS
- ▁SECRET
- ▁STORE
- ▁TREE
- ▁MIGHT
- ▁MAYOR
- ▁CHARLIE
- ▁PIZZA
- ▁FOUR
- ▁SIR
- ▁SEEN
- ▁TURN
- ▁ENJOY
- ▁CLARA
- ▁ANYTIME
- ▁LIVE
- ▁LOST
- ▁SANDRA
- ▁DURING
- ▁MYSELF
- ▁TALL
- ▁MINE
- ▁CHOOSE
- ▁TOOK
- ▁WAITING
- ▁S
- ▁SUNNY
- ▁SINGING
- ▁ACADEMY
- ▁AHEAD
- ▁HURT
- ▁CLOCK
- ▁PAINTING
- ▁RAN
- ▁ALONE
- ▁USED
- ▁PLAN
- ▁THEATER
- ▁HAND
- ▁WEEK
- ▁CATCH
- ▁SEND
- ▁CUBE
- ▁ERIC
- ▁WOOD
- ▁HOT
- ▁DEVILMONS
- ▁FREE
- ▁STAY
- ▁PROMISE
- ▁RULE
- ▁HUNGRY
- ▁WORKING
- ▁HAPPEN
- ▁VIKI
- ▁FAMILY
- ▁CHICKEN
- ▁FORGET
- ▁YELLOW
- ▁BROWN
- ▁VACATION
- ▁KELLY
- ▁JACK
- ▁SINGER
- ▁HAMMER
- ▁SAYS
- ▁TRAIN
- ▁FIX
- ▁CUTE
- ▁EVEN
- ▁SANTA
- ▁SLEEPING
- ▁BUS
- ▁BARBECUE
- ▁AGREE
- ▁COULDN
- ▁MISS
- E
- ▁GRACE
- ▁TRASH
- ▁BABY
- ▁LUMA
- ▁CHILDREN
- ▁EXCUSE
- ▁DPOP
- ▁OUTSIDE
- ▁ORDER
- ▁MATTER
- ▁RIDE
- ▁SUMMER
- ▁CLOSE
- ▁MOVE
- ▁JUICE
- ▁TOUCH
- ▁CARD
- ▁THOSE
- ▁HAIRSTYLE
- ▁RICH
- ▁BREAK
- ▁ANYMORE
- ▁TRIP
- ▁EYES
- ▁LEARN
- IC
- ▁YOUNGER
- ▁SMELLS
- ▁CHRIS
- ▁ITEMS
- ▁STONES
- ▁CUT
- ▁STUDENT
- ▁CALLED
- ▁SHINE
- ▁ATE
- ▁PERFECT
- ▁BETIA
- ▁MOVING
- LY
- ▁FIRE
- ▁D
- ▁CHRISTMAS
- ▁RUNNING
- ▁LINE
- ▁JACKET
- ▁WHICH
- ▁GIFT
- ▁SMILE
- ▁WEARING
- ▁STELLA
- ▁SEVEN
- ▁ANSWER
- ▁YEAR
- ▁MOST
- ▁WENDY
- RA
- ▁BALL
- ▁THING
- ▁FIFTY
- ▁YOUNG
- ▁FRONT
- ▁LIKED
- ▁WINDOW
- ▁BEING
- ▁RICE
- ▁HOBBY
- ▁BRUCE
- ▁ALVIN
- ▁CHAIR
- ▁ELEVEN
- ▁INTERVIEW
- ▁TRUMPET
- ▁DRAWING
- ▁WHILE
- ▁HAV
- ▁NEWSPAPER
- ▁WRITING
- ▁FRUIT
- ▁BEHIND
- ▁EVENT
- ▁HAVEN
- ▁BELLOW
- ▁YEARS
- ▁DIV
- ▁VICTORIA
- ▁SENT
- ▁STYLE
- ▁LUNA
- ▁AUNT
- ▁DREAM
- ▁PICTURES
- ▁LEO
- ▁QUESTIONS
- ▁PRICE
- ▁APPLE
- ▁SCHEDULE
- ▁TABLE
- ▁PLANT
- ▁BELL
- ▁SUSAN
- ▁SHIRT
- ▁GRANDFATHER
- ▁EXPENSIVE
- ▁GUYS
- ▁THOUGHT
- ▁OSCAR
- ▁TIMES
- ▁ACTUALLY
- ▁CHANCE
- ▁PAY
- ▁WASH
- ▁JUGGLING
- ▁JULIA
- ▁MAKEUP
- ▁PIANO
- ▁GOES
- ▁QUIZ
- ▁OFTEN
- ▁THIRTY
- ▁SMART
- ▁WEEKEND
- ▁CHOCOLATE
- ▁BATHROOM
- ▁CANDY
- ▁SPEECH
- ▁FEELING
- ▁RADIO
- ▁HECTOR
- ▁KNOWS
- ▁GRANDMA
- ▁SEEM
- ER
- ▁START
- ▁PENCIL
- ▁SUNDAY
- ▁WORD
- ▁MOUSE
- ▁PLAYGROUND
- ▁BREAD
- ▁MAGIC
- ▁CD
- ▁BROKEN
- ▁COLIN
- ▁DIRTY
- ▁MOTHER
- ▁DESK
- ▁BORING
- ▁SOUP
- ▁ONCE
- ▁WORKED
- ▁COUNT
- ▁EXCITED
- ▁PARADE
- ▁GUITAR
- ▁PM
- ▁FINISH
- ▁BLOCK
- ▁FISHING
- ▁VOICE
- ▁ROGER
- ▁WORKS
- ▁PLAYER
- ▁GLASSES
- ▁LAB
- ▁SIGH
- ▁LOVES
- ▁MODEL
- ▁EXERCISE
- ▁O
- ▁POINT
- ▁SWIMMING
- ▁MARKET
- ▁NOTE
- ▁SECOND
- ▁LUCKY
- ▁BROKE
- ▁CAVE
- ▁SHALL
- ▁KID
- ▁HANG
- ▁MICHAEL
- ▁DANCING
- ▁COM
- ▁MASK
- TING
- ▁KYLE
- ▁FRIDAY
- ▁MELOD
- ▁DOUGLAS
- ▁ENOUGH
- ▁LEARNED
- ▁ALICE
- ▁NEWSPAPERS
- ▁NEAR
- ▁GIRL
- ▁LAURA
- ▁BANK
- ▁ORANGE
- ▁HEART
- ▁SNACKS
- ▁BANANA
- ▁AFRAID
- ▁NOISE
- ▁AARON
- ▁SIDE
- ▁POSSIBLE
- ▁ISN
- ▁UPSET
- ▁KATHY
- ▁ENTER
- ▁STATUE
- ▁FAVOR
- ▁CAPSULE
- ▁CLUB
- ▁BORED
- ▁STREET
- ▁FAR
- ▁BROUGHT
- ▁HENRY
- ▁BRIAN
- ▁FLOOR
- ▁RECORD
- ▁SUN
- ▁BORN
- ▁GONE
- ▁ELEPHANT
- ▁FATHER
- ▁BEAT
- ▁MISTAKE
- NY
- ▁MEGAN
- ▁JIN
- ▁CARL
- ▁FACTORY
- ▁HORSE
- ▁STANLEY
- ▁WIN
- ▁AFTERNOON
- ▁LIVED
- ▁HIGH
- ▁LEAVING
- ▁MINUTES
- ▁WALL
- ▁SURPRISE
- ▁DAVID
- ▁TWENTY
- ▁BIRD
- ▁NICK
- ▁REASON
- ▁OWN
- ▁STEVE
- ▁LADY
- ▁COMES
- ▁STATION
- ▁DOLL
- ▁JADE
- ▁STAND
- ▁FAMOUS
- ▁PLAYED
- ▁TSHIRT
- ▁HUEY
- ▁SEA
- ▁SIX
- ▁REPORT
- ▁POPULAR
- ▁PICK
- ▁TONY
- ▁TINA
- ▁KIDS
- ▁WEATHER
- ▁TREES
- ▁TIFFANY
- ▁WONDERFUL
- ▁RING
- ▁SOMEWHERE
- ▁LIGHT
- ▁NOSE
- ▁AUDREY
- ▁CAMERA
- ▁GARDEN
- ▁SOCCER
- ▁PIG
- ▁FRESH
- ▁NOBODY
- ▁AMANDA
- ▁SURPRISED
- ▁STOPPED
- ▁CITY
- ▁KOREAN
- ▁HISTORY
- ▁STUDENTS
- ▁COOKING
- L
- ▁LOUD
- ▁LOSE
- ▁PINK
- ▁LIE
- ▁CRAYONS
- ▁HEALTH
- ▁HANDWRITING
- ▁JOIN
- ▁THROW
- ▁INFORMATION
- ▁DIFFICULT
- ▁SOMETIMES
- ▁BIKE
- ▁WOMAN
- ▁FLOWER
- ▁WORDS
- ▁GHOST
- ▁RICKY
- R
- ▁TEETH
- ▁SAYING
- ▁PIECE
- ▁DR
- ▁CHANGED
- ▁SIT
- ▁ARTICLE
- ▁ARM
- ▁BECOME
- ▁MONKEY
- ▁YEAH
- ▁JUDY
- ▁FOLLOW
- ▁ALSO
- ▁GAMES
- ▁BAND
- ▁COMPUTER
- ▁ANDRE
- ▁EATS
- ▁MATH
- ▁EXACTLY
- ▁ART
- ▁JUMP
- ▁FOODS
- ▁PRESENTS
- ▁RABBIT
- ▁SMELL
- ▁HEAVY
- ▁SWIM
- ▁RICHARD
- ▁GRASS
- ▁BOTHER
- ▁PANTS
- ES
- ▁ALMOST
- ▁HELPING
- ▁ZOO
- ▁SHOULDN
- ▁FAN
- ▁EGGS
- ▁ELLA
- ▁RESTAURANT
- ▁CHIPS
- ▁BIGGER
- ▁MONDAY
- ▁CATS
- ▁STUDYING
- ▁TONIGHT
- ▁BRADY
- ▁SERIOUS
- ▁FORGOT
- ▁VISIT
- ▁BUILDING
- ▁SET
- ▁HANDSOME
- ▁CLAUS
- ▁RALPH
- ▁COMPANY
- ▁SEAT
- ▁ANDREW
- ▁WITHOUT
- EN
- ▁MEAT
- ▁BOARD
- ▁CLASSES
- ▁FLY
- ▁BIT
- ▁ANGELA
- ▁POLICE
- ▁BET
- ▁FINISHED
- ▁EITHER
- ▁SKY
- ▁POLIA
- ▁EIGHT
- ▁AMAZING
- ▁INSIDE
- ▁SATURDAY
- ▁DINOSAUR
- ▁DEVERYTHING
- ▁BRUSH
- ▁VIVIEN
- ▁BREAKFAST
- ▁QUICKLY
- ▁HEAD
- ▁CAROL
- ▁EACH
- ▁BANANAS
- ▁JAZZ
- ▁OWEN
- ▁LEAVES
- ▁HELPED
- ▁WINTER
- ▁REAL
- ▁TRUTH
- ▁RIVER
- ▁ROAD
- ▁ANNA
- ▁INTERESTED
- ▁EVERYBODY
- ▁HIMSELF
- ▁TAKES
- ▁LADDER
- ▁BOTH
- ▁CLASSROOM
- ▁STUDIED
- ▁HALL
- MAS
- ▁STARTED
- ▁THO
- ▁REFUND
- ▁EARLY
- ▁MARK
- ▁TRIED
- ▁CRY
- ▁CUP
- ▁DEAL
- ▁LEGS
- ▁PARTNER
- ▁NINE
- ▁MONTH
- ▁CRYSTAL
- ▁MRS
- ▁WHOM
- ▁QUIET
- ▁TICKET
- ▁TRYING
- ▁JELLY
- ▁TEST
- ▁OFFICE
- ▁BICYCLE
- ▁HOSPITAL
- ▁POOL
- ▁DOGS
- ▁LIVES
- ▁NOISY
- ▁TASTE
- ▁FEET
- ▁PASTA
- ▁HANS
- AL
- ▁PAST
- ▁PRIZE
- ▁KEY
- ▁COUPON
- ▁TIMMY
- ▁AREN
- ▁MEMO
- ▁TEACHE
- ▁PRACTICING
- ▁ANIMAL
- ▁MOUTH
- ▁WORLD
- ▁UNDER
- ▁WATCHING
- ▁FELL
- ▁DRIVE
- ▁BEACH
- ▁CLEAR
- ▁JOKES
- ▁GAVIN
- ▁ADD
- CLOCK
- ▁HELPER
- ▁JULIE
- ▁WEIRD
- ▁SINCE
- ▁MILLER
- ▁TIE
- ▁FRUITS
- ▁HOUR
- ▁ANIMALS
- ▁TWICE
- ▁WARM
- ▁LARGE
- ▁UNTI
- ▁JAMES
- ▁DOLLARS
- ▁STORIES
- ▁MEAL
- ▁APPLES
- ▁CRYING
- ▁DIET
- ▁HEADPHONES
- ▁MEMORI
- ▁COMPLIMENT
- ▁TRIANGLE
- ▁DIARY
- ▁TOWER
- ▁EYE
- ▁SALE
- ▁BUILT
- ▁CARROT
- ▁ORDERED
- ▁ITEM
- ▁SLOW
- ▁NAOMI
- ▁TUESDAY
- ▁SENSE
- ▁PARENTS
- ▁GIV
- ▁BUSINESS
- ▁EVER
- ▁TYLER
- ▁FORWARD
- ▁CELL
- ▁SHUT
- ▁COAT
- ▁PRINCE
- ▁HATE
- ▁PUPPET
- ▁FULL
- ▁WOULDN
- ▁TERRIBLE
- ▁CARDS
- ▁MAP
- ▁STAMP
- ▁SNACK
- ▁SNOW
- ▁RUBY
- ▁SLOWLY
- ▁EDDY
- ▁EASILY
- ▁LAZY
- ▁BLOCKS
- ▁EARS
- ▁COLORS
- ▁TTEOKBOKKI
- ▁CAREFULLY
- ▁MARRIED
- ▁VILLAGE
- ▁HEADACHE
- ▁MOUNTAIN
- ▁PETER
- ▁FAT
- ▁MARRY
- WEEN
- ▁RYAN
- ▁DISHES
- ▁JIM
- ▁FIELD
- ▁CINDY
- ▁FEW
- ▁STARS
- ▁UMBRELLA
- ▁GROW
- ▁FROG
- ▁RULER
- ▁BASKETBALL
- ▁PART
- ▁ORLANDO
- ▁CORRECT
- ▁GRANDPA
- ▁ADVICE
- ▁ARMS
- SE
- ▁PHOTOS
- ▁KICKBOARD
- ▁JACOB
- ▁DANGER
- ▁BOOTS
- ▁GIANT
- ▁BATH
- ▁VISITOR
- ▁PROMISED
- ▁SNAKE
- ▁GLASS
- ▁RAISE
- ▁SPICY
- ▁TURNED
- ▁MEETING
- ▁VIOLIN
- ▁MINUTE
- ▁DAISY
- ▁BUTTON
- ▁OTHERS
- ▁DELIVERY
- ▁WASN
- ▁JOGGING
- ▁SOFA
- ▁FINGERS
- ▁NICOLE
- ▁TALLER
- ▁RUNS
- ▁BENJAMIN
- ▁GOLD
- ▁LUCAS
- ▁SNOWMAN
- ▁LOVED
- ▁SANDWICH
- ▁STRAIGHT
- ▁AGAINST
- ▁BALLOONS
- ▁KEPT
- ▁CLOSED
- ▁PENS
- ▁MAX
- ▁LEG
- ▁FILL
- ▁QUIT
- ▁ANYBODY
- ▁JEFF
- ▁ANN
- ▁EVAN
- ▁MISSED
- ▁TAEKWONDO
- ▁JOY
- ▁PUSH
- ▁WOODWARD
- ▁ROSS
- ▁LISA
- ▁PULL
- ▁NECTAR
- ▁VASE
- ▁RABBITS
- ▁BOW
- ▁BUGS
- ▁SAFE
- GETTING
- ▁CASH
- ▁LAMP
- ▁DOLLS
- ▁YUMMY
- ▁MEDICINE
- ▁SPORTS
- ▁ENDS
- ▁BASEBALL
- ▁THROUGH
- ▁CENTER
- ▁FIGHTER
- ERS
- ▁PACKAGE
- ▁WORMS
- ▁SHAPE
- ▁DISAPPOINTED
- ▁PHILLIP
- ▁DINOSAURS
- ▁SALAD
- ▁HAMBURGER
- ▁COOKIES
- ▁PASS
- ▁CHEAP
- ▁STAGE
- ▁COLORED
- ▁TYPE
- ▁EVENING
- ▁CRIED
- ▁SHOWER
- ▁WALLET
- ▁FIFTEEN
- ▁HERO
- ▁USUALLY
- ▁GATE
- ▁TEAM
- ▁PLANE
- ▁DRESSES
- ▁SOLD
- ▁CRAYON
- LE
- ▁HIDE
- ▁BODY
- ▁MEN
- ▁HAIRSTYLES
- ▁BOAT
- ▁WONDER
- ▁RAIN
- ▁FEELS
- ▁NERVOUS
- ▁CHILD
- ▁MIRROR
- ▁BUG
- ▁LONGER
- ▁LOUIS
- ▁AIR
- ▁STOMACHACHE
- ▁ASKING
- ▁OWNER
- ▁KNEW
- ▁BELT
- I
- ▁MAGAZINE
- ▁HOP
- ▁SUGAR
- ▁END
- ▁TAKING
- ▁LIGHTS
- ▁EMPTY
- ▁PUPPY
- ▁DUCK
- ▁SUPERMARKET
- ▁APARTMENT
- ▁ADDRESS
- ▁MACHINE
- ▁JASON
- ▁CARRY
- ▁DRY
- ▁EXCITING
- ▁BOTTLE
- ▁RIDING
- ▁CHARCOAL
- ▁TRAVIS
- ▁UGLY
- ▁CAUGHT
- ▁PROBAB
- ▁PROJECT
- ▁LISTENING
- ▁JUGGLE
- ▁ROPE
- ▁BILL
- ▁HOURS
- ▁MOLLY
- ▁SOPHIE
- ▁WEARS
- ▁LIFE
- ▁CAFE
- ▁HURTS
- ▁RELAX
- ▁TED
- ▁COPY
- ▁COTTON
- ▁ALONG
- ▁OFFER
- ▁DATE
- ▁LI
- ▁YOUTUBE
- ▁JOKE
- ▁BARREL
- ▁DIED
- ▁SINGS
- ▁SEVERAL
- ▁TALENT
- ▁CARTER
- ▁PASSWORD
- ▁CASE
- ▁SCISSORS
- ▁YORK
- ▁FANTASTIC
- ▁CLOUDY
- ▁ROUND
- ▁BUILD
- ▁PRINCESS
- ▁RAINY
- ▁GRAPES
- ▁SKIRT
- ▁LION
- ▁FASTER
- ▁FASHION
- ▁AD
- ▁EXPLAIN
- ▁DOCK
- ▁MATCH
- ▁BOMB
- ▁STADIUM
- ▁WOODS
- ▁FALL
- ▁MAD
- ▁TRUCK
- ▁STEP
- ▁ANSWERS
- ▁KIDDING
- ▁MOON
- ▁BEAN
- ▁PICKED
- ▁LESSON
- ▁KNOWN
- ▁HAPPENING
- ▁BLUEBERRIES
- ▁SANDWICHES
- ▁BUTTER
- ▁BEDROOM
- ▁ABOVE
- ▁LEGO
- ▁HELENA
- ▁FOOTPRINT
- ▁SHIP
- ▁TAP
- ▁HILL
- ▁CHURCH
- ▁GOODBYE
- ▁LEMON
- ▁HUNDRED
- ▁COWARD
- ▁ARRIVED
- ▁WATERMELON
- ▁BOXES
- ▁FINALLY
- ▁MAIN
- ▁KEVIN
- BINGO
- ▁BONES
- ▁SPOKE
- ▁DONUTS
- ▁HENNA
- ▁LETTERS
- ▁PAM
- ▁LESS
- ▁WEDDING
- ▁POCKET
- ▁SHY
- ▁NOWHERE
- ▁MIC
- ▁NAMES
- ▁SONGS
- MED
- ▁DECIDED
- ▁KITCHEN
- ▁SHINING
- ▁LOVELY
- ▁SEASON
- ▁STEAK
- ▁DRUM
- ▁TEDDY
- ▁SHINY
- ▁GIRLS
- ▁AUDITION
- ▁ACTING
- ▁NECK
- ▁ROSA
- ▁SNEAKERS
- ▁SHOE
- ▁QUITE
- ▁HOTEL
- ▁LEATHER
- ▁WIND
- ▁COUSIN
- ▁JANET
- ▁ONIONS
- ▁DEAD
- ▁PROUD
- ▁PET
- ▁HELPFUL
- ▁TOILET
- ▁FORTY
- ▁JAKE
- ▁BUTTERFLY
- ▁KICK
- ▁BIRDS
- ▁ABROAD
- ▁TEA
- ▁STARTS
- ▁MEALS
- ▁AIRSHIPS
- ▁SOFT
- ▁MATT
- ▁BLANKET
- ▁WINDY
- ▁PLAYS
- ▁COVER
- ▁WEIGHT
- ▁PURPLE
- ▁HIDING
- ▁TAGS
- ▁F
- ▁WHATEVER
- ▁AIRSHIP
- ▁LIVING
- ▁MAT
- ▁KINDERGARTEN
- ▁POND
- ▁LAUNDRY
- O
- ▁NOTEBOOK
- ▁HELEN
- ▁SWEATER
- ▁TEACHING
- ▁FAULT
- ▁SQUARE
- ▁HONEST
- ▁LOUDER
- CAME
- ▁3
- ▁DROP
- ▁GUY
- ▁GIRLFRIEND
- ▁RAINING
- ▁SPIDER
- ▁FLYER
- ▁WATCHED
- ▁B
- ▁LOW
- ▁COUSINS
- ▁OLDER
- DY
- ▁ROCK
- ▁MOMENT
- ▁SHEET
- ▁LAUGH
- ▁BLUEBERRY
- ▁NEIGHBORHOOD
- ▁GRADE
- ▁STICKER
- ▁OPENING
- ▁ALRIGHT
- ▁OFFICER
- ▁PI
- ▁WEDNESDAY
- ▁BITE
- ▁CONTINUE
- TIME
- ▁SAIN
- ▁COSTUME
- ▁MOVED
- ▁BOOKCASE
- ▁DENTIST
- ▁STOPS
- ▁SAM
- ▁APRIL
- ▁THIRSTY
- ▁MOOD
- ▁PEA
- ▁ENTRY
- ▁SERVICE
- ▁ABLE
- ▁FRIED
- ▁W
- ▁FLASH
- ▁KATRINA
- ▁REPAIR
- ▁TI
- ▁GIMBAP
- NDA
- ▁ANNIVERSARY
- ▁NAMED
- ▁WRITTEN
- ▁CUSTOMERS
- ▁COLLECT
- ▁BONGOS
- ▁EGG
- ▁BAT
- ▁RIBS
- ▁SAT
- ▁RETURN
- LIGHT
- BACK
- CA
- NESS
- ▁FACES
- ▁CALLING
- ▁HOLIDAY
- ▁HOLE
- ▁MILLION
- ▁DELIVER
- ▁10
- ▁TAXI
- ▁HASN
- ▁MINDS
- ▁DONALD
- ▁MISTAKES
- ▁SPRING
- ▁MENTION
- ▁NEITHER
- ▁TOWEL
- ▁BEANS
- ▁WILLIAM
- ▁BRIGHT
- ▁STOMACH
- ▁CANDIES
- ▁BURGERS
- ▁FEAR
- ▁DECIDE
- ▁FEVER
- ▁FANS
- ▁STUDIO
- ▁LIAR
- ▁BREAKING
- ▁SLEPT
- ▁TAIL
- ▁BURGER
- ▁MOVIES
- ▁SMOKE
- ▁DANIEL
- ▁WAITER
- ▁PENCILS
- ▁CROSS
- ▁KOREA
- ▁GUARD
- ▁LEARNING
- ▁SUBWAY
- ▁CARS
- ▁SKIP
- ▁MIX
- ▁JEANS
- ▁LIST
- ▁POST
- ▁TRAVEL
- ▁BORROWED
- ▁AWESOME
- ▁RECORDER
- ▁FLOUR
- ▁COW
- ▁CAMPING
- ▁DRIVING
- ▁FELT
- ▁WINNER
- ▁CHARACTER
- ▁BALLOON
- ▁RIDDLE
- W
- FUL
- ▁NECKLACE
- ▁GLOVES
- ▁CHANGING
- ▁CRACKED
- ▁DROPPED
- ▁ROBERT
- ▁BAKERY
- ▁GRILL
- ▁INVITED
- ▁LAND
- ▁PORK
- ▁TELEPHONE
- ▁SKI
- ▁GUEST
- ▁AMBER
- ▁SHARP
- ▁KITE
- ▁DELI
- ▁MART
- ANNA
- ▁CIRCLE
- ▁FLYING
- ▁SHAKE
- ▁DANCER
- ▁POLICEMAN
- ▁DESSERT
- ▁SHOCK
- ▁BLOOD
- ▁MENU
- ▁BUMP
- ▁NOVEL
- ▁SKIN
- ▁SHOULDERS
- ▁MICHELLE
- ▁CROSSED
- ▁TICKETS
- ▁DRANK
- ▁OUTFIT
- ▁LAKE
- ▁PAINTER
- ▁ALIEN
- ▁RAINBOW
- ▁WORE
- ▁BAR
- ▁BROTHERS
- ▁DISH
- ▁SIMILAR
- ▁DISPLAY
- ▁GIRAFFE
- ▁FANCY
- ▁THIEF
- ▁HALLWAY
- ▁WAVE
- ▁CARROTS
- PE
- ▁ELDER
- ▁SOMEBODY
- ▁TRAFFIC
- ▁ACTOR
- ▁RUMORS
- ▁CHOSE
- ▁CAUS
- ▁DRESSED
- ▁ROSE
- ▁LYING
- ▁PANDA
- ▁PEAR
- ▁SUGGEST
- ▁DECISION
- ▁NOISES
- ▁TAKEN
- ▁GARLIC
- ▁CHINESE
- ▁ITCHY
- ▁SWORD
- ▁WAITED
- ▁NONE
- ▁SIZE
- ▁ACCEPT
- ▁CAPTAIN
- ▁GRAY
- ▁IDOL
- ▁SMALLER
- ▁USUAL
- ▁THOUSAND
- ▁LONELY
- ▁RETURNED
- ▁JENNY
- ▁PRACTICED
- ▁NEEDED
- ▁PAIN
- ▁RAP
- ▁THIN
- ▁EVERYWHERE
- ▁SUIT
- ▁BUSH
- ▁SON
- ▁COMPLIMENTS
- ▁FAILED
- ▁RUG
- ▁PAID
- ▁MANGO
- ▁BOYFRIEND
- ▁SCARF
- ELA
- ▁CROWD
- ▁ONLINE
- ▁GREW
- ▁SOCKS
- ▁SEAGULLS
- ▁USING
- ▁MELTED
- ▁OIL
- ▁ADULTS
- ▁KATE
- ▁WHISTLING
- ▁PRAY
- ▁POOR
- ▁SAUCE
- ▁PACKED
- ▁HATS
- ▁BUYING
- ▁AGO
- ▁SCIENCE
- ▁TUNNEL
- ▁DRESSING
- ▁MISSING
- ▁FESTIVAL
- ▁THURSDAY
- ▁PAIR
- ▁SITTING
- ▁SUITCASE
- ▁SHAPES
- ▁WILLY
- ▁HUGE
- ▁SHOUTED
- EVER
- ▁FAIR
- ▁TASTES
- ▁CAFETERIA
- ▁BINGO
- ▁BEGINS
- ▁DOLLAR
- ▁GRILLING
- ▁ALIVE
- ▁DINO
- ▁LIFT
- ▁TOP
- ION
- ▁STUFF
- ▁FROZEN
- ▁ACROSS
- ▁SEOUL
- ▁FRIES
- ▁TAUGHT
- ▁VIDEO
- ▁CREDIT
- ▁HAPPENS
- ▁RACE
- ▁TOUR
- ▁SPAGHETTI
- ▁SWING
- ▁INVITATION
- ▁COUNTRYSIDE
- ▁STAIRS
- ▁HIGHER
- ▁RANGER
- BAG
- ▁PULLED
- ▁LIPSTICK
- ▁VALLEY
- ▁NAP
- ▁FUTURE
- ▁SILENT
- ▁SPEAKER
- ▁GIVEN
- ▁JUMPING
- ▁AUTUMN
- ▁HOLDING
- ▁BOB
- ▁PLANNING
- ▁SUPPOSE
- ▁CLUES
- ▁ANSWERED
- ▁STICK
- ▁WASHED
- ▁CURLY
- ▁RUINED
- ▁SMILING
- ▁UNHAPPY
- ▁KIMBAP
- ▁CAUSE
- ▁CHUNKMONS
- ▁REPEAT
- STOOD
- ▁8
- ▁SHEEP
- ▁LOUDLY
- ▁SLIDE
- ▁KING
- ▁LIME
- ▁SKATING
- ▁SERVE
- ▁SAND
- ▁POWER
- ▁MUSICIANS
- ▁RESTROOM
- ▁SOMEDAY
- ▁GYM
- ▁GOD
- ▁COOKIE
- ▁NUMBERS
- ▁WARNING
- ▁CLASSMATE
- ▁COMPLAIN
- ▁LAUGHED
- ▁BEES
- ▁SAFELY
- ▁DESIGNER
- ▁ORANGES
- B
- ▁RETURNS
- ▁SPEAKING
- ▁GINA
- ▁MARTI
- ▁FEELINGS
- MAN
- ▁TULIP
- ▁BAZAAR
- ▁EMAIL
- ▁STRAWBERRY
- ▁PRESS
- ▁SALT
- ▁PHEW
- ▁COWS
- ▁ENTRANCE
- ▁LEAF
- ▁PAN
- ▁SOUR
- ▁DISEASE
- ▁OPENED
- ▁LUGGAGE
- ▁SWIMSUIT
- ▁PASSED
- ▁ALISON
- ▁SHOVELS
- ▁SENTENCES
- ▁GROUND
- ▁STAYING
- ▁SALES
- ▁JAM
- ▁WRAP
- ▁LATELY
- ▁SHRIMP
- ▁TWELVE
- ▁CHEAPER
- ▁CHECKING
- ▁SEAWEED
- ▁LO
- ▁TURTLES
- ▁DNN
- ▁WHE
- ▁ACT
- ▁LIZARD
- ▁SUCCEED
- ▁STRING
- ▁BASKET
- ▁HINT
- ▁VEGETABLES
- ▁FOOL
- ▁SHOT
- ▁ADULT
- ▁GREG
- ▁TASTY
- ▁FARM
- ▁LIPS
- ▁STARFISH
- ▁NAILS
- C
- ▁FR
- ▁TEARS
- ▁SUPERSTAR
- ▁CLEANS
- ▁HEAT
- ▁SILLY
- ▁WIG
- ▁BELLA
- WOKE
- ▁5
- ▁BOYS
- IVA
- ▁IMAGINE
- ▁LAUGHING
- ▁WASHING
- ▁FLAT
- ▁STICKERS
- ▁PRETTIER
- ▁KILL
- ▁FLIGHT
- ▁WOMEN
- ▁MOMMY
- ▁CAMP
- ▁MEMBERS
- ▁CUSTOMER
- ▁E
- ▁SINGERS
- 'ON'
- ▁CONTROL
- ▁TIGER
- ▁ZEBRA
- ▁IMPOSSIBLE
- ▁CONSOLE
- ▁CLUE
- ▁FOLD
- ▁BEE
- ▁ANDY
- ▁SEATS
- ▁POUND
- ▁SANG
- ▁DIAMOND
- ▁BATS
- ▁ARTIST
- ▁BABIES
- ▁GARAGE
- ▁INSTEAD
- ▁OLDFASHION
- ▁GIFTS
- ▁RODE
- BIG
- ▁MOUNTAINS
- ▁THUNDER
- ▁DONKEY
- ▁PIGEON
- ROOM
- ▁WORSE
- ▁HAMBURGERS
- ▁ERASER
- ▁TAMBOURINE
- ▁BREATH
- ▁ANNOYED
- ▁HALLOWEEN
- ▁KNOCK
- ▁STUPID
- ▁BANDAGE
- ▁PINEAPPLE
- OUT
- ▁SALTY
- ▁POTATO
- ▁MILES
- ▁COMMENT
- ▁TREATED
- ▁EAR
- ▁SLEDDING
- ▁VIOLET
- ▁BOTTLES
- ▁BRILLIANT
- ▁AUNTIE
- ▁SPEND
- ▁REACH
- ▁PAYING
- ▁APOLOGIZE
- ▁CORNER
- ▁FORGIVE
- ▁RELIEF
- ▁BEHAVE
- ▁DIE
- ▁PRETTIEST
- ▁H
- ▁HEN
- ▁POUR
- ▁NEEDLE
- ▁WORRIES
- ▁LARGER
- ▁CRAZY
- TYFIVE
- ▁DISCOUNT
- ▁HEADED
- ▁TWENTYFIVE
- ▁SOMETIME
- ▁REPORTER
- ▁FEED
- ▁KIMCHI
- ▁TENNIS
- ▁DOLPHIN
- ▁SUNGLASSES
- ▁THREW
- ▁COUNTRY
- ▁HUSBAND
- ▁JAPAN
- ▁TOMATOES
- ▁OK
- ▁POET
- ▁LUKE
- ▁LEND
- ▁LOWER
- ▁SHOVEL
- ▁AMERICA
- ▁BLOSSOMS
- OH
- K
- ▁SAFETY
- TALK
- ▁ASLEEP
- ▁MINER
- ▁PERIOD
- ▁STORYBOOK
- ▁BOWLS
- ▁DOUBT
- ▁MEMORY
- ▁SKINNY
- ▁EARTHQUAKE
- ▁2
- ▁BALLS
- ▁POTATOES
- ▁TROUSERS
- ▁WAR
- ▁FUR
- ▁RUMOR
- ▁CONGRATULATIONS
- ▁EASYGOING
- ▁NURSE
- ▁FLIES
- ▁GROWING
- ▁SMILES
- ▁CHOICE
- ▁ERASE
- ▁COMFORTABLE
- ▁GUIDE
- ▁PE
- ▁CLEVER
- ▁PEACE
- ▁AFTERSCHOOL
- ▁SOAP
- ▁POPCORN
- ▁SUNBLOCK
- ▁INVITE
- ▁AWAKE
- ▁FEMALE
- ▁HIKING
- ▁FOLLOWED
- ▁BUMPER
- ▁FILLED
- ▁HIPPO
- ▁COMEDIAN
- ▁SILK
- ▁COST
- IES
- ▁AWFUL
- ▁SIBLING
- ▁PIES
- ▁BURNING
- ▁CRASH
- ZIPPED
- ▁SPACE
- ▁LYRICS
- ▁HANDMADE
- ▁PER
- ▁ROUGH
- ▁THROWING
- ▁STATIONERY
- ▁WORM
- ▁PAGE
- ▁CLASSMATES
- ▁EXAM
- ▁FINAL
- ▁BLOW
- ▁CHINA
- U
- TH
- ▁BATTER
- ▁HONEY
- ▁MISTAKEN
- ▁DEPARTMENT
- GREAT
- ▁SHIRTS
- ▁COMPETITION
- ▁YOGURT
- MBER
- ▁DRINKS
- ▁WOLF
- ▁ISLAND
- ▁GROCER
- ▁SHARON
- ▁BREATHE
- ▁ANNOYING
- ▁LIED
- ▁SPA
- ▁KANGAROOS
- ▁ALIKE
- ▁PENGUIN
- ▁BRIGHTCOLORED
- ▁4
- ▁MESSAGES
- ▁INVENTION
- ▁WIPE
- BIRD
- ▁PRECIOUS
- ▁FLEW
- ▁CH
- ▁APART
- ▁MIDNIGHT
- ▁SPEN
- ▁SHELLS
- ▁GIN
- ▁NATURAL
- ▁THIRD
- ▁BADLY
- ▁PLATES
- ▁JOSHUA
- ▁MIDDLE
- ▁SWEAT
- ▁TOES
- ▁TIP
- ▁TEASE
- ▁BOOKSHOP
- ▁COUGHING
- ▁GUN
- ▁WASTE
- UMOR
- AR
- ▁SPREAD
- ▁GOAT
- ▁SPROUTS
- ▁BALLET
- ▁SNAKES
- ▁SCRATCHED
- ▁AMONG
- DANGER
- KGO
- NISH
- ▁FEE
- ▁JANE
- ▁TEMPER
- ▁CROWDED
- ▁BONO
- ▁CHEF
- ▁SAMPLE
- ▁LIONS
- ▁RULES
- ▁DREW
- ▁WORTH
- ▁MAGICIAN
- ▁GLUE
- ▁TOUGH
- ▁TOUCHE
- ▁TUNA
- ▁BAKE
- ▁LAUGHTER
- ▁HALF
- ▁HELMET
- ▁UH
- ▁COPIES
- ▁DIFFERENCE
- ▁FORK
- ▁STARTING
- ▁CRIES
- ▁SPROUT
- SNOW
- ▁SCARE
- ▁DRUMS
- ▁PHANTOPIA
- ▁VOUCHER
- ▁FARMER
- ▁CHANGES
- ▁SPILL
- AN
- ▁COMPLETELY
- ▁PRACTICES
- CHAIR
- ▁MISSE
- ▁RACHEL
- ▁SEEK
- EST
- ▁SISTERS
- ▁BLAME
- ▁PACK
- ▁BOIL
- ▁REQUEST
- ▁SH
- ▁WIRE
- ▁POT
- ▁ONION
- ▁CLOSER
- ▁MICE
- ▁SCRATCH
- ▁DUCKS
- THANK
- ▁RECEIVE
- ▁CABBAGE
- ▁SEEDS
- ▁JEJU
- ▁SUDDENLY
- RAY
- ▁KIWI
- ▁POWDER
- ERRY
- ▁MESSY
- ▁RID
- ▁CHAMPION
- ▁ARGUE
- ▁RECIPE
- ▁MICROPHONE
- ▁SCOLDED
- TRY
- ▁STRONGER
- ▁EXPECT
- ▁WEEKS
- AKER
- ▁JUMPED
- ▁RAINS
- ▁OREPHIA
- ▁PIGS
- LOSING
- ▁PRAYING
- ▁DUE
- ▁SOUTH
- ▁PUNCH
- ▁CREATIVE
- ▁FINISHING
- ▁HARMONI
- ▁CLOWN
- ▁SALON
- ▁SINK
- H
- ▁TOOL
- ▁ALARM
- VISION
- GY
- ▁FAIL
- ▁DRAWER
- ▁HAIRBAND
- ▁X
- ▁ARTICLES
- ▁DEEP
- ▁EARLIER
- ▁EXTRA
- ▁DOWNTOWN
- ▁LEFTHAND
- PTER
- ▁NOODLES
- ▁CONSIDER
- ▁ACCOUNT
- ▁DEER
- ▁SEAN
- RABBITS
- TY
- ▁CREAMS
- ▁LUCY
- ▁BOUN
- ▁HORNS
- EMENT
- ▁NOON
- ▁SMILED
- ▁NINETEEN
- ▁TURNS
- ▁MUFFLER
- ▁ROAR
- ▁HARDLY
- ▁SPELLED
- ▁SPOTS
- ▁SHORTS
- ▁JUMPS
- ▁RECENTLY
- ▁STOLEN
- ▁WITHIN
- ▁ENGLAND
- ▁PENDANT
- ▁MARY
- ▁AMUS
- ▁SERIOUSLY
- ▁FALLS
- ▁SPOONS
- ▁SAVED
- ▁STOLE
- ▁STUCK
- ▁G
- ▁DUMPLINGS
- ▁GERMAN
- ▁PLACES
- ▁OCARINA
- ▁QUEENSTEIN
- ▁BRANDON
- ▁DWARFS
- ▁TOFU
- ▁SPRAY
- PARD
- ▁CROSSING
- ▁PIGEONS
- ▁NOTICE
- CE
- LTY
- ▁BASEMENT
- ▁TABLET
- ▁COUPONS
- ▁PROGRAM
- ▁SOCK
- ▁GUI
- ▁NUT
- ▁OLIVE
- ▁PREFER
- ▁MUSHROOM
- ▁FIGHTING
- ▁DENERGY
- ▁STORAGE
- ▁POLITE
- IST
- ▁KICKBOARDS
- GAGE
- ▁DROWN
- ▁MANAGE
- ▁DRIVER
- P
- ▁WEEKENDS
- ▁SHOULDER
- ▁MUD
- ▁SEVENTY
- ALLY
- ▁POSTCARD
- ▁PIECES
- ▁HICCUPS
- ▁CHARACTERS
- ▁CLEANING
- ▁DIS
- ▁JG
- ▁JOSEPH
- ▁TITLE
- ▁CDS
- ▁BOSTON
- ▁BRACELET
- ▁PERMISSION
- ▁STEW
- ▁RAT
- ▁SKATE
- ▁CHEST
- ▁FOOT
- ▁CLIMB
- ▁AUDIENCE
- ▁DUFAR
- ▁GRANDPARENTS
- ▁FIT
- ▁TOUCHING
- ▁ELEPHANTS
- ▁TSHIRTS
- ▁APPOINTMENT
- ▁FOREVER
- ▁STARVING
- ▁LESSONS
- ▁COUPLE
- ▁TOTO
- ▁DRINKING
- ▁ARRIVE
- ▁GREE
- ▁SPOT
- ▁HELD
- ▁EARTH
- ▁DAUGHTER
- ▁SLICE
- ▁CASTLE
- ▁FEEDING
- ▁COVERED
- ▁FAM
- ▁AGE
- ▁AUSTIN
- ▁DEAR
- ▁NATI
- ▁CELEBRATE
- ▁MEATBALLS
- ▁STRETCH
- ▁SOLVE
- ▁USEFUL
- ▁SCAR
- DDING
- ▁ALLERG
- ▁RINGING
- ▁SAILING
- ▁SNOWING
- ▁LATEST
- ▁LIES
- ▁ACADEMIES
- ▁MUSICIAN
- ▁STA
- ▁FROGS
- ▁STOMP
- ▁KEYBOARD
- ▁FAIRY
- ▁CLAP
- ▁HAM
- ▁TOWARDS
- ▁RESERVATIONS
- ▁SHOUT
- SORRY
- ▁PUPPIES
- ▁WEAK
- ▁ORIGINAL
- ▁RESPECT
- ▁TABLES
- ▁COMPUTERS
- ▁TOWELS
- ▁CRAFTSMEN
- ▁ELE
- ▁REPAIRED
- ▁PRINT
- ▁BLOOM
- ▁WISELY
- ▁SCOLD
- ▁TWINKL
- ▁CANCEL
- ▁KIM
- ▁STAINED
- ▁LAP
- ▁DRI
- ▁SHARK
- ▁KANGAROO
- MENTARY
- THEY
- ▁DALLAS
- ▁SEESAW
- ▁WHISPER
- CAL
- ▁DWARF
- ▁SUNDAYS
- ALK
- ▁DOUBLE
- ▁SHAKING
- ▁PREPAR
- ▁YOYO
- ▁SKILLS
- ▁OCTOPUS
- ▁INSTRUMENTS
- ▁MAIL
- ▁ALIENS
- ▁JESSI
- ▁CHERRY
- ▁INCONVENIENCE
- ▁CERTAIN
- ▁BEEF
- CON
- 'OFF'
- ▁GATHERED
- ▁PRODUCTS
- CONVENIENCE
- ▁RESTAURANTS
- ▁MONKEYS
- ▁FIGURE
- ▁QUICK
- ▁GAIN
- ▁PENALTY
- ▁INLINE
- ▁INTRODUCE
- ▁OVERSLEPT
- ▁POL
- ▁HOWEVER
- ▁GORILLA
- ▁MEMBER
- ▁PLU
- ▁ANGER
- ▁AQUARIUM
- ▁GAS
- ELY
- ▁TIES
- ▁PUNISHED
- ▁CUCUMBERS
- ▁TINY
- ▁RISE
- ▁GHOSTS
- ▁WIFE
- MOND
- ▁RARE
- ▁BARN
- ▁SMELLY
- GAN
- ▁REASONS
- ▁BURNED
- ▁ANNOUNCE
- ▁CAPSULES
- ▁PICNIC
- ▁GLOVE
- FF
- RANCE
- ▁TREAT
- ▁JOG
- ▁BULLS
- ▁JJAKGUNG
- ▁PROVE
- ▁BAGS
- ▁RUDOLPH
- ▁MC
- ▁TRICKS
- RIOR
- ”
- ▁HAPPILY
- ▁REMIND
- ▁DIVER
- BE
- ▁HATES
- ▁SPOON
- ▁SIZES
- ▁THROAT
- ▁UN
- CRAFTS
- ▁BRIDGE
- ▁CONFUSED
- DONALD
- KEEPER
- ▁SIBLINGS
- ▁DENNIS
- ▁EMBARRASSED
- ▁PATRICK
- DWARFS
- ▁PREGNANT
- ▁VOTE
- ▁WHIPPED
- ▁10000
- ▁SUPPORT
- ▁TOOTH
- ▁STANDING
- ▁CLOSET
- ▁NEEDLES
- ▁SWEEP
- ▁RAISED
- ▁PEE
- ▁CONTACT
- ▁JEALOUS
- ▁SURVEY
- BOX
- ▁CROSSWALK
- ▁WALKING
- ▁SOP
- ▁SITE
- ▁OWE
- ▁FOURTEEN
- ▁PLANTING
- ▁CHANNELS
- ▁WIGGL
- ▁OURSELVES
- ▁SCENE
- ▁BAS
- ▁LETTUCE
- ▁NICKNAME
- ▁GRABB
- ▁ELEVATOR
- ▁COP
- ▁FALLING
- ▁DESERVE
- ▁FILM
- ▁SOPHOMORE
- ▁WOUND
- ▁PROTEST
- ▁PEACHES
- ▁CHILL
- ▁COURT
- ▁ROOF
- ▁CHARGE
- ▁FINGER
- ▁HANBOK
- ▁TAPDANCE
- ▁JAPANESE
- ▁MELON
- ▁BATTLE
- ▁LEAS
- ▁PARTS
- BATHING
- ▁CRUNCHY
- ▁PAUL
- ▁WHISTLE
- ▁CAKES
- ▁HEAL
- ▁SHELL
- ▁GUM
- ▁CARPENTER
- ▁HEAVILY
- ▁N
- ▁LEMONS
- ▁HARDER
- ▁ROW
- ▁STEAM
- ▁STUDIES
- ▁LOTTERY
- ▁BITTER
- ▁MOW
- ▁EATEN
- ▁SPORT
- ▁SHORTER
- ▁STEAL
- ▁GRADUATE
- ▁PUZZLE
- ▁CEREMONY
- ▁RAINCOAT
- ▁KISS
- HAP
- WAY
- ▁DEPART
- ▁LANGUAGE
- ▁BITTEN
- ▁BUSAN
- ▁L
- ▁TIGHT
- ▁BELOW
- ▁PERFECTLY
- KE
- ▁NATURE
- ▁MISUNDERST
- ▁CLOUD
- ▁DRAG
- ▁CARTOON
- ▁COCONUT
- ▁GOLF
- ▁THIRTEEN
- ▁DYING
- ▁PETE
- ▁MALL
- ▁BIN
- ICAL
- ▁ALIB
- ▁BREEZE
- ▁FRENCH
- ▁DATING
- ROW
- ▁WATERING
- ARD
- ▁DESERT
- ▁PRAISE
- ▁INTERNET
- ▁STRICT
- ▁MOSQUITOES
- TLE
- ▁SKILL
- ▁BEHAV
- ▁KTX
- ▁LONDON
- ▁TASTING
- ▁VAN
- ▁COUGHED
- ▁NICELY
- ▁HARM
- ▁BOOKSHELF
- ▁CRICKET
- ▁EDGE
- ▁PILLOW
- ▁RECTANGLE
- ▁STRESS
- ▁FOOTBALL
- ▁LAW
- ▁CHOPSTICKS
- WHAT
- ▁TWINS
- ▁AUSTRALIA
- ▁LAMB
- ▁MAYO
- ▁DESIGN
- ▁BLEW
- ▁GLORY
- ▁ROCKCLIMBING
- ▁DUTY
- ▁ENTERTAINMENT
- ▁THEMSELVES
- ▁YOG
- ▁BUCKET
- ▁BIRTH
- ▁FALSE
- ▁PATTERN
- ▁THREAD
- ▁SOLDIER
- ▁BATTERY
- ▁KNEES
- ▁HEADS
- ▁DELIVERED
- ROUTE
- ▁SIMPLE
- ▁WATERFALL
- ▁SWITCH
- ▁EFFORT
- ▁UNUSUAL
- ▁SLIPPED
- ▁REG
- ▁SUITS
- ▁CHANNEL
- ▁MINI
- ▁PLASTIC
- ▁RECOMMEND
- ▁RUBBER
- ▁THANKFUL
- ▁ROLL
- ▁SOLV
- ▁CLAPS
- ▁BUD
- ▁CINEMA
- ▁SHELF
- ▁LOSS
- ▁WOMANS
- ▁CANADA
- ▁EXPRESS
- ▁SHARING
- ▁LOOSEN
- ▁CHOCO
- ▁RUNNY
- ▁REPL
- ▁BOWL
- ▁FULLY
- ▁SOMEHOW
- ▁UNIQUE
- ▁CARES
- ▁NOODLE
- ▁JETLAG
- ▁LAPTOP
- ▁TOOTHPASTE
- ▁JON
- ▁AIRPORT
- ▁JOO
- YER
- ▁CAP
- ▁HOLLY
- ▁JOHNSON
- ▁ZERO
- ▁LEADER
- ▁OX
- ▁SQUEEZE
- PY
- GET
- ▁FIN
- ▁ZIP
- ▁SEPTEMBER
- ▁TEMPERATURE
- THIRTY
- ▁GOODLOOKING
- ▁GUAR
- ANTEE
- ▁LOG
- ▁WILD
- ▁BOOTH
- ▁PEPPERS
- ▁FORGOTTEN
- BALL
- ▁AB
- CALORIE
- ▁POLICY
- ICO
- ▁INCLUDED
- ▁LIGHTEN
- ▁BLAMED
- ▁LONGTIME
- OOD
- ▁JEAN
- ▁DECK
- ▁MANNER
- ALTH
- ▁PERSONALLY
- TRUCK
- PT
- ▁GUT
- ▁CRASHED
- ▁FLO
- ▁REACT
- ▁ABSENT
- KYO
- ▁BLUSH
- ▁DONATE
- DOCK
- ▁COMPLAINING
- ▁DESCRI
- ▁GEORG
- ▁RECOVER
- ▁WALNUT
- ▁LUNG
- ▁BUDDY
- ENSE
- ▁PASSES
- ▁PLUM
- HALF
- ▁SE
- ▁TURTLE
- ▁FRANC
- ▁KOALA
- ▁TURKEY
- ▁CARPET
- ▁ANYWHERE
- ▁R
- ▁SKIING
- ▁FOCUS
- ▁HARV
- ▁JANUARY
- ▁PRESIDENT
- ▁TWENTYONE
- ▁WRESTLE
- ▁CANCER
- ▁CHEATING
- ▁HOMEMADE
- ▁WEEKDAY
- ▁K
- THER
- ▁DREAMS
- ▁APPRECIATE
- ▁BRAIN
- ▁SAUSAGES
- SOMETHING
- GAR
- ▁SMOOTH
- ▁SLIM
- ▁FENCE
- JURY
- LIES
- ▁SPIDERS
- EADLINE
- EVEREST
- ▁SCORES
- ▁JOKING
- ▁REJECT
- ▁STEPMOTHER
- ▁CRIM
- ▁DIGGING
- ▁QUEEN
- ▁MALE
- ▁SNORES
- ▁EXPLAINED
- ▁HOUSEWORK
- ▁BEDTIME
- BEAT
- WORKING
- ▁SMELLING
- ▁GRAPE
- ▁INSTRUCTIONS
- ▁SUNSCREEN
- ▁WORKDAY
- ▁HOLES
- ATER
- UP
- RIDA
- ▁VINE
- ▁HERSELF
- ▁NIGHTMARE
- ▁SNAP
- ▁INSU
- ▁BURNS
- GIV
- ▁MOUNT
- ▁NEGATIVE
- ▁ADVANTAGE
- ▁DIFFICULTIES
- ▁7
- ▁REMAINS
- CHECK
- ▁TRAVELING
- ▁IMAGIN
- G
- ▁BENNY
- ▁JOHN
- ▁ATHLET
- ▁COOPE
- ▁DICTIONARY
- ▁HAPPINESS
- ▁RAPPER
- ▁SLIPPERY
- ▁SUNRISE
- ▁TAPDANCING
- ORABLE
- ▁NOTICING
- ▁WAITLIST
- ▁CUCUMBER
- FTH
- ▁GUESTS
- ▁COLLEGE
- ▁STOCK
- HH
- ▁TALE
- POP
- ▁MEXIC
- ▁FREEZER
- ▁REFUSE
- ▁SWIMMER
- ▁THOUGHTFUL
- DIVING
- WORKED
- ▁COURAGE
- ▁ERRANDS
- ▁LISTENED
- ▁GRUM
- ▁WEB
- ▁TWEL
- GED
- ▁CABIN
- ▁REHEARSAL
- ▁SKETCHBOOK
- ▁DAYCARE
- ▁PARTIES
- OBBY
- ▁SEAL
- WHERE
- ▁ROSES
- INE
- ▁ACCIDENT
- ▁PERSONALITY
- ▁SPECIFIC
- ▁RINGS
- ▁BLOOMED
- ▁AW
- YARD
- ▁ENTERED
- ▁BELLY
- ▁FUNNIER
- ▁NARROWMINDED
- USY
- ▁JOURNAL
- ▁JER
- ▁PRICES
- BREAK
- ▁BILLS
- SOLUT
- ▁11
- ▁REFILL
- ▁BAKED
- ▁ALPHABET
- CONNECTED
- ▁GOATS
- ▁WASHE
- ▁CHOP
- PHLE
- ▁NONSENSE
- ▁WADDL
- ▁PETS
- ▁DECORATE
- LUSH
- ▁FORGETTING
- ▁EMILY
- ▁BICYCLES
- ▁SHOWN
- ▁BUCK
- ▁BAIT
- ▁100
- ▁MOVER
- ▁HEL
- ▁WINNING
- ▁ROCKET
- ▁FANG
- ▁CA
- ▁DEPRESS
- ▁BEAUTY
- ▁DAILY
- ▁ENGINEER
- ▁MUFFIN
- ▁WRITER
- ▁OPINIONS
- ▁TRACKS
- ▁PAUSE
- ▁PUZZLED
- URE
- SEY
- ▁WRAPS
- ▁SOCIAL
- ▁GRADES
- ▁WARMLY
- ▁YOYOS
- ▁CHEW
- ▁BULGOGI
- ▁BARKING
- ▁SENTENCE
- ▁THOUGH
- ▁POO
- ALIAN
- ▁EVE
- ICED
- ▁RAIS
- ▁DISTURB
- ▁ITSELF
- ▁ORIGAMI
- ▁TISSUE
- ▁JOHNNY
- ▁BURN
- ▁COOKS
- ▁CANDLE
- ▁OBVIOUS
- ▁SANDPAPER
- ▁SUPPLIES
- ▁CHEWY
- ATIONS
- ▁FLAVOR
- ▁KIWIS
- ▁MASTER
- ▁YELLING
- ▁CUPS
- ▁BL
- LAINE
- ▁STIMULAT
- ▁TIRES
- ▁PRETEND
- ▁CLEANED
- ▁RUSSIA
- ▁FRECKLES
- ▁FART
- ▁CHEETAH
- ▁RUDE
- ▁TRAINS
- ▁LOTTE
- ▁PAGES
- ▁POSTCARDS
- ▁KEYS
- ME
- ▁BOOKSTORE
- ▁HOST
- ▁SHORTCUT
- ▁SHOOTS
- ▁OPINION
- ▁APRON
- ▁COPIED
- LLOWED
- ▁STICKY
- ▁PREPARE
- ▁HEADQUARTERS
- ▁REPAIRS
- ▁WHALE
- ▁POOP
- ▁RESEMBLE
- ▁SHARE
- ▁LOLL
- ▁EXERCISES
- ▁PROGRAMS
- ▁BLINK
- ▁FLAG
- ▁LAY
- ▁FASTEST
- ▁SNEEZE
- ▁ENDED
- J
- ▁MARKER
- HER
- ▁ASSISTANT
- ▁CURRY
- ▁PURSE
- ▁SLIPPERS
- ▁UNDERSTANDING
- ▁PIT
- ▁INDOOR
- ▁CROWN
- ▁CURIOUS
- ▁SYSTEM
- ▁CABLE
- ▁MOSQUITO
- ▁PHARMACY
- ▁EVERLAND
- ▁WINDOWS
- ▁BOOGER
- ▁TIRING
- ▁PAPERS
- ▁PEANUT
- ▁PARDON
- ▁AH
- ▁FOX
- ▁RESELL
- ▁RESULT
- ▁TWIST
- ▁SLED
- ▁TALLEST
- ▁RIBBONS
- ▁RECEI
- ▁SQUIRREL
- ▁CUTLET
- ▁HEIGHT
- ▁HURTING
- ▁TRAP
- ▁WRAPPER
- ITED
- ▁FRIGHTENED
- ▁PATIENT
- ▁CANCELED
- ▁SHELVE
- ▁NET
- OOPS
- ▁MESS
- ▁MERRY
- ▁PLATE
- ▁COMPLAINT
- ▁SITUATION
- ▁PARIS
- ▁STRAW
- ▁DIVIDE
- ▁GOAL
- ▁SHRIMPS
- X
- SPECIAL
- GOTTEN
- F
- ▁COLLECTED
- ▁AFFORD
- ▁HUNG
- ▁CHAMBER
- ▁AIRPLANE
- ▁CHA
- ▁WALLS
- ▁REGULAR
- ▁EXPERIENCE
- ▁PILOT
- ▁250
- ▁LEMONADE
- ▁FURTHER
- ▁RAC
- IN
- ▁SWALLOW
- ▁CLOSING
- ▁CLASSROOMS
- ACK
- ▁RENT
- ▁ADS
- ▁TENTH
- ▁FRY
- ▁HOTDOG
- ▁ANGEL
- ▁PEACH
- ▁HIDDEN
- ▁GOOSE
- ▁SMALLEST
- ▁ROCKS
- ▁COOKED
- ▁CORN
- ▁SIGNS
- ▁ANXIOUS
- ▁LIGHTNING
- ▁SNOWBALL
- ▁BESIDE
- ▁ANTS
- ▁ALLOWANCE
- ▁COUNTRIES
- ▁POUCH
- ▁SLIP
- ▁POEM
- ▁RAMEN
- ▁ROLLING
- ▁PATIENTS
- ▁SCREEN
- ▁PRESENTATION
- ▁CAST
- ▁FLUTE
- ▁HU
- ▁ZEBRAS
- ▁COMPARE
- ▁WIDE
- ▁FORSYTHIA
- ▁SENIOR
- ▁DONATED
- ▁FACTS
- RD
- ▁FOG
- ▁ROLE
- ▁PEARS
- ▁BUTTONS
- COME
- ▁HAIRCUT
- ONDE
- ▁ENV
- ▁CHASED
- THE
- '4'
- ▁TRACK
- ▁STRANGER
- ASOL
- ▁CHIN
- ▁PUBLI
- ▁DUN
- ▁JUNE
- ▁20
- ▁DOUGHNUT
- ▁DADDY
- PORT
- ▁EMBARRASSING
- ▁UNCOMFORTABLE
- ▁FOREHEAD
- ▁RELATIVES
- ▁DOODLE
- ▁GENTLEMAN
- ▁TAPE
- ▁BANKER
- ▁ACTRESS
- ▁SORT
- ▁REDESIGN
- ▁GRADERS
- ▁KICKING
- ▁LA
- UK
- ▁BARBECUING
- ▁BULLY
- RATE
- ▁JUN
- ▁KOREANS
- ▁CORPORATION
- ▁HEAVIE
- ▁IMPROVE
- ▁OCEAN
- ▁LG
- ▁LAYER
- ▁BRIGHTLY
- ▁CRABS
- ▁PAR
- ▁BLANK
- ▁CALENDAR
- ▁CROCODILE
- ▁SALARY
- ▁CHUSEOK
- ▁CUTEST
- ▁NOR
- ▁MYSTER
- ▁BEND
- ▁INCLUDE
- ▁EXCELLENT
- ▁PAINFUL
- ▁SKEWERS
- ▁CHEERING
- SIZE
- BELT
- RCH
- ▁PLEASANT
- ▁PATH
- ▁QUALITY
- ▁STINGS
- ▁REPAIRING
- ▁DELAY
- ▁RIDES
- ▁ELSA
- ▁SECURITY
- ▁TWENTIETH
- ▁PC
- AH
- ▁NOTES
- RAL
- ▁NORMAL
- ▁DIRECT
- ▁CENT
- ▁APOLOGY
- ▁GARBAGE
- ▁GEE
- ▁WATCHES
- ▁SCISSOR
- ▁CULT
- ▁ECONOMY
- ▁SEASHELL
- ▁HA
- ▁HORSES
- ▁WHEELS
- BYE
- ▁HABIT
- ▁VI
- OOKIE
- ▁BAKING
- ▁CHERISH
- ▁JESUS
- ▁KLEA
- ▁PARTICIPATE
- ▁NICER
- ▁LISTING
- ▁SUPP
- IELD
- ▁CRISPY
- ▁EYESIGHT
- ▁TWITCH
- ▁WORST
- ▁GREETING
- ▁DRYER
- ▁LINES
- ▁DEPRESSED
- RENT
- ▁ROLLS
- LAND
- ▁DOCUMENT
- ▁COCKROACH
- ▁TAX
- ▁LIBER
- ▁FRIGHT
- ▁GARDENVIEW
- ▁JAR
- ▁ONESELF
- ▁PELICAN
- ▁RUSH
- ▁BAKER
- ▁EXPLODED
- ▁CARNATIONS
- ▁BUBBLES
- ▁BREAKS
- ▁EUROPE
- ▁EXCHANGE
- ▁SMASH
- ▁TORONTO
- ▁CEO
- ▁BLEEDING
- ▁IMAGINED
- ▁KIL
- ▁POU
- ▁TAB
- ▁CRUS
- OGRAMS
- ▁ALASKA
- ▁FROWNED
- MAIL
- TWINKL
- ▁SINGLE
- ▁INVENT
- ▁ROD
- ▁EMERGENCY
- PORTER
- ▁COMB
- ▁HUG
- TI
- '...'
- SMITH
- ▁AVOID
- ▁JJAKKUNG
- ▁MATERIALS
- ▁LOSES
- ▁LU
- INA
- FREE
- ▁SERV
- ▁FLU
- ▁REEL
- ▁BACKPACK
- ▁REPRINT
- ▁SIXTEEN
- ▁ZENA
- ROL
- ▁AWARD
- ▁TENK
- ▁NETWORK
- ▁WORKER
- ▁REDUCE
- GUE
- ▁PROTECT
- ▁CONCERN
- ▁CRIMINAL
- ▁FIREFIGHTER
- ▁INCHEON
- ▁SUWON
- ▁VIEWER
- OVER
- ▁ELEVATORS
- OR
- ▁IMPRESSED
- ▁SHAME
- ▁STRAP
- ▁YIELD
- ▁WARNED
- ▁HANDOUT
- ▁LUNCHTIME
- URY
- IED
- AY
- WIFE
- GUN
- ▁ISSUE
- RRIE
- ▁SANDCASTLE
- ▁FIGURES
- ▁LOV
- ▁POKE
- ▁FREESTYLE
- ▁CHAIN
- ▁EVERYDAY
- OK
- ALY
- ▁RATING
- ▁SPIT
- ▁SAIL
- ▁AMBULANCE
- ▁ENORMOUS
- ▁SELFCONT
- ▁MEMORIZED
- ▁GIRAFFES
- ▁SNOWS
- ▁PLANTS
- ▁LEAD
- ▁EXHIBITION
- ▁FOUGHT
- ▁MARBLE
- 'YES'
- ▁PICKE
- ▁WRONGLY
- ▁HURR
- ▁CONVERSATION
- ▁DETAIL
- ▁WORRYING
- ▁SAVING
- ▁TU
- ▁SECRETLY
- AWAY
- ▁GROWS
- ▁CONTRA
- ▁SCRAMBLE
- BES
- ▁PROMISES
- ▁CHAIRS
- ▁GOGGLES
- ▁OTHERWISE
- ▁VICTOR
- ▁THORNS
- ▁WORTHWHILE
- ▁HIPPOS
- ▁TRICK
- ▁OBSERVATORY
- ▁SHAMPOO
- ▁COKE
- ▁DRAMA
- ▁DELAYED
- ▁GUTS
- ▁AZALEA
- ▁WRAPP
- TIE
- HEAD
- ▁BIGGEST
- ▁ENEMIES
- ▁PUMPKIN
- ▁DOCUMENTARY
- ▁ATOPY
- ▁COUGH
- ▁TOUCHED
- ▁AWARDS
- EWER
- VER
- ▁BEARS
- ▁CACTUS
- ▁LOCK
- ▁LIT
- ▁SKETCH
- ZEN
- ▁DRAGG
- ▁SQUEEZED
- ▁SCOT
- SHY
- ▁CALCULAT
- ▁APPEARED
- ▁RAINED
- ▁WINGS
- ▁CLOTH
- ▁DIG
- ▁DONGSENG
- ▁SPONGE
- ▁STUBBORN
- ▁WAIST
- ▁FLE
- ▁TAG
- CH
- ▁CR
- ▁UMBRELLAS
- ▁TOOTHBRUSH
- ▁POCKETS
- ▁PAJAMA
- ▁HALLA
- ▁GATHER
- ▁BOSS
- ▁DETERGENT
- ▁DOCUMENTS
- ▁GENEROUS
- ▁TOTAL
- ▁CURTAIN
- ▁PUDD
- ▁THICK
- NSIBLE
- ▁HOLIDAYS
- ▁TICKLES
- FLAVORED
- ▁COVID
- ▁GIFTWRAP
- ▁BLINKING
- ▁JUNG
- HOK
- LEANING
- ▁IDOLS
- ▁DRO
- ▁FOUNTAIN
- ▁PHYSIC
- ▁PRESCRIPTION
- ▁LATTE
- ▁TONGUE
- ▁NA
- WORLD
- ▁SURGERY
- ADLINE
- ▁STUFFY
- ▁WAFFLES
- ▁15
- ▁LOGO
- ▁SHORTCUTS
- ▁RESPECTED
- ▁INVENTIONS
- ▁ARTISTS
- RAFFI
- ▁FOSSIL
- ▁GOLDCREST
- ▁MALTESE
- UGGING
- ▁BUCKWHEAT
- ▁PROFESS
- ▁SQUID
- ▁CORRECTION
- IT
- LOOKING
- ▁GENIUS
- ▁WHALES
- ▁OPPA
- ▁DONKEYS
- ▁ELECTRIC
- ▁FAKE
- ▁JUNIOR
- ▁MEDAL
- ▁SONGPYEON
- ▁MO
- ▁LOCKED
- ▁MEMORIZE
- ▁DIZZY
- ▁CAMELS
- ▁Y
- ▁CARING
- ▁PERFORMANCE
- ▁ERRAND
- ▁STRIPE
- ▁SIL
- ▁REDESIGNED
- ▁TIPS
- SCRIPT
- ▁BISCUIT
- ▁TORN
- ▁BRUSHE
- ▁STREETS
- ▁RELIEVED
- ▁HOPS
- ESSER
- ▁INSTRUMENT
- ▁ADVANCE
- ▁GESTURE
- ▁MUGWORT
- ▁PROMOT
- ▁PIN
- ▁SHAD
- IONAL
- '72'
- ▁HEAVEN
- ▁SLOPE
- ▁HAIRDR
- YOU
- ▁OWNERS
- ▁PLANS
- ▁SUNFLOWERS
- ▁CHIMNEY
- ▁HIPHOP
- ▁FOURTH
- ▁C
- ▁COUNTS
- ▁BARK
- SCOPE
- ▁ATOPIC
- ▁DEATH
- ▁FORMALLY
- ▁TWIN
- ▁QUIETLY
- ▁TEAS
- ▁MIN
- ▁CE
- ▁DEPENDS
- ▁TRANSFERRED
- ▁HANDY
- ▁CLEARLY
- CHOCO
- ▁HOTDOGS
- ▁FROWN
- ▁RUB
- ▁PERFORM
- ▁ATTRACT
- ▁DUST
- ▁REVIEW
- ▁SIGNBOARD
- ▁ENDURE
- ▁RIDD
- CKED
- ▁CIRCLES
- ▁AIRPLANES
- ▁MI
- GING
- Q
- ▁YURI
- ▁30
- ▁OFFICERS
- ▁ALMONDS
- ▁SOLVED
- ▁WEREN
- ▁ALBUM
- ▁UNDERGROUND
- ▁WRINKLES
- IL
- ▁TALES
- SOKCHO
- ▁GROCERIES
- ▁RECEIV
- ▁BARE
- ▁PEEL
- ▁COCKROACHES
- ▁DEEPLY
- ▁STATIONS
- ▁DANCED
- ▁CHUBBY
- ▁SATURDAYS
- ▁WING
- ▁CRAFTSMAN
- ▁OCCASION
- ▁WINE
- ▁TELE
- ▁BLUETOOTH
- ▁DISAPPEARED
- ▁SUBM
- ▁FARTED
- ▁PREPARED
- LIST
- ▁CONDITION
- ▁PORTRAIT
- '23'
- ▁POINTS
- ▁TAMBOURINES
- ▁TEND
- ▁SELFISH
- ▁SUBJECT
- RUPTE
- ▁LICKING
- ▁WATERMELONS
- ▁DIES
- ▁BLOWING
- ▁SOIL
- NIFE
- ▁BLAND
- ▁RECYCLED
- ▁SIXTY
- ▁LENGTH
- ILING
- ▁SURVIVED
- ▁HABITS
- WANT
- ▁GRAND
- ▁SAVORY
- ▁APPLAUSE
- ▁APPLY
- ▁MEANER
- ▁DISEASES
- ▁FRUSTRATING
- ▁NOTIFICATION
- ▁CHEOMSEONGDAE
- ▁BADGE
- ▁ABOARD
- ▁DISNEYLAND
- ▁LEE
- ▁SHARPEN
- ▁KETTLES
- ▁HERESY
- ▁CRAM
- ▁BRONZE
- ▁HARSH
- ▁EBS
- ▁GREY
- ▁POSE
- ▁PICKLES
- ▁LEN
- ▁TIGERS
- ARY
- ▁CLAR
- ▁EDUCATION
- ▁NEIGH
- ▁ADDITION
- ▁REASONABLE
- ▁DUMPING
- ▁SPACES
- ▁LIGHTER
- ▁SPELLING
- Z
- ▁CATCHING
- ▁LEVEL
- ▁UPSTAIRS
- ▁RINK
- ▁HANDLE
- AVING
- ▁BOWED
- ▁BEAUTIFULLY
- ▁FARTS
- ▁BOLT
- ▁FAMILIAR
- BBLE
- DO
- ▁FILE
- ▁TREATMENT
- ▁PASTOR
- ▁EEK
- ▁BLOOMING
- CIAL
- TRAINED
- ▁APPEAR
- ▁KNEE
- ▁WHEEL
- RIAN
- ▁ATTEND
- ▁CONFESS
- ▁DVD
- ▁WITNESS
- ▁BATMAN
- ID
- ▁BANGS
- ▁YARD
- ▁LOTION
- ▁RECYCLE
- ▁PRI
- ▁BURDEN
- ▁SCRA
- ▁VEGETA
- ▁TOENAILS
- SUALLY
- ▁YAM
- FORD
- ▁FORMAL
- ▁POK
- ▁FROZE
- ▁MULTIPLICATION
- ▁SEJONG
- ▁TRIES
- ▁SUNSHINE
- ▁HERBS
- ▁STRIPES
- ▁CLIMBING
- ▁SKIPP
- FFE
- ▁DAMAGE
- ▁RIDICULOUS
- ▁QUACK
- ▁PINNOCHIO
- SIDE
- ▁STANDARD
- ▁TRADITION
- GIANT
- ▁YELL
- ▁SUPER
- ▁OVERREACT
- ▁PERFUME
- ▁UNDERCOOK
- BEC
- ▁MAPS
- ▁PARTNERS
- ▁SPINACH
- ▁TTEOKGUK
- ▁JAJANGMYEON
- ▁DIRECTLY
- VATE
- STEE
- ▁MOUSES
- ▁SNOWED
- ▁IGNORE
- GIFT
- ▁LOCKER
- ▁SURVIV
- ▁P
- BBLES
- DAIRY
- ▁TOOLS
- STAR
- LING
- ▁BB
- ▁ACCESSORIES
- ▁NINTENDO
- ▁BIBIMBAP
- ▁DERMATITIS
- ▁ANNOUNCED
- ▁LICK
- ▁AZALEAS
- ▁PEPPER
- VAS
- ▁BODIES
- ▁EXPAND
- PED
- FLOWING
- ▁MIXED
- ▁GROUP
- ▁SAUSAGE
- ▁CEREAL
- ▁EASIEST
- ▁OVERSLEEP
- ▁SATISF
- ▁150
- ▁BAY
- ▁DIP
- UN
- AK
- ▁COINS
- ▁SURPRISES
- ▁WAK
- OL
- ▁EVILDOING
- ▁EYEBROWS
- ▁HEADBAND
- ▁KETCHUP
- ▁PROPERLY
- ▁STRAWBERRIES
- ▁UNFORTUNATE
- ITY
- LIKE
- ONG
- ▁WISHES
- ▁CONSTRUCTION
- ▁RESEARCH
- ▁RIPPED
- ▁FOREIGNERS
- ▁SANDALS
- ▁GOLDEN
- ▁PERFORMANCES
- ▁STEALING
- HA
- ▁SPARE
- ▁KPOP
- ▁LEASH
- ▁TIGHTLY
- CM
- ▁COMME
- ▁500
- ▁ANCHOVIES
- ▁BANKBOOK
- ▁COVIDNINETEEN
- ▁DEFINIT
- ▁UPRIGHT
- ▁MISSION
- BAL
- PHONES
- HO
- ▁GENERAL
- ▁OVEN
- ▁MARCH
- V
- HU
- ▁GROWN
- ▁BROADCAST
- ▁GANGWONDO
- ▁REFRESHING
- ▁DICE
- ▁RACK
- ▁PERM
- ▁SUITCASES
- ▁16
- ▁ENVELOPE
- ▁HOOKED
- ▁ROOT
- ▁TEXT
- ▁CAGE
- GO
- ▁MUS
- ▁DOUGHNUTS
- ▁WASTING
- ▁BETIAN
- ▁PRESENTING
- ▁BRUISE
- ▁ALOUD
- ▁AUDITORIUM
- ▁BTS
- PLE
- RAISED
- MOTION
- ▁GENTLE
- ONIA
- ▁EASIER
- ▁FONDUE
- ▁SEASICK
- ▁VR
- ▁DOLPHINS
- ▁MATCHES
- UR
- ACHE
- ▁CICADAS
- ▁LEAN
- ▁REPORTS
- YING
- ▁CLOUDS
- ▁WOLVES
- ▁HEEL
- ▁FRESHMAN
- ▁SCREAMED
- ▁RELATIVE
- ARIN
- ▁BUR
- ▁PASTE
- ▁FRIENDLY
- ABLE
- ▁VISITING
- ▁INVIT
- ▁LOUDSPEAKERS
- ▁NNN
- ▁OINTMENT
- ▁SWAN
- CLES
- ▁GARDENING
- ▁HICCUP
- IM
- '0'
- ND
- BA
- ▁JULY
- ▁SEMESTER
- ▁SUSHI
- ▁UNIVERSE
- ▁TOSUN
- ▁PILLS
- ▁TAN
- ▁NEAT
- ▁FEATHER
- ▁ANNEX
- ▁PENGO
- ▁SICKNESS
- ▁CANDLES
- LO
- ▁SCRUB
- ▁SHOOT
- ▁TH
- ▁CRACK
- PLAIN
- ▁FRIDGE
- ▁ANSWERING
- ▁INDOORS
- ▁APOLOGIZED
- ▁COMEDIANS
- ▁WOR
- ▁SPIN
- ▁DRACULA
- ▁DRAGONFLIES
- ▁EXTINGUISHER
- ▁GRADUATION
- ▁LADIES
- ▁EX
- ▁PLANNED
- ▁50
- ▁MILLIONS
- ▁TANGERINES
- ▁DRAWN
- ▁CLEANER
- ▁DECORATIONS
- ▁SPI
- ▁VARI
- ▁DRAGONFLY
- ▁SCENT
- ▁GAYAGEUM
- ▁CL
- ▁MONTHS
- ▁PAJAMAS
- ▁RESTING
- ISE
- ▁BADGES
- WORK
- KY
- ▁ADORES
- ▁COLA
- ▁MOTOR
- ▁PRODUCE
- ▁THOROUGHLY
- ▁VOWELS
- ▁COMMON
- PING
- ▁SUNFLOWER
- ▁FOLDING
- ▁DECORAT
- '8'
- ▁SCREAM
- ▁CONNECT
- ▁AUGUST
- ▁PURPOSE
- ▁PIAN
- ▁CHIMNEYS
- ▁MONDAYS
- JU
- ▁BEETLE
- ▁PEED
- ▁INTEREST
- ▁BAN
- ▁SNOR
- ▁MA
- ▁SEW
- ▁COIN
- ▁HAN
- ▁ALPHABETS
- ▁TONKATSU
- ▁HOPEFULLY
- ▁ICECREAM
- ▁REGULARLY
- ▁GALBI
- ▁CHAS
- ▁REALIZE
- ▁WORKERS
- ▁BOATS
- ▁INTERRUPT
- ▁SUBTRACT
- ▁ORGANIZING
- ▁HISTORIC
- ▁POTTER
- ATION
- ▁CHARGER
- ▁BAL
- ▁SUNLIGHT
- ▁DYE
- ▁SHOELACES
- ▁EVENLY
- RY
- '30'
- BIKE
- ▁CRAWL
- ▁CHOOS
- ▁ROBBINS
- ▁SHOOK
- ▁SPLASH
- ASKIN
- ▁UNTIE
- YMP
- ▁STING
- IOUS
- ▁PA
- ▁CAROLS
- ▁SUDDEN
- ▁MACKEREL
- ▁NOSEBLEED
- ▁SCREW
- ▁HANOK
- TOMS
- ▁STRA
- DAY
- ▁RIBBON
- MILKY
- BEAN
- ▁TOMATO
- ▁NATIONAL
- ▁SPRITE
- ▁PANIX
- ▁WISE
- ZED
- ▁CHEWING
- ▁FOOTS
- ▁SHAKES
- ADA
- 'NO'
- ▁DIFFERENTLY
- SLEEVE
- ▁930
- ▁GYEONGJU
- ▁RAPUNZEL
- ▁ROMANTIC
- ▁FARTHER
- ▁CAPE
- IER
- ETY
- ▁HARDEST
- ▁TURNING
- ▁3000
- GENEROUS
- ▁BOO
- ▁ATTENTION
- ▁DWARVES
- ▁HAKNYEON
- ▁OUTDOOR
- ▁RESORT
- ▁SWOLLEN
- ▁PINCH
- ▁PURE
- STER
- ▁GRAB
- ▁BIO
- ▁HURRICANE
- ▁JUDGE
- ▁LANE
- ▁OINK
- ▁SPRAINED
- ▁THIEVES
- ▁TRAPPED
- BIL
- ▁RANCH
- ▁TWENTYTH
- ▁ANNE
- OLD
- NIGHT
- ▁HEIGHTS
- ▁BRICK
- ▁GRATEFUL
- ▁VITAMIN
- ▁HAMSTER
- ▁USELESS
- ▁INVENTOR
- ▁ULSAN
- ▁PRETENDING
- ▁PANDAS
- GGING
- UL
- AG
- COMING
- ▁HUNT
- ▁REMOVE
- ▁OCTOBER
- ▁SEPARATE
- ▁YAWN
- ▁PALE
- ▁UM
- ▁FLOATING
- ▁CO
- HAVE
- ▁SNOWY
- ▁SHOELACE
- GRAPHY
- ▁MELT
- ▁FISHBONE
- UG
- ▁CHIL
- ▁POOPED
- ▁YUT
- ▁PILL
- '0000'
- ▁SURVIVE
- ▁EXAMIN
- ▁TRU
- ▁BACKGROUND
- ▁BEGINNING
- ▁MACARONS
- ▁SURFING
- ▁VERANDA
- ▁ASSEMBLE
- ▁HANGUL
- ▁REACTION
- ▁DAUGHTERS
- MENT
- QUET
- RMALLY
- ANG
- ▁LID
- ▁RESERVATION
- SOON
- ▁FLIP
- CAN
- ▁JUICY
- ▁KINGDOM
- ▁SOCIETY
- ▁TADPOLE
- ▁JAMSIL
- ▁WI
- ▁GRADUATED
- ▁PRE
- ▁SCRATCHING
- ▁PO
- ▁APPEARS
- ILY
- FAT
- FOOD
- ▁DISAPPEAR
- ▁FAINT
- ▁FLOAT
- ▁RUBB
- ▁TRANSFER
- ▁COMFORT
- ▁BALLERINA
- ▁DESCRIPTION
- ▁GENTLY
- ▁HAPPIER
- ▁RINGTONE
- ▁ARGUING
- ▁CONDITIONER
- PM
- IET
- CU
- ▁EARTHQUAKES
- ▁CHICK
- ▁TR
- ▁TYPHOON
- ▁BUNS
- ▁RUNNER
- NDC
- ▁WAH
- ▁JELL
- ENDY
- ▁COMMU
- ▁FARMS
- ▁SLEEVES
- ▁BEETLES
- LOW
- ▁MEATBALL
- ALKIE
- ▁MAGNIF
- ▁CONNIE
- ▁NEIGHBOR
- ▁OPERA
- ▁PINOCCHIO
- ▁SHOEMAKER
- ▁CRAFT
- ▁ONESIX
- ▁FLOW
- WD
- HOO
- ▁PRESENTATIONS
- ▁CHIP
- ITE
- ▁ANIMAT
- ▁DUB
- ▁FLOOD
- ▁KAKAO
- ▁RESU
- ▁UNBELIEVABLE
- ▁GRIN
- ▁HEALTHIER
- ▁SIXTH
- ▁CHOSEN
- ▁LOSER
- ▁BLED
- REALLY
- ▁IGNOR
- ▁PRODUCT
- RIST
- ▁DISCOURAGED
- ▁DODGE
- ▁FORECAST
- ▁OWL
- ▁TREASURE
- ▁UNIFORM
- ▁LOCAT
- ▁TUBE
- DON
- ▁FOLDED
- ▁WEIGH
- ▁RUIN
- ▁CRUSH
- ▁PARAD
- ▁OBESE
- ▁ORGANIZE
- ▁PRINCIPAL
- ▁RATTLING
- ▁RESERVE
- ▁RHYM
- ▁SIP
- ▁UNDERWATER
- ▁TAEG
- ▁TRAVELLING
- ▁STACK
- ▁RI
- ▁BUNDLES
- YEAR
- SAME
- AND
- ▁CHEESECAKE
- ▁EPISODE
- ▁FAMILIES
- ▁FIFTH
- ▁RHINITIS
- ▁SAUNA
- NCHES
- ▁EXCE
- TIQUE
- ▁COMBO
- ▁STRINGS
- ▁COLORFUL
- ▁FLOWS
- ▁COOLEST
- ▁OPPAS
- ATING
- ATE
- ▁MELTS
- ▁CHOPSTICK
- ▁BRANCH
- ▁FRUSTRATED
- ▁GREASY
- ▁EXIST
- ▁WAVING
- ▁APP
- ▁SODA
- ▁FALLEN
- ▁PRO
- SHAPED
- NG
- ▁CONNECTED
- ▁12
- ▁BANDAID
- ▁DISTANCE
- ▁DRAIN
- ▁MEASURE
- ▁TEMPLE
- ▁WORKBOOK
- ▁EIGHTAM
- ▁WARN
- ▁BURNT
- BOARD
- ▁DE
- IFF
- RTH
- ▁MUSHROOMS
- ▁POWERFUL
- STICK
- ▁VOUCHERS
- ▁BLEED
- ▁BRAID
- ▁CREPE
- ▁HAWKING
- ▁FLAM
- ▁SCORE
- ▁RELEASED
- ▁TICKLED
- BU
- FISH
- ATIVE
- CLUSI
- ▁CLINIC
- ▁CROOKED
- ▁RELAY
- ▁SCOOTER
- ▁SEBASTIAN
- ▁SUFFER
- ▁TEENAGER
- ▁BATHHOUSE
- ▁WRIST
- ▁BAKERIES
- ▁BRANCHES
- ▁SAMYUKGU
- ▁SCU
- ENDER
- ▁INGREDIENTS
- ▁INVENTED
- ▁BOWING
- SSES
- WAR
- ▁PRESSED
- ▁SQUEEZ
- SIGNED
- WON
- ▁70
- ▁APPROACH
- ▁CHAPPED
- ▁DUMB
- ▁FREEZING
- ▁MAGNIFIER
- ENTIAL
- IE
- ▁CLOSELY
- ▁DIAPERS
- OUS
- ▁DIRT
- ▁CENTIMETER
- ▁FLOWERPOT
- ▁FOAM
- ▁POLITIC
- ▁PORRIDGE
- ▁PEDIATRICIAN
- ▁FIREWORKS
- ▁TROUBLEMAKER
- ▁PILLAR
- ▁EVACUATE
- ▁SILLA
- EUK
- ANDING
- ▁FAINTED
- ERMAN
- ▁SEAGULL
- ▁CHICKS
- ▁SWEATING
- INGO
- PAPER
- ▁AGREED
- ▁CLAPP
- VA
- ▁STRENGTH
- SOONGSIL
- ‘
- ▁CONVENIENT
- ▁DECEMBER
- ▁FORTUNATELY
- ▁FURNITURE
- ▁HAGWON
- ▁LOUNGE
- ▁MOKDONG
- ▁PALM
- ▁SPRINKLE
- ▁STIRFR
- RUNK
- ▁ANKLE
- ▁SELF
- ▁SEVENTH
- LESS
- ▁DIVING
- ADE
- ▁RANG
- SHINY
- WITH
- ▁BRAVELY
- ▁BADMINTON
- ▁BULGUKSA
- ▁KARAOKE
- ▁ADMIT
- ▁GINGER
- ▁LAID
- ▁SNOWBOARD
- ▁HOPPING
- ▁UDO
- ▁BULGING
- ▁CARP
- ▁FACT
- ▁GROUPS
- ▁ENTERING
- ▁RIP
- ▁MAR
- LOCK
- ▁JE
- ▁ADMISSION
- ▁CHRYSANTHEMUM
- ▁DIARIES
- ▁DISPOSABLE
- ▁LOACH
- ▁PARROT
- ▁SCULPTURE
- ▁TERRIF
- ▁VOLUME
- ▁REPRESENTATIVE
- ▁MEOW
- ▁CHEEK
- ▁JEJUDO
- ▁HARMFUL
- ▁BRUISED
- ▁MINERAL
- AINT
- ▁EDIT
- WARDS
- HY
- ▁VIEW
- ▁EXACT
- ROUGHT
- OCKPAPERSCISSORS
- ▁CHESTNUT
- ▁HAWAII
- ▁PIMPLES
- ▁REMOTE
- ▁SOLUTION
- ▁COMPETE
- ▁SOFTLY
- ▁BUNDLE
- ▁LIP
- ▁GRADER
- WOO
- RIS
- STORY
- DAYS
- COLORED
- FOR
- ▁COLLAPSE
- ▁STEPP
- ▁BRILL
- RSELVES
- ▁ACCORDING
- ▁BACON
- ▁BAEK
- ▁BUTTERFLIES
- ▁COSMOS
- ▁CYCLING
- ▁DISTRICT
- ▁ESTATE
- ▁HUMID
- ▁MERMAID
- ▁PAPRIKA
- ▁PHONICS
- ▁BELONG
- ▁YUKJANG
- ▁ANIMATION
- ▁FLIPP
- ▁DUMPLING
- ▁BLOSSOM
- UNG
- ▁EXPLORE
- ▁INSECTS
- ▁JI
- HEART
- GHTS
- ▁ASTRONAUT
- ▁BELLHAMMER
- ▁LICENSE
- ▁NEPTUNE
- ▁OPPOS
- ▁REFRIGERATOR
- ▁STONEBUSH
- ▁1000
- ▁APPLI
- ▁SUBTRACTION
- ▁HOOD
- ▁WIDER
- ▁BROOM
- ▁UNIVERSITY
- ▁PRINCESSES
- ▁MINT
- ▁PARENT
- ▁PEEING
- ▁ADORE
- DONG
- ▁SP
- ANCE
- ▁EXPLOR
- TTEOKBOKKI
- WHEEL
- ▁ABANDONED
- ▁CALLUSES
- ▁COSMETICS
- ▁LADYBUG
- ▁MARIA
- ▁PRONUNCIATION
- ▁BOUQUET
- ▁SOGGY
- ▁LEFTOVERS
- ▁MIKE
- ▁TANK
- ▁SPAC
- ▁FRAME
- MADE
- IVAL
- ▁YE
- ▁GATHERING
- IAN
- ▁KITTENS
- IBLE
- ▁ABBREVIAT
- ▁CHAPAGETTI
- ▁ENGINES
- ▁EQUIPMENT
- ▁INTERSECTION
- ▁SANITIZER
- ▁DOKDO
- ▁GENERATOR
- ▁MEDIUM
- ▁BALANCE
- ▁CHART
- ▁TELEVISION
- ▁JAJANG
- ▁LOLLY
- ▁PHOTOGRAPH
- ORD
- ▁KKA
- ▁SOLES
- ▁BALM
- ▁DECORATION
- ▁THORN
- ▁ARMY
- ▁YU
- EEK
- NK
- BOY
- LENGTH
- TONY
- HEN
- ▁RELEASE
- ▁LOOSE
- ▁COMPLETE
- KYOCHON
- ▁ARCADE
- ▁BRIM
- ▁CORONA
- ▁CRANE
- ▁CUPCAKE
- ▁KITCHENWARE
- ▁LULLABY
- ▁MODER
- ▁MUSKET
- ▁OBEDIEN
- ▁PIKACHU
- ▁PROVERBS
- ▁SALMON
- ▁YUKGAEJANG
- ▁TANNED
- ▁VILLA
- ▁DIRECTIONS
- ▁CLAY
- ▁ADMIR
- ▁DIRECTOR
- ▁DAMAGED
- ▁BURST
- ▁TOPIC
- ▁DOODLED
- ▁COMPAR
- ▁BUBBLE
- ▁HO
- ▁KISSE
- ▁JO
- ▁BLOATED
- ▁CONSONANTS
- ▁DOWNLOAD
- ▁ELBOW
- ▁FUNNIEST
- ▁PORORO
- ▁SLOTS
- ▁VACUUM
- ▁BOTTOM
- ▁MANDELA
- ▁IMSIL
- ▁VIP
- ▁TOMMY
- EATURE
- ▁PINE
- ▁EIGHTTHIRTY
- ▁HIDEANDSEEK
- ▁COLLAPSED
- ▁UNDERSTOOD
- ▁CRUSHED
- ▁TRI
- OF
- ▁DI
- ▁CARNATION
- ORY
- NAILS
- LENT
- ▁PUBLISH
- PLACE
- ▁CLIP
- ILLA
- ▁SUNSHIN
- ▁ACTUAL
- ▁SUCCESS
- COCK
- ▁60
- ▁BENEFITS
- ▁CLAW
- ▁HAUNT
- ▁LIBRARIES
- ▁LOTTERIA
- ▁MERCURY
- ▁MITTEN
- ▁SWAM
- ▁ROTTEN
- ▁SERVANT
- DENTAL
- ▁LEGEND
- ▁ROT
- ▁PRICKED
- ▁230
- ▁TUB
- ▁WINK
- ▁HUNTER
- ▁SCREAMING
- ▁FINALE
- ▁SOAPY
- ▁REDESIGNING
- NNA
- ▁DIAPER
- ▁BANG
- IK
- CHAN
- TIER
- ▁MOR
- ▁METERS
- ▁HUGG
- DAE
- FTER
- CHO
- SHIP
- EITHER
- CTIVE
- ▁KI
- ▁RU
- ▁BRAND
- ▁AMOUNT
- ▁EXPLANATION
- ▁HAIRPIN
- ▁HORRIBLE
- ▁INTERIOR
- ▁LANDSLIDE
- ▁NEVERTHELESS
- ▁PERSIMMON
- ▁POSTPONE
- ▁SCIENTIST
- ▁SLACK
- ▁STORM
- ▁STREAM
- ▁SURPRISING
- ▁URGENT
- ▁ZOMBIE
- ▁STOOL
- ▁LOAD
- NAMBU
- ▁ANNOUNCEMENT
- IKES
- GRAN
- ▁ABC
- ▁COMPLE
- ▁FASCINATING
- ▁REMOVED
- ▁CRAWLING
- ▁INTERRUPTING
- RELLA
- RAGE
- ▁PEELING
- ▁HUMANS
- ▁MON
- ▁BEGIN
- ▁VEGETABLE
- ▁SLEEVE
- GLE
- ▁THA
- ISH
- TRAINER
- '7'
- ROAD
- DRIVER
- ▁PRETEN
- ▁ALLOW
- UZZLE
- ▁DEMONSTRAT
- ▁STIR
- ▁BROC
- ▁CARCASON
- ▁EQUALLY
- ▁EXPERIMENT
- ▁HESITAT
- ▁SPINNING
- ▁MENTOR
- ▁ABBREVIATION
- ▁RASHES
- ▁ASSEMBLING
- ▁DUNG
- MEMOR
- ▁PEACEFUL
- ▁HARDENS
- OSU
- SSUED
- ▁FRECKLE
- TIOUS
- ▁REALIZ
- ▁SQUA
- LIFE
- THINK
- ▁BIK
- ▁KNIT
- ZZA
- ▁ALITTLE
- ▁BAREFOOT
- ▁CONCENTRATE
- ▁DALGONA
- ▁GUIDEBOOK
- ▁KIDZANIA
- ▁PALACE
- ▁ROSHEN
- ▁TEXTBOOK
- ▁TUNAKIMBAP
- OTTEOK
- ▁830
- ▁HOSE
- ITIES
- NIX
- ▁FIFTEENCM
- ▁IMAGE
- ▁CHEESEKIMBAP
- ▁HOTTER
- ▁PATT
- ▁CLIPPE
- ▁FOXES
- EAGLE
- ▁QUE
- NDING
- ▁DETER
- AP
- YEO
- UED
- ▁PAI
- ▁EXCITEDLY
- ▁WAVED
- ▁BUL
- BUT
- ▁METER
- KIMBAP
- HAND
- WATCHING
- ▁CONVERS
- ▁FLICK
- ▁PEDIATRIC
- NAMENT
- REIGN
- ▁BIKINI
- ▁BUCKWHEATCREPE
- ▁JENGA
- ▁LAUNCH
- ▁OPTICIAN
- ▁PIGTAIL
- ▁SIMON
- ▁SUBSCRIBE
- ▁TICKLISH
- NELS
- ▁PINWHEEL
- INATED
- ▁DRUG
- ▁ONESIXCM
- ▁EIGHTH
- ▁SMARTEST
- ▁HUNTING
- ▁PIL
- UMMY
- ITION
- UNNI
- ▁SU
- ▁POWERFULL
- ▁WAFFLE
- DIA
- ▁TICK
- EIGHT
- PICKED
- FIFTY
- WENT
- ▁BOT
- ▁REPRESENT
- OKKI
- ▁COCOA
- ▁CUSHION
- ▁FARTHEST
- ▁PENTAGON
- ▁SLIDING
- ▁SWEAR
- ▁MOLD
- ▁BBOY
- ▁80
- ▁WATERPROOF
- ▁RAIL
- ▁CREATED
- ▁CHIRPING
- ▁SEARCH
- SEOK
- ▁TOAST
- ▁BETRAYE
- JOR
- ▁NI
- ZI
- ▁SLAMM
- ▁GU
- ▁NAG
- ▁SERVED
- UFFY
- ▁INSECT
- ▁ZIPPE
- LP
- YEONG
- ESSION
- IPPED
- ▁CELEBRAT
- ▁CHANG
- '50'
- POST
- ENTI
- ▁DISAPPOINT
- ▁QU
- ▁FOREIGN
- ▁POSSIB
- ▁CONGRATULAT
- ADOW
- ▁TAE
- CAFÉ
- ▁COURIER
- ▁DAEJEON
- ▁DOWNSTAIRS
- ▁EXPER
- ▁PREFERENCE
- ▁LACT
- ▁OCCUR
- ORIENT
- ▁SPACIOUS
- INARY
- ▁KNITTING
- ▁LIBERTY
- VILLE
- RB
- ▁BARKED
- DAN
- ▁TIN
- ATOR
- ▁PHO
- RIED
- ▁JINDA
- OUND
- HOE
- ▁STRETCHE
- ▁SNEEZ
- EVI
- QUALITY
- MOM
- ▁BLIND
- HYEON
- ECTION
- ROKE
- ▁ANCHOVY
- ▁ASHAMED
- ▁COASTER
- ▁CONFUSING
- ▁CYCLIST
- ▁DANDELION
- ▁FIREFLIES
- ▁HYUNG
- ▁KNOWLEDGE
- ▁NARACULA
- ▁SCAB
- ▁VOCABULARY
- ▁CONFIDENT
- ▁RELAT
- ▁FOOLISH
- ▁NINEAM
- ▁ZO
- ▁BOU
- ▁FLATTERED
- ▁BLINDING
- ▁SKATER
- ▁ROLLER
- ▁FIRM
- COTT
- NURI
- ▁WARMER
- ▁LONGEST
- ▁TICKLE
- ▁AMERICAN
- GI
- AGGED
- CHARGE
- TODAY
- ▁CREATE
- UMPING
- JJAEK
- ▁BEGINNER
- ▁CLICKING
- ▁CORRIDORS
- ▁DAZZLING
- ▁DERMATOLOGIST
- ▁DILIGENT
- ▁FEBRUARY
- ▁FISHBOWL
- ▁GARAETTEOK
- ▁GARGLE
- ▁INJURED
- ▁MANTISES
- ▁NAKSEONGDAE
- ▁ROAST
- ▁SNITCH
- ▁SLIMMER
- ▁DISCHARGE
- ▁SOAKED
- ▁SELECTED
- ▁VICE
- ▁INFECT
- ▁CONTAINER
- ▁NEATLY
- ▁STARSHAPED
- LOTTEWORLD
- ▁SUPPLEMENT
- ▁EIGHTTH
- ISTERS
- ▁TICKL
- ▁STRAIGHTEN
- ▁SKINN
- RANGE
- ▁TANGERINE
- ▁STO
- PREPARED
- SPROUT
- TWELVE
- TONIGHT
- ▁RECOGNI
- VAN
- BEEN
- ▁EXPLODE
- ▁CHUBB
- ANGGU
- ▁SAVI
- ▁950
- ▁ADJUST
- ▁CASTANETS
- ▁FAITH
- ▁GONGJU
- ▁GRAIN
- ▁GROSS
- ▁JUPITER
- ▁MAGPIE
- ▁SAIPAN
- ▁SKULL
- ▁SPARROW
- ▁VACCINATED
- ▁VIGOROUSLY
- ▁AUTOMATIC
- ▁NEARBY
- SEVENTEEN
- ▁TWENTI
- ▁NIKE
- ▁SEORA
- DATORS
- ▁PONG
- ▁730
- ▁SCARIER
- ▁TRUNK
- ▁BETRAYER
- ▁CHEESEGIMBAP
- ONGDAE
- ▁SEVERE
- ▁SPOONFUL
- CTATION
- ▁WITCH
- ▁LIMIT
- ▁EATTTEOKBOKKI
- GEOUS
- ▁CRAWLED
- ▁SUC
- AVED
- AGE
- ▁KITTEN
- ▁SKEWER
- IZED
- ▁TEAR
- WAVE
- ▁RACI
- ▁CONTAIN
- ▁TRO
- ▁GUGUDAN
- ▁GEPPET
- ▁PHARMACI
- MULGUK
- PPAK
- SAMJANG
- ▁ACORN
- ▁APPETITE
- ▁BRUNCH
- ▁BUMMER
- ▁DIARRHEA
- ▁FLAP
- ▁GERMS
- ▁GWANSUN
- ▁HOMETOWN
- ▁KILOMETERS
- ▁MARRIAGE
- ▁PRANKS
- ▁RADISH
- '5'
- ′
- 수
- '2'
- ́
- 子
- 예
- 요
- '3'
- É
- '6'
- '9'
- “
- .
- '1'
- 단
- <sos/eos>
init: null
input_size: null
ctc_conf:
dropout_rate: 0.0
ctc_type: builtin
reduce: true
ignore_nan_grad: null
zero_infinity: true
joint_net_conf: null
use_preprocessor: true
token_type: bpe
bpemodel: data/ko_token_list/bpe_unigram5000/bpe.model
non_linguistic_symbols: null
cleaner: null
g2p: null
speech_volume_normalize: null
rir_scp: null
rir_apply_prob: 1.0
noise_scp: null
noise_apply_prob: 1.0
noise_db_range: '13_15'
short_noise_thres: 0.5
aux_ctc_tasks: []
frontend: default
frontend_conf:
fs: 16k
specaug: specaug
specaug_conf:
apply_time_warp: true
time_warp_window: 5
time_warp_mode: bicubic
apply_freq_mask: true
freq_mask_width_range:
- 0
- 30
num_freq_mask: 2
apply_time_mask: true
time_mask_width_range:
- 0
- 40
num_time_mask: 2
normalize: global_mvn
normalize_conf:
stats_file: exp/asr_stats_raw_ko_bpe5000_sp/train/feats_stats.npz
model: espnet
model_conf:
ctc_weight: 0.3
lsm_weight: 0.1
length_normalized_loss: false
preencoder: null
preencoder_conf: {}
encoder: contextual_block_conformer
encoder_conf:
output_size: 256
attention_heads: 4
linear_units: 2048
num_blocks: 12
dropout_rate: 0.1
positional_dropout_rate: 0.1
attention_dropout_rate: 0.0
input_layer: conv2d
normalize_before: true
activation_type: swish
macaron_style: true
use_cnn_module: true
cnn_module_kernel: 15
block_size: 40
hop_size: 16
look_ahead: 16
init_average: true
ctx_pos_enc: true
postencoder: null
postencoder_conf: {}
decoder: transformer
decoder_conf:
attention_heads: 4
linear_units: 2048
num_blocks: 6
dropout_rate: 0.1
positional_dropout_rate: 0.1
self_attention_dropout_rate: 0.0
src_attention_dropout_rate: 0.0
preprocessor: default
preprocessor_conf: {}
required:
- output_dir
- token_list
version: '202304'
distributed: true
```
</details>
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
Waterhorse/chessgpt-chat-v1
|
Waterhorse
| 2023-07-06T06:20:40Z | 124 | 10 |
transformers
|
[
"transformers",
"pytorch",
"gpt_neox",
"text-generation",
"en",
"dataset:Waterhorse/chess_data",
"dataset:anon8231489123/ShareGPT_Vicuna_unfiltered",
"dataset:OpenAssistant/oasst1",
"dataset:vicgalle/alpaca-gpt4",
"arxiv:2306.09200",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-03T21:18:08Z |
---
license: apache-2.0
language:
- en
datasets:
- Waterhorse/chess_data
- anon8231489123/ShareGPT_Vicuna_unfiltered
- OpenAssistant/oasst1
- vicgalle/alpaca-gpt4
---
# Chessgpt-Chat-v1
Chessgpt-Chat-v1 is the SFT-tuned (supervised fine-tuned) version of Chessgpt-Base-v1.
- Base Model: [Chessgpt-base-v1](https://huggingface.co/Waterhorse/chessgpt-base-v1)
- Chat Version: [Chessgpt-chat-v1](https://huggingface.co/Waterhorse/chessgpt-chat-v1)
Also, we are actively working on the development of the next-generation model, ChessGPT-V2. We welcome any contribution, especially chess-related datasets. For related matters, please contact xidong.feng.20@ucl.ac.uk.
## Model Details
- **Model type**: Language Model
- **Language(s)**: English
- **License**: Apache 2.0
- **Model Description**: A 2.8B-parameter pretrained language model for the chess domain.
## GPU Inference
This requires a GPU with 8GB memory.
```python
import torch
import transformers
from transformers import AutoTokenizer, AutoModelForCausalLM
MIN_TRANSFORMERS_VERSION = '4.25.1'
# check transformers version
assert transformers.__version__ >= MIN_TRANSFORMERS_VERSION, f'Please upgrade transformers to version {MIN_TRANSFORMERS_VERSION} or higher.'
# init
tokenizer = AutoTokenizer.from_pretrained("Waterhorse/chessgpt-chat-v1")
model = AutoModelForCausalLM.from_pretrained("Waterhorse/chessgpt-chat-v1", torch_dtype=torch.float16)
model = model.to('cuda:0')
# infer
# Conversation between two
prompt = "A friendly, helpful chat between some humans.<|endoftext|>Human 0: 1.e4 c5, what is the name of this opening?<|endoftext|>Human 1:"
# Conversation between more than two
#prompt = "A friendly, helpful chat between some humans.<|endoftext|>Human 0: 1.e4 c5, what is the name of this opening?<|endoftext|>Human 1: Sicilian defense.<|endoftext|>Human 2:"
inputs = tokenizer(prompt, return_tensors='pt').to(model.device)
input_length = inputs.input_ids.shape[1]
outputs = model.generate(
**inputs, max_new_tokens=128, do_sample=True, temperature=0.7, top_p=0.7, top_k=50, return_dict_in_generate=True,
)
token = outputs.sequences[0, input_length:]
output_str = tokenizer.decode(token)
print(output_str)
```
# Uses
Excluded uses are described below.
### Direct Use
`chessgpt-chat-v1` is mainly intended for research on large language models, especially research on policy learning and language modeling.
#### Out-of-Scope Use
`chessgpt-chat-v1` is a language model trained on chess-related data and may not perform well for use cases beyond the chess domain.
#### Bias, Risks, and Limitations
Just as with any language model, chessgpt-chat-v1 carries inherent limitations that necessitate careful consideration. Specifically, it may occasionally generate responses that are irrelevant or incorrect, particularly when tasked with interpreting complex or ambiguous queries. Additionally, given that its training is rooted in online data, the model may inadvertently reflect and perpetuate common online stereotypes and biases.
# Evaluation
Please refer to our [paper](https://arxiv.org/abs/2306.09200) and [code](https://github.com/waterhorse1/ChessGPT) for benchmark results.
# Citation Information
```bibtex
@article{feng2023chessgpt,
title={ChessGPT: Bridging Policy Learning and Language Modeling},
author={Feng, Xidong and Luo, Yicheng and Wang, Ziyan and Tang, Hongrui and Yang, Mengyue and Shao, Kun and Mguni, David and Du, Yali and Wang, Jun},
journal={arXiv preprint arXiv:2306.09200},
year={2023}
}
```
|
yuuhan/roberta-base-rte-lora
|
yuuhan
| 2023-07-06T06:12:21Z | 6 | 0 |
peft
|
[
"peft",
"text-classification",
"en",
"dataset:SetFit/rte",
"license:apache-2.0",
"region:us"
] |
text-classification
| 2023-07-06T06:03:00Z |
---
license: apache-2.0
datasets:
- SetFit/rte
language:
- en
metrics:
- accuracy
library_name: peft
pipeline_tag: text-classification
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
Accuracy: 0.7328519855595668 on RTE
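This repository stores a PEFT (LoRA) adapter rather than full model weights, so it has to be loaded on top of the `roberta-base` backbone. A minimal loading sketch (assuming the adapter was trained with a two-label sequence-classification head that is saved alongside the LoRA weights):
```python
import torch
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Base model plus the LoRA adapter from this repository
base = AutoModelForSequenceClassification.from_pretrained("roberta-base", num_labels=2)
model = PeftModel.from_pretrained(base, "yuuhan/roberta-base-rte-lora")
model.eval()
tokenizer = AutoTokenizer.from_pretrained("roberta-base")

# RTE is a sentence-pair entailment task
inputs = tokenizer(
    "The cat sat on the mat.",
    "There is a cat on the mat.",
    return_tensors="pt",
)
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)  # label order depends on the training configuration
```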
|
saintzeno/a2c-PandaReachDense-v3
|
saintzeno
| 2023-07-06T06:10:45Z | 3 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-06T05:52:59Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v3
type: PandaReachDense-v3
metrics:
- type: mean_reward
value: -0.22 +/- 0.11
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v3**
This is a trained model of an **A2C** agent playing **PandaReachDense-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal sketch of loading the checkpoint with `huggingface_sb3` (the filename follows the library's usual `<algo>-<EnvId>.zip` naming and is an assumption):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Download the checkpoint from the Hub and load it into an A2C model
checkpoint = load_from_hub(repo_id="saintzeno/a2c-PandaReachDense-v3", filename="a2c-PandaReachDense-v3.zip")
model = A2C.load(checkpoint)
```
|
yuuhan/roberta-base-mnli-lora
|
yuuhan
| 2023-07-06T06:01:55Z | 0 | 0 |
peft
|
[
"peft",
"text-classification",
"en",
"dataset:SetFit/mnli",
"license:apache-2.0",
"region:us"
] |
text-classification
| 2023-07-06T05:57:19Z |
---
license: apache-2.0
datasets:
- SetFit/mnli
language:
- en
metrics:
- accuracy
library_name: peft
pipeline_tag: text-classification
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
Accuracy: 0.8654100866021396 on GLUE/MNLI
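This repository contains only a LoRA adapter, so it is loaded on top of the `roberta-base` backbone via PEFT. A brief sketch for a premise/hypothesis pair (the three-label head and label order are assumptions about the training setup):
```python
import torch
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Base model plus the LoRA adapter from this repository
base = AutoModelForSequenceClassification.from_pretrained("roberta-base", num_labels=3)
model = PeftModel.from_pretrained(base, "yuuhan/roberta-base-mnli-lora")
model.eval()
tokenizer = AutoTokenizer.from_pretrained("roberta-base")

# MNLI classifies a premise/hypothesis pair as entailment, neutral, or contradiction
inputs = tokenizer(
    "A soccer game with multiple males playing.",   # premise
    "Some men are playing a sport.",                # hypothesis
    return_tensors="pt",
)
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)  # label order depends on the training configuration
```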
|
LarryAIDraw/sakurako
|
LarryAIDraw
| 2023-07-06T06:00:57Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-06T05:27:47Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/100652/sakurako-busujima-grand-blue
|
aroot/eng-guj-simcse_central
|
aroot
| 2023-07-06T05:52:24Z | 102 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-07-06T05:29:33Z |
---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: eng-guj-simcse_central
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eng-guj-simcse_central
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2829
- Bleu: 2.7255
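Because the base checkpoint is mBART-50 many-to-many, inference needs explicit source and target language codes. A hedged usage sketch (the `en_XX`/`gu_IN` codes come from the base model's vocabulary; verify they match this fine-tune's setup):
```python
from transformers import MBartForConditionalGeneration, MBart50TokenizerFast

model = MBartForConditionalGeneration.from_pretrained("aroot/eng-guj-simcse_central")
# If the tokenizer was not pushed with this repo, fall back to the base checkpoint
tokenizer = MBart50TokenizerFast.from_pretrained("aroot/eng-guj-simcse_central")

tokenizer.src_lang = "en_XX"
inputs = tokenizer("The weather is nice today.", return_tensors="pt")
generated = model.generate(
    **inputs,
    forced_bos_token_id=tokenizer.lang_code_to_id["gu_IN"],  # target language: Gujarati
    max_new_tokens=64,
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```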
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
nolanaatama/nkbllcfrmgtvrvcv2275pchsnltrx
|
nolanaatama
| 2023-07-06T05:50:38Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-06T05:46:52Z |
---
license: creativeml-openrail-m
---
|
Ryukijano/whisper-small-dv
|
Ryukijano
| 2023-07-06T05:36:17Z | 78 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"dataset:mozilla-foundation/common_voice_13_0",
"license:mit",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-07-05T06:25:50Z |
---
license: mit
datasets:
- mozilla-foundation/common_voice_13_0
metrics:
- wer
---
# Whisper Small DV Model

## Model Description
The `whisper-small-dv` model is an advanced Automatic Speech Recognition (ASR) model, trained on the extensive [Mozilla Common Voice 13.0](https://commonvoice.mozilla.org/en/datasets) dataset. This model is capable of transcribing spoken language into written text with high accuracy, making it a valuable tool for a wide range of applications, from transcription services to voice assistants.
## Training
The model was trained using the PyTorch framework and the Transformers library. Training metrics and visualizations can be viewed on TensorBoard.
## Performance
The model's performance was evaluated on a held-out test set. The evaluation metrics and results can be found in the "Eval Results" section.
## Usage
The model can be used for ASR tasks. Since this is a Whisper checkpoint, load it with the Whisper classes from the Transformers library:
```python
import librosa
from transformers import WhisperProcessor, WhisperForConditionalGeneration

# Load the model and processor
processor = WhisperProcessor.from_pretrained("Ryukijano/whisper-small-dv")
model = WhisperForConditionalGeneration.from_pretrained("Ryukijano/whisper-small-dv")

# Whisper expects 16 kHz mono audio
speech, _ = librosa.load("path_to_audio_file", sr=16000)
inputs = processor(speech, sampling_rate=16000, return_tensors="pt")

# Generate and decode the transcription
predicted_ids = model.generate(inputs.input_features)
transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True)[0]
print(transcription)
```
## License
This model is released under the MIT license.
|
eigenscribe/etzHayim
|
eigenscribe
| 2023-07-06T05:34:59Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-06T05:33:49Z |
---
license: creativeml-openrail-m
---
|
insub/distilbert-base-uncased-finetuned-imdb
|
insub
| 2023-07-06T05:22:05Z | 124 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"fill-mask",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-07-06T05:17:00Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: distilbert-base-uncased-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4721
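Since this is a masked-language model adapted to movie reviews, a quick sanity check with the `fill-mask` pipeline is straightforward (a hedged sketch; the example sentence is arbitrary):
```python
from transformers import pipeline

# Quick sanity check of the domain-adapted MLM head
fill_mask = pipeline("fill-mask", model="insub/distilbert-base-uncased-finetuned-imdb")
for prediction in fill_mask("This movie was an absolute [MASK]."):
    print(f"{prediction['token_str']:>12}  {prediction['score']:.3f}")
```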
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.7086 | 1.0 | 157 | 2.4897 |
| 2.5796 | 2.0 | 314 | 2.4230 |
| 2.5269 | 3.0 | 471 | 2.4354 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
aroot/eng-fra-simcse_central
|
aroot
| 2023-07-06T05:13:08Z | 9 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-07-06T04:53:14Z |
---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: eng-fra-simcse_central
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eng-fra-simcse_central
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1521
- Bleu: 31.5479
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
AAOBA/ppo-PyramidsRND
|
AAOBA
| 2023-07-06T05:05:37Z | 8 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] |
reinforcement-learning
| 2023-07-06T05:04:49Z |
---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: AAOBA/ppo-PyramidsRND
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
nhung/layoutxlm-de-durch
|
nhung
| 2023-07-06T05:02:06Z | 74 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"layoutlmv2",
"token-classification",
"generated_from_trainer",
"dataset:xfun",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-07-06T02:20:52Z |
---
license: cc-by-nc-sa-4.0
tags:
- generated_from_trainer
datasets:
- xfun
model-index:
- name: layoutxlm-de-durch
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# layoutxlm-de-durch
This model is a fine-tuned version of [microsoft/layoutxlm-base](https://huggingface.co/microsoft/layoutxlm-base) on the xfun dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6.25e-06
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 15000
### Training results
### Framework versions
- Transformers 4.27.4
- Pytorch 1.10.0+cu111
- Datasets 2.11.0
- Tokenizers 0.13.3
|
whiteDandelion/swin-tiny-patch4-window7-224-finetuned-eurosat
|
whiteDandelion
| 2023-07-06T05:01:12Z | 228 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-07-06T04:12:49Z |
---
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: swin-tiny-patch4-window7-224-finetuned-eurosat
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9805
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-eurosat
This model is a fine-tuned version of [andupets/real-estate-image-classification](https://huggingface.co/andupets/real-estate-image-classification) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0613
- Accuracy: 0.9805
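For inference, the fine-tuned checkpoint can be used directly with the `image-classification` pipeline (a hedged sketch; the image path is a placeholder and the label set follows the base real-estate classifier):
```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="whiteDandelion/swin-tiny-patch4-window7-224-finetuned-eurosat",
)
for pred in classifier("example_room_photo.jpg"):  # local path or URL
    print(f"{pred['label']:>20}  {pred['score']:.3f}")
```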
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.089 | 0.99 | 140 | 0.1050 | 0.9635 |
| 0.0565 | 2.0 | 281 | 0.0760 | 0.9725 |
| 0.0421 | 2.98 | 420 | 0.0613 | 0.9805 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
squeeze-ai-lab/sq-xgen-7b-8k-base-w4-s45
|
squeeze-ai-lab
| 2023-07-06T04:47:33Z | 0 | 0 | null |
[
"arxiv:2306.07629",
"region:us"
] | null | 2023-07-06T03:46:56Z |
**SqueezeLLM** is a post-training quantization framework that incorporates a new method called Dense-and-Sparse Quantization to enable efficient LLM serving.
**TLDR:** Deploying LLMs is difficult due to their large memory size. This can be addressed with reduced precision quantization.
But a naive method hurts performance. We address this with a new Dense-and-Sparse Quantization method.
Dense-and-Sparse splits weight matrices into two components: a dense component that can be heavily quantized without affecting model performance,
and a sparse part that preserves the sensitive and outlier entries of the weight matrices. With this approach,
we are able to serve larger models with a smaller memory footprint, the same latency, and higher accuracy and quality.
For more details please check out our [paper](https://arxiv.org/pdf/2306.07629.pdf).
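The split can be illustrated with a small, self-contained sketch. This is only a conceptual toy: it keeps the largest-magnitude 0.45% of entries in a full-precision sparse matrix and applies uniform round-to-nearest quantization to the rest, whereas SqueezeLLM itself uses sensitivity-based, non-uniform quantization; the function name and shapes are illustrative.
```python
import torch

def dense_and_sparse_split(weight: torch.Tensor, outlier_frac: float = 0.0045, n_bits: int = 4):
    """Toy dense-and-sparse decomposition of a weight matrix (not the SqueezeLLM implementation)."""
    flat = weight.abs().flatten()
    k = max(1, int(outlier_frac * flat.numel()))
    threshold = torch.topk(flat, k).values.min()

    outlier_mask = weight.abs() >= threshold
    sparse_part = (weight * outlier_mask).to_sparse()   # outliers kept in full precision
    dense_part = weight * (~outlier_mask)                # remainder to be quantized

    # Uniform round-to-nearest quantization of the dense component
    scale = dense_part.abs().max() / (2 ** (n_bits - 1) - 1)
    q = torch.clamp(torch.round(dense_part / scale), -(2 ** (n_bits - 1)), 2 ** (n_bits - 1) - 1)
    dense_dequantized = q * scale

    return dense_dequantized, sparse_part

W = torch.randn(512, 512)
dense_dequantized, sparse = dense_and_sparse_split(W)
reconstruction = dense_dequantized + sparse.to_dense()
print((W - reconstruction).abs().max())  # reconstruction error stays small despite 4-bit storage
```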
## Model description
4-bit XGen-7B Base model with 8K sequence length quantized using SqueezeLLM.
More details on the quantization method can be found in the [paper](https://arxiv.org/pdf/2306.07629.pdf).
More detailed model descriptions can be found in the [link](https://huggingface.co/Salesforce/xgen-7b-8k-base).
* **Base Model:** [XGen-7B-8K-Base](https://huggingface.co/Salesforce/xgen-7b-8k-base) (by Salesforce AI Research)
* **Bitwidth:** 4-bit
* **Sparsity Level:** 0.45%
## Links
* **Paper**: [https://arxiv.org/pdf/2306.07629.pdf](https://arxiv.org/pdf/2306.07629.pdf)
* **Code**: [https://github.com/SqueezeAILab/SqueezeLLM](https://github.com/SqueezeAILab/SqueezeLLM)
---
license: other
---
|
squeeze-ai-lab/sq-xgen-7b-8k-base-w3-s45
|
squeeze-ai-lab
| 2023-07-06T04:46:32Z | 0 | 0 | null |
[
"arxiv:2306.07629",
"region:us"
] | null | 2023-07-06T03:46:53Z |
**SqueezeLLM** is a post-training quantization framework that incorporates a new method called Dense-and-Sparse Quantization to enable efficient LLM serving.
**TLDR:** Deploying LLMs is difficult due to their large memory size. This can be addressed with reduced precision quantization.
But a naive method hurts performance. We address this with a new Dense-and-Sparse Quantization method.
Dense-and-Sparse splits weight matrices into two components: a dense component that can be heavily quantized without affecting model performance,
and a sparse part that preserves the sensitive and outlier entries of the weight matrices. With this approach,
we are able to serve larger models with a smaller memory footprint, the same latency, and higher accuracy and quality.
For more details please check out our [paper](https://arxiv.org/pdf/2306.07629.pdf).
## Model description
3-bit XGen-7B Base model with 8K sequence length quantized using SqueezeLLM.
More details on the quantization method can be found in the [paper](https://arxiv.org/pdf/2306.07629.pdf).
More detailed model descriptions can be found in the [link](https://huggingface.co/Salesforce/xgen-7b-8k-base).
* **Base Model:** [XGen-7B-8K-Base](https://huggingface.co/Salesforce/xgen-7b-8k-base) (by Salesforce AI Research)
* **Bitwidth:** 3-bit
* **Sparsity Level:** 0.45%
## Links
* **Paper**: [https://arxiv.org/pdf/2306.07629.pdf](https://arxiv.org/pdf/2306.07629.pdf)
* **Code**: [https://github.com/SqueezeAILab/SqueezeLLM](https://github.com/SqueezeAILab/SqueezeLLM)
---
license: other
---
|
KPrashanth/Reinforce_Agent_playing_Cartpole_v1
|
KPrashanth
| 2023-07-06T04:36:55Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-06T04:36:41Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce_Agent_playing_Cartpole_v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
aroot/eng-mya-wsample.43a
|
aroot
| 2023-07-06T04:28:08Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-07-06T04:06:12Z |
---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: eng-mya-wsample.43a
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eng-mya-wsample.43a
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8306
- Bleu: 4.6779
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|