Dataset columns (type and value range):
- modelId: string, length 5–139
- author: string, length 2–42
- last_modified: timestamp[us, tz=UTC], 2020-02-15 11:33:14 – 2025-09-12 18:33:19
- downloads: int64, 0 – 223M
- likes: int64, 0 – 11.7k
- library_name: string, 555 classes
- tags: list, length 1 – 4.05k
- pipeline_tag: string, 55 classes
- createdAt: timestamp[us, tz=UTC], 2022-03-02 23:29:04 – 2025-09-12 18:33:14
- card: string, length 11 – 1.01M
jonatasgrosman/exp_w2v2r_en_vp-100k_age_teens-10_sixties-0_s366
jonatasgrosman
2022-12-11T17:33:06Z
3
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "en", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-12-11T17:32:55Z
--- language: - en license: apache-2.0 tags: - automatic-speech-recognition - en datasets: - mozilla-foundation/common_voice_7_0 --- # exp_w2v2r_en_vp-100k_age_teens-10_sixties-0_s366 Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (en)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
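The card above only notes that input audio must be sampled at 16 kHz. A minimal usage sketch (not part of the original card) with the transformers ASR pipeline; the audio file name is a placeholder:

```python
# Hedged usage sketch: run the fine-tuned wav2vec2 checkpoint for English ASR.
# "speech.wav" is a placeholder file; the input must be sampled at 16 kHz.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="jonatasgrosman/exp_w2v2r_en_vp-100k_age_teens-10_sixties-0_s366",
)
print(asr("speech.wav")["text"])
```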
jonatasgrosman/exp_w2v2r_en_vp-100k_age_teens-0_sixties-10_s539
jonatasgrosman
2022-12-11T17:24:55Z
3
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "en", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-12-11T17:24:43Z
--- language: - en license: apache-2.0 tags: - automatic-speech-recognition - en datasets: - mozilla-foundation/common_voice_7_0 --- # exp_w2v2r_en_vp-100k_age_teens-0_sixties-10_s539 Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (en)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
Yanjie24/t5-samsung-5e
Yanjie24
2022-12-11T17:24:48Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "generated_from_trainer", "dataset:samsum", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-12-11T16:52:27Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - samsum metrics: - rouge model-index: - name: t5-samsung-5e results: - task: name: Sequence-to-sequence Language Modeling type: text2text-generation dataset: name: samsum type: samsum config: samsum split: train args: samsum metrics: - name: Rouge1 type: rouge value: 43.1484 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-samsung-5e This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the samsum dataset. It achieves the following results on the evaluation set: - Loss: 1.7108 - Rouge1: 43.1484 - Rouge2: 20.4563 - Rougel: 36.6379 - Rougelsum: 40.196 - Gen Len: 16.7677 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:| | 1.873 | 1.0 | 1841 | 1.7460 | 41.7428 | 19.2191 | 35.2428 | 38.8578 | 16.7286 | | 1.8627 | 2.0 | 3682 | 1.7268 | 42.4494 | 19.8301 | 36.1459 | 39.5271 | 16.6039 | | 1.8293 | 3.0 | 5523 | 1.7223 | 42.8908 | 19.9782 | 36.1848 | 39.8482 | 16.7164 | | 1.8163 | 4.0 | 7364 | 1.7101 | 43.2291 | 20.3177 | 36.6418 | 40.2878 | 16.8472 | | 1.8174 | 5.0 | 9205 | 1.7108 | 43.1484 | 20.4563 | 36.6379 | 40.196 | 16.7677 | ### Framework versions - Transformers 4.25.1 - Pytorch 1.13.0+cu116 - Datasets 2.7.1 - Tokenizers 0.13.2
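As a usage illustration (not part of the card), a minimal sketch that runs the checkpoint on a SAMSum-style dialogue through the summarization pipeline; the dialogue text is an invented example:

```python
# Hedged usage sketch: summarize a short dialogue with the fine-tuned T5 model.
from transformers import pipeline

summarizer = pipeline("summarization", model="Yanjie24/t5-samsung-5e")
dialogue = (
    "Amanda: I baked cookies. Do you want some?\n"
    "Jerry: Sure!\n"
    "Amanda: I'll bring you some tomorrow :-)"
)
print(summarizer(dialogue, max_length=30, min_length=5)[0]["summary_text"])
```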
jonatasgrosman/exp_w2v2r_en_vp-100k_age_teens-0_sixties-10_s227
jonatasgrosman
2022-12-11T17:22:25Z
3
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "en", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-12-11T17:22:13Z
--- language: - en license: apache-2.0 tags: - automatic-speech-recognition - en datasets: - mozilla-foundation/common_voice_7_0 --- # exp_w2v2r_en_vp-100k_age_teens-0_sixties-10_s227 Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (en)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
RajMoodley/ppo-Huggy
RajMoodley
2022-12-11T17:21:18Z
3
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "unity-ml-agents", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us" ]
reinforcement-learning
2022-12-11T17:21:12Z
--- tags: - unity-ml-agents - ml-agents - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Huggy library_name: ml-agents --- # **ppo** Agent playing **Huggy** This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser:**. 1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy 2. Step 1: Write your model_id: RajMoodley/ppo-Huggy 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
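The card covers resuming training and the browser demo; as a hedged addition, the checkpoint files could also be pulled locally with huggingface_hub (the target directory is a placeholder):

```python
# Hedged sketch: download the trained Huggy agent files from the Hub.
from huggingface_hub import snapshot_download

local_path = snapshot_download(
    repo_id="RajMoodley/ppo-Huggy",
    local_dir="./downloads/ppo-Huggy",  # placeholder target directory
)
print(local_path)
```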
jonatasgrosman/exp_w2v2r_en_vp-100k_age_teens-5_sixties-5_s682
jonatasgrosman
2022-12-11T17:19:57Z
3
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "en", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-12-11T17:19:46Z
--- language: - en license: apache-2.0 tags: - automatic-speech-recognition - en datasets: - mozilla-foundation/common_voice_7_0 --- # exp_w2v2r_en_vp-100k_age_teens-5_sixties-5_s682 Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (en)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2r_en_vp-100k_age_teens-5_sixties-5_s197
jonatasgrosman
2022-12-11T17:14:23Z
3
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "en", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-12-11T17:14:12Z
--- language: - en license: apache-2.0 tags: - automatic-speech-recognition - en datasets: - mozilla-foundation/common_voice_7_0 --- # exp_w2v2r_en_vp-100k_age_teens-5_sixties-5_s197 Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (en)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
PakanunNoa/ppo-Huggy
PakanunNoa
2022-12-11T17:02:56Z
3
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "unity-ml-agents", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us" ]
reinforcement-learning
2022-12-11T17:02:48Z
--- tags: - unity-ml-agents - ml-agents - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Huggy library_name: ml-agents --- # **ppo** Agent playing **Huggy** This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser:**. 1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy 2. Step 1: Write your model_id: PakanunNoa/ppo-Huggy 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
jonatasgrosman/exp_w2v2r_de_vp-100k_age_teens-2_sixties-8_s510
jonatasgrosman
2022-12-11T17:00:35Z
3
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "de", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-12-11T17:00:23Z
--- language: - de license: apache-2.0 tags: - automatic-speech-recognition - de datasets: - mozilla-foundation/common_voice_7_0 --- # exp_w2v2r_de_vp-100k_age_teens-2_sixties-8_s510 Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (de)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
nemanjar/ppo-LunarLander-v2
nemanjar
2022-12-11T16:57:50Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-12-07T20:28:25Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 287.80 +/- 16.52 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
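The card leaves its usage snippet as a TODO; a possible completion with huggingface_sb3 and stable-baselines3 is sketched below, where the .zip filename is an assumption based on the usual push-to-hub naming:

```python
# Hedged completion of the card's TODO: load the PPO agent and evaluate it.
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# The filename inside the repo is assumed, not confirmed by the card.
checkpoint = load_from_hub("nemanjar/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```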
jonatasgrosman/exp_w2v2r_de_vp-100k_age_teens-2_sixties-8_s273
jonatasgrosman
2022-12-11T16:56:47Z
3
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "de", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-12-11T16:56:35Z
--- language: - de license: apache-2.0 tags: - automatic-speech-recognition - de datasets: - mozilla-foundation/common_voice_7_0 --- # exp_w2v2r_de_vp-100k_age_teens-2_sixties-8_s273 Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (de)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2r_de_vp-100k_age_teens-10_sixties-0_s693
jonatasgrosman
2022-12-11T16:53:19Z
4
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "de", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-12-11T16:53:08Z
--- language: - de license: apache-2.0 tags: - automatic-speech-recognition - de datasets: - mozilla-foundation/common_voice_7_0 --- # exp_w2v2r_de_vp-100k_age_teens-10_sixties-0_s693 Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (de)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2r_de_vp-100k_age_teens-10_sixties-0_s362
jonatasgrosman
2022-12-11T16:50:03Z
3
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "de", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-12-11T16:49:52Z
--- language: - de license: apache-2.0 tags: - automatic-speech-recognition - de datasets: - mozilla-foundation/common_voice_7_0 --- # exp_w2v2r_de_vp-100k_age_teens-10_sixties-0_s362 Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (de)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jjj-hf123/ppo-LunarLander-v2
jjj-hf123
2022-12-11T16:44:05Z
1
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-12-11T16:43:40Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 244.10 +/- 24.54 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
jonatasgrosman/exp_w2v2r_de_vp-100k_age_teens-0_sixties-10_s304
jonatasgrosman
2022-12-11T16:39:01Z
3
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "de", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-12-11T16:38:44Z
--- language: - de license: apache-2.0 tags: - automatic-speech-recognition - de datasets: - mozilla-foundation/common_voice_7_0 --- # exp_w2v2r_de_vp-100k_age_teens-0_sixties-10_s304 Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (de)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2r_de_vp-100k_age_teens-5_sixties-5_s872
jonatasgrosman
2022-12-11T16:32:46Z
3
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "de", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-12-11T16:32:23Z
--- language: - de license: apache-2.0 tags: - automatic-speech-recognition - de datasets: - mozilla-foundation/common_voice_7_0 --- # exp_w2v2r_de_vp-100k_age_teens-5_sixties-5_s872 Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (de)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2r_de_vp-100k_age_teens-5_sixties-5_s852
jonatasgrosman
2022-12-11T16:29:07Z
3
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "de", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-12-11T16:28:56Z
--- language: - de license: apache-2.0 tags: - automatic-speech-recognition - de datasets: - mozilla-foundation/common_voice_7_0 --- # exp_w2v2r_de_vp-100k_age_teens-5_sixties-5_s852 Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (de)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2r_de_vp-100k_age_teens-5_sixties-5_s102
jonatasgrosman
2022-12-11T16:24:34Z
4
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "de", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-12-11T16:24:17Z
--- language: - de license: apache-2.0 tags: - automatic-speech-recognition - de datasets: - mozilla-foundation/common_voice_7_0 --- # exp_w2v2r_de_vp-100k_age_teens-5_sixties-5_s102 Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (de)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
enyaelvis/Goody
enyaelvis
2022-12-11T16:08:48Z
0
0
null
[ "region:us" ]
null
2022-12-11T16:08:04Z
git lfs install git clone https://huggingface.co/enyaelvis/Goody
EffyLi/bert-base-uncased-finetuned-ner
EffyLi
2022-12-11T16:08:02Z
14
0
transformers
[ "transformers", "pytorch", "bert", "token-classification", "generated_from_trainer", "dataset:conll2003", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-12-11T16:00:17Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - conll2003 metrics: - precision - recall - f1 - accuracy model-index: - name: bert-base-uncased-finetuned-ner results: - task: name: Token Classification type: token-classification dataset: name: conll2003 type: conll2003 args: conll2003 metrics: - name: Precision type: precision value: 0.9144678979771328 - name: Recall type: recall value: 0.9305291419621882 - name: F1 type: f1 value: 0.9224286110341003 - name: Accuracy type: accuracy value: 0.9825726404753206 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased-finetuned-ner This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the conll2003 dataset. It achieves the following results on the evaluation set: - Loss: 0.0618 - Precision: 0.9145 - Recall: 0.9305 - F1: 0.9224 - Accuracy: 0.9826 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 220 | 0.0809 | 0.8923 | 0.9051 | 0.8987 | 0.9784 | | No log | 2.0 | 440 | 0.0643 | 0.9108 | 0.9262 | 0.9184 | 0.9817 | | 0.1657 | 3.0 | 660 | 0.0618 | 0.9145 | 0.9305 | 0.9224 | 0.9826 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.12.0 - Datasets 2.7.1 - Tokenizers 0.11.0
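For illustration (not part of the card), a minimal sketch running the model through the token-classification pipeline; the input sentence is an invented example and is lowercased to match the uncased base model:

```python
# Hedged usage sketch: named-entity recognition with the fine-tuned BERT model.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="EffyLi/bert-base-uncased-finetuned-ner",
    aggregation_strategy="simple",  # merge sub-word pieces into whole entities
)
print(ner("hugging face is based in new york city"))
```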
ntinosmg/ppo-Huggy
ntinosmg
2022-12-11T16:02:27Z
1
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "unity-ml-agents", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us" ]
reinforcement-learning
2022-12-11T16:02:16Z
--- tags: - unity-ml-agents - ml-agents - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Huggy library_name: ml-agents --- # **ppo** Agent playing **Huggy** This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser:**. 1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy 2. Step 1: Write your model_id: ntinosmg/ppo-Huggy 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
kjul/ppo-LunarLander-v2
kjul
2022-12-11T16:00:18Z
1
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-12-11T15:52:39Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: -166.90 +/- 40.93 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
parinzee/whisper-base-th-newmm
parinzee
2022-12-11T15:46:59Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "whisper", "automatic-speech-recognition", "whisper-event", "generated_from_trainer", "th", "dataset:mozilla-foundation/common_voice_11_0", "dataset:google/fleurs", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-12-10T09:28:58Z
--- language: - th license: apache-2.0 tags: - whisper-event - generated_from_trainer datasets: - mozilla-foundation/common_voice_11_0 - google/fleurs model-index: - name: Whisper Base Thai Newmm Tokenized - Parinthapat Pengpun results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper Base Thai Newmm Tokenized - Parinthapat Pengpun This model is a fine-tuned version of [openai/whisper-base](https://huggingface.co/openai/whisper-base) on the Common Voice 11.0 and the FLEURS datasets. It achieves the following results on the evaluation set: - eval_loss: 0.5888 - eval_wer: 67.3381 - eval_cer: 32.4281 - eval_runtime: 6393.9778 - eval_samples_per_second: 1.709 - eval_steps_per_second: 0.214 - epoch: 1.0 - step: 2000 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 7500 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.26.0.dev0 - Pytorch 1.13.0+cu116 - Datasets 2.7.1.dev0 - Tokenizers 0.13.2
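As an illustrative sketch (not from the card), longer Thai recordings could be transcribed with the ASR pipeline using chunking; the file name is a placeholder:

```python
# Hedged usage sketch: Thai transcription with the fine-tuned Whisper Base model.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="parinzee/whisper-base-th-newmm",
    chunk_length_s=30,  # split long recordings into 30-second windows
)
print(asr("thai_sample.wav")["text"])
```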
khaled5321/PPO-LunarLander-v2
khaled5321
2022-12-11T15:25:36Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-12-10T19:54:12Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 296.58 +/- 16.76 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
tomthefreak/Mud-Forest
tomthefreak
2022-12-11T15:04:02Z
0
1
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2022-12-11T14:38:29Z
--- license: creativeml-openrail-m --- 3D Fantasy Horror textual embedding for Stable Diffusion 2.1. This embedding is trained on 63 images generated via SD 2.1. The generations used previous embeddings "Macro Terror" and "Verdict Rubicon" among others. Stylistic influences and prompting terminology influenced by Beksinski, Giger and NBC's Hannibal TV show. Training images were generated through img2img diffusion on images of near black noise in order to bias the resulting exposures of the generations. These images were colorgraded then captioned manually prior to training. Example generations: ![00365-1244879260-freakstyle.png](https://s3.amazonaws.com/moonup/production/uploads/1670770190575-632799fd3476801d8f27a0b9.png) _Prompt: Mud Forest, Steps: 10, Sampler: DPM++ SDE Karras, CFG scale: 3, Seed: 1244879260, Size: 768x768, Model hash: 4bdfc29c_ ![00350-2168042904-freakstyle.png](https://s3.amazonaws.com/moonup/production/uploads/1670770299145-632799fd3476801d8f27a0b9.png) _Prompt: Mud Forest, Steps: 10, Sampler: DPM++ 2S a Karras, CFG scale: 5, Seed: 2168042904, Size: 768x768, Model hash: 4bdfc29c_ ![00361-2168042915-freakstyle.png](https://s3.amazonaws.com/moonup/production/uploads/1670770410289-632799fd3476801d8f27a0b9.png) _Prompt: Mud Forest, Steps: 10, Sampler: DPM++ SDE Karras, CFG scale: 3, Seed: 2168042915, Size: 768x768, Model hash: 4bdfc29c_
AI-MeisterBin/ko-sentence-bert-MeisterBin
AI-MeisterBin
2022-12-11T14:52:37Z
4
0
transformers
[ "transformers", "pytorch", "tf", "roberta", "feature-extraction", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2022-12-11T10:19:49Z
A BERT model built for the psychological counseling chatbot "Meari" (메아리). Chatbot: https://ai-meisterbin-project-chatbot-main-chatbot-qj3hxl.streamlit.app/ GitHub: https://github.com/AI-MeisterBin/project_chatbot
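The card only links the chatbot and repository; a hedged sketch of extracting sentence embeddings with mean pooling follows, where the pooling strategy and example sentences are assumptions rather than details taken from the repository:

```python
# Hedged usage sketch: sentence embeddings via mean pooling over token states.
import torch
from transformers import AutoModel, AutoTokenizer

model_id = "AI-MeisterBin/ko-sentence-bert-MeisterBin"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

sentences = ["요즘 너무 우울해요", "오늘은 기분이 좋아요"]  # invented example inputs
inputs = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state             # (batch, seq_len, dim)
mask = inputs["attention_mask"].unsqueeze(-1).float()       # zero out padding tokens
embeddings = (hidden * mask).sum(dim=1) / mask.sum(dim=1)   # mean pooling
print(torch.nn.functional.cosine_similarity(embeddings[0], embeddings[1], dim=0))
```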
ScrappyCoco666/ppo-Huggy-1
ScrappyCoco666
2022-12-11T14:25:43Z
8
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "unity-ml-agents", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us" ]
reinforcement-learning
2022-12-11T14:25:35Z
--- tags: - unity-ml-agents - ml-agents - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Huggy library_name: ml-agents --- # **ppo** Agent playing **Huggy** This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser:**. 1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy 2. Step 1: Write your model_id: ScrappyCoco666/ppo-Huggy-1 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
sohm/ppo-LunarLander-v2
sohm
2022-12-11T14:04:32Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-12-10T22:54:41Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 249.39 +/- 18.24 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
polejowska/convnext-tiny-224-eurosat
polejowska
2022-12-11T14:00:13Z
26
0
transformers
[ "transformers", "pytorch", "convnext", "image-classification", "generated_from_trainer", "dataset:imagefolder", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2022-12-11T13:48:31Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imagefolder metrics: - accuracy model-index: - name: convnext-tiny-224-eurosat results: - task: name: Image Classification type: image-classification dataset: name: imagefolder type: imagefolder config: default split: train args: default metrics: - name: Accuracy type: accuracy value: 0.9537037037037037 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # convnext-tiny-224-eurosat This model is a fine-tuned version of [facebook/convnext-tiny-224](https://huggingface.co/facebook/convnext-tiny-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.3153 - Accuracy: 0.9537 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.863 | 0.98 | 33 | 1.5775 | 0.7619 | | 1.039 | 1.98 | 66 | 0.8142 | 0.9008 | | 0.5825 | 2.98 | 99 | 0.4442 | 0.9339 | | 0.3228 | 3.98 | 132 | 0.3153 | 0.9537 | | 0.2641 | 4.98 | 165 | 0.2868 | 0.9524 | ### Framework versions - Transformers 4.25.1 - Pytorch 1.13.0+cu116 - Datasets 2.7.1 - Tokenizers 0.13.2
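As a usage illustration (not from the card), the checkpoint could be queried through the image-classification pipeline; the image path is a placeholder:

```python
# Hedged usage sketch: classify a satellite image with the fine-tuned ConvNeXt model.
from transformers import pipeline

classifier = pipeline("image-classification", model="polejowska/convnext-tiny-224-eurosat")
print(classifier("satellite_patch.jpg"))  # placeholder image path
```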
paulkm/autotrain-lottery_v2-2420075389
paulkm
2022-12-11T13:36:25Z
5
0
transformers
[ "transformers", "pytorch", "autotrain", "text-classification", "zh", "dataset:paulkm/autotrain-data-lottery_v2", "co2_eq_emissions", "endpoints_compatible", "region:us" ]
text-classification
2022-12-11T13:31:07Z
--- tags: - autotrain - text-classification language: - zh widget: - text: "I love AutoTrain 🤗" datasets: - paulkm/autotrain-data-lottery_v2 co2_eq_emissions: emissions: 0.06047934032845949 --- # Model Trained Using AutoTrain - Problem type: Binary Classification - Model ID: 2420075389 - CO2 Emissions (in grams): 0.0605 ## Validation Metrics - Loss: 0.122 - Accuracy: 0.965 - Precision: 0.976 - Recall: 0.946 - AUC: 0.988 - F1: 0.961 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/paulkm/autotrain-lottery_v2-2420075389 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("paulkm/autotrain-lottery_v2-2420075389", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("paulkm/autotrain-lottery_v2-2420075389", use_auth_token=True) inputs = tokenizer("I love AutoTrain", return_tensors="pt") outputs = model(**inputs) ```
gyronee/ppo-LunarLander-V2
gyronee
2022-12-11T13:30:49Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-12-11T13:30:17Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: ppo results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 263.84 +/- 14.15 name: mean_reward verified: false --- # **ppo** Agent playing **LunarLander-v2** This is a trained model of a **ppo** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
Eilons/ppo-LunarLander-v2
Eilons
2022-12-11T12:07:28Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-12-11T12:06:53Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 247.38 +/- 22.45 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
janzw/ppo-lunar-lander-v2_r5
janzw
2022-12-11T12:03:45Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-12-11T12:03:23Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 280.49 +/- 16.28 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
ahmetfirat/ppo-LunarLander-v2
ahmetfirat
2022-12-11T12:02:27Z
1
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-12-11T11:30:13Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 264.93 +/- 12.24 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
sanchit-gandhi/whisper-small-sl-1k-steps
sanchit-gandhi
2022-12-11T11:22:31Z
9
0
transformers
[ "transformers", "pytorch", "tensorboard", "whisper", "automatic-speech-recognition", "whisper-event", "generated_from_trainer", "sl", "dataset:mozilla-foundation/common_voice_11_0", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-12-11T10:15:40Z
--- language: - sl license: apache-2.0 tags: - whisper-event - generated_from_trainer datasets: - mozilla-foundation/common_voice_11_0 metrics: - wer model-index: - name: Whisper Small Slovenian results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: mozilla-foundation/common_voice_11_0 sl type: mozilla-foundation/common_voice_11_0 config: sl split: test args: sl metrics: - name: Wer type: wer value: 26.588921282798832 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper Small Slovenian This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the mozilla-foundation/common_voice_11_0 sl dataset. It achieves the following results on the evaluation set: - Loss: 0.4625 - Wer: 26.5889 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant_with_warmup - lr_scheduler_warmup_steps: 50 - training_steps: 1000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:-------:| | 0.0027 | 13.01 | 1000 | 0.4625 | 26.5889 | ### Framework versions - Transformers 4.26.0.dev0 - Pytorch 2.0.0.dev20221210+cu117 - Datasets 2.7.1.dev0 - Tokenizers 0.13.2
harryrudolph/ppo-Huggy
harryrudolph
2022-12-11T11:07:00Z
8
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "unity-ml-agents", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us" ]
reinforcement-learning
2022-12-11T11:06:45Z
--- tags: - unity-ml-agents - ml-agents - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Huggy library_name: ml-agents --- # **ppo** Agent playing **Huggy** This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser:**. 1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy 2. Step 1: Write your model_id: harryrudolph/ppo-Huggy 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
vantezzen/pankocat
vantezzen
2022-12-11T10:55:24Z
1
1
diffusers
[ "diffusers", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2022-12-11T10:44:50Z
--- license: creativeml-openrail-m tags: - text-to-image - stable-diffusion --- ### Pnkct1 Dreambooth model trained by vantezzen with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb) Or you can run your new concept via `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb) Sample pictures of this concept:
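The card defers inference to a Colab notebook; a minimal local diffusers sketch is shown below, where the prompt wording and the use of a CUDA GPU with fp16 are assumptions:

```python
# Hedged usage sketch: sample an image from the DreamBooth concept "pnkct1".
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "vantezzen/pankocat", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")  # assumes a CUDA-capable GPU

image = pipe("a photo of pnkct1 cat sitting on a windowsill").images[0]
image.save("pankocat.png")
```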
polejowska/convnext-tiny-224-finetuned-eurosat-vitconfig-test-1
polejowska
2022-12-11T10:12:45Z
28
0
transformers
[ "transformers", "pytorch", "tensorboard", "vit", "image-classification", "generated_from_trainer", "dataset:imagefolder", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2022-12-11T09:59:58Z
--- tags: - generated_from_trainer datasets: - imagefolder model-index: - name: convnext-tiny-224-finetuned-eurosat-vitconfig-test-1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # convnext-tiny-224-finetuned-eurosat-vitconfig-test-1 This model is a fine-tuned version of [](https://huggingface.co/) on the imagefolder dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 4 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Framework versions - Transformers 4.25.1 - Pytorch 1.13.0+cu116 - Datasets 2.7.1 - Tokenizers 0.13.2
Alan1999/ppo-LunarLander-v2
Alan1999
2022-12-11T09:24:12Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-12-11T09:23:44Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 270.83 +/- 15.56 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
bnriiitb/whisper-small-te
bnriiitb
2022-12-11T09:11:11Z
13
0
transformers
[ "transformers", "pytorch", "tensorboard", "whisper", "automatic-speech-recognition", "hf-asr-leaderboard", "generated_from_trainer", "te", "dataset:Chai_Bisket_Stories_16-08-2021_14-17", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-11-21T19:28:59Z
--- language: - te license: apache-2.0 tags: - hf-asr-leaderboard - generated_from_trainer datasets: - Chai_Bisket_Stories_16-08-2021_14-17 metrics: - wer model-index: - name: Whisper Small Telugu - Naga Budigam results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Chai_Bisket_Stories_16-08-2021_14-17 type: Chai_Bisket_Stories_16-08-2021_14-17 config: None split: None args: 'config: te, split: test' metrics: - name: Wer type: wer value: 77.48711850971065 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper Small Telugu - Naga Budigam This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Chai_Bisket_Stories_16-08-2021_14-17 dataset. It achieves the following results on the evaluation set: - Loss: 0.7063 - Wer: 77.4871 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 5000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:-------:| | 0.2933 | 2.62 | 500 | 0.3849 | 86.6429 | | 0.0692 | 5.24 | 1000 | 0.3943 | 82.7190 | | 0.0251 | 7.85 | 1500 | 0.4720 | 82.4415 | | 0.0098 | 10.47 | 2000 | 0.5359 | 81.6092 | | 0.0061 | 13.09 | 2500 | 0.5868 | 75.9413 | | 0.0025 | 15.71 | 3000 | 0.6235 | 76.6944 | | 0.0009 | 18.32 | 3500 | 0.6634 | 78.3987 | | 0.0005 | 20.94 | 4000 | 0.6776 | 77.1700 | | 0.0002 | 23.56 | 4500 | 0.6995 | 78.2798 | | 0.0001 | 26.18 | 5000 | 0.7063 | 77.4871 | ### Framework versions - Transformers 4.26.0.dev0 - Pytorch 1.13.0 - Datasets 2.7.1 - Tokenizers 0.13.2
SerdarHelli/SDF-StyleGAN-3D
SerdarHelli
2022-12-11T09:01:38Z
0
4
null
[ "Shape modeling", "Volumetric models", "dataset:shapenet", "arxiv:2206.12055", "license:other", "region:us" ]
null
2022-12-08T07:19:24Z
--- license: other tags: - Shape modeling - Volumetric models datasets: - shapenet --- ### Model Description - SDF-StyleGAN: Implicit SDF-Based StyleGAN for 3D Shape Generation - Zheng, Xin-Yang and Liu, Yang and Wang, Peng-Shuai and Tong, Xin, 2022 SDF-StyleGAN is a deep-learning model for 3D shape generation that operates on signed distance fields (SDFs) and is based on StyleGAN2. The goal of this approach is to minimize the visual and geometric differences between the generated shapes and a collection of existing shapes. ### Documents - [GitHub Repo](https://github.com/Zhengxinyang/SDF-StyleGAN) - [Paper - SDF-StyleGAN: Implicit SDF-Based StyleGAN for 3D Shape Generation](https://arxiv.org/pdf/2206.12055.pdf) ### Datasets ShapeNet is a comprehensive 3D shape dataset created for research in computer graphics, computer vision, robotics and related disciplines. - [Official Dataset of ShapeNet](https://shapenet.org/) - [author's data preparation script](https://github.com/Zhengxinyang/SDF-StyleGAN) - [author's training data](https://pan.baidu.com/s/1nVS7wlcOz62nYBgjp_M8Yg?pwd=oj1b) ### How to use Training snippets are published under the official GitHub repository above. ### BibTeX Entry and Citation Info ``` @inproceedings{zheng2022sdfstylegan, title = {SDF-StyleGAN: Implicit SDF-Based StyleGAN for 3D Shape Generation}, author = {Zheng, Xin-Yang and Liu, Yang and Wang, Peng-Shuai and Tong, Xin}, booktitle = {Comput. Graph. Forum (SGP)}, year = {2022}, } ```
CarpetCleaningofFriscoTX/CarpetCleaningofFriscoTX
CarpetCleaningofFriscoTX
2022-12-11T08:54:26Z
0
0
null
[ "license:other", "region:us" ]
null
2022-12-11T08:53:52Z
--- license: other --- Carpet Cleaning of Frisco TX http://carpetcleaningoffrisco.com/ 972-674-8941 Our truck mounted Floor covering Cleaning of Frisco TX administrations will be precisely exact thing you want in the event that your rugs require an uncompromising purging. Our portable professionals are generally outfitted with extra hardware within their trucks that will permit them to play out an incredibly strong profound disinfection of your ground surface. Pet stain and scent evacuation is effectively dealt with when you have Rug Cleaning of Frisco TX on your side. Try not to worry over a little wreck that your doggies made. Our cleaners will make quick work of it and eliminate the splotch and smell in a matter of moments. You should simply settle on the fast decision.
CarpetCleaningLewisvilleTX/UpholsteryCleaningLewisvilleTX
CarpetCleaningLewisvilleTX
2022-12-11T08:51:06Z
0
0
null
[ "license:other", "region:us" ]
null
2022-12-11T08:50:33Z
--- license: other --- Upholstery Cleaning Lewisville TX https://carpetcleaninglewisville.com/upholstery-cleaning.html 972-338-5376 We have all been in situations where someone accidentally spills wine on your couch during a family get-together or where children misbehave and throw food all over your furniture.You can't go back to these times.However, Carpet Cleaning Lewisville, TX can get rid of all of these unsightly stains and give your upholstery a new look and scent.
CarpetCleaningLewisvilleTX/RugCleaningLewisvilleTX
CarpetCleaningLewisvilleTX
2022-12-11T08:48:53Z
0
0
null
[ "license:other", "region:us" ]
null
2022-12-11T08:48:23Z
--- license: other --- Rug Cleaning Lewisville TX https://carpetcleaninglewisville.com/rug-cleaning.html 972-338-5376 If you're looking for a rug cleaning company near me in Lewisville, Texas, we found the right one for you.The best rug cleaning services at the most affordable prices are available from Carpet Cleaning Lewisville, TX.Simply pick up the phone and give us a call to receive exceptional service.
CarpetCleaningLewisvilleTX/AirDuctCleaningLewisvilleTX
CarpetCleaningLewisvilleTX
2022-12-11T08:48:00Z
0
0
null
[ "license:other", "region:us" ]
null
2022-12-11T08:47:23Z
--- license: other --- Air Duct Cleaning Lewisville TX https://carpetcleaninglewisville.com/air-duct.html 972-338-5376 To ensure that your home's air is clean, air ducts need to be cleaned on a regular basis.Because you have Carpet Cleaning Lewisville, TX, it won't cost you as much as it did before.In addition to professional and thorough cleaning, we will offer you the best deals on air duct cleaning.For more information, contact us right away.
CultivatorX/Chinese-Digital-Art
CultivatorX
2022-12-11T08:44:04Z
0
25
null
[ "stable-diffusion", "text-to-image", "en", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2022-12-11T06:48:37Z
--- language: - en thumbnail: "https://s3.amazonaws.com/moonup/production/uploads/1670742434498-633a20a88f27255b6b56290b.png" license: creativeml-openrail-m tags: - stable-diffusion - text-to-image --- # Chinese Digital Art Diffusion **Trigger Words: CNDigitalArt Style** This is a fine-tuned Stable Diffusion model trained on some of the **Chinese Digital Arts** style that usually uses on Chinese Interactive Reading (Visual Novel) platforms such as **Orange Light** [66rpg.com](https://66rpg.com) or **NetEase Interactive Reading Platform** [avg.163.com](https://avg.163.com/). _if you don't know what that is, don't worry, it's just one of those really big thing in China that majority of Westerners had no clue about._ ![Trained.png](https://s3.amazonaws.com/moonup/production/uploads/1670748193502-633a20a88f27255b6b56290b.png) Use the tokens **_CNDigitalArt Style_** in your prompts to test and experiment it yourself. **EXAMPLES:** _These results were tested on the 2000 Steps model [ **CNDigitalArt_2000.ckpt**](https://huggingface.co/CultivatorX/Chinese-Digital-Art/blob/main/CNDigitalArt_2000.ckpt). I just did 20 batches of -1 seeds in random for each of the prompt (most of which isn't that good) but it does have some really good ones. Prompt: **a portrait of Megan Fox in CNDigitalArt Style** Negative prompt: _lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry, artist name, two faces, two heads_ Steps: 20, Sampler: Euler, CFG scale: 7, Seed: 593563256, Face restoration: GFPGAN, Size: 512x512, Model hash: 2258c119 ![Scarlett Fox.png](https://s3.amazonaws.com/moonup/production/uploads/1670742434498-633a20a88f27255b6b56290b.png) Prompt: **a portrait of Scarlett Johansson in CNDigitalArt Style** Negative prompt: lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry, artist name, two faces, two heads Steps: 20, Sampler: Euler, CFG scale: 7, Seed: 4272335413, Face restoration: GFPGAN, Size: 512x512, Model hash: 2258c119 ===================================================================== ===================================================================== Prompt: **a portrait of Emma Watson in CNDigitalArt Style** Negative prompt: lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry, artist name, two faces, two heads Steps: 20, Sampler: Euler, CFG scale: 7, Seed: 3813059825, Face restoration: GFPGAN, Size: 512x512, Model hash: 2258c119 ![Emma Zendeya.png](https://s3.amazonaws.com/moonup/production/uploads/1670742782225-633a20a88f27255b6b56290b.png) Prompt: **a portrait of Zendaya in CNDigitalArt Style** Negative prompt: lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry, artist name, two faces, two heads Steps: 20, Sampler: Euler, CFG scale: 7, Seed: 962052606, Face restoration: GFPGAN, Size: 512x512, Model hash: 2258c119
RichardsonTXCarpetCleaning/AirDuctCleaningRichardsonTX
RichardsonTXCarpetCleaning
2022-12-11T08:38:59Z
0
0
null
[ "license:other", "region:us" ]
null
2022-12-11T08:38:34Z
--- license: other --- Air Duct Cleaning Richardson TX https://carpetcleaning-richardson.com/air-duct-cleaning.html (972) 454-9815 Do you require a cleaning service from professionals with years of experience?If so, contact us right away.We have been working to improve customers' homes' climates for a long time and can also assist you.Because our equipment can reach far to remove all harmful material from your ducts, we do not leave any area unclean.
RichardsonTXCarpetCleaning/TileandGroutCleaningRichardsonTX
RichardsonTXCarpetCleaning
2022-12-11T08:36:55Z
0
0
null
[ "license:other", "region:us" ]
null
2022-12-11T08:36:05Z
--- license: other --- Tile and Grout Cleaning Richardson TX https://carpetcleaning-richardson.com/tile-and-grout-cleaning.html (972) 454-9815 We have a Cheap Tile Cleaning service that brightens your floor and gives your home a clean look if you've been putting off cleaning your tiles because of the cost. Carpet cleaning in Richardson, Texas, doesn't just clean carpets. We cover everything when it comes to cleaning your home, from your ducts and vents to your tile and grout.
RichardsonTXCarpetCleaning/UpholsteryCleaningRichardsonTX
RichardsonTXCarpetCleaning
2022-12-11T08:34:22Z
0
0
null
[ "license:other", "region:us" ]
null
2022-12-11T08:28:17Z
--- license: other --- Upholstery Cleaning Richardson TX https://carpetcleaning-richardson.com/upholstery-cleaning.html (972) 454-9815 Your furniture is among the most expensive items in your home, along with your jewelry, electronics, cars, and other possessions. It's possible that some of this furniture was passed down through generations. You want to take care of it so that future generations can continue to enjoy it. Call Richardson TX Carpet Cleaning right away if you require steam cleaning for your upholstery!
luigisaetta/whisper-medium-it
luigisaetta
2022-12-11T08:19:08Z
18
2
transformers
[ "transformers", "pytorch", "tensorboard", "whisper", "automatic-speech-recognition", "generated_from_trainer", "whisper-event", "it", "dataset:mozilla-foundation/common_voice_11_0", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-12-08T18:00:42Z
--- language: - it license: apache-2.0 tags: - generated_from_trainer - whisper-event datasets: - mozilla-foundation/common_voice_11_0 metrics: - wer model-index: - name: luigisaetta/whisper-medium-it results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: mozilla-foundation/common_voice_11_0 it type: mozilla-foundation/common_voice_11_0 config: it split: test args: it metrics: - name: Wer type: wer value: 5.7191 --- # luigisaetta/whisper-medium-it This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the common_voice_11_0 dataset. It achieves the following results on the evaluation set: - Loss: 0.1452 - Wer: 5.7191 ## Model description This model is a fine-tuning of the OpenAI Whisper Medium model on the specified dataset. ## Intended uses & limitations This model has been developed as part of the Hugging Face Whisper Fine Tuning sprint, December 2022. It is meant to spread knowledge of how these models are built, and it can be used to develop solutions where ASR for the Italian language is needed. It has not been extensively tested. It is possible that on other datasets the accuracy will be lower. Please test it before using it. ## Training and evaluation data Trained and tested on Mozilla Common Voice, version 11. ## Training procedure The script **run.sh** and the Python file used for the training are saved in the repository. ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 5000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:-------:| | 0.1216 | 0.2 | 1000 | 0.2289 | 10.0594 | | 0.1801 | 0.4 | 2000 | 0.1851 | 7.6593 | | 0.1763 | 0.6 | 3000 | 0.1615 | 6.5258 | | 0.1337 | 0.8 | 4000 | 0.1506 | 6.0427 | | 0.0742 | 1.05 | 5000 | 0.1452 | 5.7191 | ### Framework versions - Transformers 4.26.0.dev0 - Pytorch 1.13.0+cu117 - Datasets 2.7.1.dev0 - Tokenizers 0.13.2
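The card does not include inference code; below is a minimal, hedged sketch of how one might transcribe Italian audio with this checkpoint using the transformers ASR pipeline. The audio file name is a placeholder, and 16 kHz mono input is assumed.

```python
# Hedged inference sketch (not from the card): transcribe an Italian recording.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="luigisaetta/whisper-medium-it",
    chunk_length_s=30,   # assumption: chunk long recordings into 30 s windows
)

# "sample_it.wav" is a hypothetical 16 kHz mono audio file.
print(asr("sample_it.wav")["text"])
```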
GreenCarpetCleaningGarland/GreenCarpetCleaningGarland
GreenCarpetCleaningGarland
2022-12-11T08:12:46Z
0
0
null
[ "license:other", "region:us" ]
null
2022-12-11T08:12:22Z
--- license: other --- Green Carpet Cleaning Garland http://garlandcarpetcleaner.com/ (972) 256-8544 One of the methods we follow in carpet cleaning is a "Steam Cleaning Service" that relies on using minimal hot water and more steam. The steam penetrates deep into spots and stains to dissolve all of them, even the toughest ones, and removes pollutants from your carpet. Then our effective green products clear away all of this residue, leaving your carpet sparkling and bright. Finally, we use our excellent drying machines, so your carpet will be fully dry in no time. We have dedicated carpet steam cleaners who know how to work to a high professional standard while protecting your carpet from any damage.
polixonrio/whisper-small-fy-NL
polixonrio
2022-12-11T08:09:11Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "whisper", "automatic-speech-recognition", "whisper-event", "generated_from_trainer", "fy", "dataset:mozilla-foundation/common_voice_11_0", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-12-10T17:27:53Z
--- language: - fy license: apache-2.0 tags: - whisper-event - generated_from_trainer datasets: - mozilla-foundation/common_voice_11_0 metrics: - wer model-index: - name: Whisper Small Western Frisian (Netherlands) results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: mozilla-foundation/common_voice_11_0 fy-NL type: mozilla-foundation/common_voice_11_0 config: fy-NL split: test args: fy-NL metrics: - name: Wer type: wer value: 22.29686271707282 --- # Whisper Small Western Frisian (Netherlands) This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the mozilla-foundation/common_voice_11_0 fy-NL dataset. This is an attempt for cross lingual transfer from Dutch to Frisian, since Whisper doesn't support Frisian. It achieves the following results on the evaluation set: - Loss: 0.5443 - Wer: 22.2969 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 64 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 5000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:-------:| | 0.0067 | 10.01 | 1000 | 0.4810 | 23.0115 | | 0.0008 | 21.0 | 2000 | 0.5200 | 22.3576 | | 0.0004 | 31.01 | 3000 | 0.5443 | 22.2969 | | 0.0003 | 42.0 | 4000 | 0.5610 | 22.3719 | | 0.0002 | 52.01 | 5000 | 0.5674 | 22.3898 | ### Framework versions - Transformers 4.26.0.dev0 - Pytorch 1.13.0+cu117 - Datasets 2.7.1.dev0 - Tokenizers 0.13.2
CarpetCleaningMesquiteTX/AirDuctCleaningMesquiteTX
CarpetCleaningMesquiteTX
2022-12-11T08:00:43Z
0
0
null
[ "license:other", "region:us" ]
null
2022-12-11T08:00:17Z
--- license: other --- Air Duct Cleaning Mesquite TX http://mesquitecarpetcleaningtx.com/air-duct-cleaning.html (469) 213-8132 Cleaning the air ducts is very important. We ensure that your carpets, tile flooring, and rugs are kept clean and in good condition. We can deal with a variety of heater and air conditioner cleaning issues in addition to cleaning air ducts. Your air ducts can be cleaned quickly and inexpensively of dust and debris. No matter how big or small the job is, our team of certified and professionally trained technicians will complete it correctly.
CarpetCleaningMesquiteTX/RugCleaningMesquiteTX
CarpetCleaningMesquiteTX
2022-12-11T07:58:08Z
0
0
null
[ "license:other", "region:us" ]
null
2022-12-11T07:57:46Z
--- license: other --- Rug Cleaning Mesquite TX http://mesquitecarpetcleaningtx.com/rug-cleaning.html (469) 213-8132 Carpet and area rug manufacturers recommend using the free hot water extraction system from Our Rug Cleaning. Carpet Cleaning Mesquite TX can also clean some area rugs at a lower temperature, depending on how many fibers they have. These rugs need to be cleaned with cool water routines. Using a high-controlled cleaning process and a deposit-free cleaning result, we remove all dirt, sand, coarseness, and grime from the area rugs.
CarpetCleaningMesquiteTX/CarpetCleaningMesquiteTX
CarpetCleaningMesquiteTX
2022-12-11T07:57:15Z
0
0
null
[ "license:other", "region:us" ]
null
2022-12-11T07:56:56Z
--- license: other --- Carpet Cleaning Mesquite TX http://mesquitecarpetcleaningtx.com/ (469) 213-8132 The best way to get rid of these bugs is expert steam cleaning with a truck mount. Carpet Cleaning Mesquite TX will give you the complete cleaning service that you expect from truly capable operators. Our cleaners guarantee to always provide thorough, effective, highly rated carpet service and cleaning all over Mesquite TX and its district. We have amazing cleaning technicians who are available for cleaning services throughout the day in the area.
CarpetCleaningMckinneyTX/CarpetCleaningMckinneyTX
CarpetCleaningMckinneyTX
2022-12-11T07:53:59Z
0
0
null
[ "license:other", "region:us" ]
null
2022-12-11T07:53:36Z
--- license: other --- Carpet Cleaning Mckinney TX https://carpetcleaningmckinneytx.com/ (469) 702-1202 People look for first-class services to keep their homes tidy and up to date. We are confident in what we do because we combine our years of experience with modern equipment, bringing out the ideal result. For instance, our steam carpet cleaning method guarantees the oil stains on your rug are permanently washed out with little water. Your rug will have minimal drying time and be back on the floor quicker than expected.
FortWorthCarpetCleaning/UpholsteryCleaningFortWorthTX
FortWorthCarpetCleaning
2022-12-11T07:51:04Z
0
0
null
[ "license:other", "region:us" ]
null
2022-12-11T07:50:42Z
--- license: other --- Upholstery Cleaning Fort Worth TX https://txfortworthcarpetcleaning.com/upholstery-cleaning.html (817) 523-1237 When you sit on your upholstery, you inhale allergens, dirt, and dust that are trapped in its fibers. Therefore, if you want to ensure the safety of your upholstery, especially if you have children or pets, you need to hire experts in upholstery cleaning in Fort Worth, Texas. We have the best upholstery cleaners who will come to your house and do an excellent job of cleaning it. Understanding the various fibers of your furniture is important to our technicians because it helps them choose effective and safe cleaning methods. When you hire us, we promise to give you a lot of attention and care, and we won't start cleaning your upholstery until we make sure the products we use are safe for the kind of fabric it is made of.
FortWorthCarpetCleaning/RugCleaningFortWorthTX
FortWorthCarpetCleaning
2022-12-11T07:49:51Z
0
0
null
[ "license:other", "region:us" ]
null
2022-12-11T07:49:30Z
--- license: other --- Rug Cleaning Fort Worth TX https://txfortworthcarpetcleaning.com/rug-cleaning.html (817) 523-1237 Carpet cleaning Fort Worth TX is nearby and able to provide you with professional cleaning services if you require an efficient and high-quality rug cleaning service. Simply contact our professionals, and your rug will regain its vibrant color and stunning appearance. We use products and equipment that enable us to provide you with the best results, such as rug shampooing, which enables us to restore your rug's beautiful appearance and the amazing scent that permeates your entire home. Call us for $20 off these services if you need them.
FortWorthCarpetCleaning/CarpetCleaningFortWorthTX
FortWorthCarpetCleaning
2022-12-11T07:49:00Z
0
0
null
[ "license:other", "region:us" ]
null
2022-12-11T07:48:41Z
--- license: other --- Carpet Cleaning Fort Worth TX https://txfortworthcarpetcleaning.com/carpet-cleaning.html (817) 523-1237 Carpet cleaning Fort Worth TX always focuses on making your home appear beautiful, particularly if this beauty depends on the appearance of your carpets, furniture, rugs, tiles, and ducts. We are the business that works to make your life in your home better. With our help, you can have a healthy and beautiful home. Call us if your current carpet has numerous stains and odors, you are unable to use it again due to its poor appearance, and you are considering purchasing a new one.
CarpetCleaningArlingtonTX/CarpetCleaningArlingtonTX
CarpetCleaningArlingtonTX
2022-12-11T07:39:36Z
0
0
null
[ "license:other", "region:us" ]
null
2022-12-11T07:39:07Z
--- license: other --- Carpet Cleaning Arlington TX https://carpetcleaning-arlington-tx.com/ (817) 381-5072 At Rug Cleaning Plano in TX we likewise have a truck-mounted carpet cleaning system. These mobile vehicles carry a powerhouse of equipment. They always have it on board, and they can finish any job properly. Whether it is a small home, a large house, or a huge industrial complex, the task is never too big or too tough.
CarpetCleaningPlanoTX/AirVentCleaningPlanoTX
CarpetCleaningPlanoTX
2022-12-11T07:34:27Z
0
0
null
[ "license:other", "region:us" ]
null
2022-12-11T07:34:07Z
--- license: other --- Air Vent Cleaning Plano TX https://carpetcleaningplanotx.com/air-vent-cleaning.html (469) 444-1903 Cleaning air vents need not be difficult. Carpet Cleaning Plano in Texas is a team of experienced air vent cleaners who know how to do the job right. Professionals with certifications make up our team of technicians, who will arrive in our cutting-edge mobile cleaning units.
CarpetCleaningPlanoTX/AirDuctCleaningPlanoTX
CarpetCleaningPlanoTX
2022-12-11T07:33:31Z
0
0
null
[ "license:other", "region:us" ]
null
2022-12-11T07:33:09Z
--- license: other --- Air Duct Cleaning Plano TX https://carpetcleaningplanotx.com/air-duct-cleaning.html (469) 444-1903 Studies and other health research have long shown that airborne irritants are bad for your health. Mold, pollen, and dust are examples. Your capacity to breathe is seriously impacted by these. Allergies and other respiratory issues are brought on by these pollutants. They may occasionally trigger attacks that can be fatal. What is the most important way to keep the air in your home or place of business clean? It is cleaning air ducts.
CarpetCleaningPlanoTX/UpholsteryCleaningPlanoTX
CarpetCleaningPlanoTX
2022-12-11T07:31:41Z
0
0
null
[ "license:other", "region:us" ]
null
2022-12-11T07:31:20Z
--- license: other --- Upholstery Cleaning Plano TX https://carpetcleaningplanotx.com/upholstery-cleaning.html (469) 444-1903 We remove stains from sofas. When you have a nice, comfortable sofa in your home, spills are common. On that new couch, game day weekends can be difficult. When they are excited about who is winning on the playing field, friends, family, and pets can cause havoc. After a party, upholstery cleaning is not a problem. We can arrive with our mobile unit, which simplifies the task.
CarpetCleaningPlanoTX/RugCleaningPlanoTX
CarpetCleaningPlanoTX
2022-12-11T07:30:50Z
0
0
null
[ "license:other", "region:us" ]
null
2022-12-11T07:30:22Z
--- license: other --- Rug Cleaning Plano TX https://carpetcleaningplanotx.com/rug-cleaning.html (469) 444-1903 Don't put your carpets, rugs, and other cleaning needs at risk. In particular, avoid immersing them in hazardous and wasteful chemical processes. We use cutting-edge green rug cleaning services at Carpet Cleaning Plano, Texas, that others in Texas cannot match. Rug cleaning is safe and good for the environment thanks to our cutting-edge washing technology. This will not harm your property or put your friends, family, or pets in danger.
muhtasham/medium-mlm-tweet-target-tweet
muhtasham
2022-12-11T07:30:40Z
3
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "dataset:tweet_eval", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-12-11T07:25:17Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - tweet_eval metrics: - accuracy - f1 model-index: - name: medium-mlm-tweet-target-tweet results: - task: name: Text Classification type: text-classification dataset: name: tweet_eval type: tweet_eval config: emotion split: train args: emotion metrics: - name: Accuracy type: accuracy value: 0.7593582887700535 - name: F1 type: f1 value: 0.7637254221785755 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # medium-mlm-tweet-target-tweet This model is a fine-tuned version of [muhtasham/medium-mlm-tweet](https://huggingface.co/muhtasham/medium-mlm-tweet) on the tweet_eval dataset. It achieves the following results on the evaluation set: - Loss: 1.9066 - Accuracy: 0.7594 - F1: 0.7637 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - num_epochs: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.4702 | 4.9 | 500 | 0.8711 | 0.7540 | 0.7532 | | 0.0629 | 9.8 | 1000 | 1.2918 | 0.7701 | 0.7668 | | 0.0227 | 14.71 | 1500 | 1.4801 | 0.7727 | 0.7696 | | 0.0181 | 19.61 | 2000 | 1.5118 | 0.7888 | 0.7870 | | 0.0114 | 24.51 | 2500 | 1.6747 | 0.7754 | 0.7745 | | 0.0141 | 29.41 | 3000 | 1.8765 | 0.7674 | 0.7628 | | 0.0177 | 34.31 | 3500 | 1.9066 | 0.7594 | 0.7637 | ### Framework versions - Transformers 4.25.1 - Pytorch 1.12.1 - Datasets 2.7.1 - Tokenizers 0.13.2
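As a hedged usage sketch (not included in the card), the fine-tuned checkpoint can presumably be loaded with the transformers text-classification pipeline; the example tweet is invented, and the returned label names depend on the id2label mapping saved with the model.

```python
# Hedged sketch: classify the emotion of a tweet with the fine-tuned model.
from transformers import pipeline

clf = pipeline("text-classification", model="muhtasham/medium-mlm-tweet-target-tweet")

# Invented example input; output labels follow whatever id2label mapping was saved.
print(clf("I can't believe we finally shipped the release, what a day!"))
```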
CarpetCleaningPlanoTX/CarpetStainRemovalPlanoTX
CarpetCleaningPlanoTX
2022-12-11T07:29:56Z
0
0
null
[ "license:other", "region:us" ]
null
2022-12-11T07:29:29Z
--- license: other --- Carpet Stain Removal Plano TX https://carpetcleaningplanotx.com/carpet-stain-removal.html (469) 444-1903 Carpet Cleaning Plano in Texas is the company of choice for the majority of customers when it comes to stain removal. We have the best-trained staff and professional technology. We will get rid of even the worst stain, whether it comes from your upholstery, fabrics, curtains, or carpets. Try us out today, and you'll see why the majority of people prefer us to everyone else.
CandyCarpetCleaningIrving/DryerVentCleaningIrvingTX
CandyCarpetCleaningIrving
2022-12-11T07:22:36Z
0
0
null
[ "license:other", "region:us" ]
null
2022-12-11T07:21:49Z
--- license: other --- Dryer Vent Cleaning Irving TX (214) 744-3341 https://carpetcleaninginirving.com/dryer-vent.html We can assist you if you need Lint Buildup Removal in Irving, Texas. Our cleaning technicians have a lot of knowledge and experience to help you. Your dryer won't dry your clothes as well as it used to when it has a lot of this material in it.
CandyCarpetCleaningIrving/AirDuctCleaningIrvingTX
CandyCarpetCleaningIrving
2022-12-11T07:19:05Z
0
0
null
[ "region:us" ]
null
2022-12-11T07:18:37Z
Air Duct Cleaning Irving TX https://carpetcleaninginirving.com/air-duct.html (214) 744-3341 We offer a service for cleaning your home's ducts that gets rid of harmful substances that could make you sick. It's likely that you've been sneezing a lot at home when the air conditioner or heater is on. If that is the case, your ducts most likely contain mold, pollen, dust, or dirt.
CandyCarpetCleaningIrving/TileGroutCleaningIrvingTX
CandyCarpetCleaningIrving
2022-12-11T07:18:00Z
0
0
null
[ "region:us" ]
null
2022-12-11T07:17:20Z
--- license: other --- Tile Grout Cleaning Irving TX https://carpetcleaninginirving.com/tile-grout.html (214) 744-3341 We are available and can assist you at any time if you require Tile and Grout Cleaners in Irving, Texas, who view this occupation as a career and make significant investments in comprehending the most effective ways to serve their customers. It's possible that the household cleaners you use are actually making your tile dirty. This includes your mop, which occasionally mixes grease, spills, and dirt with the grout.
muhtasham/base-mlm-imdb-target-tweet
muhtasham
2022-12-11T07:16:23Z
3
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "dataset:tweet_eval", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-12-11T07:11:00Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - tweet_eval metrics: - accuracy - f1 model-index: - name: base-mlm-imdb-target-tweet results: - task: name: Text Classification type: text-classification dataset: name: tweet_eval type: tweet_eval config: emotion split: train args: emotion metrics: - name: Accuracy type: accuracy value: 0.7754010695187166 - name: F1 type: f1 value: 0.77889743305892 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # base-mlm-imdb-target-tweet This model is a fine-tuned version of [muhtasham/base-mlm-imdb](https://huggingface.co/muhtasham/base-mlm-imdb) on the tweet_eval dataset. It achieves the following results on the evaluation set: - Loss: 1.7516 - Accuracy: 0.7754 - F1: 0.7789 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - num_epochs: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.3412 | 4.9 | 500 | 1.0525 | 0.7888 | 0.7891 | | 0.0365 | 9.8 | 1000 | 1.4590 | 0.7540 | 0.7572 | | 0.0127 | 14.71 | 1500 | 1.4788 | 0.7888 | 0.7890 | | 0.0137 | 19.61 | 2000 | 1.7516 | 0.7754 | 0.7789 | ### Framework versions - Transformers 4.25.1 - Pytorch 1.12.1 - Datasets 2.7.1 - Tokenizers 0.13.2
CandyCarpetCleaningIrving/RugCleaningIrvingTX
CandyCarpetCleaningIrving
2022-12-11T07:15:12Z
0
0
null
[ "license:other", "region:us" ]
null
2022-12-11T07:12:39Z
--- license: other --- Rug Cleaning Irving TX https://carpetcleaninginirving.com/rug.html (214) 744-3341 We can help you with Area Rug Cleaning in Irving, Texas, if you need it. We have developed superior cleaning techniques that can bring out the beauty of this home accent, especially if it hasn't been cleaned in a while.
EmadSalem/SpeakToChatGPT
EmadSalem
2022-12-11T07:08:41Z
0
0
null
[ "region:us" ]
null
2022-12-10T13:09:48Z
--- title: SpeakToChatGPT emoji: 📊 colorFrom: blue colorTo: blue sdk: gradio sdk_version: 3.12.0 app_file: app.py pinned: false duplicated_from: yizhangliu/chatGPT --- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
Sanjay-Papaiahgari/ppo-Huggy
Sanjay-Papaiahgari
2022-12-11T07:06:57Z
5
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "unity-ml-agents", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us" ]
reinforcement-learning
2022-12-11T07:06:49Z
--- tags: - unity-ml-agents - ml-agents - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Huggy library_name: ml-agents --- # **ppo** Agent playing **Huggy** This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser:**. 1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy 2. Step 1: Write your model_id: Sanjay-Papaiahgari/ppo-Huggy 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
CleaningCarpetDallas/DryerVentCleaningDallasTX
CleaningCarpetDallas
2022-12-11T07:04:43Z
0
0
null
[ "license:other", "region:us" ]
null
2022-12-11T07:04:23Z
--- license: other --- http://cleaningcarpetdallas.com/dryer-vent-cleaning.html (972) 643-8799 Another skill that our Dallas technicians have mastered is cleaning dryer vents. Do you believe that the level of operation of your drying machine is lower than its normal and typical performance? Please let us know if you think there may be clogged ducts and vents so we can assist you.
muhtasham/mini-mlm-imdb-target-tweet
muhtasham
2022-12-11T07:03:10Z
3
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "dataset:tweet_eval", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-12-11T07:00:40Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - tweet_eval metrics: - accuracy - f1 model-index: - name: mini-mlm-imdb-target-tweet results: - task: name: Text Classification type: text-classification dataset: name: tweet_eval type: tweet_eval config: emotion split: train args: emotion metrics: - name: Accuracy type: accuracy value: 0.767379679144385 - name: F1 type: f1 value: 0.7668830990510893 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mini-mlm-imdb-target-tweet This model is a fine-tuned version of [muhtasham/mini-mlm-imdb](https://huggingface.co/muhtasham/mini-mlm-imdb) on the tweet_eval dataset. It achieves the following results on the evaluation set: - Loss: 1.3042 - Accuracy: 0.7674 - F1: 0.7669 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - num_epochs: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8543 | 4.9 | 500 | 0.6920 | 0.7674 | 0.7571 | | 0.3797 | 9.8 | 1000 | 0.7231 | 0.7727 | 0.7709 | | 0.1668 | 14.71 | 1500 | 0.9171 | 0.7594 | 0.7583 | | 0.068 | 19.61 | 2000 | 1.1558 | 0.7647 | 0.7642 | | 0.0409 | 24.51 | 2500 | 1.3042 | 0.7674 | 0.7669 | ### Framework versions - Transformers 4.25.1 - Pytorch 1.12.1 - Datasets 2.7.1 - Tokenizers 0.13.2
Shiry/Whisper_hebrew_medium
Shiry
2022-12-11T07:00:26Z
35
1
transformers
[ "transformers", "pytorch", "whisper", "automatic-speech-recognition", "whisper-event", "generated_from_trainer", "he", "dataset:google/fleurs", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-12-03T15:11:25Z
--- language: - he license: apache-2.0 tags: - whisper-event - generated_from_trainer datasets: - google/fleurs metrics: - wer model-index: - name: Whisper Medium Hebrew results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: google/fleurs he_il type: google/fleurs config: he_il split: test args: he_il metrics: - name: Wer type: wer value: 34 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper Medium Hebrew This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the google/fleurs he_il dataset. It achieves the following results on the evaluation set: - Wer: 34 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 5000 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.26.0.dev0 - Pytorch 1.13.0+cu117 - Datasets 2.7.1.dev0 - Tokenizers 0.13.2
CleaningCarpetDallas/UpholsteryCleaningDallasTX
CleaningCarpetDallas
2022-12-11T06:58:59Z
0
0
null
[ "license:other", "region:us" ]
null
2022-12-11T06:58:36Z
--- license: other --- http://cleaningcarpetdallas.com/upholstery-cleaning.html (972) 643-8799 Spots and stains on your microfiber sofa, couch, or loveseat can seriously ruin the appearance of your living room. You won't stand out with your gourmet and designer rugs, grandfather clocks, and artwork, and you'll also make your friends laugh.
muhtasham/base-vanilla-target-tweet
muhtasham
2022-12-11T06:56:07Z
4
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "dataset:tweet_eval", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-12-11T06:46:39Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - tweet_eval metrics: - accuracy - f1 model-index: - name: base-vanilla-target-tweet results: - task: name: Text Classification type: text-classification dataset: name: tweet_eval type: tweet_eval config: emotion split: train args: emotion metrics: - name: Accuracy type: accuracy value: 0.7780748663101604 - name: F1 type: f1 value: 0.7772664883136655 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # base-vanilla-target-tweet This model is a fine-tuned version of [google/bert_uncased_L-12_H-768_A-12](https://huggingface.co/google/bert_uncased_L-12_H-768_A-12) on the tweet_eval dataset. It achieves the following results on the evaluation set: - Loss: 1.8380 - Accuracy: 0.7781 - F1: 0.7773 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - num_epochs: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.3831 | 4.9 | 500 | 0.9800 | 0.7807 | 0.7785 | | 0.0414 | 9.8 | 1000 | 1.4175 | 0.7754 | 0.7765 | | 0.015 | 14.71 | 1500 | 1.6411 | 0.7754 | 0.7708 | | 0.0166 | 19.61 | 2000 | 1.5930 | 0.7941 | 0.7938 | | 0.0175 | 24.51 | 2500 | 1.3934 | 0.7888 | 0.7852 | | 0.0191 | 29.41 | 3000 | 1.9407 | 0.7647 | 0.7658 | | 0.0137 | 34.31 | 3500 | 1.8380 | 0.7781 | 0.7773 | ### Framework versions - Transformers 4.25.1 - Pytorch 1.12.1 - Datasets 2.7.1 - Tokenizers 0.13.2
muhtasham/small-vanilla-target-tweet
muhtasham
2022-12-11T06:40:08Z
3
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "dataset:tweet_eval", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-12-11T06:37:14Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - tweet_eval metrics: - accuracy - f1 model-index: - name: small-vanilla-target-tweet results: - task: name: Text Classification type: text-classification dataset: name: tweet_eval type: tweet_eval config: emotion split: train args: emotion metrics: - name: Accuracy type: accuracy value: 0.7540106951871658 - name: F1 type: f1 value: 0.7525253900501888 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # small-vanilla-target-tweet This model is a fine-tuned version of [google/bert_uncased_L-4_H-512_A-8](https://huggingface.co/google/bert_uncased_L-4_H-512_A-8) on the tweet_eval dataset. It achieves the following results on the evaluation set: - Loss: 1.8718 - Accuracy: 0.7540 - F1: 0.7525 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - num_epochs: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.5858 | 4.9 | 500 | 0.8189 | 0.7380 | 0.7364 | | 0.1039 | 9.8 | 1000 | 1.1965 | 0.7594 | 0.7568 | | 0.0264 | 14.71 | 1500 | 1.5387 | 0.7433 | 0.7460 | | 0.0142 | 19.61 | 2000 | 1.6758 | 0.7620 | 0.7551 | | 0.0113 | 24.51 | 2500 | 1.8718 | 0.7540 | 0.7525 | ### Framework versions - Transformers 4.25.1 - Pytorch 1.12.1 - Datasets 2.7.1 - Tokenizers 0.13.2
muhtasham/mini-vanilla-target-tweet
muhtasham
2022-12-11T06:37:03Z
3
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "dataset:tweet_eval", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-12-11T06:33:04Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - tweet_eval metrics: - accuracy - f1 model-index: - name: mini-vanilla-target-tweet results: - task: name: Text Classification type: text-classification dataset: name: tweet_eval type: tweet_eval config: emotion split: train args: emotion metrics: - name: Accuracy type: accuracy value: 0.7540106951871658 - name: F1 type: f1 value: 0.7568814825340653 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mini-vanilla-target-tweet This model is a fine-tuned version of [google/bert_uncased_L-4_H-256_A-4](https://huggingface.co/google/bert_uncased_L-4_H-256_A-4) on the tweet_eval dataset. It achieves the following results on the evaluation set: - Loss: 1.5603 - Accuracy: 0.7540 - F1: 0.7569 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - num_epochs: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.9285 | 4.9 | 500 | 0.7493 | 0.7273 | 0.7207 | | 0.4468 | 9.8 | 1000 | 0.7630 | 0.7460 | 0.7437 | | 0.2194 | 14.71 | 1500 | 0.8997 | 0.7406 | 0.7455 | | 0.1062 | 19.61 | 2000 | 1.0822 | 0.7433 | 0.7435 | | 0.0568 | 24.51 | 2500 | 1.2225 | 0.7620 | 0.7622 | | 0.0439 | 29.41 | 3000 | 1.3475 | 0.7513 | 0.7527 | | 0.0304 | 34.31 | 3500 | 1.4999 | 0.7433 | 0.7399 | | 0.0247 | 39.22 | 4000 | 1.5603 | 0.7540 | 0.7569 | ### Framework versions - Transformers 4.25.1 - Pytorch 1.12.1 - Datasets 2.7.1 - Tokenizers 0.13.2
muhtasham/medium-mlm-tweet-target-imdb
muhtasham
2022-12-11T05:41:16Z
3
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "dataset:imdb", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-12-11T05:08:44Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imdb metrics: - accuracy - f1 model-index: - name: medium-mlm-tweet-target-imdb results: - task: name: Text Classification type: text-classification dataset: name: imdb type: imdb config: plain_text split: train args: plain_text metrics: - name: Accuracy type: accuracy value: 0.93632 - name: F1 type: f1 value: 0.9671128739051397 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # medium-mlm-tweet-target-imdb This model is a fine-tuned version of [muhtasham/medium-mlm-tweet](https://huggingface.co/muhtasham/medium-mlm-tweet) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 0.3691 - Accuracy: 0.9363 - F1: 0.9671 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - num_epochs: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.3135 | 0.64 | 500 | 0.2323 | 0.9056 | 0.9505 | | 0.2094 | 1.28 | 1000 | 0.2166 | 0.9187 | 0.9576 | | 0.1622 | 1.92 | 1500 | 0.2011 | 0.9206 | 0.9587 | | 0.112 | 2.56 | 2000 | 0.3647 | 0.9032 | 0.9491 | | 0.093 | 3.2 | 2500 | 0.5445 | 0.8788 | 0.9355 | | 0.0692 | 3.84 | 3000 | 0.2071 | 0.9452 | 0.9718 | | 0.0545 | 4.48 | 3500 | 0.2308 | 0.9548 | 0.9769 | | 0.0482 | 5.12 | 4000 | 0.3297 | 0.9373 | 0.9676 | | 0.0464 | 5.75 | 4500 | 0.3698 | 0.926 | 0.9616 | | 0.0308 | 6.39 | 5000 | 0.3691 | 0.9363 | 0.9671 | ### Framework versions - Transformers 4.25.1 - Pytorch 1.12.1 - Datasets 2.7.1 - Tokenizers 0.13.2
aungmyatv8/ppo-LunarLander-v2
aungmyatv8
2022-12-11T05:23:17Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-12-11T05:04:25Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 252.93 +/- 21.79 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
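The card leaves the usage block as a TODO; a possible way to fill it in (a sketch, not the author's code) is to pull the checkpoint with huggingface_sb3 and evaluate it on LunarLander-v2. The zip file name inside the repo is an assumption.

```python
# Hedged sketch: download the PPO checkpoint from the Hub and evaluate it.
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Assumption: the checkpoint in the repo is stored as "ppo-LunarLander-v2.zip".
checkpoint = load_from_hub("aungmyatv8/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```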
sagawa/ZINC-t5-v2
sagawa
2022-12-11T05:11:31Z
13
0
transformers
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "dataset:sagawa/ZINC-canonicalized", "license:mit", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-11-06T01:33:39Z
--- license: mit datasets: - sagawa/ZINC-canonicalized metrics: - accuracy model-index: - name: ZINC-deberta results: - task: name: Masked Language Modeling type: fill-mask dataset: name: sagawa/ZINC-canonicalized type: sagawa/ZINC-canonicalized metrics: - name: Accuracy type: accuracy value: 0.9475839734077454 --- # ZINC-t5 This model is a fine-tuned version of [google/t5-v1_1-base](https://huggingface.co/microsoft/deberta-base) on the sagawa/ZINC-canonicalized dataset. It achieves the following results on the evaluation set: - Loss: 0.1228 - Accuracy: 0.9476 ## Model description We trained t5 on SMILES from ZINC using the task of masked-language modeling (MLM). Compared to ZINC-t5, ZINC-t5-v2 uses a character-level tokenizer, and it was also trained on ZINC. ## Intended uses & limitations This model can be used for the prediction of molecules' properties, reactions, or interactions with proteins by changing the way of finetuning. As an example, We finetuned this model to predict products. The model is [here](https://huggingface.co/sagawa/ZINC-t5-productpredicition), and you can use the demo [here](https://huggingface.co/spaces/sagawa/predictproduct-t5). Using its encoder, we trained a regression model to predict a reaction yield. You can use this demo [here](https://huggingface.co/spaces/sagawa/predictyield-t5). ## Training and evaluation data We downloaded [ZINC data](https://drive.google.com/drive/folders/1lSPCqh31zxTVEhuiPde7W3rZG8kPgp-z) and canonicalized them using RDKit. Then, we dropped duplicates. The total number of data is 22992522, and they were randomly split into train:validation=10:1. ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-03 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10.0 ### Training results | Training Loss | Step | Accuracy | Validation Loss | |:-------------:|:------:|:--------:|:---------------:| | 0.2090 | 100000 | 0.9264 | 0.1860 | | 0.1628 | 200000 | 0.9349 | 0.1613 | | 0.1632 | 300000 | 0.9395 | 0.1467 | | 0.1451 | 400000 | 0.9435 | 0.1345 | | 0.1311 | 500000 | 0.9465 | 0.1261 |
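The card describes masked-language (span-corruption) pretraining on SMILES; below is a hedged sketch of how one might probe the model with a T5 sentinel token, assuming the character-level tokenizer keeps the standard `<extra_id_0>` sentinel. The SMILES fragment is invented for illustration.

```python
# Hedged sketch: ask the model to fill one masked span in a SMILES string.
from transformers import AutoTokenizer, T5ForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("sagawa/ZINC-t5-v2")
model = T5ForConditionalGeneration.from_pretrained("sagawa/ZINC-t5-v2")

# Invented input: aspirin-like SMILES with one span replaced by the sentinel.
smiles = "CC(=O)O<extra_id_0>C1=CC=CC=C1"
inputs = tokenizer(smiles, return_tensors="pt")
outputs = model.generate(**inputs, max_length=16)
print(tokenizer.decode(outputs[0], skip_special_tokens=False))
```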
muhtasham/base-mlm-imdb-target-imdb
muhtasham
2022-12-11T04:41:55Z
5
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "dataset:imdb", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-12-11T04:02:06Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imdb metrics: - accuracy - f1 model-index: - name: base-mlm-imdb-target-imdb results: - task: name: Text Classification type: text-classification dataset: name: imdb type: imdb config: plain_text split: train args: plain_text metrics: - name: Accuracy type: accuracy value: 0.89184 - name: F1 type: f1 value: 0.942828146143437 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # base-mlm-imdb-target-imdb This model is a fine-tuned version of [muhtasham/base-mlm-imdb](https://huggingface.co/muhtasham/base-mlm-imdb) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 0.4659 - Accuracy: 0.8918 - F1: 0.9428 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - num_epochs: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.2453 | 0.64 | 500 | 0.1892 | 0.9334 | 0.9656 | | 0.1764 | 1.28 | 1000 | 0.1267 | 0.9581 | 0.9786 | | 0.117 | 1.92 | 1500 | 0.1926 | 0.9290 | 0.9632 | | 0.0727 | 2.56 | 2000 | 0.3109 | 0.9182 | 0.9574 | | 0.0665 | 3.2 | 2500 | 0.4659 | 0.8918 | 0.9428 | ### Framework versions - Transformers 4.25.1 - Pytorch 1.12.1 - Datasets 2.7.1 - Tokenizers 0.13.2
muhtasham/medium-mlm-imdb-target-imdb
muhtasham
2022-12-11T04:00:58Z
3
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "dataset:imdb", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-12-11T03:44:43Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imdb metrics: - accuracy - f1 model-index: - name: medium-mlm-imdb-target-imdb results: - task: name: Text Classification type: text-classification dataset: name: imdb type: imdb config: plain_text split: train args: plain_text metrics: - name: Accuracy type: accuracy value: 0.9064 - name: F1 type: f1 value: 0.9509022240872849 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # medium-mlm-imdb-target-imdb This model is a fine-tuned version of [muhtasham/medium-mlm-imdb](https://huggingface.co/muhtasham/medium-mlm-imdb) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 0.3883 - Accuracy: 0.9064 - F1: 0.9509 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - num_epochs: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.2923 | 0.64 | 500 | 0.1860 | 0.9310 | 0.9642 | | 0.2049 | 1.28 | 1000 | 0.0830 | 0.9708 | 0.9852 | | 0.1569 | 1.92 | 1500 | 0.1258 | 0.9547 | 0.9768 | | 0.1067 | 2.56 | 2000 | 0.5306 | 0.8643 | 0.9272 | | 0.0837 | 3.2 | 2500 | 0.3883 | 0.9064 | 0.9509 | ### Framework versions - Transformers 4.25.1 - Pytorch 1.12.1 - Datasets 2.7.1 - Tokenizers 0.13.2
redevaaa/fin3
redevaaa
2022-12-11T03:59:45Z
16
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "dataset:fin", "license:cc-by-sa-4.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-12-11T03:32:16Z
--- license: cc-by-sa-4.0 tags: - generated_from_trainer datasets: - fin metrics: - precision - recall - f1 - accuracy model-index: - name: fin3 results: - task: name: Token Classification type: token-classification dataset: name: fin type: fin config: default split: train args: default metrics: - name: Precision type: precision value: 0.944 - name: Recall type: recall value: 0.9402390438247012 - name: F1 type: f1 value: 0.9421157684630739 - name: Accuracy type: accuracy value: 0.9921209540034072 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # fin3 This model is a fine-tuned version of [nlpaueb/sec-bert-base](https://huggingface.co/nlpaueb/sec-bert-base) on the fin dataset. It achieves the following results on the evaluation set: - Loss: 0.0748 - Precision: 0.944 - Recall: 0.9402 - F1: 0.9421 - Accuracy: 0.9921 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 129 | 0.0669 | 0.8821 | 0.9243 | 0.9027 | 0.9883 | | No log | 2.0 | 258 | 0.0568 | 0.9289 | 0.9363 | 0.9325 | 0.9913 | | No log | 3.0 | 387 | 0.0565 | 0.9141 | 0.9323 | 0.9231 | 0.9904 | | 0.0556 | 4.0 | 516 | 0.0617 | 0.9237 | 0.9163 | 0.92 | 0.9904 | | 0.0556 | 5.0 | 645 | 0.0658 | 0.9243 | 0.9243 | 0.9243 | 0.9904 | | 0.0556 | 6.0 | 774 | 0.0695 | 0.944 | 0.9402 | 0.9421 | 0.9921 | | 0.0556 | 7.0 | 903 | 0.0731 | 0.932 | 0.9283 | 0.9301 | 0.9917 | | 0.0016 | 8.0 | 1032 | 0.0750 | 0.9283 | 0.9283 | 0.9283 | 0.9917 | | 0.0016 | 9.0 | 1161 | 0.0737 | 0.944 | 0.9402 | 0.9421 | 0.9921 | | 0.0016 | 10.0 | 1290 | 0.0748 | 0.944 | 0.9402 | 0.9421 | 0.9921 | ### Framework versions - Transformers 4.25.1 - Pytorch 1.13.0+cu116 - Datasets 2.7.1 - Tokenizers 0.13.2
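The card lists token-classification metrics but no usage snippet; here is a hedged sketch of running the model through the transformers token-classification pipeline, with an invented financial sentence and the usual sub-word aggregation.

```python
# Hedged sketch: extract entities from a financial sentence with the fine-tuned model.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="redevaaa/fin3",
    aggregation_strategy="simple",  # merge word pieces into whole entity spans
)

print(ner("Barclays reported revenue of 5.2 billion pounds for the quarter."))
```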
muhtasham/small-mlm-imdb-target-imdb
muhtasham
2022-12-11T03:43:44Z
3
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "dataset:imdb", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-12-11T03:31:56Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imdb metrics: - accuracy - f1 model-index: - name: small-mlm-imdb-target-imdb results: - task: name: Text Classification type: text-classification dataset: name: imdb type: imdb config: plain_text split: train args: plain_text metrics: - name: Accuracy type: accuracy value: 0.91736 - name: F1 type: f1 value: 0.9568990695539701 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # small-mlm-imdb-target-imdb This model is a fine-tuned version of [muhtasham/small-mlm-imdb](https://huggingface.co/muhtasham/small-mlm-imdb) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 0.3145 - Accuracy: 0.9174 - F1: 0.9569 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - num_epochs: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.315 | 0.64 | 500 | 0.1711 | 0.9310 | 0.9642 | | 0.2248 | 1.28 | 1000 | 0.1385 | 0.9471 | 0.9728 | | 0.1824 | 1.92 | 1500 | 0.1044 | 0.9610 | 0.9801 | | 0.1326 | 2.56 | 2000 | 0.2382 | 0.9294 | 0.9634 | | 0.1056 | 3.2 | 2500 | 0.5074 | 0.8698 | 0.9304 | | 0.0804 | 3.84 | 3000 | 0.3145 | 0.9174 | 0.9569 | ### Framework versions - Transformers 4.25.1 - Pytorch 1.12.1 - Datasets 2.7.1 - Tokenizers 0.13.2
muhtasham/mini-mlm-imdb-target-imdb
muhtasham
2022-12-11T03:30:57Z
3
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "dataset:imdb", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-12-11T03:23:46Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imdb metrics: - accuracy - f1 model-index: - name: mini-mlm-imdb-target-imdb results: - task: name: Text Classification type: text-classification dataset: name: imdb type: imdb config: plain_text split: train args: plain_text metrics: - name: Accuracy type: accuracy value: 0.95016 - name: F1 type: f1 value: 0.9744431226155804 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mini-mlm-imdb-target-imdb This model is a fine-tuned version of [muhtasham/mini-mlm-imdb](https://huggingface.co/muhtasham/mini-mlm-imdb) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 0.1353 - Accuracy: 0.9502 - F1: 0.9744 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - num_epochs: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.3856 | 0.64 | 500 | 0.1902 | 0.9298 | 0.9636 | | 0.2794 | 1.28 | 1000 | 0.2200 | 0.9127 | 0.9544 | | 0.2369 | 1.92 | 1500 | 0.1269 | 0.9539 | 0.9764 | | 0.1963 | 2.56 | 2000 | 0.2422 | 0.9079 | 0.9517 | | 0.1765 | 3.2 | 2500 | 0.3789 | 0.8644 | 0.9273 | | 0.1486 | 3.84 | 3000 | 0.1353 | 0.9502 | 0.9744 | ### Framework versions - Transformers 4.25.1 - Pytorch 1.12.1 - Datasets 2.7.1 - Tokenizers 0.13.2
muhtasham/tiny-mlm-imdb-target-imdb
muhtasham
2022-12-11T03:22:48Z
4
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "dataset:imdb", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-12-11T03:18:08Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imdb metrics: - accuracy - f1 model-index: - name: tiny-mlm-imdb-target-imdb results: - task: name: Text Classification type: text-classification dataset: name: imdb type: imdb config: plain_text split: train args: plain_text metrics: - name: Accuracy type: accuracy value: 0.88952 - name: F1 type: f1 value: 0.9415301240526694 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # tiny-mlm-imdb-target-imdb This model is a fine-tuned version of [muhtasham/tiny-mlm-imdb](https://huggingface.co/muhtasham/tiny-mlm-imdb) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 0.2699 - Accuracy: 0.8895 - F1: 0.9415 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - num_epochs: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.5432 | 0.64 | 500 | 0.3567 | 0.8578 | 0.9235 | | 0.366 | 1.28 | 1000 | 0.3687 | 0.8414 | 0.9138 | | 0.32 | 1.92 | 1500 | 0.2648 | 0.8922 | 0.9430 | | 0.2868 | 2.56 | 2000 | 0.3868 | 0.8314 | 0.9079 | | 0.2671 | 3.2 | 2500 | 0.3092 | 0.8774 | 0.9347 | | 0.248 | 3.84 | 3000 | 0.2699 | 0.8895 | 0.9415 | ### Framework versions - Transformers 4.25.1 - Pytorch 1.12.1 - Datasets 2.7.1 - Tokenizers 0.13.2
muhtasham/medium-vanilla-target-imdb
muhtasham
2022-12-11T02:36:24Z
3
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "dataset:imdb", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-12-11T02:20:08Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imdb metrics: - accuracy - f1 model-index: - name: medium-vanilla-target-imdb results: - task: name: Text Classification type: text-classification dataset: name: imdb type: imdb config: plain_text split: train args: plain_text metrics: - name: Accuracy type: accuracy value: 0.8964 - name: F1 type: f1 value: 0.945370175068551 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # medium-vanilla-target-imdb This model is a fine-tuned version of [google/bert_uncased_L-8_H-512_A-8](https://huggingface.co/google/bert_uncased_L-8_H-512_A-8) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 0.4330 - Accuracy: 0.8964 - F1: 0.9454 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - num_epochs: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.3068 | 0.64 | 500 | 0.2373 | 0.9061 | 0.9507 | | 0.2143 | 1.28 | 1000 | 0.1204 | 0.9534 | 0.9761 | | 0.1655 | 1.92 | 1500 | 0.1557 | 0.942 | 0.9701 | | 0.1107 | 2.56 | 2000 | 0.2791 | 0.9268 | 0.9620 | | 0.0905 | 3.2 | 2500 | 0.4330 | 0.8964 | 0.9454 | ### Framework versions - Transformers 4.25.1 - Pytorch 1.12.1 - Datasets 2.7.1 - Tokenizers 0.13.2
Alex2135/ppo-LunarLander-v2
Alex2135
2022-12-11T02:21:23Z
1
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-12-11T01:51:07Z
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: LunarLander-v2
      type: LunarLander-v2
    metrics:
    - type: mean_reward
      value: 263.03 +/- 40.60
      name: mean_reward
      verified: false
---

# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).

## Usage (with Stable-baselines3)
TODO: Add your code

```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
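
For instance, a minimal sketch for loading and running the agent; the checkpoint filename inside the repo is an assumption and may differ:

```python
# Sketch: download the checkpoint from the Hub, load it into PPO, and roll out one episode.
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

checkpoint = load_from_hub(
    repo_id="Alex2135/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",  # assumed filename
)
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
obs = env.reset()
done = False
while not done:
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, done, info = env.step(action)
env.close()
```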
muhtasham/small-vanilla-target-imdb
muhtasham
2022-12-11T02:19:12Z
4
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "dataset:imdb", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-12-11T02:09:23Z
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: small-vanilla-target-imdb
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: imdb
      type: imdb
      config: plain_text
      split: train
      args: plain_text
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.81456
    - name: F1
      type: f1
      value: 0.8978044264174235
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# small-vanilla-target-imdb

This model is a fine-tuned version of [google/bert_uncased_L-4_H-512_A-8](https://huggingface.co/google/bert_uncased_L-4_H-512_A-8) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7710
- Accuracy: 0.8146
- F1: 0.8978

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 200

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1     |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.3417        | 0.64  | 500  | 0.1678          | 0.9286   | 0.9630 |
| 0.2401        | 1.28  | 1000 | 0.1262          | 0.9525   | 0.9757 |
| 0.1907        | 1.92  | 1500 | 0.2724          | 0.8963   | 0.9453 |
| 0.1397        | 2.56  | 2000 | 0.2378          | 0.9247   | 0.9609 |
| 0.11          | 3.2   | 2500 | 0.7710          | 0.8146   | 0.8978 |

### Framework versions

- Transformers 4.25.1
- Pytorch 1.12.1
- Datasets 2.7.1
- Tokenizers 0.13.2
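
### Reproducing the setup (sketch)

A rough sketch of how the hyperparameters listed above would map onto `TrainingArguments`; the actual training script is not part of the card, so dataset preprocessing and evaluation cadence are assumptions:

```python
# Sketch only: mirrors the values reported under "Training hyperparameters".
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

model_name = "google/bert_uncased_L-4_H-512_A-8"  # base model named in the card
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

imdb = load_dataset("imdb")
encoded = imdb.map(lambda batch: tokenizer(batch["text"], truncation=True), batched=True)

args = TrainingArguments(
    output_dir="small-vanilla-target-imdb",
    learning_rate=3e-05,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="constant",
    num_train_epochs=200,
    evaluation_strategy="steps",   # the table reports validation every 500 steps
    eval_steps=500,
)

trainer = Trainer(model=model, args=args, tokenizer=tokenizer,
                  train_dataset=encoded["train"], eval_dataset=encoded["test"])
# trainer.train()
```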
ScrappyCoco666/ppo-LunarLander-v2-5
ScrappyCoco666
2022-12-11T02:14:08Z
1
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-12-11T02:13:49Z
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: LunarLander-v2
      type: LunarLander-v2
    metrics:
    - type: mean_reward
      value: 302.61 +/- 18.97
      name: mean_reward
      verified: false
---

# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).

## Usage (with Stable-baselines3)
TODO: Add your code

```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
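
For instance, a minimal evaluation sketch for checking the reported mean reward; the checkpoint filename inside the repo is an assumption and may differ:

```python
# Sketch: load the checkpoint and re-evaluate it over a few episodes.
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

checkpoint = load_from_hub(
    repo_id="ScrappyCoco666/ppo-LunarLander-v2-5",
    filename="ppo-LunarLander-v2.zip",  # assumed filename
)
model = PPO.load(checkpoint)

eval_env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```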
redevaaa/fin1
redevaaa
2022-12-11T02:12:04Z
12
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "dataset:fin", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-12-11T01:38:21Z
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- fin
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: fin1
  results:
  - task:
      name: Token Classification
      type: token-classification
    dataset:
      name: fin
      type: fin
      config: default
      split: train
      args: default
    metrics:
    - name: Precision
      type: precision
      value: 0.8315412186379928
    - name: Recall
      type: recall
      value: 0.9243027888446215
    - name: F1
      type: f1
      value: 0.8754716981132076
    - name: Accuracy
      type: accuracy
      value: 0.985175455057234
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# fin1

This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the fin dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0778
- Precision: 0.8315
- Recall: 0.9243
- F1: 0.8755
- Accuracy: 0.9852

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log        | 1.0   | 129  | 0.0860          | 0.8535    | 0.9283 | 0.8893 | 0.9904   |
| No log        | 2.0   | 258  | 0.1513          | 0.7993    | 0.9203 | 0.8556 | 0.9799   |
| No log        | 3.0   | 387  | 0.0977          | 0.8221    | 0.9203 | 0.8684 | 0.9831   |
| 0.0017        | 4.0   | 516  | 0.0783          | 0.8286    | 0.9243 | 0.8738 | 0.9848   |
| 0.0017        | 5.0   | 645  | 0.0778          | 0.8315    | 0.9243 | 0.8755 | 0.9852   |

### Framework versions

- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.7.1
- Tokenizers 0.13.2
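
### Example usage

A minimal inference sketch, assuming the checkpoint works with the standard `token-classification` pipeline; the entity label set comes from the `fin` dataset and is not listed in the card:

```python
# Sketch: run the token-classification pipeline on a financial sentence.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="redevaaa/fin1",            # repo id from this entry
    aggregation_strategy="simple",    # merge sub-word tokens into entity spans
)

print(ner("Apple Inc. reported quarterly revenue of $90.1 billion."))
```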
sd-concepts-library/pokemon-rgby-sprite
sd-concepts-library
2022-12-11T02:10:06Z
0
7
null
[ "license:mit", "region:us" ]
null
2022-12-11T02:02:35Z
---
license: mit
---

### Pokemon RGBY sprite on Stable Diffusion

Pokémon Red/Green/Blue/Yellow battle sprite concept (GameBoy 56x56 upscaled to 512x512)

This is the `<pkmn-rgby>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).

Here is the new concept you will be able to use as a `style`:

![<pkmn-rgby> 0](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/0.jpeg) ![<pkmn-rgby> 1](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/1.jpeg) ![<pkmn-rgby> 2](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/2.jpeg) ![<pkmn-rgby> 3](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/3.jpeg) ![<pkmn-rgby> 4](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/4.jpeg) ![<pkmn-rgby> 5](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/5.jpeg) ![<pkmn-rgby> 6](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/6.jpeg) ![<pkmn-rgby> 7](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/7.jpeg) ![<pkmn-rgby> 8](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/8.jpeg) ![<pkmn-rgby> 9](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/9.jpeg) ![<pkmn-rgby> 10](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/10.jpeg) ![<pkmn-rgby> 11](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/11.jpeg) ![<pkmn-rgby> 12](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/12.jpeg) ![<pkmn-rgby> 13](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/13.jpeg) ![<pkmn-rgby> 14](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/14.jpeg) ![<pkmn-rgby> 15](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/15.jpeg) ![<pkmn-rgby> 16](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/16.jpeg) ![<pkmn-rgby> 17](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/17.jpeg) ![<pkmn-rgby> 18](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/18.jpeg) ![<pkmn-rgby> 19](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/19.jpeg) ![<pkmn-rgby> 20](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/20.jpeg) ![<pkmn-rgby> 21](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/21.jpeg) ![<pkmn-rgby> 22](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/22.jpeg) ![<pkmn-rgby> 23](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/23.jpeg) ![<pkmn-rgby> 
24](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/24.jpeg) ![<pkmn-rgby> 25](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/25.jpeg) ![<pkmn-rgby> 26](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/26.jpeg) ![<pkmn-rgby> 27](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/27.jpeg) ![<pkmn-rgby> 28](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/28.jpeg) ![<pkmn-rgby> 29](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/29.jpeg) ![<pkmn-rgby> 30](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/30.jpeg) ![<pkmn-rgby> 31](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/31.jpeg) ![<pkmn-rgby> 32](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/32.jpeg) ![<pkmn-rgby> 33](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/33.jpeg) ![<pkmn-rgby> 34](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/34.jpeg) ![<pkmn-rgby> 35](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/35.jpeg) ![<pkmn-rgby> 36](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/36.jpeg) ![<pkmn-rgby> 37](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/37.jpeg) ![<pkmn-rgby> 38](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/38.jpeg) ![<pkmn-rgby> 39](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/39.jpeg) ![<pkmn-rgby> 40](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/40.jpeg) ![<pkmn-rgby> 41](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/41.jpeg) ![<pkmn-rgby> 42](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/42.jpeg) ![<pkmn-rgby> 43](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/43.jpeg) ![<pkmn-rgby> 44](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/44.jpeg) ![<pkmn-rgby> 45](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/45.jpeg) ![<pkmn-rgby> 46](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/46.jpeg) ![<pkmn-rgby> 47](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/47.jpeg) ![<pkmn-rgby> 48](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/48.jpeg) ![<pkmn-rgby> 49](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/49.jpeg) ![<pkmn-rgby> 50](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/50.jpeg) ![<pkmn-rgby> 51](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/51.jpeg) ![<pkmn-rgby> 52](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/52.jpeg) ![<pkmn-rgby> 53](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/53.jpeg) ![<pkmn-rgby> 
54](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/54.jpeg) ![<pkmn-rgby> 55](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/55.jpeg) ![<pkmn-rgby> 56](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/56.jpeg) ![<pkmn-rgby> 57](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/57.jpeg) ![<pkmn-rgby> 58](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/58.jpeg) ![<pkmn-rgby> 59](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/59.jpeg) ![<pkmn-rgby> 60](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/60.jpeg) ![<pkmn-rgby> 61](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/61.jpeg) ![<pkmn-rgby> 62](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/62.jpeg) ![<pkmn-rgby> 63](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/63.jpeg) ![<pkmn-rgby> 64](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/64.jpeg) ![<pkmn-rgby> 65](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/65.jpeg) ![<pkmn-rgby> 66](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/66.jpeg) ![<pkmn-rgby> 67](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/67.jpeg) ![<pkmn-rgby> 68](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/68.jpeg) ![<pkmn-rgby> 69](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/69.jpeg) ![<pkmn-rgby> 70](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/70.jpeg) ![<pkmn-rgby> 71](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/71.jpeg) ![<pkmn-rgby> 72](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/72.jpeg) ![<pkmn-rgby> 73](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/73.jpeg) ![<pkmn-rgby> 74](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/74.jpeg) ![<pkmn-rgby> 75](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/75.jpeg) ![<pkmn-rgby> 76](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/76.jpeg) ![<pkmn-rgby> 77](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/77.jpeg) ![<pkmn-rgby> 78](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/78.jpeg) ![<pkmn-rgby> 79](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/79.jpeg) ![<pkmn-rgby> 80](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/80.jpeg) ![<pkmn-rgby> 81](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/81.jpeg) ![<pkmn-rgby> 82](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/82.jpeg) ![<pkmn-rgby> 83](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/83.jpeg) ![<pkmn-rgby> 
84](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/84.jpeg) ![<pkmn-rgby> 85](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/85.jpeg) ![<pkmn-rgby> 86](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/86.jpeg) ![<pkmn-rgby> 87](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/87.jpeg) ![<pkmn-rgby> 88](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/88.jpeg) ![<pkmn-rgby> 89](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/89.jpeg) ![<pkmn-rgby> 90](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/90.jpeg) ![<pkmn-rgby> 91](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/91.jpeg) ![<pkmn-rgby> 92](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/92.jpeg) ![<pkmn-rgby> 93](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/93.jpeg) ![<pkmn-rgby> 94](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/94.jpeg) ![<pkmn-rgby> 95](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/95.jpeg) ![<pkmn-rgby> 96](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/96.jpeg) ![<pkmn-rgby> 97](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/97.jpeg) ![<pkmn-rgby> 98](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/98.jpeg) ![<pkmn-rgby> 99](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/99.jpeg) ![<pkmn-rgby> 100](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/100.jpeg) ![<pkmn-rgby> 101](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/101.jpeg) ![<pkmn-rgby> 102](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/102.jpeg) ![<pkmn-rgby> 103](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/103.jpeg) ![<pkmn-rgby> 104](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/104.jpeg) ![<pkmn-rgby> 105](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/105.jpeg) ![<pkmn-rgby> 106](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/106.jpeg) ![<pkmn-rgby> 107](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/107.jpeg) ![<pkmn-rgby> 108](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/108.jpeg) ![<pkmn-rgby> 109](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/109.jpeg) ![<pkmn-rgby> 110](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/110.jpeg) ![<pkmn-rgby> 111](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/111.jpeg) ![<pkmn-rgby> 112](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/112.jpeg) ![<pkmn-rgby> 113](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/113.jpeg) 
![<pkmn-rgby> 114](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/114.jpeg) ![<pkmn-rgby> 115](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/115.jpeg) ![<pkmn-rgby> 116](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/116.jpeg) ![<pkmn-rgby> 117](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/117.jpeg) ![<pkmn-rgby> 118](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/118.jpeg) ![<pkmn-rgby> 119](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/119.jpeg) ![<pkmn-rgby> 120](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/120.jpeg) ![<pkmn-rgby> 121](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/121.jpeg) ![<pkmn-rgby> 122](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/122.jpeg) ![<pkmn-rgby> 123](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/123.jpeg) ![<pkmn-rgby> 124](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/124.jpeg) ![<pkmn-rgby> 125](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/125.jpeg) ![<pkmn-rgby> 126](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/126.jpeg) ![<pkmn-rgby> 127](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/127.jpeg) ![<pkmn-rgby> 128](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/128.jpeg) ![<pkmn-rgby> 129](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/129.jpeg) ![<pkmn-rgby> 130](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/130.jpeg) ![<pkmn-rgby> 131](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/131.jpeg) ![<pkmn-rgby> 132](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/132.jpeg) ![<pkmn-rgby> 133](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/133.jpeg) ![<pkmn-rgby> 134](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/134.jpeg) ![<pkmn-rgby> 135](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/135.jpeg) ![<pkmn-rgby> 136](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/136.jpeg) ![<pkmn-rgby> 137](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/137.jpeg) ![<pkmn-rgby> 138](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/138.jpeg) ![<pkmn-rgby> 139](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/139.jpeg) ![<pkmn-rgby> 140](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/140.jpeg) ![<pkmn-rgby> 141](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/141.jpeg) ![<pkmn-rgby> 142](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/142.jpeg) ![<pkmn-rgby> 
143](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/143.jpeg) ![<pkmn-rgby> 144](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/144.jpeg) ![<pkmn-rgby> 145](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/145.jpeg) ![<pkmn-rgby> 146](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/146.jpeg) ![<pkmn-rgby> 147](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/147.jpeg) ![<pkmn-rgby> 148](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/148.jpeg) ![<pkmn-rgby> 149](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/149.jpeg) ![<pkmn-rgby> 150](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/150.jpeg) ![<pkmn-rgby> 151](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/151.jpeg) ![<pkmn-rgby> 152](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/152.jpeg) ![<pkmn-rgby> 153](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/153.jpeg) ![<pkmn-rgby> 154](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/154.jpeg) ![<pkmn-rgby> 155](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/155.jpeg) ![<pkmn-rgby> 156](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/156.jpeg) ![<pkmn-rgby> 157](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/157.jpeg) ![<pkmn-rgby> 158](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/158.jpeg) ![<pkmn-rgby> 159](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/159.jpeg) ![<pkmn-rgby> 160](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/160.jpeg) ![<pkmn-rgby> 161](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/161.jpeg) ![<pkmn-rgby> 162](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/162.jpeg) ![<pkmn-rgby> 163](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/163.jpeg) ![<pkmn-rgby> 164](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/164.jpeg) ![<pkmn-rgby> 165](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/165.jpeg) ![<pkmn-rgby> 166](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/166.jpeg) ![<pkmn-rgby> 167](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/167.jpeg) ![<pkmn-rgby> 168](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/168.jpeg) ![<pkmn-rgby> 169](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/169.jpeg) ![<pkmn-rgby> 170](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/170.jpeg) ![<pkmn-rgby> 171](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/171.jpeg) ![<pkmn-rgby> 
172](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/172.jpeg) ![<pkmn-rgby> 173](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/173.jpeg) ![<pkmn-rgby> 174](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/174.jpeg) ![<pkmn-rgby> 175](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/175.jpeg) ![<pkmn-rgby> 176](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/176.jpeg) ![<pkmn-rgby> 177](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/177.jpeg) ![<pkmn-rgby> 178](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/178.jpeg) ![<pkmn-rgby> 179](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/179.jpeg) ![<pkmn-rgby> 180](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/180.jpeg) ![<pkmn-rgby> 181](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/181.jpeg) ![<pkmn-rgby> 182](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/182.jpeg) ![<pkmn-rgby> 183](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/183.jpeg) ![<pkmn-rgby> 184](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/184.jpeg) ![<pkmn-rgby> 185](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/185.jpeg) ![<pkmn-rgby> 186](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/186.jpeg) ![<pkmn-rgby> 187](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/187.jpeg) ![<pkmn-rgby> 188](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/188.jpeg) ![<pkmn-rgby> 189](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/189.jpeg) ![<pkmn-rgby> 190](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/190.jpeg) ![<pkmn-rgby> 191](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/191.jpeg) ![<pkmn-rgby> 192](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/192.jpeg) ![<pkmn-rgby> 193](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/193.jpeg) ![<pkmn-rgby> 194](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/194.jpeg) ![<pkmn-rgby> 195](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/195.jpeg) ![<pkmn-rgby> 196](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/196.jpeg) ![<pkmn-rgby> 197](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/197.jpeg) ![<pkmn-rgby> 198](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/198.jpeg) ![<pkmn-rgby> 199](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/199.jpeg) ![<pkmn-rgby> 200](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/200.jpeg) ![<pkmn-rgby> 
201](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/201.jpeg) ![<pkmn-rgby> 202](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/202.jpeg) ![<pkmn-rgby> 203](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/203.jpeg) ![<pkmn-rgby> 204](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/204.jpeg) ![<pkmn-rgby> 205](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/205.jpeg) ![<pkmn-rgby> 206](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/206.jpeg) ![<pkmn-rgby> 207](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/207.jpeg) ![<pkmn-rgby> 208](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/208.jpeg) ![<pkmn-rgby> 209](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/209.jpeg) ![<pkmn-rgby> 210](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/210.jpeg) ![<pkmn-rgby> 211](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/211.jpeg) ![<pkmn-rgby> 212](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/212.jpeg) ![<pkmn-rgby> 213](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/213.jpeg) ![<pkmn-rgby> 214](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/214.jpeg) ![<pkmn-rgby> 215](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/215.jpeg) ![<pkmn-rgby> 216](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/216.jpeg) ![<pkmn-rgby> 217](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/217.jpeg) ![<pkmn-rgby> 218](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/218.jpeg) ![<pkmn-rgby> 219](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/219.jpeg) ![<pkmn-rgby> 220](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/220.jpeg) ![<pkmn-rgby> 221](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/221.jpeg) ![<pkmn-rgby> 222](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/222.jpeg) ![<pkmn-rgby> 223](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/223.jpeg) ![<pkmn-rgby> 224](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/224.jpeg) ![<pkmn-rgby> 225](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/225.jpeg) ![<pkmn-rgby> 226](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/226.jpeg) ![<pkmn-rgby> 227](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/227.jpeg) ![<pkmn-rgby> 228](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/228.jpeg) ![<pkmn-rgby> 229](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/229.jpeg) ![<pkmn-rgby> 
230](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/230.jpeg) ![<pkmn-rgby> 231](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/231.jpeg) ![<pkmn-rgby> 232](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/232.jpeg) ![<pkmn-rgby> 233](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/233.jpeg) ![<pkmn-rgby> 234](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/234.jpeg) ![<pkmn-rgby> 235](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/235.jpeg) ![<pkmn-rgby> 236](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/236.jpeg) ![<pkmn-rgby> 237](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/237.jpeg) ![<pkmn-rgby> 238](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/238.jpeg) ![<pkmn-rgby> 239](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/239.jpeg) ![<pkmn-rgby> 240](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/240.jpeg) ![<pkmn-rgby> 241](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/241.jpeg) ![<pkmn-rgby> 242](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/242.jpeg) ![<pkmn-rgby> 243](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/243.jpeg) ![<pkmn-rgby> 244](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/244.jpeg) ![<pkmn-rgby> 245](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/245.jpeg) ![<pkmn-rgby> 246](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/246.jpeg) ![<pkmn-rgby> 247](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/247.jpeg) ![<pkmn-rgby> 248](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/248.jpeg) ![<pkmn-rgby> 249](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/249.jpeg) ![<pkmn-rgby> 250](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/250.jpeg) ![<pkmn-rgby> 251](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/251.jpeg) ![<pkmn-rgby> 252](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/252.jpeg) ![<pkmn-rgby> 253](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/253.jpeg) ![<pkmn-rgby> 254](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/254.jpeg) ![<pkmn-rgby> 255](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/255.jpeg) ![<pkmn-rgby> 256](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/256.jpeg) ![<pkmn-rgby> 257](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/257.jpeg) ![<pkmn-rgby> 258](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/258.jpeg) ![<pkmn-rgby> 
259](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/259.jpeg) ![<pkmn-rgby> 260](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/260.jpeg) ![<pkmn-rgby> 261](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/261.jpeg) ![<pkmn-rgby> 262](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/262.jpeg) ![<pkmn-rgby> 263](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/263.jpeg) ![<pkmn-rgby> 264](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/264.jpeg) ![<pkmn-rgby> 265](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/265.jpeg) ![<pkmn-rgby> 266](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/266.jpeg) ![<pkmn-rgby> 267](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/267.jpeg) ![<pkmn-rgby> 268](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/268.jpeg) ![<pkmn-rgby> 269](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/269.jpeg) ![<pkmn-rgby> 270](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/270.jpeg) ![<pkmn-rgby> 271](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/271.jpeg) ![<pkmn-rgby> 272](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/272.jpeg) ![<pkmn-rgby> 273](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/273.jpeg) ![<pkmn-rgby> 274](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/274.jpeg) ![<pkmn-rgby> 275](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/275.jpeg) ![<pkmn-rgby> 276](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/276.jpeg) ![<pkmn-rgby> 277](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/277.jpeg) ![<pkmn-rgby> 278](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/278.jpeg) ![<pkmn-rgby> 279](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/279.jpeg) ![<pkmn-rgby> 280](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/280.jpeg) ![<pkmn-rgby> 281](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/281.jpeg) ![<pkmn-rgby> 282](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/282.jpeg) ![<pkmn-rgby> 283](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/283.jpeg) ![<pkmn-rgby> 284](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/284.jpeg) ![<pkmn-rgby> 285](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/285.jpeg) ![<pkmn-rgby> 286](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/286.jpeg) ![<pkmn-rgby> 287](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/287.jpeg) ![<pkmn-rgby> 
288](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/288.jpeg) ![<pkmn-rgby> 289](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/289.jpeg) ![<pkmn-rgby> 290](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/290.jpeg) ![<pkmn-rgby> 291](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/291.jpeg) ![<pkmn-rgby> 292](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/292.jpeg) ![<pkmn-rgby> 293](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/293.jpeg) ![<pkmn-rgby> 294](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/294.jpeg) ![<pkmn-rgby> 295](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/295.jpeg) ![<pkmn-rgby> 296](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/296.jpeg) ![<pkmn-rgby> 297](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/297.jpeg) ![<pkmn-rgby> 298](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/298.jpeg) ![<pkmn-rgby> 299](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/299.jpeg) ![<pkmn-rgby> 300](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/300.jpeg) ![<pkmn-rgby> 301](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/301.jpeg) ![<pkmn-rgby> 302](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/302.jpeg) ![<pkmn-rgby> 303](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/303.jpeg) ![<pkmn-rgby> 304](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/304.jpeg) ![<pkmn-rgby> 305](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/305.jpeg) ![<pkmn-rgby> 306](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/306.jpeg) ![<pkmn-rgby> 307](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/307.jpeg) ![<pkmn-rgby> 308](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/308.jpeg) ![<pkmn-rgby> 309](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/309.jpeg) ![<pkmn-rgby> 310](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/310.jpeg) ![<pkmn-rgby> 311](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/311.jpeg) ![<pkmn-rgby> 312](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/312.jpeg) ![<pkmn-rgby> 313](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/313.jpeg) ![<pkmn-rgby> 314](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/314.jpeg) ![<pkmn-rgby> 315](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/315.jpeg) ![<pkmn-rgby> 316](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/316.jpeg) ![<pkmn-rgby> 
317](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/317.jpeg) ![<pkmn-rgby> 318](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/318.jpeg) ![<pkmn-rgby> 319](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/319.jpeg) ![<pkmn-rgby> 320](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/320.jpeg) ![<pkmn-rgby> 321](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/321.jpeg) ![<pkmn-rgby> 322](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/322.jpeg) ![<pkmn-rgby> 323](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/323.jpeg) ![<pkmn-rgby> 324](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/324.jpeg) ![<pkmn-rgby> 325](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/325.jpeg) ![<pkmn-rgby> 326](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/326.jpeg) ![<pkmn-rgby> 327](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/327.jpeg) ![<pkmn-rgby> 328](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/328.jpeg) ![<pkmn-rgby> 329](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/329.jpeg) ![<pkmn-rgby> 330](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/330.jpeg) ![<pkmn-rgby> 331](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/331.jpeg) ![<pkmn-rgby> 332](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/332.jpeg) ![<pkmn-rgby> 333](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/333.jpeg) ![<pkmn-rgby> 334](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/334.jpeg) ![<pkmn-rgby> 335](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/335.jpeg) ![<pkmn-rgby> 336](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/336.jpeg) ![<pkmn-rgby> 337](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/337.jpeg) ![<pkmn-rgby> 338](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/338.jpeg) ![<pkmn-rgby> 339](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/339.jpeg) ![<pkmn-rgby> 340](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/340.jpeg) ![<pkmn-rgby> 341](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/341.jpeg) ![<pkmn-rgby> 342](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/342.jpeg) ![<pkmn-rgby> 343](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/343.jpeg) ![<pkmn-rgby> 344](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/344.jpeg) ![<pkmn-rgby> 345](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/345.jpeg) ![<pkmn-rgby> 
346](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/346.jpeg) ![<pkmn-rgby> 347](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/347.jpeg) ![<pkmn-rgby> 348](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/348.jpeg) ![<pkmn-rgby> 349](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/349.jpeg) ![<pkmn-rgby> 350](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/350.jpeg) ![<pkmn-rgby> 351](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/351.jpeg) ![<pkmn-rgby> 352](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/352.jpeg) ![<pkmn-rgby> 353](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/353.jpeg) ![<pkmn-rgby> 354](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/354.jpeg) ![<pkmn-rgby> 355](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/355.jpeg) ![<pkmn-rgby> 356](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/356.jpeg) ![<pkmn-rgby> 357](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/357.jpeg) ![<pkmn-rgby> 358](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/358.jpeg) ![<pkmn-rgby> 359](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/359.jpeg) ![<pkmn-rgby> 360](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/360.jpeg) ![<pkmn-rgby> 361](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/361.jpeg) ![<pkmn-rgby> 362](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/362.jpeg) ![<pkmn-rgby> 363](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/363.jpeg) ![<pkmn-rgby> 364](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/364.jpeg) ![<pkmn-rgby> 365](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/365.jpeg) ![<pkmn-rgby> 366](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/366.jpeg) ![<pkmn-rgby> 367](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/367.jpeg) ![<pkmn-rgby> 368](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/368.jpeg) ![<pkmn-rgby> 369](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/369.jpeg) ![<pkmn-rgby> 370](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/370.jpeg) ![<pkmn-rgby> 371](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/371.jpeg) ![<pkmn-rgby> 372](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/372.jpeg) ![<pkmn-rgby> 373](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/373.jpeg) ![<pkmn-rgby> 374](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/374.jpeg) ![<pkmn-rgby> 
375](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/375.jpeg) ![<pkmn-rgby> 376](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/376.jpeg) ![<pkmn-rgby> 377](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/377.jpeg) ![<pkmn-rgby> 378](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/378.jpeg) ![<pkmn-rgby> 379](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/379.jpeg) ![<pkmn-rgby> 380](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/380.jpeg) ![<pkmn-rgby> 381](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/381.jpeg) ![<pkmn-rgby> 382](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/382.jpeg) ![<pkmn-rgby> 383](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/383.jpeg) ![<pkmn-rgby> 384](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/384.jpeg) ![<pkmn-rgby> 385](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/385.jpeg) ![<pkmn-rgby> 386](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/386.jpeg) ![<pkmn-rgby> 387](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/387.jpeg) ![<pkmn-rgby> 388](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/388.jpeg) ![<pkmn-rgby> 389](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/389.jpeg) ![<pkmn-rgby> 390](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/390.jpeg) ![<pkmn-rgby> 391](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/391.jpeg) ![<pkmn-rgby> 392](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/392.jpeg) ![<pkmn-rgby> 393](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/393.jpeg) ![<pkmn-rgby> 394](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/394.jpeg) ![<pkmn-rgby> 395](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/395.jpeg) ![<pkmn-rgby> 396](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/396.jpeg) ![<pkmn-rgby> 397](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/397.jpeg) ![<pkmn-rgby> 398](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/398.jpeg) ![<pkmn-rgby> 399](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/399.jpeg) ![<pkmn-rgby> 400](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/400.jpeg) ![<pkmn-rgby> 401](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/401.jpeg) ![<pkmn-rgby> 402](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/402.jpeg) ![<pkmn-rgby> 403](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/403.jpeg) ![<pkmn-rgby> 
404](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/404.jpeg) ![<pkmn-rgby> 405](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/405.jpeg) ![<pkmn-rgby> 406](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/406.jpeg) ![<pkmn-rgby> 407](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/407.jpeg) ![<pkmn-rgby> 408](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/408.jpeg) ![<pkmn-rgby> 409](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/409.jpeg) ![<pkmn-rgby> 410](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/410.jpeg) ![<pkmn-rgby> 411](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/411.jpeg) ![<pkmn-rgby> 412](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/412.jpeg) ![<pkmn-rgby> 413](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/413.jpeg) ![<pkmn-rgby> 414](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/414.jpeg) ![<pkmn-rgby> 415](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/415.jpeg) ![<pkmn-rgby> 416](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/416.jpeg) ![<pkmn-rgby> 417](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/417.jpeg) ![<pkmn-rgby> 418](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/418.jpeg) ![<pkmn-rgby> 419](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/419.jpeg) ![<pkmn-rgby> 420](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/420.jpeg) ![<pkmn-rgby> 421](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/421.jpeg) ![<pkmn-rgby> 422](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/422.jpeg) ![<pkmn-rgby> 423](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/423.jpeg) ![<pkmn-rgby> 424](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/424.jpeg) ![<pkmn-rgby> 425](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/425.jpeg) ![<pkmn-rgby> 426](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/426.jpeg) ![<pkmn-rgby> 427](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/427.jpeg) ![<pkmn-rgby> 428](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/428.jpeg) ![<pkmn-rgby> 429](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/429.jpeg) ![<pkmn-rgby> 430](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/430.jpeg) ![<pkmn-rgby> 431](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/431.jpeg) ![<pkmn-rgby> 432](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/432.jpeg) ![<pkmn-rgby> 
433](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/433.jpeg) ![<pkmn-rgby> 434](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/434.jpeg) ![<pkmn-rgby> 435](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/435.jpeg) ![<pkmn-rgby> 436](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/436.jpeg) ![<pkmn-rgby> 437](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/437.jpeg) ![<pkmn-rgby> 438](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/438.jpeg) ![<pkmn-rgby> 439](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/439.jpeg) ![<pkmn-rgby> 440](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/440.jpeg) ![<pkmn-rgby> 441](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/441.jpeg) ![<pkmn-rgby> 442](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/442.jpeg) ![<pkmn-rgby> 443](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/443.jpeg) ![<pkmn-rgby> 444](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/444.jpeg) ![<pkmn-rgby> 445](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/445.jpeg) ![<pkmn-rgby> 446](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/446.jpeg) ![<pkmn-rgby> 447](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/447.jpeg) ![<pkmn-rgby> 448](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/448.jpeg) ![<pkmn-rgby> 449](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/449.jpeg) ![<pkmn-rgby> 450](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/450.jpeg) ![<pkmn-rgby> 451](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/451.jpeg) ![<pkmn-rgby> 452](https://huggingface.co/sd-concepts-library/pokemon-rgby-sprite/resolve/main/concept_images/452.jpeg)
muhtasham/mini-vanilla-target-imdb
muhtasham
2022-12-11T02:08:26Z
3
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "dataset:imdb", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-12-11T01:57:37Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imdb metrics: - accuracy - f1 model-index: - name: mini-vanilla-target-imdb results: - task: name: Text Classification type: text-classification dataset: name: imdb type: imdb config: plain_text split: train args: plain_text metrics: - name: Accuracy type: accuracy value: 0.87528 - name: F1 type: f1 value: 0.9334925984386332 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mini-vanilla-target-imdb This model is a fine-tuned version of [google/bert_uncased_L-4_H-256_A-4](https://huggingface.co/google/bert_uncased_L-4_H-256_A-4) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 0.4773 - Accuracy: 0.8753 - F1: 0.9335 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - num_epochs: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.4272 | 0.64 | 500 | 0.2066 | 0.92 | 0.9583 | | 0.299 | 1.28 | 1000 | 0.2608 | 0.8906 | 0.9422 | | 0.2533 | 1.92 | 1500 | 0.1706 | 0.9337 | 0.9657 | | 0.2126 | 2.56 | 2000 | 0.3601 | 0.8576 | 0.9233 | | 0.1913 | 3.2 | 2500 | 0.3955 | 0.8594 | 0.9244 | | 0.1541 | 3.84 | 3000 | 0.1432 | 0.9484 | 0.9735 | | 0.1432 | 4.48 | 3500 | 0.2027 | 0.9346 | 0.9662 | | 0.1256 | 5.12 | 4000 | 0.3797 | 0.8898 | 0.9417 | | 0.1026 | 5.75 | 4500 | 0.4773 | 0.8753 | 0.9335 | ### Framework versions - Transformers 4.25.1 - Pytorch 1.12.1 - Datasets 2.7.1 - Tokenizers 0.13.2
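As a quick usage illustration for this checkpoint, a minimal inference sketch assuming the standard `transformers` pipeline API; the example sentence is arbitrary, and the label names come from the config shipped with the model rather than from this card.

```python
from transformers import pipeline

# Sentiment classification with the fine-tuned mini BERT (4 layers, hidden size 256).
classifier = pipeline("text-classification", model="muhtasham/mini-vanilla-target-imdb")
print(classifier("A surprisingly moving film with a terrific lead performance."))
```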
muhtasham/tiny-vanilla-target-imdb
muhtasham
2022-12-11T01:56:41Z
4
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "dataset:imdb", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-12-11T01:49:45Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imdb metrics: - accuracy - f1 model-index: - name: tiny-vanilla-target-imdb results: - task: name: Text Classification type: text-classification dataset: name: imdb type: imdb config: plain_text split: train args: plain_text metrics: - name: Accuracy type: accuracy value: 0.83488 - name: F1 type: f1 value: 0.9100104638995464 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # tiny-vanilla-target-imdb This model is a fine-tuned version of [google/bert_uncased_L-2_H-128_A-2](https://huggingface.co/google/bert_uncased_L-2_H-128_A-2) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 0.4589 - Accuracy: 0.8349 - F1: 0.9100 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - num_epochs: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.5912 | 0.64 | 500 | 0.4160 | 0.8295 | 0.9068 | | 0.3949 | 1.28 | 1000 | 0.4095 | 0.8228 | 0.9028 | | 0.3386 | 1.92 | 1500 | 0.2948 | 0.8804 | 0.9364 | | 0.2993 | 2.56 | 2000 | 0.4798 | 0.7868 | 0.8807 | | 0.2791 | 3.2 | 2500 | 0.4555 | 0.8205 | 0.9014 | | 0.2585 | 3.84 | 3000 | 0.2815 | 0.8859 | 0.9395 | | 0.2371 | 4.48 | 3500 | 0.4446 | 0.8316 | 0.9081 | | 0.2189 | 5.12 | 4000 | 0.6102 | 0.7693 | 0.8696 | | 0.1989 | 5.75 | 4500 | 0.4589 | 0.8349 | 0.9100 | ### Framework versions - Transformers 4.25.1 - Pytorch 1.12.1 - Datasets 2.7.1 - Tokenizers 0.13.2
jlondonobo/whisper-large-v2-pt
jlondonobo
2022-12-11T01:36:51Z
7
11
transformers
[ "transformers", "pytorch", "tensorboard", "whisper", "automatic-speech-recognition", "whisper-event", "generated_from_trainer", "pt", "dataset:mozilla-foundation/common_voice_11_0", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-12-09T01:47:30Z
--- language: - pt license: apache-2.0 tags: - whisper-event - generated_from_trainer datasets: - mozilla-foundation/common_voice_11_0 metrics: - wer model-index: - name: Whisper Large v2 Portuguese results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: mozilla-foundation/common_voice_11_0 pt type: mozilla-foundation/common_voice_11_0 config: pt split: test args: pt metrics: - name: Wer type: wer value: 5.590020342630419 --- # Whisper Large V2 Portuguese 🇧🇷🇵🇹 Welcome to **whisper large-v2** for Portuguese transcription 👋🏻 Transcribe Portuguese audio to text with a word error rate of 5.59% on Common Voice 11.0, the lowest among the comparable models listed below. - Loss: 0.282 - Wer: 5.590 This model is a fine-tuned version of [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) on the [mozilla-foundation/common_voice_11_0](https://huggingface.co/datasets/mozilla-foundation/common_voice_11_0) dataset. If you want a lighter model, you may be interested in [jlondonobo/whisper-medium-pt](https://huggingface.co/jlondonobo/whisper-medium-pt). It runs faster at inference time at the cost of roughly one WER point. ### Comparable models Reported **WER** is based on the test split of Common Voice 11.0. | Model | WER | # Parameters | |--------------------------------------------------|:--------:|:------------:| | [jlondonobo/whisper-large-v2-pt](https://huggingface.co/jlondonobo/whisper-large-v2-pt) | **5.590** 🤗 | 1550M | | [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) | 6.300 | 1550M | | [jlondonobo/whisper-medium-pt](https://huggingface.co/jlondonobo/whisper-medium-pt) | 6.579 | 769M | | [jonatasgrosman/wav2vec2-large-xlsr-53-portuguese](https://huggingface.co/jonatasgrosman/wav2vec2-large-xlsr-53-portuguese) | 11.310 | 317M | | [Edresson/wav2vec2-large-xlsr-coraa-portuguese](https://huggingface.co/Edresson/wav2vec2-large-xlsr-coraa-portuguese) | 20.080 | 317M | ### Training hyperparameters We used the following hyperparameters for training: - `learning_rate`: 1e-05 - `train_batch_size`: 16 - `eval_batch_size`: 8 - `seed`: 42 - `gradient_accumulation_steps`: 2 - `total_train_batch_size`: 32 - `optimizer`: Adam with betas=(0.9,0.999) and epsilon=1e-08 - `lr_scheduler_type`: linear - `lr_scheduler_warmup_steps`: 500 - `training_steps`: 5000 - `mixed_precision_training`: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.0828 | 1.09 | 1000 | 0.1868 | 6.778 | | 0.0241 | 3.07 | 2000 | 0.2057 | 6.109 | | 0.0084 | 5.06 | 3000 | 0.2367 | 6.029 | | 0.0015 | 7.04 | 4000 | 0.2469 | 5.709 | | 0.0009 | 9.02 | 5000 | 0.2821 | 5.590 🤗| ### Framework versions - Transformers 4.26.0.dev0 - Pytorch 1.13.0+cu117 - Datasets 2.7.1.dev0 - Tokenizers 0.13.2
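As a quick usage illustration for this checkpoint, a minimal transcription sketch assuming the standard `transformers` speech-recognition pipeline; `audio.mp3` is a placeholder path for a local recording.

```python
from transformers import pipeline

# Long-form Portuguese transcription with 30-second chunking; "audio.mp3" is a placeholder path.
transcriber = pipeline(
    "automatic-speech-recognition",
    model="jlondonobo/whisper-large-v2-pt",
    chunk_length_s=30,
)
print(transcriber("audio.mp3")["text"])
```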
eublefar/bigbird-dialogue-score
eublefar
2022-12-11T01:18:15Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "big_bird", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-12-10T13:26:00Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: bigbird-dialogue-score results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bigbird-dialogue-score This model is a fine-tuned version of [google/bigbird-roberta-large](https://huggingface.co/google/bigbird-roberta-large) on an unknown dataset. It achieves the following results on the evaluation set: - eval_loss: 0.2129 - eval_f1: 0.9290 - eval_precision: 0.9173 - eval_recall: 0.9410 - eval_runtime: 311.0516 - eval_samples_per_second: 49.304 - eval_steps_per_second: 6.163 - epoch: 1.0 - step: 5432 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-06 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - num_epochs: 6 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.25.1 - Pytorch 1.13.0+cu116 - Datasets 2.7.1 - Tokenizers 0.13.2
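A hedged inference sketch for this checkpoint: the card does not document the expected input format, so treating the dialogue context and a candidate response as a sentence pair, and reading label index 1 as the positive class, are assumptions.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("eublefar/bigbird-dialogue-score")
model = AutoModelForSequenceClassification.from_pretrained("eublefar/bigbird-dialogue-score")

# Assumed convention: score how well a candidate response fits the dialogue context.
context = "Hi! How was your trip to Lisbon?"
response = "It was wonderful, thanks for asking."
inputs = tokenizer(context, response, return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs[0, 1].item())  # assumed probability of the positive ("good match") class
```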
odiaz1066/PPO-LunarLander
odiaz1066
2022-12-11T00:58:29Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-12-11T00:58:07Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 253.66 +/- 14.11 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) A minimal loading sketch; the checkpoint filename below is an assumption, so check the repository's files for the exact name.
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
# Download the checkpoint from the Hub; the filename is an assumption.
checkpoint = load_from_hub("odiaz1066/PPO-LunarLander", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
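A short follow-up evaluation sketch reusing the `model` loaded above; it assumes a local Gym installation with the Box2D extra.

```python
import gym
from stable_baselines3.common.evaluation import evaluate_policy

# LunarLander-v2 needs Box2D: pip install "gym[box2d]"
env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```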