Dataset schema: modelId (string, 5–139 chars) | author (string, 2–42 chars) | last_modified (timestamp[us, UTC], 2020-02-15 – 2025-08-26) | downloads (int64, 0–223M) | likes (int64, 0–11.7k) | library_name (string, 521 classes) | tags (list, 1–4.05k items) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, UTC], 2022-03-02 – 2025-08-26) | card (string, 11–1.01M chars)
Model: MStarn/ppo-SnowballTarget | Author: MStarn | Last modified: 2023-09-25T03:02:06Z | Downloads: 2 | Likes: 0 | Library: ml-agents | Pipeline: reinforcement-learning | Created: 2023-09-25T03:01:59Z
Tags: ml-agents, tensorboard, onnx, SnowballTarget, deep-reinforcement-learning, reinforcement-learning, ML-Agents-SnowballTarget, region:us
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of the ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: MStarn/ppo-SnowballTarget
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
Model: NousResearch/Redmond-Puffin-13B | Author: NousResearch | Last modified: 2023-09-25T02:53:42Z | Downloads: 1,453 | Likes: 110 | Library: transformers | Pipeline: text-generation | Created: 2023-07-19T13:08:59Z
Tags: transformers, pytorch, llama, text-generation, llama-2, sft, eng, dataset:LDJnr/Puffin, license:mit, autotrain_compatible, text-generation-inference, endpoints_compatible, region:us
---
language:
- eng
tags:
- llama-2
- sft
license:
- mit
datasets:
- LDJnr/Puffin
---
## **Redmond-Puffin-13b-V1.3**
**The first commercially available language model released by Nous Research!**
Redmond-Puffin-13B is likely one of the world's first Llama-2-based fine-tuned language models, leveraging a hand-curated set of 3K high-quality examples, many of which take full advantage of the 4096-token context length of Llama 2. This model was fine-tuned by Nous Research, with LDJ leading the training and dataset curation, along with significant dataset formation contributions by J-Supha.
Special thank you to Redmond AI for sponsoring the compute.
Special thank you to Emozilla for assisting with training experimentations and many issues encountered during training.
Notable mentions for assisting in some of the training issues goes to: Caseus and Teknium.
## Model Training
Redmond-Puffin 13B-V1.3 is a new model trained for multiple epochs on a dataset of 3,000 carefully curated GPT-4 examples, most of which are long context conversations between a real human and GPT-4.
Additional data came from carefully curated subsections of datasets such as CamelAI's Physics, Chemistry, Biology and Math.
## Prompt Format
The recommended prompt format is:
WARNING: a previous recommendation to use "### human" and "# response" was a critical error; please use the prefix and suffix below.
```
USER:
ASSISTANT:
```
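As a sketch, the format above can be applied programmatically; the helper below is illustrative and not part of the model's tooling:

```python
def format_puffin_prompt(turns):
    """Render (user, assistant) turn pairs into the USER:/ASSISTANT: format.
    Leave the final assistant message empty so the model completes it."""
    lines = []
    for user_msg, assistant_msg in turns:
        lines.append(f"USER: {user_msg}")
        lines.append(f"ASSISTANT: {assistant_msg}")
    return "\n".join(lines)

prompt = format_puffin_prompt([("What is the capital of France?", "")])
```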
## When should I use Puffin or Hermes 2?
Puffin and Hermes-2 both beat previous SOTA for GPT4ALL benchmarks, with Hermes-2 winning by a 0.1% margin over Puffin.
- Hermes 2 is trained purely on single-turn instruction examples.
- Puffin is trained mostly on multi-turn, long-context, highly curated and cleaned GPT-4 conversations with real humans, as well as curated single-turn examples relating to physics, biology, math and chemistry.
For these reasons, it's recommended to give Puffin a try if you want multi-turn conversations and/or long-context communication.
## Example Outputs!:





## Notable Features:
- The first Llama-2 based fine-tuned model released by Nous Research.
- Ability to recall information up to 2023 without internet access (ChatGPT's knowledge cutoff is in 2021)
- Pretrained on 2 trillion tokens of text (double the amount of most open LLMs)
- Pretrained with a context length of 4096 tokens, and fine-tuned on a significant amount of multi-turn conversations reaching that full token limit.
- The first commercially available language model released by Nous Research.
## Current Limitations
Some token mismatch and formatting issues have been identified; these may affect the current output quality.
We plan to have these solved in an updated Puffin model in the very near future, please stay tuned!
## Future Plans
This is a relatively early build amongst the grand plans for the future of Puffin!
The token mismatch problems noted above may affect output quality; we plan to have them solved in Puffin V2, along with other improvements.
## How you can help!
In the near future we plan to leverage the help of domain-specific expert volunteers to eliminate any mathematically or verifiably incorrect answers from our training curations.
If you have at least a bachelor's degree in mathematics, physics, biology or chemistry and would like to volunteer even just 30 minutes of your time, please contact LDJ on Discord!
## Benchmarks!
As of Puffin's release, it achieves a new SOTA on the GPT4All benchmarks, supplanting Hermes for the #1 position!
(Rounded to the nearest tenth)
Previous SOTA: Hermes - 68.8
New SOTA: Puffin - 69.9 (+1.1)
Note: after release, Puffin's average GPT4All score was beaten by 0.1% by Nous' very own Hermes-2!
Latest SOTA w/ Hermes-2 - 70.0 (+0.1 over Puffin's 69.9 score)
That being said, Puffin supplants Hermes-2 for the #1 spot in ARC-Easy, HellaSwag and Winogrande!
Puffin also perfectly ties with Hermes in PIQA; however, Hermes-2 still excels in much of BIG-bench and AGIEval, so it's highly recommended you give it a try as well!
GPT4all :
```
| Task |Version| Metric |Value | |Stderr|
|-------------|------:|--------|-----:|---|-----:|
|arc_challenge| 0|acc |0.4983|± |0.0146|
| | |acc_norm|0.5068|± |0.0146|
|arc_easy | 0|acc |0.7980|± |0.0082|
| | |acc_norm|0.7757|± |0.0086|
|boolq | 1|acc |0.8150|± |0.0068|
|hellaswag | 0|acc |0.6132|± |0.0049|
| | |acc_norm|0.8043|± |0.0040|
|openbookqa | 0|acc |0.3560|± |0.0214|
| | |acc_norm|0.4560|± |0.0223|
|piqa | 0|acc |0.7954|± |0.0094|
| | |acc_norm|0.8069|± |0.0092|
|winogrande | 0|acc |0.7245|± |0.0126|
```
```
| Task |Version| Metric |Value | |Stderr|
|------------------------------------------------|------:|---------------------|-----:|---|-----:|
|bigbench_causal_judgement | 0|multiple_choice_grade|0.5368|± |0.0363|
|bigbench_date_understanding | 0|multiple_choice_grade|0.7127|± |0.0236|
|bigbench_disambiguation_qa | 0|multiple_choice_grade|0.3023|± |0.0286|
|bigbench_geometric_shapes | 0|multiple_choice_grade|0.1003|± |0.0159|
| | |exact_str_match |0.0000|± |0.0000|
|bigbench_logical_deduction_five_objects | 0|multiple_choice_grade|0.2520|± |0.0194|
|bigbench_logical_deduction_seven_objects | 0|multiple_choice_grade|0.1743|± |0.0143|
|bigbench_logical_deduction_three_objects | 0|multiple_choice_grade|0.4200|± |0.0285|
|bigbench_movie_recommendation | 0|multiple_choice_grade|0.2900|± |0.0203|
|bigbench_navigate | 0|multiple_choice_grade|0.5000|± |0.0158|
|bigbench_reasoning_about_colored_objects | 0|multiple_choice_grade|0.5430|± |0.0111|
|bigbench_ruin_names | 0|multiple_choice_grade|0.4442|± |0.0235|
|bigbench_salient_translation_error_detection | 0|multiple_choice_grade|0.2074|± |0.0128|
|bigbench_snarks | 0|multiple_choice_grade|0.5083|± |0.0373|
|bigbench_sports_understanding | 0|multiple_choice_grade|0.4970|± |0.0159|
|bigbench_temporal_sequences | 0|multiple_choice_grade|0.3260|± |0.0148|
|bigbench_tracking_shuffled_objects_five_objects | 0|multiple_choice_grade|0.2136|± |0.0116|
|bigbench_tracking_shuffled_objects_seven_objects| 0|multiple_choice_grade|0.1326|± |0.0081|
|bigbench_tracking_shuffled_objects_three_objects| 0|multiple_choice_grade|0.4200|± |0.0285|
```
AGI Eval:
```
| Task |Version| Metric |Value | |Stderr|
|------------------------------|------:|--------|-----:|---|-----:|
|agieval_aqua_rat | 0|acc |0.2283|± |0.0264|
| | |acc_norm|0.2244|± |0.0262|
|agieval_logiqa_en | 0|acc |0.2780|± |0.0176|
| | |acc_norm|0.3164|± |0.0182|
|agieval_lsat_ar | 0|acc |0.2348|± |0.0280|
| | |acc_norm|0.2043|± |0.0266|
|agieval_lsat_lr | 0|acc |0.3392|± |0.0210|
| | |acc_norm|0.2961|± |0.0202|
|agieval_lsat_rc | 0|acc |0.4387|± |0.0303|
| | |acc_norm|0.3569|± |0.0293|
|agieval_sat_en | 0|acc |0.5874|± |0.0344|
| | |acc_norm|0.5194|± |0.0349|
|agieval_sat_en_without_passage| 0|acc |0.4223|± |0.0345|
| | |acc_norm|0.3447|± |0.0332|
|agieval_sat_math | 0|acc |0.3364|± |0.0319|
| | |acc_norm|0.2773|± |0.0302|
```
Model: NousResearch/Redmond-Puffin-13B-GGML | Author: NousResearch | Last modified: 2023-09-25T02:52:54Z | Downloads: 0 | Likes: 23 | Library: null | Pipeline: null | Created: 2023-07-20T03:25:10Z
Tags: llama-2, sft, eng, dataset:LDJnr/Puffin, license:mit, region:us
---
language:
- eng
tags:
- llama-2
- sft
license:
- mit
datasets:
- LDJnr/Puffin
---
GGML 4bit Quantization of Nous Research's Puffin V1.3 Model: https://huggingface.co/NousResearch/Redmond-Puffin-13B-V1.3
*Thank you to Eachadea for making this quantization possible immediately upon launch*
For other faster or more accurate quantization methods, please check out Eachadea's Hugging Face page!
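As an illustration of what 4-bit quantization trades away, here is a toy blockwise scheme; it is not the actual GGML format, which uses its own block layout, but the rounding-to-few-levels idea is the same:

```python
import numpy as np

def quantize_4bit(block):
    """Toy blockwise 4-bit quantization: scale into 16 signed levels,
    round, then dequantize. Max error per weight is about scale / 2."""
    scale = np.max(np.abs(block)) / 7.0
    if scale == 0.0:
        return np.zeros_like(block)
    q = np.clip(np.round(block / scale), -8, 7)  # 4-bit signed integers
    return q * scale

w = np.array([0.12, -0.7, 0.33, 0.05])
w_dequant = quantize_4bit(w)  # close to w, but only 16 distinct levels per block
```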

## **Redmond-Puffin-13b-V1.3**
**The first commercially available language model released by Nous Research!**
Redmond-Puffin-13B is one of the world's first Llama-2-based fine-tuned language models, leveraging a hand-curated set of 3K high-quality examples, many of which take full advantage of the 4096-token context length of Llama 2. This model was fine-tuned by Nous Research, with LDJ leading the training and dataset curation, along with significant dataset formation contributions by J-Supha.
Special thank you to Redmond AI for sponsoring the compute.
Special thank you to Emozilla for assisting with training experimentations and many issues encountered during training.
Notable mentions for assisting in some of the training issues goes to: Caseus and Teknium.
## Model Training
Redmond-Puffin-13B-V1.3 is a new model trained for multiple epochs on a dataset of 3,000 carefully curated GPT-4 examples, most of which are long context conversations between a real human and GPT-4.
Additional data came from carefully curated subsections of datasets such as CamelAI's Physics, Chemistry, Biology and Math.
## Prompt Format
The recommended prompt format is:
WARNING: a previous recommendation to use "### human" and "# response" was a critical error; please use the prefix and suffix below.
```
USER:
ASSISTANT:
```
## When should I use Puffin or Hermes 2?
Puffin and Hermes-2 both beat previous SOTA for GPT4ALL benchmarks, with Hermes-2 winning by a 0.1% margin over Puffin.
- Hermes 2 is trained purely on single-turn instruction examples.
- Puffin is trained mostly on multi-turn, long-context, highly curated and cleaned GPT-4 conversations with real humans, as well as curated single-turn examples relating to physics, biology, math and chemistry.
For these reasons, it's recommended to give Puffin a try if you want multi-turn conversations and/or long-context communication.
## Example Outputs!:





## Notable Features:
- The first Llama-2 based fine-tuned model released by Nous Research.
- Ability to recall information up to 2023 without internet access (ChatGPT's knowledge cutoff is in 2021)
- Pretrained on 2 trillion tokens of text (double the amount of most open LLMs)
- Pretrained with a context length of 4096 tokens, and fine-tuned on a significant amount of multi-turn conversations reaching that full token limit.
- The first commercially available language model released by Nous Research.
## Current Limitations
Some token mismatch and formatting issues have been identified; these may affect the current output quality.
We plan to have these solved in an updated Puffin model in the very near future, please stay tuned!
## Future Plans
This is a relatively early build amongst the grand plans for the future of Puffin!
The token mismatch problems noted above may affect output quality; we plan to have them solved in Puffin V2, along with other improvements.
## How you can help!
In the near future we plan to leverage the help of domain-specific expert volunteers to eliminate any mathematically or verifiably incorrect answers from our training curations.
If you have at least a bachelor's degree in mathematics, physics, biology or chemistry and would like to volunteer even just 30 minutes of your time, please contact LDJ on Discord!
## Benchmarks!
As of Puffin's release, it achieves a new SOTA on the GPT4All benchmarks, supplanting Hermes for the #1 position!
(Rounded to the nearest tenth)
Previous SOTA: Hermes - 68.8
New SOTA: Puffin - 69.9 (+1.1)
Note: after release, Puffin's average GPT4All score was beaten by 0.1% by Nous' very own Hermes-2!
Latest SOTA w/ Hermes-2 - 70.0 (+0.1 over Puffin's 69.9 score)
That being said, Puffin supplants Hermes-2 for the #1 spot in ARC-Easy, HellaSwag and Winogrande!
Puffin also perfectly ties with Hermes in PIQA; however, Hermes-2 still excels in much of BIG-bench and AGIEval, so it's highly recommended you give it a try as well!
GPT4all :
```
| Task |Version| Metric |Value | |Stderr|
|-------------|------:|--------|-----:|---|-----:|
|arc_challenge| 0|acc |0.4983|± |0.0146|
| | |acc_norm|0.5068|± |0.0146|
|arc_easy | 0|acc |0.7980|± |0.0082|
| | |acc_norm|0.7757|± |0.0086|
|boolq | 1|acc |0.8150|± |0.0068|
|hellaswag | 0|acc |0.6132|± |0.0049|
| | |acc_norm|0.8043|± |0.0040|
|openbookqa | 0|acc |0.3560|± |0.0214|
| | |acc_norm|0.4560|± |0.0223|
|piqa | 0|acc |0.7954|± |0.0094|
| | |acc_norm|0.8069|± |0.0092|
|winogrande | 0|acc |0.7245|± |0.0126|
```
```
| Task |Version| Metric |Value | |Stderr|
|------------------------------------------------|------:|---------------------|-----:|---|-----:|
|bigbench_causal_judgement | 0|multiple_choice_grade|0.5368|± |0.0363|
|bigbench_date_understanding | 0|multiple_choice_grade|0.7127|± |0.0236|
|bigbench_disambiguation_qa | 0|multiple_choice_grade|0.3023|± |0.0286|
|bigbench_geometric_shapes | 0|multiple_choice_grade|0.1003|± |0.0159|
| | |exact_str_match |0.0000|± |0.0000|
|bigbench_logical_deduction_five_objects | 0|multiple_choice_grade|0.2520|± |0.0194|
|bigbench_logical_deduction_seven_objects | 0|multiple_choice_grade|0.1743|± |0.0143|
|bigbench_logical_deduction_three_objects | 0|multiple_choice_grade|0.4200|± |0.0285|
|bigbench_movie_recommendation | 0|multiple_choice_grade|0.2900|± |0.0203|
|bigbench_navigate | 0|multiple_choice_grade|0.5000|± |0.0158|
|bigbench_reasoning_about_colored_objects | 0|multiple_choice_grade|0.5430|± |0.0111|
|bigbench_ruin_names | 0|multiple_choice_grade|0.4442|± |0.0235|
|bigbench_salient_translation_error_detection | 0|multiple_choice_grade|0.2074|± |0.0128|
|bigbench_snarks | 0|multiple_choice_grade|0.5083|± |0.0373|
|bigbench_sports_understanding | 0|multiple_choice_grade|0.4970|± |0.0159|
|bigbench_temporal_sequences | 0|multiple_choice_grade|0.3260|± |0.0148|
|bigbench_tracking_shuffled_objects_five_objects | 0|multiple_choice_grade|0.2136|± |0.0116|
|bigbench_tracking_shuffled_objects_seven_objects| 0|multiple_choice_grade|0.1326|± |0.0081|
|bigbench_tracking_shuffled_objects_three_objects| 0|multiple_choice_grade|0.4200|± |0.0285|
```
AGI Eval:
```
| Task |Version| Metric |Value | |Stderr|
|------------------------------|------:|--------|-----:|---|-----:|
|agieval_aqua_rat | 0|acc |0.2283|± |0.0264|
| | |acc_norm|0.2244|± |0.0262|
|agieval_logiqa_en | 0|acc |0.2780|± |0.0176|
| | |acc_norm|0.3164|± |0.0182|
|agieval_lsat_ar | 0|acc |0.2348|± |0.0280|
| | |acc_norm|0.2043|± |0.0266|
|agieval_lsat_lr | 0|acc |0.3392|± |0.0210|
| | |acc_norm|0.2961|± |0.0202|
|agieval_lsat_rc | 0|acc |0.4387|± |0.0303|
| | |acc_norm|0.3569|± |0.0293|
|agieval_sat_en | 0|acc |0.5874|± |0.0344|
| | |acc_norm|0.5194|± |0.0349|
|agieval_sat_en_without_passage| 0|acc |0.4223|± |0.0345|
| | |acc_norm|0.3447|± |0.0332|
|agieval_sat_math | 0|acc |0.3364|± |0.0319|
| | |acc_norm|0.2773|± |0.0302|
```
Model: RockySong/Taxi-v3 | Author: RockySong | Last modified: 2023-09-25T02:51:49Z | Downloads: 0 | Likes: 0 | Library: null | Pipeline: reinforcement-learning | Created: 2023-09-25T02:51:45Z
Tags: Taxi-v3, q-learning, reinforcement-learning, custom-implementation, model-index, region:us
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import pickle, gymnasium as gym
from huggingface_hub import hf_hub_download

def load_from_hub(repo_id, filename):  # download the pickled model dict and unpickle it
    with open(hf_hub_download(repo_id=repo_id, filename=filename), "rb") as f:
        return pickle.load(f)

model = load_from_hub(repo_id="RockySong/Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc.)
env = gym.make(model["env_id"])
```
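Acting greedily over the loaded Q-table can be sketched as follows; the `"qtable"` key is an assumption about how the model dict was pushed, so check the actual keys first:

```python
import numpy as np

def greedy_action(qtable, state):
    """Pick the highest-value action for a discrete state (greedy policy)."""
    return int(np.argmax(qtable[state]))

# Toy 2-state, 2-action Q-table standing in for model["qtable"]
toy_qtable = np.array([[0.1, 0.9], [0.7, 0.2]])
action = greedy_action(toy_qtable, 0)
```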
Model: Mayte1GarciaGarcia/Ejernlp | Author: Mayte1GarciaGarcia | Last modified: 2023-09-25T02:36:36Z | Downloads: 105 | Likes: 0 | Library: transformers | Pipeline: text-classification | Created: 2023-09-25T02:33:43Z
Tags: transformers, pytorch, tensorboard, roberta, text-classification, generated_from_trainer, dataset:glue, license:apache-2.0, model-index, autotrain_compatible, endpoints_compatible, region:us
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: Ejernlp
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: mrpc
split: validation
args: mrpc
metrics:
- name: Accuracy
type: accuracy
value: 0.8480392156862745
- name: F1
type: f1
value: 0.8908450704225351
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Ejernlp
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5381
- Accuracy: 0.8480
- F1: 0.8908
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
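These hyperparameters correspond roughly to the following `transformers.TrainingArguments` (a sketch; argument names follow the Transformers API, and `output_dir` is a placeholder):

```python
from transformers import TrainingArguments

# Sketch reconstructing the listed hyperparameters (output_dir is a placeholder)
args = TrainingArguments(
    output_dir="ejernlp",
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=3,
)
```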
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.5096 | 1.09 | 500 | 0.5683 | 0.8235 | 0.8737 |
| 0.3446 | 2.18 | 1000 | 0.5381 | 0.8480 | 0.8908 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
Model: k4west/xlm-roberta-base-finetuned-panx-all | Author: k4west | Last modified: 2023-09-25T02:14:28Z | Downloads: 104 | Likes: 0 | Library: transformers | Pipeline: token-classification | Created: 2023-09-25T02:01:18Z
Tags: transformers, pytorch, xlm-roberta, token-classification, generated_from_trainer, base_model:FacebookAI/xlm-roberta-base, base_model:finetune:FacebookAI/xlm-roberta-base, license:mit, autotrain_compatible, endpoints_compatible, region:us
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-all
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-all
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1752
- F1: 0.8564
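For inference, the standard Transformers pipeline applies (a sketch; it downloads the checkpoint from the Hub, and the example sentence is illustrative):

```python
from transformers import pipeline

# Token-classification (NER) pipeline over the fine-tuned checkpoint
ner = pipeline(
    "token-classification",
    model="k4west/xlm-roberta-base-finetuned-panx-all",
    aggregation_strategy="simple",  # merge sub-word pieces into entity spans
)
print(ner("Jeff Dean works at Google in California."))
```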
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.3005 | 1.0 | 835 | 0.1896 | 0.8180 |
| 0.1591 | 2.0 | 1670 | 0.1704 | 0.8399 |
| 0.1033 | 3.0 | 2505 | 0.1752 | 0.8564 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1
- Datasets 2.14.5
- Tokenizers 0.13.3
Model: k4west/xlm-roberta-base-finetuned-panx-en | Author: k4west | Last modified: 2023-09-25T02:01:14Z | Downloads: 137 | Likes: 0 | Library: transformers | Pipeline: token-classification | Created: 2023-09-25T01:58:52Z
Tags: transformers, pytorch, xlm-roberta, token-classification, generated_from_trainer, dataset:xtreme, base_model:FacebookAI/xlm-roberta-base, base_model:finetune:FacebookAI/xlm-roberta-base, license:mit, model-index, autotrain_compatible, endpoints_compatible, region:us
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-en
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
config: PAN-X.en
split: validation
args: PAN-X.en
metrics:
- name: F1
type: f1
value: 0.6861971830985916
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-en
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4047
- F1: 0.6862
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.1924 | 1.0 | 50 | 0.6348 | 0.5026 |
| 0.5685 | 2.0 | 100 | 0.4398 | 0.6478 |
| 0.3822 | 3.0 | 150 | 0.4047 | 0.6862 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1
- Datasets 2.14.5
- Tokenizers 0.13.3
Model: CyberHarem/hoshizora_rin_lovelive | Author: CyberHarem | Last modified: 2023-09-25T01:59:28Z | Downloads: 0 | Likes: 0 | Library: null | Pipeline: text-to-image | Created: 2023-08-14T19:56:14Z
Tags: art, text-to-image, dataset:CyberHarem/hoshizora_rin_lovelive, license:mit, region:us
---
license: mit
datasets:
- CyberHarem/hoshizora_rin_lovelive
pipeline_tag: text-to-image
tags:
- art
---
# Lora of hoshizora_rin_lovelive
This model is trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion). And the auto-training framework is maintained by [DeepGHS Team](https://huggingface.co/deepghs).
The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11).
After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora.
For example, if you want to use the model from step 3240, you need to download `3240/hoshizora_rin_lovelive.pt` as the embedding and `3240/hoshizora_rin_lovelive.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters.
**The best step we recommend is 3240**, with a score of 0.989. The trigger words are:
1. `hoshizora_rin_lovelive`
2. `short_hair, orange_hair, smile, yellow_eyes, blush, open_mouth, green_eyes, hair_ornament`
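The two-file loading procedure above can be sketched with diffusers (API names from the diffusers library; the local file paths and the prompt are assumptions for illustration):

```python
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("Meina/MeinaMix_V11")
# The .pt file is loaded as a textual-inversion embedding...
pipe.load_textual_inversion("hoshizora_rin_lovelive.pt", token="hoshizora_rin_lovelive")
# ...and the .safetensors file is loaded as LoRA weights.
pipe.load_lora_weights(".", weight_name="hoshizora_rin_lovelive.safetensors")
image = pipe("hoshizora_rin_lovelive, short_hair, orange_hair, smile").images[0]
```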
For the following groups, it is not recommended to use this model and we express regret:
1. Individuals who cannot tolerate any deviations from the original character design, even in the slightest detail.
2. Individuals who are facing the application scenarios with high demands for accuracy in recreating character outfits.
3. Individuals who cannot accept the potential randomness in AI-generated images based on the Stable Diffusion algorithm.
4. Individuals who are not comfortable with the fully automated process of training character models using LoRA, or those who believe that training character models must be done purely through manual operations to avoid disrespecting the characters.
5. Individuals who find the generated image content offensive to their values.
These are available steps:
| Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | pattern_5 | pattern_6 | pattern_7 | pattern_8 | pattern_9 | pattern_10 | pattern_11 | pattern_12 | pattern_13 | pattern_14 | pattern_15 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata |
|:---------|:----------|:------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-----------------------------------------|:--------------------------------------------------|:-------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------|
| 8100 | 0.939 | [Download](8100/hoshizora_rin_lovelive.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](8100/previews/bondage.png) |  |  |  | [<NSFW, click to see>](8100/previews/nude.png) | [<NSFW, click to see>](8100/previews/nude2.png) |  |  |
| 7560 | 0.946 | [Download](7560/hoshizora_rin_lovelive.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](7560/previews/bondage.png) |  |  |  | [<NSFW, click to see>](7560/previews/nude.png) | [<NSFW, click to see>](7560/previews/nude2.png) |  |  |
| 7020 | 0.944 | [Download](7020/hoshizora_rin_lovelive.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](7020/previews/bondage.png) |  |  |  | [<NSFW, click to see>](7020/previews/nude.png) | [<NSFW, click to see>](7020/previews/nude2.png) |  |  |
| 6480 | 0.948 | [Download](6480/hoshizora_rin_lovelive.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](6480/previews/bondage.png) |  |  |  | [<NSFW, click to see>](6480/previews/nude.png) | [<NSFW, click to see>](6480/previews/nude2.png) |  |  |
| 5940 | 0.948 | [Download](5940/hoshizora_rin_lovelive.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](5940/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5940/previews/nude.png) | [<NSFW, click to see>](5940/previews/nude2.png) |  |  |
| 5400 | 0.947 | [Download](5400/hoshizora_rin_lovelive.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](5400/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5400/previews/nude.png) | [<NSFW, click to see>](5400/previews/nude2.png) |  |  |
| 4860 | 0.944 | [Download](4860/hoshizora_rin_lovelive.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4860/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4860/previews/nude.png) | [<NSFW, click to see>](4860/previews/nude2.png) |  |  |
| 4320 | 0.948 | [Download](4320/hoshizora_rin_lovelive.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4320/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4320/previews/nude.png) | [<NSFW, click to see>](4320/previews/nude2.png) |  |  |
| 3780 | 0.944 | [Download](3780/hoshizora_rin_lovelive.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3780/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3780/previews/nude.png) | [<NSFW, click to see>](3780/previews/nude2.png) |  |  |
| **3240** | **0.989** | [**Download**](3240/hoshizora_rin_lovelive.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3240/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3240/previews/nude.png) | [<NSFW, click to see>](3240/previews/nude2.png) |  |  |
| 2700 | 0.988 | [Download](2700/hoshizora_rin_lovelive.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2700/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2700/previews/nude.png) | [<NSFW, click to see>](2700/previews/nude2.png) |  |  |
| 2160 | 0.975 | [Download](2160/hoshizora_rin_lovelive.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2160/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2160/previews/nude.png) | [<NSFW, click to see>](2160/previews/nude2.png) |  |  |
| 1620 | 0.959 | [Download](1620/hoshizora_rin_lovelive.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1620/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1620/previews/nude.png) | [<NSFW, click to see>](1620/previews/nude2.png) |  |  |
| 1080 | 0.960 | [Download](1080/hoshizora_rin_lovelive.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1080/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1080/previews/nude.png) | [<NSFW, click to see>](1080/previews/nude2.png) |  |  |
| 540 | 0.938 | [Download](540/hoshizora_rin_lovelive.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](540/previews/bondage.png) |  |  |  | [<NSFW, click to see>](540/previews/nude.png) | [<NSFW, click to see>](540/previews/nude2.png) |  |  |
Model: k4west/xlm-roberta-base-finetuned-panx-it | Author: k4west | Last modified: 2023-09-25T01:58:47Z | Downloads: 103 | Likes: 0 | Library: transformers | Pipeline: token-classification | Created: 2023-09-25T01:55:49Z
Tags: transformers, pytorch, xlm-roberta, token-classification, generated_from_trainer, dataset:xtreme, base_model:FacebookAI/xlm-roberta-base, base_model:finetune:FacebookAI/xlm-roberta-base, license:mit, model-index, autotrain_compatible, endpoints_compatible, region:us
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-it
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
config: PAN-X.it
split: validation
args: PAN-X.it
metrics:
- name: F1
type: f1
value: 0.8138492871690427
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-it
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2602
- F1: 0.8138
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
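The card does not include the training script; the values above map onto a `transformers.TrainingArguments` configuration roughly as follows (a sketch reconstructed from the listed hyperparameters, not the original code — `output_dir` and any unlisted arguments are assumptions):

```python
from transformers import TrainingArguments

# Hypothetical reconstruction of the hyperparameters listed above.
args = TrainingArguments(
    output_dir="xlm-roberta-base-finetuned-panx-it",  # assumed
    learning_rate=5e-5,
    per_device_train_batch_size=24,
    per_device_eval_batch_size=24,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3,
    # Adam with betas=(0.9, 0.999) and epsilon=1e-08 is the TrainingArguments default.
)
```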
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.8241 | 1.0 | 70 | 0.3261 | 0.7356 |
| 0.2933 | 2.0 | 140 | 0.2585 | 0.8006 |
| 0.2013 | 3.0 | 210 | 0.2602 | 0.8138 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1
- Datasets 2.14.5
- Tokenizers 0.13.3
|
CyberHarem/shinomiya_karen_theidolmstermillionlive
|
CyberHarem
| 2023-09-25T01:53:58Z | 0 | 0 | null |
[
"art",
"text-to-image",
"dataset:CyberHarem/shinomiya_karen_theidolmstermillionlive",
"license:mit",
"region:us"
] |
text-to-image
| 2023-09-25T01:43:03Z |
---
license: mit
datasets:
- CyberHarem/shinomiya_karen_theidolmstermillionlive
pipeline_tag: text-to-image
tags:
- art
---
# Lora of shinomiya_karen_theidolmstermillionlive
This model was trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion), and the auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs).
The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11).
After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora.
For example, if you want to use the model from step 2040, you need to download `2040/shinomiya_karen_theidolmstermillionlive.pt` as the embedding and `2040/shinomiya_karen_theidolmstermillionlive.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters.
**The best step we recommend is 2040**, with a score of 0.959. The trigger words are:
1. `shinomiya_karen_theidolmstermillionlive`
2. `long_hair, blonde_hair, blue_eyes, blush, breasts, open_mouth, smile, large_breasts`
We do not recommend this model for the following groups, and we apologize to them:
1. Individuals who cannot tolerate any deviations from the original character design, even in the slightest detail.
2. Individuals whose use cases demand high accuracy in recreating character outfits.
3. Individuals who cannot accept the potential randomness in AI-generated images based on the Stable Diffusion algorithm.
4. Individuals who are uncomfortable with the fully automated process of training character models using LoRA, or who believe character models should only be trained through manual operations out of respect for the characters.
5. Individuals who find the generated image content offensive to their values.
These are available steps:
| Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata |
|:---------|:----------|:-----------------------------------------------------------------|:----------------------------------------------------|:-----------------------------------------------|:----------------------------------------------------|:----------------------------------------------------|:-------------------------------------------------|:--------------------------------------------------|:-------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------|
| 5100 | 0.769 | [Download](5100/shinomiya_karen_theidolmstermillionlive.zip) | [<NSFW, click to see>](5100/previews/pattern_1.png) |  | [<NSFW, click to see>](5100/previews/pattern_3.png) | [<NSFW, click to see>](5100/previews/pattern_4.png) | [<NSFW, click to see>](5100/previews/bikini.png) | [<NSFW, click to see>](5100/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5100/previews/nude.png) | [<NSFW, click to see>](5100/previews/nude2.png) |  |  |
| 4760 | 0.864 | [Download](4760/shinomiya_karen_theidolmstermillionlive.zip) | [<NSFW, click to see>](4760/previews/pattern_1.png) |  | [<NSFW, click to see>](4760/previews/pattern_3.png) | [<NSFW, click to see>](4760/previews/pattern_4.png) | [<NSFW, click to see>](4760/previews/bikini.png) | [<NSFW, click to see>](4760/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4760/previews/nude.png) | [<NSFW, click to see>](4760/previews/nude2.png) |  |  |
| 4420 | 0.734 | [Download](4420/shinomiya_karen_theidolmstermillionlive.zip) | [<NSFW, click to see>](4420/previews/pattern_1.png) |  | [<NSFW, click to see>](4420/previews/pattern_3.png) | [<NSFW, click to see>](4420/previews/pattern_4.png) | [<NSFW, click to see>](4420/previews/bikini.png) | [<NSFW, click to see>](4420/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4420/previews/nude.png) | [<NSFW, click to see>](4420/previews/nude2.png) |  |  |
| 4080 | 0.865 | [Download](4080/shinomiya_karen_theidolmstermillionlive.zip) | [<NSFW, click to see>](4080/previews/pattern_1.png) |  | [<NSFW, click to see>](4080/previews/pattern_3.png) | [<NSFW, click to see>](4080/previews/pattern_4.png) | [<NSFW, click to see>](4080/previews/bikini.png) | [<NSFW, click to see>](4080/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4080/previews/nude.png) | [<NSFW, click to see>](4080/previews/nude2.png) |  |  |
| 3740 | 0.843 | [Download](3740/shinomiya_karen_theidolmstermillionlive.zip) | [<NSFW, click to see>](3740/previews/pattern_1.png) |  | [<NSFW, click to see>](3740/previews/pattern_3.png) | [<NSFW, click to see>](3740/previews/pattern_4.png) | [<NSFW, click to see>](3740/previews/bikini.png) | [<NSFW, click to see>](3740/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3740/previews/nude.png) | [<NSFW, click to see>](3740/previews/nude2.png) |  |  |
| 3400 | 0.874 | [Download](3400/shinomiya_karen_theidolmstermillionlive.zip) | [<NSFW, click to see>](3400/previews/pattern_1.png) |  | [<NSFW, click to see>](3400/previews/pattern_3.png) | [<NSFW, click to see>](3400/previews/pattern_4.png) | [<NSFW, click to see>](3400/previews/bikini.png) | [<NSFW, click to see>](3400/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3400/previews/nude.png) | [<NSFW, click to see>](3400/previews/nude2.png) |  |  |
| 3060 | 0.945 | [Download](3060/shinomiya_karen_theidolmstermillionlive.zip) | [<NSFW, click to see>](3060/previews/pattern_1.png) |  | [<NSFW, click to see>](3060/previews/pattern_3.png) | [<NSFW, click to see>](3060/previews/pattern_4.png) | [<NSFW, click to see>](3060/previews/bikini.png) | [<NSFW, click to see>](3060/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3060/previews/nude.png) | [<NSFW, click to see>](3060/previews/nude2.png) |  |  |
| 2720 | 0.948 | [Download](2720/shinomiya_karen_theidolmstermillionlive.zip) | [<NSFW, click to see>](2720/previews/pattern_1.png) |  | [<NSFW, click to see>](2720/previews/pattern_3.png) | [<NSFW, click to see>](2720/previews/pattern_4.png) | [<NSFW, click to see>](2720/previews/bikini.png) | [<NSFW, click to see>](2720/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2720/previews/nude.png) | [<NSFW, click to see>](2720/previews/nude2.png) |  |  |
| 2380 | 0.827 | [Download](2380/shinomiya_karen_theidolmstermillionlive.zip) | [<NSFW, click to see>](2380/previews/pattern_1.png) |  | [<NSFW, click to see>](2380/previews/pattern_3.png) | [<NSFW, click to see>](2380/previews/pattern_4.png) | [<NSFW, click to see>](2380/previews/bikini.png) | [<NSFW, click to see>](2380/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2380/previews/nude.png) | [<NSFW, click to see>](2380/previews/nude2.png) |  |  |
| **2040** | **0.959** | [**Download**](2040/shinomiya_karen_theidolmstermillionlive.zip) | [<NSFW, click to see>](2040/previews/pattern_1.png) |  | [<NSFW, click to see>](2040/previews/pattern_3.png) | [<NSFW, click to see>](2040/previews/pattern_4.png) | [<NSFW, click to see>](2040/previews/bikini.png) | [<NSFW, click to see>](2040/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2040/previews/nude.png) | [<NSFW, click to see>](2040/previews/nude2.png) |  |  |
| 1700 | 0.925 | [Download](1700/shinomiya_karen_theidolmstermillionlive.zip) | [<NSFW, click to see>](1700/previews/pattern_1.png) |  | [<NSFW, click to see>](1700/previews/pattern_3.png) | [<NSFW, click to see>](1700/previews/pattern_4.png) | [<NSFW, click to see>](1700/previews/bikini.png) | [<NSFW, click to see>](1700/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1700/previews/nude.png) | [<NSFW, click to see>](1700/previews/nude2.png) |  |  |
| 1360 | 0.869 | [Download](1360/shinomiya_karen_theidolmstermillionlive.zip) | [<NSFW, click to see>](1360/previews/pattern_1.png) |  | [<NSFW, click to see>](1360/previews/pattern_3.png) | [<NSFW, click to see>](1360/previews/pattern_4.png) | [<NSFW, click to see>](1360/previews/bikini.png) | [<NSFW, click to see>](1360/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1360/previews/nude.png) | [<NSFW, click to see>](1360/previews/nude2.png) |  |  |
| 1020 | 0.892 | [Download](1020/shinomiya_karen_theidolmstermillionlive.zip) | [<NSFW, click to see>](1020/previews/pattern_1.png) |  | [<NSFW, click to see>](1020/previews/pattern_3.png) | [<NSFW, click to see>](1020/previews/pattern_4.png) | [<NSFW, click to see>](1020/previews/bikini.png) | [<NSFW, click to see>](1020/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1020/previews/nude.png) | [<NSFW, click to see>](1020/previews/nude2.png) |  |  |
| 680 | 0.880 | [Download](680/shinomiya_karen_theidolmstermillionlive.zip) | [<NSFW, click to see>](680/previews/pattern_1.png) |  | [<NSFW, click to see>](680/previews/pattern_3.png) | [<NSFW, click to see>](680/previews/pattern_4.png) | [<NSFW, click to see>](680/previews/bikini.png) | [<NSFW, click to see>](680/previews/bondage.png) |  |  |  | [<NSFW, click to see>](680/previews/nude.png) | [<NSFW, click to see>](680/previews/nude2.png) |  |  |
| 340 | 0.748 | [Download](340/shinomiya_karen_theidolmstermillionlive.zip) | [<NSFW, click to see>](340/previews/pattern_1.png) |  | [<NSFW, click to see>](340/previews/pattern_3.png) | [<NSFW, click to see>](340/previews/pattern_4.png) | [<NSFW, click to see>](340/previews/bikini.png) | [<NSFW, click to see>](340/previews/bondage.png) |  |  |  | [<NSFW, click to see>](340/previews/nude.png) | [<NSFW, click to see>](340/previews/nude2.png) |  |  |
|
ialvarenga/setfit-experiment-32-examples
|
ialvarenga
| 2023-09-25T01:19:51Z | 3 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"mpnet",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] |
text-classification
| 2023-09-25T01:19:33Z |
---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# ialvarenga/setfit-experiment-32-examples
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("ialvarenga/setfit-experiment-32-examples")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
vfu/ccc_doc_vqa_test
|
vfu
| 2023-09-25T01:03:55Z | 59 | 0 |
transformers
|
[
"transformers",
"pytorch",
"layoutlmv2",
"document-question-answering",
"generated_from_trainer",
"dataset:generator",
"base_model:microsoft/layoutlmv2-base-uncased",
"base_model:finetune:microsoft/layoutlmv2-base-uncased",
"license:cc-by-nc-sa-4.0",
"endpoints_compatible",
"region:us"
] |
document-question-answering
| 2023-09-25T00:42:03Z |
---
license: cc-by-nc-sa-4.0
base_model: microsoft/layoutlmv2-base-uncased
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: ccc_doc_vqa_test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ccc_doc_vqa_test
This model is a fine-tuned version of [microsoft/layoutlmv2-base-uncased](https://huggingface.co/microsoft/layoutlmv2-base-uncased) on the generator dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1
- Datasets 2.14.5
- Tokenizers 0.13.3
|
LumosD/sloth
|
LumosD
| 2023-09-25T00:59:58Z | 1 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:CompVis/stable-diffusion-v1-4",
"base_model:adapter:CompVis/stable-diffusion-v1-4",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-09-25T00:56:31Z |
---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-4
instance_prompt: a photo of sks dog
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - LumosD/sloth
These are LoRA adaptation weights for CompVis/stable-diffusion-v1-4. The weights were trained on "a photo of sks dog" using [DreamBooth](https://dreambooth.github.io/). Some example images are shown below.




LoRA for the text encoder was enabled: False.
|
CyberHarem/kousaka_honoka_lovelive
|
CyberHarem
| 2023-09-25T00:52:30Z | 0 | 0 | null |
[
"art",
"text-to-image",
"dataset:CyberHarem/kousaka_honoka_lovelive",
"license:mit",
"region:us"
] |
text-to-image
| 2023-08-14T19:35:19Z |
---
license: mit
datasets:
- CyberHarem/kousaka_honoka_lovelive
pipeline_tag: text-to-image
tags:
- art
---
# Lora of kousaka_honoka_lovelive
This model was trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion), and the auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs).
The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11).
After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora.
For example, if you want to use the model from step 7800, you need to download `7800/kousaka_honoka_lovelive.pt` as the embedding and `7800/kousaka_honoka_lovelive.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters.
**The best step we recommend is 7800**, with a score of 0.965. The trigger words are:
1. `kousaka_honoka_lovelive`
2. `blue_eyes, orange_hair, one_side_up, smile, blush, short_hair, open_mouth, bow, hair_bow, hair_ornament`
We do not recommend this model for the following groups, and we apologize to them:
1. Individuals who cannot tolerate any deviations from the original character design, even in the slightest detail.
2. Individuals whose use cases demand high accuracy in recreating character outfits.
3. Individuals who cannot accept the potential randomness in AI-generated images based on the Stable Diffusion algorithm.
4. Individuals who are uncomfortable with the fully automated process of training character models using LoRA, or who believe character models should only be trained through manual operations out of respect for the characters.
5. Individuals who find the generated image content offensive to their values.
These are available steps:
| Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | pattern_5 | pattern_6 | pattern_7 | pattern_8 | pattern_9 | pattern_10 | pattern_11 | pattern_12 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata |
|:---------|:----------|:-------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-----------------------------------------|:--------------------------------------------------|:-------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------|
| **7800** | **0.965** | [**Download**](7800/kousaka_honoka_lovelive.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](7800/previews/bondage.png) |  |  |  | [<NSFW, click to see>](7800/previews/nude.png) | [<NSFW, click to see>](7800/previews/nude2.png) |  |  |
| 7280 | 0.946 | [Download](7280/kousaka_honoka_lovelive.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](7280/previews/bondage.png) |  |  |  | [<NSFW, click to see>](7280/previews/nude.png) | [<NSFW, click to see>](7280/previews/nude2.png) |  |  |
| 6760 | 0.937 | [Download](6760/kousaka_honoka_lovelive.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](6760/previews/bondage.png) |  |  |  | [<NSFW, click to see>](6760/previews/nude.png) | [<NSFW, click to see>](6760/previews/nude2.png) |  |  |
| 6240 | 0.958 | [Download](6240/kousaka_honoka_lovelive.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](6240/previews/bondage.png) |  |  |  | [<NSFW, click to see>](6240/previews/nude.png) | [<NSFW, click to see>](6240/previews/nude2.png) |  |  |
| 5720 | 0.959 | [Download](5720/kousaka_honoka_lovelive.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](5720/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5720/previews/nude.png) | [<NSFW, click to see>](5720/previews/nude2.png) |  |  |
| 5200 | 0.946 | [Download](5200/kousaka_honoka_lovelive.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](5200/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5200/previews/nude.png) | [<NSFW, click to see>](5200/previews/nude2.png) |  |  |
| 4680 | 0.953 | [Download](4680/kousaka_honoka_lovelive.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4680/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4680/previews/nude.png) | [<NSFW, click to see>](4680/previews/nude2.png) |  |  |
| 4160 | 0.950 | [Download](4160/kousaka_honoka_lovelive.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4160/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4160/previews/nude.png) | [<NSFW, click to see>](4160/previews/nude2.png) |  |  |
| 3640 | 0.943 | [Download](3640/kousaka_honoka_lovelive.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3640/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3640/previews/nude.png) | [<NSFW, click to see>](3640/previews/nude2.png) |  |  |
| 3120 | 0.928 | [Download](3120/kousaka_honoka_lovelive.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3120/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3120/previews/nude.png) | [<NSFW, click to see>](3120/previews/nude2.png) |  |  |
| 2600 | 0.927 | [Download](2600/kousaka_honoka_lovelive.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2600/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2600/previews/nude.png) | [<NSFW, click to see>](2600/previews/nude2.png) |  |  |
| 2080 | 0.945 | [Download](2080/kousaka_honoka_lovelive.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2080/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2080/previews/nude.png) | [<NSFW, click to see>](2080/previews/nude2.png) |  |  |
| 1560 | 0.930 | [Download](1560/kousaka_honoka_lovelive.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1560/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1560/previews/nude.png) | [<NSFW, click to see>](1560/previews/nude2.png) |  |  |
| 1040 | 0.853 | [Download](1040/kousaka_honoka_lovelive.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1040/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1040/previews/nude.png) | [<NSFW, click to see>](1040/previews/nude2.png) |  |  |
| 520 | 0.893 | [Download](520/kousaka_honoka_lovelive.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](520/previews/bondage.png) |  |  |  | [<NSFW, click to see>](520/previews/nude.png) | [<NSFW, click to see>](520/previews/nude2.png) |  |  |
|
ialvarenga/setfit-experiment-all-data
|
ialvarenga
| 2023-09-25T00:29:24Z | 3 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"mpnet",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] |
text-classification
| 2023-09-25T00:29:05Z |
---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# ialvarenga/setfit-experiment-all-data
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("ialvarenga/setfit-experiment-all-data")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
xoyeop/distilkobert-KEmoFact-0925
|
xoyeop
| 2023-09-25T00:17:00Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"token-classification",
"generated_from_trainer",
"base_model:monologg/distilkobert",
"base_model:finetune:monologg/distilkobert",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-09-24T22:30:54Z |
---
base_model: monologg/distilkobert
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilkobert-KEmoFact-0925
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilkobert-KEmoFact-0925
This model is a fine-tuned version of [monologg/distilkobert](https://huggingface.co/monologg/distilkobert) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7166
- Precision: 0.0137
- Recall: 0.0022
- F1: 0.0039
- Accuracy: 0.6299
- Jaccard Scores: 0.0916
- Cls Accuracy: 0.0345
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | Jaccard Scores | Cls Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|:--------------:|:------------:|
| No log | 1.0 | 414 | 1.7574 | 0.0 | 0.0 | 0.0 | 0.6440 | 0.0022 | 0.0048 |
| 1.87 | 2.0 | 828 | 1.7244 | 0.0135 | 0.0017 | 0.0030 | 0.6446 | 0.0675 | 0.0315 |
| 1.7332 | 3.0 | 1242 | 1.6964 | 0.0 | 0.0 | 0.0 | 0.6450 | 0.0172 | 0.0127 |
| 1.7095 | 4.0 | 1656 | 1.6881 | 0.0212 | 0.0023 | 0.0041 | 0.6461 | 0.0578 | 0.0309 |
| 1.6899 | 5.0 | 2070 | 1.6827 | 0.0163 | 0.0023 | 0.0040 | 0.6455 | 0.0732 | 0.0357 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
vfu/trained_model
|
vfu
| 2023-09-25T00:03:08Z | 60 | 0 |
transformers
|
[
"transformers",
"pytorch",
"layoutlmv2",
"document-question-answering",
"generated_from_trainer",
"dataset:generator",
"base_model:microsoft/layoutlmv2-base-uncased",
"base_model:finetune:microsoft/layoutlmv2-base-uncased",
"license:cc-by-nc-sa-4.0",
"endpoints_compatible",
"region:us"
] |
document-question-answering
| 2023-09-24T22:25:20Z |
---
license: cc-by-nc-sa-4.0
base_model: microsoft/layoutlmv2-base-uncased
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: trained_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# trained_model
This model is a fine-tuned version of [microsoft/layoutlmv2-base-uncased](https://huggingface.co/microsoft/layoutlmv2-base-uncased) on the generator dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1
- Datasets 2.14.5
- Tokenizers 0.13.3
|
treei/llama-2-7b-keyword-ft-nochat
|
treei
| 2023-09-24T23:54:37Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-09-24T23:54:28Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: bfloat16
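The settings above correspond to a `transformers.BitsAndBytesConfig` along these lines (a sketch reconstructed from the listed values, not code taken from the original run):

```python
import torch
from transformers import BitsAndBytesConfig

# Reconstructed from the quantization config listed above (a sketch).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.bfloat16,
    llm_int8_threshold=6.0,
)
```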
### Framework versions
- PEFT 0.5.0
|
goodatinvesting/ppo-LunarLander-v2
|
goodatinvesting
| 2023-09-24T23:52:09Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-09-24T23:51:48Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 144.31 +/- 90.47
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption based on the usual `<algo>-<env>.zip` naming convention, not taken from this repository):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub; the filename is assumed, check the repo files.
checkpoint = load_from_hub(
    repo_id="goodatinvesting/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)
```
|
Arjer/mandy
|
Arjer
| 2023-09-24T23:49:02Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-09-21T00:27:06Z |
Sample proompt:
masterpiece, beautiful, hires, ultra-detailed, beautiful digital illustration, 1girl, cute, beautiful, perfect face,
mandy, (thick:1.2), chubby, angry, from behind, butt, strong, thick thighs, [short hair], bra, underwear,(looking back at viewer:1.1), < lora:mandy:0.70 >
(worst quality, low quality:1.3), (depth of field, blurry:1.2), (greyscale, monochrome:1.1), 3D face, nose, cropped, lowres, text, jpeg artifacts, signature, watermark, username, blurry, artist name, trademark, watermark, title, (tan, child, infant, toddlers, chibi, sd character:1.1), multiple view, reference sheet, (collar:1.2), (shirt:1.3)
Good results with the cartunafied_v3 model.
|
CyberHarem/sonoda_umi_lovelive
|
CyberHarem
| 2023-09-24T23:48:35Z | 0 | 0 | null |
[
"art",
"text-to-image",
"dataset:CyberHarem/sonoda_umi_lovelive",
"license:mit",
"region:us"
] |
text-to-image
| 2023-08-14T18:59:04Z |
---
license: mit
datasets:
- CyberHarem/sonoda_umi_lovelive
pipeline_tag: text-to-image
tags:
- art
---
# Lora of sonoda_umi_lovelive
This model was trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion); the automated training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs).
The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11).
After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora.
For example, if you want to use the model from step 3240, you need to download `3240/sonoda_umi_lovelive.pt` as the embedding and `3240/sonoda_umi_lovelive.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters.
**The best step we recommend is 3240**, with a score of 0.867. The trigger words are:
1. `sonoda_umi_lovelive`
2. `long_hair, blue_hair, yellow_eyes, blush, bangs, hair_between_eyes, smile, hair_ornament`
For the following groups, use of this model is not recommended, and we express our regret:
1. Individuals who cannot tolerate any deviation from the original character design, even in the slightest detail.
2. Individuals whose application scenarios demand high accuracy in recreating character outfits.
3. Individuals who cannot accept the randomness inherent in AI-generated images based on the Stable Diffusion algorithm.
4. Individuals who are uncomfortable with the fully automated process of training character models using LoRA, or who believe character models must be trained purely by hand out of respect for the characters.
5. Individuals who find the generated image content offensive to their values.
These are available steps:
| Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | pattern_5 | pattern_6 | pattern_7 | pattern_8 | pattern_9 | pattern_10 | pattern_11 | pattern_12 | pattern_13 | pattern_14 | pattern_15 | pattern_16 | pattern_17 | pattern_18 | pattern_19 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata |
|:---------|:----------|:---------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-----------------------------------------|:--------------------------------------------------|:-------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------|
| 8100 | 0.864 | [Download](8100/sonoda_umi_lovelive.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](8100/previews/bondage.png) |  |  |  | [<NSFW, click to see>](8100/previews/nude.png) | [<NSFW, click to see>](8100/previews/nude2.png) |  |  |
| 7560 | 0.848 | [Download](7560/sonoda_umi_lovelive.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](7560/previews/bondage.png) |  |  |  | [<NSFW, click to see>](7560/previews/nude.png) | [<NSFW, click to see>](7560/previews/nude2.png) |  |  |
| 7020 | 0.839 | [Download](7020/sonoda_umi_lovelive.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](7020/previews/bondage.png) |  |  |  | [<NSFW, click to see>](7020/previews/nude.png) | [<NSFW, click to see>](7020/previews/nude2.png) |  |  |
| 6480 | 0.846 | [Download](6480/sonoda_umi_lovelive.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](6480/previews/bondage.png) |  |  |  | [<NSFW, click to see>](6480/previews/nude.png) | [<NSFW, click to see>](6480/previews/nude2.png) |  |  |
| 5940 | 0.819 | [Download](5940/sonoda_umi_lovelive.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](5940/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5940/previews/nude.png) | [<NSFW, click to see>](5940/previews/nude2.png) |  |  |
| 5400 | 0.834 | [Download](5400/sonoda_umi_lovelive.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](5400/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5400/previews/nude.png) | [<NSFW, click to see>](5400/previews/nude2.png) |  |  |
| 4860 | 0.821 | [Download](4860/sonoda_umi_lovelive.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4860/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4860/previews/nude.png) | [<NSFW, click to see>](4860/previews/nude2.png) |  |  |
| 4320 | 0.859 | [Download](4320/sonoda_umi_lovelive.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4320/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4320/previews/nude.png) | [<NSFW, click to see>](4320/previews/nude2.png) |  |  |
| 3780 | 0.851 | [Download](3780/sonoda_umi_lovelive.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3780/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3780/previews/nude.png) | [<NSFW, click to see>](3780/previews/nude2.png) |  |  |
| **3240** | **0.867** | [**Download**](3240/sonoda_umi_lovelive.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3240/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3240/previews/nude.png) | [<NSFW, click to see>](3240/previews/nude2.png) |  |  |
| 2700 | 0.805 | [Download](2700/sonoda_umi_lovelive.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2700/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2700/previews/nude.png) | [<NSFW, click to see>](2700/previews/nude2.png) |  |  |
| 2160 | 0.806 | [Download](2160/sonoda_umi_lovelive.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2160/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2160/previews/nude.png) | [<NSFW, click to see>](2160/previews/nude2.png) |  |  |
| 1620 | 0.749 | [Download](1620/sonoda_umi_lovelive.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1620/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1620/previews/nude.png) | [<NSFW, click to see>](1620/previews/nude2.png) |  |  |
| 1080 | 0.737 | [Download](1080/sonoda_umi_lovelive.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1080/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1080/previews/nude.png) | [<NSFW, click to see>](1080/previews/nude2.png) |  |  |
| 540 | 0.771 | [Download](540/sonoda_umi_lovelive.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](540/previews/bondage.png) |  |  |  | [<NSFW, click to see>](540/previews/nude.png) | [<NSFW, click to see>](540/previews/nude2.png) |  |  |
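The recommended step is simply the highest-scoring checkpoint in the table above; as a quick sanity check (scores copied from the table):

```python
# Scores per training step, copied from the table above
scores = {
    8100: 0.864, 7560: 0.848, 7020: 0.839, 6480: 0.846, 5940: 0.819,
    5400: 0.834, 4860: 0.821, 4320: 0.859, 3780: 0.851, 3240: 0.867,
    2700: 0.805, 2160: 0.806, 1620: 0.749, 1080: 0.737, 540: 0.771,
}
# Pick the step whose checkpoint scored highest
best_step = max(scores, key=scores.get)
```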
|
felixquinihildebet/Pyramids-training
|
felixquinihildebet
| 2023-09-24T23:44:34Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] |
reinforcement-learning
| 2023-09-24T23:44:31Z |
---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: felixquinihildebet/Pyramids-training
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Meztli66/distilhubert-finetuned-gtzan
|
Meztli66
| 2023-09-24T23:33:57Z | 158 | 0 |
transformers
|
[
"transformers",
"pytorch",
"hubert",
"audio-classification",
"generated_from_trainer",
"dataset:marsyas/gtzan",
"base_model:ntu-spml/distilhubert",
"base_model:finetune:ntu-spml/distilhubert",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2023-09-24T20:09:39Z |
---
license: apache-2.0
base_model: ntu-spml/distilhubert
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
metrics:
- accuracy
model-index:
- name: distilhubert-finetuned-gtzan
results:
- task:
name: Audio Classification
type: audio-classification
dataset:
name: GTZAN
type: marsyas/gtzan
config: all
split: train
args: all
metrics:
- name: Accuracy
type: accuracy
value: 0.74
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilhubert-finetuned-gtzan
This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the GTZAN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8432
- Accuracy: 0.74
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
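The linear scheduler with a 0.1 warmup ratio over the 565 total training steps can be sketched as follows (the exact implementation inside `transformers` may differ in rounding details):

```python
def lr_at(step, total_steps=565, warmup_ratio=0.1, base_lr=5e-05):
    """Linear warmup for the first 10% of steps, then linear decay to zero."""
    warmup_steps = int(total_steps * warmup_ratio)  # 56 steps here
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    return base_lr * (total_steps - step) / (total_steps - warmup_steps)
```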
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.9055 | 1.0 | 113 | 1.7174 | 0.46 |
| 1.3089 | 2.0 | 226 | 1.2256 | 0.7 |
| 1.0414 | 3.0 | 339 | 1.0002 | 0.71 |
| 0.9251 | 4.0 | 452 | 0.9033 | 0.75 |
| 0.9292 | 5.0 | 565 | 0.8432 | 0.74 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
CyberHarem/miyao_miya_theidolmstermillionlive
|
CyberHarem
| 2023-09-24T23:29:53Z | 0 | 0 | null |
[
"art",
"text-to-image",
"dataset:CyberHarem/miyao_miya_theidolmstermillionlive",
"license:mit",
"region:us"
] |
text-to-image
| 2023-09-24T23:16:17Z |
---
license: mit
datasets:
- CyberHarem/miyao_miya_theidolmstermillionlive
pipeline_tag: text-to-image
tags:
- art
---
# Lora of miyao_miya_theidolmstermillionlive
This model was trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion); the automated training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs).
The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11).
After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora.
For example, if you want to use the model from step 7280, you need to download `7280/miyao_miya_theidolmstermillionlive.pt` as the embedding and `7280/miyao_miya_theidolmstermillionlive.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters.
**The best step we recommend is 7280**, with a score of 0.850. The trigger words are:
1. `miyao_miya_theidolmstermillionlive`
2. `long_hair, brown_hair, blush, smile, brown_eyes, bangs, thick_eyebrows, open_mouth, breasts`
For the following groups, use of this model is not recommended, and we express our regret:
1. Individuals who cannot tolerate any deviation from the original character design, even in the slightest detail.
2. Individuals whose application scenarios demand high accuracy in recreating character outfits.
3. Individuals who cannot accept the randomness inherent in AI-generated images based on the Stable Diffusion algorithm.
4. Individuals who are uncomfortable with the fully automated process of training character models using LoRA, or who believe character models must be trained purely by hand out of respect for the characters.
5. Individuals who find the generated image content offensive to their values.
These are available steps:
| Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | pattern_5 | pattern_6 | pattern_7 | pattern_8 | pattern_9 | pattern_10 | pattern_11 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata |
|:---------|:----------|:------------------------------------------------------------|:----------------------------------------------------|:-----------------------------------------------|:----------------------------------------------------|:----------------------------------------------------|:-----------------------------------------------|:----------------------------------------------------|:-----------------------------------------------|:----------------------------------------------------|:-----------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:--------------------------------------------------|:-------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------|
| 7800 | 0.765 | [Download](7800/miyao_miya_theidolmstermillionlive.zip) | [<NSFW, click to see>](7800/previews/pattern_1.png) |  | [<NSFW, click to see>](7800/previews/pattern_3.png) | [<NSFW, click to see>](7800/previews/pattern_4.png) |  | [<NSFW, click to see>](7800/previews/pattern_6.png) |  | [<NSFW, click to see>](7800/previews/pattern_8.png) |  |  |  | [<NSFW, click to see>](7800/previews/bikini.png) | [<NSFW, click to see>](7800/previews/bondage.png) |  |  |  | [<NSFW, click to see>](7800/previews/nude.png) | [<NSFW, click to see>](7800/previews/nude2.png) |  |  |
| **7280** | **0.850** | [**Download**](7280/miyao_miya_theidolmstermillionlive.zip) | [<NSFW, click to see>](7280/previews/pattern_1.png) |  | [<NSFW, click to see>](7280/previews/pattern_3.png) | [<NSFW, click to see>](7280/previews/pattern_4.png) |  | [<NSFW, click to see>](7280/previews/pattern_6.png) |  | [<NSFW, click to see>](7280/previews/pattern_8.png) |  |  |  | [<NSFW, click to see>](7280/previews/bikini.png) | [<NSFW, click to see>](7280/previews/bondage.png) |  |  |  | [<NSFW, click to see>](7280/previews/nude.png) | [<NSFW, click to see>](7280/previews/nude2.png) |  |  |
| 6760 | 0.771 | [Download](6760/miyao_miya_theidolmstermillionlive.zip) | [<NSFW, click to see>](6760/previews/pattern_1.png) |  | [<NSFW, click to see>](6760/previews/pattern_3.png) | [<NSFW, click to see>](6760/previews/pattern_4.png) |  | [<NSFW, click to see>](6760/previews/pattern_6.png) |  | [<NSFW, click to see>](6760/previews/pattern_8.png) |  |  |  | [<NSFW, click to see>](6760/previews/bikini.png) | [<NSFW, click to see>](6760/previews/bondage.png) |  |  |  | [<NSFW, click to see>](6760/previews/nude.png) | [<NSFW, click to see>](6760/previews/nude2.png) |  |  |
| 6240 | 0.803 | [Download](6240/miyao_miya_theidolmstermillionlive.zip) | [<NSFW, click to see>](6240/previews/pattern_1.png) |  | [<NSFW, click to see>](6240/previews/pattern_3.png) | [<NSFW, click to see>](6240/previews/pattern_4.png) |  | [<NSFW, click to see>](6240/previews/pattern_6.png) |  | [<NSFW, click to see>](6240/previews/pattern_8.png) |  |  |  | [<NSFW, click to see>](6240/previews/bikini.png) | [<NSFW, click to see>](6240/previews/bondage.png) |  |  |  | [<NSFW, click to see>](6240/previews/nude.png) | [<NSFW, click to see>](6240/previews/nude2.png) |  |  |
| 5720 | 0.834 | [Download](5720/miyao_miya_theidolmstermillionlive.zip) | [<NSFW, click to see>](5720/previews/pattern_1.png) |  | [<NSFW, click to see>](5720/previews/pattern_3.png) | [<NSFW, click to see>](5720/previews/pattern_4.png) |  | [<NSFW, click to see>](5720/previews/pattern_6.png) |  | [<NSFW, click to see>](5720/previews/pattern_8.png) |  |  |  | [<NSFW, click to see>](5720/previews/bikini.png) | [<NSFW, click to see>](5720/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5720/previews/nude.png) | [<NSFW, click to see>](5720/previews/nude2.png) |  |  |
| 5200 | 0.815 | [Download](5200/miyao_miya_theidolmstermillionlive.zip) | [<NSFW, click to see>](5200/previews/pattern_1.png) |  | [<NSFW, click to see>](5200/previews/pattern_3.png) | [<NSFW, click to see>](5200/previews/pattern_4.png) |  | [<NSFW, click to see>](5200/previews/pattern_6.png) |  | [<NSFW, click to see>](5200/previews/pattern_8.png) |  |  |  | [<NSFW, click to see>](5200/previews/bikini.png) | [<NSFW, click to see>](5200/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5200/previews/nude.png) | [<NSFW, click to see>](5200/previews/nude2.png) |  |  |
| 4680 | 0.826 | [Download](4680/miyao_miya_theidolmstermillionlive.zip) | [<NSFW, click to see>](4680/previews/pattern_1.png) |  | [<NSFW, click to see>](4680/previews/pattern_3.png) | [<NSFW, click to see>](4680/previews/pattern_4.png) |  | [<NSFW, click to see>](4680/previews/pattern_6.png) |  | [<NSFW, click to see>](4680/previews/pattern_8.png) |  |  |  | [<NSFW, click to see>](4680/previews/bikini.png) | [<NSFW, click to see>](4680/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4680/previews/nude.png) | [<NSFW, click to see>](4680/previews/nude2.png) |  |  |
| 4160 | 0.785 | [Download](4160/miyao_miya_theidolmstermillionlive.zip) | [<NSFW, click to see>](4160/previews/pattern_1.png) |  | [<NSFW, click to see>](4160/previews/pattern_3.png) | [<NSFW, click to see>](4160/previews/pattern_4.png) |  | [<NSFW, click to see>](4160/previews/pattern_6.png) |  | [<NSFW, click to see>](4160/previews/pattern_8.png) |  |  |  | [<NSFW, click to see>](4160/previews/bikini.png) | [<NSFW, click to see>](4160/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4160/previews/nude.png) | [<NSFW, click to see>](4160/previews/nude2.png) |  |  |
| 3640 | 0.748 | [Download](3640/miyao_miya_theidolmstermillionlive.zip) | [<NSFW, click to see>](3640/previews/pattern_1.png) |  | [<NSFW, click to see>](3640/previews/pattern_3.png) | [<NSFW, click to see>](3640/previews/pattern_4.png) |  | [<NSFW, click to see>](3640/previews/pattern_6.png) |  | [<NSFW, click to see>](3640/previews/pattern_8.png) |  |  |  | [<NSFW, click to see>](3640/previews/bikini.png) | [<NSFW, click to see>](3640/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3640/previews/nude.png) | [<NSFW, click to see>](3640/previews/nude2.png) |  |  |
| 3120 | 0.683 | [Download](3120/miyao_miya_theidolmstermillionlive.zip) | [<NSFW, click to see>](3120/previews/pattern_1.png) |  | [<NSFW, click to see>](3120/previews/pattern_3.png) | [<NSFW, click to see>](3120/previews/pattern_4.png) |  | [<NSFW, click to see>](3120/previews/pattern_6.png) |  | [<NSFW, click to see>](3120/previews/pattern_8.png) |  |  |  | [<NSFW, click to see>](3120/previews/bikini.png) | [<NSFW, click to see>](3120/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3120/previews/nude.png) | [<NSFW, click to see>](3120/previews/nude2.png) |  |  |
| 2600 | 0.700 | [Download](2600/miyao_miya_theidolmstermillionlive.zip) | [<NSFW, click to see>](2600/previews/pattern_1.png) |  | [<NSFW, click to see>](2600/previews/pattern_3.png) | [<NSFW, click to see>](2600/previews/pattern_4.png) |  | [<NSFW, click to see>](2600/previews/pattern_6.png) |  | [<NSFW, click to see>](2600/previews/pattern_8.png) |  |  |  | [<NSFW, click to see>](2600/previews/bikini.png) | [<NSFW, click to see>](2600/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2600/previews/nude.png) | [<NSFW, click to see>](2600/previews/nude2.png) |  |  |
| 2080 | 0.654 | [Download](2080/miyao_miya_theidolmstermillionlive.zip) | [<NSFW, click to see>](2080/previews/pattern_1.png) |  | [<NSFW, click to see>](2080/previews/pattern_3.png) | [<NSFW, click to see>](2080/previews/pattern_4.png) |  | [<NSFW, click to see>](2080/previews/pattern_6.png) |  | [<NSFW, click to see>](2080/previews/pattern_8.png) |  |  |  | [<NSFW, click to see>](2080/previews/bikini.png) | [<NSFW, click to see>](2080/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2080/previews/nude.png) | [<NSFW, click to see>](2080/previews/nude2.png) |  |  |
| 1560 | 0.722 | [Download](1560/miyao_miya_theidolmstermillionlive.zip) | [<NSFW, click to see>](1560/previews/pattern_1.png) |  | [<NSFW, click to see>](1560/previews/pattern_3.png) | [<NSFW, click to see>](1560/previews/pattern_4.png) |  | [<NSFW, click to see>](1560/previews/pattern_6.png) |  | [<NSFW, click to see>](1560/previews/pattern_8.png) |  |  |  | [<NSFW, click to see>](1560/previews/bikini.png) | [<NSFW, click to see>](1560/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1560/previews/nude.png) | [<NSFW, click to see>](1560/previews/nude2.png) |  |  |
| 1040 | 0.609 | [Download](1040/miyao_miya_theidolmstermillionlive.zip) | [<NSFW, click to see>](1040/previews/pattern_1.png) |  | [<NSFW, click to see>](1040/previews/pattern_3.png) | [<NSFW, click to see>](1040/previews/pattern_4.png) |  | [<NSFW, click to see>](1040/previews/pattern_6.png) |  | [<NSFW, click to see>](1040/previews/pattern_8.png) |  |  |  | [<NSFW, click to see>](1040/previews/bikini.png) | [<NSFW, click to see>](1040/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1040/previews/nude.png) | [<NSFW, click to see>](1040/previews/nude2.png) |  |  |
| 520 | 0.476 | [Download](520/miyao_miya_theidolmstermillionlive.zip) | [<NSFW, click to see>](520/previews/pattern_1.png) |  | [<NSFW, click to see>](520/previews/pattern_3.png) | [<NSFW, click to see>](520/previews/pattern_4.png) |  | [<NSFW, click to see>](520/previews/pattern_6.png) |  | [<NSFW, click to see>](520/previews/pattern_8.png) |  |  |  | [<NSFW, click to see>](520/previews/bikini.png) | [<NSFW, click to see>](520/previews/bondage.png) |  |  |  | [<NSFW, click to see>](520/previews/nude.png) | [<NSFW, click to see>](520/previews/nude2.png) |  |  |
|
dracero/Reinforce-ceartPole
|
dracero
| 2023-09-24T23:08:07Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-09-24T23:07:58Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-ceartPole
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
sudo-ai/zero123plus-v1.1
|
sudo-ai
| 2023-09-24T22:43:22Z | 5,245 | 36 |
diffusers
|
[
"diffusers",
"art",
"image-to-image",
"dataset:allenai/objaverse",
"license:openrail",
"diffusers:Zero123PlusPipeline",
"region:us"
] |
image-to-image
| 2023-09-23T03:55:07Z |
---
license: openrail
datasets:
- allenai/objaverse
library_name: diffusers
pipeline_tag: image-to-image
tags:
- art
---
The recommended version of `diffusers` is `0.20.2`, with `torch` 2.
Usage Example:
```python
import torch
import requests
from PIL import Image
from diffusers import DiffusionPipeline, EulerAncestralDiscreteScheduler
# Load the pipeline
pipeline = DiffusionPipeline.from_pretrained(
"sudo-ai/zero123plus-v1.1", custom_pipeline="sudo-ai/zero123plus-pipeline",
torch_dtype=torch.float16
)
# Feel free to tune the scheduler
pipeline.scheduler = EulerAncestralDiscreteScheduler.from_config(
pipeline.scheduler.config, timestep_spacing='trailing'
)
pipeline.to('cuda:0')
# Run the pipeline
cond = Image.open(requests.get("https://d.skis.ltd/nrp/sample-data/lysol.png", stream=True).raw)
result = pipeline(cond).images[0]
result.show()
result.save("output.png")
```
|
BBBBirdIsTheWord/a2c-PandaPickAndPlace-v3
|
BBBBirdIsTheWord
| 2023-09-24T22:42:35Z | 2 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaPickAndPlace-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-09-24T22:37:33Z |
---
library_name: stable-baselines3
tags:
- PandaPickAndPlace-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaPickAndPlace-v3
type: PandaPickAndPlace-v3
metrics:
- type: mean_reward
value: -45.70 +/- 12.90
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaPickAndPlace-v3**
This is a trained model of an **A2C** agent playing **PandaPickAndPlace-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; adjust it to match the files in this repo):
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

checkpoint = load_from_hub("BBBBirdIsTheWord/a2c-PandaPickAndPlace-v3", "a2c-PandaPickAndPlace-v3.zip")
model = A2C.load(checkpoint)
```
|
CyberHarem/toujou_nozomi_lovelive
|
CyberHarem
| 2023-09-24T22:39:41Z | 0 | 0 | null |
[
"art",
"text-to-image",
"dataset:CyberHarem/toujou_nozomi_lovelive",
"license:mit",
"region:us"
] |
text-to-image
| 2023-08-14T18:24:48Z |
---
license: mit
datasets:
- CyberHarem/toujou_nozomi_lovelive
pipeline_tag: text-to-image
tags:
- art
---
# Lora of toujou_nozomi_lovelive
This model was trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion); the automated training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs).
The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11).
After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora.
For example, if you want to use the model from step 6240, you need to download `6240/toujou_nozomi_lovelive.pt` as the embedding and `6240/toujou_nozomi_lovelive.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters.
**The best step we recommend is 6240**, with a score of 0.975. The trigger words are:
1. `toujou_nozomi_lovelive`
2. `purple_hair, long_hair, green_eyes, breasts, smile, blush, twintails, large_breasts, hair_ornament, low_twintails`
For the following groups, use of this model is not recommended, and we express our regret:
1. Individuals who cannot tolerate any deviation from the original character design, even in the slightest detail.
2. Individuals whose application scenarios demand high accuracy in recreating character outfits.
3. Individuals who cannot accept the randomness inherent in AI-generated images based on the Stable Diffusion algorithm.
4. Individuals who are uncomfortable with the fully automated process of training character models using LoRA, or who believe character models must be trained purely by hand out of respect for the characters.
5. Individuals who find the generated image content offensive to their values.
These are available steps:
| Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | pattern_5 | pattern_6 | pattern_7 | pattern_8 | pattern_9 | pattern_10 | pattern_11 | pattern_12 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata |
|:---------|:----------|:------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-----------------------------------------|:--------------------------------------------------|:-------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------|
| 7800 | 0.925 | [Download](7800/toujou_nozomi_lovelive.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](7800/previews/bondage.png) |  |  |  | [<NSFW, click to see>](7800/previews/nude.png) | [<NSFW, click to see>](7800/previews/nude2.png) |  |  |
| 7280 | 0.968 | [Download](7280/toujou_nozomi_lovelive.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](7280/previews/bondage.png) |  |  |  | [<NSFW, click to see>](7280/previews/nude.png) | [<NSFW, click to see>](7280/previews/nude2.png) |  |  |
| 6760 | 0.958 | [Download](6760/toujou_nozomi_lovelive.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](6760/previews/bondage.png) |  |  |  | [<NSFW, click to see>](6760/previews/nude.png) | [<NSFW, click to see>](6760/previews/nude2.png) |  |  |
| **6240** | **0.975** | [**Download**](6240/toujou_nozomi_lovelive.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](6240/previews/bondage.png) |  |  |  | [<NSFW, click to see>](6240/previews/nude.png) | [<NSFW, click to see>](6240/previews/nude2.png) |  |  |
| 5720 | 0.946 | [Download](5720/toujou_nozomi_lovelive.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](5720/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5720/previews/nude.png) | [<NSFW, click to see>](5720/previews/nude2.png) |  |  |
| 5200 | 0.963 | [Download](5200/toujou_nozomi_lovelive.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](5200/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5200/previews/nude.png) | [<NSFW, click to see>](5200/previews/nude2.png) |  |  |
| 4680 | 0.941 | [Download](4680/toujou_nozomi_lovelive.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4680/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4680/previews/nude.png) | [<NSFW, click to see>](4680/previews/nude2.png) |  |  |
| 4160 | 0.949 | [Download](4160/toujou_nozomi_lovelive.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4160/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4160/previews/nude.png) | [<NSFW, click to see>](4160/previews/nude2.png) |  |  |
| 3640 | 0.926 | [Download](3640/toujou_nozomi_lovelive.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3640/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3640/previews/nude.png) | [<NSFW, click to see>](3640/previews/nude2.png) |  |  |
| 3120 | 0.930 | [Download](3120/toujou_nozomi_lovelive.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3120/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3120/previews/nude.png) | [<NSFW, click to see>](3120/previews/nude2.png) |  |  |
| 2600 | 0.926 | [Download](2600/toujou_nozomi_lovelive.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2600/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2600/previews/nude.png) | [<NSFW, click to see>](2600/previews/nude2.png) |  |  |
| 2080 | 0.918 | [Download](2080/toujou_nozomi_lovelive.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2080/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2080/previews/nude.png) | [<NSFW, click to see>](2080/previews/nude2.png) |  |  |
| 1560 | 0.922 | [Download](1560/toujou_nozomi_lovelive.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1560/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1560/previews/nude.png) | [<NSFW, click to see>](1560/previews/nude2.png) |  |  |
| 1040 | 0.921 | [Download](1040/toujou_nozomi_lovelive.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1040/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1040/previews/nude.png) | [<NSFW, click to see>](1040/previews/nude2.png) |  |  |
| 520 | 0.771 | [Download](520/toujou_nozomi_lovelive.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](520/previews/bondage.png) |  |  |  | [<NSFW, click to see>](520/previews/nude.png) | [<NSFW, click to see>](520/previews/nude2.png) |  |  |
|
DriveMyScream/News_Summarization_Model_hf
|
DriveMyScream
| 2023-09-24T22:39:02Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"t5",
"text2text-generation",
"generated_from_keras_callback",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-09-24T22:35:38Z |
---
license: apache-2.0
base_model: t5-small
tags:
- generated_from_keras_callback
model-index:
- name: News_Summarization_Model_hf
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# News_Summarization_Model_hf
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.6109
- Validation Loss: 1.3430
- Epoch: 4
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': 2e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 2.2412 | 1.7102 | 0 |
| 1.8711 | 1.5629 | 1 |
| 1.7493 | 1.4707 | 2 |
| 1.6688 | 1.3819 | 3 |
| 1.6109 | 1.3430 | 4 |
### Framework versions
- Transformers 4.33.2
- TensorFlow 2.13.0
- Datasets 2.14.5
- Tokenizers 0.13.3
|
LarryAIDraw/gremory_anything4_5
|
LarryAIDraw
| 2023-09-24T22:36:18Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-09-24T22:34:57Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/151059?modelVersionId=168913
|
LarryAIDraw/Kiriko-04
|
LarryAIDraw
| 2023-09-24T22:36:06Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-09-24T22:34:36Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/150850/kiriko-yukoku-idolmaster
|
LarryAIDraw/shining_v1
|
LarryAIDraw
| 2023-09-24T22:34:17Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-09-24T22:27:54Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/151007/shining-arknights
|
DriveMyScream/Blenderbot_ChatBot
|
DriveMyScream
| 2023-09-24T22:34:12Z | 60 | 0 |
transformers
|
[
"transformers",
"tf",
"blenderbot",
"text2text-generation",
"generated_from_keras_callback",
"base_model:facebook/blenderbot-400M-distill",
"base_model:finetune:facebook/blenderbot-400M-distill",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-09-24T22:27:54Z |
---
license: apache-2.0
base_model: facebook/blenderbot-400M-distill
tags:
- generated_from_keras_callback
model-index:
- name: Blenderbot_ChatBot
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Blenderbot_ChatBot
This model is a fine-tuned version of [facebook/blenderbot-400M-distill](https://huggingface.co/facebook/blenderbot-400M-distill) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 3.4332
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'transformers.optimization_tf', 'class_name': 'WarmUp', 'config': {'initial_learning_rate': 6e-05, 'decay_schedule_fn': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 6e-05, 'decay_steps': 2749, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}, 'registered_name': 'WarmUp'}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Epoch |
|:----------:|:-----:|
| 3.4332 | 0 |
### Framework versions
- Transformers 4.33.2
- TensorFlow 2.13.0
- Datasets 2.14.5
- Tokenizers 0.13.3
|
LarryAIDraw/qingS-v1
|
LarryAIDraw
| 2023-09-24T22:33:45Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-09-24T22:26:58Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/150916/or-naruse-haru-or-or-snowbreak-containment-zone-or-or-qing
|
DelusionalDreams/vit-base-patch16-224-in21k-finetuned-lora-food101
|
DelusionalDreams
| 2023-09-24T22:32:45Z | 8 | 0 |
peft
|
[
"peft",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:adapter:google/vit-base-patch16-224-in21k",
"region:us"
] | null | 2023-08-27T00:38:32Z |
---
library_name: peft
base_model: google/vit-base-patch16-224-in21k
---
## Training procedure
### Framework versions
- PEFT 0.5.0
|
LarryAIDraw/illustrious-tea
|
LarryAIDraw
| 2023-09-24T22:32:45Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-09-24T22:26:34Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/150730/illustrious-azur-lane-never-ending-tea-party
|
LarryAIDraw/rio-v1-nai-8ep-resize
|
LarryAIDraw
| 2023-09-24T22:31:53Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-09-24T22:25:04Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/150461/characterrio-blue-archive
|
CyberHarem/baba_konomi_theidolmstermillionlive
|
CyberHarem
| 2023-09-24T22:23:51Z | 0 | 0 | null |
[
"art",
"text-to-image",
"dataset:CyberHarem/baba_konomi_theidolmstermillionlive",
"license:mit",
"region:us"
] |
text-to-image
| 2023-09-24T22:11:10Z |
---
license: mit
datasets:
- CyberHarem/baba_konomi_theidolmstermillionlive
pipeline_tag: text-to-image
tags:
- art
---
# Lora of baba_konomi_theidolmstermillionlive
This model is trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion), and the auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs).
The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11).
After downloading the pt and safetensors files for the specified step, you need to use them together: the pt file is loaded as an embedding, while the safetensors file is loaded as a LoRA.
For example, to use the model from step 5400, download `5400/baba_konomi_theidolmstermillionlive.pt` as the embedding and `5400/baba_konomi_theidolmstermillionlive.safetensors` as the LoRA. Used together, the two files generate images of the desired character.
**The best step we recommend is 5400**, with a score of 0.734. The trigger words are:
1. `baba_konomi_theidolmstermillionlive`
2. `brown_hair, braid, long_hair, blush, single_braid, aqua_eyes, smile, breasts, open_mouth`
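For illustration only, the two-file loading described above can be sketched with the `diffusers` library. This is a hypothetical sketch, not the official workflow (the card targets HCP-Diffusion); the `diffusers` entry points and the local file paths are assumptions:

```python
def load_character_pipeline(
        base="Meina/MeinaMix_V11",
        embedding="5400/baba_konomi_theidolmstermillionlive.pt",
        lora="5400/baba_konomi_theidolmstermillionlive.safetensors"):
    """Sketch: load the pt file as a textual-inversion embedding and the
    safetensors file as a LoRA on top of the preview base model."""
    from diffusers import StableDiffusionPipeline  # deferred import; assumed installed
    pipe = StableDiffusionPipeline.from_pretrained(base)
    pipe.load_textual_inversion(embedding, token="baba_konomi_theidolmstermillionlive")
    pipe.load_lora_weights(lora)
    return pipe
```

Calling the returned pipeline with the trigger words in the prompt should then activate the character.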
This model is not recommended for the following groups, and we express our regret:
1. Individuals who cannot tolerate any deviation from the original character design, even in the slightest detail.
2. Individuals whose application scenarios demand high accuracy in recreating character outfits.
3. Individuals who cannot accept the potential randomness of AI-generated images based on the Stable Diffusion algorithm.
4. Individuals who are not comfortable with the fully automated process of training character models with LoRA, or who believe that character models must be trained purely by manual operation to avoid disrespecting the characters.
5. Individuals who find the generated image content offensive to their values.
These are available steps:
| Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | pattern_5 | pattern_6 | pattern_7 | pattern_8 | pattern_9 | pattern_10 | pattern_11 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata |
|:---------|:----------|:-------------------------------------------------------------|:-----------------------------------------------|:----------------------------------------------------|:-----------------------------------------------|:----------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:----------------------------------------------------|:-----------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:--------------------------------------------------|:-------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------|
| 8100 | 0.720 | [Download](8100/baba_konomi_theidolmstermillionlive.zip) |  | [<NSFW, click to see>](8100/previews/pattern_2.png) |  | [<NSFW, click to see>](8100/previews/pattern_4.png) |  |  |  | [<NSFW, click to see>](8100/previews/pattern_8.png) |  |  |  | [<NSFW, click to see>](8100/previews/bikini.png) | [<NSFW, click to see>](8100/previews/bondage.png) |  |  |  | [<NSFW, click to see>](8100/previews/nude.png) | [<NSFW, click to see>](8100/previews/nude2.png) |  |  |
| 7560 | 0.655 | [Download](7560/baba_konomi_theidolmstermillionlive.zip) |  | [<NSFW, click to see>](7560/previews/pattern_2.png) |  | [<NSFW, click to see>](7560/previews/pattern_4.png) |  |  |  | [<NSFW, click to see>](7560/previews/pattern_8.png) |  |  |  | [<NSFW, click to see>](7560/previews/bikini.png) | [<NSFW, click to see>](7560/previews/bondage.png) |  |  |  | [<NSFW, click to see>](7560/previews/nude.png) | [<NSFW, click to see>](7560/previews/nude2.png) |  |  |
| 7020 | 0.698 | [Download](7020/baba_konomi_theidolmstermillionlive.zip) |  | [<NSFW, click to see>](7020/previews/pattern_2.png) |  | [<NSFW, click to see>](7020/previews/pattern_4.png) |  |  |  | [<NSFW, click to see>](7020/previews/pattern_8.png) |  |  |  | [<NSFW, click to see>](7020/previews/bikini.png) | [<NSFW, click to see>](7020/previews/bondage.png) |  |  |  | [<NSFW, click to see>](7020/previews/nude.png) | [<NSFW, click to see>](7020/previews/nude2.png) |  |  |
| 6480 | 0.687 | [Download](6480/baba_konomi_theidolmstermillionlive.zip) |  | [<NSFW, click to see>](6480/previews/pattern_2.png) |  | [<NSFW, click to see>](6480/previews/pattern_4.png) |  |  |  | [<NSFW, click to see>](6480/previews/pattern_8.png) |  |  |  | [<NSFW, click to see>](6480/previews/bikini.png) | [<NSFW, click to see>](6480/previews/bondage.png) |  |  |  | [<NSFW, click to see>](6480/previews/nude.png) | [<NSFW, click to see>](6480/previews/nude2.png) |  |  |
| 5940 | 0.711 | [Download](5940/baba_konomi_theidolmstermillionlive.zip) |  | [<NSFW, click to see>](5940/previews/pattern_2.png) |  | [<NSFW, click to see>](5940/previews/pattern_4.png) |  |  |  | [<NSFW, click to see>](5940/previews/pattern_8.png) |  |  |  | [<NSFW, click to see>](5940/previews/bikini.png) | [<NSFW, click to see>](5940/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5940/previews/nude.png) | [<NSFW, click to see>](5940/previews/nude2.png) |  |  |
| **5400** | **0.734** | [**Download**](5400/baba_konomi_theidolmstermillionlive.zip) |  | [<NSFW, click to see>](5400/previews/pattern_2.png) |  | [<NSFW, click to see>](5400/previews/pattern_4.png) |  |  |  | [<NSFW, click to see>](5400/previews/pattern_8.png) |  |  |  | [<NSFW, click to see>](5400/previews/bikini.png) | [<NSFW, click to see>](5400/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5400/previews/nude.png) | [<NSFW, click to see>](5400/previews/nude2.png) |  |  |
| 4860 | 0.720 | [Download](4860/baba_konomi_theidolmstermillionlive.zip) |  | [<NSFW, click to see>](4860/previews/pattern_2.png) |  | [<NSFW, click to see>](4860/previews/pattern_4.png) |  |  |  | [<NSFW, click to see>](4860/previews/pattern_8.png) |  |  |  | [<NSFW, click to see>](4860/previews/bikini.png) | [<NSFW, click to see>](4860/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4860/previews/nude.png) | [<NSFW, click to see>](4860/previews/nude2.png) |  |  |
| 4320 | 0.665 | [Download](4320/baba_konomi_theidolmstermillionlive.zip) |  | [<NSFW, click to see>](4320/previews/pattern_2.png) |  | [<NSFW, click to see>](4320/previews/pattern_4.png) |  |  |  | [<NSFW, click to see>](4320/previews/pattern_8.png) |  |  |  | [<NSFW, click to see>](4320/previews/bikini.png) | [<NSFW, click to see>](4320/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4320/previews/nude.png) | [<NSFW, click to see>](4320/previews/nude2.png) |  |  |
| 3780 | 0.724 | [Download](3780/baba_konomi_theidolmstermillionlive.zip) |  | [<NSFW, click to see>](3780/previews/pattern_2.png) |  | [<NSFW, click to see>](3780/previews/pattern_4.png) |  |  |  | [<NSFW, click to see>](3780/previews/pattern_8.png) |  |  |  | [<NSFW, click to see>](3780/previews/bikini.png) | [<NSFW, click to see>](3780/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3780/previews/nude.png) | [<NSFW, click to see>](3780/previews/nude2.png) |  |  |
| 3240 | 0.678 | [Download](3240/baba_konomi_theidolmstermillionlive.zip) |  | [<NSFW, click to see>](3240/previews/pattern_2.png) |  | [<NSFW, click to see>](3240/previews/pattern_4.png) |  |  |  | [<NSFW, click to see>](3240/previews/pattern_8.png) |  |  |  | [<NSFW, click to see>](3240/previews/bikini.png) | [<NSFW, click to see>](3240/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3240/previews/nude.png) | [<NSFW, click to see>](3240/previews/nude2.png) |  |  |
| 2700 | 0.681 | [Download](2700/baba_konomi_theidolmstermillionlive.zip) |  | [<NSFW, click to see>](2700/previews/pattern_2.png) |  | [<NSFW, click to see>](2700/previews/pattern_4.png) |  |  |  | [<NSFW, click to see>](2700/previews/pattern_8.png) |  |  |  | [<NSFW, click to see>](2700/previews/bikini.png) | [<NSFW, click to see>](2700/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2700/previews/nude.png) | [<NSFW, click to see>](2700/previews/nude2.png) |  |  |
| 2160 | 0.617 | [Download](2160/baba_konomi_theidolmstermillionlive.zip) |  | [<NSFW, click to see>](2160/previews/pattern_2.png) |  | [<NSFW, click to see>](2160/previews/pattern_4.png) |  |  |  | [<NSFW, click to see>](2160/previews/pattern_8.png) |  |  |  | [<NSFW, click to see>](2160/previews/bikini.png) | [<NSFW, click to see>](2160/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2160/previews/nude.png) | [<NSFW, click to see>](2160/previews/nude2.png) |  |  |
| 1620 | 0.654 | [Download](1620/baba_konomi_theidolmstermillionlive.zip) |  | [<NSFW, click to see>](1620/previews/pattern_2.png) |  | [<NSFW, click to see>](1620/previews/pattern_4.png) |  |  |  | [<NSFW, click to see>](1620/previews/pattern_8.png) |  |  |  | [<NSFW, click to see>](1620/previews/bikini.png) | [<NSFW, click to see>](1620/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1620/previews/nude.png) | [<NSFW, click to see>](1620/previews/nude2.png) |  |  |
| 1080 | 0.600 | [Download](1080/baba_konomi_theidolmstermillionlive.zip) |  | [<NSFW, click to see>](1080/previews/pattern_2.png) |  | [<NSFW, click to see>](1080/previews/pattern_4.png) |  |  |  | [<NSFW, click to see>](1080/previews/pattern_8.png) |  |  |  | [<NSFW, click to see>](1080/previews/bikini.png) | [<NSFW, click to see>](1080/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1080/previews/nude.png) | [<NSFW, click to see>](1080/previews/nude2.png) |  |  |
| 540 | 0.502 | [Download](540/baba_konomi_theidolmstermillionlive.zip) |  | [<NSFW, click to see>](540/previews/pattern_2.png) |  | [<NSFW, click to see>](540/previews/pattern_4.png) |  |  |  | [<NSFW, click to see>](540/previews/pattern_8.png) |  |  |  | [<NSFW, click to see>](540/previews/bikini.png) | [<NSFW, click to see>](540/previews/bondage.png) |  |  |  | [<NSFW, click to see>](540/previews/nude.png) | [<NSFW, click to see>](540/previews/nude2.png) |  |  |
|
JeremiahZ/bert-base-uncased-sst2
|
JeremiahZ
| 2023-09-24T22:18:46Z | 278 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-06-21T14:48:54Z |
---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
base_model: bert-base-uncased
model-index:
- name: bert-base-uncased-sst2
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: GLUE SST2
type: glue
args: sst2
metrics:
- type: accuracy
value: 0.9323394495412844
name: Accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-sst2
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the GLUE SST2 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2478
- Accuracy: 0.9323
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
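For reference, these settings map roughly onto a `transformers.TrainingArguments` object. This is a hedged sketch (the output directory name is an assumption, and the import is deferred so the sketch stands alone):

```python
def make_training_args():
    """Reconstruct the hyperparameters listed above as TrainingArguments (sketch)."""
    from transformers import TrainingArguments  # deferred import; assumed installed
    return TrainingArguments(
        output_dir="bert-base-uncased-sst2",  # assumed name
        learning_rate=2e-5,
        per_device_train_batch_size=32,
        per_device_eval_batch_size=8,
        seed=42,
        lr_scheduler_type="linear",
        num_train_epochs=3.0,
    )
```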
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1668 | 1.0 | 2105 | 0.2513 | 0.9174 |
| 0.1119 | 2.0 | 4210 | 0.2478 | 0.9323 |
| 0.0699 | 3.0 | 6315 | 0.2764 | 0.9266 |
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
JeremiahZ/roberta-base-mrpc
|
JeremiahZ
| 2023-09-24T22:17:46Z | 121 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"base_model:FacebookAI/roberta-base",
"base_model:finetune:FacebookAI/roberta-base",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-06-13T13:38:44Z |
---
language:
- en
license: mit
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
base_model: roberta-base
model-index:
- name: roberta-base-mrpc
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: GLUE MRPC
type: glue
args: mrpc
metrics:
- type: accuracy
value: 0.9019607843137255
name: Accuracy
- type: f1
value: 0.9295774647887324
name: F1
- task:
type: natural-language-inference
name: Natural Language Inference
dataset:
name: glue
type: glue
config: mrpc
split: validation
metrics:
- type: accuracy
value: 0.9019607843137255
name: Accuracy
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiOTgxMmY3ZTkyZmYyZTJhZjQzNzkxYWRhMzRkNjQ4MDU3NmRhNzJmNDUwMmI5NWQyYTQ1ODRmMGVhOGI3NzMxZCIsInZlcnNpb24iOjF9.E6AhJwh_S4LfzhJjvlUzGWDmJYzxwbzL0IKqIIiNhFGg-_N5G9_VJAgqiQz-6i9xGHB2fJM-G5XinjHRk4SeBA
- type: precision
value: 0.9134948096885813
name: Precision
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiY2NmZThjNDI0YThmMzE4MjdhNjM3OTFmYzAwNzY4ZTM4ZDc4ZDA3NTYzYWRhNTdlNWMyZWI1NTMwZmFhNzQ5NyIsInZlcnNpb24iOjF9.nOkbqzXVD3r9LrIePn7o9Ny8_GiPoSBskCx3ey3Hrexrx00Gj6B9wkVvc8EcV5bAsBTeAJSeqO7ncS_-WJjlCQ
- type: recall
value: 0.946236559139785
name: Recall
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNzA2NDgzYTkzMTY4ZDQxYTdlZmM2ODY4YzM4N2E0ODk0YzRkNDI3YTFhMGIwNDZhNTI0MmIyNGU0YmFlMzRjYyIsInZlcnNpb24iOjF9.jNL0IQk6XnUd6zFfHwTSL41Ax35OdoE8xQA-2PqEFs9UtT2O9fo6cZyXDln6QPMGHOlwNgPp_PX6mLrmDHN6Cw
- type: auc
value: 0.9536411880747964
name: AUC
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYmE0ZWZlNGFkMzdhNTdjZjY0NDkzNDZhOTJmY2Q1MWU4MTc3NGMwYmRjNTlkMTZjOTBiNjIwOTUzZWZhZTcwNSIsInZlcnNpb24iOjF9.ZVekwshvwAi8K6gYJmKEDk8riyiOqDhsfzbSxXa-AWKvREksbNtsDo_u6iOEYImGLbcEFfgesDE-cBnEsmMdAg
- type: f1
value: 0.9295774647887324
name: F1
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMDQwMmE1Y2FhMGE4M2Q5YjU3NTAyZTljZWQ5ODRkMGEyZmI4M2FhNDJjYjlkMzllMzU5NDQ1ZWI2YjNiNmM0OCIsInZlcnNpb24iOjF9.a2jDnaSZhCJ_3f1rBJ8mXfyLCRR6Y9tYb_Hayi00NPWrejDML8Bc-LoobxlPdbd8x8LVJ2vOWhbH5LP4J9kOBg
- type: loss
value: 0.48942330479621887
name: loss
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiODFkMWQ5NTQ0ODMwNjQ2MzcyODA1ODlhZGUzNTg4NjE2M2U5MmIzYjQ3NzgxNTQyZDkyMGNiM2ZhYzc4ZGY0MSIsInZlcnNpb24iOjF9.K6fAIi21ZNtOqKS5c9jlO7kXISNHb0DD4pzdgLsESVjjOYxqS4C9f_OBJjIV-KtuwQGbi3yNC5Y4jTWk2HvNCQ
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-mrpc
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the GLUE MRPC dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4898
- Accuracy: 0.9020
- F1: 0.9296
- Combined Score: 0.9158
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.06
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
znacer/dqn-SpaceInvadersNoFrameskip-v4
|
znacer
| 2023-09-24T22:17:14Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-09-24T22:16:44Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 329.00 +/- 157.97
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```bash
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga znacer -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), you can run the following from anywhere:
```bash
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga znacer -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```bash
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga znacer
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 100000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
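The zoo hyperparameters above translate roughly into a direct Stable-Baselines3 constructor call. This is a hedged sketch, not how the zoo itself builds the agent (the import is deferred so the sketch stands alone; the env is assumed to be an Atari-wrapped, frame-stacked environment):

```python
def make_dqn(env):
    """Sketch: build a DQN agent with the zoo hyperparameters listed above."""
    from stable_baselines3 import DQN  # deferred import; assumed installed
    return DQN(
        "CnnPolicy", env,
        batch_size=32,
        buffer_size=100_000,
        exploration_final_eps=0.01,
        exploration_fraction=0.1,
        gradient_steps=1,
        learning_rate=1e-4,
        learning_starts=100_000,
        target_update_interval=1000,
        train_freq=4,
    )
```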
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
BBBBirdIsTheWord/a2c-PandaReachDense-v3
|
BBBBirdIsTheWord
| 2023-09-24T21:50:12Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-09-24T21:44:27Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v3
type: PandaReachDense-v3
metrics:
- type: mean_reward
value: -0.20 +/- 0.10
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v3**
This is a trained model of an **A2C** agent playing **PandaReachDense-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption, following SB3's default naming convention):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

checkpoint = load_from_hub(repo_id="BBBBirdIsTheWord/a2c-PandaReachDense-v3",
                           filename="a2c-PandaReachDense-v3.zip")  # assumed filename
model = A2C.load(checkpoint)
```
|
mychen76/alpaca-code_adapter_adapter
|
mychen76
| 2023-09-24T21:45:07Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-09-24T21:45:06Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0
|
mychen76/alpaca-code_adapter
|
mychen76
| 2023-09-24T21:39:36Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-09-24T21:31:23Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0
|
AmrMorgado/q-FrozenLake-v1-4x4-noSlippery
|
AmrMorgado
| 2023-09-24T21:37:45Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-09-24T21:25:38Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
# `load_from_hub` here is the pickle-based helper defined in the Hugging Face Deep RL course notebook.
model = load_from_hub(repo_id="AmrMorgado/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc.)
env = gym.make(model["env_id"])
```
|
CyberHarem/yazawa_nico_lovelive
|
CyberHarem
| 2023-09-24T21:36:51Z | 0 | 0 | null |
[
"art",
"text-to-image",
"dataset:CyberHarem/yazawa_nico_lovelive",
"license:mit",
"region:us"
] |
text-to-image
| 2023-08-14T17:08:20Z |
---
license: mit
datasets:
- CyberHarem/yazawa_nico_lovelive
pipeline_tag: text-to-image
tags:
- art
---
# Lora of yazawa_nico_lovelive
This model is trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion), and the auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs).
The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11).
After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora.
For example, if you want to use the model from step 5940, you need to download `5940/yazawa_nico_lovelive.pt` as the embedding and `5940/yazawa_nico_lovelive.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters.
**The best step we recommend is 5940**, with a score of 0.880. The trigger words are:
1. `yazawa_nico_lovelive`
2. `black_hair, red_eyes, twintails, blush, smile, bow, hair_bow, bangs`
This model is not recommended for the following groups, to whom we express our regret:
1. Individuals who cannot tolerate any deviations from the original character design, even in the slightest detail.
2. Individuals whose application scenarios demand high accuracy in recreating character outfits.
3. Individuals who cannot accept the potential randomness in AI-generated images based on the Stable Diffusion algorithm.
4. Individuals who are not comfortable with the fully automated process of training character models using LoRA, or who believe that character models must be trained purely through manual operations to avoid disrespecting the characters.
5. Individuals who find the generated image content offensive to their values.
These are available steps:
| Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | pattern_5 | pattern_6 | pattern_7 | pattern_8 | pattern_9 | pattern_10 | pattern_11 | pattern_12 | pattern_13 | pattern_14 | pattern_15 | pattern_16 | pattern_17 | pattern_18 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata |
|:---------|:----------|:----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-----------------------------------------|:--------------------------------------------------|:-------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------|
| 8100 | 0.838 | [Download](8100/yazawa_nico_lovelive.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](8100/previews/bondage.png) |  |  |  | [<NSFW, click to see>](8100/previews/nude.png) | [<NSFW, click to see>](8100/previews/nude2.png) |  |  |
| 7560 | 0.858 | [Download](7560/yazawa_nico_lovelive.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](7560/previews/bondage.png) |  |  |  | [<NSFW, click to see>](7560/previews/nude.png) | [<NSFW, click to see>](7560/previews/nude2.png) |  |  |
| 7020 | 0.877 | [Download](7020/yazawa_nico_lovelive.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](7020/previews/bondage.png) |  |  |  | [<NSFW, click to see>](7020/previews/nude.png) | [<NSFW, click to see>](7020/previews/nude2.png) |  |  |
| 6480 | 0.852 | [Download](6480/yazawa_nico_lovelive.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](6480/previews/bondage.png) |  |  |  | [<NSFW, click to see>](6480/previews/nude.png) | [<NSFW, click to see>](6480/previews/nude2.png) |  |  |
| **5940** | **0.880** | [**Download**](5940/yazawa_nico_lovelive.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](5940/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5940/previews/nude.png) | [<NSFW, click to see>](5940/previews/nude2.png) |  |  |
| 5400 | 0.874 | [Download](5400/yazawa_nico_lovelive.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](5400/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5400/previews/nude.png) | [<NSFW, click to see>](5400/previews/nude2.png) |  |  |
| 4860 | 0.779 | [Download](4860/yazawa_nico_lovelive.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4860/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4860/previews/nude.png) | [<NSFW, click to see>](4860/previews/nude2.png) |  |  |
| 4320 | 0.820 | [Download](4320/yazawa_nico_lovelive.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4320/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4320/previews/nude.png) | [<NSFW, click to see>](4320/previews/nude2.png) |  |  |
| 3780 | 0.855 | [Download](3780/yazawa_nico_lovelive.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3780/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3780/previews/nude.png) | [<NSFW, click to see>](3780/previews/nude2.png) |  |  |
| 3240 | 0.811 | [Download](3240/yazawa_nico_lovelive.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3240/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3240/previews/nude.png) | [<NSFW, click to see>](3240/previews/nude2.png) |  |  |
| 2700 | 0.833 | [Download](2700/yazawa_nico_lovelive.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2700/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2700/previews/nude.png) | [<NSFW, click to see>](2700/previews/nude2.png) |  |  |
| 2160 | 0.805 | [Download](2160/yazawa_nico_lovelive.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2160/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2160/previews/nude.png) | [<NSFW, click to see>](2160/previews/nude2.png) |  |  |
| 1620 | 0.822 | [Download](1620/yazawa_nico_lovelive.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1620/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1620/previews/nude.png) | [<NSFW, click to see>](1620/previews/nude2.png) |  |  |
| 1080 | 0.816 | [Download](1080/yazawa_nico_lovelive.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1080/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1080/previews/nude.png) | [<NSFW, click to see>](1080/previews/nude2.png) |  |  |
| 540 | 0.692 | [Download](540/yazawa_nico_lovelive.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](540/previews/bondage.png) |  |  |  | [<NSFW, click to see>](540/previews/nude.png) | [<NSFW, click to see>](540/previews/nude2.png) |  |  |
|
CyberHarem/kitakami_reika_theidolmstermillionlive
|
CyberHarem
| 2023-09-24T21:17:04Z | 0 | 0 | null |
[
"art",
"text-to-image",
"dataset:CyberHarem/kitakami_reika_theidolmstermillionlive",
"license:mit",
"region:us"
] |
text-to-image
| 2023-09-24T21:03:43Z |
---
license: mit
datasets:
- CyberHarem/kitakami_reika_theidolmstermillionlive
pipeline_tag: text-to-image
tags:
- art
---
# Lora of kitakami_reika_theidolmstermillionlive
This model is trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion), and the auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs).
The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11).
After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora.
For example, if you want to use the model from step 5000, you need to download `5000/kitakami_reika_theidolmstermillionlive.pt` as the embedding and `5000/kitakami_reika_theidolmstermillionlive.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters.
**The best step we recommend is 5000**, with a score of 0.932. The trigger words are:
1. `kitakami_reika_theidolmstermillionlive`
2. `long_hair, blue_hair, twintails, bangs, smile, blush, brown_eyes, breasts, open_mouth, low_twintails, medium_breasts, yellow_eyes`
This model is not recommended for the following groups, to whom we express our regret:
1. Individuals who cannot tolerate any deviations from the original character design, even in the slightest detail.
2. Individuals whose application scenarios demand high accuracy in recreating character outfits.
3. Individuals who cannot accept the potential randomness in AI-generated images based on the Stable Diffusion algorithm.
4. Individuals who are not comfortable with the fully automated process of training character models using LoRA, or who believe that character models must be trained purely through manual operations to avoid disrespecting the characters.
5. Individuals who find the generated image content offensive to their values.
These are available steps:
| Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | pattern_5 | pattern_6 | pattern_7 | pattern_8 | pattern_9 | pattern_10 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata |
|:---------|:----------|:----------------------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:--------------------------------------------------|:-------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------|
| 7500 | 0.888 | [Download](7500/kitakami_reika_theidolmstermillionlive.zip) |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](7500/previews/bikini.png) | [<NSFW, click to see>](7500/previews/bondage.png) |  |  |  | [<NSFW, click to see>](7500/previews/nude.png) | [<NSFW, click to see>](7500/previews/nude2.png) |  |  |
| 7000 | 0.920 | [Download](7000/kitakami_reika_theidolmstermillionlive.zip) |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](7000/previews/bikini.png) | [<NSFW, click to see>](7000/previews/bondage.png) |  |  |  | [<NSFW, click to see>](7000/previews/nude.png) | [<NSFW, click to see>](7000/previews/nude2.png) |  |  |
| 6500 | 0.919 | [Download](6500/kitakami_reika_theidolmstermillionlive.zip) |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](6500/previews/bikini.png) | [<NSFW, click to see>](6500/previews/bondage.png) |  |  |  | [<NSFW, click to see>](6500/previews/nude.png) | [<NSFW, click to see>](6500/previews/nude2.png) |  |  |
| 6000 | 0.902 | [Download](6000/kitakami_reika_theidolmstermillionlive.zip) |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](6000/previews/bikini.png) | [<NSFW, click to see>](6000/previews/bondage.png) |  |  |  | [<NSFW, click to see>](6000/previews/nude.png) | [<NSFW, click to see>](6000/previews/nude2.png) |  |  |
| 5500 | 0.868 | [Download](5500/kitakami_reika_theidolmstermillionlive.zip) |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](5500/previews/bikini.png) | [<NSFW, click to see>](5500/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5500/previews/nude.png) | [<NSFW, click to see>](5500/previews/nude2.png) |  |  |
| **5000** | **0.932** | [**Download**](5000/kitakami_reika_theidolmstermillionlive.zip) |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](5000/previews/bikini.png) | [<NSFW, click to see>](5000/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5000/previews/nude.png) | [<NSFW, click to see>](5000/previews/nude2.png) |  |  |
| 4500 | 0.861 | [Download](4500/kitakami_reika_theidolmstermillionlive.zip) |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4500/previews/bikini.png) | [<NSFW, click to see>](4500/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4500/previews/nude.png) | [<NSFW, click to see>](4500/previews/nude2.png) |  |  |
| 4000 | 0.908 | [Download](4000/kitakami_reika_theidolmstermillionlive.zip) |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4000/previews/bikini.png) | [<NSFW, click to see>](4000/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4000/previews/nude.png) | [<NSFW, click to see>](4000/previews/nude2.png) |  |  |
| 3500 | 0.849 | [Download](3500/kitakami_reika_theidolmstermillionlive.zip) |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3500/previews/bikini.png) | [<NSFW, click to see>](3500/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3500/previews/nude.png) | [<NSFW, click to see>](3500/previews/nude2.png) |  |  |
| 3000 | 0.930 | [Download](3000/kitakami_reika_theidolmstermillionlive.zip) |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3000/previews/bikini.png) | [<NSFW, click to see>](3000/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3000/previews/nude.png) | [<NSFW, click to see>](3000/previews/nude2.png) |  |  |
| 2500 | 0.852 | [Download](2500/kitakami_reika_theidolmstermillionlive.zip) |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2500/previews/bikini.png) | [<NSFW, click to see>](2500/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2500/previews/nude.png) | [<NSFW, click to see>](2500/previews/nude2.png) |  |  |
| 2000 | 0.808 | [Download](2000/kitakami_reika_theidolmstermillionlive.zip) |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2000/previews/bikini.png) | [<NSFW, click to see>](2000/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2000/previews/nude.png) | [<NSFW, click to see>](2000/previews/nude2.png) |  |  |
| 1500 | 0.829 | [Download](1500/kitakami_reika_theidolmstermillionlive.zip) |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1500/previews/bikini.png) | [<NSFW, click to see>](1500/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1500/previews/nude.png) | [<NSFW, click to see>](1500/previews/nude2.png) |  |  |
| 1000 | 0.686 | [Download](1000/kitakami_reika_theidolmstermillionlive.zip) |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1000/previews/bikini.png) | [<NSFW, click to see>](1000/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1000/previews/nude.png) | [<NSFW, click to see>](1000/previews/nude2.png) |  |  |
| 500 | 0.521 | [Download](500/kitakami_reika_theidolmstermillionlive.zip) |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](500/previews/bikini.png) | [<NSFW, click to see>](500/previews/bondage.png) |  |  |  | [<NSFW, click to see>](500/previews/nude.png) | [<NSFW, click to see>](500/previews/nude2.png) |  |  |
|
znacer/q-Taxi-v3
|
znacer
| 2023-09-24T21:12:48Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-09-24T21:12:46Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.52 +/- 2.69
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="znacer/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
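The Q-table loaded above is the product of the tabular Q-learning update rule; a minimal, self-contained sketch of that rule follows (the learning rate and discount values are illustrative examples, not the hyperparameters used for this checkpoint):

```python
# Illustrative tabular Q-learning update (hyperparameters are examples only).
def q_update(q, state, action, reward, next_state, alpha=0.1, gamma=0.99):
    # Move Q(s, a) toward the bootstrapped target r + gamma * max_a' Q(s', a').
    target = reward + gamma * max(q[next_state])
    q[state][action] += alpha * (target - q[state][action])
    return q

q = [[0.0, 0.0], [0.0, 1.0]]  # tiny 2-state, 2-action table
q = q_update(q, state=0, action=1, reward=1.0, next_state=1)
print(q[0][1])  # 0.1 * (1.0 + 0.99 * 1.0) = 0.199
```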
|
znacer/q-FrozenLake-v1-4x4-noSlippery
|
znacer
| 2023-09-24T21:10:41Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-09-24T21:10:38Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="znacer/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
felixquinihildebet/ppo-SnowballTarget
|
felixquinihildebet
| 2023-09-24T21:05:02Z | 1 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2023-09-24T21:05:00Z |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: felixquinihildebet/ppo-SnowballTarget
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
OpenDILabCommunity/LunarLander-v2-PPO
|
OpenDILabCommunity
| 2023-09-24T21:04:23Z | 0 | 0 |
pytorch
|
[
"pytorch",
"deep-reinforcement-learning",
"reinforcement-learning",
"DI-engine",
"LunarLander-v2",
"en",
"license:apache-2.0",
"region:us"
] |
reinforcement-learning
| 2023-04-28T12:06:22Z |
---
language: en
license: apache-2.0
library_name: pytorch
tags:
- deep-reinforcement-learning
- reinforcement-learning
- DI-engine
- LunarLander-v2
benchmark_name: OpenAI/Gym/Box2d
task_name: LunarLander-v2
pipeline_tag: reinforcement-learning
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: OpenAI/Gym/Box2d-LunarLander-v2
type: OpenAI/Gym/Box2d-LunarLander-v2
metrics:
- type: mean_reward
value: 288.24 +/- 22.69
name: mean_reward
---
# Play **LunarLander-v2** with **PPO** Policy
## Model Description
<!-- Provide a longer summary of what this model is. -->
This is a simple **PPO** implementation for the OpenAI/Gym/Box2d **LunarLander-v2** environment, built with the [DI-engine library](https://github.com/opendilab/di-engine) and the [DI-zoo](https://github.com/opendilab/DI-engine/tree/main/dizoo).
**DI-engine** is a Python library for solving general decision-intelligence problems, based on reinforcement learning framework implementations in PyTorch or JAX. It aims to standardize the reinforcement learning workflow across different algorithms, benchmarks, and environments, and to support both academic research and prototype applications. In addition, self-customized training pipelines and applications can be built by reusing the different abstraction levels of the DI-engine framework.
## Model Usage
### Install the Dependencies
<details close>
<summary>(Click for Details)</summary>
```shell
# install huggingface_ding
git clone https://github.com/opendilab/huggingface_ding.git
pip3 install -e ./huggingface_ding/
# install environment dependencies if needed
pip3 install DI-engine[common_env,video]
```
</details>
### Git Clone from Huggingface and Run the Model
<details close>
<summary>(Click for Details)</summary>
```shell
# running with trained model
python3 -u run.py
```
**run.py**
```python
from ding.bonus import PPOF
from ding.config import Config
from easydict import EasyDict
import torch
# Pull model from files which are git cloned from huggingface
policy_state_dict = torch.load("pytorch_model.bin", map_location=torch.device("cpu"))
cfg = EasyDict(Config.file_to_dict("policy_config.py").cfg_dict)
# Instantiate the agent
agent = PPOF(
env_id="LunarLander-v2", exp_name="LunarLander-v2-PPO", cfg=cfg.exp_config, policy_state_dict=policy_state_dict
)
# Continue training
agent.train(step=5000)
# Render the new agent performance
agent.deploy(enable_save_replay=True)
```
</details>
### Run Model by Using Huggingface_ding
<details close>
<summary>(Click for Details)</summary>
```shell
# running with trained model
python3 -u run.py
```
**run.py**
```python
from ding.bonus import PPOF
from huggingface_ding import pull_model_from_hub
# Pull model from Huggingface hub
policy_state_dict, cfg = pull_model_from_hub(repo_id="OpenDILabCommunity/LunarLander-v2-PPO")
# Instantiate the agent
agent = PPOF(
env_id="LunarLander-v2", exp_name="LunarLander-v2-PPO", cfg=cfg.exp_config, policy_state_dict=policy_state_dict
)
# Continue training
agent.train(step=5000)
# Render the new agent performance
agent.deploy(enable_save_replay=True)
```
</details>
## Model Training
### Train the Model and Push to Huggingface_hub
<details close>
<summary>(Click for Details)</summary>
```shell
# Train your own agent
python3 -u train.py
```
**train.py**
```python
from ding.bonus import PPOF
from huggingface_ding import push_model_to_hub
# Instantiate the agent
agent = PPOF(env_id="LunarLander-v2", exp_name="LunarLander-v2-PPO")
# Train the agent
return_ = agent.train(step=int(4000000), collector_env_num=4, evaluator_env_num=4)
# Push model to huggingface hub
push_model_to_hub(
agent=agent.best,
env_name="OpenAI/Gym/Box2d",
task_name="LunarLander-v2",
algo_name="PPO",
wandb_url=return_.wandb_url,
github_repo_url="https://github.com/opendilab/DI-engine",
github_doc_model_url="https://di-engine-docs.readthedocs.io/en/latest/12_policies/ppo.html",
github_doc_env_url="https://di-engine-docs.readthedocs.io/en/latest/13_envs/lunarlander.html",
installation_guide="pip3 install DI-engine[common_env,video]",
usage_file_by_git_clone="./ppo/lunarlander_ppo_deploy.py",
usage_file_by_huggingface_ding="./ppo/lunarlander_ppo_download.py",
train_file="./ppo/lunarlander_ppo.py",
repo_id="OpenDILabCommunity/LunarLander-v2-PPO",
create_repo=False
)
```
</details>
**Configuration**
<details close>
<summary>(Click for Details)</summary>
```python
exp_config = {
'type': 'ppo',
'on_policy': True,
'cuda': True,
'action_space': 'discrete',
'discount_factor': 0.99,
'gae_lambda': 0.95,
'epoch_per_collect': 10,
'batch_size': 64,
'learning_rate': 0.0003,
'lr_scheduler': None,
'weight_decay': 0,
'value_weight': 0.5,
'entropy_weight': 0.001,
'clip_ratio': 0.2,
'adv_norm': True,
'value_norm': 'popart',
'ppo_param_init': True,
'grad_norm': 0.5,
'n_sample': 512,
'unroll_len': 1,
'deterministic_eval': True,
'model': {},
'cfg_type': 'PPOFPolicyDict',
'env_id': 'LunarLander-v2',
'exp_name': 'LunarLander-v2-PPO'
}
```
</details>
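As a rough sketch of what the `clip_ratio: 0.2` entry above controls, here is the textbook PPO clipped surrogate for a single sample. This is illustrative only, not DI-engine's internal implementation, which also includes value and entropy terms:

```python
# Textbook PPO clipped surrogate objective for one sample (illustrative only).
def ppo_clip_objective(ratio, advantage, clip_ratio=0.2):
    # ratio = pi_new(a|s) / pi_old(a|s); advantage comes from GAE.
    clipped = max(min(ratio, 1 + clip_ratio), 1 - clip_ratio)
    # Pessimistic minimum: large policy updates gain no extra objective.
    return min(ratio * advantage, clipped * advantage)

print(ppo_clip_objective(1.5, 1.0))   # clipped to 1.2 * 1.0 = 1.2
print(ppo_clip_objective(0.5, -1.0))  # clipped to 0.8 * -1.0 = -0.8
```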
**Training Procedure**
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
- **Weights & Biases (wandb):** [monitor link](https://wandb.ai/zjowowen/LunarLander-v2-PPO)
## Model Information
<!-- Provide the basic links for the model. -->
- **Github Repository:** [repo link](https://github.com/opendilab/DI-engine)
- **Doc**: [DI-engine-docs Algorithm link](https://di-engine-docs.readthedocs.io/en/latest/12_policies/ppo.html)
- **Configuration:** [config link](https://huggingface.co/OpenDILabCommunity/LunarLander-v2-PPO/blob/main/policy_config.py)
- **Demo:** [video](https://huggingface.co/OpenDILabCommunity/LunarLander-v2-PPO/blob/main/replay.mp4)
<!-- Provide the size information for the model. -->
- **Parameters total size:** 371.84 KB
- **Last Update Date:** 2023-09-24
## Environments
<!-- Address questions around what environment the model is intended to be trained and deployed at, including the necessary information needed to be provided for future users. -->
- **Benchmark:** OpenAI/Gym/Box2d
- **Task:** LunarLander-v2
- **Gym version:** 0.25.1
- **DI-engine version:** v0.4.9
- **PyTorch version:** 2.0.1+cu117
- **Doc**: [DI-engine-docs Environments link](https://di-engine-docs.readthedocs.io/en/latest/13_envs/lunarlander.html)
|
DriveMyScream/Grammatical_Error_Correction
|
DriveMyScream
| 2023-09-24T20:35:51Z | 60 | 0 |
transformers
|
[
"transformers",
"tf",
"t5",
"text2text-generation",
"generated_from_keras_callback",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-09-24T20:34:14Z |
---
license: apache-2.0
base_model: t5-small
tags:
- generated_from_keras_callback
model-index:
- name: Grammatical_Error_Correction
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Grammatical_Error_Correction
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.8129
- Validation Loss: 0.7423
- Epoch: 4
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 7815, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
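The `PolynomialDecay` schedule above (power 1.0) is simply a linear ramp from 2e-05 down to 0 over 7815 steps; a small sketch of the learning rate it yields at a given step:

```python
# Linear learning-rate decay matching the PolynomialDecay config above
# (initial_learning_rate=2e-05, decay_steps=7815, end_learning_rate=0.0, power=1.0).
def polynomial_decay(step, initial_lr=2e-05, decay_steps=7815, end_lr=0.0, power=1.0):
    step = min(step, decay_steps)  # the schedule holds end_lr after decay_steps
    return (initial_lr - end_lr) * (1 - step / decay_steps) ** power + end_lr

print(polynomial_decay(0))     # 2e-05
print(polynomial_decay(7815))  # 0.0
```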
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.8938 | 0.7913 | 0 |
| 0.8624 | 0.7711 | 1 |
| 0.8487 | 0.7585 | 2 |
| 0.8249 | 0.7495 | 3 |
| 0.8129 | 0.7423 | 4 |
### Framework versions
- Transformers 4.33.2
- TensorFlow 2.13.0
- Datasets 2.14.5
- Tokenizers 0.13.3
|
CyberHarem/minami_kotori_lovelive
|
CyberHarem
| 2023-09-24T20:29:26Z | 0 | 0 | null |
[
"art",
"text-to-image",
"dataset:CyberHarem/minami_kotori_lovelive",
"license:mit",
"region:us"
] |
text-to-image
| 2023-08-14T16:45:26Z |
---
license: mit
datasets:
- CyberHarem/minami_kotori_lovelive
pipeline_tag: text-to-image
tags:
- art
---
# Lora of minami_kotori_lovelive
This model is trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion), and the auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs).
The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11).
After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora.
For example, if you want to use the model from step 5400, you need to download `5400/minami_kotori_lovelive.pt` as the embedding and `5400/minami_kotori_lovelive.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters.
**The best step we recommend is 5400**, with a score of 0.977. The trigger words are:
1. `minami_kotori_lovelive`
2. `long_hair, brown_hair, one_side_up, smile, blush, bow, yellow_eyes, hair_bow, brown_eyes, bangs, open_mouth, breasts`
This model is not recommended for the following groups, to whom we express our regret:
1. Individuals who cannot tolerate any deviations from the original character design, even in the slightest detail.
2. Individuals whose application scenarios demand high accuracy in recreating character outfits.
3. Individuals who cannot accept the potential randomness in AI-generated images based on the Stable Diffusion algorithm.
4. Individuals who are not comfortable with the fully automated process of training character models using LoRA, or who believe that character models must be trained purely through manual operations to avoid disrespecting the characters.
5. Individuals who find the generated image content offensive to their values.
These are available steps:
| Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | pattern_5 | pattern_6 | pattern_7 | pattern_8 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata |
|:---------|:----------|:------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------|:--------------------------------------------------|:-------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------|
| 8100 | 0.971 | [Download](8100/minami_kotori_lovelive.zip) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](8100/previews/bondage.png) |  |  |  | [<NSFW, click to see>](8100/previews/nude.png) | [<NSFW, click to see>](8100/previews/nude2.png) |  |  |
| 7560 | 0.969 | [Download](7560/minami_kotori_lovelive.zip) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](7560/previews/bondage.png) |  |  |  | [<NSFW, click to see>](7560/previews/nude.png) | [<NSFW, click to see>](7560/previews/nude2.png) |  |  |
| 7020 | 0.971 | [Download](7020/minami_kotori_lovelive.zip) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](7020/previews/bondage.png) |  |  |  | [<NSFW, click to see>](7020/previews/nude.png) | [<NSFW, click to see>](7020/previews/nude2.png) |  |  |
| 6480 | 0.974 | [Download](6480/minami_kotori_lovelive.zip) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](6480/previews/bondage.png) |  |  |  | [<NSFW, click to see>](6480/previews/nude.png) | [<NSFW, click to see>](6480/previews/nude2.png) |  |  |
| 5940 | 0.976 | [Download](5940/minami_kotori_lovelive.zip) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](5940/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5940/previews/nude.png) | [<NSFW, click to see>](5940/previews/nude2.png) |  |  |
| **5400** | **0.977** | [**Download**](5400/minami_kotori_lovelive.zip) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](5400/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5400/previews/nude.png) | [<NSFW, click to see>](5400/previews/nude2.png) |  |  |
| 4860 | 0.974 | [Download](4860/minami_kotori_lovelive.zip) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4860/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4860/previews/nude.png) | [<NSFW, click to see>](4860/previews/nude2.png) |  |  |
| 4320 | 0.976 | [Download](4320/minami_kotori_lovelive.zip) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4320/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4320/previews/nude.png) | [<NSFW, click to see>](4320/previews/nude2.png) |  |  |
| 3780 | 0.970 | [Download](3780/minami_kotori_lovelive.zip) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3780/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3780/previews/nude.png) | [<NSFW, click to see>](3780/previews/nude2.png) |  |  |
| 3240 | 0.964 | [Download](3240/minami_kotori_lovelive.zip) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3240/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3240/previews/nude.png) | [<NSFW, click to see>](3240/previews/nude2.png) |  |  |
| 2700 | 0.964 | [Download](2700/minami_kotori_lovelive.zip) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2700/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2700/previews/nude.png) | [<NSFW, click to see>](2700/previews/nude2.png) |  |  |
| 2160 | 0.958 | [Download](2160/minami_kotori_lovelive.zip) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2160/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2160/previews/nude.png) | [<NSFW, click to see>](2160/previews/nude2.png) |  |  |
| 1620 | 0.950 | [Download](1620/minami_kotori_lovelive.zip) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1620/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1620/previews/nude.png) | [<NSFW, click to see>](1620/previews/nude2.png) |  |  |
| 1080 | 0.939 | [Download](1080/minami_kotori_lovelive.zip) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1080/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1080/previews/nude.png) | [<NSFW, click to see>](1080/previews/nude2.png) |  |  |
| 540 | 0.904 | [Download](540/minami_kotori_lovelive.zip) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](540/previews/bondage.png) |  |  |  | [<NSFW, click to see>](540/previews/nude.png) | [<NSFW, click to see>](540/previews/nude2.png) |  |  |
|
sametayhan/q-taxi-v3
|
sametayhan
| 2023-09-24T20:17:52Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-09-24T20:17:49Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.48 +/- 2.74
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
# `load_from_hub` is the helper defined in the Hugging Face Deep RL course notebook
import gym

model = load_from_hub(repo_id="sametayhan/q-taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc.)
env = gym.make(model["env_id"])
```
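Under the hood, such an agent is trained with the tabular Q-learning update rule. A minimal, dependency-free sketch of that update (toy values chosen for illustration, not the actual Taxi-v3 training loop):

```python
# Tabular Q-learning update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
def q_update(q_table, state, action, reward, next_state, alpha=0.7, gamma=0.95):
    best_next = max(q_table[next_state])    # greedy value of the next state
    td_target = reward + gamma * best_next  # bootstrapped target
    q_table[state][action] += alpha * (td_target - q_table[state][action])
    return q_table[state][action]

# Toy table with 2 states and 2 actions
q = [[0.0, 0.0], [0.0, 1.0]]
new_q = q_update(q, state=0, action=0, reward=1.0, next_state=1)
# td_target = 1.0 + 0.95 * 1.0 = 1.95, so Q(0,0) becomes 0.7 * 1.95 = 1.365
```

The learning rate `alpha` and discount `gamma` here are illustrative defaults, not the values used to train this checkpoint.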
|
rjarpa/ms-4maps_alpha-ds-newtoken2
|
rjarpa
| 2023-09-24T20:08:57Z | 137 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-09-24T19:02:01Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: ms-4maps_alpha-ds-newtoken2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ms-4maps_alpha-ds-newtoken2
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 1
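The cosine schedule with warmup ramps the learning rate linearly over the first 1,000 steps and then decays it along a half-cosine. A sketch of that shape (the total step count is an assumption here, since only `num_epochs: 1` is reported):

```python
import math

def cosine_lr(step, peak_lr=5e-4, warmup_steps=1000, total_steps=10000):
    """Linear warmup to peak_lr, then half-cosine decay to zero (illustrative)."""
    if step < warmup_steps:
        return peak_lr * step / warmup_steps
    progress = (step - warmup_steps) / (total_steps - warmup_steps)
    return peak_lr * 0.5 * (1.0 + math.cos(math.pi * progress))

# lr is 0 at step 0, peaks at step 1000, and returns to ~0 at the final step
```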
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Prakhar1241/ayurvedic-medicine
|
Prakhar1241
| 2023-09-24T20:02:36Z | 0 | 2 | null |
[
"license:openrail",
"region:us"
] | null | 2023-09-24T20:01:17Z |
---
license: openrail
---
```python
# -*- coding: utf-8 -*-
"""
Created on Sun Sep 24 23:07:29 2023

@author: Prakhar Agrawal
"""
import streamlit as st
import numpy as np
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Load data for Ayurvedic medicine, hospitals, and colleges
medicine_data = pd.read_csv(r"C:/Users/Prakhar Agrawal/Downloads/Final_data.csv")
medicine_data = medicine_data.dropna()
hospital_data = pd.read_csv(r"C:/Users/Prakhar Agrawal/Downloads/hospitals.csv")
hospital_data = hospital_data.dropna()
college_data = pd.read_csv(r"C:/Users/Prakhar Agrawal/Downloads/ayurvedic-colleges_2- (1).csv")
college_data = college_data.drop_duplicates(subset='College ID')

# Function to recommend Ayurvedic medicines
def recommend_medicines(user_disease):
    tfidf_vectorizer = TfidfVectorizer()
    tfidf_matrix = tfidf_vectorizer.fit_transform(medicine_data['Diseases Cured'])
    user_input_vector = tfidf_vectorizer.transform([user_disease])
    cosine_similarities = cosine_similarity(user_input_vector, tfidf_matrix)
    similar_indices = [i for i, score in enumerate(cosine_similarities[0]) if score > 0.5]
    recommendations = []
    for idx in similar_indices:
        recommendation = {
            "Ayurvedic Medicine": medicine_data.iloc[idx]['Ayurvedic Medicine'],
            "Diseases Cured": medicine_data.iloc[idx]['Diseases Cured'],
            "Cautions and Considerations": medicine_data.iloc[idx]['Cautions and Precautions'],
            "Properties": medicine_data.iloc[idx]['Properties'],
            "Key Ingredients": medicine_data.iloc[idx]['Key Ingredients'],
            "Mode of Action": medicine_data.iloc[idx]['Mode of Action']
        }
        recommendations.append(recommendation)
    return recommendations

# Function to recommend Ayurvedic hospitals
def recommend_hospitals(user_state):
    vectorizer = TfidfVectorizer()
    tfidf_matrix = vectorizer.fit_transform(hospital_data['State'])
    user_vector = vectorizer.transform([user_state])
    similarity = cosine_similarity(user_vector, tfidf_matrix)
    similar_indices = [i for i, score in enumerate(similarity[0]) if score > 0.75]
    recommendations = []
    for idx in similar_indices:
        recommendation = {
            "Name": hospital_data.iloc[idx]['Name'],
            "Address": hospital_data.iloc[idx]['Address']
        }
        recommendations.append(recommendation)
    return recommendations

# Function to recommend Ayurvedic colleges
def recommend_colleges(user_state):
    vectorizer = TfidfVectorizer()
    tfidf_matrix = vectorizer.fit_transform(college_data['State'])
    user_matrix = vectorizer.transform([user_state])
    similarity = cosine_similarity(user_matrix, tfidf_matrix)
    similar_indices = [i for i, score in enumerate(similarity[0]) if score > 0.5]
    recommendations = []
    for idx in similar_indices:
        recommendation = {
            "College ID": college_data.iloc[idx]['College ID'],
            "Name of the College": college_data.iloc[idx]['Name of the College'],
            "State": college_data.iloc[idx]['State']
        }
        recommendations.append(recommendation)
    return recommendations

# Streamlit UI
st.sidebar.title("Ayurvedic Recommendations")

# Sidebar section for user input
section = st.sidebar.radio("Select a Section", ["Ayurvedic Medicine", "Ayurvedic Hospitals", "Ayurvedic Colleges"])

if section == "Ayurvedic Medicine":
    user_disease = st.text_input("Enter the disease name:")
    if st.button("Recommend Medicines"):
        st.subheader("Recommended Medicines:")
        recommendations = recommend_medicines(user_disease)
        for recommendation in recommendations:
            st.write(recommendation)
elif section == "Ayurvedic Hospitals":
    user_state = st.text_input("Enter the name of your state:")
    if st.button("Recommend Hospitals"):
        st.subheader("Recommended Hospitals:")
        recommendations = recommend_hospitals(user_state)
        for recommendation in recommendations:
            st.write(recommendation)
elif section == "Ayurvedic Colleges":
    user_state = st.text_input("Enter your state name:")
    if st.button("Recommend Colleges"):
        st.subheader("Recommended Colleges:")
        recommendations = recommend_colleges(user_state)
        for recommendation in recommendations:
            st.write(recommendation)
```
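All three recommenders above follow the same pattern: vectorize a text column with TF-IDF, compute cosine similarity against the user's query, and keep the row indices whose score clears a threshold. A dependency-free sketch of that core step, using toy term-count vectors in place of real TF-IDF weights:

```python
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def similar_indices(query_vec, matrix, threshold=0.5):
    # Mirrors: [i for i, score in enumerate(cosine_similarities[0]) if score > 0.5]
    return [i for i, row in enumerate(matrix) if cosine(query_vec, row) > threshold]

# Toy term-count vectors for three "documents" and a query
docs = [[1, 1, 0], [0, 1, 1], [1, 0, 0]]
query = [1, 1, 0]
print(similar_indices(query, docs))  # [0, 2]: rows 0 and 2 clear the 0.5 cutoff
```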
|
Adbhut/speecht5-finetuned-voxpopuli_sl
|
Adbhut
| 2023-09-24T19:56:01Z | 82 | 0 |
transformers
|
[
"transformers",
"pytorch",
"speecht5",
"text-to-audio",
"generated_from_trainer",
"text-to-speech",
"dataset:facebook/voxpopuli",
"base_model:microsoft/speecht5_tts",
"base_model:finetune:microsoft/speecht5_tts",
"license:mit",
"endpoints_compatible",
"region:us"
] |
text-to-speech
| 2023-09-24T15:24:38Z |
---
license: mit
base_model: microsoft/speecht5_tts
tags:
- generated_from_trainer
datasets:
- facebook/voxpopuli
model-index:
- name: speecht5-finetuned-voxpopuli_sl
results: []
pipeline_tag: text-to-speech
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# speecht5-finetuned-voxpopuli_sl
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the facebook/voxpopuli dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4830
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 250
- training_steps: 2500
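Note that the effective batch size follows from gradient accumulation: each optimizer step accumulates 8 micro-batches of 4 examples before updating the weights. A quick consistency check of the values above:

```python
train_batch_size = 4               # micro-batch per device
gradient_accumulation_steps = 8    # micro-batches per optimizer step
total_train_batch_size = train_batch_size * gradient_accumulation_steps
print(total_train_batch_size)  # 32, matching the reported total_train_batch_size
```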
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.5689 | 2.68 | 250 | 0.5361 |
| 0.5153 | 5.35 | 500 | 0.5024 |
| 0.5044 | 8.03 | 750 | 0.4942 |
| 0.4934 | 10.71 | 1000 | 0.4915 |
| 0.4906 | 13.39 | 1250 | 0.4853 |
| 0.4886 | 16.06 | 1500 | 0.4868 |
| 0.4886 | 18.74 | 1750 | 0.4842 |
| 0.4812 | 21.42 | 2000 | 0.4849 |
| 0.4824 | 24.1 | 2250 | 0.4836 |
| 0.48 | 26.77 | 2500 | 0.4830 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
DriveMyScream/Gender_Age_BMI_Prediction
|
DriveMyScream
| 2023-09-24T19:33:00Z | 0 | 0 |
keras
|
[
"keras",
"tf-keras",
"region:us"
] | null | 2023-09-24T19:32:07Z |
---
library_name: keras
---
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
| Hyperparameters | Value |
| :-- | :-- |
| name | Adam |
| weight_decay | None |
| clipnorm | None |
| global_clipnorm | None |
| clipvalue | None |
| use_ema | False |
| ema_momentum | 0.99 |
| ema_overwrite_frequency | None |
| jit_compile | True |
| is_legacy_optimizer | False |
| learning_rate | 0.0010000000474974513 |
| beta_1 | 0.9 |
| beta_2 | 0.999 |
| epsilon | 1e-07 |
| amsgrad | False |
| training_precision | float32 |
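These are the standard Keras Adam settings; the update rule they parameterize can be sketched for a single scalar weight (an illustration of the algorithm, not the Keras implementation):

```python
import math

def adam_step(param, grad, m, v, t, lr=1e-3, beta_1=0.9, beta_2=0.999, eps=1e-7):
    """One bias-corrected Adam step for a scalar parameter."""
    m = beta_1 * m + (1 - beta_1) * grad
    v = beta_2 * v + (1 - beta_2) * grad * grad
    m_hat = m / (1 - beta_1 ** t)  # bias correction for step t
    v_hat = v / (1 - beta_2 ** t)
    param -= lr * m_hat / (math.sqrt(v_hat) + eps)
    return param, m, v

p, m, v = 1.0, 0.0, 0.0
p, m, v = adam_step(p, grad=2.0, m=m, v=v, t=1)
# The first step moves the parameter by roughly lr (0.001), regardless of gradient scale
```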
## Model Plot
<details>
<summary>View Model Plot</summary>

</details>
|
zitrone44/vit-base-tm
|
zitrone44
| 2023-09-24T19:30:47Z | 191 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-09-23T14:50:12Z |
---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- image-classification
- generated_from_trainer
datasets:
- imagefolder
model-index:
- name: vit-base-tm
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-tm
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.4170
- eval_accuracy: 0.9062
- eval_runtime: 207.7695
- eval_samples_per_second: 152.78
- eval_steps_per_second: 19.098
- epoch: 6.79
- step: 12447
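As a quick sanity check, the throughput figures above are mutually consistent: samples per second equals steps per second times the eval batch size, and the runtime implies roughly 31,743 evaluation samples (the sample count itself is inferred, not reported):

```python
# Figures reported above
eval_batch_size = 8
eval_runtime = 207.7695            # seconds
eval_steps_per_second = 19.098
eval_samples_per_second = 152.78

# samples/s should equal steps/s * batch size
implied_samples_per_second = eval_steps_per_second * eval_batch_size
assert abs(implied_samples_per_second - eval_samples_per_second) < 0.01

# ...and runtime * samples/s recovers the eval set size
n_samples = round(eval_runtime * eval_samples_per_second)  # 31743
```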
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 128
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
Pclanglais/Epstein
|
Pclanglais
| 2023-09-24T19:26:22Z | 13 | 8 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"en",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-09-24T16:29:33Z |
---
license: cc-by-sa-4.0
language:
- en
library_name: transformers
pipeline_tag: text-generation
---
<div style="text-align: right;font-size:.7em;margin-left:50%"><em>For a moment Brahe searched for the words, the images, the analogies; he even thought out the gestures of his hand and fingers, like an actor preparing to give physical form to a feeling. But as soon as he began to say "like", to give solidity to what had none, to make visible what was not, to place in space what was pure probability, and to look for anything among the forms of the world to compare it with, Epstein interrupted him.</em><br>Daniele del Giudice, <em>Atlante occidentale</em></div>
*Epstein* is a generative LLM for English literature fine-tuned from llama-13B. Given twenty potential features, Epstein will generate a literary text.
*Epstein* has been trained on 4,000 excerpts of English or English translated literature in the public domain and on a set of synthetic and manual annotations.
*Epstein* is the reversed companion of *[Brahe](https://huggingface.co/Pclanglais/brahe)*, an analytical LLM that annotates existing texts using the same features. Both models are named after the protagonists of Daniele del Giudice's philosophical novel *Atlante occidentale*: Brahe is a scientist working on quantum physics at CERN, Epstein is a novelist, and the two confront their different views of reality.
## Annotations
In its current version, *Epstein* can generate texts using any of the following annotations. It's preferable to include at least a summary.
* Summary: short summary
* Trope: a trope or literary cliché (a fuzzy definition but works surprisingly well)
* Narrative arc: how the action unfolds (suspense, dramatic tension, comic relief…)
* Enunciation: who is speaking in the text (first-person narrative, dialog, third-person narrative, omniscient narrator)
* Tone: general tonality of the text (humoristic, tragic, scholarly…)
* Genre: a specific literary genre that would be used in bookshops such as detective fiction, science-fiction, romance, historical novel, young adult…
* Intertextuality: non-literary writing forms that may be similar to this text (red tape, scientific article, case law…)
* Speech standard: the specific social/literary level of the text (poetic, dialectical, vulgar…)
* Literary form: whether it's the description of a place, a conversation, a stream of consciousness
* Literary movement: aesthetic movement the text seems to embody (does not work so well)
* Active character: the list of characters that have an active involvement in the story.
* Mentioned characters: the list of characters only mentioned, with no active involvement in the story
* Quoted works: another text mentioned or quoted in the text.
* Absolute place: a precise place with a proper name such as Paris, Sesame Street, Lisbon Airport.
* Fuzzy place: an unnamed place where the story happens, such as a field, an apartment, a church (does not work so well…)
* Fuzzy time: a nonspecific moment when the action occurs, such as Monday, yesterday, a week after.
* Time setting: historical period where the action seems to occur such as the 1960s, the Renaissance, the Victorian period…
* Diegetic time: very approximate number of minutes/hours/days that have unfolded between the beginning and the end of the text (5 minutes, 35 minutes, 2 hours, 3 days).
* Absolute time: a precise date where the action occurs, such as January 15, 1845, 23rd century…
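Since prompts are just newline-separated `Feature: value` pairs, they can be assembled programmatically. A trivial helper for doing so (illustrative only, not an official API):

```python
def build_prompt(features):
    """Join annotation features into the newline-separated 'Key: value' format."""
    return "\n".join(f"{key}: {value}" for key, value in features.items())

prompt = build_prompt({
    "Summary": "Strether waits for his friend at the hotel.",
    "Genre": "Science-fiction",
    "Time setting": "the year 2046",
})
print(prompt)
```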
## Example
A science-fiction rewriting of *The Ambassadors* by Henry James. The prompt uses the same features created by *Brahe*, with a few twists:
>Summary: Strether is waiting for his friend Waymarsh to arrive at the hotel, but is not completely disappointed when he learns that Waymarsh will not arrive until evening.<br>
Trope: The anticipation of meeting a friend<br>
Narrative arc: Suspense<br>
Enunciation: Third-person narrative<br>
Tone: Anticipatory<br>
Genre: Science-fiction<br>
Intertextuality: Travelogue<br>
Speech standard: Standard<br>
Literary form: Stream of consciousness<br>
Literary movement: Realism<br>
Active character: Strether, Waymarsh<br>
Time setting: the year 2046<br>
Diegetic time: A few hours<br>
Fuzzy place: The hotel
Output of Epstein:
>Strether waited for him at the hotel, but when he learned that his friend would not arrive until evening, he was not completely disappointed. He had plenty to do in the meantime, and he felt a certain relief at having a few days to prepare for the meeting. He had been looking forward to it with a mixture of anticipation and dread, and he was now glad to have some time to think about it and to plan how they would spend their time together. The year 2046 was a time of great change and progress. Strether had been living in the city of New York, which had become a hub of innovation and creativity. The city was home to many of the world's leading scientists and engineers, who were working on cutting-edge projects that were changing the way people lived and worked. Strether had been working as a researcher himself, and he had been excited to learn that Waymarsh was coming to visit him. Waymarsh was a friend from college, and they had stayed in touch over the years. They had both been interested in science and technology, and they had shared a love of exploring new ideas and discoveries. Strether was looking forward to catching up with Waymarsh and hearing about his latest projects.
|
CyberHarem/maihama_ayumu_theidolmstermillionlive
|
CyberHarem
| 2023-09-24T19:24:25Z | 0 | 0 | null |
[
"art",
"text-to-image",
"dataset:CyberHarem/maihama_ayumu_theidolmstermillionlive",
"license:mit",
"region:us"
] |
text-to-image
| 2023-09-24T19:14:30Z |
---
license: mit
datasets:
- CyberHarem/maihama_ayumu_theidolmstermillionlive
pipeline_tag: text-to-image
tags:
- art
---
# Lora of maihama_ayumu_theidolmstermillionlive
This model was trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion); the auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs).
The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11).
After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora.
For example, if you want to use the model from step 4420, you need to download `4420/maihama_ayumu_theidolmstermillionlive.pt` as the embedding and `4420/maihama_ayumu_theidolmstermillionlive.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters.
**The best step we recommend is 4420**, with a score of 0.678. The trigger words are:
1. `maihama_ayumu_theidolmstermillionlive`
2. `pink_hair, multicolored_hair, pink_eyes, ponytail, smile, long_hair, blush, blonde_hair, streaked_hair, open_mouth, jewelry, breasts, hair_between_eyes`
For the following groups, use of this model is not recommended, and we express our regret:
1. Individuals who cannot tolerate any deviations from the original character design, even in the slightest detail.
2. Individuals who are facing the application scenarios with high demands for accuracy in recreating character outfits.
3. Individuals who cannot accept the potential randomness in AI-generated images based on the Stable Diffusion algorithm.
4. Individuals who are not comfortable with the fully automated process of training character models using LoRA, or those who believe that training character models must be done purely through manual operations to avoid disrespecting the characters.
5. Individuals who find the generated image content offensive to their values.
The following steps are available:
| Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata |
|:---------|:----------|:---------------------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------|:--------------------------------------------------|:-------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------|
| 5100 | 0.615 | [Download](5100/maihama_ayumu_theidolmstermillionlive.zip) |  |  |  |  |  | [<NSFW, click to see>](5100/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5100/previews/nude.png) | [<NSFW, click to see>](5100/previews/nude2.png) |  |  |
| 4760 | 0.676 | [Download](4760/maihama_ayumu_theidolmstermillionlive.zip) |  |  |  |  |  | [<NSFW, click to see>](4760/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4760/previews/nude.png) | [<NSFW, click to see>](4760/previews/nude2.png) |  |  |
| **4420** | **0.678** | [**Download**](4420/maihama_ayumu_theidolmstermillionlive.zip) |  |  |  |  |  | [<NSFW, click to see>](4420/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4420/previews/nude.png) | [<NSFW, click to see>](4420/previews/nude2.png) |  |  |
| 4080 | 0.678 | [Download](4080/maihama_ayumu_theidolmstermillionlive.zip) |  |  |  |  |  | [<NSFW, click to see>](4080/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4080/previews/nude.png) | [<NSFW, click to see>](4080/previews/nude2.png) |  |  |
| 3740 | 0.630 | [Download](3740/maihama_ayumu_theidolmstermillionlive.zip) |  |  |  |  |  | [<NSFW, click to see>](3740/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3740/previews/nude.png) | [<NSFW, click to see>](3740/previews/nude2.png) |  |  |
| 3400 | 0.603 | [Download](3400/maihama_ayumu_theidolmstermillionlive.zip) |  |  |  |  |  | [<NSFW, click to see>](3400/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3400/previews/nude.png) | [<NSFW, click to see>](3400/previews/nude2.png) |  |  |
| 3060 | 0.647 | [Download](3060/maihama_ayumu_theidolmstermillionlive.zip) |  |  |  |  |  | [<NSFW, click to see>](3060/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3060/previews/nude.png) | [<NSFW, click to see>](3060/previews/nude2.png) |  |  |
| 2720 | 0.605 | [Download](2720/maihama_ayumu_theidolmstermillionlive.zip) |  |  |  |  |  | [<NSFW, click to see>](2720/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2720/previews/nude.png) | [<NSFW, click to see>](2720/previews/nude2.png) |  |  |
| 2380 | 0.558 | [Download](2380/maihama_ayumu_theidolmstermillionlive.zip) |  |  |  |  |  | [<NSFW, click to see>](2380/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2380/previews/nude.png) | [<NSFW, click to see>](2380/previews/nude2.png) |  |  |
| 2040 | 0.595 | [Download](2040/maihama_ayumu_theidolmstermillionlive.zip) |  |  |  |  |  | [<NSFW, click to see>](2040/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2040/previews/nude.png) | [<NSFW, click to see>](2040/previews/nude2.png) |  |  |
| 1700 | 0.529 | [Download](1700/maihama_ayumu_theidolmstermillionlive.zip) |  |  |  |  |  | [<NSFW, click to see>](1700/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1700/previews/nude.png) | [<NSFW, click to see>](1700/previews/nude2.png) |  |  |
| 1360 | 0.385 | [Download](1360/maihama_ayumu_theidolmstermillionlive.zip) |  |  |  |  |  | [<NSFW, click to see>](1360/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1360/previews/nude.png) | [<NSFW, click to see>](1360/previews/nude2.png) |  |  |
| 1020 | 0.482 | [Download](1020/maihama_ayumu_theidolmstermillionlive.zip) |  |  |  |  |  | [<NSFW, click to see>](1020/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1020/previews/nude.png) | [<NSFW, click to see>](1020/previews/nude2.png) |  |  |
| 680 | 0.342 | [Download](680/maihama_ayumu_theidolmstermillionlive.zip) |  |  |  |  |  | [<NSFW, click to see>](680/previews/bondage.png) |  |  |  | [<NSFW, click to see>](680/previews/nude.png) | [<NSFW, click to see>](680/previews/nude2.png) |  |  |
| 340 | 0.234 | [Download](340/maihama_ayumu_theidolmstermillionlive.zip) |  |  |  |  |  | [<NSFW, click to see>](340/previews/bondage.png) |  |  |  | [<NSFW, click to see>](340/previews/nude.png) | [<NSFW, click to see>](340/previews/nude2.png) |  |  |
|
almaghrabima/NER-TQ-llama-2-7b
|
almaghrabima
| 2023-09-24T19:24:04Z | 7 | 2 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-09-23T07:15:39Z |
---
language:
- en
---
## Usage of this model:
This model fine-tunes Llama 2 (7B) for Named Entity Recognition (NER) on a custom dataset. NER is a core natural language processing task whose objective is to detect and categorize entities such as Product Name Trademarks, Countries, Harmonized System (HS) Codes and their descriptions, Manufacturers, and Model Numbers.
|
OpenDILabCommunity/Hopper-v3-PPO
|
OpenDILabCommunity
| 2023-09-24T19:16:49Z | 0 | 1 |
pytorch
|
[
"pytorch",
"deep-reinforcement-learning",
"reinforcement-learning",
"DI-engine",
"Hopper-v3",
"en",
"license:apache-2.0",
"region:us"
] |
reinforcement-learning
| 2023-04-13T15:02:01Z |
---
language: en
license: apache-2.0
library_name: pytorch
tags:
- deep-reinforcement-learning
- reinforcement-learning
- DI-engine
- Hopper-v3
benchmark_name: OpenAI/Gym/MuJoCo
task_name: Hopper-v3
pipeline_tag: reinforcement-learning
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: OpenAI/Gym/MuJoCo-Hopper-v3
type: OpenAI/Gym/MuJoCo-Hopper-v3
metrics:
- type: mean_reward
value: 3795.27 +/- 26.06
name: mean_reward
---
# Play **Hopper-v3** with **PPO** Policy
## Model Description
<!-- Provide a longer summary of what this model is. -->
This is a simple **PPO** implementation for the OpenAI/Gym/MuJoCo **Hopper-v3** task, built with the [DI-engine library](https://github.com/opendilab/di-engine) and the [DI-zoo](https://github.com/opendilab/DI-engine/tree/main/dizoo).
**DI-engine** is a Python library for solving general decision-intelligence problems, based on reinforcement learning implementations in PyTorch and JAX. It aims to standardize the reinforcement learning workflow across different algorithms, benchmarks, and environments, and to support both academic research and prototype applications. In addition, self-customized training pipelines and applications can be built by reusing the different abstraction levels of the DI-engine framework.
## Model Usage
### Install the Dependencies
<details close>
<summary>(Click for Details)</summary>
```shell
# install huggingface_ding
git clone https://github.com/opendilab/huggingface_ding.git
pip3 install -e ./huggingface_ding/
# install environment dependencies if needed
sudo apt update -y && sudo apt install -y build-essential libgl1-mesa-dev libgl1-mesa-glx libglew-dev libosmesa6-dev libglfw3 libglfw3-dev libsdl2-dev libsdl2-image-dev libglm-dev libfreetype6-dev patchelf
mkdir -p ~/.mujoco
wget https://mujoco.org/download/mujoco210-linux-x86_64.tar.gz -O mujoco.tar.gz
tar -xf mujoco.tar.gz -C ~/.mujoco
echo "export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:~/.mujoco/mjpro210/bin:~/.mujoco/mujoco210/bin" >> ~/.bashrc
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:~/.mujoco/mjpro210/bin:~/.mujoco/mujoco210/bin
pip3 install "cython<3"
pip3 install DI-engine[common_env,video]
```
</details>
### Git Clone from Huggingface and Run the Model
<details close>
<summary>(Click for Details)</summary>
```shell
# running with trained model
python3 -u run.py
```
**run.py**
```python
from ding.bonus import PPOF
from ding.config import Config
from easydict import EasyDict
import torch
# Pull model from files which are git cloned from huggingface
policy_state_dict = torch.load("pytorch_model.bin", map_location=torch.device("cpu"))
cfg = EasyDict(Config.file_to_dict("policy_config.py").cfg_dict)
# Instantiate the agent
agent = PPOF(env_id="Hopper-v3", exp_name="Hopper-v3-PPO", cfg=cfg.exp_config, policy_state_dict=policy_state_dict)
# Continue training
agent.train(step=5000)
# Render the new agent performance
agent.deploy(enable_save_replay=True)
```
</details>
### Run Model by Using Huggingface_ding
<details close>
<summary>(Click for Details)</summary>
```shell
# running with trained model
python3 -u run.py
```
**run.py**
```python
from ding.bonus import PPOF
from huggingface_ding import pull_model_from_hub
# Pull model from Huggingface hub
policy_state_dict, cfg = pull_model_from_hub(repo_id="OpenDILabCommunity/Hopper-v3-PPO")
# Instantiate the agent
agent = PPOF(env_id="Hopper-v3", exp_name="Hopper-v3-PPO", cfg=cfg.exp_config, policy_state_dict=policy_state_dict)
# Continue training
agent.train(step=5000)
# Render the new agent performance
agent.deploy(enable_save_replay=True)
```
</details>
## Model Training
### Train the Model and Push to Huggingface_hub
<details close>
<summary>(Click for Details)</summary>
```shell
# Train your own agent
python3 -u train.py
```
**train.py**
```python
from ding.bonus import PPOF
from huggingface_ding import push_model_to_hub
# Instantiate the agent
agent = PPOF(env_id="Hopper-v3", exp_name="Hopper-v3-PPO")
# Train the agent
return_ = agent.train(step=int(10000000), collector_env_num=4, evaluator_env_num=4, debug=False)
# Push model to huggingface hub
push_model_to_hub(
agent=agent.best,
env_name="OpenAI/Gym/MuJoCo",
task_name="Hopper-v3",
algo_name="PPO",
wandb_url=return_.wandb_url,
github_repo_url="https://github.com/opendilab/DI-engine",
github_doc_model_url="https://di-engine-docs.readthedocs.io/en/latest/12_policies/ppo.html",
github_doc_env_url="https://di-engine-docs.readthedocs.io/en/latest/13_envs/mujoco.html",
installation_guide='''
sudo apt update -y \
&& sudo apt install -y \
build-essential \
libgl1-mesa-dev \
libgl1-mesa-glx \
libglew-dev \
libosmesa6-dev \
libglfw3 \
libglfw3-dev \
libsdl2-dev \
libsdl2-image-dev \
libglm-dev \
libfreetype6-dev \
patchelf
mkdir -p ~/.mujoco
wget https://mujoco.org/download/mujoco210-linux-x86_64.tar.gz -O mujoco.tar.gz
tar -xf mujoco.tar.gz -C ~/.mujoco
echo "export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:~/.mujoco/mjpro210/bin:~/.mujoco/mujoco210/bin" >> ~/.bashrc
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:~/.mujoco/mjpro210/bin:~/.mujoco/mujoco210/bin
pip3 install "cython<3"
pip3 install DI-engine[common_env,video]
''',
usage_file_by_git_clone="./ppo/hopper_ppo_deploy.py",
usage_file_by_huggingface_ding="./ppo/hopper_ppo_download.py",
train_file="./ppo/hopper_ppo.py",
repo_id="OpenDILabCommunity/Hopper-v3-PPO",
create_repo=False
)
```
</details>
**Configuration**
<details close>
<summary>(Click for Details)</summary>
```python
exp_config = {
'type': 'ppo',
'on_policy': True,
'cuda': True,
'action_space': 'continuous',
'discount_factor': 0.99,
'gae_lambda': 0.95,
'epoch_per_collect': 10,
'batch_size': 320,
'learning_rate': 0.0003,
'lr_scheduler': None,
'weight_decay': 0,
'value_weight': 0.5,
'entropy_weight': 0.01,
'clip_ratio': 0.2,
'adv_norm': True,
'value_norm': 'baseline',
'ppo_param_init': True,
'grad_norm': 0.5,
'n_sample': 3200,
'unroll_len': 1,
'deterministic_eval': True,
'model': {},
'cfg_type': 'PPOFPolicyDict',
'env_id': 'Hopper-v3',
'exp_name': 'Hopper-v3-PPO'
}
```
</details>
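As a quick sanity check on the configuration above, the number of gradient updates per collection cycle follows directly from `n_sample`, `batch_size`, and `epoch_per_collect` (an illustrative calculation, not part of the original card):

```python
# Illustrative: derive PPO update counts from the config values above.
n_sample = 3200          # transitions collected per cycle
batch_size = 320         # minibatch size
epoch_per_collect = 10   # passes over each collected batch

minibatches_per_epoch = n_sample // batch_size                   # 10 minibatches
updates_per_collect = minibatches_per_epoch * epoch_per_collect  # 100 gradient steps
print(minibatches_per_epoch, updates_per_collect)  # 10 100
```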
**Training Procedure**
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
- **Weights & Biases (wandb):** [monitor link](https://wandb.ai/zjowowen/Hopper-v3-PPO)
## Model Information
<!-- Provide the basic links for the model. -->
- **Github Repository:** [repo link](https://github.com/opendilab/DI-engine)
- **Doc**: [DI-engine-docs Algorithm link](https://di-engine-docs.readthedocs.io/en/latest/12_policies/ppo.html)
- **Configuration:** [config link](https://huggingface.co/OpenDILabCommunity/Hopper-v3-PPO/blob/main/policy_config.py)
- **Demo:** [video](https://huggingface.co/OpenDILabCommunity/Hopper-v3-PPO/blob/main/replay.mp4)
<!-- Provide the size information for the model. -->
- **Parameters total size:** 375.3 KB
- **Last Update Date:** 2023-09-24
## Environments
<!-- Address questions around what environment the model is intended to be trained and deployed at, including the necessary information needed to be provided for future users. -->
- **Benchmark:** OpenAI/Gym/MuJoCo
- **Task:** Hopper-v3
- **Gym version:** 0.25.1
- **DI-engine version:** v0.4.9
- **PyTorch version:** 2.0.1+cu117
- **Doc**: [DI-engine-docs Environments link](https://di-engine-docs.readthedocs.io/en/latest/13_envs/mujoco.html)
|
gaioNL/poca-SoccerTwos
|
gaioNL
| 2023-09-24T18:52:56Z | 1 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SoccerTwos",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] |
reinforcement-learning
| 2023-09-24T18:51:33Z |
---
library_name: ml-agents
tags:
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: gaioNL/poca-SoccerTwos
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
CyberHarem/tokoro_megumi_theidolmstermillionlive
|
CyberHarem
| 2023-09-24T18:38:47Z | 0 | 0 | null |
[
"art",
"text-to-image",
"dataset:CyberHarem/tokoro_megumi_theidolmstermillionlive",
"license:mit",
"region:us"
] |
text-to-image
| 2023-09-24T18:26:28Z |
---
license: mit
datasets:
- CyberHarem/tokoro_megumi_theidolmstermillionlive
pipeline_tag: text-to-image
tags:
- art
---
# Lora of tokoro_megumi_theidolmstermillionlive
This model was trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion); the auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs).
After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora.
For example, if you want to use the model from step 4320, you need to download `4320/tokoro_megumi_theidolmstermillionlive.pt` as the embedding and `4320/tokoro_megumi_theidolmstermillionlive.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters.
**The best step we recommend is 4320**, with a score of 0.850. The trigger words are:
1. `tokoro_megumi_theidolmstermillionlive`
2. `long_hair, brown_hair, blush, blue_eyes, ahoge, breasts, smile, bangs, open_mouth, large_breasts`
We do not recommend this model for the following groups, with our apologies:
1. Individuals who cannot tolerate any deviation from the original character design, even in the slightest detail.
2. Individuals whose application scenarios demand high accuracy in recreating character outfits.
3. Individuals who cannot accept the inherent randomness of AI-generated images based on the Stable Diffusion algorithm.
4. Individuals who are uncomfortable with the fully automated process of training character models using LoRA, or who believe that character models must be trained purely through manual operations out of respect for the characters.
5. Individuals who find the generated image content offensive to their values.
These are available steps:
| Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | pattern_5 | pattern_6 | pattern_7 | pattern_8 | pattern_9 | pattern_10 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata |
|:---------|:----------|:---------------------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:----------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:--------------------------------------------------|:-------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------|
| 8100 | 0.757 | [Download](8100/tokoro_megumi_theidolmstermillionlive.zip) |  |  | [<NSFW, click to see>](8100/previews/pattern_3.png) |  |  |  |  |  |  |  | [<NSFW, click to see>](8100/previews/bikini.png) | [<NSFW, click to see>](8100/previews/bondage.png) |  |  |  | [<NSFW, click to see>](8100/previews/nude.png) | [<NSFW, click to see>](8100/previews/nude2.png) |  |  |
| 7560 | 0.776 | [Download](7560/tokoro_megumi_theidolmstermillionlive.zip) |  |  | [<NSFW, click to see>](7560/previews/pattern_3.png) |  |  |  |  |  |  |  | [<NSFW, click to see>](7560/previews/bikini.png) | [<NSFW, click to see>](7560/previews/bondage.png) |  |  |  | [<NSFW, click to see>](7560/previews/nude.png) | [<NSFW, click to see>](7560/previews/nude2.png) |  |  |
| 7020 | 0.759 | [Download](7020/tokoro_megumi_theidolmstermillionlive.zip) |  |  | [<NSFW, click to see>](7020/previews/pattern_3.png) |  |  |  |  |  |  |  | [<NSFW, click to see>](7020/previews/bikini.png) | [<NSFW, click to see>](7020/previews/bondage.png) |  |  |  | [<NSFW, click to see>](7020/previews/nude.png) | [<NSFW, click to see>](7020/previews/nude2.png) |  |  |
| 6480 | 0.770 | [Download](6480/tokoro_megumi_theidolmstermillionlive.zip) |  |  | [<NSFW, click to see>](6480/previews/pattern_3.png) |  |  |  |  |  |  |  | [<NSFW, click to see>](6480/previews/bikini.png) | [<NSFW, click to see>](6480/previews/bondage.png) |  |  |  | [<NSFW, click to see>](6480/previews/nude.png) | [<NSFW, click to see>](6480/previews/nude2.png) |  |  |
| 5940 | 0.751 | [Download](5940/tokoro_megumi_theidolmstermillionlive.zip) |  |  | [<NSFW, click to see>](5940/previews/pattern_3.png) |  |  |  |  |  |  |  | [<NSFW, click to see>](5940/previews/bikini.png) | [<NSFW, click to see>](5940/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5940/previews/nude.png) | [<NSFW, click to see>](5940/previews/nude2.png) |  |  |
| 5400 | 0.696 | [Download](5400/tokoro_megumi_theidolmstermillionlive.zip) |  |  | [<NSFW, click to see>](5400/previews/pattern_3.png) |  |  |  |  |  |  |  | [<NSFW, click to see>](5400/previews/bikini.png) | [<NSFW, click to see>](5400/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5400/previews/nude.png) | [<NSFW, click to see>](5400/previews/nude2.png) |  |  |
| 4860 | 0.760 | [Download](4860/tokoro_megumi_theidolmstermillionlive.zip) |  |  | [<NSFW, click to see>](4860/previews/pattern_3.png) |  |  |  |  |  |  |  | [<NSFW, click to see>](4860/previews/bikini.png) | [<NSFW, click to see>](4860/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4860/previews/nude.png) | [<NSFW, click to see>](4860/previews/nude2.png) |  |  |
| **4320** | **0.850** | [**Download**](4320/tokoro_megumi_theidolmstermillionlive.zip) |  |  | [<NSFW, click to see>](4320/previews/pattern_3.png) |  |  |  |  |  |  |  | [<NSFW, click to see>](4320/previews/bikini.png) | [<NSFW, click to see>](4320/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4320/previews/nude.png) | [<NSFW, click to see>](4320/previews/nude2.png) |  |  |
| 3780 | 0.792 | [Download](3780/tokoro_megumi_theidolmstermillionlive.zip) |  |  | [<NSFW, click to see>](3780/previews/pattern_3.png) |  |  |  |  |  |  |  | [<NSFW, click to see>](3780/previews/bikini.png) | [<NSFW, click to see>](3780/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3780/previews/nude.png) | [<NSFW, click to see>](3780/previews/nude2.png) |  |  |
| 3240 | 0.815 | [Download](3240/tokoro_megumi_theidolmstermillionlive.zip) |  |  | [<NSFW, click to see>](3240/previews/pattern_3.png) |  |  |  |  |  |  |  | [<NSFW, click to see>](3240/previews/bikini.png) | [<NSFW, click to see>](3240/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3240/previews/nude.png) | [<NSFW, click to see>](3240/previews/nude2.png) |  |  |
| 2700 | 0.760 | [Download](2700/tokoro_megumi_theidolmstermillionlive.zip) |  |  | [<NSFW, click to see>](2700/previews/pattern_3.png) |  |  |  |  |  |  |  | [<NSFW, click to see>](2700/previews/bikini.png) | [<NSFW, click to see>](2700/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2700/previews/nude.png) | [<NSFW, click to see>](2700/previews/nude2.png) |  |  |
| 2160 | 0.813 | [Download](2160/tokoro_megumi_theidolmstermillionlive.zip) |  |  | [<NSFW, click to see>](2160/previews/pattern_3.png) |  |  |  |  |  |  |  | [<NSFW, click to see>](2160/previews/bikini.png) | [<NSFW, click to see>](2160/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2160/previews/nude.png) | [<NSFW, click to see>](2160/previews/nude2.png) |  |  |
| 1620 | 0.829 | [Download](1620/tokoro_megumi_theidolmstermillionlive.zip) |  |  | [<NSFW, click to see>](1620/previews/pattern_3.png) |  |  |  |  |  |  |  | [<NSFW, click to see>](1620/previews/bikini.png) | [<NSFW, click to see>](1620/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1620/previews/nude.png) | [<NSFW, click to see>](1620/previews/nude2.png) |  |  |
| 1080 | 0.539 | [Download](1080/tokoro_megumi_theidolmstermillionlive.zip) |  |  | [<NSFW, click to see>](1080/previews/pattern_3.png) |  |  |  |  |  |  |  | [<NSFW, click to see>](1080/previews/bikini.png) | [<NSFW, click to see>](1080/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1080/previews/nude.png) | [<NSFW, click to see>](1080/previews/nude2.png) |  |  |
| 540 | 0.334 | [Download](540/tokoro_megumi_theidolmstermillionlive.zip) |  |  | [<NSFW, click to see>](540/previews/pattern_3.png) |  |  |  |  |  |  |  | [<NSFW, click to see>](540/previews/bikini.png) | [<NSFW, click to see>](540/previews/bondage.png) |  |  |  | [<NSFW, click to see>](540/previews/nude.png) | [<NSFW, click to see>](540/previews/nude2.png) |  |  |
|
ironchanchellor/segformer-b0_DsMetalDam_Augmented_Cropped
|
ironchanchellor
| 2023-09-24T18:35:58Z | 31 | 0 |
transformers
|
[
"transformers",
"pytorch",
"segformer",
"generated_from_trainer",
"base_model:nvidia/mit-b0",
"base_model:finetune:nvidia/mit-b0",
"license:other",
"endpoints_compatible",
"region:us"
] | null | 2023-09-24T17:39:01Z |
---
license: other
base_model: nvidia/mit-b0
tags:
- generated_from_trainer
model-index:
- name: segformer-b0_DsMetalDam_Augmented_Cropped
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# segformer-b0_DsMetalDam_Augmented_Cropped
This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2486
- Mean Iou: 0.6867
- Mean Accuracy: 0.7623
- Overall Accuracy: 0.9106
- Accuracy Matrix: 0.8910
- Accuracy Austenite: 0.9442
- Accuracy Martensite/austenite: 0.8061
- Accuracy Precipitate: 0.2109
- Accuracy Defect: 0.9591
- Iou Matrix: 0.8022
- Iou Austenite: 0.8886
- Iou Martensite/austenite: 0.6946
- Iou Precipitate: 0.1697
- Iou Defect: 0.8786
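The reported means are consistent with an unweighted average over the five classes — a quick check, computed here purely for illustration:

```python
# Verify that Mean IoU / Mean Accuracy equal the unweighted class averages.
ious = [0.8022, 0.8886, 0.6946, 0.1697, 0.8786]  # matrix, austenite, M/A, precipitate, defect
accs = [0.8910, 0.9442, 0.8061, 0.2109, 0.9591]

mean_iou = sum(ious) / len(ious)
mean_acc = sum(accs) / len(accs)
print(round(mean_iou, 4), round(mean_acc, 4))  # 0.6867 0.7623
```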
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Accuracy Matrix | Accuracy Austenite | Accuracy Martensite/austenite | Accuracy Precipitate | Accuracy Defect | Iou Matrix | Iou Austenite | Iou Martensite/austenite | Iou Precipitate | Iou Defect |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:-------------:|:----------------:|:---------------:|:------------------:|:-----------------------------:|:--------------------:|:---------------:|:----------:|:-------------:|:------------------------:|:---------------:|:----------:|
| 0.2546 | 1.0 | 343 | 0.3220 | 0.5965 | 0.6868 | 0.8757 | 0.8517 | 0.9218 | 0.7201 | 0.0 | 0.9404 | 0.7384 | 0.8585 | 0.5502 | 0.0 | 0.8353 |
| 0.336 | 2.0 | 686 | 0.3159 | 0.5992 | 0.6766 | 0.8807 | 0.8816 | 0.9295 | 0.6220 | 0.0 | 0.9497 | 0.7474 | 0.8627 | 0.5429 | 0.0 | 0.8428 |
| 0.2976 | 3.0 | 1029 | 0.3087 | 0.6057 | 0.6971 | 0.8807 | 0.8383 | 0.9325 | 0.7561 | 0.0000 | 0.9583 | 0.7412 | 0.8629 | 0.5833 | 0.0000 | 0.8411 |
| 0.2791 | 4.0 | 1372 | 0.2907 | 0.6175 | 0.6995 | 0.8886 | 0.8717 | 0.9290 | 0.7401 | 0.0016 | 0.9548 | 0.7608 | 0.8674 | 0.6070 | 0.0016 | 0.8507 |
| 0.2795 | 5.0 | 1715 | 0.2883 | 0.6264 | 0.7025 | 0.8906 | 0.8675 | 0.9369 | 0.7303 | 0.0291 | 0.9489 | 0.7630 | 0.8689 | 0.6135 | 0.0283 | 0.8584 |
| 0.2215 | 6.0 | 2058 | 0.2845 | 0.6316 | 0.7081 | 0.8924 | 0.8873 | 0.9252 | 0.7457 | 0.0452 | 0.9373 | 0.7700 | 0.8700 | 0.6212 | 0.0431 | 0.8536 |
| 0.2372 | 7.0 | 2401 | 0.2770 | 0.6343 | 0.7197 | 0.8931 | 0.8565 | 0.9373 | 0.7906 | 0.0492 | 0.9651 | 0.7657 | 0.8715 | 0.6365 | 0.0472 | 0.8504 |
| 0.3055 | 8.0 | 2744 | 0.2742 | 0.6337 | 0.7201 | 0.8950 | 0.8835 | 0.9220 | 0.8026 | 0.0324 | 0.9603 | 0.7742 | 0.8728 | 0.6413 | 0.0317 | 0.8482 |
| 0.2047 | 9.0 | 3087 | 0.2680 | 0.6497 | 0.7251 | 0.8982 | 0.8733 | 0.9384 | 0.7786 | 0.0884 | 0.9468 | 0.7765 | 0.8766 | 0.6500 | 0.0819 | 0.8634 |
| 0.1705 | 10.0 | 3430 | 0.2675 | 0.6489 | 0.7328 | 0.8987 | 0.8744 | 0.9336 | 0.8043 | 0.0862 | 0.9654 | 0.7793 | 0.8767 | 0.6531 | 0.0802 | 0.8550 |
| 0.2029 | 11.0 | 3773 | 0.2685 | 0.6523 | 0.7267 | 0.9003 | 0.8751 | 0.9443 | 0.7596 | 0.0958 | 0.9589 | 0.7812 | 0.8779 | 0.6536 | 0.0890 | 0.8600 |
| 0.1707 | 12.0 | 4116 | 0.2612 | 0.6591 | 0.7360 | 0.9015 | 0.8866 | 0.9324 | 0.7982 | 0.1097 | 0.9532 | 0.7853 | 0.8788 | 0.6639 | 0.0995 | 0.8679 |
| 0.2742 | 13.0 | 4459 | 0.2628 | 0.6512 | 0.7247 | 0.9022 | 0.8756 | 0.9442 | 0.7781 | 0.0666 | 0.9593 | 0.7847 | 0.8797 | 0.6635 | 0.0633 | 0.8651 |
| 0.2991 | 14.0 | 4802 | 0.2702 | 0.6653 | 0.7404 | 0.9025 | 0.8909 | 0.9368 | 0.7673 | 0.1492 | 0.9578 | 0.7870 | 0.8799 | 0.6627 | 0.1247 | 0.8722 |
| 0.229 | 15.0 | 5145 | 0.2599 | 0.6615 | 0.7395 | 0.9026 | 0.8723 | 0.9463 | 0.7800 | 0.1303 | 0.9687 | 0.7850 | 0.8798 | 0.6682 | 0.1143 | 0.8604 |
| 0.2004 | 16.0 | 5488 | 0.2595 | 0.6719 | 0.7473 | 0.9042 | 0.8854 | 0.9398 | 0.7863 | 0.1735 | 0.9513 | 0.7898 | 0.8814 | 0.6719 | 0.1442 | 0.8721 |
| 0.1944 | 17.0 | 5831 | 0.2564 | 0.6729 | 0.7486 | 0.9058 | 0.8940 | 0.9368 | 0.7895 | 0.1693 | 0.9536 | 0.7936 | 0.8830 | 0.6778 | 0.1418 | 0.8685 |
| 0.2068 | 18.0 | 6174 | 0.2539 | 0.6664 | 0.7450 | 0.9061 | 0.8915 | 0.9362 | 0.8051 | 0.1245 | 0.9677 | 0.7940 | 0.8839 | 0.6801 | 0.1102 | 0.8641 |
| 0.2461 | 19.0 | 6517 | 0.2494 | 0.6776 | 0.7603 | 0.9063 | 0.8756 | 0.9427 | 0.8251 | 0.1941 | 0.9642 | 0.7927 | 0.8854 | 0.6800 | 0.1585 | 0.8712 |
| 0.2252 | 20.0 | 6860 | 0.2498 | 0.6733 | 0.7461 | 0.9074 | 0.8813 | 0.9452 | 0.8043 | 0.1456 | 0.9542 | 0.7947 | 0.8856 | 0.6843 | 0.1284 | 0.8736 |
| 0.1975 | 21.0 | 7203 | 0.2519 | 0.6761 | 0.7516 | 0.9084 | 0.8960 | 0.9386 | 0.7992 | 0.1656 | 0.9585 | 0.7989 | 0.8861 | 0.6862 | 0.1412 | 0.8679 |
| 0.2356 | 22.0 | 7546 | 0.2506 | 0.6801 | 0.7526 | 0.9087 | 0.8956 | 0.9396 | 0.7972 | 0.1764 | 0.9542 | 0.7991 | 0.8858 | 0.6890 | 0.1486 | 0.8779 |
| 0.1838 | 23.0 | 7889 | 0.2510 | 0.6805 | 0.7554 | 0.9088 | 0.8835 | 0.9455 | 0.8068 | 0.1824 | 0.9589 | 0.7978 | 0.8867 | 0.6892 | 0.1516 | 0.8773 |
| 0.1576 | 24.0 | 8232 | 0.2511 | 0.6850 | 0.7658 | 0.9091 | 0.8913 | 0.9418 | 0.8021 | 0.2291 | 0.9650 | 0.7996 | 0.8868 | 0.6891 | 0.1765 | 0.8731 |
| 0.1504 | 25.0 | 8575 | 0.2505 | 0.6819 | 0.7590 | 0.9092 | 0.8869 | 0.9439 | 0.8077 | 0.1916 | 0.9650 | 0.7992 | 0.8873 | 0.6890 | 0.1587 | 0.8751 |
| 0.2196 | 26.0 | 8918 | 0.2530 | 0.6830 | 0.7597 | 0.9095 | 0.8946 | 0.9405 | 0.8035 | 0.1985 | 0.9612 | 0.8010 | 0.8872 | 0.6900 | 0.1610 | 0.8756 |
| 0.1781 | 27.0 | 9261 | 0.2509 | 0.6841 | 0.7596 | 0.9101 | 0.8901 | 0.9451 | 0.7993 | 0.2000 | 0.9635 | 0.8010 | 0.8880 | 0.6930 | 0.1635 | 0.8749 |
| 0.1578 | 28.0 | 9604 | 0.2485 | 0.6831 | 0.7591 | 0.9102 | 0.8874 | 0.9457 | 0.8064 | 0.1912 | 0.9651 | 0.8008 | 0.8882 | 0.6942 | 0.1585 | 0.8740 |
| 0.1931 | 29.0 | 9947 | 0.2495 | 0.6840 | 0.7579 | 0.9105 | 0.8893 | 0.9454 | 0.8042 | 0.1899 | 0.9604 | 0.8016 | 0.8884 | 0.6940 | 0.1580 | 0.8779 |
| 0.1582 | 30.0 | 10290 | 0.2486 | 0.6867 | 0.7623 | 0.9106 | 0.8910 | 0.9442 | 0.8061 | 0.2109 | 0.9591 | 0.8022 | 0.8886 | 0.6946 | 0.1697 | 0.8786 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
sidd272/distilbert-base-uncased-lora-AI_generated-classification
|
sidd272
| 2023-09-24T18:30:34Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-09-24T18:30:32Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0
|
pabloyesteb/ppo-LunarLander-v2
|
pabloyesteb
| 2023-09-24T18:08:44Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"tensorboard",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-04-03T18:15:18Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 270.37 +/- 16.26
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
surajyadav91/llama2_prompt_tuning_sql
|
surajyadav91
| 2023-09-24T17:58:22Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-09-24T17:58:22Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.6.0.dev0
|
aminh/squad-falcon-7b-v2
|
aminh
| 2023-09-24T17:51:00Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-09-24T17:50:41Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.6.0.dev0
|
barto17/output
|
barto17
| 2023-09-24T17:40:56Z | 194 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-09-24T17:29:14Z |
---
base_model: DistilBertForSequenceClassification
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned-imdb
This model is a fine-tuned version of [DistilBertForSequenceClassification](https://huggingface.co/DistilBertForSequenceClassification) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4994
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.4041 | 1.0 | 625 | 0.2722 |
| 0.2358 | 2.0 | 1250 | 0.3961 |
| 0.1243 | 3.0 | 1875 | 0.4994 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
ayoubkirouane/Segments-Sidewalk-SegFormer-B0
|
ayoubkirouane
| 2023-09-24T17:40:06Z | 196 | 0 |
transformers
|
[
"transformers",
"pytorch",
"segformer",
"vision",
"image-segmentation",
"generated_from_trainer",
"en",
"dataset:segments/sidewalk-semantic",
"license:other",
"endpoints_compatible",
"region:us"
] |
image-segmentation
| 2023-09-24T13:58:49Z |
---
tags:
- vision
- image-segmentation
- generated_from_trainer
model-index:
- name: Segments-Sidewalk-SegFormer-B0
results: []
datasets:
- segments/sidewalk-semantic
pipeline_tag: image-segmentation
license: other
language:
- en
library_name: transformers
---
## Model Details
+ **Model Name**: Segments-Sidewalk-SegFormer-B0
+ **Model Type**: Semantic Segmentation
+ **Base Model**: nvidia/segformer-b0-finetuned-ade-512-512
+ **Fine-Tuning Dataset**: Sidewalk-Semantic
## Model Description
The **Segments-Sidewalk-SegFormer-B0** model is a semantic segmentation model fine-tuned on the **sidewalk-semantic** dataset. It is based on the **SegFormer (b0-sized)** architecture and has been adapted for the task of segmenting sidewalk images into various classes, such as road surfaces, pedestrians, vehicles, and more.
## Model Architecture
The model architecture is based on SegFormer, which utilizes a **hierarchical Transformer Encoder and a lightweight all-MLP decoder head**. This architecture has been proven effective in semantic segmentation tasks, and fine-tuning on the 'sidewalk-semantic' dataset allows it to learn to segment sidewalk images accurately.
## Intended Uses
The **Segments-Sidewalk-SegFormer-B0** model can be used for various applications in the context of sidewalk image analysis and understanding.
**Some of the intended use cases include:**
+ **Semantic Segmentation**: Use the model to perform pixel-level classification of sidewalk images, enabling the identification of different objects and features in the images, such as road surfaces, pedestrians, vehicles, and construction elements.
+ **Urban Planning**: The model can assist in urban planning tasks by providing detailed information about sidewalk infrastructure, helping city planners make informed decisions.
+ **Autonomous Navigation**: Deploy the model in autonomous vehicles or robots to enhance their understanding of the sidewalk environment, aiding in safe navigation.

## Limitations
+ **Resolution Dependency**: The model's performance may be sensitive to the resolution of the input images. Fine-tuning was performed at a specific resolution, so using significantly different resolutions may require additional adjustments.
+ **Hardware Requirements**: Inference with deep learning models can be computationally intensive, requiring access to GPUs or other specialized hardware for real-time or efficient processing.
## Ethical Considerations
When using and deploying the **Segments-Sidewalk-SegFormer-B0** model, consider the following ethical considerations:
+ **Bias and Fairness**: Carefully evaluate the dataset for biases that may be present and address them to avoid unfair or discriminatory outcomes in predictions, especially when dealing with human-related classes (e.g., pedestrians).
+ **Privacy**: Be mindful of privacy concerns when processing sidewalk images, as they may contain personally identifiable information or capture private locations. Appropriate data anonymization and consent mechanisms should be in place.
+ **Transparency**: Clearly communicate the model's capabilities and limitations to end-users and stakeholders, ensuring they understand the model's potential errors and uncertainties.
+ **Regulatory Compliance**: Adhere to local and national regulations regarding the collection and processing of sidewalk images, especially if the data involves public spaces or private property.
+ **Accessibility**: Ensure that the model's outputs and applications are accessible to individuals with disabilities and do not exclude any user group.
## Usage
```python
# Load model directly
from transformers import AutoFeatureExtractor, SegformerForSemanticSegmentation
extractor = AutoFeatureExtractor.from_pretrained("ayoubkirouane/Segments-Sidewalk-SegFormer-B0")
model = SegformerForSemanticSegmentation.from_pretrained("ayoubkirouane/Segments-Sidewalk-SegFormer-B0")
```
|
simlamkr1/output
|
simlamkr1
| 2023-09-24T17:36:40Z | 60 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"llama",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-09-24T16:50:06Z |
---
tags:
- generated_from_trainer
model-index:
- name: output
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# output
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10
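For reference, the `total_train_batch_size` of 32 follows directly from the per-device batch size and gradient accumulation (illustrative arithmetic; single-device training is assumed):

```python
train_batch_size = 8
gradient_accumulation_steps = 4
num_devices = 1  # assumption: one GPU

# Effective batch size per optimizer step.
total_train_batch_size = train_batch_size * gradient_accumulation_steps * num_devices
print(total_train_batch_size)  # 32
```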
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
CyberHarem/kinoshita_hinata_theidolmstermillionlive
|
CyberHarem
| 2023-09-24T17:33:34Z | 0 | 0 | null |
[
"art",
"text-to-image",
"dataset:CyberHarem/kinoshita_hinata_theidolmstermillionlive",
"license:mit",
"region:us"
] |
text-to-image
| 2023-09-24T17:23:31Z |
---
license: mit
datasets:
- CyberHarem/kinoshita_hinata_theidolmstermillionlive
pipeline_tag: text-to-image
tags:
- art
---
# Lora of kinoshita_hinata_theidolmstermillionlive
This model was trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion); the auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs).
The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11).
After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora.
For example, if you want to use the model from step 3400, you need to download `3400/kinoshita_hinata_theidolmstermillionlive.pt` as the embedding and `3400/kinoshita_hinata_theidolmstermillionlive.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters.
**The best step we recommend is 3400**, with a score of 0.904. The trigger words are:
1. `kinoshita_hinata_theidolmstermillionlive`
2. `brown_hair, short_hair, green_eyes, blush, smile, ahoge, open_mouth, :d`
This model is not recommended for the following groups, and we regret any inconvenience:
1. Individuals who cannot tolerate any deviation from the original character design, even in the slightest detail.
2. Individuals whose application scenarios demand high accuracy in recreating character outfits.
3. Individuals who cannot accept the potential randomness of AI-generated images based on the Stable Diffusion algorithm.
4. Individuals who are not comfortable with the fully automated process of training character models with LoRA, or who believe that character models must be trained purely by manual operation to avoid disrespecting the characters.
5. Individuals who find the generated image content offensive to their values.
These are available steps:
| Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata |
|:---------|:----------|:------------------------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------|:--------------------------------------------------|:-------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------|
| 5100 | 0.853 | [Download](5100/kinoshita_hinata_theidolmstermillionlive.zip) |  |  |  |  | [<NSFW, click to see>](5100/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5100/previews/nude.png) | [<NSFW, click to see>](5100/previews/nude2.png) |  |  |
| 4760 | 0.866 | [Download](4760/kinoshita_hinata_theidolmstermillionlive.zip) |  |  |  |  | [<NSFW, click to see>](4760/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4760/previews/nude.png) | [<NSFW, click to see>](4760/previews/nude2.png) |  |  |
| 4420 | 0.840 | [Download](4420/kinoshita_hinata_theidolmstermillionlive.zip) |  |  |  |  | [<NSFW, click to see>](4420/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4420/previews/nude.png) | [<NSFW, click to see>](4420/previews/nude2.png) |  |  |
| 4080 | 0.800 | [Download](4080/kinoshita_hinata_theidolmstermillionlive.zip) |  |  |  |  | [<NSFW, click to see>](4080/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4080/previews/nude.png) | [<NSFW, click to see>](4080/previews/nude2.png) |  |  |
| 3740 | 0.850 | [Download](3740/kinoshita_hinata_theidolmstermillionlive.zip) |  |  |  |  | [<NSFW, click to see>](3740/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3740/previews/nude.png) | [<NSFW, click to see>](3740/previews/nude2.png) |  |  |
| **3400** | **0.904** | [**Download**](3400/kinoshita_hinata_theidolmstermillionlive.zip) |  |  |  |  | [<NSFW, click to see>](3400/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3400/previews/nude.png) | [<NSFW, click to see>](3400/previews/nude2.png) |  |  |
| 3060 | 0.888 | [Download](3060/kinoshita_hinata_theidolmstermillionlive.zip) |  |  |  |  | [<NSFW, click to see>](3060/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3060/previews/nude.png) | [<NSFW, click to see>](3060/previews/nude2.png) |  |  |
| 2720 | 0.807 | [Download](2720/kinoshita_hinata_theidolmstermillionlive.zip) |  |  |  |  | [<NSFW, click to see>](2720/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2720/previews/nude.png) | [<NSFW, click to see>](2720/previews/nude2.png) |  |  |
| 2380 | 0.727 | [Download](2380/kinoshita_hinata_theidolmstermillionlive.zip) |  |  |  |  | [<NSFW, click to see>](2380/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2380/previews/nude.png) | [<NSFW, click to see>](2380/previews/nude2.png) |  |  |
| 2040 | 0.841 | [Download](2040/kinoshita_hinata_theidolmstermillionlive.zip) |  |  |  |  | [<NSFW, click to see>](2040/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2040/previews/nude.png) | [<NSFW, click to see>](2040/previews/nude2.png) |  |  |
| 1700 | 0.824 | [Download](1700/kinoshita_hinata_theidolmstermillionlive.zip) |  |  |  |  | [<NSFW, click to see>](1700/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1700/previews/nude.png) | [<NSFW, click to see>](1700/previews/nude2.png) |  |  |
| 1360 | 0.794 | [Download](1360/kinoshita_hinata_theidolmstermillionlive.zip) |  |  |  |  | [<NSFW, click to see>](1360/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1360/previews/nude.png) | [<NSFW, click to see>](1360/previews/nude2.png) |  |  |
| 1020 | 0.649 | [Download](1020/kinoshita_hinata_theidolmstermillionlive.zip) |  |  |  |  | [<NSFW, click to see>](1020/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1020/previews/nude.png) | [<NSFW, click to see>](1020/previews/nude2.png) |  |  |
| 680 | 0.764 | [Download](680/kinoshita_hinata_theidolmstermillionlive.zip) |  |  |  |  | [<NSFW, click to see>](680/previews/bondage.png) |  |  |  | [<NSFW, click to see>](680/previews/nude.png) | [<NSFW, click to see>](680/previews/nude2.png) |  |  |
| 340 | 0.589 | [Download](340/kinoshita_hinata_theidolmstermillionlive.zip) |  |  |  |  | [<NSFW, click to see>](340/previews/bondage.png) |  |  |  | [<NSFW, click to see>](340/previews/nude.png) | [<NSFW, click to see>](340/previews/nude2.png) |  |  |
|
PulpMud/Fortnite-SDXL
|
PulpMud
| 2023-09-24T17:24:09Z | 0 | 0 | null |
[
"sdxl",
"lora",
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-09-24T17:13:54Z |
---
license: apache-2.0
language:
- en
tags:
- sdxl
- lora
---
# Fortnite SDXL LoRA
A Fortnite SDXL LoRA for creating characters that look like they stepped straight out of Fortnite.
You can try it out here: https://replicate.com/decaid-studio/fortnite
## Model Details
### Model Description
- **Developed by:** Felix Leber @ decaid-studio
- **Model type:** LoRA
- **License:** apache-2.0
- **Finetuned from model:** SDXL-1.0
## Uses
Trigger: `In the style of frtnte,`
Recommended LoRA scale: 0.9
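At inference time, the LoRA scale blends the low-rank update into the base weights as W' = W + scale · (B · A). A toy, dependency-free sketch of that merge (illustrative only — not the actual SDXL loading code; scale 0.5 is used here just to keep the arithmetic exact, while 0.9 is the value recommended above):

```python
def merge_lora(w, a, b, scale=0.5):
    """Merge a rank-r LoRA update into a base weight matrix: W' = W + scale * (B @ A).

    w: base weights (m x n), b: down-projected factor (m x r), a: up factor (r x n),
    all as plain nested lists.
    """
    m, n, r = len(w), len(w[0]), len(a)
    return [[w[i][j] + scale * sum(b[i][k] * a[k][j] for k in range(r))
             for j in range(n)] for i in range(m)]

# Rank-1 example: B (2x1) @ A (1x2) added to a 2x2 identity base at scale 0.5.
base = [[1.0, 0.0], [0.0, 1.0]]
B = [[1.0], [2.0]]
A = [[1.0, 1.0]]
print(merge_lora(base, A, B, scale=0.5))  # → [[1.5, 0.5], [1.0, 2.0]]
```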
|
omiro/Reinforce-CartPole
|
omiro
| 2023-09-24T16:52:38Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-09-24T16:52:27Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
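REINFORCE is a Monte-Carlo policy-gradient method: it weights log-probability gradients by discounted returns. A minimal sketch of the return computation it relies on (illustrative only, not the training script used for this model; γ = 0.5 is chosen just to keep the numbers simple):

```python
def discounted_returns(rewards, gamma=0.99):
    """Compute G_t = r_t + gamma * G_{t+1} for each timestep, iterating backwards."""
    returns = []
    g = 0.0
    for r in reversed(rewards):
        g = r + gamma * g
        returns.append(g)
    returns.reverse()
    return returns

print(discounted_returns([1.0, 1.0, 1.0], gamma=0.5))  # → [1.75, 1.5, 1.0]
```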
|
Sanyam0605/poca-SoccerTwos
|
Sanyam0605
| 2023-09-24T16:45:45Z | 15 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SoccerTwos",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] |
reinforcement-learning
| 2023-09-24T16:45:17Z |
---
library_name: ml-agents
tags:
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: Sanyam0605/poca-SoccerTwos
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Sarim24/ppo-LunarLander-v2
|
Sarim24
| 2023-09-24T16:34:02Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-09-24T16:33:42Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 270.70 +/- 10.29
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal sketch of loading this checkpoint (the zip filename is assumed to follow the usual `huggingface_sb3` naming convention):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub and load it into a PPO model.
checkpoint = load_from_hub("Sarim24/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
JiSung1119/llama2-chat-airport
|
JiSung1119
| 2023-09-24T16:33:43Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-09-24T16:32:50Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
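The same settings can be reconstructed with `transformers`' `BitsAndBytesConfig` when loading the adapter's base model; a hedged sketch (the base model id is not specified in this card):

```python
import torch
from transformers import BitsAndBytesConfig

# Mirrors the quantization settings listed above (4-bit nf4, fp16 compute, no double quant).
bnb_config = BitsAndBytesConfig(
    load_in_8bit=False,
    load_in_4bit=True,
    llm_int8_threshold=6.0,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)
```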
### Framework versions
- PEFT 0.6.0.dev0
|
nahyeonkang/classifier
|
nahyeonkang
| 2023-09-24T16:21:41Z | 83 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:klue/roberta-base",
"base_model:finetune:klue/roberta-base",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-09-24T16:05:18Z |
---
base_model: klue/roberta-base
tags:
- generated_from_trainer
model-index:
- name: classifier
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# classifier
This model is a fine-tuned version of [klue/roberta-base](https://huggingface.co/klue/roberta-base) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 10
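With `lr_scheduler_type: cosine` and no warmup listed, the learning rate follows lr(t) = base_lr · ½(1 + cos(π t / T)); a small illustration (the total step count T here is arbitrary):

```python
import math

def cosine_lr(step, total_steps, base_lr=5e-06):
    """Cosine-annealed learning rate: base_lr * 0.5 * (1 + cos(pi * step / total_steps))."""
    return base_lr * 0.5 * (1.0 + math.cos(math.pi * step / total_steps))

print(cosine_lr(0, 100))    # full base_lr at the start
print(cosine_lr(50, 100))   # half of base_lr at the midpoint
print(cosine_lr(100, 100))  # ~0 at the end
```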
### Training results
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
Ori/lama-2-13b-peft-2wikihop-with-ret-at-1-v2-seed-3
|
Ori
| 2023-09-24T16:16:09Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"region:us"
] | null | 2023-09-24T16:13:20Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
scottto/mlc-chat-RWKV-4-World-3B-q4f16_2
|
scottto
| 2023-09-24T16:09:28Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2023-09-24T16:01:55Z |
---
license: apache-2.0
---
# mlc-chat iOS pre-built RWKV-4-World-3B
https://github.com/mlc-ai/mlc-llm
https://mlc.ai/mlc-llm/docs/deploy/ios.html
This is the Hugging Face format of RWKV-4-World. Because the tokenizer of the new World models differs substantially from the earlier Raven/Pile versions, a new HF adaptation is required.

ringrwkv is compatible with both the native rwkv library and the transformers RWKV implementation. It adds the configuration and code for the World models (the full 1.5B/3B/7B line) and fixes a subtle issue in the original HF RWKV's forward `RWKVOutput`, mainly by introducing and clarifying `last_hidden_state`. RingRWKV source: https://github.com/StarRing2022/RingRWKV

A lightweight usage example:

```python
import torch
from ringrwkv.configuration_rwkv_world import RwkvConfig
from ringrwkv.rwkv_tokenizer import TRIE_TOKENIZER
from ringrwkv.modehf_world import RwkvForCausalLM

model = RwkvForCausalLM.from_pretrained("StarRing2022/RWKV-4-World-3B")  # or a local folder containing this model
tokenizer = TRIE_TOKENIZER('./ringrwkv/rwkv_vocab_v20230424.txt')

text = "你叫什么名字?"  # "What is your name?"
question = f'Question: {text.strip()}\n\nAnswer:'
input_ids = tokenizer.encode(question)
input_ids = torch.tensor(input_ids).unsqueeze(0)

out = model.generate(input_ids, max_new_tokens=40)

outlist = [i for i in out[0].tolist() if i != 0]  # drop elements with token id 0
answer = tokenizer.decode(outlist)
print(answer)
```
|
CyberHarem/yaegashi_yasuko_akibameidosensou
|
CyberHarem
| 2023-09-24T16:06:23Z | 0 | 1 | null |
[
"art",
"text-to-image",
"dataset:CyberHarem/yaegashi_yasuko_akibameidosensou",
"license:mit",
"region:us"
] |
text-to-image
| 2023-09-22T10:21:24Z |
---
license: mit
datasets:
- CyberHarem/yaegashi_yasuko_akibameidosensou
pipeline_tag: text-to-image
tags:
- art
---
# Lora of yaegashi_yasuko_akibameidosensou
This model is trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion), and the auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs).
The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11).
After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora.
For example, if you want to use the model from step 5700, you need to download `5700/yaegashi_yasuko_akibameidosensou.pt` as the embedding and `5700/yaegashi_yasuko_akibameidosensou.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters.
**The best step we recommend is 5700**, with a score of 0.971. The trigger words are:
1. `yaegashi_yasuko_akibameidosensou`
2. `brown_hair, hair_ornament, necktie, hairclip, ponytail, vest, formal, short_hair, black_hair`
This model is not recommended for the following groups, and we regret any inconvenience:
1. Individuals who cannot tolerate any deviation from the original character design, even in the slightest detail.
2. Individuals whose application scenarios demand high accuracy in recreating character outfits.
3. Individuals who cannot accept the potential randomness of AI-generated images based on the Stable Diffusion algorithm.
4. Individuals who are not comfortable with the fully automated process of training character models with LoRA, or who believe that character models must be trained purely by manual operation to avoid disrespecting the characters.
5. Individuals who find the generated image content offensive to their values.
These are available steps:
| Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | pattern_5 | pattern_6 | pattern_7 | pattern_8 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata |
|:---------|:----------|:----------------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------|:--------------------------------------------------|:-------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------|
| **5700** | **0.971** | [**Download**](5700/yaegashi_yasuko_akibameidosensou.zip) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](5700/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5700/previews/nude.png) | [<NSFW, click to see>](5700/previews/nude2.png) |  |  |
| 5320 | 0.951 | [Download](5320/yaegashi_yasuko_akibameidosensou.zip) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](5320/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5320/previews/nude.png) | [<NSFW, click to see>](5320/previews/nude2.png) |  |  |
| 4940 | 0.847 | [Download](4940/yaegashi_yasuko_akibameidosensou.zip) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4940/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4940/previews/nude.png) | [<NSFW, click to see>](4940/previews/nude2.png) |  |  |
| 4560 | 0.954 | [Download](4560/yaegashi_yasuko_akibameidosensou.zip) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4560/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4560/previews/nude.png) | [<NSFW, click to see>](4560/previews/nude2.png) |  |  |
| 4180 | 0.865 | [Download](4180/yaegashi_yasuko_akibameidosensou.zip) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4180/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4180/previews/nude.png) | [<NSFW, click to see>](4180/previews/nude2.png) |  |  |
| 3800 | 0.867 | [Download](3800/yaegashi_yasuko_akibameidosensou.zip) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3800/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3800/previews/nude.png) | [<NSFW, click to see>](3800/previews/nude2.png) |  |  |
| 3420 | 0.936 | [Download](3420/yaegashi_yasuko_akibameidosensou.zip) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3420/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3420/previews/nude.png) | [<NSFW, click to see>](3420/previews/nude2.png) |  |  |
| 3040 | 0.890 | [Download](3040/yaegashi_yasuko_akibameidosensou.zip) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3040/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3040/previews/nude.png) | [<NSFW, click to see>](3040/previews/nude2.png) |  |  |
| 2660 | 0.760 | [Download](2660/yaegashi_yasuko_akibameidosensou.zip) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2660/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2660/previews/nude.png) | [<NSFW, click to see>](2660/previews/nude2.png) |  |  |
| 2280 | 0.858 | [Download](2280/yaegashi_yasuko_akibameidosensou.zip) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2280/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2280/previews/nude.png) | [<NSFW, click to see>](2280/previews/nude2.png) |  |  |
| 1900 | 0.810 | [Download](1900/yaegashi_yasuko_akibameidosensou.zip) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1900/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1900/previews/nude.png) | [<NSFW, click to see>](1900/previews/nude2.png) |  |  |
| 1520 | 0.865 | [Download](1520/yaegashi_yasuko_akibameidosensou.zip) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1520/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1520/previews/nude.png) | [<NSFW, click to see>](1520/previews/nude2.png) |  |  |
| 1140 | 0.709 | [Download](1140/yaegashi_yasuko_akibameidosensou.zip) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1140/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1140/previews/nude.png) | [<NSFW, click to see>](1140/previews/nude2.png) |  |  |
| 760 | 0.641 | [Download](760/yaegashi_yasuko_akibameidosensou.zip) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](760/previews/bondage.png) |  |  |  | [<NSFW, click to see>](760/previews/nude.png) | [<NSFW, click to see>](760/previews/nude2.png) |  |  |
| 380 | 0.359 | [Download](380/yaegashi_yasuko_akibameidosensou.zip) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](380/previews/bondage.png) |  |  |  | [<NSFW, click to see>](380/previews/nude.png) | [<NSFW, click to see>](380/previews/nude2.png) |  |  |
|
CyberHarem/nakatani_iku_theidolmstermillionlive
|
CyberHarem
| 2023-09-24T15:59:40Z | 0 | 1 | null |
[
"art",
"text-to-image",
"dataset:CyberHarem/nakatani_iku_theidolmstermillionlive",
"license:mit",
"region:us"
] |
text-to-image
| 2023-09-24T15:46:38Z |
---
license: mit
datasets:
- CyberHarem/nakatani_iku_theidolmstermillionlive
pipeline_tag: text-to-image
tags:
- art
---
# Lora of nakatani_iku_theidolmstermillionlive
This model is trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion), and the auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs).
The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11).
After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora.
For example, if you want to use the model from step 2940, you need to download `2940/nakatani_iku_theidolmstermillionlive.pt` as the embedding and `2940/nakatani_iku_theidolmstermillionlive.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters.
**The best step we recommend is 2940**, with a score of 0.959. The trigger words are:
1. `nakatani_iku_theidolmstermillionlive`
2. `black_hair, short_hair, brown_eyes, blush, open_mouth, hair_ornament, one_side_up, smile, bangs, hair_bobbles`
This model is not recommended for the following groups, and we regret any inconvenience:
1. Individuals who cannot tolerate any deviation from the original character design, even in the slightest detail.
2. Individuals whose application scenarios demand high accuracy in recreating character outfits.
3. Individuals who cannot accept the potential randomness of AI-generated images based on the Stable Diffusion algorithm.
4. Individuals who are not comfortable with the fully automated process of training character models with LoRA, or who believe that character models must be trained purely by manual operation to avoid disrespecting the characters.
5. Individuals who find the generated image content offensive to their values.
These are available steps:
| Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | pattern_5 | pattern_6 | pattern_7 | pattern_8 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata |
|:---------|:----------|:--------------------------------------------------------------|:-----------------------------------------------|:----------------------------------------------------|:----------------------------------------------------|:----------------------------------------------------|:----------------------------------------------------|:----------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------|:--------------------------------------------------|:-------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------|
| 6300 | 0.931 | [Download](6300/nakatani_iku_theidolmstermillionlive.zip) |  | [<NSFW, click to see>](6300/previews/pattern_2.png) | [<NSFW, click to see>](6300/previews/pattern_3.png) | [<NSFW, click to see>](6300/previews/pattern_4.png) | [<NSFW, click to see>](6300/previews/pattern_5.png) | [<NSFW, click to see>](6300/previews/pattern_6.png) |  |  |  | [<NSFW, click to see>](6300/previews/bondage.png) |  |  |  | [<NSFW, click to see>](6300/previews/nude.png) | [<NSFW, click to see>](6300/previews/nude2.png) |  |  |
| 5880 | 0.884 | [Download](5880/nakatani_iku_theidolmstermillionlive.zip) |  | [<NSFW, click to see>](5880/previews/pattern_2.png) | [<NSFW, click to see>](5880/previews/pattern_3.png) | [<NSFW, click to see>](5880/previews/pattern_4.png) | [<NSFW, click to see>](5880/previews/pattern_5.png) | [<NSFW, click to see>](5880/previews/pattern_6.png) |  |  |  | [<NSFW, click to see>](5880/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5880/previews/nude.png) | [<NSFW, click to see>](5880/previews/nude2.png) |  |  |
| 5460 | 0.937 | [Download](5460/nakatani_iku_theidolmstermillionlive.zip) |  | [<NSFW, click to see>](5460/previews/pattern_2.png) | [<NSFW, click to see>](5460/previews/pattern_3.png) | [<NSFW, click to see>](5460/previews/pattern_4.png) | [<NSFW, click to see>](5460/previews/pattern_5.png) | [<NSFW, click to see>](5460/previews/pattern_6.png) |  |  |  | [<NSFW, click to see>](5460/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5460/previews/nude.png) | [<NSFW, click to see>](5460/previews/nude2.png) |  |  |
| 5040 | 0.866 | [Download](5040/nakatani_iku_theidolmstermillionlive.zip) |  | [<NSFW, click to see>](5040/previews/pattern_2.png) | [<NSFW, click to see>](5040/previews/pattern_3.png) | [<NSFW, click to see>](5040/previews/pattern_4.png) | [<NSFW, click to see>](5040/previews/pattern_5.png) | [<NSFW, click to see>](5040/previews/pattern_6.png) |  |  |  | [<NSFW, click to see>](5040/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5040/previews/nude.png) | [<NSFW, click to see>](5040/previews/nude2.png) |  |  |
| 4620 | 0.929 | [Download](4620/nakatani_iku_theidolmstermillionlive.zip) |  | [<NSFW, click to see>](4620/previews/pattern_2.png) | [<NSFW, click to see>](4620/previews/pattern_3.png) | [<NSFW, click to see>](4620/previews/pattern_4.png) | [<NSFW, click to see>](4620/previews/pattern_5.png) | [<NSFW, click to see>](4620/previews/pattern_6.png) |  |  |  | [<NSFW, click to see>](4620/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4620/previews/nude.png) | [<NSFW, click to see>](4620/previews/nude2.png) |  |  |
| 4200 | 0.862 | [Download](4200/nakatani_iku_theidolmstermillionlive.zip) |  | [<NSFW, click to see>](4200/previews/pattern_2.png) | [<NSFW, click to see>](4200/previews/pattern_3.png) | [<NSFW, click to see>](4200/previews/pattern_4.png) | [<NSFW, click to see>](4200/previews/pattern_5.png) | [<NSFW, click to see>](4200/previews/pattern_6.png) |  |  |  | [<NSFW, click to see>](4200/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4200/previews/nude.png) | [<NSFW, click to see>](4200/previews/nude2.png) |  |  |
| 3780 | 0.910 | [Download](3780/nakatani_iku_theidolmstermillionlive.zip) |  | [<NSFW, click to see>](3780/previews/pattern_2.png) | [<NSFW, click to see>](3780/previews/pattern_3.png) | [<NSFW, click to see>](3780/previews/pattern_4.png) | [<NSFW, click to see>](3780/previews/pattern_5.png) | [<NSFW, click to see>](3780/previews/pattern_6.png) |  |  |  | [<NSFW, click to see>](3780/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3780/previews/nude.png) | [<NSFW, click to see>](3780/previews/nude2.png) |  |  |
| 3360 | 0.928 | [Download](3360/nakatani_iku_theidolmstermillionlive.zip) |  | [<NSFW, click to see>](3360/previews/pattern_2.png) | [<NSFW, click to see>](3360/previews/pattern_3.png) | [<NSFW, click to see>](3360/previews/pattern_4.png) | [<NSFW, click to see>](3360/previews/pattern_5.png) | [<NSFW, click to see>](3360/previews/pattern_6.png) |  |  |  | [<NSFW, click to see>](3360/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3360/previews/nude.png) | [<NSFW, click to see>](3360/previews/nude2.png) |  |  |
| **2940** | **0.959** | [**Download**](2940/nakatani_iku_theidolmstermillionlive.zip) |  | [<NSFW, click to see>](2940/previews/pattern_2.png) | [<NSFW, click to see>](2940/previews/pattern_3.png) | [<NSFW, click to see>](2940/previews/pattern_4.png) | [<NSFW, click to see>](2940/previews/pattern_5.png) | [<NSFW, click to see>](2940/previews/pattern_6.png) |  |  |  | [<NSFW, click to see>](2940/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2940/previews/nude.png) | [<NSFW, click to see>](2940/previews/nude2.png) |  |  |
| 2520 | 0.925 | [Download](2520/nakatani_iku_theidolmstermillionlive.zip) |  | [<NSFW, click to see>](2520/previews/pattern_2.png) | [<NSFW, click to see>](2520/previews/pattern_3.png) | [<NSFW, click to see>](2520/previews/pattern_4.png) | [<NSFW, click to see>](2520/previews/pattern_5.png) | [<NSFW, click to see>](2520/previews/pattern_6.png) |  |  |  | [<NSFW, click to see>](2520/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2520/previews/nude.png) | [<NSFW, click to see>](2520/previews/nude2.png) |  |  |
| 2100 | 0.924 | [Download](2100/nakatani_iku_theidolmstermillionlive.zip) |  | [<NSFW, click to see>](2100/previews/pattern_2.png) | [<NSFW, click to see>](2100/previews/pattern_3.png) | [<NSFW, click to see>](2100/previews/pattern_4.png) | [<NSFW, click to see>](2100/previews/pattern_5.png) | [<NSFW, click to see>](2100/previews/pattern_6.png) |  |  |  | [<NSFW, click to see>](2100/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2100/previews/nude.png) | [<NSFW, click to see>](2100/previews/nude2.png) |  |  |
| 1680 | 0.811 | [Download](1680/nakatani_iku_theidolmstermillionlive.zip) |  | [<NSFW, click to see>](1680/previews/pattern_2.png) | [<NSFW, click to see>](1680/previews/pattern_3.png) | [<NSFW, click to see>](1680/previews/pattern_4.png) | [<NSFW, click to see>](1680/previews/pattern_5.png) | [<NSFW, click to see>](1680/previews/pattern_6.png) |  |  |  | [<NSFW, click to see>](1680/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1680/previews/nude.png) | [<NSFW, click to see>](1680/previews/nude2.png) |  |  |
| 1260 | 0.780 | [Download](1260/nakatani_iku_theidolmstermillionlive.zip) |  | [<NSFW, click to see>](1260/previews/pattern_2.png) | [<NSFW, click to see>](1260/previews/pattern_3.png) | [<NSFW, click to see>](1260/previews/pattern_4.png) | [<NSFW, click to see>](1260/previews/pattern_5.png) | [<NSFW, click to see>](1260/previews/pattern_6.png) |  |  |  | [<NSFW, click to see>](1260/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1260/previews/nude.png) | [<NSFW, click to see>](1260/previews/nude2.png) |  |  |
| 840 | 0.723 | [Download](840/nakatani_iku_theidolmstermillionlive.zip) |  | [<NSFW, click to see>](840/previews/pattern_2.png) | [<NSFW, click to see>](840/previews/pattern_3.png) | [<NSFW, click to see>](840/previews/pattern_4.png) | [<NSFW, click to see>](840/previews/pattern_5.png) | [<NSFW, click to see>](840/previews/pattern_6.png) |  |  |  | [<NSFW, click to see>](840/previews/bondage.png) |  |  |  | [<NSFW, click to see>](840/previews/nude.png) | [<NSFW, click to see>](840/previews/nude2.png) |  |  |
| 420 | 0.655 | [Download](420/nakatani_iku_theidolmstermillionlive.zip) |  | [<NSFW, click to see>](420/previews/pattern_2.png) | [<NSFW, click to see>](420/previews/pattern_3.png) | [<NSFW, click to see>](420/previews/pattern_4.png) | [<NSFW, click to see>](420/previews/pattern_5.png) | [<NSFW, click to see>](420/previews/pattern_6.png) |  |  |  | [<NSFW, click to see>](420/previews/bondage.png) |  |  |  | [<NSFW, click to see>](420/previews/nude.png) | [<NSFW, click to see>](420/previews/nude2.png) |  |  |
|
omiro/ppo-Huggy-2
|
omiro
| 2023-09-24T15:48:37Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-09-24T15:48:32Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: omiro/ppo-Huggy-2
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
platzi/iass-distilroberta-base-mrpc-glue-iassolutions
|
platzi
| 2023-09-24T15:47:04Z | 81 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"base_model:distilbert/distilroberta-base",
"base_model:finetune:distilbert/distilroberta-base",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-09-24T15:44:35Z |
---
license: apache-2.0
base_model: distilroberta-base
tags:
- text-classification
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: iass-distilroberta-base-mrpc-glue-iassolutions
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: datasetX
type: glue
config: mrpc
split: validation
args: mrpc
metrics:
- name: Accuracy
type: accuracy
value: 0.8235294117647058
- name: F1
type: f1
value: 0.8714285714285714
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# iass-distilroberta-base-mrpc-glue-iassolutions
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the datasetX dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4363
- Accuracy: 0.8235
- F1: 0.8714
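As a usage sketch (not part of the original card), the checkpoint can be run as an MRPC-style paraphrase classifier. The example sentence pair below is made up; MRPC is a sentence-pair task, so inputs are passed as `{"text", "text_pair"}` dicts:

```python
from transformers import pipeline

# Assumed usage: load this repository's fine-tuned checkpoint for
# sentence-pair classification (paraphrase vs. not paraphrase).
classifier = pipeline(
    "text-classification",
    model="platzi/iass-distilroberta-base-mrpc-glue-iassolutions",
)
result = classifier([
    {"text": "The company acquired the startup.",
     "text_pair": "The startup was bought by the company."}
])
print(result)  # a list with one {"label": ..., "score": ...} dict
```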
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.5412 | 1.09 | 500 | 0.4363 | 0.8235 | 0.8714 |
| 0.372 | 2.18 | 1000 | 0.7459 | 0.8235 | 0.8710 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
flobbit/ford-pickup-truck-1966-sdxl-lora
|
flobbit
| 2023-09-24T15:45:47Z | 667 | 3 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"en",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:apache-2.0",
"region:us"
] |
text-to-image
| 2023-09-24T15:31:56Z |
---
license: apache-2.0
base_model: stabilityai/stable-diffusion-xl-base-1.0
tags:
- stable-diffusion
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
widget:
- text: f0rd66pu, truck, gold body, (lifted), massive orange volcano explosion in the background, night sky, tornado (cfg = 8, seed = 2363198649)
inference: true
language:
- en
---
# ford-pickup-truck-1966-sdxl-lora

LoRA for SDXL 1.0 Base for generating 1966 Ford pickup trucks. The LoRA is in a `safetensors` format for use in diffusers or in UIs such as A1111.
## How to use
In A1111, specify the LoRA in the prompt along with a weight, e.g. \<lora:f0rd66pu_SDXL_v1_32:1\>, then use the trigger keyword. Further example images with A1111 prompts are available at https://civitai.com/models/151004/ford-pickup-truck-1966-sdxl
Example diffusers prompt, which you can run in the inference widget to the right: 'f0rd66pu, truck, gold body, (lifted), massive orange volcano explosion in the background, night sky, tornado (cfg = 8, seed = 2363198649)'
## Recommended Weight:
1.0 (lowering the LoRA weight in A1111 will make it produce less accurate results)
## Trigger:
f0rd66pu
## Helper:
In general, you can vary the color of the body and make the truck lifted with some fiddling. For example: f0rd66pu, truck, blue body, (lifted), ...
## Notes:
The LoRA seems to work well with other base SDXL models, but I didn't spend much time playing with this.
## Methodology:
This model was trained exclusively on images of 1966 trucks at 1024x1024: 122 images covering a variety of vehicles (2wd and 4wd) and colors. Training ran for 20 epochs, 4880 steps overall. No regularization images were used.

|
Sudha3014/4
|
Sudha3014
| 2023-09-24T15:15:19Z | 0 | 0 | null |
[
"license:bigscience-bloom-rail-1.0",
"region:us"
] | null | 2023-09-24T15:15:19Z |
---
license: bigscience-bloom-rail-1.0
---
|
CyberHarem/takayama_sayoko_theidolmstermillionlive
|
CyberHarem
| 2023-09-24T15:04:54Z | 0 | 0 | null |
[
"art",
"text-to-image",
"dataset:CyberHarem/takayama_sayoko_theidolmstermillionlive",
"license:mit",
"region:us"
] |
text-to-image
| 2023-09-24T14:48:29Z |
---
license: mit
datasets:
- CyberHarem/takayama_sayoko_theidolmstermillionlive
pipeline_tag: text-to-image
tags:
- art
---
# Lora of takayama_sayoko_theidolmstermillionlive
This model was trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion). The auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs).
The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11).
After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora.
For example, if you want to use the model from step 5520, you need to download `5520/takayama_sayoko_theidolmstermillionlive.pt` as the embedding and `5520/takayama_sayoko_theidolmstermillionlive.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters.
**The best step we recommend is 5520**, with a score of 0.898. The trigger words are:
1. `takayama_sayoko_theidolmstermillionlive`
2. `long_hair, black_hair, red_eyes, blush, smile, open_mouth, bangs, breasts`
We do not recommend this model for the following groups, and we express our regret:
1. Individuals who cannot tolerate any deviation from the original character design, even in the slightest detail.
2. Individuals whose use cases demand high accuracy in recreating character outfits.
3. Individuals who cannot accept the potential randomness of AI-generated images based on the Stable Diffusion algorithm.
4. Individuals who are not comfortable with the fully automated process of training character models with LoRA, or who believe that character models must be trained purely through manual operations to avoid disrespecting the characters.
5. Individuals who find the generated image content offensive to their values.
These are the available steps:
| Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | pattern_5 | pattern_6 | pattern_7 | pattern_8 | pattern_9 | pattern_10 | pattern_11 | pattern_12 | pattern_13 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata |
|:---------|:----------|:-----------------------------------------------------------------|:-----------------------------------------------|:----------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-----------------------------------------|:--------------------------------------------------|:-----------------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------|
| 6900 | 0.829 | [Download](6900/takayama_sayoko_theidolmstermillionlive.zip) |  | [<NSFW, click to see>](6900/previews/pattern_2.png) |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](6900/previews/bondage.png) | [<NSFW, click to see>](6900/previews/free.png) |  |  | [<NSFW, click to see>](6900/previews/nude.png) | [<NSFW, click to see>](6900/previews/nude2.png) |  |  |
| 6440 | 0.870 | [Download](6440/takayama_sayoko_theidolmstermillionlive.zip) |  | [<NSFW, click to see>](6440/previews/pattern_2.png) |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](6440/previews/bondage.png) | [<NSFW, click to see>](6440/previews/free.png) |  |  | [<NSFW, click to see>](6440/previews/nude.png) | [<NSFW, click to see>](6440/previews/nude2.png) |  |  |
| 5980 | 0.835 | [Download](5980/takayama_sayoko_theidolmstermillionlive.zip) |  | [<NSFW, click to see>](5980/previews/pattern_2.png) |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](5980/previews/bondage.png) | [<NSFW, click to see>](5980/previews/free.png) |  |  | [<NSFW, click to see>](5980/previews/nude.png) | [<NSFW, click to see>](5980/previews/nude2.png) |  |  |
| **5520** | **0.898** | [**Download**](5520/takayama_sayoko_theidolmstermillionlive.zip) |  | [<NSFW, click to see>](5520/previews/pattern_2.png) |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](5520/previews/bondage.png) | [<NSFW, click to see>](5520/previews/free.png) |  |  | [<NSFW, click to see>](5520/previews/nude.png) | [<NSFW, click to see>](5520/previews/nude2.png) |  |  |
| 5060 | 0.823 | [Download](5060/takayama_sayoko_theidolmstermillionlive.zip) |  | [<NSFW, click to see>](5060/previews/pattern_2.png) |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](5060/previews/bondage.png) | [<NSFW, click to see>](5060/previews/free.png) |  |  | [<NSFW, click to see>](5060/previews/nude.png) | [<NSFW, click to see>](5060/previews/nude2.png) |  |  |
| 4600 | 0.824 | [Download](4600/takayama_sayoko_theidolmstermillionlive.zip) |  | [<NSFW, click to see>](4600/previews/pattern_2.png) |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4600/previews/bondage.png) | [<NSFW, click to see>](4600/previews/free.png) |  |  | [<NSFW, click to see>](4600/previews/nude.png) | [<NSFW, click to see>](4600/previews/nude2.png) |  |  |
| 4140 | 0.843 | [Download](4140/takayama_sayoko_theidolmstermillionlive.zip) |  | [<NSFW, click to see>](4140/previews/pattern_2.png) |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4140/previews/bondage.png) | [<NSFW, click to see>](4140/previews/free.png) |  |  | [<NSFW, click to see>](4140/previews/nude.png) | [<NSFW, click to see>](4140/previews/nude2.png) |  |  |
| 3680 | 0.783 | [Download](3680/takayama_sayoko_theidolmstermillionlive.zip) |  | [<NSFW, click to see>](3680/previews/pattern_2.png) |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3680/previews/bondage.png) | [<NSFW, click to see>](3680/previews/free.png) |  |  | [<NSFW, click to see>](3680/previews/nude.png) | [<NSFW, click to see>](3680/previews/nude2.png) |  |  |
| 3220 | 0.813 | [Download](3220/takayama_sayoko_theidolmstermillionlive.zip) |  | [<NSFW, click to see>](3220/previews/pattern_2.png) |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3220/previews/bondage.png) | [<NSFW, click to see>](3220/previews/free.png) |  |  | [<NSFW, click to see>](3220/previews/nude.png) | [<NSFW, click to see>](3220/previews/nude2.png) |  |  |
| 2760 | 0.825 | [Download](2760/takayama_sayoko_theidolmstermillionlive.zip) |  | [<NSFW, click to see>](2760/previews/pattern_2.png) |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2760/previews/bondage.png) | [<NSFW, click to see>](2760/previews/free.png) |  |  | [<NSFW, click to see>](2760/previews/nude.png) | [<NSFW, click to see>](2760/previews/nude2.png) |  |  |
| 2300 | 0.765 | [Download](2300/takayama_sayoko_theidolmstermillionlive.zip) |  | [<NSFW, click to see>](2300/previews/pattern_2.png) |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2300/previews/bondage.png) | [<NSFW, click to see>](2300/previews/free.png) |  |  | [<NSFW, click to see>](2300/previews/nude.png) | [<NSFW, click to see>](2300/previews/nude2.png) |  |  |
| 1840 | 0.789 | [Download](1840/takayama_sayoko_theidolmstermillionlive.zip) |  | [<NSFW, click to see>](1840/previews/pattern_2.png) |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1840/previews/bondage.png) | [<NSFW, click to see>](1840/previews/free.png) |  |  | [<NSFW, click to see>](1840/previews/nude.png) | [<NSFW, click to see>](1840/previews/nude2.png) |  |  |
| 1380 | 0.649 | [Download](1380/takayama_sayoko_theidolmstermillionlive.zip) |  | [<NSFW, click to see>](1380/previews/pattern_2.png) |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1380/previews/bondage.png) | [<NSFW, click to see>](1380/previews/free.png) |  |  | [<NSFW, click to see>](1380/previews/nude.png) | [<NSFW, click to see>](1380/previews/nude2.png) |  |  |
| 920 | 0.688 | [Download](920/takayama_sayoko_theidolmstermillionlive.zip) |  | [<NSFW, click to see>](920/previews/pattern_2.png) |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](920/previews/bondage.png) | [<NSFW, click to see>](920/previews/free.png) |  |  | [<NSFW, click to see>](920/previews/nude.png) | [<NSFW, click to see>](920/previews/nude2.png) |  |  |
| 460 | 0.622 | [Download](460/takayama_sayoko_theidolmstermillionlive.zip) |  | [<NSFW, click to see>](460/previews/pattern_2.png) |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](460/previews/bondage.png) | [<NSFW, click to see>](460/previews/free.png) |  |  | [<NSFW, click to see>](460/previews/nude.png) | [<NSFW, click to see>](460/previews/nude2.png) |  |  |
|
bgspaditya/byt-malurl-db-bu
|
bgspaditya
| 2023-09-24T14:50:45Z | 90 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"phishing",
"bert",
"en",
"dataset:bgspaditya/phishing-dataset",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-09-18T03:28:59Z |
---
license: mit
datasets:
- bgspaditya/phishing-dataset
language:
- en
metrics:
- accuracy
- f1
pipeline_tag: text-classification
tags:
- phishing
- text-classification
- bert
---
A distilled version of BERT Base Uncased, pretrained and fine-tuned for malicious URL detection.
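A minimal inference sketch (assumed usage, not stated in the card): score a raw URL string with the fine-tuned checkpoint. The label names depend on the training configuration, so inspect the output rather than assuming them:

```python
from transformers import pipeline

# Assumed usage: classify a URL string with this repository's checkpoint.
# The example URL is made up for illustration.
detector = pipeline("text-classification", model="bgspaditya/byt-malurl-db-bu")
result = detector(["http://example.com/account-verify-login"])
print(result)
```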
|
CyberHarem/azusa_mifuyu_puellamagimadokamagicasidestorymagiarecord
|
CyberHarem
| 2023-09-24T14:15:03Z | 0 | 0 | null |
[
"art",
"text-to-image",
"dataset:CyberHarem/azusa_mifuyu_puellamagimadokamagicasidestorymagiarecord",
"license:mit",
"region:us"
] |
text-to-image
| 2023-09-24T14:03:07Z |
---
license: mit
datasets:
- CyberHarem/azusa_mifuyu_puellamagimadokamagicasidestorymagiarecord
pipeline_tag: text-to-image
tags:
- art
---
# Lora of azusa_mifuyu_puellamagimadokamagicasidestorymagiarecord
This model was trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion). The auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs).
The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11).
After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora.
For example, if you want to use the model from step 4420, you need to download `4420/azusa_mifuyu_puellamagimadokamagicasidestorymagiarecord.pt` as the embedding and `4420/azusa_mifuyu_puellamagimadokamagicasidestorymagiarecord.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters.
**The best step we recommend is 4420**, with a score of 0.744. The trigger words are:
1. `azusa_mifuyu_puellamagimadokamagicasidestorymagiarecord`
2. `short_hair, blush, white_hair, magical_girl, aqua_eyes, bangs, grey_hair`
We do not recommend this model for the following groups, and we express our regret:
1. Individuals who cannot tolerate any deviation from the original character design, even in the slightest detail.
2. Individuals whose use cases demand high accuracy in recreating character outfits.
3. Individuals who cannot accept the potential randomness of AI-generated images based on the Stable Diffusion algorithm.
4. Individuals who are not comfortable with the fully automated process of training character models with LoRA, or who believe that character models must be trained purely through manual operations to avoid disrespecting the characters.
5. Individuals who find the generated image content offensive to their values.
These are the available steps:
| Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | pattern_5 | pattern_6 | pattern_7 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata |
|:---------|:----------|:---------------------------------------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------|:--------------------------------------------------|:-------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------|
| 5100 | 0.689 | [Download](5100/azusa_mifuyu_puellamagimadokamagicasidestorymagiarecord.zip) |  |  |  |  |  |  |  |  | [<NSFW, click to see>](5100/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5100/previews/nude.png) | [<NSFW, click to see>](5100/previews/nude2.png) |  |  |
| 4760 | 0.731 | [Download](4760/azusa_mifuyu_puellamagimadokamagicasidestorymagiarecord.zip) |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4760/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4760/previews/nude.png) | [<NSFW, click to see>](4760/previews/nude2.png) |  |  |
| **4420** | **0.744** | [**Download**](4420/azusa_mifuyu_puellamagimadokamagicasidestorymagiarecord.zip) |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4420/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4420/previews/nude.png) | [<NSFW, click to see>](4420/previews/nude2.png) |  |  |
| 4080 | 0.690 | [Download](4080/azusa_mifuyu_puellamagimadokamagicasidestorymagiarecord.zip) |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4080/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4080/previews/nude.png) | [<NSFW, click to see>](4080/previews/nude2.png) |  |  |
| 3740 | 0.672 | [Download](3740/azusa_mifuyu_puellamagimadokamagicasidestorymagiarecord.zip) |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3740/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3740/previews/nude.png) | [<NSFW, click to see>](3740/previews/nude2.png) |  |  |
| 3400 | 0.697 | [Download](3400/azusa_mifuyu_puellamagimadokamagicasidestorymagiarecord.zip) |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3400/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3400/previews/nude.png) | [<NSFW, click to see>](3400/previews/nude2.png) |  |  |
| 3060 | 0.574 | [Download](3060/azusa_mifuyu_puellamagimadokamagicasidestorymagiarecord.zip) |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3060/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3060/previews/nude.png) | [<NSFW, click to see>](3060/previews/nude2.png) |  |  |
| 2720 | 0.634 | [Download](2720/azusa_mifuyu_puellamagimadokamagicasidestorymagiarecord.zip) |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2720/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2720/previews/nude.png) | [<NSFW, click to see>](2720/previews/nude2.png) |  |  |
| 2380 | 0.569 | [Download](2380/azusa_mifuyu_puellamagimadokamagicasidestorymagiarecord.zip) |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2380/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2380/previews/nude.png) | [<NSFW, click to see>](2380/previews/nude2.png) |  |  |
| 2040 | 0.624 | [Download](2040/azusa_mifuyu_puellamagimadokamagicasidestorymagiarecord.zip) |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2040/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2040/previews/nude.png) | [<NSFW, click to see>](2040/previews/nude2.png) |  |  |
| 1700 | 0.482 | [Download](1700/azusa_mifuyu_puellamagimadokamagicasidestorymagiarecord.zip) |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1700/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1700/previews/nude.png) | [<NSFW, click to see>](1700/previews/nude2.png) |  |  |
| 1360 | 0.375 | [Download](1360/azusa_mifuyu_puellamagimadokamagicasidestorymagiarecord.zip) |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1360/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1360/previews/nude.png) | [<NSFW, click to see>](1360/previews/nude2.png) |  |  |
| 1020 | 0.349 | [Download](1020/azusa_mifuyu_puellamagimadokamagicasidestorymagiarecord.zip) |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1020/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1020/previews/nude.png) | [<NSFW, click to see>](1020/previews/nude2.png) |  |  |
| 680 | 0.213 | [Download](680/azusa_mifuyu_puellamagimadokamagicasidestorymagiarecord.zip) |  |  |  |  |  |  |  |  | [<NSFW, click to see>](680/previews/bondage.png) |  |  |  | [<NSFW, click to see>](680/previews/nude.png) | [<NSFW, click to see>](680/previews/nude2.png) |  |  |
| 340 | 0.177 | [Download](340/azusa_mifuyu_puellamagimadokamagicasidestorymagiarecord.zip) |  |  |  |  |  |  |  |  | [<NSFW, click to see>](340/previews/bondage.png) |  |  |  | [<NSFW, click to see>](340/previews/nude.png) | [<NSFW, click to see>](340/previews/nude2.png) |  |  |
|
dbtmddn41/xlm-roberta-base-finetuned-panx-de
|
dbtmddn41
| 2023-09-24T14:09:45Z | 55 | 0 |
transformers
|
[
"transformers",
"tf",
"xlm-roberta",
"token-classification",
"generated_from_keras_callback",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-09-24T07:09:30Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_keras_callback
model-index:
- name: dbtmddn41/xlm-roberta-base-finetuned-panx-de
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# dbtmddn41/xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.1447
- Train F1: 0.0
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamW', 'weight_decay': 0.004, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': 0.001, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train F1 | Epoch |
|:----------:|:--------:|:-----:|
| 1.2005 | 0.0 | 0 |
| 1.1504 | 0.0 | 1 |
| 1.1447 | 0.0 | 2 |
### Framework versions
- Transformers 4.33.2
- TensorFlow 2.13.0
- Datasets 2.14.5
- Tokenizers 0.13.3
|
calpt/CLIP-ViT-B-32-xlm-roberta-base-laion5B-s13B-b90k
|
calpt
| 2023-09-24T14:09:09Z | 2,072 | 4 |
transformers
|
[
"transformers",
"pytorch",
"vision-text-dual-encoder",
"feature-extraction",
"custom_code",
"license:mit",
"region:us"
] |
feature-extraction
| 2023-02-24T21:49:09Z |
---
license: mit
---
# CLIP ViT-B/32 xlm roberta base - LAION-5B
[CLIP ViT-B/32 xlm roberta base - LAION-5B](https://huggingface.co/laion/CLIP-ViT-B-32-xlm-roberta-base-laion5B-s13B-b90k) model converted from OpenCLIP to HuggingFace Transformers.
See https://gist.github.com/calpt/8e3555bd11f1916b5169c8125117e5ee for conversion script and more info.
## Usage
The model uses custom code, so make sure to pass `trust_remote_code=True` when loading it.
Example:
```python
import torch
from PIL import Image
from transformers import AutoModel, AutoFeatureExtractor, AutoTokenizer
model = AutoModel.from_pretrained("calpt/CLIP-ViT-B-32-xlm-roberta-base-laion5B-s13B-b90k", trust_remote_code=True)
processor = AutoFeatureExtractor.from_pretrained("laion/CLIP-ViT-B-32-laion2B-s34B-b79K")
tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
image_input = processor(Image.open("CLIP.png"), return_tensors="pt")
text_input = tokenizer(["a diagram", "a dog", "a cat"], return_tensors="pt", padding=True)
with torch.no_grad():
outputs = model(**image_input, **text_input)
text_probs = (100.0 * outputs.logits_per_image.softmax(dim=-1))
print("Label probs:", text_probs)
```
|
jtlowell/gentzy-lora
|
jtlowell
| 2023-09-24T14:04:28Z | 5 | 0 |
diffusers
|
[
"diffusers",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"dataset:jtlowell/gentzy",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"region:us"
] |
text-to-image
| 2023-09-24T13:14:07Z |
---
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: gentzy
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
inference: true
datasets:
- jtlowell/gentzy
---
# LoRA DreamBooth - jtlowell/gentzy-lora
These are LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained on the concept prompt:
`gentzy`
Use this keyword to trigger your custom model in your prompts.
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Usage
Make sure to upgrade diffusers to >= 0.19.0:
```
pip install diffusers --upgrade
```
In addition, make sure to install `transformers`, `safetensors`, `accelerate`, and the invisible watermark:
```
pip install invisible_watermark transformers accelerate safetensors
```
To use the base model together with the trained LoRA weights, you can run:
```python
import torch
from diffusers import DiffusionPipeline, AutoencoderKL
vae = AutoencoderKL.from_pretrained('madebyollin/sdxl-vae-fp16-fix', torch_dtype=torch.float16)
pipe = DiffusionPipeline.from_pretrained(
"stabilityai/stable-diffusion-xl-base-1.0",
vae=vae, torch_dtype=torch.float16, variant="fp16",
use_safetensors=True
)
# This is where you load your trained weights
pipe.load_lora_weights('jtlowell/gentzy-lora')
pipe.to("cuda")
prompt = "A majestic gentzy jumping from a big stone at night"
image = pipe(prompt=prompt, num_inference_steps=50).images[0]
```
|
Yogesh0804/study-model
|
Yogesh0804
| 2023-09-24T13:58:58Z | 0 | 0 | null |
[
"license:bigscience-bloom-rail-1.0",
"region:us"
] | null | 2023-09-24T13:58:58Z |
---
license: bigscience-bloom-rail-1.0
---
|
selinawisco/my_awesome_asr_mind_model
|
selinawisco
| 2023-09-24T13:50:06Z | 65 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:facebook/wav2vec2-base",
"base_model:finetune:facebook/wav2vec2-base",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-09-18T13:33:33Z |
---
license: apache-2.0
base_model: facebook/wav2vec2-base
tags:
- generated_from_trainer
model-index:
- name: my_awesome_asr_mind_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_asr_mind_model
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 2000
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu117
- Datasets 2.14.5
- Tokenizers 0.13.3
|
ferno22/vit-beans-finetuned
|
ferno22
| 2023-09-24T13:45:06Z | 141 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:beans",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-09-24T13:44:49Z |
---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_trainer
datasets:
- beans
metrics:
- accuracy
model-index:
- name: vit-finetuned-beans
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: beans
type: beans
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9711538461538461
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-finetuned-beans
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1157
- Accuracy: 0.9712
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.193 | 1.0 | 117 | 0.1099 | 0.9808 |
| 0.0462 | 2.0 | 234 | 0.0857 | 0.9808 |
| 0.0171 | 3.0 | 351 | 0.1237 | 0.9712 |
| 0.0123 | 4.0 | 468 | 0.1088 | 0.9712 |
| 0.0095 | 5.0 | 585 | 0.1135 | 0.9712 |
| 0.0081 | 6.0 | 702 | 0.1162 | 0.9712 |
| 0.0073 | 7.0 | 819 | 0.1158 | 0.9712 |
| 0.0066 | 8.0 | 936 | 0.1152 | 0.9712 |
| 0.0061 | 9.0 | 1053 | 0.1160 | 0.9712 |
| 0.0061 | 10.0 | 1170 | 0.1157 | 0.9712 |
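Accuracy in the table above is the plain fraction of correctly classified examples. A minimal sketch of the metric (the example counts below are hypothetical, chosen only because they reproduce the reported value):

```python
def accuracy(correct, total):
    """Fraction of correctly classified examples."""
    return correct / total

# Hypothetical counts that happen to round to the reported 0.9712:
acc = accuracy(101, 104)
```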
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
robgonsalves/llama-2-13b-deep-haiku | robgonsalves | 2023-09-24T13:44:50Z | 9 | 1 | transformers | ["transformers", "pytorch", "llama", "text-generation", "en", "license:cc-by-sa-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2023-09-23T20:48:24Z |
---
license: cc-by-sa-4.0
language:
- en
---
# LLaMa2 13B Trained to Write Haikus Given a Topic
Example Prompt: ChatGPT<br>
Response: I think I'm gonna. / Start using ChatGPT to. / Write all my emails.
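As the example shows, the model emits the three haiku lines as a single string joined with " / ". A minimal sketch of splitting such a response back into lines (the separator is an assumption inferred from the single example above, not a documented output format):

```python
def split_haiku(response):
    """Split a ' / '-joined haiku response into its individual lines."""
    return [line.strip() for line in response.split(" / ")]

lines = split_haiku("I think I'm gonna. / Start using ChatGPT to. / Write all my emails.")
```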
|
akash31/my-hungry-lion | akash31 | 2023-09-24T13:39:54Z | 5 | 0 | diffusers | ["diffusers", "safetensors", "NxtWave-GenAI-Webinar", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us"] | text-to-image | 2023-09-24T13:34:56Z |
---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### My-hungry-lion Dreambooth model trained by akash31 following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: IITK-31
Sample pictures of this concept:
|
ViktorDo/electra-finetuned-ner-S800 | ViktorDo | 2023-09-24T13:39:44Z | 83 | 0 | transformers | ["transformers", "pytorch", "electra", "token-classification", "generated_from_trainer", "base_model:google/electra-base-discriminator", "base_model:finetune:google/electra-base-discriminator", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | token-classification | 2023-09-05T12:10:06Z |
---
license: apache-2.0
base_model: google/electra-base-discriminator
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: electra-finetuned-ner-S800
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# electra-finetuned-ner-S800
This model is a fine-tuned version of [google/electra-base-discriminator](https://huggingface.co/google/electra-base-discriminator) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0697
- Precision: 0.6146
- Recall: 0.7181
- F1: 0.6624
- Accuracy: 0.9758
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 55 | 0.1115 | 0.4736 | 0.5161 | 0.4940 | 0.9552 |
| No log | 2.0 | 110 | 0.0765 | 0.5789 | 0.6690 | 0.6207 | 0.9721 |
| No log | 3.0 | 165 | 0.0711 | 0.5671 | 0.7055 | 0.6288 | 0.9730 |
| No log | 4.0 | 220 | 0.0698 | 0.6266 | 0.7083 | 0.6649 | 0.9753 |
| No log | 5.0 | 275 | 0.0697 | 0.6146 | 0.7181 | 0.6624 | 0.9758 |
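F1 in the table above is the harmonic mean of precision and recall; recomputing it from the final-epoch values reproduces the reported 0.6624 up to the rounding of the printed precision and recall. A minimal sketch:

```python
def f1_score(precision, recall):
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Final-epoch values from the table; agrees with the reported F1 up to rounding.
f1 = f1_score(0.6146, 0.7181)
```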
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|