Each record below is one row of the source table — `modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt` — followed by that row's `card` text. Column schema:

| Column | Type | Range |
|:--|:--|:--|
| modelId | string | length 5–139 |
| author | string | length 2–42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 – 2025-08-27 00:39:58 |
| downloads | int64 | 0 – 223M |
| likes | int64 | 0 – 11.7k |
| library_name | string | 521 classes |
| tags | list | length 1 – 4.05k |
| pipeline_tag | string | 55 classes |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 – 2025-08-27 00:39:49 |
| card | string | length 11 – 1.01M |
kanishka/smolm-mlm-bpe-unmask-seed_222 | kanishka | 2023-09-16T15:09:48Z | 108 | 0 | transformers | [transformers, pytorch, safetensors, roberta, fill-mask, generated_from_trainer, autotrain_compatible, endpoints_compatible, region:us] | fill-mask | 2023-09-16T06:21:18Z
---
base_model: models/smolm-mlm/config.json
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: smolm-mlm-bpe-unmask-seed_222
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smolm-mlm-bpe-unmask-seed_222
This model is a fine-tuned version of [models/smolm-mlm/config.json](https://huggingface.co/models/smolm-mlm/config.json) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7005
- Accuracy: 0.4481
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 512
- seed: 222
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 24000
- num_epochs: 10.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:------:|:---------------:|:--------:|
| 3.5334 | 1.0 | 11938 | 3.4896 | 0.3463 |
| 3.3402 | 2.0 | 23876 | 3.3814 | 0.3590 |
| 3.1641 | 3.0 | 35814 | 3.1702 | 0.3844 |
| 3.0325 | 4.0 | 47752 | 3.0475 | 0.4019 |
| 2.951 | 5.0 | 59690 | 2.9666 | 0.4095 |
| 2.8583 | 6.0 | 71628 | 2.8908 | 0.4201 |
| 2.7872 | 7.0 | 83566 | 2.8299 | 0.4310 |
| 2.7348 | 8.0 | 95504 | 2.7900 | 0.4335 |
| 2.6584 | 9.0 | 107442 | 2.7272 | 0.4443 |
| 2.6462 | 10.0 | 119380 | 2.6962 | 0.4501 |
### Framework versions
- Transformers 4.32.1
- Pytorch 1.13.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
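### Example usage
Since the usage sections above are unfilled, here is a minimal `fill-mask` inference sketch, assuming the checkpoint loads with the standard pipeline (the example sentence is illustrative):
```python
from transformers import pipeline

# RoBERTa-style checkpoints use <mask> as the mask token.
unmasker = pipeline("fill-mask", model="kanishka/smolm-mlm-bpe-unmask-seed_222")

# Print the top predictions for the masked position.
for pred in unmasker("The cat sat on the <mask>."):
    print(f"{pred['token_str']!r}: {pred['score']:.3f}")
```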
CyberHarem/himegami_aisa_toarumajutsunoindex | CyberHarem | 2023-09-16T15:09:18Z | 0 | 0 | null | [art, text-to-image, dataset:CyberHarem/himegami_aisa_toarumajutsunoindex, license:mit, region:us] | text-to-image | 2023-08-16T08:04:57Z
---
license: mit
datasets:
- CyberHarem/himegami_aisa_toarumajutsunoindex
pipeline_tag: text-to-image
tags:
- art
---
# LoRA of himegami_aisa_toarumajutsunoindex
This model was trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion); the auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs).
The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11).
After downloading the pt and safetensors files for the specified step, use them together: the pt file is loaded as an embedding, while the safetensors file is loaded as the LoRA weights.
For example, to use the model from step 5040, download `5040/himegami_aisa_toarumajutsunoindex.pt` as the embedding and `5040/himegami_aisa_toarumajutsunoindex.safetensors` as the LoRA weights. With both files loaded together, you can generate images of the character, as sketched below.
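A hypothetical `diffusers` version of that two-file workflow follows; the card does not name a specific loader, and HCP-Diffusion outputs are not guaranteed to load directly with `load_textual_inversion`/`load_lora_weights`, so treat this as a sketch:
```python
import torch
from diffusers import StableDiffusionPipeline

# Preview images on this card were generated with Meina/MeinaMix_V11.
pipe = StableDiffusionPipeline.from_pretrained(
    "Meina/MeinaMix_V11", torch_dtype=torch.float16
).to("cuda")

# Step-5040 files: the .pt as a textual-inversion embedding, the .safetensors as LoRA.
pipe.load_textual_inversion(
    "5040/himegami_aisa_toarumajutsunoindex.pt",
    token="himegami_aisa_toarumajutsunoindex",
)
pipe.load_lora_weights("5040/himegami_aisa_toarumajutsunoindex.safetensors")

image = pipe(
    "himegami_aisa_toarumajutsunoindex, long_hair, black_hair, blunt_bangs, purple_eyes"
).images[0]
image.save("preview.png")
```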
**The best step we recommend is 5040**, with a score of 0.886. The trigger words are:
1. `himegami_aisa_toarumajutsunoindex`
2. `long_hair, black_hair, bangs, blunt_bangs, purple_eyes`
We regret that this model is not recommended for the following groups:
1. Individuals who cannot tolerate any deviations from the original character design, even in the slightest detail.
2. Individuals whose use cases demand high accuracy in recreating character outfits.
3. Individuals who cannot accept the randomness inherent in AI-generated images based on the Stable Diffusion algorithm.
4. Individuals who are not comfortable with the fully automated process of training character models with LoRA, or who believe that character models must be trained purely by manual operation to avoid disrespecting the characters.
5. Individuals who find the generated image content offensive to their values.
These are the available steps:
| Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | pattern_5 | pattern_6 | pattern_7 | pattern_8 | pattern_9 | pattern_10 | pattern_11 | pattern_12 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata |
|:---------|:----------|:-----------------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-----------------------------------------|:--------------------------------------------------|:-------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------|
| 5400 | 0.848 | [Download](5400/himegami_aisa_toarumajutsunoindex.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](5400/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5400/previews/nude.png) | [<NSFW, click to see>](5400/previews/nude2.png) |  |  |
| **5040** | **0.886** | [**Download**](5040/himegami_aisa_toarumajutsunoindex.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](5040/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5040/previews/nude.png) | [<NSFW, click to see>](5040/previews/nude2.png) |  |  |
| 4680 | 0.882 | [Download](4680/himegami_aisa_toarumajutsunoindex.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4680/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4680/previews/nude.png) | [<NSFW, click to see>](4680/previews/nude2.png) |  |  |
| 4320 | 0.820 | [Download](4320/himegami_aisa_toarumajutsunoindex.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4320/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4320/previews/nude.png) | [<NSFW, click to see>](4320/previews/nude2.png) |  |  |
| 3960 | 0.862 | [Download](3960/himegami_aisa_toarumajutsunoindex.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3960/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3960/previews/nude.png) | [<NSFW, click to see>](3960/previews/nude2.png) |  |  |
| 3600 | 0.748 | [Download](3600/himegami_aisa_toarumajutsunoindex.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3600/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3600/previews/nude.png) | [<NSFW, click to see>](3600/previews/nude2.png) |  |  |
| 3240 | 0.728 | [Download](3240/himegami_aisa_toarumajutsunoindex.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3240/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3240/previews/nude.png) | [<NSFW, click to see>](3240/previews/nude2.png) |  |  |
| 2880 | 0.783 | [Download](2880/himegami_aisa_toarumajutsunoindex.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2880/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2880/previews/nude.png) | [<NSFW, click to see>](2880/previews/nude2.png) |  |  |
| 2520 | 0.724 | [Download](2520/himegami_aisa_toarumajutsunoindex.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2520/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2520/previews/nude.png) | [<NSFW, click to see>](2520/previews/nude2.png) |  |  |
| 2160 | 0.655 | [Download](2160/himegami_aisa_toarumajutsunoindex.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2160/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2160/previews/nude.png) | [<NSFW, click to see>](2160/previews/nude2.png) |  |  |
| 1800 | 0.602 | [Download](1800/himegami_aisa_toarumajutsunoindex.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1800/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1800/previews/nude.png) | [<NSFW, click to see>](1800/previews/nude2.png) |  |  |
| 1440 | 0.627 | [Download](1440/himegami_aisa_toarumajutsunoindex.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1440/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1440/previews/nude.png) | [<NSFW, click to see>](1440/previews/nude2.png) |  |  |
| 1080 | 0.689 | [Download](1080/himegami_aisa_toarumajutsunoindex.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1080/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1080/previews/nude.png) | [<NSFW, click to see>](1080/previews/nude2.png) |  |  |
| 720 | 0.521 | [Download](720/himegami_aisa_toarumajutsunoindex.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](720/previews/bondage.png) |  |  |  | [<NSFW, click to see>](720/previews/nude.png) | [<NSFW, click to see>](720/previews/nude2.png) |  |  |
| 360 | 0.481 | [Download](360/himegami_aisa_toarumajutsunoindex.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](360/previews/bondage.png) |  |  |  | [<NSFW, click to see>](360/previews/nude.png) | [<NSFW, click to see>](360/previews/nude2.png) |  |  |
AliBagherz/qa-small | AliBagherz | 2023-09-16T15:05:34Z | 115 | 0 | transformers | [transformers, pytorch, bert, question-answering, generated_from_trainer, dataset:pquad, base_model:HooshvareLab/bert-fa-base-uncased, base_model:finetune:HooshvareLab/bert-fa-base-uncased, license:apache-2.0, endpoints_compatible, region:us] | question-answering | 2023-09-16T14:40:17Z
---
license: apache-2.0
base_model: HooshvareLab/bert-fa-base-uncased
tags:
- generated_from_trainer
datasets:
- pquad
model-index:
- name: qa-small
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# qa-small
This model is a fine-tuned version of [HooshvareLab/bert-fa-base-uncased](https://huggingface.co/HooshvareLab/bert-fa-base-uncased) on the pquad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 1 | 4.3634 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
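### Example usage
A minimal `question-answering` inference sketch, assuming the checkpoint loads with the standard pipeline (PQuAD is a Persian SQuAD-style dataset, so the example is in Persian):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="AliBagherz/qa-small")

result = qa(
    question="پایتخت ایران کجاست؟",     # "What is the capital of Iran?"
    context="تهران پایتخت ایران است.",   # "Tehran is the capital of Iran."
)
print(result["answer"], result["score"])
```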
CyberHarem/houjou_karen_idolmastercinderellagirls | CyberHarem | 2023-09-16T14:49:46Z | 0 | 0 | null | [art, text-to-image, dataset:CyberHarem/houjou_karen_idolmastercinderellagirls, license:mit, region:us] | text-to-image | 2023-09-16T14:32:14Z
---
license: mit
datasets:
- CyberHarem/houjou_karen_idolmastercinderellagirls
pipeline_tag: text-to-image
tags:
- art
---
# LoRA of houjou_karen_idolmastercinderellagirls
This model was trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion); the auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs).
The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11).
After downloading the pt and safetensors files for the specified step, use them together: the pt file is loaded as an embedding, while the safetensors file is loaded as the LoRA weights.
For example, to use the model from step 7800, download `7800/houjou_karen_idolmastercinderellagirls.pt` as the embedding and `7800/houjou_karen_idolmastercinderellagirls.safetensors` as the LoRA weights. With both files loaded together, you can generate images of the character.
**The best step we recommend is 7800**, with a score of 0.913. The trigger words are:
1. `houjou_karen_idolmastercinderellagirls`
2. `brown_hair, blush, smile, brown_eyes, long_hair, bangs, breasts, open_mouth`
We regret that this model is not recommended for the following groups:
1. Individuals who cannot tolerate any deviations from the original character design, even in the slightest detail.
2. Individuals whose use cases demand high accuracy in recreating character outfits.
3. Individuals who cannot accept the randomness inherent in AI-generated images based on the Stable Diffusion algorithm.
4. Individuals who are not comfortable with the fully automated process of training character models with LoRA, or who believe that character models must be trained purely by manual operation to avoid disrespecting the characters.
5. Individuals who find the generated image content offensive to their values.
These are the available steps:
| Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | pattern_5 | pattern_6 | pattern_7 | pattern_8 | pattern_9 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata |
|:---------|:----------|:----------------------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:----------------------------------------------------|:-----------------------------------------------|:----------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-------------------------------------------------|:--------------------------------------------------|:-------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------|
| **7800** | **0.913** | [**Download**](7800/houjou_karen_idolmastercinderellagirls.zip) |  |  |  | [<NSFW, click to see>](7800/previews/pattern_4.png) |  | [<NSFW, click to see>](7800/previews/pattern_6.png) |  |  |  | [<NSFW, click to see>](7800/previews/bikini.png) | [<NSFW, click to see>](7800/previews/bondage.png) |  |  |  | [<NSFW, click to see>](7800/previews/nude.png) | [<NSFW, click to see>](7800/previews/nude2.png) |  |  |
| 7280 | 0.862 | [Download](7280/houjou_karen_idolmastercinderellagirls.zip) |  |  |  | [<NSFW, click to see>](7280/previews/pattern_4.png) |  | [<NSFW, click to see>](7280/previews/pattern_6.png) |  |  |  | [<NSFW, click to see>](7280/previews/bikini.png) | [<NSFW, click to see>](7280/previews/bondage.png) |  |  |  | [<NSFW, click to see>](7280/previews/nude.png) | [<NSFW, click to see>](7280/previews/nude2.png) |  |  |
| 6760 | 0.821 | [Download](6760/houjou_karen_idolmastercinderellagirls.zip) |  |  |  | [<NSFW, click to see>](6760/previews/pattern_4.png) |  | [<NSFW, click to see>](6760/previews/pattern_6.png) |  |  |  | [<NSFW, click to see>](6760/previews/bikini.png) | [<NSFW, click to see>](6760/previews/bondage.png) |  |  |  | [<NSFW, click to see>](6760/previews/nude.png) | [<NSFW, click to see>](6760/previews/nude2.png) |  |  |
| 6240 | 0.838 | [Download](6240/houjou_karen_idolmastercinderellagirls.zip) |  |  |  | [<NSFW, click to see>](6240/previews/pattern_4.png) |  | [<NSFW, click to see>](6240/previews/pattern_6.png) |  |  |  | [<NSFW, click to see>](6240/previews/bikini.png) | [<NSFW, click to see>](6240/previews/bondage.png) |  |  |  | [<NSFW, click to see>](6240/previews/nude.png) | [<NSFW, click to see>](6240/previews/nude2.png) |  |  |
| 5720 | 0.887 | [Download](5720/houjou_karen_idolmastercinderellagirls.zip) |  |  |  | [<NSFW, click to see>](5720/previews/pattern_4.png) |  | [<NSFW, click to see>](5720/previews/pattern_6.png) |  |  |  | [<NSFW, click to see>](5720/previews/bikini.png) | [<NSFW, click to see>](5720/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5720/previews/nude.png) | [<NSFW, click to see>](5720/previews/nude2.png) |  |  |
| 5200 | 0.908 | [Download](5200/houjou_karen_idolmastercinderellagirls.zip) |  |  |  | [<NSFW, click to see>](5200/previews/pattern_4.png) |  | [<NSFW, click to see>](5200/previews/pattern_6.png) |  |  |  | [<NSFW, click to see>](5200/previews/bikini.png) | [<NSFW, click to see>](5200/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5200/previews/nude.png) | [<NSFW, click to see>](5200/previews/nude2.png) |  |  |
| 4680 | 0.859 | [Download](4680/houjou_karen_idolmastercinderellagirls.zip) |  |  |  | [<NSFW, click to see>](4680/previews/pattern_4.png) |  | [<NSFW, click to see>](4680/previews/pattern_6.png) |  |  |  | [<NSFW, click to see>](4680/previews/bikini.png) | [<NSFW, click to see>](4680/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4680/previews/nude.png) | [<NSFW, click to see>](4680/previews/nude2.png) |  |  |
| 4160 | 0.908 | [Download](4160/houjou_karen_idolmastercinderellagirls.zip) |  |  |  | [<NSFW, click to see>](4160/previews/pattern_4.png) |  | [<NSFW, click to see>](4160/previews/pattern_6.png) |  |  |  | [<NSFW, click to see>](4160/previews/bikini.png) | [<NSFW, click to see>](4160/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4160/previews/nude.png) | [<NSFW, click to see>](4160/previews/nude2.png) |  |  |
| 3640 | 0.858 | [Download](3640/houjou_karen_idolmastercinderellagirls.zip) |  |  |  | [<NSFW, click to see>](3640/previews/pattern_4.png) |  | [<NSFW, click to see>](3640/previews/pattern_6.png) |  |  |  | [<NSFW, click to see>](3640/previews/bikini.png) | [<NSFW, click to see>](3640/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3640/previews/nude.png) | [<NSFW, click to see>](3640/previews/nude2.png) |  |  |
| 3120 | 0.811 | [Download](3120/houjou_karen_idolmastercinderellagirls.zip) |  |  |  | [<NSFW, click to see>](3120/previews/pattern_4.png) |  | [<NSFW, click to see>](3120/previews/pattern_6.png) |  |  |  | [<NSFW, click to see>](3120/previews/bikini.png) | [<NSFW, click to see>](3120/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3120/previews/nude.png) | [<NSFW, click to see>](3120/previews/nude2.png) |  |  |
| 2600 | 0.818 | [Download](2600/houjou_karen_idolmastercinderellagirls.zip) |  |  |  | [<NSFW, click to see>](2600/previews/pattern_4.png) |  | [<NSFW, click to see>](2600/previews/pattern_6.png) |  |  |  | [<NSFW, click to see>](2600/previews/bikini.png) | [<NSFW, click to see>](2600/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2600/previews/nude.png) | [<NSFW, click to see>](2600/previews/nude2.png) |  |  |
| 2080 | 0.862 | [Download](2080/houjou_karen_idolmastercinderellagirls.zip) |  |  |  | [<NSFW, click to see>](2080/previews/pattern_4.png) |  | [<NSFW, click to see>](2080/previews/pattern_6.png) |  |  |  | [<NSFW, click to see>](2080/previews/bikini.png) | [<NSFW, click to see>](2080/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2080/previews/nude.png) | [<NSFW, click to see>](2080/previews/nude2.png) |  |  |
| 1560 | 0.728 | [Download](1560/houjou_karen_idolmastercinderellagirls.zip) |  |  |  | [<NSFW, click to see>](1560/previews/pattern_4.png) |  | [<NSFW, click to see>](1560/previews/pattern_6.png) |  |  |  | [<NSFW, click to see>](1560/previews/bikini.png) | [<NSFW, click to see>](1560/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1560/previews/nude.png) | [<NSFW, click to see>](1560/previews/nude2.png) |  |  |
| 1040 | 0.781 | [Download](1040/houjou_karen_idolmastercinderellagirls.zip) |  |  |  | [<NSFW, click to see>](1040/previews/pattern_4.png) |  | [<NSFW, click to see>](1040/previews/pattern_6.png) |  |  |  | [<NSFW, click to see>](1040/previews/bikini.png) | [<NSFW, click to see>](1040/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1040/previews/nude.png) | [<NSFW, click to see>](1040/previews/nude2.png) |  |  |
| 520 | 0.629 | [Download](520/houjou_karen_idolmastercinderellagirls.zip) |  |  |  | [<NSFW, click to see>](520/previews/pattern_4.png) |  | [<NSFW, click to see>](520/previews/pattern_6.png) |  |  |  | [<NSFW, click to see>](520/previews/bikini.png) | [<NSFW, click to see>](520/previews/bondage.png) |  |  |  | [<NSFW, click to see>](520/previews/nude.png) | [<NSFW, click to see>](520/previews/nude2.png) |  |  |
ricecake/codellama-pygmalion-lora-test3 | ricecake | 2023-09-16T14:39:09Z | 0 | 0 | peft | [peft, region:us] | null | 2023-09-16T14:38:07Z
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training (an equivalent `BitsAndBytesConfig` is sketched after the list):
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
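For readers reproducing this setup, the list above maps onto a `transformers` `BitsAndBytesConfig` roughly as follows (a sketch, not code taken from this repo):
```python
import torch
from transformers import BitsAndBytesConfig

# 4-bit NF4 quantization with double quantization and bfloat16 compute,
# matching the values listed above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
# Pass it as: AutoModelForCausalLM.from_pretrained(base_model, quantization_config=bnb_config)
```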
### Framework versions
- PEFT 0.6.0.dev0
kanishka/smolm-mlm-bpe-unmask-seed_555 | kanishka | 2023-09-16T14:26:26Z | 116 | 0 | transformers | [transformers, pytorch, roberta, fill-mask, generated_from_trainer, autotrain_compatible, endpoints_compatible, region:us] | fill-mask | 2023-09-16T12:26:44Z
---
base_model: models/smolm-mlm/config.json
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: smolm-mlm-bpe-unmask-seed_555
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smolm-mlm-bpe-unmask-seed_555
This model is a fine-tuned version of [models/smolm-mlm/config.json](https://huggingface.co/models/smolm-mlm/config.json) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7192
- Accuracy: 0.4463
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 512
- seed: 555
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 24000
- num_epochs: 10.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:------:|:---------------:|:--------:|
| 3.5651 | 1.0 | 11938 | 3.5161 | 0.3444 |
| 3.3268 | 2.0 | 23876 | 3.3638 | 0.3500 |
| 3.174 | 3.0 | 35814 | 3.1674 | 0.3830 |
| 3.0341 | 4.0 | 47752 | 3.0337 | 0.3976 |
| 2.9342 | 5.0 | 59690 | 2.9471 | 0.4135 |
| 2.8705 | 6.0 | 71628 | 2.8851 | 0.4242 |
| 2.7996 | 7.0 | 83566 | 2.8431 | 0.4292 |
| 2.7124 | 8.0 | 95504 | 2.7960 | 0.4344 |
| 2.6633 | 9.0 | 107442 | 2.7143 | 0.4475 |
| 2.6564 | 10.0 | 119380 | 2.6882 | 0.4477 |
### Framework versions
- Transformers 4.32.1
- Pytorch 1.13.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
ygaci/ppo-Huggy | ygaci | 2023-09-16T14:26:15Z | 11 | 0 | ml-agents | [ml-agents, tensorboard, onnx, Huggy, deep-reinforcement-learning, reinforcement-learning, ML-Agents-Huggy, region:us] | reinforcement-learning | 2023-09-16T14:26:05Z
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* on understanding how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **play directly in your browser**:
1. If the environment is part of the official ML-Agents environments, go to https://huggingface.co/unity
2. Find your model_id: ygaci/ppo-Huggy
3. Select your `.nn`/`.onnx` file
4. Click on *Watch the agent play* 👀
CyberHarem/tsukuyomi_komoe_toarumajutsunoindex | CyberHarem | 2023-09-16T14:20:13Z | 0 | 0 | null | [art, text-to-image, dataset:CyberHarem/tsukuyomi_komoe_toarumajutsunoindex, license:mit, region:us] | text-to-image | 2023-08-16T07:41:46Z
---
license: mit
datasets:
- CyberHarem/tsukuyomi_komoe_toarumajutsunoindex
pipeline_tag: text-to-image
tags:
- art
---
# LoRA of tsukuyomi_komoe_toarumajutsunoindex
This model was trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion); the auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs).
The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11).
After downloading the pt and safetensors files for the specified step, use them together: the pt file is loaded as an embedding, while the safetensors file is loaded as the LoRA weights.
For example, to use the model from step 6380, download `6380/tsukuyomi_komoe_toarumajutsunoindex.pt` as the embedding and `6380/tsukuyomi_komoe_toarumajutsunoindex.safetensors` as the LoRA weights. With both files loaded together, you can generate images of the character.
**The best step we recommend is 6380**, with a score of 0.879. The trigger words are:
1. `tsukuyomi_komoe_toarumajutsunoindex`
2. `short_hair, pink_hair, pink_eyes, open_mouth`
We regret that this model is not recommended for the following groups:
1. Individuals who cannot tolerate any deviations from the original character design, even in the slightest detail.
2. Individuals whose use cases demand high accuracy in recreating character outfits.
3. Individuals who cannot accept the randomness inherent in AI-generated images based on the Stable Diffusion algorithm.
4. Individuals who are not comfortable with the fully automated process of training character models with LoRA, or who believe that character models must be trained purely by manual operation to avoid disrespecting the characters.
5. Individuals who find the generated image content offensive to their values.
These are the available steps:
| Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | pattern_5 | pattern_6 | pattern_7 | pattern_8 | pattern_9 | pattern_10 | pattern_11 | pattern_12 | pattern_13 | pattern_14 | pattern_15 | pattern_16 | pattern_17 | pattern_18 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata |
|:---------|:----------|:-------------------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-----------------------------------------|:--------------------------------------------------|:-------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------|
| 8700 | 0.831 | [Download](8700/tsukuyomi_komoe_toarumajutsunoindex.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](8700/previews/bondage.png) |  |  |  | [<NSFW, click to see>](8700/previews/nude.png) | [<NSFW, click to see>](8700/previews/nude2.png) |  |  |
| 8120 | 0.855 | [Download](8120/tsukuyomi_komoe_toarumajutsunoindex.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](8120/previews/bondage.png) |  |  |  | [<NSFW, click to see>](8120/previews/nude.png) | [<NSFW, click to see>](8120/previews/nude2.png) |  |  |
| 7540 | 0.873 | [Download](7540/tsukuyomi_komoe_toarumajutsunoindex.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](7540/previews/bondage.png) |  |  |  | [<NSFW, click to see>](7540/previews/nude.png) | [<NSFW, click to see>](7540/previews/nude2.png) |  |  |
| 6960 | 0.868 | [Download](6960/tsukuyomi_komoe_toarumajutsunoindex.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](6960/previews/bondage.png) |  |  |  | [<NSFW, click to see>](6960/previews/nude.png) | [<NSFW, click to see>](6960/previews/nude2.png) |  |  |
| **6380** | **0.879** | [**Download**](6380/tsukuyomi_komoe_toarumajutsunoindex.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](6380/previews/bondage.png) |  |  |  | [<NSFW, click to see>](6380/previews/nude.png) | [<NSFW, click to see>](6380/previews/nude2.png) |  |  |
| 5800 | 0.838 | [Download](5800/tsukuyomi_komoe_toarumajutsunoindex.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](5800/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5800/previews/nude.png) | [<NSFW, click to see>](5800/previews/nude2.png) |  |  |
| 5220 | 0.720 | [Download](5220/tsukuyomi_komoe_toarumajutsunoindex.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](5220/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5220/previews/nude.png) | [<NSFW, click to see>](5220/previews/nude2.png) |  |  |
| 4640 | 0.811 | [Download](4640/tsukuyomi_komoe_toarumajutsunoindex.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4640/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4640/previews/nude.png) | [<NSFW, click to see>](4640/previews/nude2.png) |  |  |
| 4060 | 0.696 | [Download](4060/tsukuyomi_komoe_toarumajutsunoindex.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4060/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4060/previews/nude.png) | [<NSFW, click to see>](4060/previews/nude2.png) |  |  |
| 3480 | 0.703 | [Download](3480/tsukuyomi_komoe_toarumajutsunoindex.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3480/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3480/previews/nude.png) | [<NSFW, click to see>](3480/previews/nude2.png) |  |  |
| 2900 | 0.612 | [Download](2900/tsukuyomi_komoe_toarumajutsunoindex.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2900/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2900/previews/nude.png) | [<NSFW, click to see>](2900/previews/nude2.png) |  |  |
| 2320 | 0.400 | [Download](2320/tsukuyomi_komoe_toarumajutsunoindex.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2320/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2320/previews/nude.png) | [<NSFW, click to see>](2320/previews/nude2.png) |  |  |
| 1740 | 0.439 | [Download](1740/tsukuyomi_komoe_toarumajutsunoindex.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1740/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1740/previews/nude.png) | [<NSFW, click to see>](1740/previews/nude2.png) |  |  |
| 1160 | 0.190 | [Download](1160/tsukuyomi_komoe_toarumajutsunoindex.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1160/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1160/previews/nude.png) | [<NSFW, click to see>](1160/previews/nude2.png) |  |  |
| 580 | 0.091 | [Download](580/tsukuyomi_komoe_toarumajutsunoindex.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](580/previews/bondage.png) |  |  |  | [<NSFW, click to see>](580/previews/nude.png) | [<NSFW, click to see>](580/previews/nude2.png) |  |  |
badokorach/multilingual-cased-finetuned-luganda-qa | badokorach | 2023-09-16T14:05:10Z | 20 | 0 | transformers | [transformers, pytorch, bert, question-answering, generated_from_trainer, base_model:badokorach/multilingual-cased-finetuned-luganda, base_model:finetune:badokorach/multilingual-cased-finetuned-luganda, endpoints_compatible, region:us] | question-answering | 2023-09-13T22:28:45Z
---
base_model: badokorach/multilingual-cased-finetuned-luganda
tags:
- generated_from_trainer
model-index:
- name: multilingual-cased-finetuned-luganda-qa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# multilingual-cased-finetuned-luganda-qa
This model is a fine-tuned version of [badokorach/multilingual-cased-finetuned-luganda](https://huggingface.co/badokorach/multilingual-cased-finetuned-luganda) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0000
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.015 | 1.0 | 2215 | 0.0140 |
| 0.0103 | 2.0 | 4430 | 0.0057 |
| 0.0128 | 3.0 | 6645 | 0.0003 |
| 0.0085 | 4.0 | 8860 | 0.0004 |
| 0.0104 | 5.0 | 11075 | 0.0051 |
| 0.0083 | 6.0 | 13290 | 0.0020 |
| 0.0074 | 7.0 | 15505 | 0.0025 |
| 0.0056 | 8.0 | 17720 | 0.0000 |
| 0.0023 | 9.0 | 19935 | 0.0001 |
| 0.0032 | 10.0 | 22150 | 0.0000 |
| 0.0007 | 11.0 | 24365 | 0.0000 |
| 0.0003 | 12.0 | 26580 | 0.0000 |
| 0.0017 | 13.0 | 28795 | 0.0000 |
| 0.0005 | 14.0 | 31010 | 0.0000 |
| 0.0002 | 15.0 | 33225 | 0.0000 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
LarryAIDraw/Iinomiko-v2-000010 | LarryAIDraw | 2023-09-16T14:00:17Z | 0 | 0 | null | [license:creativeml-openrail-m, region:us] | null | 2023-09-16T13:57:08Z
---
license: creativeml-openrail-m
---
https://civitai.com/models/146080/iino-miko-kaguya-sama-wa-kokurasetailove-is-war
LarryAIDraw/YooV2 | LarryAIDraw | 2023-09-16T13:59:38Z | 0 | 0 | null | [license:creativeml-openrail-m, region:us] | null | 2023-09-16T13:55:48Z
---
license: creativeml-openrail-m
---
https://civitai.com/models/121947/jiyoung-yoo-or-manhwa-or-eleceed
venkataravuri/my_awesome_swag_model | venkataravuri | 2023-09-16T13:48:26Z | 103 | 0 | transformers | [transformers, pytorch, bert, multiple-choice, generated_from_trainer, dataset:swag, endpoints_compatible, region:us] | multiple-choice | 2023-06-16T04:18:49Z
---
tags:
- generated_from_trainer
datasets:
- swag
metrics:
- accuracy
model-index:
- name: my_awesome_swag_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_swag_model
This model was trained from scratch on the swag dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1711
- Accuracy: 0.746
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.8738 | 1.0 | 625 | 0.6888 | 0.728 |
| 0.431 | 2.0 | 1250 | 0.7642 | 0.736 |
| 0.1891 | 3.0 | 1875 | 1.1711 | 0.746 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
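### Example usage
Multiple-choice heads expect inputs of shape `(batch, num_choices, seq_len)`; here is a minimal inference sketch, with an illustrative SWAG-style prompt and candidate endings:
```python
import torch
from transformers import AutoTokenizer, AutoModelForMultipleChoice

tokenizer = AutoTokenizer.from_pretrained("venkataravuri/my_awesome_swag_model")
model = AutoModelForMultipleChoice.from_pretrained("venkataravuri/my_awesome_swag_model")

prompt = "A man is sitting on a roof."
endings = [
    "He starts pulling up roofing on a roof.",
    "He is ripping level tiles off.",
    "He is holding a rubik's cube.",
    "He starts pulling up roofing tiles.",
]

# Pair the prompt with each candidate ending, SWAG-style.
inputs = tokenizer([prompt] * len(endings), endings, return_tensors="pt", padding=True)
# Add the num_choices dimension expected by the multiple-choice head.
outputs = model(**{k: v.unsqueeze(0) for k, v in inputs.items()})
print(endings[outputs.logits.argmax().item()])
```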
espiusedwards/flant5-large-lora | espiusedwards | 2023-09-16T13:46:52Z | 23 | 0 | peft | [peft, region:us] | null | 2023-09-01T11:45:54Z
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0
fetiska/pyramid_raider | fetiska | 2023-09-16T13:44:24Z | 1 | 0 | ml-agents | [ml-agents, tensorboard, onnx, Pyramids, deep-reinforcement-learning, reinforcement-learning, ML-Agents-Pyramids, region:us] | reinforcement-learning | 2023-09-16T13:44:16Z
---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* on understanding how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **play directly in your browser**:
1. If the environment is part of the official ML-Agents environments, go to https://huggingface.co/unity
2. Find your model_id: fetiska/pyramid_raider
3. Select your `.nn`/`.onnx` file
4. Click on *Watch the agent play* 👀
LarryAIDraw/haruna_ba | LarryAIDraw | 2023-09-16T13:30:38Z | 0 | 0 | null | [license:creativeml-openrail-m, region:us] | null | 2023-09-16T13:30:10Z
---
license: creativeml-openrail-m
---
nikhilwani/masked-language-model | nikhilwani | 2023-09-16T13:18:20Z | 105 | 0 | transformers | [transformers, pytorch, roberta, fill-mask, generated_from_trainer, base_model:distilbert/distilroberta-base, base_model:finetune:distilbert/distilroberta-base, license:apache-2.0, autotrain_compatible, endpoints_compatible, region:us] | fill-mask | 2023-09-16T13:04:50Z
---
license: apache-2.0
base_model: distilroberta-base
tags:
- generated_from_trainer
model-index:
- name: masked-language-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# masked-language-model
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0006
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.2449 | 1.0 | 1141 | 2.0667 |
| 2.1633 | 2.0 | 2282 | 1.9991 |
| 2.1264 | 3.0 | 3423 | 1.9840 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
kyuwon416/rl_course_vizdoom_health_gathering_supreme | kyuwon416 | 2023-09-16T13:12:02Z | 0 | 0 | sample-factory | [sample-factory, tensorboard, deep-reinforcement-learning, reinforcement-learning, model-index, region:us] | reinforcement-learning | 2023-09-16T11:31:35Z
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 11.12 +/- 5.35
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r kyuwon416/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details.
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m sf_examples.vizdoom.train_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note: you may need to adjust `--train_for_env_steps` to a suitably high number, as the experiment will resume from the step count where it previously concluded.
jeffsabarman/image_classification | jeffsabarman | 2023-09-16T13:01:32Z | 217 | 0 | transformers | [transformers, pytorch, vit, image-classification, generated_from_trainer, dataset:imagefolder, base_model:google/vit-base-patch16-224-in21k, base_model:finetune:google/vit-base-patch16-224-in21k, license:apache-2.0, model-index, autotrain_compatible, endpoints_compatible, region:us] | image-classification | 2023-09-16T13:00:54Z
---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: image_classification
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: en-US
split: train
args: en-US
metrics:
- name: Accuracy
type: accuracy
value: 0.60625
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# image_classification
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1918
- Accuracy: 0.6062
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 20 | 1.6651 | 0.3187 |
| No log | 2.0 | 40 | 1.3900 | 0.475 |
| No log | 3.0 | 60 | 1.2950 | 0.4875 |
| No log | 4.0 | 80 | 1.2170 | 0.5813 |
| No log | 5.0 | 100 | 1.1709 | 0.5687 |
| No log | 6.0 | 120 | 1.2711 | 0.525 |
| No log | 7.0 | 140 | 1.1324 | 0.575 |
| No log | 8.0 | 160 | 1.2349 | 0.5437 |
| No log | 9.0 | 180 | 1.3844 | 0.5312 |
| No log | 10.0 | 200 | 1.2460 | 0.55 |
| No log | 11.0 | 220 | 1.2182 | 0.6125 |
| No log | 12.0 | 240 | 1.3365 | 0.5563 |
| No log | 13.0 | 260 | 1.2137 | 0.6125 |
| No log | 14.0 | 280 | 1.3335 | 0.575 |
| No log | 15.0 | 300 | 1.1078 | 0.625 |
| No log | 16.0 | 320 | 1.2962 | 0.6 |
| No log | 17.0 | 340 | 1.2558 | 0.6125 |
| No log | 18.0 | 360 | 1.3949 | 0.55 |
| No log | 19.0 | 380 | 1.3807 | 0.5687 |
| No log | 20.0 | 400 | 1.2734 | 0.6 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
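### Example usage
A minimal inference sketch, assuming the checkpoint loads with the standard `image-classification` pipeline (`photo.jpg` is a placeholder path):
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="jeffsabarman/image_classification")
print(classifier("photo.jpg"))  # top predicted labels with scores
```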
huba-buba/speecht5_tts_voxpopuli_nl | huba-buba | 2023-09-16T12:36:11Z | 79 | 0 | transformers | [transformers, pytorch, speecht5, text-to-audio, test model, generated_from_trainer, nl, dataset:facebook/voxpopuli, base_model:microsoft/speecht5_tts, base_model:finetune:microsoft/speecht5_tts, license:mit, endpoints_compatible, region:us] | text-to-audio | 2023-09-16T09:06:39Z
---
language:
- nl
license: mit
base_model: microsoft/speecht5_tts
tags:
- test model
- generated_from_trainer
datasets:
- facebook/voxpopuli
model-index:
- name: SpeechT5 TTS Dutch
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SpeechT5 TTS Dutch
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the VoxPopuli dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4546
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.5151 | 8.58 | 1000 | 0.4713 |
| 0.4916 | 17.17 | 2000 | 0.4599 |
| 0.4863 | 25.75 | 3000 | 0.4551 |
| 0.4896 | 34.33 | 4000 | 0.4546 |
### Framework versions
- Transformers 4.34.0.dev0
- Pytorch 1.13.1+cu117
- Datasets 2.14.5
- Tokenizers 0.13.3
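### Example usage
A minimal synthesis sketch following the standard SpeechT5 recipe; the x-vector dataset and index are illustrative assumptions, since the card does not say which speaker embedding was used:
```python
import torch
import soundfile as sf
from datasets import load_dataset
from transformers import SpeechT5Processor, SpeechT5ForTextToSpeech, SpeechT5HifiGan

processor = SpeechT5Processor.from_pretrained("huba-buba/speecht5_tts_voxpopuli_nl")
model = SpeechT5ForTextToSpeech.from_pretrained("huba-buba/speecht5_tts_voxpopuli_nl")
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

# SpeechT5 conditions on a speaker x-vector; this public set is a common choice.
xvectors = load_dataset("Matthijs/cmu-arctic-xvectors", split="validation")
speaker = torch.tensor(xvectors[7306]["xvector"]).unsqueeze(0)

inputs = processor(text="Hallo, dit is een test.", return_tensors="pt")
speech = model.generate_speech(inputs["input_ids"], speaker, vocoder=vocoder)
sf.write("speech.wav", speech.numpy(), samplerate=16000)
```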
fetiska/cold_weapon | fetiska | 2023-09-16T12:18:33Z | 0 | 0 | ml-agents | [ml-agents, tensorboard, onnx, SnowballTarget, deep-reinforcement-learning, reinforcement-learning, ML-Agents-SnowballTarget, region:us] | reinforcement-learning | 2023-09-16T12:18:30Z
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* on understanding how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **play directly in your browser**:
1. If the environment is part of the official ML-Agents environments, go to https://huggingface.co/unity
2. Find your model_id: fetiska/cold_weapon
3. Select your `.nn`/`.onnx` file
4. Click on *Watch the agent play* 👀
imone/CodeLlama_13B_with_EOT_token | imone | 2023-09-16T11:55:50Z | 8 | 1 | transformers | [transformers, pytorch, license:llama2, endpoints_compatible, region:us] | null | 2023-09-16T11:44:59Z
---
license: llama2
---
# Code Llama 13B with End-of-turn (EOT) Token
This is the Code Llama 13B model with an `<|end_of_turn|>` token added as id `32016`, along with other special tokens. Each new token's input/output embedding is initialized as the mean of all existing input/output token embeddings, respectively.
## Special tokens added:
```json
{
"<|end_of_turn|>": 32016,
"<|verdict|>": 32017,
"<|PAD|>": 32018,
"<|PAD2|>": 32019,
}
```
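The described initialization can be reproduced along these lines; this is a sketch that assumes `codellama/CodeLlama-13b-hf` as the base checkpoint, not the exact script used for this upload:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

base = "codellama/CodeLlama-13b-hf"  # assumed base checkpoint
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype=torch.bfloat16)

new_tokens = ["<|end_of_turn|>", "<|verdict|>", "<|PAD|>", "<|PAD2|>"]
tokenizer.add_special_tokens({"additional_special_tokens": new_tokens})
model.resize_token_embeddings(len(tokenizer))

# Initialize each appended row as the mean of the pre-existing embeddings,
# separately for the input and output embedding matrices.
n = len(new_tokens)
with torch.no_grad():
    for emb in (model.get_input_embeddings(), model.get_output_embeddings()):
        emb.weight[-n:] = emb.weight[:-n].mean(dim=0, keepdim=True)
```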
open-eye/RETFound_MAE | open-eye | 2023-09-16T11:52:03Z | 0 | 7 | null | [Foundation model for eye, license:cc-by-nc-4.0, region:us] | null | 2023-09-16T11:28:33Z
---
tags:
- Foundation model for eye
license: cc-by-nc-4.0
---
## RETFound - A foundation model for retinal imaging
This is the official repo for RETFound, which is based on [MAE](https://github.com/facebookresearch/mae).
Please contact **ykzhoua@gmail.com** or **yukun.zhou.19@ucl.ac.uk** if you have questions.
A Keras version, implemented by Yuka Kihara, can be found [here](https://github.com/uw-biomedical-ml/RETFound_MAE).
### Key features
- RETFound is pre-trained on 1.6 million retinal images with self-supervised learning
- RETFound has been validated in multiple disease detection tasks
- RETFound can be efficiently adapted to customised tasks
### News
- A [visualisation demo](https://github.com/rmaphoh/RETFound_MAE/blob/main/RETFound_visualize.ipynb) has been added
### Install environment
1. Create environment with conda:
```
conda create -n retfound python=3.7.5 -y
conda activate retfound
```
2. Install dependencies
```
git clone https://github.com/rmaphoh/RETFound_MAE/
cd RETFound_MAE
pip install -r requirement.txt
```
### Fine-tuning with RETFound weights
To fine-tune RETFound on your own data, follow these steps:
1. Download the RETFound pre-trained weights
<table><tbody>
<!-- START TABLE -->
<!-- TABLE HEADER -->
<th valign="bottom"></th>
<th valign="bottom">ViT-Large</th>
<!-- TABLE BODY -->
<tr><td align="left">Colour fundus image</td>
<td align="center"><a href="https://drive.google.com/file/d/1l62zbWUFTlp214SvK6eMwPQZAzcwoeBE/view?usp=sharing">download</a></td>
</tr>
<!-- TABLE BODY -->
<tr><td align="left">OCT</td>
<td align="center"><a href="https://drive.google.com/file/d/1m6s7QYkjyjJDlpEuXm7Xp3PmjN-elfW2/view?usp=sharing">download</a></td>
</tr>
</tbody></table>
2. Organise your data into this directory structure (using IDRiD as an [example](Example.ipynb))
<p align="left">
<img src="./pic/file_index.jpg" width="160">
</p>
3. Start fine-tuning (using IDRiD as an example). A fine-tuned checkpoint will be saved during training, and evaluation will run after training.
```
python -m torch.distributed.launch --nproc_per_node=1 --master_port=48798 main_finetune.py \
--batch_size 16 \
--world_size 1 \
--model vit_large_patch16 \
--epochs 50 \
--blr 5e-3 --layer_decay 0.65 \
--weight_decay 0.05 --drop_path 0.2 \
--nb_classes 5 \
--data_path ./IDRiD_data/ \
--task ./finetune_IDRiD/ \
--finetune ./RETFound_cfp_weights.pth
```
4. For evaluation only
```
python -m torch.distributed.launch --nproc_per_node=1 --master_port=48798 main_finetune.py \
--eval --batch_size 16 \
--world_size 1 \
--model vit_large_patch16 \
--epochs 50 \
--blr 5e-3 --layer_decay 0.65 \
--weight_decay 0.05 --drop_path 0.2 \
--nb_classes 5 \
--data_path ./IDRiD_data/ \
--task ./internal_IDRiD/ \
--resume ./finetune_IDRiD/checkpoint-best.pth
```
### Load the model and weights (if you want to call the model in your code)
```python
import torch
import models_vit
from util.pos_embed import interpolate_pos_embed
from timm.models.layers import trunc_normal_

# call the model
model = models_vit.__dict__['vit_large_patch16'](
    num_classes=2,
    drop_path_rate=0.2,
    global_pool=True,
)

# load RETFound weights
checkpoint = torch.load('RETFound_cfp_weights.pth', map_location='cpu')
checkpoint_model = checkpoint['model']
state_dict = model.state_dict()
for k in ['head.weight', 'head.bias']:
    if k in checkpoint_model and checkpoint_model[k].shape != state_dict[k].shape:
        print(f"Removing key {k} from pretrained checkpoint")
        del checkpoint_model[k]

# interpolate position embedding
interpolate_pos_embed(model, checkpoint_model)

# load pre-trained model
msg = model.load_state_dict(checkpoint_model, strict=False)
assert set(msg.missing_keys) == {'head.weight', 'head.bias', 'fc_norm.weight', 'fc_norm.bias'}

# manually initialize fc layer
trunc_normal_(model.head.weight, std=2e-5)
print("Model = %s" % str(model))
```
him009/bloomz_fine_tuned_marketing_email | him009 | 2023-09-16T11:45:40Z | 0 | 0 | peft | [peft, region:us] | null | 2023-09-16T11:45:39Z
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.6.0.dev0
dariadaria/reviews_classifier | dariadaria | 2023-09-16T11:43:10Z | 107 | 0 | transformers | [transformers, pytorch, roberta, text-classification, autotrain_compatible, endpoints_compatible, region:us] | text-classification | 2023-08-05T02:59:44Z
---
language:
- en
thumbnail: "url to a thumbnail used in social sharing"
tags:
- sentiment analysis
license: commercial use disallowed
datasets:
- dariadaria/disneyland_reviews
---
Label mapping (3 classes):
```python
SENTIMENT_LABELS = {
    'NEGATIVE': 0,
    'POSITIVE': 1,
    'NEUTRAL': 2,
}
num_classes = 3
```
Confusion matrix:

|               | pred: 0 (NEG) | pred: 1 (POS) | pred: 2 (NEU) |
|---------------|---------------|---------------|---------------|
| true: 0 (NEG) | 793           | 188           | 171           |
| true: 1 (POS) | 227           | 920           | 260           |
| true: 2 (NEU) | 115           | 203           | 5087          |
|
CyberHarem/fukiyose_seiri_toarumajutsunoindex
|
CyberHarem
| 2023-09-16T11:40:55Z | 0 | 0 | null |
[
"art",
"text-to-image",
"dataset:CyberHarem/fukiyose_seiri_toarumajutsunoindex",
"license:mit",
"region:us"
] |
text-to-image
| 2023-08-15T22:22:23Z |
---
license: mit
datasets:
- CyberHarem/fukiyose_seiri_toarumajutsunoindex
pipeline_tag: text-to-image
tags:
- art
---
# Lora of fukiyose_seiri_toarumajutsunoindex
This model is trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion), and the auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs).
The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11).
After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora.
For example, if you want to use the model from step 5100, you need to download `5100/fukiyose_seiri_toarumajutsunoindex.pt` as the embedding and `5100/fukiyose_seiri_toarumajutsunoindex.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters.
**The best step we recommend is 5100**, with a score of 0.907. The trigger words are:
1. `fukiyose_seiri_toarumajutsunoindex`
2. `black_hair, long_hair, brown_eyes`
For the following groups, the use of this model is not recommended, and we express our regret:
1. Individuals who cannot tolerate any deviations from the original character design, even in the slightest detail.
2. Individuals who are facing application scenarios with high demands for accuracy in recreating character outfits.
3. Individuals who cannot accept the potential randomness in AI-generated images based on the Stable Diffusion algorithm.
4. Individuals who are not comfortable with the fully automated process of training character models using LoRA, or those who believe that training character models must be done purely through manual operations to avoid disrespecting the characters.
5. Individuals who find the generated image content offensive to their values.
These are available steps:
| Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | pattern_5 | pattern_6 | pattern_7 | pattern_8 | pattern_9 | pattern_10 | pattern_11 | pattern_12 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata |
|:---------|:----------|:------------------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-----------------------------------------------------|:-------------------------------------------------|:--------------------------------------------------|:-------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------|
| **5100** | **0.907** | [**Download**](5100/fukiyose_seiri_toarumajutsunoindex.zip) |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](5100/previews/pattern_12.png) | [<NSFW, click to see>](5100/previews/bikini.png) | [<NSFW, click to see>](5100/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5100/previews/nude.png) | [<NSFW, click to see>](5100/previews/nude2.png) |  |  |
| 4760 | 0.847 | [Download](4760/fukiyose_seiri_toarumajutsunoindex.zip) |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4760/previews/pattern_12.png) | [<NSFW, click to see>](4760/previews/bikini.png) | [<NSFW, click to see>](4760/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4760/previews/nude.png) | [<NSFW, click to see>](4760/previews/nude2.png) |  |  |
| 4420 | 0.862 | [Download](4420/fukiyose_seiri_toarumajutsunoindex.zip) |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4420/previews/pattern_12.png) | [<NSFW, click to see>](4420/previews/bikini.png) | [<NSFW, click to see>](4420/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4420/previews/nude.png) | [<NSFW, click to see>](4420/previews/nude2.png) |  |  |
| 4080 | 0.839 | [Download](4080/fukiyose_seiri_toarumajutsunoindex.zip) |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4080/previews/pattern_12.png) | [<NSFW, click to see>](4080/previews/bikini.png) | [<NSFW, click to see>](4080/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4080/previews/nude.png) | [<NSFW, click to see>](4080/previews/nude2.png) |  |  |
| 3740 | 0.820 | [Download](3740/fukiyose_seiri_toarumajutsunoindex.zip) |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3740/previews/pattern_12.png) | [<NSFW, click to see>](3740/previews/bikini.png) | [<NSFW, click to see>](3740/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3740/previews/nude.png) | [<NSFW, click to see>](3740/previews/nude2.png) |  |  |
| 3400 | 0.875 | [Download](3400/fukiyose_seiri_toarumajutsunoindex.zip) |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3400/previews/pattern_12.png) | [<NSFW, click to see>](3400/previews/bikini.png) | [<NSFW, click to see>](3400/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3400/previews/nude.png) | [<NSFW, click to see>](3400/previews/nude2.png) |  |  |
| 3060 | 0.831 | [Download](3060/fukiyose_seiri_toarumajutsunoindex.zip) |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3060/previews/pattern_12.png) | [<NSFW, click to see>](3060/previews/bikini.png) | [<NSFW, click to see>](3060/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3060/previews/nude.png) | [<NSFW, click to see>](3060/previews/nude2.png) |  |  |
| 2720 | 0.790 | [Download](2720/fukiyose_seiri_toarumajutsunoindex.zip) |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2720/previews/pattern_12.png) | [<NSFW, click to see>](2720/previews/bikini.png) | [<NSFW, click to see>](2720/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2720/previews/nude.png) | [<NSFW, click to see>](2720/previews/nude2.png) |  |  |
| 2380 | 0.777 | [Download](2380/fukiyose_seiri_toarumajutsunoindex.zip) |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2380/previews/pattern_12.png) | [<NSFW, click to see>](2380/previews/bikini.png) | [<NSFW, click to see>](2380/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2380/previews/nude.png) | [<NSFW, click to see>](2380/previews/nude2.png) |  |  |
| 2040 | 0.789 | [Download](2040/fukiyose_seiri_toarumajutsunoindex.zip) |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2040/previews/pattern_12.png) | [<NSFW, click to see>](2040/previews/bikini.png) | [<NSFW, click to see>](2040/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2040/previews/nude.png) | [<NSFW, click to see>](2040/previews/nude2.png) |  |  |
| 1700 | 0.810 | [Download](1700/fukiyose_seiri_toarumajutsunoindex.zip) |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1700/previews/pattern_12.png) | [<NSFW, click to see>](1700/previews/bikini.png) | [<NSFW, click to see>](1700/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1700/previews/nude.png) | [<NSFW, click to see>](1700/previews/nude2.png) |  |  |
| 1360 | 0.756 | [Download](1360/fukiyose_seiri_toarumajutsunoindex.zip) |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1360/previews/pattern_12.png) | [<NSFW, click to see>](1360/previews/bikini.png) | [<NSFW, click to see>](1360/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1360/previews/nude.png) | [<NSFW, click to see>](1360/previews/nude2.png) |  |  |
| 1020 | 0.770 | [Download](1020/fukiyose_seiri_toarumajutsunoindex.zip) |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1020/previews/pattern_12.png) | [<NSFW, click to see>](1020/previews/bikini.png) | [<NSFW, click to see>](1020/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1020/previews/nude.png) | [<NSFW, click to see>](1020/previews/nude2.png) |  |  |
| 680 | 0.670 | [Download](680/fukiyose_seiri_toarumajutsunoindex.zip) |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](680/previews/pattern_12.png) | [<NSFW, click to see>](680/previews/bikini.png) | [<NSFW, click to see>](680/previews/bondage.png) |  |  |  | [<NSFW, click to see>](680/previews/nude.png) | [<NSFW, click to see>](680/previews/nude2.png) |  |  |
| 340 | 0.689 | [Download](340/fukiyose_seiri_toarumajutsunoindex.zip) |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](340/previews/pattern_12.png) | [<NSFW, click to see>](340/previews/bikini.png) | [<NSFW, click to see>](340/previews/bondage.png) |  |  |  | [<NSFW, click to see>](340/previews/nude.png) | [<NSFW, click to see>](340/previews/nude2.png) |  |  |
|
raffel-22/emotion_classification_2
|
raffel-22
| 2023-09-16T11:35:36Z | 217 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-09-16T11:19:27Z |
---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: emotion_classification_2
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.51875
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# emotion_classification_2
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3274
- Accuracy: 0.5188
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 20 | 1.9337 | 0.3563 |
| No log | 2.0 | 40 | 1.7116 | 0.3375 |
| No log | 3.0 | 60 | 1.5755 | 0.4562 |
| No log | 4.0 | 80 | 1.4939 | 0.45 |
| No log | 5.0 | 100 | 1.4377 | 0.5062 |
| No log | 6.0 | 120 | 1.4363 | 0.4562 |
| No log | 7.0 | 140 | 1.3615 | 0.5125 |
| No log | 8.0 | 160 | 1.3021 | 0.5375 |
| No log | 9.0 | 180 | 1.3307 | 0.525 |
| No log | 10.0 | 200 | 1.3085 | 0.4938 |
| No log | 11.0 | 220 | 1.2798 | 0.5813 |
| No log | 12.0 | 240 | 1.2707 | 0.525 |
| No log | 13.0 | 260 | 1.2339 | 0.55 |
| No log | 14.0 | 280 | 1.3053 | 0.5437 |
| No log | 15.0 | 300 | 1.3038 | 0.4938 |
| No log | 16.0 | 320 | 1.3088 | 0.5375 |
| No log | 17.0 | 340 | 1.3336 | 0.5312 |
| No log | 18.0 | 360 | 1.3053 | 0.5 |
| No log | 19.0 | 380 | 1.2206 | 0.5687 |
| No log | 20.0 | 400 | 1.2598 | 0.5312 |
| No log | 21.0 | 420 | 1.3332 | 0.5125 |
| No log | 22.0 | 440 | 1.3388 | 0.5312 |
| No log | 23.0 | 460 | 1.3129 | 0.5563 |
| No log | 24.0 | 480 | 1.3632 | 0.5062 |
| 0.9153 | 25.0 | 500 | 1.4166 | 0.4688 |
| 0.9153 | 26.0 | 520 | 1.4094 | 0.5 |
| 0.9153 | 27.0 | 540 | 1.4294 | 0.475 |
| 0.9153 | 28.0 | 560 | 1.4937 | 0.475 |
| 0.9153 | 29.0 | 580 | 1.3897 | 0.4938 |
| 0.9153 | 30.0 | 600 | 1.4565 | 0.475 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
kyuwon416/LunarLander-v2
|
kyuwon416
| 2023-09-16T11:07:42Z | 0 | 0 | null |
[
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-09-16T11:04:41Z |
---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -159.89 +/- 85.61
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo',
 'seed': 1,
 'torch_deterministic': True,
 'cuda': True,
 'track': False,
 'wandb_project_name': 'cleanRL',
 'wandb_entity': None,
 'capture_video': False,
 'env_id': 'LunarLander-v2',
 'total_timesteps': 50000,
 'learning_rate': 0.00025,
 'num_envs': 4,
 'num_steps': 128,
 'anneal_lr': True,
 'gae': True,
 'gamma': 0.99,
 'gae_lambda': 0.95,
 'num_minibatches': 4,
 'update_epochs': 4,
 'norm_adv': True,
 'clip_coef': 0.2,
 'clip_vloss': True,
 'ent_coef': 0.01,
 'vf_coef': 0.5,
 'max_grad_norm': 0.5,
 'target_kl': None,
 'repo_id': 'kyuwon416/LunarLander-v2-1',
 'batch_size': 512,
 'minibatch_size': 128}
```
|
simlamkr1/Llama2-simassistant
|
simlamkr1
| 2023-09-16T11:03:17Z | 0 | 0 |
peft
|
[
"peft",
"pytorch",
"llama",
"region:us"
] | null | 2023-08-25T17:22:00Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
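A minimal sketch (not from the original card) reproducing this config with `transformers.BitsAndBytesConfig`; fields not set below keep their defaults:
```python
import torch
from transformers import BitsAndBytesConfig

# NF4 4-bit quantization config, matching the values listed above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)
```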
### Framework versions
- PEFT 0.6.0.dev0
|
CyberHarem/uiharu_kazari_toarumajutsunoindex
|
CyberHarem
| 2023-09-16T10:53:02Z | 0 | 0 | null |
[
"art",
"text-to-image",
"dataset:CyberHarem/uiharu_kazari_toarumajutsunoindex",
"license:mit",
"region:us"
] |
text-to-image
| 2023-08-15T21:30:58Z |
---
license: mit
datasets:
- CyberHarem/uiharu_kazari_toarumajutsunoindex
pipeline_tag: text-to-image
tags:
- art
---
# Lora of uiharu_kazari_toarumajutsunoindex
This model is trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion), and the auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs).
The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11).
After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora.
For example, if you want to use the model from step 4080, you need to download `4080/uiharu_kazari_toarumajutsunoindex.pt` as the embedding and `4080/uiharu_kazari_toarumajutsunoindex.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters.
**The best step we recommend is 4080**, with a score of 0.974. The trigger words are:
1. `uiharu_kazari_toarumajutsunoindex`
2. `short_hair, black_hair, hair_ornament, flower, hair_flower, head_wreath, serafuku, brown_eyes`
For the following groups, the use of this model is not recommended, and we express our regret:
1. Individuals who cannot tolerate any deviations from the original character design, even in the slightest detail.
2. Individuals who are facing application scenarios with high demands for accuracy in recreating character outfits.
3. Individuals who cannot accept the potential randomness in AI-generated images based on the Stable Diffusion algorithm.
4. Individuals who are not comfortable with the fully automated process of training character models using LoRA, or those who believe that training character models must be done purely through manual operations to avoid disrespecting the characters.
5. Individuals who find the generated image content offensive to their values.
These are available steps:
| Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | pattern_5 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata |
|:---------|:----------|:-----------------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------|:--------------------------------------------------|:-------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------|
| 5100 | 0.955 | [Download](5100/uiharu_kazari_toarumajutsunoindex.zip) |  |  |  |  |  |  | [<NSFW, click to see>](5100/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5100/previews/nude.png) | [<NSFW, click to see>](5100/previews/nude2.png) |  |  |
| 4760 | 0.946 | [Download](4760/uiharu_kazari_toarumajutsunoindex.zip) |  |  |  |  |  |  | [<NSFW, click to see>](4760/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4760/previews/nude.png) | [<NSFW, click to see>](4760/previews/nude2.png) |  |  |
| 4420 | 0.969 | [Download](4420/uiharu_kazari_toarumajutsunoindex.zip) |  |  |  |  |  |  | [<NSFW, click to see>](4420/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4420/previews/nude.png) | [<NSFW, click to see>](4420/previews/nude2.png) |  |  |
| **4080** | **0.974** | [**Download**](4080/uiharu_kazari_toarumajutsunoindex.zip) |  |  |  |  |  |  | [<NSFW, click to see>](4080/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4080/previews/nude.png) | [<NSFW, click to see>](4080/previews/nude2.png) |  |  |
| 3740 | 0.858 | [Download](3740/uiharu_kazari_toarumajutsunoindex.zip) |  |  |  |  |  |  | [<NSFW, click to see>](3740/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3740/previews/nude.png) | [<NSFW, click to see>](3740/previews/nude2.png) |  |  |
| 3400 | 0.886 | [Download](3400/uiharu_kazari_toarumajutsunoindex.zip) |  |  |  |  |  |  | [<NSFW, click to see>](3400/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3400/previews/nude.png) | [<NSFW, click to see>](3400/previews/nude2.png) |  |  |
| 3060 | 0.865 | [Download](3060/uiharu_kazari_toarumajutsunoindex.zip) |  |  |  |  |  |  | [<NSFW, click to see>](3060/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3060/previews/nude.png) | [<NSFW, click to see>](3060/previews/nude2.png) |  |  |
| 2720 | 0.895 | [Download](2720/uiharu_kazari_toarumajutsunoindex.zip) |  |  |  |  |  |  | [<NSFW, click to see>](2720/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2720/previews/nude.png) | [<NSFW, click to see>](2720/previews/nude2.png) |  |  |
| 2380 | 0.924 | [Download](2380/uiharu_kazari_toarumajutsunoindex.zip) |  |  |  |  |  |  | [<NSFW, click to see>](2380/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2380/previews/nude.png) | [<NSFW, click to see>](2380/previews/nude2.png) |  |  |
| 2040 | 0.929 | [Download](2040/uiharu_kazari_toarumajutsunoindex.zip) |  |  |  |  |  |  | [<NSFW, click to see>](2040/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2040/previews/nude.png) | [<NSFW, click to see>](2040/previews/nude2.png) |  |  |
| 1700 | 0.843 | [Download](1700/uiharu_kazari_toarumajutsunoindex.zip) |  |  |  |  |  |  | [<NSFW, click to see>](1700/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1700/previews/nude.png) | [<NSFW, click to see>](1700/previews/nude2.png) |  |  |
| 1360 | 0.806 | [Download](1360/uiharu_kazari_toarumajutsunoindex.zip) |  |  |  |  |  |  | [<NSFW, click to see>](1360/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1360/previews/nude.png) | [<NSFW, click to see>](1360/previews/nude2.png) |  |  |
| 1020 | 0.795 | [Download](1020/uiharu_kazari_toarumajutsunoindex.zip) |  |  |  |  |  |  | [<NSFW, click to see>](1020/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1020/previews/nude.png) | [<NSFW, click to see>](1020/previews/nude2.png) |  |  |
| 680 | 0.694 | [Download](680/uiharu_kazari_toarumajutsunoindex.zip) |  |  |  |  |  |  | [<NSFW, click to see>](680/previews/bondage.png) |  |  |  | [<NSFW, click to see>](680/previews/nude.png) | [<NSFW, click to see>](680/previews/nude2.png) |  |  |
| 340 | 0.614 | [Download](340/uiharu_kazari_toarumajutsunoindex.zip) |  |  |  |  |  |  | [<NSFW, click to see>](340/previews/bondage.png) |  |  |  | [<NSFW, click to see>](340/previews/nude.png) | [<NSFW, click to see>](340/previews/nude2.png) |  |  |
|
VictorGil75/Modelo_Clasificacion_Taller_NoTaller_V4
|
VictorGil75
| 2023-09-16T10:42:59Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-09-16T10:42:50Z |
# Modelo_Clasificacion_Taller_NoTaller_V4
A classification model that identifies whether a review belongs to a workshop ('Taller') or not.
This model was trained to classify reviews into the 'Taller' or 'No Taller' category.
|
MUmairAB/python-code-generator
|
MUmairAB
| 2023-09-16T10:20:02Z | 78 | 1 |
transformers
|
[
"transformers",
"tf",
"gpt2",
"text-generation",
"generated_from_keras_callback",
"base_model:openai-community/gpt2",
"base_model:finetune:openai-community/gpt2",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-27T10:23:04Z |
---
license: mit
tags:
- generated_from_keras_callback
base_model: gpt2
model-index:
- name: MUmairAB/python-code-generator
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# MUmairAB/python-code-generator
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 2.4439
- Validation Loss: 2.6055
- Epoch: 4
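A minimal usage sketch (not part of the generated card): the repository ships TensorFlow weights, so the pipeline is pinned to the TF framework; the prompt is a placeholder.
```python
from transformers import pipeline

# Loads the fine-tuned GPT-2 checkpoint with its TensorFlow weights.
generator = pipeline(
    "text-generation",
    model="MUmairAB/python-code-generator",
    framework="tf",
)
print(generator("def fibonacci(n):", max_new_tokens=64)[0]["generated_text"])
```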
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 5e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': 20785, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 2.4439 | 2.6055 | 0 |
| 2.4438 | 2.6055 | 1 |
| 2.4439 | 2.6055 | 2 |
| 2.4437 | 2.6055 | 3 |
| 2.4439 | 2.6055 | 4 |
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.12.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
facebook/dinov2-small-imagenet1k-1-layer
|
facebook
| 2023-09-16T10:14:58Z | 5,029 | 1 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"dinov2",
"image-classification",
"dino",
"vision",
"dataset:imagenet-1k",
"arxiv:2304.07193",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-09-14T19:57:42Z |
---
license: apache-2.0
tags:
- dino
- vision
datasets:
- imagenet-1k
---
# Vision Transformer (small-sized model) trained using DINOv2
Vision Transformer (ViT) model trained using the DINOv2 method. It was introduced in the paper [DINOv2: Learning Robust Visual Features without Supervision](https://arxiv.org/abs/2304.07193) by Oquab et al. and first released in [this repository](https://github.com/facebookresearch/dinov2).
Disclaimer: The team releasing DINOv2 did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
The Vision Transformer (ViT) is a transformer encoder model (BERT-like) pretrained on a large collection of images in a self-supervised fashion.
Images are presented to the model as a sequence of fixed-size patches, which are linearly embedded. A [CLS] token is added to the beginning of the sequence for use in classification tasks, and absolute position embeddings are added before feeding the sequence to the layers of the Transformer encoder.
Note that, unlike the backbone-only DINOv2 checkpoints, this checkpoint does include a fine-tuned head: a single linear classification layer trained on ImageNet-1k (hence the "1-layer" suffix).
By pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled images for instance, you can train a standard classifier by placing a linear layer on top of the pre-trained encoder. One typically places a linear layer on top of the [CLS] token, as the last hidden state of this token can be seen as a representation of an entire image.
## Intended uses & limitations
You can use the model for classifying an image among one of the [1000 ImageNet labels](https://huggingface.co/datasets/huggingface/label-files/blob/main/imagenet-1k-id2label.json). See the [model hub](https://huggingface.co/models?search=facebook/dinov2) to look for
other fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
```python
from transformers import AutoImageProcessor, AutoModelForImageClassification
from PIL import Image
import requests
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
processor = AutoImageProcessor.from_pretrained('facebook/dinov2-small-imagenet1k-1-layer')
model = AutoModelForImageClassification.from_pretrained('facebook/dinov2-small-imagenet1k-1-layer')
inputs = processor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
```
### BibTeX entry and citation info
```bibtex
@misc{oquab2023dinov2,
title={DINOv2: Learning Robust Visual Features without Supervision},
author={Maxime Oquab and Timothée Darcet and Théo Moutakanni and Huy Vo and Marc Szafraniec and Vasil Khalidov and Pierre Fernandez and Daniel Haziza and Francisco Massa and Alaaeldin El-Nouby and Mahmoud Assran and Nicolas Ballas and Wojciech Galuba and Russell Howes and Po-Yao Huang and Shang-Wen Li and Ishan Misra and Michael Rabbat and Vasu Sharma and Gabriel Synnaeve and Hu Xu and Hervé Jegou and Julien Mairal and Patrick Labatut and Armand Joulin and Piotr Bojanowski},
year={2023},
eprint={2304.07193},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
|
nielsr/layoutlmv3-finetuned-funsd
|
nielsr
| 2023-09-16T10:14:49Z | 2,631 | 24 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"layoutlmv3",
"token-classification",
"generated_from_trainer",
"dataset:nielsr/funsd-layoutlmv3",
"base_model:microsoft/layoutlmv3-base",
"base_model:finetune:microsoft/layoutlmv3-base",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-05-02T16:18:22Z |
---
tags:
- generated_from_trainer
datasets:
- nielsr/funsd-layoutlmv3
metrics:
- precision
- recall
- f1
- accuracy
base_model: microsoft/layoutlmv3-base
model-index:
- name: layoutlmv3-finetuned-funsd
results:
- task:
type: token-classification
name: Token Classification
dataset:
name: nielsr/funsd-layoutlmv3
type: nielsr/funsd-layoutlmv3
args: funsd
metrics:
- type: precision
value: 0.9026198714780029
name: Precision
- type: recall
value: 0.913
name: Recall
- type: f1
value: 0.9077802634849614
name: F1
- type: accuracy
value: 0.8330271015158475
name: Accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# layoutlmv3-finetuned-funsd
This model is a fine-tuned version of [microsoft/layoutlmv3-base](https://huggingface.co/microsoft/layoutlmv3-base) on the nielsr/funsd-layoutlmv3 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1164
- Precision: 0.9026
- Recall: 0.913
- F1: 0.9078
- Accuracy: 0.8330
The script for training can be found here: https://github.com/huggingface/transformers/tree/main/examples/research_projects/layoutlmv3
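A minimal inference sketch (not part of the auto-generated card): the processor is loaded from the base checkpoint, which is known to ship one, and it runs OCR via Tesseract by default, so `pytesseract` must be installed; the image path is a placeholder.
```python
from PIL import Image
from transformers import AutoProcessor, AutoModelForTokenClassification

# Processor from the base repo (OCR is applied by default); model from this repo.
processor = AutoProcessor.from_pretrained("microsoft/layoutlmv3-base")
model = AutoModelForTokenClassification.from_pretrained("nielsr/layoutlmv3-finetuned-funsd")

image = Image.open("form.png").convert("RGB")  # placeholder document image
encoding = processor(image, return_tensors="pt")  # OCR extracts words and boxes
outputs = model(**encoding)
predictions = outputs.logits.argmax(-1).squeeze().tolist()
print([model.config.id2label[p] for p in predictions])
```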
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 1000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 10.0 | 100 | 0.5238 | 0.8366 | 0.886 | 0.8606 | 0.8410 |
| No log | 20.0 | 200 | 0.6930 | 0.8751 | 0.8965 | 0.8857 | 0.8322 |
| No log | 30.0 | 300 | 0.7784 | 0.8902 | 0.908 | 0.8990 | 0.8414 |
| No log | 40.0 | 400 | 0.9056 | 0.8916 | 0.905 | 0.8983 | 0.8364 |
| 0.2429 | 50.0 | 500 | 1.0016 | 0.8954 | 0.9075 | 0.9014 | 0.8298 |
| 0.2429 | 60.0 | 600 | 1.0097 | 0.8899 | 0.897 | 0.8934 | 0.8294 |
| 0.2429 | 70.0 | 700 | 1.0722 | 0.9035 | 0.9085 | 0.9060 | 0.8315 |
| 0.2429 | 80.0 | 800 | 1.0884 | 0.8905 | 0.9105 | 0.9004 | 0.8269 |
| 0.2429 | 90.0 | 900 | 1.1292 | 0.8938 | 0.909 | 0.9013 | 0.8279 |
| 0.0098 | 100.0 | 1000 | 1.1164 | 0.9026 | 0.913 | 0.9078 | 0.8330 |
### Framework versions
- Transformers 4.19.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.0.0
- Tokenizers 0.11.6
|
facebook/dinov2-giant-imagenet1k-1-layer
|
facebook
| 2023-09-16T10:14:37Z | 6,734 | 3 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"dinov2",
"image-classification",
"dino",
"vision",
"dataset:imagenet-1k",
"arxiv:2304.07193",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-09-14T20:18:41Z |
---
license: apache-2.0
tags:
- dino
- vision
datasets:
- imagenet-1k
---
# Vision Transformer (giant-sized model) trained using DINOv2
Vision Transformer (ViT) model trained using the DINOv2 method. It was introduced in the paper [DINOv2: Learning Robust Visual Features without Supervision](https://arxiv.org/abs/2304.07193) by Oquab et al. and first released in [this repository](https://github.com/facebookresearch/dinov2).
Disclaimer: The team releasing DINOv2 did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
The Vision Transformer (ViT) is a transformer encoder model (BERT-like) pretrained on a large collection of images in a self-supervised fashion.
Images are presented to the model as a sequence of fixed-size patches, which are linearly embedded. A [CLS] token is added to the beginning of the sequence for use in classification tasks, and absolute position embeddings are added before feeding the sequence to the layers of the Transformer encoder.
Note that, unlike the backbone-only DINOv2 checkpoints, this checkpoint does include a fine-tuned head: a single linear classification layer trained on ImageNet-1k (hence the "1-layer" suffix).
By pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled images for instance, you can train a standard classifier by placing a linear layer on top of the pre-trained encoder. One typically places a linear layer on top of the [CLS] token, as the last hidden state of this token can be seen as a representation of an entire image.
## Intended uses & limitations
You can use the model for classifying an image among one of the [1000 ImageNet labels](https://huggingface.co/datasets/huggingface/label-files/blob/main/imagenet-1k-id2label.json). See the [model hub](https://huggingface.co/models?search=facebook/dinov2) to look for
other fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
```python
from transformers import AutoImageProcessor, AutoModelForImageClassification
from PIL import Image
import requests
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
processor = AutoImageProcessor.from_pretrained('facebook/dinov2-giant-imagenet1k-1-layer')
model = AutoModelForImageClassification.from_pretrained('facebook/dinov2-giant-imagenet1k-1-layer')
inputs = processor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
```
### BibTeX entry and citation info
```bibtex
@misc{oquab2023dinov2,
title={DINOv2: Learning Robust Visual Features without Supervision},
author={Maxime Oquab and Timothée Darcet and Théo Moutakanni and Huy Vo and Marc Szafraniec and Vasil Khalidov and Pierre Fernandez and Daniel Haziza and Francisco Massa and Alaaeldin El-Nouby and Mahmoud Assran and Nicolas Ballas and Wojciech Galuba and Russell Howes and Po-Yao Huang and Shang-Wen Li and Ishan Misra and Michael Rabbat and Vasu Sharma and Gabriel Synnaeve and Hu Xu and Hervé Jegou and Julien Mairal and Patrick Labatut and Armand Joulin and Piotr Bojanowski},
year={2023},
eprint={2304.07193},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
|
paras1/dog-images-in-different-backgrounds
|
paras1
| 2023-09-16T09:41:58Z | 4 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-09-16T09:36:20Z |
---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### Dog-images-in-different-backgrounds Dreambooth model trained by paras1 following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: NCU86
Sample pictures of this concept:

|
kaekitsune/RVC-MMTL
|
kaekitsune
| 2023-09-16T09:33:14Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-25T21:29:45Z |
---
license: creativeml-openrail-m
---
|
Shishir1807/CT_M6
|
Shishir1807
| 2023-09-16T09:30:25Z | 168 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt_neox",
"text-generation",
"gpt",
"llm",
"large language model",
"h2o-llmstudio",
"en",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2023-09-16T09:29:49Z |
---
language:
- en
library_name: transformers
tags:
- gpt
- llm
- large language model
- h2o-llmstudio
inference: false
thumbnail: https://h2o.ai/etc.clientlibs/h2o/clientlibs/clientlib-site/resources/images/favicon.ico
---
# Model Card
## Summary
This model was trained using [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio).
- Base model: [EleutherAI/pythia-2.8b-deduped](https://huggingface.co/EleutherAI/pythia-2.8b-deduped)
## Usage
To use the model with the `transformers` library on a machine with GPUs, first make sure you have the `transformers`, `accelerate` and `torch` libraries installed.
```bash
pip install transformers==4.29.2
pip install einops==0.6.1
pip install accelerate==0.19.0
pip install torch==2.0.0
```
```python
import torch
from transformers import pipeline
generate_text = pipeline(
model="Shishir1807/CT_M6",
torch_dtype="auto",
trust_remote_code=True,
use_fast=True,
device_map={"": "cuda:0"},
)
res = generate_text(
"Why is drinking water so healthy?",
min_new_tokens=2,
max_new_tokens=256,
do_sample=False,
num_beams=1,
temperature=float(0.0),
repetition_penalty=float(1.2),
renormalize_logits=True
)
print(res[0]["generated_text"])
```
You can print a sample prompt after the preprocessing step to see how it is fed to the tokenizer:
```python
print(generate_text.preprocess("Why is drinking water so healthy?")["prompt_text"])
```
```bash
<|prompt|>Why is drinking water so healthy?<|endoftext|><|answer|>
```
Alternatively, you can download [h2oai_pipeline.py](h2oai_pipeline.py), store it alongside your notebook, and construct the pipeline yourself from the loaded model and tokenizer. If the model and the tokenizer are fully supported in the `transformers` package, this will allow you to set `trust_remote_code=False`.
```python
import torch
from h2oai_pipeline import H2OTextGenerationPipeline
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained(
"Shishir1807/CT_M6",
use_fast=True,
padding_side="left",
trust_remote_code=True,
)
model = AutoModelForCausalLM.from_pretrained(
"Shishir1807/CT_M6",
torch_dtype="auto",
device_map={"": "cuda:0"},
trust_remote_code=True,
)
generate_text = H2OTextGenerationPipeline(model=model, tokenizer=tokenizer)
res = generate_text(
"Why is drinking water so healthy?",
min_new_tokens=2,
max_new_tokens=256,
do_sample=False,
num_beams=1,
temperature=float(0.0),
repetition_penalty=float(1.2),
renormalize_logits=True
)
print(res[0]["generated_text"])
```
You may also construct the pipeline from the loaded model and tokenizer yourself and consider the preprocessing steps:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "Shishir1807/CT_M6" # either local folder or huggingface model name
# Important: The prompt needs to be in the same format the model was trained with.
# You can find an example prompt in the experiment logs.
prompt = "<|prompt|>How are you?<|endoftext|><|answer|>"
tokenizer = AutoTokenizer.from_pretrained(
model_name,
use_fast=True,
trust_remote_code=True,
)
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map={"": "cuda:0"},
trust_remote_code=True,
)
model.cuda().eval()
inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).to("cuda")
# generate configuration can be modified to your needs
tokens = model.generate(
input_ids=inputs["input_ids"],
attention_mask=inputs["attention_mask"],
min_new_tokens=2,
max_new_tokens=256,
do_sample=False,
num_beams=1,
temperature=float(0.0),
repetition_penalty=float(1.2),
renormalize_logits=True
)[0]
tokens = tokens[inputs["input_ids"].shape[1]:]
answer = tokenizer.decode(tokens, skip_special_tokens=True)
print(answer)
```
## Quantization and sharding
You can load the models using quantization by specifying ```load_in_8bit=True``` or ```load_in_4bit=True```. Sharding across multiple GPUs is also possible by setting ```device_map="auto"```.
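For example (a minimal sketch of the options described above, not from the original card; requires the `bitsandbytes` package):
```python
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "Shishir1807/CT_M6",
    load_in_8bit=True,   # or load_in_4bit=True
    device_map="auto",   # shards the model across available GPUs
    trust_remote_code=True,
)
```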
## Model Architecture
```
GPTNeoXForCausalLM(
(gpt_neox): GPTNeoXModel(
(embed_in): Embedding(50304, 2560)
(layers): ModuleList(
(0-31): 32 x GPTNeoXLayer(
(input_layernorm): LayerNorm((2560,), eps=1e-05, elementwise_affine=True)
(post_attention_layernorm): LayerNorm((2560,), eps=1e-05, elementwise_affine=True)
(attention): GPTNeoXAttention(
(rotary_emb): RotaryEmbedding()
(query_key_value): Linear(in_features=2560, out_features=7680, bias=True)
(dense): Linear(in_features=2560, out_features=2560, bias=True)
)
(mlp): GPTNeoXMLP(
(dense_h_to_4h): Linear(in_features=2560, out_features=10240, bias=True)
(dense_4h_to_h): Linear(in_features=10240, out_features=2560, bias=True)
(act): GELUActivation()
)
)
)
(final_layer_norm): LayerNorm((2560,), eps=1e-05, elementwise_affine=True)
)
(embed_out): Linear(in_features=2560, out_features=50304, bias=False)
)
```
## Model Configuration
This model was trained using H2O LLM Studio and with the configuration in [cfg.yaml](cfg.yaml). Visit [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio) to learn how to train your own large language models.
## Disclaimer
Please read this disclaimer carefully before using the large language model provided in this repository. Your use of the model signifies your agreement to the following terms and conditions.
- Biases and Offensiveness: The large language model is trained on a diverse range of internet text data, which may contain biased, racist, offensive, or otherwise inappropriate content. By using this model, you acknowledge and accept that the generated content may sometimes exhibit biases or produce content that is offensive or inappropriate. The developers of this repository do not endorse, support, or promote any such content or viewpoints.
- Limitations: The large language model is an AI-based tool and not a human. It may produce incorrect, nonsensical, or irrelevant responses. It is the user's responsibility to critically evaluate the generated content and use it at their discretion.
- Use at Your Own Risk: Users of this large language model must assume full responsibility for any consequences that may arise from their use of the tool. The developers and contributors of this repository shall not be held liable for any damages, losses, or harm resulting from the use or misuse of the provided model.
- Ethical Considerations: Users are encouraged to use the large language model responsibly and ethically. By using this model, you agree not to use it for purposes that promote hate speech, discrimination, harassment, or any form of illegal or harmful activities.
- Reporting Issues: If you encounter any biased, offensive, or otherwise inappropriate content generated by the large language model, please report it to the repository maintainers through the provided channels. Your feedback will help improve the model and mitigate potential issues.
- Changes to this Disclaimer: The developers of this repository reserve the right to modify or update this disclaimer at any time without prior notice. It is the user's responsibility to periodically review the disclaimer to stay informed about any changes.
By using the large language model provided in this repository, you agree to accept and comply with the terms and conditions outlined in this disclaimer. If you do not agree with any part of this disclaimer, you should refrain from using the model and any content generated by it.
|
c-g/ppo-Huggy
|
c-g
| 2023-09-16T09:21:02Z | 18 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-09-16T09:20:43Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: c-g/ppo-Huggy
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
VictorGil75/Modelo_Clasificacion_Taller_NoTaller_V3
|
VictorGil75
| 2023-09-16T09:12:17Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-09-16T09:12:11Z |
# Modelo_Clasificacion_Taller_NoTaller_V3
A classification model that identifies whether a review belongs to a workshop ('Taller') or not.
This model was trained to classify reviews into the 'Taller' or 'No Taller' category.
|
kittendev/visual_emotional_analysis
|
kittendev
| 2023-09-16T09:10:56Z | 332 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-09-16T07:16:59Z |
---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: visual_emotional_analysis
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.55625
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# visual_emotional_analysis
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2815
- Accuracy: 0.5563
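A minimal usage sketch (not part of the auto-generated card; the image path is a placeholder):
```python
from transformers import pipeline

# Image-classification pipeline over the fine-tuned ViT checkpoint.
classifier = pipeline("image-classification", model="kittendev/visual_emotional_analysis")
print(classifier("face.jpg"))  # top emotion labels with scores
```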
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 20 | 1.8308 | 0.375 |
| No log | 2.0 | 40 | 1.5510 | 0.4875 |
| No log | 3.0 | 60 | 1.4138 | 0.5062 |
| No log | 4.0 | 80 | 1.3845 | 0.4875 |
| No log | 5.0 | 100 | 1.3245 | 0.525 |
| No log | 6.0 | 120 | 1.2645 | 0.6 |
| No log | 7.0 | 140 | 1.2887 | 0.5188 |
| No log | 8.0 | 160 | 1.2395 | 0.5875 |
| No log | 9.0 | 180 | 1.2267 | 0.55 |
| No log | 10.0 | 200 | 1.1883 | 0.6 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
kingsankha/my_awesome_model
|
kingsankha
| 2023-09-16T09:09:51Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-09-15T14:23:52Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_keras_callback
model-index:
- name: kingsankha/my_awesome_model
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# kingsankha/my_awesome_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.3223
- Validation Loss: 0.2669
- Train Accuracy: 0.8878
- Epoch: 9
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 3785, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 1.0651 | 1.0124 | 0.4824 | 0 |
| 0.9823 | 0.8702 | 0.6169 | 1 |
| 0.8404 | 0.6475 | 0.7266 | 2 |
| 0.6569 | 0.4672 | 0.8116 | 3 |
| 0.5071 | 0.3573 | 0.8487 | 4 |
| 0.4086 | 0.3027 | 0.8694 | 5 |
| 0.3471 | 0.2676 | 0.8889 | 6 |
| 0.3220 | 0.2669 | 0.8878 | 7 |
| 0.3195 | 0.2669 | 0.8878 | 8 |
| 0.3223 | 0.2669 | 0.8878 | 9 |
### Framework versions
- Transformers 4.33.2
- TensorFlow 2.13.0
- Datasets 2.14.5
- Tokenizers 0.13.3
|
VictorGil75/Modelo_Clasificacion_Taller_NoTaller_V2
|
VictorGil75
| 2023-09-16T09:09:28Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-09-16T09:09:20Z |
# Modelo_Clasificacion_Taller_NoTaller_V2
A classification model that identifies whether a review belongs to a workshop ('Taller') or not.
This model was trained to classify reviews into the 'Taller' or 'No Taller' category.
|
kishanmurthy/bloom-3b-finetuned-squad-v2
|
kishanmurthy
| 2023-09-16T08:47:55Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-09-16T06:55:13Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0
|
sharoz/Performance_Linear_Regression
|
sharoz
| 2023-09-16T08:46:29Z | 0 | 0 | null |
[
"license:openrail",
"region:us"
] | null | 2023-09-16T08:14:37Z |
---
license: openrail
---
# Model Card
## Metrics
| Epoch | Train Cost | Test Cost |
|-|-|-|
| 0 | 3382.962786165282 | 3094.984020098352 |
| 100 | 2770.5905361187124 | 2455.4727963244277 |
| 200 | 2269.2087579076933 | 1946.057881802018 |
| 500 | 1247.4047150621218 | 971.1867109807944 |
| 1000 | 461.6479383731376 | 350.78932608350294 |
| 2000 | 79.84823961847172 | 223.0862786936803 |
| Learning Rate | Initial Bias | Initial Weights |
|-|-|-|
| 0.001 | 0 | 0.9 to 1.0 |

|
scwoods/Llama-2-7b-chat-hf-fine-tuned-adapters
|
scwoods
| 2023-09-16T08:43:08Z | 3 | 1 |
peft
|
[
"peft",
"region:us"
] | null | 2023-09-16T08:43:05Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.6.0.dev0
|
Shishir1807/CT_M1
|
Shishir1807
| 2023-09-16T08:39:58Z | 162 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt_neox",
"text-generation",
"gpt",
"llm",
"large language model",
"h2o-llmstudio",
"en",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2023-09-16T08:32:35Z |
---
language:
- en
library_name: transformers
tags:
- gpt
- llm
- large language model
- h2o-llmstudio
inference: false
thumbnail: https://h2o.ai/etc.clientlibs/h2o/clientlibs/clientlib-site/resources/images/favicon.ico
---
# Model Card
## Summary
This model was trained using [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio).
- Base model: [EleutherAI/pythia-2.8b-deduped](https://huggingface.co/EleutherAI/pythia-2.8b-deduped)
## Usage
To use the model with the `transformers` library on a machine with GPUs, first make sure you have the `transformers`, `accelerate` and `torch` libraries installed.
```bash
pip install transformers==4.29.2
pip install einops==0.6.1
pip install accelerate==0.19.0
pip install torch==2.0.0
```
```python
import torch
from transformers import pipeline
generate_text = pipeline(
model="Shishir1807/CT_M1",
torch_dtype="auto",
trust_remote_code=True,
use_fast=True,
device_map={"": "cuda:0"},
)
res = generate_text(
"Why is drinking water so healthy?",
min_new_tokens=2,
max_new_tokens=256,
do_sample=False,
num_beams=1,
temperature=float(0.0),
repetition_penalty=float(1.2),
renormalize_logits=True
)
print(res[0]["generated_text"])
```
You can print a sample prompt after the preprocessing step to see how it is fed to the tokenizer:
```python
print(generate_text.preprocess("Why is drinking water so healthy?")["prompt_text"])
```
```bash
<|prompt|>Why is drinking water so healthy?<|endoftext|><|answer|>
```
Alternatively, you can download [h2oai_pipeline.py](h2oai_pipeline.py), store it alongside your notebook, and construct the pipeline yourself from the loaded model and tokenizer. If the model and the tokenizer are fully supported in the `transformers` package, this will allow you to set `trust_remote_code=False`.
```python
import torch
from h2oai_pipeline import H2OTextGenerationPipeline
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained(
"Shishir1807/CT_M1",
use_fast=True,
padding_side="left",
trust_remote_code=True,
)
model = AutoModelForCausalLM.from_pretrained(
"Shishir1807/CT_M1",
torch_dtype="auto",
device_map={"": "cuda:0"},
trust_remote_code=True,
)
generate_text = H2OTextGenerationPipeline(model=model, tokenizer=tokenizer)
res = generate_text(
"Why is drinking water so healthy?",
min_new_tokens=2,
max_new_tokens=256,
do_sample=False,
num_beams=1,
temperature=float(0.0),
repetition_penalty=float(1.2),
renormalize_logits=True
)
print(res[0]["generated_text"])
```
You may also construct the pipeline from the loaded model and tokenizer yourself and consider the preprocessing steps:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "Shishir1807/CT_M1" # either local folder or huggingface model name
# Important: The prompt needs to be in the same format the model was trained with.
# You can find an example prompt in the experiment logs.
prompt = "<|prompt|>How are you?<|endoftext|><|answer|>"
tokenizer = AutoTokenizer.from_pretrained(
model_name,
use_fast=True,
trust_remote_code=True,
)
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map={"": "cuda:0"},
trust_remote_code=True,
)
model.cuda().eval()
inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).to("cuda")
# generate configuration can be modified to your needs
tokens = model.generate(
input_ids=inputs["input_ids"],
attention_mask=inputs["attention_mask"],
min_new_tokens=2,
max_new_tokens=256,
do_sample=False,
num_beams=1,
temperature=float(0.0),
repetition_penalty=float(1.2),
renormalize_logits=True
)[0]
tokens = tokens[inputs["input_ids"].shape[1]:]
answer = tokenizer.decode(tokens, skip_special_tokens=True)
print(answer)
```
## Quantization and sharding
You can load the models using quantization by specifying ```load_in_8bit=True``` or ```load_in_4bit=True```. Also, sharding on multiple GPUs is possible by setting ```device_map="auto"```.
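A minimal sketch (not part of the original card) of loading this model in 4-bit with automatic sharding; `load_in_4bit` requires the `bitsandbytes` package and a sufficiently recent `transformers` release:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Shishir1807/CT_M1", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    "Shishir1807/CT_M1",
    load_in_4bit=True,   # 4-bit quantization via bitsandbytes
    device_map="auto",   # shard layers across all visible GPUs
    trust_remote_code=True,
)
```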
## Model Architecture
```
GPTNeoXForCausalLM(
(gpt_neox): GPTNeoXModel(
(embed_in): Embedding(50304, 2560)
(layers): ModuleList(
(0-31): 32 x GPTNeoXLayer(
(input_layernorm): LayerNorm((2560,), eps=1e-05, elementwise_affine=True)
(post_attention_layernorm): LayerNorm((2560,), eps=1e-05, elementwise_affine=True)
(attention): GPTNeoXAttention(
(rotary_emb): RotaryEmbedding()
(query_key_value): Linear(in_features=2560, out_features=7680, bias=True)
(dense): Linear(in_features=2560, out_features=2560, bias=True)
)
(mlp): GPTNeoXMLP(
(dense_h_to_4h): Linear(in_features=2560, out_features=10240, bias=True)
(dense_4h_to_h): Linear(in_features=10240, out_features=2560, bias=True)
(act): GELUActivation()
)
)
)
(final_layer_norm): LayerNorm((2560,), eps=1e-05, elementwise_affine=True)
)
(embed_out): Linear(in_features=2560, out_features=50304, bias=False)
)
```
## Model Configuration
This model was trained using H2O LLM Studio and with the configuration in [cfg.yaml](cfg.yaml). Visit [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio) to learn how to train your own large language models.
## Disclaimer
Please read this disclaimer carefully before using the large language model provided in this repository. Your use of the model signifies your agreement to the following terms and conditions.
- Biases and Offensiveness: The large language model is trained on a diverse range of internet text data, which may contain biased, racist, offensive, or otherwise inappropriate content. By using this model, you acknowledge and accept that the generated content may sometimes exhibit biases or produce content that is offensive or inappropriate. The developers of this repository do not endorse, support, or promote any such content or viewpoints.
- Limitations: The large language model is an AI-based tool and not a human. It may produce incorrect, nonsensical, or irrelevant responses. It is the user's responsibility to critically evaluate the generated content and use it at their discretion.
- Use at Your Own Risk: Users of this large language model must assume full responsibility for any consequences that may arise from their use of the tool. The developers and contributors of this repository shall not be held liable for any damages, losses, or harm resulting from the use or misuse of the provided model.
- Ethical Considerations: Users are encouraged to use the large language model responsibly and ethically. By using this model, you agree not to use it for purposes that promote hate speech, discrimination, harassment, or any form of illegal or harmful activities.
- Reporting Issues: If you encounter any biased, offensive, or otherwise inappropriate content generated by the large language model, please report it to the repository maintainers through the provided channels. Your feedback will help improve the model and mitigate potential issues.
- Changes to this Disclaimer: The developers of this repository reserve the right to modify or update this disclaimer at any time without prior notice. It is the user's responsibility to periodically review the disclaimer to stay informed about any changes.
By using the large language model provided in this repository, you agree to accept and comply with the terms and conditions outlined in this disclaimer. If you do not agree with any part of this disclaimer, you should refrain from using the model and any content generated by it.
|
Yntec/aMovieX
|
Yntec
| 2023-09-16T08:29:51Z | 9,944 | 4 |
diffusers
|
[
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"MagicArt35",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-09-16T07:32:13Z |
---
license: creativeml-openrail-m
library_name: diffusers
pipeline_tag: text-to-image
tags:
- stable-diffusion
- stable-diffusion-diffusers
- diffusers
- text-to-image
- MagicArt35
---
# AmovieX
Samples and prompts:

Pretty cute girl in a future where humanity has colonized the stars, a group of explorers embarks on a journey to a distant planet, hoping to discover new forms of life and unlock the secrets of the universe. But as they descend through the planet’s thick atmosphere, they discover that the world below is more dangerous and mysterious than they could have ever imagined.

Create pretty cute girl in an otherworldly landscape inspired by a specific planet or moon in our solar system, featuring unique geological formations and extraterrestrial flora and fauna.
Original page:
https://civitai.com/models/94687/photo-movie-x
|
pralaypati/stable-diffusion-fine-tuned
|
pralaypati
| 2023-09-16T08:25:10Z | 30 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-09-16T06:52:18Z |
---
license: apache-2.0
---
A sample fine-tuned version of a pre-trained Stable Diffusion model. The model has been fine-tuned on two subjects and their corresponding prompts.
|
GregoRio123/tes
|
GregoRio123
| 2023-09-16T07:41:08Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-09-16T07:40:50Z |
---
license: creativeml-openrail-m
---
|
Evan-Lin/Bart-large-abs-yelp-inferable
|
Evan-Lin
| 2023-09-16T07:31:08Z | 1 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bart",
"text2text-generation",
"trl",
"reinforcement-learning",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
reinforcement-learning
| 2023-08-23T04:24:55Z |
---
license: apache-2.0
tags:
- trl
- transformers
- reinforcement-learning
---
# TRL Model
This is a [TRL language model](https://github.com/lvwerra/trl) that has been fine-tuned with reinforcement learning to
guide the model outputs according to a value function or human feedback. The model can be used for text generation.
## Usage
To use this model for inference, first install the TRL library:
```bash
python -m pip install trl
```
You can then generate text as follows:
```python
from transformers import pipeline
generator = pipeline("text2text-generation", model="Evan-Lin/Bart-large-abs-yelp-inferable")
outputs = generator("Hello, my llama is cute")
```
If you want to use the model for training or to obtain the outputs from the value head, load the model as follows:
```python
from transformers import AutoTokenizer
# BART is an encoder-decoder model, so the seq2seq value-head class is used here
from trl import AutoModelForSeq2SeqLMWithValueHead
tokenizer = AutoTokenizer.from_pretrained("Evan-Lin/Bart-large-abs-yelp-inferable")
model = AutoModelForSeq2SeqLMWithValueHead.from_pretrained("Evan-Lin/Bart-large-abs-yelp-inferable")
inputs = tokenizer("Hello, my llama is cute", return_tensors="pt")
outputs = model(**inputs, labels=inputs["input_ids"])
```
|
minhbui/viettel_v1_mix_100k
|
minhbui
| 2023-09-16T07:19:29Z | 4 | 1 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"vi",
"en",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-09-16T03:51:46Z |
---
license: llama2
language:
- vi
- en
---
Our model was fine-tuned with QLoRA on 100k samples (50k Dolphin data + 43k WebGLM data + 10k SQuAD paraphrased answers). All of the data was translated into Vietnamese.
|
cedricsarigumba/llama2-qlora-finetunined-french
|
cedricsarigumba
| 2023-09-16T07:12:55Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-09-16T07:12:37Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training (an equivalent `BitsAndBytesConfig` sketch follows the list):
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
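As a hedged illustration (not from the original card), the list above roughly corresponds to the following `transformers` `BitsAndBytesConfig`:
```python
import torch
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)
# Pass as quantization_config=bnb_config to AutoModelForCausalLM.from_pretrained(...)
```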
### Framework versions
- PEFT 0.6.0.dev0
|
Leekp/toonmaker3
|
Leekp
| 2023-09-16T07:02:08Z | 2 | 1 |
diffusers
|
[
"diffusers",
"text-to-image",
"autotrain",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0",
"region:us"
] |
text-to-image
| 2023-09-16T07:02:07Z |
---
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: Korean webtoon image depicting a character named fred
tags:
- text-to-image
- diffusers
- autotrain
inference: true
---
# DreamBooth trained by AutoTrain
Text encoder was not trained.
|
thainq107/flan-t5-small-twitter-sentiment-analysis-zero-shot
|
thainq107
| 2023-09-16T07:01:14Z | 113 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-09-16T03:38:40Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: flan-t5-small-twitter-sentiment-analysis-zero-shot
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# flan-t5-small-twitter-sentiment-analysis-zero-shot
This model is a fine-tuned version of [google/flan-t5-small](https://huggingface.co/google/flan-t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1810
- Accuracy: 0.8484
It achieves the following results on the test set:
- Loss: 0.1899
- Accuracy: 0.8423
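A minimal usage sketch (not part of the original card); the exact instruction format used during fine-tuning is undocumented, so the prompt below is an assumption:
```python
from transformers import pipeline

classifier = pipeline(
    "text2text-generation",
    model="thainq107/flan-t5-small-twitter-sentiment-analysis-zero-shot",
)
# Hypothetical prompt; adjust to the format used during fine-tuning.
print(classifier("Classify the sentiment of this tweet: I love this new phone!"))
```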
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.213 | 1.0 | 1875 | 0.1922 | 0.8268 |
| 0.1963 | 2.0 | 3750 | 0.1863 | 0.8374 |
| 0.188 | 3.0 | 5625 | 0.1817 | 0.8413 |
| 0.1835 | 4.0 | 7500 | 0.1795 | 0.8438 |
| 0.1754 | 5.0 | 9375 | 0.1786 | 0.8459 |
| 0.1715 | 6.0 | 11250 | 0.1806 | 0.8464 |
| 0.1692 | 7.0 | 13125 | 0.1799 | 0.8478 |
| 0.1646 | 8.0 | 15000 | 0.1810 | 0.8484 |
| 0.1664 | 9.0 | 16875 | 0.1810 | 0.8484 |
| 0.1622 | 10.0 | 18750 | 0.1803 | 0.8484 |
### Framework versions
- Transformers 4.27.1
- Pytorch 2.0.1
- Datasets 2.9.0
- Tokenizers 0.13.3
|
Josevega69/jose69
|
Josevega69
| 2023-09-16T06:25:04Z | 195 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:beans",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-09-16T05:36:00Z |
---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_trainer
datasets:
- beans
metrics:
- accuracy
model-index:
- name: jose69
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: beans
type: beans
config: default
split: validation
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9849624060150376
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# jose69
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0328
- Accuracy: 0.9850
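A minimal inference sketch (not part of the original card); the image path is a placeholder:
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="Josevega69/jose69")
# The beans dataset labels are angular_leaf_spot, bean_rust, and healthy.
print(classifier("bean_leaf.jpg"))  # placeholder path to a bean-leaf photo
```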
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1307 | 3.85 | 500 | 0.0328 | 0.9850 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
shantanudave/shantanuimagessept10
|
shantanudave
| 2023-09-16T06:23:42Z | 1 | 2 |
diffusers
|
[
"diffusers",
"text-to-image",
"autotrain",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0",
"region:us"
] |
text-to-image
| 2023-09-16T06:18:49Z |
---
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: photo of a sdaveshantanu
tags:
- text-to-image
- diffusers
- autotrain
inference: true
---
# DreamBooth trained by AutoTrain
Text encoder was not trained.
|
VictorGil75/Modelo_Clasificacion_Taller_NoTaller
|
VictorGil75
| 2023-09-16T06:21:37Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-09-16T06:21:28Z |
# Modelo_Clasificacion_Taller_NoTaller
Classification model to identify whether a review belongs to a workshop ("Taller") or not.
This model was trained to classify reviews into the 'Taller' (workshop) or 'No Taller' category.
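A minimal usage sketch (not part of the original card); the example review and the returned label names are assumptions:
```python
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="VictorGil75/Modelo_Clasificacion_Taller_NoTaller",
)
# Hypothetical Spanish review of a workshop ("taller").
print(clf("Excelente servicio, repararon mi auto muy rápido en el taller."))
```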
|
MouseTrap/StyleGen-Loopster-DL
|
MouseTrap
| 2023-09-16T05:57:46Z | 2 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:riffusion/riffusion-model-v1",
"base_model:adapter:riffusion/riffusion-model-v1",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-09-16T05:50:05Z |
---
license: creativeml-openrail-m
base_model: riffusion/riffusion-model-v1
instance_prompt: Loopster style
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - MouseTrap/StyleGen-Looper
These are LoRA adaptation weights for riffusion/riffusion-model-v1. The weights were trained on Loopster style using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.
LoRA for the text encoder was enabled: False.
|
om-ashish-soni/pos-ner-tagging-v3
|
om-ashish-soni
| 2023-09-16T05:46:42Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"base_model:om-ashish-soni/pos-ner-tagging-v2",
"base_model:finetune:om-ashish-soni/pos-ner-tagging-v2",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-09-16T05:35:20Z |
---
license: apache-2.0
base_model: om-ashish-soni/pos-ner-tagging-v2
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: pos-ner-tagging-v3
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
config: conll2003
split: validation
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9339443388845423
- name: Recall
type: recall
value: 0.9374228724000987
- name: F1
type: f1
value: 0.9356803726596793
- name: Accuracy
type: accuracy
value: 0.9272679107552835
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pos-ner-tagging-v3
This model is a fine-tuned version of [om-ashish-soni/pos-ner-tagging-v2](https://huggingface.co/om-ashish-soni/pos-ner-tagging-v2) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6356
- Precision: 0.9339
- Recall: 0.9374
- F1: 0.9357
- Accuracy: 0.9273
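A minimal usage sketch (not part of the original card) running the model as a token-classification pipeline; the sentence is a placeholder:
```python
from transformers import pipeline

tagger = pipeline(
    "token-classification",
    model="om-ashish-soni/pos-ner-tagging-v3",
    aggregation_strategy="simple",  # merge sub-word tokens into word-level tags
)
print(tagger("John Smith works for Acme Corp in London."))
```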
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 439 | 0.6415 | 0.9341 | 0.9367 | 0.9354 | 0.9265 |
| 0.0078 | 2.0 | 878 | 0.6372 | 0.9327 | 0.9363 | 0.9345 | 0.9259 |
| 0.006 | 3.0 | 1317 | 0.6283 | 0.9338 | 0.9373 | 0.9356 | 0.9274 |
| 0.0036 | 4.0 | 1756 | 0.6356 | 0.9339 | 0.9374 | 0.9357 | 0.9273 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
Sagicc/whisper-medium-sr-fleurs
|
Sagicc
| 2023-09-16T05:41:36Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"sr",
"dataset:google/fleurs",
"base_model:openai/whisper-medium",
"base_model:finetune:openai/whisper-medium",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-09-15T19:07:27Z |
---
language:
- sr
license: apache-2.0
base_model: openai/whisper-medium
tags:
- generated_from_trainer
datasets:
- google/fleurs
metrics:
- wer
model-index:
- name: Whisper Medium Sr Fleurs
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Google Fleurs
type: google/fleurs
config: sr_rs
split: test
args: sr_rs
metrics:
- name: Wer
type: wer
value: 0.17942107976725344
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Medium Sr Fleurs
This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the Google Fleurs dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3577
- Wer Ortho: 0.2072
- Wer: 0.1794
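A minimal transcription sketch (not part of the original card); the audio file name is a placeholder:
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="Sagicc/whisper-medium-sr-fleurs",
    chunk_length_s=30,  # chunk long audio to fit Whisper's 30-second window
)
print(asr("sample_sr.wav")["text"])  # placeholder path to a Serbian audio clip
```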
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 50
- training_steps: 4000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|
| 0.0341 | 2.49 | 500 | 0.2704 | 0.2074 | 0.1789 |
| 0.0109 | 4.98 | 1000 | 0.3091 | 0.2075 | 0.1774 |
| 0.006 | 7.46 | 1500 | 0.3143 | 0.2031 | 0.1713 |
| 0.0081 | 9.95 | 2000 | 0.3284 | 0.2070 | 0.1754 |
| 0.0038 | 12.44 | 2500 | 0.3426 | 0.2099 | 0.1805 |
| 0.0042 | 14.93 | 3000 | 0.3630 | 0.2113 | 0.1821 |
| 0.0032 | 17.41 | 3500 | 0.3659 | 0.2089 | 0.1791 |
| 0.0046 | 19.9 | 4000 | 0.3577 | 0.2072 | 0.1794 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu117
- Datasets 2.14.5
- Tokenizers 0.13.3
|
om-ashish-soni/pos-ner-tagging-v2
|
om-ashish-soni
| 2023-09-16T05:25:41Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"base_model:om-ashish-soni/pos-ner-tagging-v2",
"base_model:finetune:om-ashish-soni/pos-ner-tagging-v2",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-09-16T04:22:26Z |
---
license: apache-2.0
base_model: om-ashish-soni/pos-ner-tagging-v2
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: pos-ner-tagging-v2
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
config: conll2003
split: validation
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9393653920267203
- name: Recall
type: recall
value: 0.9408358887483113
- name: F1
type: f1
value: 0.9401000653531749
- name: Accuracy
type: accuracy
value: 0.9270324365691411
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pos-ner-tagging-v2
This model is a fine-tuned version of [om-ashish-soni/pos-ner-tagging-v2](https://huggingface.co/om-ashish-soni/pos-ner-tagging-v2) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6442
- Precision: 0.9394
- Recall: 0.9408
- F1: 0.9401
- Accuracy: 0.9270
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 16
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.3297 | 1.0 | 1756 | 0.4190 | 0.9189 | 0.9231 | 0.9210 | 0.9051 |
| 0.2521 | 2.0 | 3512 | 0.3836 | 0.9210 | 0.9300 | 0.9255 | 0.9114 |
| 0.1932 | 3.0 | 5268 | 0.4155 | 0.9295 | 0.9338 | 0.9316 | 0.9183 |
| 0.1325 | 4.0 | 7024 | 0.3969 | 0.9328 | 0.9356 | 0.9342 | 0.9211 |
| 0.0973 | 5.0 | 8780 | 0.4247 | 0.9332 | 0.9367 | 0.9349 | 0.9222 |
| 0.0799 | 6.0 | 10536 | 0.4606 | 0.9338 | 0.9374 | 0.9356 | 0.9229 |
| 0.0554 | 7.0 | 12292 | 0.4836 | 0.9333 | 0.9379 | 0.9356 | 0.9239 |
| 0.0415 | 8.0 | 14048 | 0.5271 | 0.9361 | 0.9391 | 0.9376 | 0.9245 |
| 0.0285 | 9.0 | 15804 | 0.5363 | 0.9366 | 0.9397 | 0.9381 | 0.9253 |
| 0.022 | 10.0 | 17560 | 0.5653 | 0.9377 | 0.9396 | 0.9387 | 0.9258 |
| 0.0146 | 11.0 | 19316 | 0.5962 | 0.9374 | 0.9400 | 0.9387 | 0.9259 |
| 0.0121 | 12.0 | 21072 | 0.6061 | 0.9385 | 0.9401 | 0.9393 | 0.9266 |
| 0.0085 | 13.0 | 22828 | 0.6263 | 0.9384 | 0.9403 | 0.9394 | 0.9261 |
| 0.0062 | 14.0 | 24584 | 0.6365 | 0.9381 | 0.9399 | 0.9390 | 0.9259 |
| 0.0053 | 15.0 | 26340 | 0.6386 | 0.9384 | 0.9402 | 0.9393 | 0.9264 |
| 0.0042 | 16.0 | 28096 | 0.6442 | 0.9394 | 0.9408 | 0.9401 | 0.9270 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
CyberHarem/kanzaki_kaori_toarumajutsunoindex
|
CyberHarem
| 2023-09-16T05:04:48Z | 0 | 0 | null |
[
"art",
"text-to-image",
"dataset:CyberHarem/kanzaki_kaori_toarumajutsunoindex",
"license:mit",
"region:us"
] |
text-to-image
| 2023-08-15T21:03:11Z |
---
license: mit
datasets:
- CyberHarem/kanzaki_kaori_toarumajutsunoindex
pipeline_tag: text-to-image
tags:
- art
---
# Lora of kanzaki_kaori_toarumajutsunoindex
This model is trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion), and the auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs).
The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11).
After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora.
For example, if you want to use the model from step 7280, you need to download `7280/kanzaki_kaori_toarumajutsunoindex.pt` as the embedding and `7280/kanzaki_kaori_toarumajutsunoindex.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters.
**The best step we recommend is 7280**, with a score of 0.598. The trigger words are:
1. `kanzaki_kaori_toarumajutsunoindex`
2. `long_hair, ponytail, ribbon, hair_ribbon, black_hair, purple_eyes, very_long_hair`
Use of this model is not recommended for the following groups, to whom we express our regret:
1. Individuals who cannot tolerate any deviation from the original character design, even in the slightest detail.
2. Individuals whose application scenarios demand high accuracy in recreating character outfits.
3. Individuals who cannot accept the potential randomness of AI-generated images based on the Stable Diffusion algorithm.
4. Individuals who are not comfortable with the fully automated process of training character models with LoRA, or who believe that character models must be trained purely through manual operations to avoid disrespecting the characters.
5. Individuals who find the generated image content offensive to their values.
These are available steps:
| Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | pattern_5 | pattern_6 | pattern_7 | pattern_8 | pattern_9 | pattern_10 | pattern_11 | pattern_12 | pattern_13 | pattern_14 | pattern_15 | pattern_16 | pattern_17 | pattern_18 | pattern_19 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata |
|:---------|:----------|:-----------------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-----------------------------------------|:--------------------------------------------------|:-------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------|
| 7800 | 0.488 | [Download](7800/kanzaki_kaori_toarumajutsunoindex.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](7800/previews/bondage.png) |  |  |  | [<NSFW, click to see>](7800/previews/nude.png) | [<NSFW, click to see>](7800/previews/nude2.png) |  |  |
| **7280** | **0.598** | [**Download**](7280/kanzaki_kaori_toarumajutsunoindex.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](7280/previews/bondage.png) |  |  |  | [<NSFW, click to see>](7280/previews/nude.png) | [<NSFW, click to see>](7280/previews/nude2.png) |  |  |
| 6760 | 0.571 | [Download](6760/kanzaki_kaori_toarumajutsunoindex.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](6760/previews/bondage.png) |  |  |  | [<NSFW, click to see>](6760/previews/nude.png) | [<NSFW, click to see>](6760/previews/nude2.png) |  |  |
| 6240 | 0.498 | [Download](6240/kanzaki_kaori_toarumajutsunoindex.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](6240/previews/bondage.png) |  |  |  | [<NSFW, click to see>](6240/previews/nude.png) | [<NSFW, click to see>](6240/previews/nude2.png) |  |  |
| 5720 | 0.496 | [Download](5720/kanzaki_kaori_toarumajutsunoindex.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](5720/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5720/previews/nude.png) | [<NSFW, click to see>](5720/previews/nude2.png) |  |  |
| 5200 | 0.480 | [Download](5200/kanzaki_kaori_toarumajutsunoindex.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](5200/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5200/previews/nude.png) | [<NSFW, click to see>](5200/previews/nude2.png) |  |  |
| 4680 | 0.512 | [Download](4680/kanzaki_kaori_toarumajutsunoindex.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4680/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4680/previews/nude.png) | [<NSFW, click to see>](4680/previews/nude2.png) |  |  |
| 4160 | 0.483 | [Download](4160/kanzaki_kaori_toarumajutsunoindex.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4160/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4160/previews/nude.png) | [<NSFW, click to see>](4160/previews/nude2.png) |  |  |
| 3640 | 0.411 | [Download](3640/kanzaki_kaori_toarumajutsunoindex.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3640/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3640/previews/nude.png) | [<NSFW, click to see>](3640/previews/nude2.png) |  |  |
| 3120 | 0.438 | [Download](3120/kanzaki_kaori_toarumajutsunoindex.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3120/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3120/previews/nude.png) | [<NSFW, click to see>](3120/previews/nude2.png) |  |  |
| 2600 | 0.352 | [Download](2600/kanzaki_kaori_toarumajutsunoindex.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2600/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2600/previews/nude.png) | [<NSFW, click to see>](2600/previews/nude2.png) |  |  |
| 2080 | 0.300 | [Download](2080/kanzaki_kaori_toarumajutsunoindex.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2080/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2080/previews/nude.png) | [<NSFW, click to see>](2080/previews/nude2.png) |  |  |
| 1560 | 0.234 | [Download](1560/kanzaki_kaori_toarumajutsunoindex.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1560/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1560/previews/nude.png) | [<NSFW, click to see>](1560/previews/nude2.png) |  |  |
| 1040 | 0.137 | [Download](1040/kanzaki_kaori_toarumajutsunoindex.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1040/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1040/previews/nude.png) | [<NSFW, click to see>](1040/previews/nude2.png) |  |  |
| 520 | 0.101 | [Download](520/kanzaki_kaori_toarumajutsunoindex.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](520/previews/bondage.png) |  |  |  | [<NSFW, click to see>](520/previews/nude.png) | [<NSFW, click to see>](520/previews/nude2.png) |  |  |
|
shaowenchen/baichuan2-7b-base-gguf
|
shaowenchen
| 2023-09-16T04:50:40Z | 27 | 3 | null |
[
"gguf",
"baichuan",
"chinese",
"text-generation",
"zh",
"en",
"license:other",
"region:us"
] |
text-generation
| 2023-09-15T22:41:09Z |
---
inference: false
language:
- zh
- en
license: other
model_creator: baichuan-inc
model_link: https://www.modelscope.cn/models/baichuan-inc/Baichuan2-7B-Base
model_name: Baichuan2-7B-Base
model_type: baichuan
pipeline_tag: text-generation
quantized_by: shaowenchen
tasks:
- text2text-generation
tags:
- gguf
- baichuan
- chinese
---
## Provided files
| Name | Quant method | Size |
| ----------------------------- | ------------ | ------ |
| baichuan2-7b-base.Q2_K.gguf | Q2_K | 3.0 GB |
| baichuan2-7b-base.Q3_K.gguf | Q3_K | 3.5 GB |
| baichuan2-7b-base.Q3_K_L.gguf | Q3_K_L | 3.8 GB |
| baichuan2-7b-base.Q3_K_S.gguf | Q3_K_S | 3.2 GB |
| baichuan2-7b-base.Q4_0.gguf | Q4_0 | 4.1 GB |
| baichuan2-7b-base.Q4_1.gguf | Q4_1 | 4.5 GB |
| baichuan2-7b-base.Q4_K.gguf | Q4_K | 4.3 GB |
| baichuan2-7b-base.Q4_K_S.gguf | Q4_K_S | 4.1 GB |
| baichuan2-7b-base.Q5_0.gguf | Q5_0 | 4.9 GB |
| baichuan2-7b-base.Q5_1.gguf | Q5_1 | 5.3 GB |
| baichuan2-7b-base.Q5_K.gguf | Q5_K | 5.0 GB |
| baichuan2-7b-base.Q5_K_S.gguf | Q5_K_S | 4.9 GB |
| baichuan2-7b-base.Q6_K.gguf | Q6_K | 5.7 GB |
| baichuan2-7b-base.Q8_0.gguf | Q8_0 | 7.4 GB |
| baichuan2-7b-base.gguf | full | 14 GB |
Usage:
```
docker run --rm -it -p 8000:8000 -v /path/to/models:/models -e MODEL=/models/gguf-model-name.gguf hubimage/llama-cpp-python:latest
```
You can then open http://localhost:8000/docs to view the Swagger UI.
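A minimal client sketch, assuming the container exposes llama-cpp-python's OpenAI-compatible REST API on port 8000 (an illustration, not from the original card):
```python
import requests

resp = requests.post(
    "http://localhost:8000/v1/completions",
    json={"prompt": "The capital of France is", "max_tokens": 32},
)
print(resp.json()["choices"][0]["text"])
```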
## Provided images
| Name | Quant method | Size |
| ------------------------------------------- | ------------ | ------- |
| `shaowenchen/baichuan2-7b-base-gguf:Q2_K` | Q2_K | 4.01 GB |
| `shaowenchen/baichuan2-7b-base-gguf:Q3_K` | Q3_K | 4.52 GB |
| `shaowenchen/baichuan2-7b-base-gguf:Q3_K_L` | Q3_K_L | 4.82 GB |
| `shaowenchen/baichuan2-7b-base-gguf:Q3_K_S` | Q3_K_S | 4.17 GB |
| `shaowenchen/baichuan2-7b-base-gguf:Q4_0` | Q4_0 | 5.1 GB |
Usage:
```
docker run --rm -p 8000:8000 shaowenchen/baichuan2-7b-base-gguf:Q2_K
```
You can then open http://localhost:8000/docs to view the Swagger UI.
|
nightdude/config_8113572
|
nightdude
| 2023-09-16T04:35:17Z | 2 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-09-16T04:33:11Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0.dev0
|
dangkhoa99/falcon-7b-finetuned-QA-MRC-4-bit
|
dangkhoa99
| 2023-09-16T04:25:00Z | 8 | 1 |
peft
|
[
"peft",
"falcon-7b",
"custom_code",
"text-generation-inference",
"endpoints-template",
"text-generation",
"en",
"dataset:squad_v2",
"arxiv:2106.09685",
"arxiv:2305.14314",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-08-08T13:31:48Z |
---
library_name: peft
datasets:
- squad_v2
model-index:
- name: dangkhoa99/falcon-7b-finetuned-QA-MRC-4-bit
results: []
language:
- en
tags:
- falcon-7b
- custom_code
- text-generation-inference
- endpoints-template
metrics:
- exact_match
- f1
pipeline_tag: text-generation
inference: false
---
# 🚀 falcon-7b-finetuned-QA-MRC-4-bit
Falcon-7b-finetuned-QA-MRC-4-bit is a model for Machine Reading Comprehension (MRC) with Question Answering (QA). It was built by fine-tuning [Falcon-7B](https://huggingface.co/tiiuae/falcon-7b) on the [SQuAD2.0](https://huggingface.co/datasets/squad_v2) dataset. This repo only includes the LoRA adapters from fine-tuning with 🤗's [peft](https://github.com/huggingface/peft) package.
## Model Summary
- **Model Type:** Causal decoder-only
- **Language(s):** English
- **Base Model:** [Falcon-7B](https://huggingface.co/tiiuae/falcon-7b) (License: [Apache 2.0](https://huggingface.co/tiiuae/falcon-7b#license))
- **Dataset:** [SQuAD2.0](https://huggingface.co/datasets/squad_v2) (License: cc-by-sa-4.0)
- **License(s):** Apache 2.0 inherited from "Base Model" and cc-by-sa-4.0 inherited from "Dataset"
## Model Details
The model was fine-tuned in 4-bit precision using 🤗 `peft` adapters, `transformers`, and `bitsandbytes`. Training relied on a method called "Low Rank Adapters" ([LoRA](https://arxiv.org/pdf/2106.09685.pdf)), specifically the [QLoRA](https://arxiv.org/abs/2305.14314) variant. The run took **approximately 5.08 hours** and was executed on a workstation with **a single A100-SXM NVIDIA GPU** with 37 GB of available memory.
### Model Date
August 08, 2023
## Usage
### Prompt
The model was trained on the following kind of prompt:
```python
"""Answer the question based on the context below. If the question cannot be answered using the information provided answer with 'No answer'. Stop response if end.
>>TITLE<<: Flawless answer.
>>CONTEXT<<: {context}
>>QUESTION<<: {question}
>>ANSWER<<:
"""
```
### Inference
You will need **at least 6GB of memory** to swiftly run inference.
[Colab Notebook](https://colab.research.google.com/drive/1d2WP-MimF34NN72wGU0gX0uSUTHirN8A?usp=sharing)
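Before the examples, here is a sketch of how inference might be wired up, assuming the adapter is applied on top of `tiiuae/falcon-7b` loaded in 4-bit; the context, question, and generation settings are illustrative:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "tiiuae/falcon-7b"
adapter_id = "dangkhoa99/falcon-7b-finetuned-QA-MRC-4-bit"

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(
    base_id, load_in_4bit=True, device_map="auto", trust_remote_code=True
)
model = PeftModel.from_pretrained(model, adapter_id)  # attach the LoRA adapter

context = "The Amazon rainforest covers most of the Amazon basin of South America."
question = "Which continent contains the Amazon rainforest?"
prompt = (
    "Answer the question based on the context below. If the question cannot be "
    "answered using the information provided answer with 'No answer'. "
    "Stop response if end.\n"
    ">>TITLE<<: Flawless answer.\n"
    f">>CONTEXT<<: {context}\n"
    f">>QUESTION<<: {question}\n"
    ">>ANSWER<<: "
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```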
#### Example 1:
```python
context = '''The Amazon rainforest (Portuguese: Floresta Amazônica or Amazônia; Spanish: Selva Amazónica, Amazonía or usually Amazonia; French: Forêt amazonienne; Dutch: Amazoneregenwoud), also known in English as Amazonia or the Amazon Jungle, is a moist broadleaf forest that covers most of the Amazon basin of South America. This basin encompasses 7,000,000 square kilometres (2,700,000 sq mi), of which 5,500,000 square kilometres (2,100,000 sq mi) are covered by the rainforest. This region includes territory belonging to nine nations. The majority of the forest is contained within Brazil, with 60% of the rainforest, followed by Peru with 13%, Colombia with 10%, and with minor amounts in Venezuela, Ecuador, Bolivia, Guyana, Suriname and French Guiana. States or departments in four nations contain "Amazonas" in their names. The Amazon represents over half of the planet's remaining rainforests, and comprises the largest and most biodiverse tract of tropical rainforest in the world, with an estimated 390 billion individual trees divided into 16,000 species.'''
question = '''Which name is also used to describe the Amazon rainforest in English?'''
>>> 'Amazonia or the Amazon Jungle'
```
#### Example 2 (No answer):
```python
context = '''The Amazon rainforest (Portuguese: Floresta Amazônica or Amazônia; Spanish: Selva Amazónica, Amazonía or usually Amazonia; French: Forêt amazonienne; Dutch: Amazoneregenwoud), also known in English as Amazonia or the Amazon Jungle, is a moist broadleaf forest that covers most of the Amazon basin of South America. This basin encompasses 7,000,000 square kilometres (2,700,000 sq mi), of which 5,500,000 square kilometres (2,100,000 sq mi) are covered by the rainforest. This region includes territory belonging to nine nations. The majority of the forest is contained within Brazil, with 60% of the rainforest, followed by Peru with 13%, Colombia with 10%, and with minor amounts in Venezuela, Ecuador, Bolivia, Guyana, Suriname and French Guiana. States or departments in four nations contain "Amazonas" in their names. The Amazon represents over half of the planet's remaining rainforests, and comprises the largest and most biodiverse tract of tropical rainforest in the world, with an estimated 390 billion individual trees divided into 16,000 species.'''
question = '''What is 2 + 2?'''
>>> 'No answer'
```
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: float16
### Performance
Evaluated on the SQuAD 2.0 dev set with the [Metrics](https://huggingface.co/docs/datasets/v2.14.4/en/loading#metrics)
```python
'exact': 71.48993514697212
'f1': 76.65914166347146
'total': 11873
'HasAns_exact': 62.78677462887989
'HasAns_f1': 73.14001163468224
'HasAns_total': 5928
'NoAns_exact': 80.1682085786375
'NoAns_f1': 80.1682085786375
'NoAns_total': 5945
'best_exact': 71.48993514697212
'best_exact_thresh': 0.0
'best_f1': 76.65914166347147
'best_f1_thresh': 0.0
```
### Framework versions
- PEFT 0.5.0.dev0
- Transformers 4.31.0
- Datasets 2.14.4
- Tokenizers 0.13.3
|
chgenly/ppo-SnowballTarget
|
chgenly
| 2023-09-16T04:22:05Z | 9 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2023-09-16T03:58:22Z |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial on training your first agent using ML-Agents and publishing it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: chgenly/ppo-SnowballTarget
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
miittnnss/miittnnss-lora
|
miittnnss
| 2023-09-16T03:43:46Z | 5 | 2 |
diffusers
|
[
"diffusers",
"lora",
"art",
"stable-diffusion",
"text-to-image",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-09-16T02:58:15Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- lora
- art
- stable-diffusion
library_name: diffusers
pipeline_tag: text-to-image
inference: true
---
Try the gradio demo [here](https://huggingface.co/spaces/miittnnss/miittnnss-lora-diffusion)
------------
Hello! This is another of my LoRA models. It was trained using Hollowstrawberry's [LoRA Trainer on Google Colab](https://colab.research.google.com/github/hollowstrawberry/kohya-colab/blob/main/Lora_Trainer.ipynb).
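A minimal sketch (not from the original card) of applying these LoRA weights to the Stable Diffusion 1.5 base model with diffusers; the prompt is a placeholder, since the card does not document trigger words:
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("miittnnss/miittnnss-lora")

image = pipe("a colorful painting of a fox in a forest").images[0]  # placeholder prompt
image.save("sample.png")
```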
|
Bomml/Llama-2-70B-chat-w4-g128-awq
|
Bomml
| 2023-09-16T03:09:15Z | 0 | 0 | null |
[
"facebook",
"meta",
"pytorch",
"llama",
"llama-2",
"awq",
"text-generation",
"en",
"arxiv:2307.09288",
"license:other",
"region:us"
] |
text-generation
| 2023-09-16T02:50:36Z |
---
inference: false
language:
- en
license: other
model_creator: Meta Llama 2
model_link: https://huggingface.co/meta-llama/Llama-2-70b-chat-hf
model_name: Llama 2 70B Chat
model_type: llama
pipeline_tag: text-generation
quantized_by: Bomml.ai
tags:
- facebook
- meta
- pytorch
- llama
- llama-2
- awq
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/lnPZ6Nr.png" alt="Bomml.ai" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.6em; margin-bottom: 0em;"><a href="https://bomml.ai">Support: Bomml.ai</a></p>
</div>
</div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Llama 2 70B Chat - W4 G128 AWQ
- Model creator: [Meta Llama 2](https://huggingface.co/meta-llama)
- Original model: [Llama 2 70B Chat](https://huggingface.co/meta-llama/Llama-2-70b-chat-hf)
<!-- description start -->
## Description
This repo contains AWQ 4bit model files for [Meta Llama 2's Llama 2 70B Chat](https://huggingface.co/meta-llama/Llama-2-70b-chat-hf).
<!-- prompt-template start -->
## Prompt template: Llama-2-Chat
```
[INST] <<SYS>>
You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.
<</SYS>>
{prompt}[/INST]
```
<!-- prompt-template end -->
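As an illustration (not from the original card), a small helper that renders the single-turn template above:
```python
DEFAULT_SYSTEM = (
    "You are a helpful, respectful and honest assistant. "
    "Always answer as helpfully as possible, while being safe."
)

def format_llama2_chat(prompt: str, system: str = DEFAULT_SYSTEM) -> str:
    """Render a single-turn prompt in the Llama-2-Chat format shown above."""
    return f"[INST] <<SYS>>\n{system}\n<</SYS>>\n{prompt}[/INST]"

print(format_llama2_chat("What is AWQ quantization?"))
```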
# Original model card: Meta Llama 2's Llama 2 70B Chat
# **Llama 2**
Llama 2 is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. This is the repository for the 70B fine-tuned model, optimized for dialogue use cases and converted for the Hugging Face Transformers format. Links to other models can be found in the index at the bottom.
## Model Details
*Note: Use of this model is governed by the Meta license. In order to download the model weights and tokenizer, please visit the [website](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) and accept our License before requesting access here.*
Meta developed and publicly released the Llama 2 family of large language models (LLMs), a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. Our fine-tuned LLMs, called Llama-2-Chat, are optimized for dialogue use cases. Llama-2-Chat models outperform open-source chat models on most benchmarks we tested, and in our human evaluations for helpfulness and safety, are on par with some popular closed-source models like ChatGPT and PaLM.
**Model Developers** Meta
**Variations** Llama 2 comes in a range of parameter sizes — 7B, 13B, and 70B — as well as pretrained and fine-tuned variations.
**Input** Models input text only.
**Output** Models generate text only.
**Model Architecture** Llama 2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align to human preferences for helpfulness and safety.
||Training Data|Params|Content Length|GQA|Tokens|LR|
|---|---|---|---|---|---|---|
|Llama 2|*A new mix of publicly available online data*|7B|4k|✗|2.0T|3.0 x 10<sup>-4</sup>|
|Llama 2|*A new mix of publicly available online data*|13B|4k|✗|2.0T|3.0 x 10<sup>-4</sup>|
|Llama 2|*A new mix of publicly available online data*|70B|4k|✔|2.0T|1.5 x 10<sup>-4</sup>|
*Llama 2 family of models.* Token counts refer to pretraining data only. All models are trained with a global batch-size of 4M tokens. The bigger 70B model uses Grouped-Query Attention (GQA) for improved inference scalability.
**Model Dates** Llama 2 was trained between January 2023 and July 2023.
**Status** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback.
**License** A custom commercial license is available at: [https://ai.meta.com/resources/models-and-libraries/llama-downloads/](https://ai.meta.com/resources/models-and-libraries/llama-downloads/)
**Research Paper** ["Llama-2: Open Foundation and Fine-tuned Chat Models"](https://arxiv.org/abs/2307.09288)
## Intended Use
**Intended Use Cases** Llama 2 is intended for commercial and research use in English. Tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks.
**Out-of-scope Uses** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English. Use in any other way that is prohibited by the Acceptable Use Policy and Licensing Agreement for Llama 2.
## Hardware and Software
**Training Factors** We used custom training libraries, Meta's Research Super Cluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute.
**Carbon Footprint** Pretraining utilized a cumulative 3.3M GPU hours of computation on hardware of type A100-80GB (TDP of 350-400W). Estimated total emissions were 539 tCO2eq, 100% of which were offset by Meta’s sustainability program.
||Time (GPU hours)|Power Consumption (W)|Carbon Emitted(tCO<sub>2</sub>eq)|
|---|---|---|---|
|Llama 2 7B|184320|400|31.22|
|Llama 2 13B|368640|400|62.44|
|Llama 2 70B|1720320|400|291.42|
|Total|3311616||539.00|
**CO<sub>2</sub> emissions during pretraining.** Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others.
## Training Data
**Overview** Llama 2 was pretrained on 2 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over one million new human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data.
**Data Freshness** The pretraining data has a cutoff of September 2022, but some tuning data is more recent, up to July 2023.
## Evaluation Results
In this section, we report the results for the Llama 1 and Llama 2 models on standard academic benchmarks. For all the evaluations, we use our internal evaluations library.
|Model|Size|Code|Commonsense Reasoning|World Knowledge|Reading Comprehension|Math|MMLU|BBH|AGI Eval|
|---|---|---|---|---|---|---|---|---|---|
|Llama 1|7B|14.1|60.8|46.2|58.5|6.95|35.1|30.3|23.9|
|Llama 1|13B|18.9|66.1|52.6|62.3|10.9|46.9|37.0|33.9|
|Llama 1|33B|26.0|70.0|58.4|67.6|21.4|57.8|39.8|41.7|
|Llama 1|65B|30.7|70.7|60.5|68.6|30.8|63.4|43.5|47.6|
|Llama 2|7B|16.8|63.9|48.9|61.3|14.6|45.3|32.6|29.3|
|Llama 2|13B|24.5|66.9|55.4|65.8|28.7|54.8|39.4|39.1|
|Llama 2|70B|**37.5**|**71.9**|**63.6**|**69.4**|**35.2**|**68.9**|**51.2**|**54.2**|
**Overall performance on grouped academic benchmarks.** *Code:* We report the average pass@1 scores of our models on HumanEval and MBPP. *Commonsense Reasoning:* We report the average of PIQA, SIQA, HellaSwag, WinoGrande, ARC easy and challenge, OpenBookQA, and CommonsenseQA. We report 7-shot results for CommonSenseQA and 0-shot results for all other benchmarks. *World Knowledge:* We evaluate the 5-shot performance on NaturalQuestions and TriviaQA and report the average. *Reading Comprehension:* For reading comprehension, we report the 0-shot average on SQuAD, QuAC, and BoolQ. *MATH:* We report the average of the GSM8K (8 shot) and MATH (4 shot) benchmarks at top 1.
|||TruthfulQA|Toxigen|
|---|---|---|---|
|Llama 1|7B|27.42|23.00|
|Llama 1|13B|41.74|23.08|
|Llama 1|33B|44.19|22.57|
|Llama 1|65B|48.71|21.77|
|Llama 2|7B|33.29|**21.25**|
|Llama 2|13B|41.86|26.10|
|Llama 2|70B|**50.18**|24.60|
**Evaluation of pretrained LLMs on automatic safety benchmarks.** For TruthfulQA, we present the percentage of generations that are both truthful and informative (the higher the better). For ToxiGen, we present the percentage of toxic generations (the smaller the better).
|||TruthfulQA|Toxigen|
|---|---|---|---|
|Llama-2-Chat|7B|57.04|**0.00**|
|Llama-2-Chat|13B|62.18|**0.00**|
|Llama-2-Chat|70B|**64.14**|0.01|
**Evaluation of fine-tuned LLMs on different safety datasets.** Same metric definitions as above.
## Ethical Considerations and Limitations
Llama 2 is a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover all scenarios. For these reasons, as with all LLMs, Llama 2’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 2, developers should perform safety testing and tuning tailored to their specific applications of the model.
Please see the Responsible Use Guide available at [https://ai.meta.com/llama/responsible-use-guide/](https://ai.meta.com/llama/responsible-use-guide)
## Reporting Issues
Please report any software “bug,” or other problems with the models through one of the following means:
- Reporting issues with the model: [github.com/facebookresearch/llama](http://github.com/facebookresearch/llama)
- Reporting problematic content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback)
- Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info)
## Llama Model Index
|Model|Llama2|Llama2-hf|Llama2-chat|Llama2-chat-hf|
|---|---|---|---|---|
|7B| [Link](https://huggingface.co/llamaste/Llama-2-7b) | [Link](https://huggingface.co/llamaste/Llama-2-7b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-7b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-7b-chat-hf)|
|13B| [Link](https://huggingface.co/llamaste/Llama-2-13b) | [Link](https://huggingface.co/llamaste/Llama-2-13b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-13b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-13b-hf)|
|70B| [Link](https://huggingface.co/llamaste/Llama-2-70b) | [Link](https://huggingface.co/llamaste/Llama-2-70b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-70b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-70b-hf)|
|
CyberHarem/maekawa_miku_idolmastercinderellagirls
|
CyberHarem
| 2023-09-16T03:03:30Z | 0 | 0 | null |
[
"art",
"text-to-image",
"dataset:CyberHarem/maekawa_miku_idolmastercinderellagirls",
"license:mit",
"region:us"
] |
text-to-image
| 2023-09-16T02:42:35Z |
---
license: mit
datasets:
- CyberHarem/maekawa_miku_idolmastercinderellagirls
pipeline_tag: text-to-image
tags:
- art
---
# Lora of maekawa_miku_idolmastercinderellagirls
This model is trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion), and the auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs).
The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11).
After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora.
For example, if you want to use the model from step 7800, you need to download `7800/maekawa_miku_idolmastercinderellagirls.pt` as the embedding and `7800/maekawa_miku_idolmastercinderellagirls.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters.
**The best step we recommend is 7800**, with a score of 0.973. The trigger words are:
1. `maekawa_miku_idolmastercinderellagirls`
2. `brown_hair, green_eyes, short_hair, blush, smile, animal_ears, cat_ears, open_mouth, fang, breasts, bangs, medium_breasts`
Use of this model is not recommended for the following groups, to whom we express our regret:
1. Individuals who cannot tolerate any deviation from the original character design, even in the slightest detail.
2. Individuals whose application scenarios demand high accuracy in recreating character outfits.
3. Individuals who cannot accept the potential randomness of AI-generated images based on the Stable Diffusion algorithm.
4. Individuals who are not comfortable with the fully automated process of training character models with LoRA, or who believe that character models must be trained purely through manual operations to avoid disrespecting the characters.
5. Individuals who find the generated image content offensive to their values.
These are available steps:
| Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | pattern_5 | pattern_6 | pattern_7 | pattern_8 | pattern_9 | pattern_10 | pattern_11 | pattern_12 | pattern_13 | pattern_14 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata |
|:---------|:----------|:----------------------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:----------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-----------------------------------------|:--------------------------------------------------|:-------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------|
| **7800** | **0.973** | [**Download**](7800/maekawa_miku_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  | [<NSFW, click to see>](7800/previews/pattern_9.png) |  |  |  |  |  |  | [<NSFW, click to see>](7800/previews/bondage.png) |  |  |  | [<NSFW, click to see>](7800/previews/nude.png) | [<NSFW, click to see>](7800/previews/nude2.png) |  |  |
| 7280 | 0.963 | [Download](7280/maekawa_miku_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  | [<NSFW, click to see>](7280/previews/pattern_9.png) |  |  |  |  |  |  | [<NSFW, click to see>](7280/previews/bondage.png) |  |  |  | [<NSFW, click to see>](7280/previews/nude.png) | [<NSFW, click to see>](7280/previews/nude2.png) |  |  |
| 6760 | 0.968 | [Download](6760/maekawa_miku_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  | [<NSFW, click to see>](6760/previews/pattern_9.png) |  |  |  |  |  |  | [<NSFW, click to see>](6760/previews/bondage.png) |  |  |  | [<NSFW, click to see>](6760/previews/nude.png) | [<NSFW, click to see>](6760/previews/nude2.png) |  |  |
| 6240 | 0.965 | [Download](6240/maekawa_miku_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  | [<NSFW, click to see>](6240/previews/pattern_9.png) |  |  |  |  |  |  | [<NSFW, click to see>](6240/previews/bondage.png) |  |  |  | [<NSFW, click to see>](6240/previews/nude.png) | [<NSFW, click to see>](6240/previews/nude2.png) |  |  |
| 5720 | 0.954 | [Download](5720/maekawa_miku_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  | [<NSFW, click to see>](5720/previews/pattern_9.png) |  |  |  |  |  |  | [<NSFW, click to see>](5720/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5720/previews/nude.png) | [<NSFW, click to see>](5720/previews/nude2.png) |  |  |
| 5200 | 0.964 | [Download](5200/maekawa_miku_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  | [<NSFW, click to see>](5200/previews/pattern_9.png) |  |  |  |  |  |  | [<NSFW, click to see>](5200/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5200/previews/nude.png) | [<NSFW, click to see>](5200/previews/nude2.png) |  |  |
| 4680 | 0.962 | [Download](4680/maekawa_miku_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4680/previews/pattern_9.png) |  |  |  |  |  |  | [<NSFW, click to see>](4680/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4680/previews/nude.png) | [<NSFW, click to see>](4680/previews/nude2.png) |  |  |
| 4160 | 0.950 | [Download](4160/maekawa_miku_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4160/previews/pattern_9.png) |  |  |  |  |  |  | [<NSFW, click to see>](4160/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4160/previews/nude.png) | [<NSFW, click to see>](4160/previews/nude2.png) |  |  |
| 3640 | 0.944 | [Download](3640/maekawa_miku_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3640/previews/pattern_9.png) |  |  |  |  |  |  | [<NSFW, click to see>](3640/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3640/previews/nude.png) | [<NSFW, click to see>](3640/previews/nude2.png) |  |  |
| 3120 | 0.939 | [Download](3120/maekawa_miku_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3120/previews/pattern_9.png) |  |  |  |  |  |  | [<NSFW, click to see>](3120/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3120/previews/nude.png) | [<NSFW, click to see>](3120/previews/nude2.png) |  |  |
| 2600 | 0.874 | [Download](2600/maekawa_miku_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2600/previews/pattern_9.png) |  |  |  |  |  |  | [<NSFW, click to see>](2600/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2600/previews/nude.png) | [<NSFW, click to see>](2600/previews/nude2.png) |  |  |
| 2080 | 0.961 | [Download](2080/maekawa_miku_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2080/previews/pattern_9.png) |  |  |  |  |  |  | [<NSFW, click to see>](2080/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2080/previews/nude.png) | [<NSFW, click to see>](2080/previews/nude2.png) |  |  |
| 1560 | 0.903 | [Download](1560/maekawa_miku_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1560/previews/pattern_9.png) |  |  |  |  |  |  | [<NSFW, click to see>](1560/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1560/previews/nude.png) | [<NSFW, click to see>](1560/previews/nude2.png) |  |  |
| 1040 | 0.833 | [Download](1040/maekawa_miku_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1040/previews/pattern_9.png) |  |  |  |  |  |  | [<NSFW, click to see>](1040/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1040/previews/nude.png) | [<NSFW, click to see>](1040/previews/nude2.png) |  |  |
| 520 | 0.889 | [Download](520/maekawa_miku_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  | [<NSFW, click to see>](520/previews/pattern_9.png) |  |  |  |  |  |  | [<NSFW, click to see>](520/previews/bondage.png) |  |  |  | [<NSFW, click to see>](520/previews/nude.png) | [<NSFW, click to see>](520/previews/nude2.png) |  |  |
|
Abigail13/pruebaJA
|
Abigail13
| 2023-09-16T02:52:17Z | 0 | 0 |
transformers
|
[
"transformers",
"dataset:beans",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2023-09-15T02:45:18Z |
---
datasets:
- beans
library_name: transformers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
stablediffusionapi/controlnet-canny-sdxl-1.0
|
stablediffusionapi
| 2023-09-16T02:47:23Z | 3 | 0 |
diffusers
|
[
"diffusers",
"stablediffusionapi.com",
"stable-diffusion-api",
"text-to-image",
"ultra-realistic",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-09-16T02:42:45Z |
---
license: creativeml-openrail-m
tags:
- stablediffusionapi.com
- stable-diffusion-api
- text-to-image
- ultra-realistic
pinned: true
---
# controlnet-canny-sdxl-1.0 API Inference

## Get API Key
Get your API key from [Stable Diffusion API](http://stablediffusionapi.com/); no payment is needed.
Replace the key in the code below, and change **model_id** to "controlnet-canny-sdxl-1.0".
Coding in PHP/Node/Java etc.? Have a look at the docs for more code examples: [View docs](https://stablediffusionapi.com/docs)
Try model for free: [Generate Images](https://stablediffusionapi.com/models/controlnet-canny-sdxl-1.0)
Model link: [View model](https://stablediffusionapi.com/models/controlnet-canny-sdxl-1.0)
Credits: [View credits](https://civitai.com/?query=controlnet-canny-sdxl-1.0)
View all models: [View Models](https://stablediffusionapi.com/models)
```python
import requests
import json

url = "https://stablediffusionapi.com/api/v4/dreambooth"

payload = json.dumps({
    "key": "your_api_key",
    "model_id": "controlnet-canny-sdxl-1.0",
    "prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
    "negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
    "width": "512",
    "height": "512",
    "samples": "1",
    "num_inference_steps": "30",
    "safety_checker": "no",
    "enhance_prompt": "yes",
    "seed": None,
    "guidance_scale": 7.5,
    "multi_lingual": "no",
    "panorama": "no",
    "self_attention": "no",
    "upscale": "no",
    "embeddings": "embeddings_model_id",
    "lora": "lora_model_id",
    "webhook": None,
    "track_id": None
})

headers = {
    'Content-Type': 'application/json'
}

response = requests.request("POST", url, headers=headers, data=payload)
print(response.text)
```
> Use this coupon code to get 25% off **DMGG0RBN**
|
stablediffusionapi/copax-timelessxl-sdxl10
|
stablediffusionapi
| 2023-09-16T02:26:22Z | 820 | 5 |
diffusers
|
[
"diffusers",
"stablediffusionapi.com",
"stable-diffusion-api",
"text-to-image",
"ultra-realistic",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] |
text-to-image
| 2023-09-16T02:15:36Z |
---
license: creativeml-openrail-m
tags:
- stablediffusionapi.com
- stable-diffusion-api
- text-to-image
- ultra-realistic
pinned: true
---
# copax-timelessxl-sdxl10 API Inference

## Get API Key
Get your API key from [Stable Diffusion API](http://stablediffusionapi.com/); no payment is needed.
Replace the key in the code below, and change **model_id** to "copax-timelessxl-sdxl10".
Coding in PHP/Node/Java etc.? Have a look at the docs for more code examples: [View docs](https://stablediffusionapi.com/docs)
Try model for free: [Generate Images](https://stablediffusionapi.com/models/copax-timelessxl-sdxl10)
Model link: [View model](https://stablediffusionapi.com/models/copax-timelessxl-sdxl10)
Credits: [View credits](https://civitai.com/?query=copax-timelessxl-sdxl10)
View all models: [View Models](https://stablediffusionapi.com/models)
```python
import requests
import json

url = "https://stablediffusionapi.com/api/v4/dreambooth"

payload = json.dumps({
    "key": "your_api_key",
    "model_id": "copax-timelessxl-sdxl10",
    "prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
    "negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
    "width": "512",
    "height": "512",
    "samples": "1",
    "num_inference_steps": "30",
    "safety_checker": "no",
    "enhance_prompt": "yes",
    "seed": None,
    "guidance_scale": 7.5,
    "multi_lingual": "no",
    "panorama": "no",
    "self_attention": "no",
    "upscale": "no",
    "embeddings": "embeddings_model_id",
    "lora": "lora_model_id",
    "webhook": None,
    "track_id": None
})

headers = {
    'Content-Type': 'application/json'
}

response = requests.request("POST", url, headers=headers, data=payload)
print(response.text)
```
> Use this coupon code to get 25% off **DMGG0RBN**
|
reallygoodtechdeals/autotrain-lane-center3-89488143942
|
reallygoodtechdeals
| 2023-09-16T02:20:19Z | 185 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"swin",
"image-classification",
"autotrain",
"vision",
"dataset:reallygoodtechdeals/autotrain-data-lane-center3",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-09-16T02:18:40Z |
---
tags:
- autotrain
- vision
- image-classification
datasets:
- reallygoodtechdeals/autotrain-data-lane-center3
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
co2_eq_emissions:
emissions: 0.5738396582180998
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 89488143942
- CO2 Emissions (in grams): 0.5738
## Validation Metrics
- Loss: 1.067
- Accuracy: 0.457
- Macro F1: 0.348
- Micro F1: 0.457
- Weighted F1: 0.388
- Macro Precision: 0.303
- Micro Precision: 0.457
- Weighted Precision: 0.337
- Macro Recall: 0.410
- Micro Recall: 0.457
- Weighted Recall: 0.457
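A minimal usage sketch with the standard `transformers` image-classification pipeline (the image path below is a placeholder, not part of the dataset):

```python
# A minimal sketch, assuming transformers and Pillow are installed;
# "road_frame.jpg" is a placeholder input image.
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="reallygoodtechdeals/autotrain-lane-center3-89488143942",
)
print(classifier("road_frame.jpg"))  # list of {label, score} dicts
```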
|
Reham721/Subjective_QG
|
Reham721
| 2023-09-16T02:03:36Z | 110 | 0 |
transformers
|
[
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"ar",
"dataset:squad",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-09-15T20:17:26Z |
---
datasets:
- squad
language:
- ar
pipeline_tag: text2text-generation
---
|
stablediffusionapi/stable-diffusion-xl-1.0-inpainting-0.1
|
stablediffusionapi
| 2023-09-16T02:00:10Z | 43 | 1 |
diffusers
|
[
"diffusers",
"stablediffusionapi.com",
"stable-diffusion-api",
"text-to-image",
"ultra-realistic",
"license:creativeml-openrail-m",
"diffusers:StableDiffusionXLInpaintPipeline",
"region:us"
] |
text-to-image
| 2023-09-16T01:49:29Z |
---
license: creativeml-openrail-m
tags:
- stablediffusionapi.com
- stable-diffusion-api
- text-to-image
- ultra-realistic
pinned: true
---
# stable-diffusion-xl-1.0-inpainting-0.1 API Inference

## Get API Key
Get your API key from [Stable Diffusion API](http://stablediffusionapi.com/); no payment is needed.
Replace the key in the code below, and change **model_id** to "stable-diffusion-xl-1.0-inpainting-0.1".
Coding in PHP/Node/Java etc.? Have a look at the docs for more code examples: [View docs](https://stablediffusionapi.com/docs)
Try model for free: [Generate Images](https://stablediffusionapi.com/models/stable-diffusion-xl-1.0-inpainting-0.1)
Model link: [View model](https://stablediffusionapi.com/models/stable-diffusion-xl-1.0-inpainting-0.1)
Credits: [View credits](https://civitai.com/?query=stable-diffusion-xl-1.0-inpainting-0.1)
View all models: [View Models](https://stablediffusionapi.com/models)
```python
import requests
import json

url = "https://stablediffusionapi.com/api/v4/dreambooth"

payload = json.dumps({
    "key": "your_api_key",
    "model_id": "stable-diffusion-xl-1.0-inpainting-0.1",
    "prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
    "negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
    "width": "512",
    "height": "512",
    "samples": "1",
    "num_inference_steps": "30",
    "safety_checker": "no",
    "enhance_prompt": "yes",
    "seed": None,
    "guidance_scale": 7.5,
    "multi_lingual": "no",
    "panorama": "no",
    "self_attention": "no",
    "upscale": "no",
    "embeddings": "embeddings_model_id",
    "lora": "lora_model_id",
    "webhook": None,
    "track_id": None
})

headers = {
    'Content-Type': 'application/json'
}

response = requests.request("POST", url, headers=headers, data=payload)
print(response.text)
```
> Use this coupon code to get 25% off **DMGG0RBN**
|
lyogavin/Anima-7B-100K
|
lyogavin
| 2023-09-16T01:59:42Z | 1,537 | 31 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"llama2",
"100k",
"7b",
"custom_code",
"en",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-09-14T14:47:16Z |
---
license: apache-2.0
language:
- en
tags:
- llama2
- 100k
- 7b
---
Anima is an LLM supporting a 100K input token length. It is trained based on Llama 2 7B, so its license supports commercial use!
We carefully curated a long-QA training dataset, ranging from 30K to 100K tokens in length, to train this model. We also made many memory optimizations to make the model scale to 100K tokens.
## How to train/infer?
#### install dependencies
```bash
# Please update the path of `CUDA_HOME`
export CUDA_HOME=/usr/local/cuda-11.8
pip install transformers==4.31.0
pip install sentencepiece
pip install ninja
pip install flash-attn --no-build-isolation
pip install git+https://github.com/HazyResearch/flash-attention.git#subdirectory=csrc/rotary
pip install git+https://github.com/HazyResearch/flash-attention.git#subdirectory=csrc/xentropy
pip install evaluate
pip install git+https://github.com/huggingface/peft.git@v0.4.0
pip install wandb
```
#### inference
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

base_model = "lyogavin/Anima-7B-100K"
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(
    base_model,
    torch_dtype=torch.float16,
    trust_remote_code=True,
    device_map="auto",
)
model.eval()

prompt = "Where is the capital of US?"
inputs = tokenizer(prompt, return_tensors="pt")
inputs['input_ids'] = inputs['input_ids'].cuda()
inputs['attention_mask'] = inputs['attention_mask'].cuda()

# Generate
generate_ids = model.generate(
    **inputs,
    max_new_tokens=30,
    only_last_logit=True,  # to save memory
    use_cache=False,       # if you run into OOM, enabling this can save memory
    xentropy=True,
)
output = tokenizer.batch_decode(
    generate_ids,
    skip_special_tokens=True,
    clean_up_tokenization_spaces=False,
)[0]
```
#### Training
```bash
./run_longer_training.sh
```
## Evaluations
There are almost no evaluation datasets designed for 100K tokens, so we designed/curated some datasets for this model. We compared this model against several other public/private models.
#### 1. longchat topic retrieval
| Model | Accuracy |
|-------------------|---------|
| Claude2 | 0.9 |
| together llama2 32k | 0.15 |
| longchat 32k 1.5 | 0.05 |
| Anima 100K | 0.5 |
#### 2. longchat number retrieval
| Model | Accuracy |
|-------------------|---------|
| Claude2 | 0.85 |
| together llama2 32k | 0.2 |
| longchat 32k 1.5 | 0.05 |
| Anima 100K | 0.45 |
#### 3. Narrative QA in zeroscore
| Model | F1 |
|-------------------|---------|
| Claude2 | 0.6187 |
| together llama2 32k | 0.3833 |
| longchat 32k 1.5 | 0.2416 |
| Anima 100K | 0.4919 |
## Github
The GitHub repo is [here](https://github.com/lyogavin/Anima/tree/main/anima_100k).
|
CyberHarem/jougasaki_mika_idolmastercinderellagirls
|
CyberHarem
| 2023-09-16T01:49:15Z | 0 | 0 | null |
[
"art",
"text-to-image",
"dataset:CyberHarem/jougasaki_mika_idolmastercinderellagirls",
"license:mit",
"region:us"
] |
text-to-image
| 2023-09-16T01:28:33Z |
---
license: mit
datasets:
- CyberHarem/jougasaki_mika_idolmastercinderellagirls
pipeline_tag: text-to-image
tags:
- art
---
# Lora of jougasaki_mika_idolmastercinderellagirls
This model is trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion). And the auto-training framework is maintained by [DeepGHS Team](https://huggingface.co/deepghs).
The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11).
After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora.
For example, if you want to use the model from step 5400, you need to download `5400/jougasaki_mika_idolmastercinderellagirls.pt` as the embedding and `5400/jougasaki_mika_idolmastercinderellagirls.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters.
**The best step we recommend is 5400**, with the score of 0.993. The trigger words are:
1. `jougasaki_mika_idolmastercinderellagirls`
2. `pink_hair, yellow_eyes, smile, blush, breasts, jewelry, long_hair, bangs, bow, ponytail, hair_bow, medium_breasts, hair_between_eyes`
For the following groups, use of this model is not recommended, and we express our regret:
1. Individuals who cannot tolerate any deviations from the original character design, even in the slightest detail.
2. Individuals who face application scenarios with high demands for accuracy in recreating character outfits.
3. Individuals who cannot accept the potential randomness of AI-generated images based on the Stable Diffusion algorithm.
4. Individuals who are not comfortable with the fully automated process of training character models using LoRA, or those who believe that character models must be trained purely through manual operations to avoid disrespecting the characters.
5. Individuals who find the generated image content offensive to their values.
These are available steps:
| Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | pattern_5 | pattern_6 | pattern_7 | pattern_8 | pattern_9 | pattern_10 | pattern_11 | pattern_12 | pattern_13 | pattern_14 | pattern_15 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata |
|:---------|:----------|:------------------------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:--------------------------------------------------|:-------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------|
| 8100 | 0.989 | [Download](8100/jougasaki_mika_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](8100/previews/bikini.png) | [<NSFW, click to see>](8100/previews/bondage.png) |  |  |  | [<NSFW, click to see>](8100/previews/nude.png) | [<NSFW, click to see>](8100/previews/nude2.png) |  |  |
| 7560 | 0.988 | [Download](7560/jougasaki_mika_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](7560/previews/bikini.png) | [<NSFW, click to see>](7560/previews/bondage.png) |  |  |  | [<NSFW, click to see>](7560/previews/nude.png) | [<NSFW, click to see>](7560/previews/nude2.png) |  |  |
| 7020 | 0.992 | [Download](7020/jougasaki_mika_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](7020/previews/bikini.png) | [<NSFW, click to see>](7020/previews/bondage.png) |  |  |  | [<NSFW, click to see>](7020/previews/nude.png) | [<NSFW, click to see>](7020/previews/nude2.png) |  |  |
| 6480 | 0.992 | [Download](6480/jougasaki_mika_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](6480/previews/bikini.png) | [<NSFW, click to see>](6480/previews/bondage.png) |  |  |  | [<NSFW, click to see>](6480/previews/nude.png) | [<NSFW, click to see>](6480/previews/nude2.png) |  |  |
| 5940 | 0.988 | [Download](5940/jougasaki_mika_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](5940/previews/bikini.png) | [<NSFW, click to see>](5940/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5940/previews/nude.png) | [<NSFW, click to see>](5940/previews/nude2.png) |  |  |
| **5400** | **0.993** | [**Download**](5400/jougasaki_mika_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](5400/previews/bikini.png) | [<NSFW, click to see>](5400/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5400/previews/nude.png) | [<NSFW, click to see>](5400/previews/nude2.png) |  |  |
| 4860 | 0.992 | [Download](4860/jougasaki_mika_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4860/previews/bikini.png) | [<NSFW, click to see>](4860/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4860/previews/nude.png) | [<NSFW, click to see>](4860/previews/nude2.png) |  |  |
| 4320 | 0.992 | [Download](4320/jougasaki_mika_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4320/previews/bikini.png) | [<NSFW, click to see>](4320/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4320/previews/nude.png) | [<NSFW, click to see>](4320/previews/nude2.png) |  |  |
| 3780 | 0.992 | [Download](3780/jougasaki_mika_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3780/previews/bikini.png) | [<NSFW, click to see>](3780/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3780/previews/nude.png) | [<NSFW, click to see>](3780/previews/nude2.png) |  |  |
| 3240 | 0.981 | [Download](3240/jougasaki_mika_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3240/previews/bikini.png) | [<NSFW, click to see>](3240/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3240/previews/nude.png) | [<NSFW, click to see>](3240/previews/nude2.png) |  |  |
| 2700 | 0.993 | [Download](2700/jougasaki_mika_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2700/previews/bikini.png) | [<NSFW, click to see>](2700/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2700/previews/nude.png) | [<NSFW, click to see>](2700/previews/nude2.png) |  |  |
| 2160 | 0.992 | [Download](2160/jougasaki_mika_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2160/previews/bikini.png) | [<NSFW, click to see>](2160/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2160/previews/nude.png) | [<NSFW, click to see>](2160/previews/nude2.png) |  |  |
| 1620 | 0.952 | [Download](1620/jougasaki_mika_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1620/previews/bikini.png) | [<NSFW, click to see>](1620/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1620/previews/nude.png) | [<NSFW, click to see>](1620/previews/nude2.png) |  |  |
| 1080 | 0.989 | [Download](1080/jougasaki_mika_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1080/previews/bikini.png) | [<NSFW, click to see>](1080/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1080/previews/nude.png) | [<NSFW, click to see>](1080/previews/nude2.png) |  |  |
| 540 | 0.983 | [Download](540/jougasaki_mika_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](540/previews/bikini.png) | [<NSFW, click to see>](540/previews/bondage.png) |  |  |  | [<NSFW, click to see>](540/previews/nude.png) | [<NSFW, click to see>](540/previews/nude2.png) |  |  |
|
trieudemo11/llama_7b_attrb_cate_4m_240_0
|
trieudemo11
| 2023-09-16T01:45:31Z | 2 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-09-16T01:45:19Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.6.0.dev0
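A minimal loading sketch for this adapter; the base-model id below is an assumption inferred from the repository name (the card itself does not name the base model):

```python
# A minimal sketch; "meta-llama/Llama-2-7b-hf" is an assumed base model.
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")
model = PeftModel.from_pretrained(base, "trieudemo11/llama_7b_attrb_cate_4m_240_0")
model.eval()
```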
|
JoseVallar01/prueba13
|
JoseVallar01
| 2023-09-16T01:41:04Z | 0 | 0 | null |
[
"arxiv:1910.09700",
"region:us"
] | null | 2023-09-13T21:43:52Z |
---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
{}
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
jncraton/phi-1_5-ct2-int8
|
jncraton
| 2023-09-16T01:37:13Z | 3 | 0 |
transformers
|
[
"transformers",
"text-generation",
"en",
"arxiv:2309.05463",
"license:other",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-09-16T01:35:03Z |
---
license: other
language:
- en
pipeline_tag: text-generation
---
## Model Summary
The language model phi-1.5 is a Transformer with **1.3 billion** parameters. It was trained using the same data sources as [phi-1](https://huggingface.co/microsoft/phi-1), augmented with a new data source that consists of various NLP synthetic texts. When assessed against benchmarks testing common sense, language understanding, and logical reasoning, phi-1.5 demonstrates a nearly state-of-the-art performance among models with less than 10 billion parameters.
We **did not** fine-tune phi-1.5 either for **instruction following or through reinforcement learning from human feedback**. The intention behind crafting this open-source model is to provide the research community with a non-restricted small model to explore vital safety challenges, such as reducing toxicity, understanding societal biases, enhancing controllability, and more.
For a safer model release, we exclude generic web-crawl data sources such as common-crawl from the training. This strategy prevents direct exposure to potentially harmful online content, enhancing the model's safety without RLHF. However, the model is still vulnerable to generating harmful content. We hope the model can help the research community to further study the safety of language models.
phi-1.5 can write poems, draft emails, create stories, summarize texts, write Python code (such as downloading a Hugging Face transformer model), etc.
## Intended Uses
Given the nature of the training data, phi-1.5 is best suited for prompts using the QA format, the chat format, and the code format. Note that phi-1.5, being a base model, often produces irrelevant text following the main answer. In the following example, we've truncated the answer for illustrative purposes only.
#### QA format:
```markdown
Write a detailed analogy between mathematics and a lighthouse.
Answer: Mathematics is like a lighthouse, guiding us through the vast ocean of numbers and calculations. Just as a lighthouse illuminates the darkness, mathematics provides us with a clear path to navigate through complex problems. It helps us make sense of the world around us, just like a lighthouse helps ships find their way home.
```
where the model generates the text after "Answer:".
#### Chat format:
```markdown
Alice: I don't know why, I'm struggling to maintain focus while studying. Any suggestions?
Bob: Have you tried using a timer? It can help you stay on track and avoid distractions.
Alice: That's a good idea. I'll give it a try.
Charlie: Another thing that can help is to break up your study sessions into smaller chunks. It's easier to concentrate on one thing at a time.
Alice: That makes sense. I'll try that too.
Bob: And don't forget to take breaks! It's important to give your brain a rest so you can come back to your studies with a fresh perspective.
Alice: Thanks for the advice, guys. I feel more motivated now.
Charlie: No problem, Alice. We're all in this together.
Bob: Yeah, and remember that it's okay to ask for help if you need it. We're here to support each other.
```
where the model generates the text after the first "Bob:".
#### Code format:
```python
def print_prime(n):
    """
    Print all primes between 1 and n
    """
    primes = []
    for num in range(2, n+1):
        is_prime = True
        for i in range(2, int(math.sqrt(num))+1):
            if num % i == 0:
                is_prime = False
                break
        if is_prime:
            primes.append(num)
    print(primes)
```
where the model generates the text after the comments.
**Notes**
* phi-1.5 is intended for research purposes. The model-generated text/code should be treated as a starting point rather than a definitive solution for potential use cases. Users should be cautious when employing these models in their applications.
* Direct adoption for production tasks is out of the scope of this research project. As a result, phi-1.5 has not been tested to ensure that it performs adequately for any production-level application. Please refer to the limitation sections of this document for more details.
## Limitations of phi-1.5
* Generate Inaccurate Code and Facts: The model often produces incorrect code snippets and statements. Users should treat these outputs as suggestions or starting points, not as definitive or accurate solutions.
* Limited Scope for code: If the model generates Python scripts that utilize uncommon packages or scripts in other languages, we strongly recommend users manually verify all API uses.
* Unreliable Responses to Instruction: The model has not undergone instruction fine-tuning. As a result, it may struggle or fail to adhere to intricate or nuanced instructions provided by users.
* Language Limitations: The model is primarily designed to understand standard English. Informal English, slang, or any other language outside of English might pose challenges to its comprehension, leading to potential misinterpretations or errors in response.
* Potential Societal Biases: Regardless of the safe data used for its training, the model is not entirely free from societal biases. There's a possibility it may generate content that mirrors these societal biases, particularly if prompted or instructed to do so. We urge users to be aware of this and to exercise caution and critical thinking when interpreting model outputs.
* Toxicity: Although the model is trained with carefully selected data, it can still produce harmful content if explicitly prompted or instructed to do so. We chose to release the model for research purposes only -- we hope to help the open-source community develop the most effective ways to reduce the toxicity of a model directly after pretraining.
## Training
### Model
* Architecture: a Transformer-based model with next-word prediction objective
* Dataset size: 30B tokens
* Training tokens: 150B tokens
* Precision: fp16
* GPUs: 32xA100-40G
* Training time: 8 days
### Software
* [PyTorch](https://github.com/pytorch/pytorch)
* [DeepSpeed](https://github.com/microsoft/DeepSpeed)
* [flash-attention](https://github.com/HazyResearch/flash-attention)
### License
The model is licensed under the [Research License](https://huggingface.co/microsoft/phi-1_5/resolve/main/Research%20License.docx).
### Sample Code
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

torch.set_default_device('cuda')

model = AutoModelForCausalLM.from_pretrained("microsoft/phi-1_5", trust_remote_code=True, torch_dtype="auto")
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-1_5", trust_remote_code=True, torch_dtype="auto")

inputs = tokenizer('''```python
def print_prime(n):
    """
    Print all primes between 1 and n
    """''', return_tensors="pt", return_attention_mask=False)

outputs = model.generate(**inputs, max_length=200)
text = tokenizer.batch_decode(outputs)[0]
print(text)
```
**Remark.** In the generation function, our model currently does not support beam search (`num_beams` > 1) or the `attention_mask` parameter.
Furthermore, in the forward pass of the model, we currently do not support outputting hidden states or attention values, or using custom input embeddings (instead of the model's).
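Since this repository (`jncraton/phi-1_5-ct2-int8`) is a CTranslate2 int8 conversion, the `transformers` sample above will not load it directly. A minimal CTranslate2 sketch, assuming the repo has been downloaded to a local directory (the path and generation parameters are illustrative):

```python
# A minimal sketch; "phi-1_5-ct2-int8" is assumed to be a local copy of this repo.
import ctranslate2
from transformers import AutoTokenizer

generator = ctranslate2.Generator("phi-1_5-ct2-int8", device="cpu", compute_type="int8")
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-1_5", trust_remote_code=True)

prompt = "Write a detailed analogy between mathematics and a lighthouse.\nAnswer:"
tokens = tokenizer.convert_ids_to_tokens(tokenizer.encode(prompt))

results = generator.generate_batch([tokens], max_length=128, sampling_topk=1)
print(tokenizer.decode(results[0].sequences_ids[0]))
```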
### Citation
You can find the paper at https://arxiv.org/abs/2309.05463
```bib
@article{textbooks2,
title={Textbooks Are All You Need II: \textbf{phi-1.5} technical report},
author={Li, Yuanzhi and Bubeck, S{\'e}bastien and Eldan, Ronen and Del Giorno, Allie and Gunasekar, Suriya and Lee, Yin Tat},
journal={arXiv preprint arXiv:2309.05463},
year={2023}
}
```
|
nadeemraja/my_awesome_model
|
nadeemraja
| 2023-09-16T01:16:26Z | 109 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-09-11T03:05:53Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
model-index:
- name: my_awesome_model
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: test
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.93128
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2911
- Accuracy: 0.9313
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
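Expressed as `transformers` `TrainingArguments`, these settings correspond roughly to the following sketch (`output_dir` is a placeholder):

```python
# A rough sketch of the hyperparameters above; output_dir is a placeholder.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="my_awesome_model",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=2,
)
```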
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1191 | 1.0 | 1563 | 0.3104 | 0.9249 |
| 0.1073 | 2.0 | 3126 | 0.2911 | 0.9313 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
stablediffusionapi/indigo-furry-mix-v65
|
stablediffusionapi
| 2023-09-16T01:05:28Z | 54 | 0 |
diffusers
|
[
"diffusers",
"stablediffusionapi.com",
"stable-diffusion-api",
"text-to-image",
"ultra-realistic",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-09-16T00:29:41Z |
---
license: creativeml-openrail-m
tags:
- stablediffusionapi.com
- stable-diffusion-api
- text-to-image
- ultra-realistic
pinned: true
---
# indigo-furry-mix-v65 API Inference
,%20standing,%20solo,%20muscle,%20detailed%20scale%20texture,%20old%20castle,%20(battlefield),%20(tribal%20cloth.jpeg)
## Get API Key
Get your API key from [Stable Diffusion API](http://stablediffusionapi.com/); no payment is needed.
Replace the key in the code below, and change **model_id** to "indigo-furry-mix-v65".
Coding in PHP/Node/Java etc.? Have a look at the docs for more code examples: [View docs](https://stablediffusionapi.com/docs)
Try model for free: [Generate Images](https://stablediffusionapi.com/models/indigo-furry-mix-v65)
Model link: [View model](https://stablediffusionapi.com/models/indigo-furry-mix-v65)
Credits: [View credits](https://civitai.com/?query=indigo-furry-mix-v65)
View all models: [View Models](https://stablediffusionapi.com/models)
```python
import requests
import json

url = "https://stablediffusionapi.com/api/v4/dreambooth"

payload = json.dumps({
    "key": "your_api_key",
    "model_id": "indigo-furry-mix-v65",
    "prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
    "negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
    "width": "512",
    "height": "512",
    "samples": "1",
    "num_inference_steps": "30",
    "safety_checker": "no",
    "enhance_prompt": "yes",
    "seed": None,
    "guidance_scale": 7.5,
    "multi_lingual": "no",
    "panorama": "no",
    "self_attention": "no",
    "upscale": "no",
    "embeddings": "embeddings_model_id",
    "lora": "lora_model_id",
    "webhook": None,
    "track_id": None
})

headers = {
    'Content-Type': 'application/json'
}

response = requests.request("POST", url, headers=headers, data=payload)
print(response.text)
```
> Use this coupon code to get 25% off **DMGG0RBN**
|
CyberHarem/anastasia_idolmastercinderellagirls
|
CyberHarem
| 2023-09-16T00:33:35Z | 0 | 0 | null |
[
"art",
"text-to-image",
"dataset:CyberHarem/anastasia_idolmastercinderellagirls",
"license:mit",
"region:us"
] |
text-to-image
| 2023-09-16T00:19:45Z |
---
license: mit
datasets:
- CyberHarem/anastasia_idolmastercinderellagirls
pipeline_tag: text-to-image
tags:
- art
---
# Lora of anastasia_idolmastercinderellagirls
This model is trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion). And the auto-training framework is maintained by [DeepGHS Team](https://huggingface.co/deepghs).
The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11).
After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora.
For example, if you want to use the model from step 6240, you need to download `6240/anastasia_idolmastercinderellagirls.pt` as the embedding and `6240/anastasia_idolmastercinderellagirls.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters.
**The best step we recommend is 6240**, with the score of 0.934. The trigger words are:
1. `anastasia_idolmastercinderellagirls`
2. `short_hair, blue_eyes, grey_hair, smile, jewelry, breasts, hair_between_eyes, medium_breasts`
For the following groups, use of this model is not recommended, and we express our regret:
1. Individuals who cannot tolerate any deviations from the original character design, even in the slightest detail.
2. Individuals who face application scenarios with high demands for accuracy in recreating character outfits.
3. Individuals who cannot accept the potential randomness of AI-generated images based on the Stable Diffusion algorithm.
4. Individuals who are not comfortable with the fully automated process of training character models using LoRA, or those who believe that character models must be trained purely through manual operations to avoid disrespecting the characters.
5. Individuals who find the generated image content offensive to their values.
These are available steps:
| Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | pattern_5 | pattern_6 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata |
|:---------|:----------|:-------------------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:----------------------------------------------------|:-----------------------------------------|:--------------------------------------------------|:-------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------|
| 7200 | 0.925 | [Download](7200/anastasia_idolmastercinderellagirls.zip) |  |  |  |  |  | [<NSFW, click to see>](7200/previews/pattern_6.png) |  | [<NSFW, click to see>](7200/previews/bondage.png) |  |  |  | [<NSFW, click to see>](7200/previews/nude.png) | [<NSFW, click to see>](7200/previews/nude2.png) |  |  |
| 6720 | 0.921 | [Download](6720/anastasia_idolmastercinderellagirls.zip) |  |  |  |  |  | [<NSFW, click to see>](6720/previews/pattern_6.png) |  | [<NSFW, click to see>](6720/previews/bondage.png) |  |  |  | [<NSFW, click to see>](6720/previews/nude.png) | [<NSFW, click to see>](6720/previews/nude2.png) |  |  |
| **6240** | **0.934** | [**Download**](6240/anastasia_idolmastercinderellagirls.zip) |  |  |  |  |  | [<NSFW, click to see>](6240/previews/pattern_6.png) |  | [<NSFW, click to see>](6240/previews/bondage.png) |  |  |  | [<NSFW, click to see>](6240/previews/nude.png) | [<NSFW, click to see>](6240/previews/nude2.png) |  |  |
| 5760 | 0.858 | [Download](5760/anastasia_idolmastercinderellagirls.zip) |  |  |  |  |  | [<NSFW, click to see>](5760/previews/pattern_6.png) |  | [<NSFW, click to see>](5760/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5760/previews/nude.png) | [<NSFW, click to see>](5760/previews/nude2.png) |  |  |
| 5280 | 0.892 | [Download](5280/anastasia_idolmastercinderellagirls.zip) |  |  |  |  |  | [<NSFW, click to see>](5280/previews/pattern_6.png) |  | [<NSFW, click to see>](5280/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5280/previews/nude.png) | [<NSFW, click to see>](5280/previews/nude2.png) |  |  |
| 4800 | 0.915 | [Download](4800/anastasia_idolmastercinderellagirls.zip) |  |  |  |  |  | [<NSFW, click to see>](4800/previews/pattern_6.png) |  | [<NSFW, click to see>](4800/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4800/previews/nude.png) | [<NSFW, click to see>](4800/previews/nude2.png) |  |  |
| 4320 | 0.931 | [Download](4320/anastasia_idolmastercinderellagirls.zip) |  |  |  |  |  | [<NSFW, click to see>](4320/previews/pattern_6.png) |  | [<NSFW, click to see>](4320/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4320/previews/nude.png) | [<NSFW, click to see>](4320/previews/nude2.png) |  |  |
| 3840 | 0.893 | [Download](3840/anastasia_idolmastercinderellagirls.zip) |  |  |  |  |  | [<NSFW, click to see>](3840/previews/pattern_6.png) |  | [<NSFW, click to see>](3840/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3840/previews/nude.png) | [<NSFW, click to see>](3840/previews/nude2.png) |  |  |
| 3360 | 0.910 | [Download](3360/anastasia_idolmastercinderellagirls.zip) |  |  |  |  |  | [<NSFW, click to see>](3360/previews/pattern_6.png) |  | [<NSFW, click to see>](3360/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3360/previews/nude.png) | [<NSFW, click to see>](3360/previews/nude2.png) |  |  |
| 2880 | 0.909 | [Download](2880/anastasia_idolmastercinderellagirls.zip) |  |  |  |  |  | [<NSFW, click to see>](2880/previews/pattern_6.png) |  | [<NSFW, click to see>](2880/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2880/previews/nude.png) | [<NSFW, click to see>](2880/previews/nude2.png) |  |  |
| 2400 | 0.906 | [Download](2400/anastasia_idolmastercinderellagirls.zip) |  |  |  |  |  | [<NSFW, click to see>](2400/previews/pattern_6.png) |  | [<NSFW, click to see>](2400/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2400/previews/nude.png) | [<NSFW, click to see>](2400/previews/nude2.png) |  |  |
| 1920 | 0.821 | [Download](1920/anastasia_idolmastercinderellagirls.zip) |  |  |  |  |  | [<NSFW, click to see>](1920/previews/pattern_6.png) |  | [<NSFW, click to see>](1920/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1920/previews/nude.png) | [<NSFW, click to see>](1920/previews/nude2.png) |  |  |
| 1440 | 0.776 | [Download](1440/anastasia_idolmastercinderellagirls.zip) |  |  |  |  |  | [<NSFW, click to see>](1440/previews/pattern_6.png) |  | [<NSFW, click to see>](1440/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1440/previews/nude.png) | [<NSFW, click to see>](1440/previews/nude2.png) |  |  |
| 960 | 0.823 | [Download](960/anastasia_idolmastercinderellagirls.zip) |  |  |  |  |  | [<NSFW, click to see>](960/previews/pattern_6.png) |  | [<NSFW, click to see>](960/previews/bondage.png) |  |  |  | [<NSFW, click to see>](960/previews/nude.png) | [<NSFW, click to see>](960/previews/nude2.png) |  |  |
| 480 | 0.803 | [Download](480/anastasia_idolmastercinderellagirls.zip) |  |  |  |  |  | [<NSFW, click to see>](480/previews/pattern_6.png) |  | [<NSFW, click to see>](480/previews/bondage.png) |  |  |  | [<NSFW, click to see>](480/previews/nude.png) | [<NSFW, click to see>](480/previews/nude2.png) |  |  |
|
Chanblock/llama-2-7b-langchain-chat-1000_dataset
|
Chanblock
| 2023-09-16T00:27:15Z | 0 | 0 | null |
[
"tensorboard",
"generated_from_trainer",
"license:llama2",
"region:us"
] | null | 2023-09-15T23:59:23Z |
---
license: llama2
base_model: Photolens/llama-2-7b-langchain-chat
tags:
- generated_from_trainer
model-index:
- name: llama-2-7b-langchain-chat-1000_dataset
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama-2-7b-langchain-chat-1000_dataset
This model is a fine-tuned version of [Photolens/llama-2-7b-langchain-chat](https://huggingface.co/Photolens/llama-2-7b-langchain-chat) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 3
- eval_batch_size: 3
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
wooii/PPO-BipedalWalker-v3
|
wooii
| 2023-09-15T23:53:23Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"BipedalWalker-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-09-15T23:52:42Z |
---
library_name: stable-baselines3
tags:
- BipedalWalker-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: BipedalWalker-v3
type: BipedalWalker-v3
metrics:
- type: mean_reward
value: 169.50 +/- 155.55
name: mean_reward
verified: false
---
# **PPO** Agent playing **BipedalWalker-v3**
This is a trained model of a **PPO** agent playing **BipedalWalker-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption; check the repo's file list):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# The filename is assumed from the usual huggingface_sb3 naming convention.
checkpoint = load_from_hub(repo_id="wooii/PPO-BipedalWalker-v3", filename="PPO-BipedalWalker-v3.zip")
model = PPO.load(checkpoint)
```
|
Soxcr/Aniverse
|
Soxcr
| 2023-09-15T23:27:12Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-09-05T10:24:12Z |
---
license: creativeml-openrail-m
---
|
platzi/platzi-distilroberta-base-mrpc-glue-alejandro-arroyo
|
platzi
| 2023-09-15T23:26:13Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"base_model:distilbert/distilroberta-base",
"base_model:finetune:distilbert/distilroberta-base",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-09-15T23:15:45Z |
---
license: apache-2.0
base_model: distilroberta-base
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: platzi-distilroberta-base-mrpc-glue-alejandro-arroyo
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: mrpc
split: validation
args: mrpc
metrics:
- name: Accuracy
type: accuracy
value: 0.8357843137254902
- name: F1
type: f1
value: 0.8866328257191202
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# platzi-distilroberta-base-mrpc-glue-alejandro-arroyo
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9465
- Accuracy: 0.8358
- F1: 0.8866
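For a quick smoke test, the checkpoint can be loaded through a `pipeline`; MRPC is a sentence-pair paraphrase task, so inputs go in as text pairs (a sketch, with example sentences that are assumptions):
```python
from transformers import pipeline

clf = pipeline("text-classification", model="platzi/platzi-distilroberta-base-mrpc-glue-alejandro-arroyo")

# MRPC-style sentence pair: does the second sentence paraphrase the first?
print(clf({"text": "The company's shares rose sharply.", "text_pair": "Shares of the company increased."}))
```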
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.4531 | 1.09 | 500 | 0.5192 | 0.8064 | 0.8636 |
| 0.2895 | 2.18 | 1000 | 1.0305 | 0.8186 | 0.8729 |
| 0.166 | 3.27 | 1500 | 0.9465 | 0.8358 | 0.8866 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
MarianaChapman/RuzeShoesReviews
|
MarianaChapman
| 2023-09-15T23:11:18Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-09-15T23:07:55Z |
---
license: bsl-1.0
datasets:
- fka/awesome-chatgpt-prompts
language:
- aa
metrics:
- accuracy
library_name: adapter-transformers
pipeline_tag: image-to-text
tags:
- art
https://reviewsstate.com/ruze-shoes-reviews/-
|
mgmeskill/rl_course_vizdoom_health_gathering_supreme
|
mgmeskill
| 2023-09-15T22:44:36Z | 0 | 0 |
sample-factory
|
[
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-09-15T22:44:26Z |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 13.94 +/- 4.18
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r mgmeskill/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
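For example (a sketch; the repository flag name is taken from the Sample-Factory docs and should be double-checked):
```
python -m sf_examples.vizdoom.train_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --push_to_hub --hf_repository=mgmeskill/rl_course_vizdoom_health_gathering_supreme
```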
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m sf_examples.vizdoom.train_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note: you may have to adjust `--train_for_env_steps` to a suitably high number, as the experiment will resume from the step count at which it concluded.
|
DriveMyScream/Speech_Recognition
|
DriveMyScream
| 2023-09-15T22:41:09Z | 0 | 0 |
keras
|
[
"keras",
"tf-keras",
"region:us"
] | null | 2023-09-15T22:39:18Z |
---
library_name: keras
---
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
| Hyperparameters | Value |
| :-- | :-- |
| name | Adam |
| weight_decay | None |
| clipnorm | None |
| global_clipnorm | None |
| clipvalue | None |
| use_ema | False |
| ema_momentum | 0.99 |
| ema_overwrite_frequency | None |
| jit_compile | True |
| is_legacy_optimizer | False |
| learning_rate | 9.999999747378752e-05 |
| beta_1 | 0.9 |
| beta_2 | 0.999 |
| epsilon | 1e-07 |
| amsgrad | False |
| training_precision | float32 |
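Since the repo carries the `keras` library tag, it can presumably be loaded with the Keras mixin from `huggingface_hub` (a sketch; assumes the saved model artifacts in the repo are complete):
```python
from huggingface_hub import from_pretrained_keras

# Downloads and rebuilds the saved Keras model from the Hub.
model = from_pretrained_keras("DriveMyScream/Speech_Recognition")
model.summary()
```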
## Model Plot
<details>
<summary>View Model Plot</summary>

</details>
|
salim4n/Taxi-v3
|
salim4n
| 2023-09-15T21:48:16Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-09-15T21:48:12Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.54 +/- 2.74
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym

# load_from_hub is the Deep RL course helper; a possible implementation is sketched below.
model = load_from_hub(repo_id="salim4n/Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
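The card does not define `load_from_hub`; a minimal implementation consistent with the Deep RL course helper might look like this (the function body is an assumption):
```python
import pickle

from huggingface_hub import hf_hub_download


def load_from_hub(repo_id: str, filename: str) -> dict:
    """Download and unpickle the saved Q-table dictionary from the Hub."""
    pickle_path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(pickle_path, "rb") as f:
        return pickle.load(f)
```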
|
CyberHarem/futaba_anzu_idolmastercinderellagirls
|
CyberHarem
| 2023-09-15T21:19:23Z | 0 | 0 | null |
[
"art",
"text-to-image",
"dataset:CyberHarem/futaba_anzu_idolmastercinderellagirls",
"license:mit",
"region:us"
] |
text-to-image
| 2023-09-15T20:59:20Z |
---
license: mit
datasets:
- CyberHarem/futaba_anzu_idolmastercinderellagirls
pipeline_tag: text-to-image
tags:
- art
---
# Lora of futaba_anzu_idolmastercinderellagirls
This model is trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion). And the auto-training framework is maintained by [DeepGHS Team](https://huggingface.co/deepghs).
The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11).
After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora.
For example, if you want to use the model from step 6240, you need to download `6240/futaba_anzu_idolmastercinderellagirls.pt` as the embedding and `6240/futaba_anzu_idolmastercinderellagirls.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters.
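Both files can be fetched programmatically, for instance with `huggingface_hub` (a sketch; how the embedding and LoRA are then wired in depends on your Stable Diffusion frontend):
```python
from huggingface_hub import hf_hub_download

repo_id = "CyberHarem/futaba_anzu_idolmastercinderellagirls"
step = 6240

# The .pt file is used as a textual-inversion embedding, the .safetensors as a LoRA.
embedding_path = hf_hub_download(repo_id=repo_id, filename=f"{step}/futaba_anzu_idolmastercinderellagirls.pt")
lora_path = hf_hub_download(repo_id=repo_id, filename=f"{step}/futaba_anzu_idolmastercinderellagirls.safetensors")
print(embedding_path, lora_path)
```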
**The best step we recommend is 6240**, with a score of 0.779. The trigger words are:
1. `futaba_anzu_idolmastercinderellagirls`
2. `long_hair, blonde_hair, twintails, brown_eyes, blush, low_twintails, smile, open_mouth`
For the following groups, use of this model is not recommended, and we express our regret:
1. Individuals who cannot tolerate any deviations from the original character design, even in the slightest detail.
2. Individuals whose application scenarios demand high accuracy in recreating character outfits.
3. Individuals who cannot accept the potential randomness in AI-generated images based on the Stable Diffusion algorithm.
4. Individuals who are not comfortable with the fully automated process of training character models using LoRA, or those who believe that training character models must be done purely through manual operations to avoid disrespecting the characters.
5. Individuals who find the generated image content offensive to their values.
These are available steps:
| Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | pattern_5 | pattern_6 | pattern_7 | pattern_8 | pattern_9 | pattern_10 | pattern_11 | pattern_12 | pattern_13 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata |
|:---------|:----------|:---------------------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-----------------------------------------|:--------------------------------------------------|:-------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------|
| 7800 | 0.746 | [Download](7800/futaba_anzu_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](7800/previews/bondage.png) |  |  |  | [<NSFW, click to see>](7800/previews/nude.png) | [<NSFW, click to see>](7800/previews/nude2.png) |  |  |
| 7280 | 0.750 | [Download](7280/futaba_anzu_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](7280/previews/bondage.png) |  |  |  | [<NSFW, click to see>](7280/previews/nude.png) | [<NSFW, click to see>](7280/previews/nude2.png) |  |  |
| 6760 | 0.751 | [Download](6760/futaba_anzu_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](6760/previews/bondage.png) |  |  |  | [<NSFW, click to see>](6760/previews/nude.png) | [<NSFW, click to see>](6760/previews/nude2.png) |  |  |
| **6240** | **0.779** | [**Download**](6240/futaba_anzu_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](6240/previews/bondage.png) |  |  |  | [<NSFW, click to see>](6240/previews/nude.png) | [<NSFW, click to see>](6240/previews/nude2.png) |  |  |
| 5720 | 0.745 | [Download](5720/futaba_anzu_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](5720/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5720/previews/nude.png) | [<NSFW, click to see>](5720/previews/nude2.png) |  |  |
| 5200 | 0.770 | [Download](5200/futaba_anzu_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](5200/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5200/previews/nude.png) | [<NSFW, click to see>](5200/previews/nude2.png) |  |  |
| 4680 | 0.771 | [Download](4680/futaba_anzu_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4680/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4680/previews/nude.png) | [<NSFW, click to see>](4680/previews/nude2.png) |  |  |
| 4160 | 0.773 | [Download](4160/futaba_anzu_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4160/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4160/previews/nude.png) | [<NSFW, click to see>](4160/previews/nude2.png) |  |  |
| 3640 | 0.712 | [Download](3640/futaba_anzu_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3640/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3640/previews/nude.png) | [<NSFW, click to see>](3640/previews/nude2.png) |  |  |
| 3120 | 0.738 | [Download](3120/futaba_anzu_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3120/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3120/previews/nude.png) | [<NSFW, click to see>](3120/previews/nude2.png) |  |  |
| 2600 | 0.727 | [Download](2600/futaba_anzu_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2600/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2600/previews/nude.png) | [<NSFW, click to see>](2600/previews/nude2.png) |  |  |
| 2080 | 0.766 | [Download](2080/futaba_anzu_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2080/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2080/previews/nude.png) | [<NSFW, click to see>](2080/previews/nude2.png) |  |  |
| 1560 | 0.674 | [Download](1560/futaba_anzu_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1560/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1560/previews/nude.png) | [<NSFW, click to see>](1560/previews/nude2.png) |  |  |
| 1040 | 0.725 | [Download](1040/futaba_anzu_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1040/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1040/previews/nude.png) | [<NSFW, click to see>](1040/previews/nude2.png) |  |  |
| 520 | 0.670 | [Download](520/futaba_anzu_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](520/previews/bondage.png) |  |  |  | [<NSFW, click to see>](520/previews/nude.png) | [<NSFW, click to see>](520/previews/nude2.png) |  |  |
|
DriveMyScream/Image_Colorization
|
DriveMyScream
| 2023-09-15T21:19:12Z | 0 | 0 |
keras
|
[
"keras",
"tf-keras",
"region:us"
] | null | 2023-09-15T19:26:27Z |
---
library_name: keras
---
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Model Plot
<details>
<summary>View Model Plot</summary>

</details>
|
MohannadTak/ppo-LunarLander-v2-1e6
|
MohannadTak
| 2023-09-15T20:53:14Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-09-15T20:52:53Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 242.71 +/- 20.86
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption; check the repo's file list):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# The filename is assumed from the usual huggingface_sb3 naming convention.
checkpoint = load_from_hub(repo_id="MohannadTak/ppo-LunarLander-v2-1e6", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
vaiana/a2c-PandaReachDense-v3
|
vaiana
| 2023-09-15T20:47:29Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-09-15T20:41:58Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v3
type: PandaReachDense-v3
metrics:
- type: mean_reward
value: -0.23 +/- 0.11
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v3**
This is a trained model of an **A2C** agent playing **PandaReachDense-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption; check the repo's file list):
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# The filename is assumed from the usual huggingface_sb3 naming convention.
checkpoint = load_from_hub(repo_id="vaiana/a2c-PandaReachDense-v3", filename="a2c-PandaReachDense-v3.zip")
model = A2C.load(checkpoint)
```
|
PygmalionAI/pygmalion-2-13b
|
PygmalionAI
| 2023-09-15T20:29:04Z | 2,530 | 80 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"text generation",
"instruct",
"en",
"dataset:PygmalionAI/PIPPA",
"dataset:Open-Orca/OpenOrca",
"dataset:Norquinal/claude_multiround_chat_30k",
"dataset:jondurbin/airoboros-gpt4-1.4.1",
"dataset:databricks/databricks-dolly-15k",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2023-09-04T22:05:31Z |
---
language:
- en
thumbnail: null
tags:
- text generation
- instruct
pipeline_tag: text-generation
inference: false
license: llama2
datasets:
- PygmalionAI/PIPPA
- Open-Orca/OpenOrca
- Norquinal/claude_multiround_chat_30k
- jondurbin/airoboros-gpt4-1.4.1
- databricks/databricks-dolly-15k
---
<h1 style="text-align: center">Pygmalion-2 13B</h1>
<h2 style="text-align: center">An instruction-tuned Llama-2 biased towards fiction writing and conversation.</h2>
## Model Details
The long-awaited release of our new models based on Llama-2 is finally here. Pygmalion-2 13B (formerly known as Metharme) is based on
[Llama-2 13B](https://huggingface.co/meta-llama/llama-2-13b-hf) released by Meta AI.
The Metharme models were an experiment to try and get a model that is usable for conversation, roleplaying and storywriting,
but which can be guided using natural language like other instruct models. After much deliberation, we reached the conclusion
that the Metharme prompting format is superior (and easier to use) compared to the classic Pygmalion.
This model was trained by doing supervised fine-tuning over a mixture of regular instruction data alongside roleplay, fictional stories
and conversations with synthetically generated instructions attached.
This model is freely available for both commercial and non-commercial use, as per the Llama-2 license.
## Prompting
The model has been trained on prompts using three different roles, which are denoted by the following tokens: `<|system|>`, `<|user|>` and `<|model|>`.
The `<|system|>` prompt can be used to inject out-of-channel information behind the scenes, while the `<|user|>` prompt should be used to indicate user input.
The `<|model|>` token should then be used to indicate that the model should generate a response. These tokens can appear multiple times and are chained to form a conversation history.
### Prompting example
The system prompt has been designed to allow the model to "enter" various modes and dictate the reply length. Here's an example:
```
<|system|>Enter RP mode. Pretend to be {{char}} whose persona follows:
{{persona}}
You shall reply to the user while staying in character, and generate long responses.
```
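As a concrete illustration, here is a minimal generation sketch with `transformers` (the sampling parameters are assumptions, not tuned recommendations):
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "PygmalionAI/pygmalion-2-13b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Chain the role tokens to build the conversation history.
prompt = (
    "<|system|>Enter RP mode. Pretend to be {{char}} whose persona follows:\n"
    "{{persona}}\n\n"
    "You shall reply to the user while staying in character, and generate long responses."
    "<|user|>Hello there!<|model|>"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.8)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```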
## Dataset
The dataset used to fine-tune this model includes our own [PIPPA](https://huggingface.co/datasets/PygmalionAI/PIPPA), along with several other instruction
datasets, and datasets acquired from various RP forums.
## Limitations and biases
The intended use-case for this model is fictional writing for entertainment purposes. Any other sort of usage is out of scope.
As such, it was **not** fine-tuned to be safe and harmless: the base model _and_ this fine-tune have been trained on data known to contain profanity and texts that are lewd or otherwise offensive. It may produce socially unacceptable or undesirable text, even if the prompt itself does not include anything explicitly offensive. Outputs might often be factually wrong or misleading.
## Acknowledgements
We would like to thank [SpicyChat](https://spicychat.ai/) for sponsoring the training for this model.
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
|
QMB15/Stheno-L2-13B-8bit-exl2
|
QMB15
| 2023-09-15T20:28:39Z | 10 | 1 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"en",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-09-15T20:07:08Z |
---
license: llama2
language:
- en
---
This is an exllama V2 quantization of https://huggingface.co/TheBloke/Stheno-L2-13B-GPTQ
It uses a target of 8 bits per weight (bpw), intended for best quality on cards like a 3090 or similar.
It includes measurement.json for convenience when quantizing to other sizes.
Calibration data: https://huggingface.co/datasets/wikitext/resolve/refs%2Fconvert%2Fparquet/wikitext-2-v1/test/0000.parquet
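A rough loading sketch, with class and method names recalled from exllamav2's example scripts (treat them as assumptions and check the library's docs):
```python
from exllamav2 import ExLlamaV2, ExLlamaV2Cache, ExLlamaV2Config, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

config = ExLlamaV2Config()
config.model_dir = "./Stheno-L2-13B-8bit-exl2"  # local download of this repo
config.prepare()

model = ExLlamaV2(config)
model.load()
tokenizer = ExLlamaV2Tokenizer(config)
cache = ExLlamaV2Cache(model)
generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)

settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.8
print(generator.generate_simple("### Instruction:\nHello!\n\n### Response:\n", settings, 200))
```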
<img src="https://w.forfun.com/fetch/cb/cba2205390e517bea1ea60ca0b491af4.jpeg" style="width: 70%; min-width: 300px; display: block; margin: auto;">
An experimental merge of several models using two different methods, [Ties-Merge](https://github.com/cg123/ties-merge) and [BlockMerge_Gradient](https://github.com/Gryphe/BlockMerge_Gradient)
I plan for this to be the base of my model, with my own [Stheno: ERP-Based LORA] merged in some time in the future.
Stheno:
<br>Gradient Merge of Stheno-P1 & Stheno-P2.
SISTER MODEL HERE: [Stheno-Inverted-L2-13B](https://huggingface.co/Sao10K/Stheno-Inverted-L2-13B)
Quants courtesy of TheBloke!
<br>[GPTQ](https://huggingface.co/TheBloke/Stheno-L2-13B-GPTQ)
<br>[GGUF](https://huggingface.co/TheBloke/Stheno-L2-13B-GGUF)
<br>[GGML](https://huggingface.co/TheBloke/Stheno-L2-13B-GGML)
Test Checklist:
<br>Censorship - Fairly Uncensored
<br>Writing - Good Prose, Fairly Descriptive
<br>NSFW - Yes
<br>IQ Level - Pretty Smart
<br>Formatting - Proper Formatting with Examples
Stheno-P1 [Ties-Merge]
<br>-----[elinas/chronos-13b-v2](https://huggingface.co/elinas/chronos-13b-v2)
<br>-----[jondurbin/airoboros-l2-13b-2.1](https://huggingface.co/jondurbin/airoboros-l2-13b-2.1)
<br>-----[NousResearch/Nous-Hermes-Llama2-13b](https://huggingface.co/NousResearch/Nous-Hermes-Llama2-13b)+[nRuaif/Kimiko-v2 **LORA**](https://huggingface.co/nRuaif/Kimiko-v2-13B)
Stheno-P2 [Ties-Merge]
<br>-----[CalderaAI/13B-Legerdemain-L2](https://huggingface.co/CalderaAI/13B-Legerdemain-L2)+[lemonilia/limarp-llama2-v2 **LORA**](https://huggingface.co/lemonilia/limarp-llama2-v2)
<br>-----[ehartford/WizardLM-1.0-Uncensored-Llama2-13b](https://huggingface.co/ehartford/WizardLM-1.0-Uncensored-Llama2-13b)
<br>-----[Henk717/spring-dragon](https://huggingface.co/Henk717/spring-dragon)
Most formats could work, but my tests have all been done in Alpaca format and it works well.
```
### Instruction:
Your instruction or question here.
For roleplay purposes, I suggest the following - Write <CHAR NAME>'s next reply in a chat between <YOUR NAME> and <CHAR NAME>. Write a single reply only.
### Response:
```
Below is the Illustration for the Final Merge:

Once Again, thanks to [Chargoddard](https://huggingface.co/chargoddard) for his amazing and simple [ties-merge](https://github.com/cg123/ties-merge) script, and [Gryphe](https://huggingface.co/Gryphe) for their great [BlockMerge_Gradient](https://github.com/Gryphe/BlockMerge_Gradient) script.
Thanks to the original model creators too!
```
Art by wada_kazu / わだかず (pixiv page private?)
```
|
9au5a/test_trainer
|
9au5a
| 2023-09-15T20:20:02Z | 181 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:yelp_review_full",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-09-15T20:13:57Z |
---
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
datasets:
- yelp_review_full
metrics:
- accuracy
model-index:
- name: test_trainer
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: yelp_review_full
type: yelp_review_full
config: yelp_review_full
split: test
args: yelp_review_full
metrics:
- name: Accuracy
type: accuracy
value: 0.595
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test_trainer
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the yelp_review_full dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0594
- Accuracy: 0.595
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
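Expressed as `TrainingArguments`, these settings would look roughly like this (the output directory name is an assumption; omitted arguments take their defaults, which match the Adam settings above):
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="test_trainer",        # assumed from the model name
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3.0,
)
```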
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 125 | 1.1417 | 0.529 |
| No log | 2.0 | 250 | 1.0302 | 0.579 |
| No log | 3.0 | 375 | 1.0594 | 0.595 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.13.2
|
ncdisrup-ai/test_trainer
|
ncdisrup-ai
| 2023-09-15T20:12:07Z | 105 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"en",
"dataset:imdb",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-15T17:32:02Z |
---
license: apache-2.0
datasets:
- imdb
language:
- en
metrics:
- accuracy
library_name: transformers
pipeline_tag: text-classification
---
|
CyberHarem/sunazuka_akira_idolmastercinderellagirls
|
CyberHarem
| 2023-09-15T20:06:06Z | 0 | 0 | null |
[
"art",
"text-to-image",
"dataset:CyberHarem/sunazuka_akira_idolmastercinderellagirls",
"license:mit",
"region:us"
] |
text-to-image
| 2023-09-15T19:51:43Z |
---
license: mit
datasets:
- CyberHarem/sunazuka_akira_idolmastercinderellagirls
pipeline_tag: text-to-image
tags:
- art
---
# Lora of sunazuka_akira_idolmastercinderellagirls
This model is trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion). And the auto-training framework is maintained by [DeepGHS Team](https://huggingface.co/deepghs).
The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11).
After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora.
For example, if you want to use the model from step 2880, you need to download `2880/sunazuka_akira_idolmastercinderellagirls.pt` as the embedding and `2880/sunazuka_akira_idolmastercinderellagirls.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters.
**The best step we recommend is 2880**, with a score of 0.993. The trigger words are:
1. `sunazuka_akira_idolmastercinderellagirls`
2. `mole, mole_under_eye, bangs, long_hair, hair_between_eyes, brown_eyes, sharp_teeth, teeth, twintails, brown_hair, blush, open_mouth, black_hair, necktie, green_necktie`
For the following groups, use of this model is not recommended, and we express our regret:
1. Individuals who cannot tolerate any deviations from the original character design, even in the slightest detail.
2. Individuals whose application scenarios demand high accuracy in recreating character outfits.
3. Individuals who cannot accept the potential randomness in AI-generated images based on the Stable Diffusion algorithm.
4. Individuals who are not comfortable with the fully automated process of training character models using LoRA, or those who believe that training character models must be done purely through manual operations to avoid disrespecting the characters.
5. Individuals who find the generated image content offensive to their values.
These are available steps:
| Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | pattern_5 | pattern_6 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata |
|:---------|:----------|:------------------------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------|:--------------------------------------------------|:-------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------|
| 7200 | 0.991 | [Download](7200/sunazuka_akira_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  | [<NSFW, click to see>](7200/previews/bondage.png) |  |  |  | [<NSFW, click to see>](7200/previews/nude.png) | [<NSFW, click to see>](7200/previews/nude2.png) |  |  |
| 6720 | 0.992 | [Download](6720/sunazuka_akira_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  | [<NSFW, click to see>](6720/previews/bondage.png) |  |  |  | [<NSFW, click to see>](6720/previews/nude.png) | [<NSFW, click to see>](6720/previews/nude2.png) |  |  |
| 6240 | 0.991 | [Download](6240/sunazuka_akira_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  | [<NSFW, click to see>](6240/previews/bondage.png) |  |  |  | [<NSFW, click to see>](6240/previews/nude.png) | [<NSFW, click to see>](6240/previews/nude2.png) |  |  |
| 5760 | 0.991 | [Download](5760/sunazuka_akira_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  | [<NSFW, click to see>](5760/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5760/previews/nude.png) | [<NSFW, click to see>](5760/previews/nude2.png) |  |  |
| 5280 | 0.989 | [Download](5280/sunazuka_akira_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  | [<NSFW, click to see>](5280/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5280/previews/nude.png) | [<NSFW, click to see>](5280/previews/nude2.png) |  |  |
| 4800 | 0.991 | [Download](4800/sunazuka_akira_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  | [<NSFW, click to see>](4800/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4800/previews/nude.png) | [<NSFW, click to see>](4800/previews/nude2.png) |  |  |
| 4320 | 0.927 | [Download](4320/sunazuka_akira_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  | [<NSFW, click to see>](4320/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4320/previews/nude.png) | [<NSFW, click to see>](4320/previews/nude2.png) |  |  |
| 3840 | 0.988 | [Download](3840/sunazuka_akira_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  | [<NSFW, click to see>](3840/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3840/previews/nude.png) | [<NSFW, click to see>](3840/previews/nude2.png) |  |  |
| 3360 | 0.990 | [Download](3360/sunazuka_akira_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  | [<NSFW, click to see>](3360/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3360/previews/nude.png) | [<NSFW, click to see>](3360/previews/nude2.png) |  |  |
| **2880** | **0.993** | [**Download**](2880/sunazuka_akira_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  | [<NSFW, click to see>](2880/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2880/previews/nude.png) | [<NSFW, click to see>](2880/previews/nude2.png) |  |  |
| 2400 | 0.989 | [Download](2400/sunazuka_akira_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  | [<NSFW, click to see>](2400/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2400/previews/nude.png) | [<NSFW, click to see>](2400/previews/nude2.png) |  |  |
| 1920 | 0.987 | [Download](1920/sunazuka_akira_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  | [<NSFW, click to see>](1920/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1920/previews/nude.png) | [<NSFW, click to see>](1920/previews/nude2.png) |  |  |
| 1440 | 0.989 | [Download](1440/sunazuka_akira_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  | [<NSFW, click to see>](1440/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1440/previews/nude.png) | [<NSFW, click to see>](1440/previews/nude2.png) |  |  |
| 960 | 0.988 | [Download](960/sunazuka_akira_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  | [<NSFW, click to see>](960/previews/bondage.png) |  |  |  | [<NSFW, click to see>](960/previews/nude.png) | [<NSFW, click to see>](960/previews/nude2.png) |  |  |
| 480 | 0.986 | [Download](480/sunazuka_akira_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  | [<NSFW, click to see>](480/previews/bondage.png) |  |  |  | [<NSFW, click to see>](480/previews/nude.png) | [<NSFW, click to see>](480/previews/nude2.png) |  |  |
|