| Column | Type | Range |
|:--|:--|:--|
| modelId | string | length 5 to 139 |
| author | string | length 2 to 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 to 2025-08-30 12:27:52 |
| downloads | int64 | 0 to 223M |
| likes | int64 | 0 to 11.7k |
| library_name | string | 528 classes |
| tags | list | length 1 to 4.05k |
| pipeline_tag | string | 55 classes |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 to 2025-08-30 12:27:19 |
| card | string | length 11 to 1.01M |

Each record below consists of one metadata row (modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt) followed by the full model card.
| VictorGil75/workshops-setfit-model_V3 | VictorGil75 | 2023-09-16T18:37:42Z | 3 | 0 | sentence-transformers | ["sentence-transformers", "pytorch", "mpnet", "setfit", "text-classification", "arxiv:2209.11055", "license:apache-2.0", "region:us"] | text-classification | 2023-09-16T18:36:57Z |
---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# VictorGil75/workshops-setfit-model_V3
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
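The same two-step recipe can be reproduced in a few lines. Below is a minimal training sketch using the legacy `SetFitTrainer` API; the base checkpoint, example texts, and labels are placeholders, not the data this model was trained on:
```python
from datasets import Dataset
from sentence_transformers.losses import CosineSimilarityLoss
from setfit import SetFitModel, SetFitTrainer

# Tiny illustrative few-shot dataset (placeholder examples).
train_ds = Dataset.from_dict({
    "text": ["i loved the spiderman movie!", "pineapple on pizza is the worst"],
    "label": [1, 0],
})

# Step 1 starts from a pretrained Sentence Transformer body.
model = SetFitModel.from_pretrained("sentence-transformers/paraphrase-mpnet-base-v2")

trainer = SetFitTrainer(
    model=model,
    train_dataset=train_ds,
    loss_class=CosineSimilarityLoss,  # contrastive fine-tuning of the body
    num_iterations=20,                # contrastive pairs generated per example
)
trainer.train()  # fine-tunes the body, then fits the classification head
```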
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("VictorGil75/workshops-setfit-model_V3")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
| CyberHarem/kohinata_miho_idolmastercinderellagirls | CyberHarem | 2023-09-16T18:32:25Z | 0 | 1 | null | ["art", "text-to-image", "dataset:CyberHarem/kohinata_miho_idolmastercinderellagirls", "license:mit", "region:us"] | text-to-image | 2023-09-16T18:09:33Z |
---
license: mit
datasets:
- CyberHarem/kohinata_miho_idolmastercinderellagirls
pipeline_tag: text-to-image
tags:
- art
---
# Lora of kohinata_miho_idolmastercinderellagirls
This model was trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion); the auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs).
The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11).
After downloading the `.pt` and `.safetensors` files for the chosen step, you need to use them together: the `.pt` file is loaded as an embedding, while the `.safetensors` file is loaded as a LoRA.
For example, to use the model from step 4500, download `4500/kohinata_miho_idolmastercinderellagirls.pt` as the embedding and `4500/kohinata_miho_idolmastercinderellagirls.safetensors` as the LoRA. Using both files together, you can generate images of the desired character.
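As an illustration, the two files can be attached to a `diffusers` pipeline roughly as follows; this is a hedged sketch assuming a recent `diffusers` release and a diffusers-format base checkpoint, not the exact workflow used to produce the previews:
```python
import torch
from diffusers import StableDiffusionPipeline

# Base checkpoint is illustrative; the previews here used Meina/MeinaMix_V11.
pipe = StableDiffusionPipeline.from_pretrained(
    "Meina/MeinaMix_V11", torch_dtype=torch.float16
).to("cuda")

# The .pt file is registered as a textual-inversion embedding for the trigger word.
pipe.load_textual_inversion(
    "4500/kohinata_miho_idolmastercinderellagirls.pt",
    token="kohinata_miho_idolmastercinderellagirls",
)
# The .safetensors file carries the LoRA weights.
pipe.load_lora_weights("4500/kohinata_miho_idolmastercinderellagirls.safetensors")

image = pipe("kohinata_miho_idolmastercinderellagirls, short_hair, smile").images[0]
image.save("preview.png")
```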
**The best step we recommend is 4500**, with a score of 0.950. The trigger words are:
1. `kohinata_miho_idolmastercinderellagirls`
2. `short_hair, ahoge, black_hair, blush, brown_eyes, smile, open_mouth, breasts`
We regret that this model is not recommended for the following groups:
1. Individuals who cannot tolerate any deviations from the original character design, even in the slightest detail.
2. Individuals whose application scenarios demand high accuracy in recreating character outfits.
3. Individuals who cannot accept the potential randomness in AI-generated images based on the Stable Diffusion algorithm.
4. Individuals who are not comfortable with the fully automated process of training character models using LoRA, or who believe that character models must be trained purely through manual operations to avoid disrespecting the characters.
5. Individuals who find the generated image content offensive to their values.
These are available steps:
| Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | pattern_5 | pattern_6 | pattern_7 | pattern_8 | pattern_9 | pattern_10 | pattern_11 | pattern_12 | pattern_13 | pattern_14 | pattern_15 | pattern_16 | pattern_17 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata |
|:---------|:----------|:-----------------------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:----------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:----------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-----------------------------------------|:--------------------------------------------------|:-------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------|
| 7500 | 0.935 | [Download](7500/kohinata_miho_idolmastercinderellagirls.zip) |  |  |  | [<NSFW, click to see>](7500/previews/pattern_4.png) |  |  |  |  | [<NSFW, click to see>](7500/previews/pattern_9.png) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](7500/previews/bondage.png) |  |  |  | [<NSFW, click to see>](7500/previews/nude.png) | [<NSFW, click to see>](7500/previews/nude2.png) |  |  |
| 7000 | 0.945 | [Download](7000/kohinata_miho_idolmastercinderellagirls.zip) |  |  |  | [<NSFW, click to see>](7000/previews/pattern_4.png) |  |  |  |  | [<NSFW, click to see>](7000/previews/pattern_9.png) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](7000/previews/bondage.png) |  |  |  | [<NSFW, click to see>](7000/previews/nude.png) | [<NSFW, click to see>](7000/previews/nude2.png) |  |  |
| 6500 | 0.946 | [Download](6500/kohinata_miho_idolmastercinderellagirls.zip) |  |  |  | [<NSFW, click to see>](6500/previews/pattern_4.png) |  |  |  |  | [<NSFW, click to see>](6500/previews/pattern_9.png) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](6500/previews/bondage.png) |  |  |  | [<NSFW, click to see>](6500/previews/nude.png) | [<NSFW, click to see>](6500/previews/nude2.png) |  |  |
| 6000 | 0.878 | [Download](6000/kohinata_miho_idolmastercinderellagirls.zip) |  |  |  | [<NSFW, click to see>](6000/previews/pattern_4.png) |  |  |  |  | [<NSFW, click to see>](6000/previews/pattern_9.png) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](6000/previews/bondage.png) |  |  |  | [<NSFW, click to see>](6000/previews/nude.png) | [<NSFW, click to see>](6000/previews/nude2.png) |  |  |
| 5500 | 0.933 | [Download](5500/kohinata_miho_idolmastercinderellagirls.zip) |  |  |  | [<NSFW, click to see>](5500/previews/pattern_4.png) |  |  |  |  | [<NSFW, click to see>](5500/previews/pattern_9.png) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](5500/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5500/previews/nude.png) | [<NSFW, click to see>](5500/previews/nude2.png) |  |  |
| 5000 | 0.907 | [Download](5000/kohinata_miho_idolmastercinderellagirls.zip) |  |  |  | [<NSFW, click to see>](5000/previews/pattern_4.png) |  |  |  |  | [<NSFW, click to see>](5000/previews/pattern_9.png) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](5000/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5000/previews/nude.png) | [<NSFW, click to see>](5000/previews/nude2.png) |  |  |
| **4500** | **0.950** | [**Download**](4500/kohinata_miho_idolmastercinderellagirls.zip) |  |  |  | [<NSFW, click to see>](4500/previews/pattern_4.png) |  |  |  |  | [<NSFW, click to see>](4500/previews/pattern_9.png) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4500/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4500/previews/nude.png) | [<NSFW, click to see>](4500/previews/nude2.png) |  |  |
| 4000 | 0.902 | [Download](4000/kohinata_miho_idolmastercinderellagirls.zip) |  |  |  | [<NSFW, click to see>](4000/previews/pattern_4.png) |  |  |  |  | [<NSFW, click to see>](4000/previews/pattern_9.png) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4000/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4000/previews/nude.png) | [<NSFW, click to see>](4000/previews/nude2.png) |  |  |
| 3500 | 0.945 | [Download](3500/kohinata_miho_idolmastercinderellagirls.zip) |  |  |  | [<NSFW, click to see>](3500/previews/pattern_4.png) |  |  |  |  | [<NSFW, click to see>](3500/previews/pattern_9.png) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3500/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3500/previews/nude.png) | [<NSFW, click to see>](3500/previews/nude2.png) |  |  |
| 3000 | 0.923 | [Download](3000/kohinata_miho_idolmastercinderellagirls.zip) |  |  |  | [<NSFW, click to see>](3000/previews/pattern_4.png) |  |  |  |  | [<NSFW, click to see>](3000/previews/pattern_9.png) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3000/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3000/previews/nude.png) | [<NSFW, click to see>](3000/previews/nude2.png) |  |  |
| 2500 | 0.931 | [Download](2500/kohinata_miho_idolmastercinderellagirls.zip) |  |  |  | [<NSFW, click to see>](2500/previews/pattern_4.png) |  |  |  |  | [<NSFW, click to see>](2500/previews/pattern_9.png) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2500/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2500/previews/nude.png) | [<NSFW, click to see>](2500/previews/nude2.png) |  |  |
| 2000 | 0.915 | [Download](2000/kohinata_miho_idolmastercinderellagirls.zip) |  |  |  | [<NSFW, click to see>](2000/previews/pattern_4.png) |  |  |  |  | [<NSFW, click to see>](2000/previews/pattern_9.png) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2000/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2000/previews/nude.png) | [<NSFW, click to see>](2000/previews/nude2.png) |  |  |
| 1500 | 0.906 | [Download](1500/kohinata_miho_idolmastercinderellagirls.zip) |  |  |  | [<NSFW, click to see>](1500/previews/pattern_4.png) |  |  |  |  | [<NSFW, click to see>](1500/previews/pattern_9.png) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1500/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1500/previews/nude.png) | [<NSFW, click to see>](1500/previews/nude2.png) |  |  |
| 1000 | 0.863 | [Download](1000/kohinata_miho_idolmastercinderellagirls.zip) |  |  |  | [<NSFW, click to see>](1000/previews/pattern_4.png) |  |  |  |  | [<NSFW, click to see>](1000/previews/pattern_9.png) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1000/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1000/previews/nude.png) | [<NSFW, click to see>](1000/previews/nude2.png) |  |  |
| 500 | 0.759 | [Download](500/kohinata_miho_idolmastercinderellagirls.zip) |  |  |  | [<NSFW, click to see>](500/previews/pattern_4.png) |  |  |  |  | [<NSFW, click to see>](500/previews/pattern_9.png) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](500/previews/bondage.png) |  |  |  | [<NSFW, click to see>](500/previews/nude.png) | [<NSFW, click to see>](500/previews/nude2.png) |  |  |
| kanishka/smolm-mlm-bpe-unmask-seed_777 | kanishka | 2023-09-16T18:28:52Z | 104 | 0 | transformers | ["transformers", "pytorch", "roberta", "fill-mask", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us"] | fill-mask | 2023-09-16T16:28:31Z |
---
base_model: models/smolm-mlm/config.json
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: smolm-mlm-bpe-unmask-seed_777
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smolm-mlm-bpe-unmask-seed_777
This model is a fine-tuned version of [models/smolm-mlm/config.json](https://huggingface.co/models/smolm-mlm/config.json) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7113
- Accuracy: 0.4480
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 512
- seed: 777
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 24000
- num_epochs: 10.0
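For reference, these settings map directly onto the standard `transformers` `TrainingArguments`; a minimal sketch (the output directory name is hypothetical, data and model wiring omitted):
```python
from transformers import TrainingArguments

# Mirrors the hyperparameters listed above.
training_args = TrainingArguments(
    output_dir="smolm-mlm-bpe-unmask-seed_777",  # hypothetical
    learning_rate=5e-4,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=512,
    seed=777,
    lr_scheduler_type="linear",
    warmup_steps=24000,
    num_train_epochs=10.0,
)
```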
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:------:|:---------------:|:--------:|
| 3.54 | 1.0 | 11938 | 3.4890 | 0.3516 |
| 3.3273 | 2.0 | 23876 | 3.3427 | 0.3628 |
| 3.1559 | 3.0 | 35814 | 3.1642 | 0.3866 |
| 3.0559 | 4.0 | 47752 | 3.0452 | 0.3995 |
| 2.9405 | 5.0 | 59690 | 2.9770 | 0.4121 |
| 2.8478 | 6.0 | 71628 | 2.8953 | 0.4208 |
| 2.7872 | 7.0 | 83566 | 2.8437 | 0.4273 |
| 2.7092 | 8.0 | 95504 | 2.7663 | 0.4386 |
| 2.6641 | 9.0 | 107442 | 2.7282 | 0.4471 |
| 2.6346 | 10.0 | 119380 | 2.6966 | 0.4522 |
### Framework versions
- Transformers 4.32.1
- Pytorch 1.13.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
| LibreSD/Elldreth | LibreSD | 2023-09-16T18:19:23Z | 0 | 2 | null | ["stable-diffusion", "text-to-image", "license:creativeml-openrail-m", "region:us"] | text-to-image | 2023-09-16T17:34:35Z |
---
license: creativeml-openrail-m
tags:
- stable-diffusion
- text-to-image
---
civitai.com/models/2540/elldreths-stolendreams-mix
| Shishir1807/M3_llama | Shishir1807 | 2023-09-16T18:11:33Z | 6 | 0 | transformers | ["transformers", "pytorch", "llama", "text-generation", "gpt", "llm", "large language model", "h2o-llmstudio", "en", "autotrain_compatible", "text-generation-inference", "region:us"] | text-generation | 2023-09-16T18:10:11Z |
---
language:
- en
library_name: transformers
tags:
- gpt
- llm
- large language model
- h2o-llmstudio
inference: false
thumbnail: https://h2o.ai/etc.clientlibs/h2o/clientlibs/clientlib-site/resources/images/favicon.ico
---
# Model Card
## Summary
This model was trained using [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio).
- Base model: [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf)
## Usage
To use the model with the `transformers` library on a machine with GPUs, first make sure you have the `transformers`, `accelerate` and `torch` libraries installed.
```bash
pip install transformers==4.29.2
pip install einops==0.6.1
pip install accelerate==0.19.0
pip install torch==2.0.0
```
```python
import torch
from transformers import pipeline
generate_text = pipeline(
model="Shishir1807/M3_llama",
torch_dtype="auto",
trust_remote_code=True,
use_fast=True,
device_map={"": "cuda:0"},
)
res = generate_text(
"Why is drinking water so healthy?",
min_new_tokens=2,
max_new_tokens=256,
do_sample=False,
num_beams=1,
temperature=float(0.0),
repetition_penalty=float(1.2),
renormalize_logits=True
)
print(res[0]["generated_text"])
```
You can print a sample prompt after the preprocessing step to see how it is fed to the tokenizer:
```python
print(generate_text.preprocess("Why is drinking water so healthy?")["prompt_text"])
```
```bash
<|prompt|>Why is drinking water so healthy?</s><|answer|>
```
Alternatively, you can download [h2oai_pipeline.py](h2oai_pipeline.py), store it alongside your notebook, and construct the pipeline yourself from the loaded model and tokenizer. If the model and the tokenizer are fully supported in the `transformers` package, this will allow you to set `trust_remote_code=False`.
```python
import torch
from h2oai_pipeline import H2OTextGenerationPipeline
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained(
"Shishir1807/M3_llama",
use_fast=True,
padding_side="left",
trust_remote_code=True,
)
model = AutoModelForCausalLM.from_pretrained(
"Shishir1807/M3_llama",
torch_dtype="auto",
device_map={"": "cuda:0"},
trust_remote_code=True,
)
generate_text = H2OTextGenerationPipeline(model=model, tokenizer=tokenizer)
res = generate_text(
"Why is drinking water so healthy?",
min_new_tokens=2,
max_new_tokens=256,
do_sample=False,
num_beams=1,
temperature=float(0.0),
repetition_penalty=float(1.2),
renormalize_logits=True
)
print(res[0]["generated_text"])
```
You may also construct the pipeline yourself from the loaded model and tokenizer, handling the preprocessing steps explicitly:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "Shishir1807/M3_llama" # either local folder or huggingface model name
# Important: The prompt needs to be in the same format the model was trained with.
# You can find an example prompt in the experiment logs.
prompt = "<|prompt|>How are you?</s><|answer|>"
tokenizer = AutoTokenizer.from_pretrained(
model_name,
use_fast=True,
trust_remote_code=True,
)
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map={"": "cuda:0"},
trust_remote_code=True,
)
model.cuda().eval()
inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).to("cuda")
# generate configuration can be modified to your needs
tokens = model.generate(
input_ids=inputs["input_ids"],
attention_mask=inputs["attention_mask"],
min_new_tokens=2,
max_new_tokens=256,
do_sample=False,
num_beams=1,
temperature=float(0.0),
repetition_penalty=float(1.2),
renormalize_logits=True
)[0]
tokens = tokens[inputs["input_ids"].shape[1]:]
answer = tokenizer.decode(tokens, skip_special_tokens=True)
print(answer)
```
## Quantization and sharding
You can load the model in a quantized form by specifying `load_in_8bit=True` or `load_in_4bit=True`. Sharding across multiple GPUs is possible by setting `device_map="auto"`.
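A minimal sketch of both options, assuming `bitsandbytes` is installed (flag support varies with the `transformers` version):
```python
from transformers import AutoModelForCausalLM

# 8-bit quantized load, sharded automatically across available GPUs.
model = AutoModelForCausalLM.from_pretrained(
    "Shishir1807/M3_llama",
    load_in_8bit=True,   # or load_in_4bit=True
    device_map="auto",
    trust_remote_code=True,
)
```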
## Model Architecture
```
LlamaForCausalLM(
(model): LlamaModel(
(embed_tokens): Embedding(32000, 4096, padding_idx=0)
(layers): ModuleList(
(0-31): 32 x LlamaDecoderLayer(
(self_attn): LlamaAttention(
(q_proj): Linear(in_features=4096, out_features=4096, bias=False)
(k_proj): Linear(in_features=4096, out_features=4096, bias=False)
(v_proj): Linear(in_features=4096, out_features=4096, bias=False)
(o_proj): Linear(in_features=4096, out_features=4096, bias=False)
(rotary_emb): LlamaRotaryEmbedding()
)
(mlp): LlamaMLP(
(gate_proj): Linear(in_features=4096, out_features=11008, bias=False)
(down_proj): Linear(in_features=11008, out_features=4096, bias=False)
(up_proj): Linear(in_features=4096, out_features=11008, bias=False)
(act_fn): SiLUActivation()
)
(input_layernorm): LlamaRMSNorm()
(post_attention_layernorm): LlamaRMSNorm()
)
)
(norm): LlamaRMSNorm()
)
(lm_head): Linear(in_features=4096, out_features=32000, bias=False)
)
```
## Model Configuration
This model was trained using H2O LLM Studio and with the configuration in [cfg.yaml](cfg.yaml). Visit [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio) to learn how to train your own large language models.
## Disclaimer
Please read this disclaimer carefully before using the large language model provided in this repository. Your use of the model signifies your agreement to the following terms and conditions.
- Biases and Offensiveness: The large language model is trained on a diverse range of internet text data, which may contain biased, racist, offensive, or otherwise inappropriate content. By using this model, you acknowledge and accept that the generated content may sometimes exhibit biases or produce content that is offensive or inappropriate. The developers of this repository do not endorse, support, or promote any such content or viewpoints.
- Limitations: The large language model is an AI-based tool and not a human. It may produce incorrect, nonsensical, or irrelevant responses. It is the user's responsibility to critically evaluate the generated content and use it at their discretion.
- Use at Your Own Risk: Users of this large language model must assume full responsibility for any consequences that may arise from their use of the tool. The developers and contributors of this repository shall not be held liable for any damages, losses, or harm resulting from the use or misuse of the provided model.
- Ethical Considerations: Users are encouraged to use the large language model responsibly and ethically. By using this model, you agree not to use it for purposes that promote hate speech, discrimination, harassment, or any form of illegal or harmful activities.
- Reporting Issues: If you encounter any biased, offensive, or otherwise inappropriate content generated by the large language model, please report it to the repository maintainers through the provided channels. Your feedback will help improve the model and mitigate potential issues.
- Changes to this Disclaimer: The developers of this repository reserve the right to modify or update this disclaimer at any time without prior notice. It is the user's responsibility to periodically review the disclaimer to stay informed about any changes.
By using the large language model provided in this repository, you agree to accept and comply with the terms and conditions outlined in this disclaimer. If you do not agree with any part of this disclaimer, you should refrain from using the model and any content generated by it.
| kanishka/smolm-mlm-bpe-unmask-seed_111 | kanishka | 2023-09-16T18:09:09Z | 110 | 0 | transformers | ["transformers", "pytorch", "safetensors", "roberta", "fill-mask", "generated_from_trainer", "dataset:AO-CHILDES", "autotrain_compatible", "endpoints_compatible", "region:us"] | fill-mask | 2023-09-16T04:18:06Z |
---
base_model: models/smolm-mlm/config.json
tags:
- generated_from_trainer
datasets:
- AO-CHILDES
metrics:
- accuracy
widget:
- text: Do you like your <mask> ?
- text: Look here . What is that <mask> ?
- text: Where is <mask> ?
model-index:
- name: smolm-mlm-bpe-unmask-seed_111
results: []
pipeline_tag: fill-mask
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smolm-mlm-bpe-unmask-seed_111
This model is a fine-tuned version of [models/smolm-mlm/config.json](https://huggingface.co/models/smolm-mlm/config.json) on 5M words of American-English child-directed input.
It achieves the following results on the evaluation set:
- Loss: 2.6956
- Accuracy: 0.4492
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 512
- seed: 111
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 24000
- num_epochs: 10.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:------:|:---------------:|:--------:|
| 3.5129 | 1.0 | 11938 | 3.4627 | 0.3523 |
| 3.319 | 2.0 | 23876 | 3.3322 | 0.3641 |
| 3.1577 | 3.0 | 35814 | 3.1841 | 0.3810 |
| 3.0357 | 4.0 | 47752 | 3.0588 | 0.3982 |
| 2.9606 | 5.0 | 59690 | 2.9535 | 0.4109 |
| 2.87 | 6.0 | 71628 | 2.8745 | 0.4221 |
| 2.7817 | 7.0 | 83566 | 2.8351 | 0.4284 |
| 2.7388 | 8.0 | 95504 | 2.7536 | 0.4417 |
| 2.6618 | 9.0 | 107442 | 2.7308 | 0.4424 |
| 2.6258 | 10.0 | 119380 | 2.6880 | 0.4522 |
### Framework versions
- Transformers 4.32.1
- Pytorch 1.13.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
| Shishir1807/M2_llama | Shishir1807 | 2023-09-16T18:08:22Z | 4 | 0 | transformers | ["transformers", "pytorch", "llama", "text-generation", "gpt", "llm", "large language model", "h2o-llmstudio", "en", "autotrain_compatible", "text-generation-inference", "region:us"] | text-generation | 2023-09-16T18:06:54Z |
---
language:
- en
library_name: transformers
tags:
- gpt
- llm
- large language model
- h2o-llmstudio
inference: false
thumbnail: https://h2o.ai/etc.clientlibs/h2o/clientlibs/clientlib-site/resources/images/favicon.ico
---
# Model Card
## Summary
This model was trained using [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio).
- Base model: [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf)
## Usage
To use the model with the `transformers` library on a machine with GPUs, first make sure you have the `transformers`, `accelerate` and `torch` libraries installed.
```bash
pip install transformers==4.29.2
pip install einops==0.6.1
pip install accelerate==0.19.0
pip install torch==2.0.0
```
```python
import torch
from transformers import pipeline
generate_text = pipeline(
model="Shishir1807/M2_llama",
torch_dtype="auto",
trust_remote_code=True,
use_fast=True,
device_map={"": "cuda:0"},
)
res = generate_text(
"Why is drinking water so healthy?",
min_new_tokens=2,
max_new_tokens=256,
do_sample=False,
num_beams=1,
temperature=float(0.0),
repetition_penalty=float(1.2),
renormalize_logits=True
)
print(res[0]["generated_text"])
```
You can print a sample prompt after the preprocessing step to see how it is fed to the tokenizer:
```python
print(generate_text.preprocess("Why is drinking water so healthy?")["prompt_text"])
```
```bash
<|prompt|>Why is drinking water so healthy?</s><|answer|>
```
Alternatively, you can download [h2oai_pipeline.py](h2oai_pipeline.py), store it alongside your notebook, and construct the pipeline yourself from the loaded model and tokenizer. If the model and the tokenizer are fully supported in the `transformers` package, this will allow you to set `trust_remote_code=False`.
```python
import torch
from h2oai_pipeline import H2OTextGenerationPipeline
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained(
"Shishir1807/M2_llama",
use_fast=True,
padding_side="left",
trust_remote_code=True,
)
model = AutoModelForCausalLM.from_pretrained(
"Shishir1807/M2_llama",
torch_dtype="auto",
device_map={"": "cuda:0"},
trust_remote_code=True,
)
generate_text = H2OTextGenerationPipeline(model=model, tokenizer=tokenizer)
res = generate_text(
"Why is drinking water so healthy?",
min_new_tokens=2,
max_new_tokens=256,
do_sample=False,
num_beams=1,
temperature=float(0.0),
repetition_penalty=float(1.2),
renormalize_logits=True
)
print(res[0]["generated_text"])
```
You may also construct the pipeline yourself from the loaded model and tokenizer, handling the preprocessing steps explicitly:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "Shishir1807/M2_llama" # either local folder or huggingface model name
# Important: The prompt needs to be in the same format the model was trained with.
# You can find an example prompt in the experiment logs.
prompt = "<|prompt|>How are you?</s><|answer|>"
tokenizer = AutoTokenizer.from_pretrained(
model_name,
use_fast=True,
trust_remote_code=True,
)
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map={"": "cuda:0"},
trust_remote_code=True,
)
model.cuda().eval()
inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).to("cuda")
# generate configuration can be modified to your needs
tokens = model.generate(
input_ids=inputs["input_ids"],
attention_mask=inputs["attention_mask"],
min_new_tokens=2,
max_new_tokens=256,
do_sample=False,
num_beams=1,
temperature=float(0.0),
repetition_penalty=float(1.2),
renormalize_logits=True
)[0]
tokens = tokens[inputs["input_ids"].shape[1]:]
answer = tokenizer.decode(tokens, skip_special_tokens=True)
print(answer)
```
## Quantization and sharding
You can load the model in a quantized form by specifying `load_in_8bit=True` or `load_in_4bit=True`. Sharding across multiple GPUs is possible by setting `device_map="auto"`.
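For example, a 4-bit load can be expressed through `BitsAndBytesConfig`; this is a sketch assuming `bitsandbytes` and a sufficiently recent `transformers`, not a configuration tested for this card:
```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,
)
model = AutoModelForCausalLM.from_pretrained(
    "Shishir1807/M2_llama",
    quantization_config=quant_config,
    device_map="auto",  # shard across available GPUs
    trust_remote_code=True,
)
```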
## Model Architecture
```
LlamaForCausalLM(
(model): LlamaModel(
(embed_tokens): Embedding(32000, 4096, padding_idx=0)
(layers): ModuleList(
(0-31): 32 x LlamaDecoderLayer(
(self_attn): LlamaAttention(
(q_proj): Linear(in_features=4096, out_features=4096, bias=False)
(k_proj): Linear(in_features=4096, out_features=4096, bias=False)
(v_proj): Linear(in_features=4096, out_features=4096, bias=False)
(o_proj): Linear(in_features=4096, out_features=4096, bias=False)
(rotary_emb): LlamaRotaryEmbedding()
)
(mlp): LlamaMLP(
(gate_proj): Linear(in_features=4096, out_features=11008, bias=False)
(down_proj): Linear(in_features=11008, out_features=4096, bias=False)
(up_proj): Linear(in_features=4096, out_features=11008, bias=False)
(act_fn): SiLUActivation()
)
(input_layernorm): LlamaRMSNorm()
(post_attention_layernorm): LlamaRMSNorm()
)
)
(norm): LlamaRMSNorm()
)
(lm_head): Linear(in_features=4096, out_features=32000, bias=False)
)
```
## Model Configuration
This model was trained using H2O LLM Studio and with the configuration in [cfg.yaml](cfg.yaml). Visit [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio) to learn how to train your own large language models.
## Disclaimer
Please read this disclaimer carefully before using the large language model provided in this repository. Your use of the model signifies your agreement to the following terms and conditions.
- Biases and Offensiveness: The large language model is trained on a diverse range of internet text data, which may contain biased, racist, offensive, or otherwise inappropriate content. By using this model, you acknowledge and accept that the generated content may sometimes exhibit biases or produce content that is offensive or inappropriate. The developers of this repository do not endorse, support, or promote any such content or viewpoints.
- Limitations: The large language model is an AI-based tool and not a human. It may produce incorrect, nonsensical, or irrelevant responses. It is the user's responsibility to critically evaluate the generated content and use it at their discretion.
- Use at Your Own Risk: Users of this large language model must assume full responsibility for any consequences that may arise from their use of the tool. The developers and contributors of this repository shall not be held liable for any damages, losses, or harm resulting from the use or misuse of the provided model.
- Ethical Considerations: Users are encouraged to use the large language model responsibly and ethically. By using this model, you agree not to use it for purposes that promote hate speech, discrimination, harassment, or any form of illegal or harmful activities.
- Reporting Issues: If you encounter any biased, offensive, or otherwise inappropriate content generated by the large language model, please report it to the repository maintainers through the provided channels. Your feedback will help improve the model and mitigate potential issues.
- Changes to this Disclaimer: The developers of this repository reserve the right to modify or update this disclaimer at any time without prior notice. It is the user's responsibility to periodically review the disclaimer to stay informed about any changes.
By using the large language model provided in this repository, you agree to accept and comply with the terms and conditions outlined in this disclaimer. If you do not agree with any part of this disclaimer, you should refrain from using the model and any content generated by it.
| CyberHarem/laura_stuart_toarumajutsunoindex | CyberHarem | 2023-09-16T18:02:37Z | 0 | 0 | null | ["art", "text-to-image", "dataset:CyberHarem/laura_stuart_toarumajutsunoindex", "license:mit", "region:us"] | text-to-image | 2023-09-16T17:50:22Z |
---
license: mit
datasets:
- CyberHarem/laura_stuart_toarumajutsunoindex
pipeline_tag: text-to-image
tags:
- art
---
# Lora of laura_stuart_toarumajutsunoindex
This model was trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion); the auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs).
The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11).
After downloading the `.pt` and `.safetensors` files for the chosen step, you need to use them together: the `.pt` file is loaded as an embedding, while the `.safetensors` file is loaded as a LoRA.
For example, to use the model from step 5100, download `5100/laura_stuart_toarumajutsunoindex.pt` as the embedding and `5100/laura_stuart_toarumajutsunoindex.safetensors` as the LoRA. Using both files together, you can generate images of the desired character.
**The best step we recommend is 5100**, with a score of 0.774. The trigger words are:
1. `laura_stuart_toarumajutsunoindex`
2. `blonde_hair, long_hair, blue_eyes`
We regret that this model is not recommended for the following groups:
1. Individuals who cannot tolerate any deviations from the original character design, even in the slightest detail.
2. Individuals whose application scenarios demand high accuracy in recreating character outfits.
3. Individuals who cannot accept the potential randomness in AI-generated images based on the Stable Diffusion algorithm.
4. Individuals who are not comfortable with the fully automated process of training character models using LoRA, or who believe that character models must be trained purely through manual operations to avoid disrespecting the characters.
5. Individuals who find the generated image content offensive to their values.
These are available steps:
| Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | pattern_5 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata |
|:---------|:----------|:----------------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------|:--------------------------------------------------|:-------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------|
| **5100** | **0.774** | [**Download**](5100/laura_stuart_toarumajutsunoindex.zip) |  |  |  |  |  |  | [<NSFW, click to see>](5100/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5100/previews/nude.png) | [<NSFW, click to see>](5100/previews/nude2.png) |  |  |
| 4760 | 0.726 | [Download](4760/laura_stuart_toarumajutsunoindex.zip) |  |  |  |  |  |  | [<NSFW, click to see>](4760/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4760/previews/nude.png) | [<NSFW, click to see>](4760/previews/nude2.png) |  |  |
| 4420 | 0.739 | [Download](4420/laura_stuart_toarumajutsunoindex.zip) |  |  |  |  |  |  | [<NSFW, click to see>](4420/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4420/previews/nude.png) | [<NSFW, click to see>](4420/previews/nude2.png) |  |  |
| 4080 | 0.694 | [Download](4080/laura_stuart_toarumajutsunoindex.zip) |  |  |  |  |  |  | [<NSFW, click to see>](4080/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4080/previews/nude.png) | [<NSFW, click to see>](4080/previews/nude2.png) |  |  |
| 3740 | 0.626 | [Download](3740/laura_stuart_toarumajutsunoindex.zip) |  |  |  |  |  |  | [<NSFW, click to see>](3740/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3740/previews/nude.png) | [<NSFW, click to see>](3740/previews/nude2.png) |  |  |
| 3400 | 0.680 | [Download](3400/laura_stuart_toarumajutsunoindex.zip) |  |  |  |  |  |  | [<NSFW, click to see>](3400/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3400/previews/nude.png) | [<NSFW, click to see>](3400/previews/nude2.png) |  |  |
| 3060 | 0.744 | [Download](3060/laura_stuart_toarumajutsunoindex.zip) |  |  |  |  |  |  | [<NSFW, click to see>](3060/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3060/previews/nude.png) | [<NSFW, click to see>](3060/previews/nude2.png) |  |  |
| 2720 | 0.741 | [Download](2720/laura_stuart_toarumajutsunoindex.zip) |  |  |  |  |  |  | [<NSFW, click to see>](2720/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2720/previews/nude.png) | [<NSFW, click to see>](2720/previews/nude2.png) |  |  |
| 2380 | 0.562 | [Download](2380/laura_stuart_toarumajutsunoindex.zip) |  |  |  |  |  |  | [<NSFW, click to see>](2380/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2380/previews/nude.png) | [<NSFW, click to see>](2380/previews/nude2.png) |  |  |
| 2040 | 0.455 | [Download](2040/laura_stuart_toarumajutsunoindex.zip) |  |  |  |  |  |  | [<NSFW, click to see>](2040/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2040/previews/nude.png) | [<NSFW, click to see>](2040/previews/nude2.png) |  |  |
| 1700 | 0.588 | [Download](1700/laura_stuart_toarumajutsunoindex.zip) |  |  |  |  |  |  | [<NSFW, click to see>](1700/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1700/previews/nude.png) | [<NSFW, click to see>](1700/previews/nude2.png) |  |  |
| 1360 | 0.487 | [Download](1360/laura_stuart_toarumajutsunoindex.zip) |  |  |  |  |  |  | [<NSFW, click to see>](1360/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1360/previews/nude.png) | [<NSFW, click to see>](1360/previews/nude2.png) |  |  |
| 1020 | 0.522 | [Download](1020/laura_stuart_toarumajutsunoindex.zip) |  |  |  |  |  |  | [<NSFW, click to see>](1020/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1020/previews/nude.png) | [<NSFW, click to see>](1020/previews/nude2.png) |  |  |
| 680 | 0.275 | [Download](680/laura_stuart_toarumajutsunoindex.zip) |  |  |  |  |  |  | [<NSFW, click to see>](680/previews/bondage.png) |  |  |  | [<NSFW, click to see>](680/previews/nude.png) | [<NSFW, click to see>](680/previews/nude2.png) |  |  |
| 340 | 0.186 | [Download](340/laura_stuart_toarumajutsunoindex.zip) |  |  |  |  |  |  | [<NSFW, click to see>](340/previews/bondage.png) |  |  |  | [<NSFW, click to see>](340/previews/nude.png) | [<NSFW, click to see>](340/previews/nude2.png) |  |  |
| rajashekarvt/finetuned-setfit-model | rajashekarvt | 2023-09-16T17:59:58Z | 3 | 0 | sentence-transformers | ["sentence-transformers", "pytorch", "mpnet", "setfit", "text-classification", "arxiv:2209.11055", "license:apache-2.0", "region:us"] | text-classification | 2023-09-16T17:42:02Z |
---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# rajashekarvt/finetuned-setfit-model
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("rajashekarvt/finetuned-setfit-model")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
| VictorGil75/workshops-setfit-model_V2 | VictorGil75 | 2023-09-16T17:44:01Z | 3 | 0 | sentence-transformers | ["sentence-transformers", "pytorch", "mpnet", "setfit", "text-classification", "arxiv:2209.11055", "license:apache-2.0", "region:us"] | text-classification | 2023-09-16T17:43:07Z |
---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# VictorGil75/workshops-setfit-model_V2
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("VictorGil75/workshops-setfit-model_V2")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
| 0bi0n3/distilhubert-finetuned-gtzan | 0bi0n3 | 2023-09-16T17:40:12Z | 141 | 0 | transformers | ["transformers", "pytorch", "hubert", "audio-classification", "generated_from_trainer", "dataset:marsyas/gtzan", "base_model:ntu-spml/distilhubert", "base_model:finetune:ntu-spml/distilhubert", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us"] | audio-classification | 2023-09-16T14:12:00Z |
---
license: apache-2.0
base_model: ntu-spml/distilhubert
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
metrics:
- accuracy
model-index:
- name: distilhubert-finetuned-gtzan
results:
- task:
name: Audio Classification
type: audio-classification
dataset:
name: GTZAN
type: marsyas/gtzan
config: all
split: train
args: all
metrics:
- name: Accuracy
type: accuracy
value: 0.84
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilhubert-finetuned-gtzan
This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the GTZAN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6253
- Accuracy: 0.84
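A minimal inference sketch with the `transformers` pipeline (the audio file path is a placeholder):
```python
from transformers import pipeline

classifier = pipeline(
    "audio-classification",
    model="0bi0n3/distilhubert-finetuned-gtzan",
)
# Returns genre labels with confidence scores; "song.wav" is a placeholder path.
print(classifier("song.wav"))
```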
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.1493 | 1.0 | 56 | 2.0306 | 0.53 |
| 1.5907 | 1.99 | 112 | 1.4564 | 0.69 |
| 1.3192 | 2.99 | 168 | 1.1955 | 0.7 |
| 1.1758 | 4.0 | 225 | 1.0190 | 0.75 |
| 0.9033 | 5.0 | 281 | 0.8936 | 0.82 |
| 0.7127 | 5.99 | 337 | 0.7668 | 0.78 |
| 0.5503 | 6.99 | 393 | 0.7165 | 0.78 |
| 0.4843 | 8.0 | 450 | 0.6483 | 0.83 |
| 0.3883 | 9.0 | 506 | 0.6441 | 0.82 |
| 0.3674 | 9.96 | 560 | 0.6253 | 0.84 |
### Framework versions
- Transformers 4.34.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
| m10813108/test | m10813108 | 2023-09-16T17:39:17Z | 0 | 0 | null | ["arxiv:2307.01952", "arxiv:2206.00364", "region:us"] | null | 2023-09-15T21:00:00Z |
# Generative Models by Stability AI

## News
**July 26, 2023**
- We are releasing two new open models with a permissive [`CreativeML Open RAIL++-M` license](model_licenses/LICENSE-SDXL1.0) (see [Inference](#inference) for file hashes):
- [SDXL-base-1.0](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0): An improved version over `SDXL-base-0.9`.
- [SDXL-refiner-1.0](https://huggingface.co/stabilityai/stable-diffusion-xl-refiner-1.0): An improved version over `SDXL-refiner-0.9`.

**July 4, 2023**
- A technical report on SDXL is now available [here](https://arxiv.org/abs/2307.01952).
**June 22, 2023**
- We are releasing two new diffusion models for research purposes:
- `SDXL-base-0.9`: The base model was trained on a variety of aspect ratios on images with resolution 1024^2. The base model uses [OpenCLIP-ViT/G](https://github.com/mlfoundations/open_clip) and [CLIP-ViT/L](https://github.com/openai/CLIP/tree/main) for text encoding whereas the refiner model only uses the OpenCLIP model.
- `SDXL-refiner-0.9`: The refiner has been trained to denoise small noise levels of high quality data and as such is not expected to work as a text-to-image model; instead, it should only be used as an image-to-image model.
If you would like to access these models for your research, please apply using one of the following links:
[SDXL-0.9-Base model](https://huggingface.co/stabilityai/stable-diffusion-xl-base-0.9), and [SDXL-0.9-Refiner](https://huggingface.co/stabilityai/stable-diffusion-xl-refiner-0.9).
Applying through either of the two links is sufficient: if you are granted access, it covers both models.
Please log in to your Hugging Face Account with your organization email to request access.
**We plan to do a full release soon (July).**
## The codebase
### General Philosophy
Modularity is king. This repo implements a config-driven approach where we build and combine submodules by calling `instantiate_from_config()` on objects defined in yaml configs. See `configs/` for many examples.
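The pattern is roughly the following (a simplified sketch of the helper, not the repository's exact implementation):
```python
import importlib

def instantiate_from_config(config: dict):
    """Build an object from {"target": "pkg.mod.Class", "params": {...}}."""
    module_name, cls_name = config["target"].rsplit(".", 1)
    cls = getattr(importlib.import_module(module_name), cls_name)
    return cls(**config.get("params", {}))

# Example: any importable class can be described declaratively in a yaml config.
layer = instantiate_from_config({
    "target": "torch.nn.Linear",
    "params": {"in_features": 4, "out_features": 2},
})
```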
### Changelog from the old `ldm` codebase
For training, we use [PyTorch Lightning](https://lightning.ai/docs/pytorch/stable/), but it should be easy to use other training wrappers around the base modules. The core diffusion model class (formerly `LatentDiffusion`, now `DiffusionEngine`) has been cleaned up:
- No more extensive subclassing! We now handle all types of conditioning inputs (vectors, sequences and spatial conditionings, and all combinations thereof) in a single class: `GeneralConditioner`, see `sgm/modules/encoders/modules.py`.
- We separate guiders (such as classifier-free guidance, see `sgm/modules/diffusionmodules/guiders.py`) from the
samplers (`sgm/modules/diffusionmodules/sampling.py`), and the samplers are independent of the model.
- We adopt the ["denoiser framework"](https://arxiv.org/abs/2206.00364) for both training and inference (most notable change is probably now the option to train continuous time models):
* Discrete times models (denoisers) are simply a special case of continuous time models (denoisers); see `sgm/modules/diffusionmodules/denoiser.py`.
* The following features are now independent: weighting of the diffusion loss function (`sgm/modules/diffusionmodules/denoiser_weighting.py`), preconditioning of the network (`sgm/modules/diffusionmodules/denoiser_scaling.py`), and sampling of noise levels during training (`sgm/modules/diffusionmodules/sigma_sampling.py`).
- Autoencoding models have also been cleaned up.
## Installation:
<a name="installation"></a>
#### 1. Clone the repo
```shell
git clone git@github.com:Stability-AI/generative-models.git
cd generative-models
```
#### 2. Setting up the virtualenv
This is assuming you have navigated to the `generative-models` root after cloning it.
**NOTE:** This is tested under `python3.8` and `python3.10`. For other python versions, you might encounter version conflicts.
**PyTorch 1.13**
```shell
# install required packages from pypi
python3 -m venv .pt13
source .pt13/bin/activate
pip3 install -r requirements/pt13.txt
```
**PyTorch 2.0**
```shell
# install required packages from pypi
python3 -m venv .pt2
source .pt2/bin/activate
pip3 install -r requirements/pt2.txt
```
#### 3. Install `sgm`
```shell
pip3 install .
```
#### 4. Install `sdata` for training
```shell
pip3 install -e git+https://github.com/Stability-AI/datapipelines.git@main#egg=sdata
```
## Packaging
This repository uses PEP 517 compliant packaging using [Hatch](https://hatch.pypa.io/latest/).
To build a distributable wheel, install `hatch` and run `hatch build`
(specifying `-t wheel` will skip building an sdist, which is not necessary).
```
pip install hatch
hatch build -t wheel
```
You will find the built package in `dist/`. You can install the wheel with `pip install dist/*.whl`.
Note that the package does **not** currently specify dependencies; you will need to install the required packages,
depending on your use case and PyTorch version, manually.
## Inference
We provide a [streamlit](https://streamlit.io/) demo for text-to-image and image-to-image sampling in `scripts/demo/sampling.py`.
We provide file hashes for the complete file as well as for only the saved tensors in the file (see [Model Spec](https://github.com/Stability-AI/ModelSpec) for a script to evaluate that).
The following models are currently supported:
- [SDXL-base-1.0](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0)
```
File Hash (sha256): 31e35c80fc4829d14f90153f4c74cd59c90b779f6afe05a74cd6120b893f7e5b
Tensordata Hash (sha256): 0xd7a9105a900fd52748f20725fe52fe52b507fd36bee4fc107b1550a26e6ee1d7
```
- [SDXL-refiner-1.0](https://huggingface.co/stabilityai/stable-diffusion-xl-refiner-1.0)
```
File Hash (sha256): 7440042bbdc8a24813002c09b6b69b64dc90fded4472613437b7f55f9b7d9c5f
Tensordata Hash (sha256): 0x1a77d21bebc4b4de78c474a90cb74dc0d2217caf4061971dbfa75ad406b75d81
```
- [SDXL-base-0.9](https://huggingface.co/stabilityai/stable-diffusion-xl-base-0.9)
- [SDXL-refiner-0.9](https://huggingface.co/stabilityai/stable-diffusion-xl-refiner-0.9)
- [SD-2.1-512](https://huggingface.co/stabilityai/stable-diffusion-2-1-base/blob/main/v2-1_512-ema-pruned.safetensors)
- [SD-2.1-768](https://huggingface.co/stabilityai/stable-diffusion-2-1/blob/main/v2-1_768-ema-pruned.safetensors)
**Weights for SDXL**:
**SDXL-1.0:**
The weights of SDXL-1.0 are available (subject to a [`CreativeML Open RAIL++-M` license](model_licenses/LICENSE-SDXL1.0)) here:
- base model: https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0/
- refiner model: https://huggingface.co/stabilityai/stable-diffusion-xl-refiner-1.0/
**SDXL-0.9:**
The weights of SDXL-0.9 are available and subject to a [research license](model_licenses/LICENSE-SDXL0.9).
If you would like to access these models for your research, please apply using one of the following links:
[SDXL-base-0.9 model](https://huggingface.co/stabilityai/stable-diffusion-xl-base-0.9), and [SDXL-refiner-0.9](https://huggingface.co/stabilityai/stable-diffusion-xl-refiner-0.9).
Applying through either of the two links is sufficient: if you are granted access, it covers both models.
Please log in to your Hugging Face Account with your organization email to request access.
After obtaining the weights, place them into `checkpoints/`.
Next, start the demo using
```
streamlit run scripts/demo/sampling.py --server.port <your_port>
```
### Invisible Watermark Detection
Images generated with our code use the
[invisible-watermark](https://github.com/ShieldMnt/invisible-watermark/)
library to embed an invisible watermark into the model output. We also provide
a script to easily detect that watermark. Please note that this watermark is
not the same as in previous Stable Diffusion 1.x/2.x versions.
To run the script you need to either have a working installation as above or
try an _experimental_ import using only a minimal set of packages:
```bash
python -m venv .detect
source .detect/bin/activate
pip install "numpy>=1.17" "PyWavelets>=1.1.1" "opencv-python>=4.1.0.25"
pip install --no-deps invisible-watermark
```
With a working installation as above, the script is usable in the following ways
(don't forget to activate your virtual environment beforehand, e.g. `source .pt13/bin/activate`):
```bash
# test a single file
python scripts/demo/detect.py <your filename here>
# test multiple files at once
python scripts/demo/detect.py <filename 1> <filename 2> ... <filename n>
# test all files in a specific folder
python scripts/demo/detect.py <your folder name here>/*
```
## Training:
We are providing example training configs in `configs/example_training`. To launch a training, run
```
python main.py --base configs/<config1.yaml> configs/<config2.yaml>
```
where configs are merged from left to right (later configs overwrite the same values).
This can be used to combine model, training and data configs. However, all of them can also be
defined in a single config. For example, to run a class-conditional pixel-based diffusion model training on MNIST,
run
```bash
python main.py --base configs/example_training/toy/mnist_cond.yaml
```
**NOTE 1:** Using the non-toy-dataset configs `configs/example_training/imagenet-f8_cond.yaml`, `configs/example_training/txt2img-clipl.yaml` and `configs/example_training/txt2img-clipl-legacy-ucg-training.yaml` for training will require edits depending on the dataset used (which is expected to be stored as tar files in the [webdataset format](https://github.com/webdataset/webdataset)). To find the parts which have to be adapted, search for comments containing `USER:` in the respective config.
**NOTE 2:** This repository supports both `pytorch1.13` and `pytorch2` for training generative models. However, for autoencoder training, as e.g. in `configs/example_training/autoencoder/kl-f4/imagenet-attnfree-logvar.yaml`, only `pytorch1.13` is supported.
**NOTE 3:** Training latent generative models (as e.g. in `configs/example_training/imagenet-f8_cond.yaml`) requires retrieving the checkpoint from [Hugging Face](https://huggingface.co/stabilityai/sdxl-vae/tree/main) and replacing the `CKPT_PATH` placeholder in [this line](configs/example_training/imagenet-f8_cond.yaml#81). The same is to be done for the provided text-to-image configs.
### Building New Diffusion Models
#### Conditioner
The `GeneralConditioner` is configured through the `conditioner_config`. Its only attribute is `emb_models`, a list of
different embedders (all inherited from `AbstractEmbModel`) that are used to condition the generative model.
All embedders should define whether or not they are trainable (`is_trainable`, default `False`), the classifier-free
guidance dropout rate to use (`ucg_rate`, default `0`), and an input key (`input_key`), for example, `txt` for text-conditioning or `cls` for class-conditioning.
When computing conditionings, the embedder will get `batch[input_key]` as input.
We currently support two- to four-dimensional conditionings, and conditionings from different embedders are concatenated
appropriately.
Note that the order of the embedders in the `conditioner_config` is important.
#### Network
The neural network is set through the `network_config`. This used to be called `unet_config`, which is not general
enough as we plan to experiment with transformer-based diffusion backbones.
#### Loss
The loss is configured through `loss_config`. For standard diffusion model training, you will have to set `sigma_sampler_config`.
#### Sampler config
As discussed above, the sampler is independent of the model. In the `sampler_config`, we set the type of numerical
solver, number of steps, type of discretization, as well as, for example, guidance wrappers for classifier-free
guidance.
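As a sketch, a `sampler_config` for classifier-free guidance sampling might look like the following (the class paths are assumptions; the demo scripts show the exact ones used in practice):
```python
# Hypothetical sampler_config: Euler EDM solver, 40 steps, CFG scale 7.5.
sampler_config = {
    "target": "sgm.modules.diffusionmodules.sampling.EulerEDMSampler",
    "params": {
        "num_steps": 40,
        "discretization_config": {
            "target": "sgm.modules.diffusionmodules.discretizer.LegacyDDPMDiscretization",
        },
        "guider_config": {
            "target": "sgm.modules.diffusionmodules.guiders.VanillaCFG",
            "params": {"scale": 7.5},
        },
    },
}
```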
### Dataset Handling
For large-scale training we recommend using the data pipelines from our [data pipelines](https://github.com/Stability-AI/datapipelines) project. The project is contained in the requirements and automatically included when following the steps from the [Installation section](#installation).
Small map-style datasets should be defined here in the repository (e.g., MNIST, CIFAR-10, ...), and return a dict of
data keys/values,
e.g.,
```python
example = {"jpg": x, # this is a tensor -1...1 chw
"txt": "a beautiful image"}
```
where we expect images in -1...1, channel-first format.
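A minimal map-style dataset following this convention might look like the sketch below (the class itself is illustrative; only the returned dict format matters):
```python
import torch
from torch.utils.data import Dataset

class ToyImageTextDataset(Dataset):
    """Illustrative map-style dataset returning dicts in the expected format."""

    def __init__(self, n: int = 1000, size: int = 32):
        self.n, self.size = n, size

    def __len__(self):
        return self.n

    def __getitem__(self, idx):
        # Channel-first image with values scaled to [-1, 1]
        x = torch.rand(3, self.size, self.size) * 2.0 - 1.0
        return {"jpg": x, "txt": "a beautiful image"}
```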
---
language:
- "List of ISO 639-1 code for your language"
- lang1
- lang2
thumbnail: "url to a thumbnail used in social sharing"
tags:
- tag1
- tag2
license: "any valid license identifier"
datasets:
- dataset1
- dataset2
metrics:
- metric1
- metric2
---
|
andreipb/roberta-poetry-religion-crpo
|
andreipb
| 2023-09-16T17:31:51Z | 108 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"roberta",
"fill-mask",
"en",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-09-13T15:18:08Z |
---
license: mit
language:
- en
pipeline_tag: fill-mask
library_name: transformers
widget:
- text: "This morning, the CEO was <mask>."
example_title: "Example 1"
- text: "Yesterday, all the students were <mask> in the park."
example_title: "Example 2"
- text: "All the children seemed <mask>."
example_title: "Example 3"
- text: "I opened the door and found a <mask> behind it."
example_title: "Example 4"
- text: "We went to see the <mask> movie."
example_title: "Example 5"
---
# roberta-poetry-religion-crpo
This model is based on the [RoBERTa base model](https://huggingface.co/roberta-base) (125M parameters)
fine-tuned for 20 epochs on a poetry dataset of 38 MB. This dataset was extracted from
the [Gutenberg Poetry Corpus](https://github.com/aparrish/gutenberg-poetry-corpus) using an automatic classifier
to select poems related to the topic of **religion and spirituality**.
The model replaces a masked word, indicated by the `<mask>` tag, with a word associated with **religion and spirituality**, while preserving fluency.
Caution: the topic (here, **religion and spirituality**) only biases the choice of words with respect to the base model, so do not expect to find
only words strongly associated with this topic.
This model was trained by [Teo Ferrari](https://www.linkedin.com/in/teo-ferrari-0a4009176/)
as part of his Bachelor thesis at [HEIG-VD](https://gaps.heig-vd.ch/public/diplome/rapports.php?id=6763),
supervised by [Andrei Popescu-Belis](http://iict-space.heig-vd.ch/apu/).
The model is described in "[GPoeT: a Language Model Trained for Rhyme Generation on Synthetic Data](https://aclanthology.org/2023.latechclfl-1.2/)"
and is used in the [CR-PO](https://github.com/heig-iict-ida/crpo) system for [interactive poem generation](https://aclanthology.org/2022.lrec-1.377),
along with several other models for specific topics or emotions.
|
CyberHarem/mimura_kanako_idolmastercinderellagirls
|
CyberHarem
| 2023-09-16T17:18:12Z | 0 | 1 | null |
[
"art",
"text-to-image",
"dataset:CyberHarem/mimura_kanako_idolmastercinderellagirls",
"license:mit",
"region:us"
] |
text-to-image
| 2023-09-16T16:59:50Z |
---
license: mit
datasets:
- CyberHarem/mimura_kanako_idolmastercinderellagirls
pipeline_tag: text-to-image
tags:
- art
---
# Lora of mimura_kanako_idolmastercinderellagirls
This model is trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion). And the auto-training framework is maintained by [DeepGHS Team](https://huggingface.co/deepghs).
The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11).
After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora.
For example, if you want to use the model from step 7280, you need to download `7280/mimura_kanako_idolmastercinderellagirls.pt` as the embedding and `7280/mimura_kanako_idolmastercinderellagirls.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters.
**The best step we recommend is 7280**, with the score of 0.984. The trigger words are:
1. `mimura_kanako_idolmastercinderellagirls`
2. `brown_hair, short_hair, brown_eyes, blush, hair_ornament, breasts, flower, hair_flower, smile, large_breasts, open_mouth`
For the following groups, use of this model is not recommended, and we express our regret:
1. Individuals who cannot tolerate any deviations from the original character design, even in the slightest detail.
2. Individuals whose application scenarios demand high accuracy in recreating character outfits.
3. Individuals who cannot accept the potential randomness in AI-generated images based on the Stable Diffusion algorithm.
4. Individuals who are not comfortable with the fully automated process of training character models using LoRA, or those who believe that training character models must be done purely through manual operations to avoid disrespecting the characters.
5. Individuals who find the generated image content offensive to their values.
These are available steps:
| Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | pattern_5 | pattern_6 | pattern_7 | pattern_8 | pattern_9 | pattern_10 | pattern_11 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata |
|:---------|:----------|:-----------------------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:----------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:----------------------------------------------------|:-----------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:--------------------------------------------------|:-------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------|
| 7800 | 0.935 | [Download](7800/mimura_kanako_idolmastercinderellagirls.zip) |  |  |  |  | [<NSFW, click to see>](7800/previews/pattern_5.png) |  |  | [<NSFW, click to see>](7800/previews/pattern_8.png) |  |  |  | [<NSFW, click to see>](7800/previews/bikini.png) | [<NSFW, click to see>](7800/previews/bondage.png) |  |  |  | [<NSFW, click to see>](7800/previews/nude.png) | [<NSFW, click to see>](7800/previews/nude2.png) |  |  |
| **7280** | **0.984** | [**Download**](7280/mimura_kanako_idolmastercinderellagirls.zip) |  |  |  |  | [<NSFW, click to see>](7280/previews/pattern_5.png) |  |  | [<NSFW, click to see>](7280/previews/pattern_8.png) |  |  |  | [<NSFW, click to see>](7280/previews/bikini.png) | [<NSFW, click to see>](7280/previews/bondage.png) |  |  |  | [<NSFW, click to see>](7280/previews/nude.png) | [<NSFW, click to see>](7280/previews/nude2.png) |  |  |
| 6760 | 0.916 | [Download](6760/mimura_kanako_idolmastercinderellagirls.zip) |  |  |  |  | [<NSFW, click to see>](6760/previews/pattern_5.png) |  |  | [<NSFW, click to see>](6760/previews/pattern_8.png) |  |  |  | [<NSFW, click to see>](6760/previews/bikini.png) | [<NSFW, click to see>](6760/previews/bondage.png) |  |  |  | [<NSFW, click to see>](6760/previews/nude.png) | [<NSFW, click to see>](6760/previews/nude2.png) |  |  |
| 6240 | 0.970 | [Download](6240/mimura_kanako_idolmastercinderellagirls.zip) |  |  |  |  | [<NSFW, click to see>](6240/previews/pattern_5.png) |  |  | [<NSFW, click to see>](6240/previews/pattern_8.png) |  |  |  | [<NSFW, click to see>](6240/previews/bikini.png) | [<NSFW, click to see>](6240/previews/bondage.png) |  |  |  | [<NSFW, click to see>](6240/previews/nude.png) | [<NSFW, click to see>](6240/previews/nude2.png) |  |  |
| 5720 | 0.967 | [Download](5720/mimura_kanako_idolmastercinderellagirls.zip) |  |  |  |  | [<NSFW, click to see>](5720/previews/pattern_5.png) |  |  | [<NSFW, click to see>](5720/previews/pattern_8.png) |  |  |  | [<NSFW, click to see>](5720/previews/bikini.png) | [<NSFW, click to see>](5720/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5720/previews/nude.png) | [<NSFW, click to see>](5720/previews/nude2.png) |  |  |
| 5200 | 0.919 | [Download](5200/mimura_kanako_idolmastercinderellagirls.zip) |  |  |  |  | [<NSFW, click to see>](5200/previews/pattern_5.png) |  |  | [<NSFW, click to see>](5200/previews/pattern_8.png) |  |  |  | [<NSFW, click to see>](5200/previews/bikini.png) | [<NSFW, click to see>](5200/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5200/previews/nude.png) | [<NSFW, click to see>](5200/previews/nude2.png) |  |  |
| 4680 | 0.966 | [Download](4680/mimura_kanako_idolmastercinderellagirls.zip) |  |  |  |  | [<NSFW, click to see>](4680/previews/pattern_5.png) |  |  | [<NSFW, click to see>](4680/previews/pattern_8.png) |  |  |  | [<NSFW, click to see>](4680/previews/bikini.png) | [<NSFW, click to see>](4680/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4680/previews/nude.png) | [<NSFW, click to see>](4680/previews/nude2.png) |  |  |
| 4160 | 0.970 | [Download](4160/mimura_kanako_idolmastercinderellagirls.zip) |  |  |  |  | [<NSFW, click to see>](4160/previews/pattern_5.png) |  |  | [<NSFW, click to see>](4160/previews/pattern_8.png) |  |  |  | [<NSFW, click to see>](4160/previews/bikini.png) | [<NSFW, click to see>](4160/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4160/previews/nude.png) | [<NSFW, click to see>](4160/previews/nude2.png) |  |  |
| 3640 | 0.953 | [Download](3640/mimura_kanako_idolmastercinderellagirls.zip) |  |  |  |  | [<NSFW, click to see>](3640/previews/pattern_5.png) |  |  | [<NSFW, click to see>](3640/previews/pattern_8.png) |  |  |  | [<NSFW, click to see>](3640/previews/bikini.png) | [<NSFW, click to see>](3640/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3640/previews/nude.png) | [<NSFW, click to see>](3640/previews/nude2.png) |  |  |
| 3120 | 0.953 | [Download](3120/mimura_kanako_idolmastercinderellagirls.zip) |  |  |  |  | [<NSFW, click to see>](3120/previews/pattern_5.png) |  |  | [<NSFW, click to see>](3120/previews/pattern_8.png) |  |  |  | [<NSFW, click to see>](3120/previews/bikini.png) | [<NSFW, click to see>](3120/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3120/previews/nude.png) | [<NSFW, click to see>](3120/previews/nude2.png) |  |  |
| 2600 | 0.957 | [Download](2600/mimura_kanako_idolmastercinderellagirls.zip) |  |  |  |  | [<NSFW, click to see>](2600/previews/pattern_5.png) |  |  | [<NSFW, click to see>](2600/previews/pattern_8.png) |  |  |  | [<NSFW, click to see>](2600/previews/bikini.png) | [<NSFW, click to see>](2600/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2600/previews/nude.png) | [<NSFW, click to see>](2600/previews/nude2.png) |  |  |
| 2080 | 0.874 | [Download](2080/mimura_kanako_idolmastercinderellagirls.zip) |  |  |  |  | [<NSFW, click to see>](2080/previews/pattern_5.png) |  |  | [<NSFW, click to see>](2080/previews/pattern_8.png) |  |  |  | [<NSFW, click to see>](2080/previews/bikini.png) | [<NSFW, click to see>](2080/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2080/previews/nude.png) | [<NSFW, click to see>](2080/previews/nude2.png) |  |  |
| 1560 | 0.870 | [Download](1560/mimura_kanako_idolmastercinderellagirls.zip) |  |  |  |  | [<NSFW, click to see>](1560/previews/pattern_5.png) |  |  | [<NSFW, click to see>](1560/previews/pattern_8.png) |  |  |  | [<NSFW, click to see>](1560/previews/bikini.png) | [<NSFW, click to see>](1560/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1560/previews/nude.png) | [<NSFW, click to see>](1560/previews/nude2.png) |  |  |
| 1040 | 0.810 | [Download](1040/mimura_kanako_idolmastercinderellagirls.zip) |  |  |  |  | [<NSFW, click to see>](1040/previews/pattern_5.png) |  |  | [<NSFW, click to see>](1040/previews/pattern_8.png) |  |  |  | [<NSFW, click to see>](1040/previews/bikini.png) | [<NSFW, click to see>](1040/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1040/previews/nude.png) | [<NSFW, click to see>](1040/previews/nude2.png) |  |  |
| 520 | 0.711 | [Download](520/mimura_kanako_idolmastercinderellagirls.zip) |  |  |  |  | [<NSFW, click to see>](520/previews/pattern_5.png) |  |  | [<NSFW, click to see>](520/previews/pattern_8.png) |  |  |  | [<NSFW, click to see>](520/previews/bikini.png) | [<NSFW, click to see>](520/previews/bondage.png) |  |  |  | [<NSFW, click to see>](520/previews/nude.png) | [<NSFW, click to see>](520/previews/nude2.png) |  |  |
|
IlyaGusev/saiga2_7b_lora
|
IlyaGusev
| 2023-09-16T16:53:55Z | 0 | 31 | null |
[
"conversational",
"ru",
"dataset:IlyaGusev/ru_turbo_alpaca",
"dataset:IlyaGusev/ru_turbo_saiga",
"dataset:IlyaGusev/ru_sharegpt_cleaned",
"dataset:IlyaGusev/oasst1_ru_main_branch",
"dataset:IlyaGusev/ru_turbo_alpaca_evol_instruct",
"dataset:lksy/ru_instruct_gpt4",
"license:cc-by-4.0",
"region:us"
] |
text-generation
| 2023-07-21T08:16:20Z |
---
datasets:
- IlyaGusev/ru_turbo_alpaca
- IlyaGusev/ru_turbo_saiga
- IlyaGusev/ru_sharegpt_cleaned
- IlyaGusev/oasst1_ru_main_branch
- IlyaGusev/ru_turbo_alpaca_evol_instruct
- lksy/ru_instruct_gpt4
language:
- ru
pipeline_tag: conversational
license: cc-by-4.0
---
# Saiga2 7B, Russian LLaMA2-based chatbot
Based on [LLaMA-2 7B HF](https://huggingface.co/meta-llama/Llama-2-7b-hf).
This is an adapter-only version.
Llama.cpp version: [link](https://huggingface.co/IlyaGusev/saiga2_7b_ggml).
Colab: [link](https://colab.research.google.com/drive/1Iw2TROpW0EjagyckyP-I-OWhLnLr7v2k).
Training code: [link](https://github.com/IlyaGusev/rulm/tree/master/self_instruct).
**WARNING 1**: Run with the development version of `transformers` and `peft`!
**WARNING 2**: Avoid using V100 (in Colab, for example). Outputs are much worse in this case.
**WARNING 3**: You can use the [Llama-2-7B-fp16](https://huggingface.co/TheBloke/Llama-2-7B-fp16) base model instead.
```python
import torch

from peft import PeftModel, PeftConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig
MODEL_NAME = "IlyaGusev/saiga2_7b_lora"
DEFAULT_MESSAGE_TEMPLATE = "<s>{role}\n{content}</s>\n"
DEFAULT_SYSTEM_PROMPT = "Ты — Сайга, русскоязычный автоматический ассистент. Ты разговариваешь с людьми и помогаешь им."
class Conversation:
def __init__(
self,
message_template=DEFAULT_MESSAGE_TEMPLATE,
system_prompt=DEFAULT_SYSTEM_PROMPT,
start_token_id=1,
bot_token_id=9225
):
self.message_template = message_template
self.start_token_id = start_token_id
self.bot_token_id = bot_token_id
self.messages = [{
"role": "system",
"content": system_prompt
}]
def get_start_token_id(self):
return self.start_token_id
def get_bot_token_id(self):
return self.bot_token_id
def add_user_message(self, message):
self.messages.append({
"role": "user",
"content": message
})
def add_bot_message(self, message):
self.messages.append({
"role": "bot",
"content": message
})
def get_prompt(self, tokenizer):
final_text = ""
for message in self.messages:
message_text = self.message_template.format(**message)
final_text += message_text
final_text += tokenizer.decode([self.start_token_id, self.bot_token_id])
return final_text.strip()
def generate(model, tokenizer, prompt, generation_config):
data = tokenizer(prompt, return_tensors="pt")
data = {k: v.to(model.device) for k, v in data.items()}
output_ids = model.generate(
**data,
generation_config=generation_config
)[0]
output_ids = output_ids[len(data["input_ids"][0]):]
output = tokenizer.decode(output_ids, skip_special_tokens=True)
return output.strip()
config = PeftConfig.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(
config.base_model_name_or_path,
load_in_8bit=True,
torch_dtype=torch.float16,
device_map="auto"
)
model = PeftModel.from_pretrained(
model,
MODEL_NAME,
torch_dtype=torch.float16
)
model.eval()
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME, use_fast=False)
generation_config = GenerationConfig.from_pretrained(MODEL_NAME)
print(generation_config)
inputs = ["Почему трава зеленая?", "Сочини длинный рассказ, обязательно упоминая следующие объекты. Дано: Таня, мяч"]
for inp in inputs:
conversation = Conversation()
conversation.add_user_message(inp)
prompt = conversation.get_prompt(tokenizer)
output = generate(model, tokenizer, prompt, generation_config)
print(inp)
print(output)
print()
print("==============================")
print()
```
Examples:
```
User: Почему трава зеленая?
Saiga: Зеленый цвет травы связан с наличием хлорофилла в её клетках. Хлорофилл - это пигмент, который обеспечивает защиту от ультрафиолетового излучения солнца. В процессе фотосинтеза растения используют энергию света для создания кислорода и органических веществ, таких как сахара. Ультрафиолетовое излучение может повредить хлорофиллы, поэтому они окрашиваются в зеленый цвет, чтобы предотвратить их повреждение.
```
```
User: Сочини длинный рассказ, обязательно упоминая следующие объекты. Дано: Таня, мяч
Saiga: Жила-была девочка по имени Таня. Она была очень активной и любила играть в футбол со своими друзьями. Однажды она купила новый мяч, который был еще более красивым и качественным, чем все остальные. Он был настолько же красочен, что казалось, будто он сам играет!
Однако, когда Танечка начала играть с ним, то заметила, что мяч не так уж и легкий, как она думала. Вместо того чтобы лететь высоко вверх, он падал на землю, и Таня чувствовала себя очень грустной. Но она решила, что это просто ее плохая игровая форма, и продолжила играть.
Но уже через несколько минут после того, как Таня бросала мяч в воздух, он завис над головой и начал медленно кружить вокруг. Таня стала беспокойна и испуганна - что случилось? Как ее мяч стал таким тяжелым?
В этот момент, Таня услышала шум из ближайшего парка. Она поспешно отправилась туда, чтобы узнать, что происходит. Когда она подошла к парку, она увидела маленького мальчика, который сидел на дереве и плевал на свой мяч.
"Что ты делаешь?" - спросила Таня. Мальчик повернулся к ней и показал ей свой мяч. "Этот мяч мне нравится больше всего", - говорил он. "Я хочу, чтобы он был такой же, как твой".
Таня поняла, что её мяч перестал быть для нее только игровым предметом, но и символом дружбы и общения. Она решила вернуться домой и взять его с собой.
Когда они пришли домой, Таня дала своему другу мяч и объяснила ему, что теперь их два мяча будут лучше вместе, чем один. Именно так, благодаря дружбе и сотрудничеству, Таня и ее новый друг стали лучшими командами. Их мячи стали играть вместе, и каждый из них становился еще лучше и быстрее благодаря друг другу.
Итак, Таня научилась, что настоящая дружба - это не только общительность и веселость, но и помощь и поддержка друг друга в трудных ситуациях. Благодаря своей новому другу, Таня научилась ценить своих товарищей и находить радость в том, что можно учиться у других людей.
```
v2:
- context extended to 4096
- wandb [link](https://wandb.ai/ilyagusev/rulm_self_instruct/runs/ktxsy0v2)
- saiga2_7b_v2 vs saiga2_7b: 85-23-68
v1:
- dataset code revision 7712a061d993f61c49b1e2d992e893c48acb3a87
- wandb [link](https://wandb.ai/ilyagusev/rulm_self_instruct/runs/innzu7g8)
- 7 datasets: ru_turbo_alpaca, ru_turbo_saiga, ru_sharegpt_cleaned, oasst1_ru_main_branch, gpt_roleplay_realm, ru_turbo_alpaca_evol_instruct (iteration 1/2), ru_instruct_gpt4
- Datasets merging script: [create_chat_set.py](https://github.com/IlyaGusev/rulm/blob/e4238fd9a196405b566a2d5838ab44b7a0f4dc31/self_instruct/src/data_processing/create_chat_set.py)
- saiga7b_v5 vs saiga2_7b: 78-8-90
|
117jahfar/my-cats-new
|
117jahfar
| 2023-09-16T16:42:49Z | 1 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-09-16T16:29:48Z |
---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### My-cats-new Dreambooth model trained by 117jahfar following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: MESCOE223
Sample pictures of this concept:




|
prabhatsingh/bert-finetuned-squad
|
prabhatsingh
| 2023-09-16T16:42:39Z | 122 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-09-16T15:22:30Z |
---
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bert-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-squad
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
Unstoppable0/modexla
|
Unstoppable0
| 2023-09-16T16:39:54Z | 0 | 0 | null |
[
"arxiv:2211.06679",
"region:us"
] | null | 2023-09-16T16:08:11Z |
# Stable Diffusion web UI
A browser interface based on Gradio library for Stable Diffusion.

## Features
[Detailed feature showcase with images](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Features):
- Original txt2img and img2img modes
- One click install and run script (but you still must install python and git)
- Outpainting
- Inpainting
- Color Sketch
- Prompt Matrix
- Stable Diffusion Upscale
- Attention, specify parts of text that the model should pay more attention to
- a man in a `((tuxedo))` - will pay more attention to tuxedo
- a man in a `(tuxedo:1.21)` - alternative syntax
- select text and press `Ctrl+Up` or `Ctrl+Down` (or `Command+Up` or `Command+Down` if you're on macOS) to automatically adjust attention to selected text (code contributed by anonymous user)
- Loopback, run img2img processing multiple times
- X/Y/Z plot, a way to draw a 3 dimensional plot of images with different parameters
- Textual Inversion
- have as many embeddings as you want and use any names you like for them
- use multiple embeddings with different numbers of vectors per token
- works with half precision floating point numbers
- train embeddings on 8GB (also reports of 6GB working)
- Extras tab with:
- GFPGAN, neural network that fixes faces
- CodeFormer, face restoration tool as an alternative to GFPGAN
- RealESRGAN, neural network upscaler
- ESRGAN, neural network upscaler with a lot of third party models
- SwinIR and Swin2SR ([see here](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/2092)), neural network upscalers
- LDSR, Latent diffusion super resolution upscaling
- Resizing aspect ratio options
- Sampling method selection
- Adjust sampler eta values (noise multiplier)
- More advanced noise setting options
- Interrupt processing at any time
- 4GB video card support (also reports of 2GB working)
- Correct seeds for batches
- Live prompt token length validation
- Generation parameters
- parameters you used to generate images are saved with that image
- in PNG chunks for PNG, in EXIF for JPEG
- can drag the image to PNG info tab to restore generation parameters and automatically copy them into UI
- can be disabled in settings
- drag and drop an image/text-parameters to promptbox
- Read Generation Parameters Button, loads parameters in promptbox to UI
- Settings page
- Running arbitrary python code from UI (must run with `--allow-code` to enable)
- Mouseover hints for most UI elements
- Possible to change default/min/max/step values for UI elements via text config
- Tiling support, a checkbox to create images that can be tiled like textures
- Progress bar and live image generation preview
- Can use a separate neural network to produce previews with almost no VRAM or compute requirement
- Negative prompt, an extra text field that allows you to list what you don't want to see in generated image
- Styles, a way to save part of prompt and easily apply them via dropdown later
- Variations, a way to generate same image but with tiny differences
- Seed resizing, a way to generate same image but at slightly different resolution
- CLIP interrogator, a button that tries to guess prompt from an image
- Prompt Editing, a way to change prompt mid-generation, say to start making a watermelon and switch to anime girl midway
- Batch Processing, process a group of files using img2img
- Img2img Alternative, reverse Euler method of cross attention control
- Highres Fix, a convenience option to produce high resolution pictures in one click without usual distortions
- Reloading checkpoints on the fly
- Checkpoint Merger, a tab that allows you to merge up to 3 checkpoints into one
- [Custom scripts](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Custom-Scripts) with many extensions from community
- [Composable-Diffusion](https://energy-based-model.github.io/Compositional-Visual-Generation-with-Composable-Diffusion-Models/), a way to use multiple prompts at once
- separate prompts using uppercase `AND`
- also supports weights for prompts: `a cat :1.2 AND a dog AND a penguin :2.2`
- No token limit for prompts (original stable diffusion lets you use up to 75 tokens)
- DeepDanbooru integration, creates danbooru style tags for anime prompts
- [xformers](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Xformers), major speed increase for select cards: (add `--xformers` to commandline args)
- via extension: [History tab](https://github.com/yfszzx/stable-diffusion-webui-images-browser): view, direct and delete images conveniently within the UI
- Generate forever option
- Training tab
- hypernetworks and embeddings options
- Preprocessing images: cropping, mirroring, autotagging using BLIP or deepdanbooru (for anime)
- Clip skip
- Hypernetworks
- Loras (same as Hypernetworks but more pretty)
- A separate UI where you can choose, with preview, which embeddings, hypernetworks or Loras to add to your prompt
- Can select to load a different VAE from settings screen
- Estimated completion time in progress bar
- API
- Support for dedicated [inpainting model](https://github.com/runwayml/stable-diffusion#inpainting-with-stable-diffusion) by RunwayML
- via extension: [Aesthetic Gradients](https://github.com/AUTOMATIC1111/stable-diffusion-webui-aesthetic-gradients), a way to generate images with a specific aesthetic by using clip images embeds (implementation of [https://github.com/vicgalle/stable-diffusion-aesthetic-gradients](https://github.com/vicgalle/stable-diffusion-aesthetic-gradients))
- [Stable Diffusion 2.0](https://github.com/Stability-AI/stablediffusion) support - see [wiki](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Features#stable-diffusion-20) for instructions
- [Alt-Diffusion](https://arxiv.org/abs/2211.06679) support - see [wiki](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Features#alt-diffusion) for instructions
- Now without any bad letters!
- Load checkpoints in safetensors format
- Eased resolution restriction: generated image's dimension must be a multiple of 8 rather than 64
- Now with a license!
- Reorder elements in the UI from settings screen
## Installation and Running
Make sure the required [dependencies](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Dependencies) are met and follow the instructions available for:
- [NVidia](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Install-and-Run-on-NVidia-GPUs) (recommended)
- [AMD](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Install-and-Run-on-AMD-GPUs) GPUs.
- [Intel CPUs, Intel GPUs (both integrated and discrete)](https://github.com/openvinotoolkit/stable-diffusion-webui/wiki/Installation-on-Intel-Silicon) (external wiki page)
Alternatively, use online services (like Google Colab):
- [List of Online Services](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Online-Services)
### Installation on Windows 10/11 with NVidia-GPUs using release package
1. Download `sd.webui.zip` from [v1.0.0-pre](https://github.com/AUTOMATIC1111/stable-diffusion-webui/releases/tag/v1.0.0-pre) and extract its contents.
2. Run `update.bat`.
3. Run `run.bat`.
> For more details see [Install-and-Run-on-NVidia-GPUs](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Install-and-Run-on-NVidia-GPUs)
### Automatic Installation on Windows
1. Install [Python 3.10.6](https://www.python.org/downloads/release/python-3106/) (newer versions of Python do not support torch), checking "Add Python to PATH".
2. Install [git](https://git-scm.com/download/win).
3. Download the stable-diffusion-webui repository, for example by running `git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git`.
4. Run `webui-user.bat` from Windows Explorer as normal, non-administrator, user.
### Automatic Installation on Linux
1. Install the dependencies:
```bash
# Debian-based:
sudo apt install wget git python3 python3-venv libgl1 libglib2.0-0
# Red Hat-based:
sudo dnf install wget git python3
# Arch-based:
sudo pacman -S wget git python3
```
2. Navigate to the directory you would like the webui to be installed in and execute the following command:
```bash
wget -q https://raw.githubusercontent.com/AUTOMATIC1111/stable-diffusion-webui/master/webui.sh
```
3. Run `webui.sh`.
4. Check `webui-user.sh` for options.
### Installation on Apple Silicon
Find the instructions [here](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Installation-on-Apple-Silicon).
## Contributing
Here's how to add code to this repo: [Contributing](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Contributing)
## Documentation
The documentation was moved from this README over to the project's [wiki](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki).
For the purposes of getting Google and other search engines to crawl the wiki, here's a link to the (not for humans) [crawlable wiki](https://github-wiki-see.page/m/AUTOMATIC1111/stable-diffusion-webui/wiki).
## Credits
Licenses for borrowed code can be found in `Settings -> Licenses` screen, and also in `html/licenses.html` file.
- Stable Diffusion - https://github.com/CompVis/stable-diffusion, https://github.com/CompVis/taming-transformers
- k-diffusion - https://github.com/crowsonkb/k-diffusion.git
- GFPGAN - https://github.com/TencentARC/GFPGAN.git
- CodeFormer - https://github.com/sczhou/CodeFormer
- ESRGAN - https://github.com/xinntao/ESRGAN
- SwinIR - https://github.com/JingyunLiang/SwinIR
- Swin2SR - https://github.com/mv-lab/swin2sr
- LDSR - https://github.com/Hafiidz/latent-diffusion
- MiDaS - https://github.com/isl-org/MiDaS
- Ideas for optimizations - https://github.com/basujindal/stable-diffusion
- Cross Attention layer optimization - Doggettx - https://github.com/Doggettx/stable-diffusion, original idea for prompt editing.
- Cross Attention layer optimization - InvokeAI, lstein - https://github.com/invoke-ai/InvokeAI (originally http://github.com/lstein/stable-diffusion)
- Sub-quadratic Cross Attention layer optimization - Alex Birch (https://github.com/Birch-san/diffusers/pull/1), Amin Rezaei (https://github.com/AminRezaei0x443/memory-efficient-attention)
- Textual Inversion - Rinon Gal - https://github.com/rinongal/textual_inversion (we're not using his code, but we are using his ideas).
- Idea for SD upscale - https://github.com/jquesnelle/txt2imghd
- Noise generation for outpainting mk2 - https://github.com/parlance-zz/g-diffuser-bot
- CLIP interrogator idea and borrowing some code - https://github.com/pharmapsychotic/clip-interrogator
- Idea for Composable Diffusion - https://github.com/energy-based-model/Compositional-Visual-Generation-with-Composable-Diffusion-Models-PyTorch
- xformers - https://github.com/facebookresearch/xformers
- DeepDanbooru - interrogator for anime diffusers https://github.com/KichangKim/DeepDanbooru
- Sampling in float32 precision from a float16 UNet - marunine for the idea, Birch-san for the example Diffusers implementation (https://github.com/Birch-san/diffusers-play/tree/92feee6)
- Instruct pix2pix - Tim Brooks (star), Aleksander Holynski (star), Alexei A. Efros (no star) - https://github.com/timothybrooks/instruct-pix2pix
- Security advice - RyotaK
- UniPC sampler - Wenliang Zhao - https://github.com/wl-zhao/UniPC
- TAESD - Ollin Boer Bohan - https://github.com/madebyollin/taesd
- LyCORIS - KohakuBlueleaf
- Restart sampling - lambertae - https://github.com/Newbeeer/diffusion_restart_sampling
- Initial Gradio script - posted on 4chan by an Anonymous user. Thank you Anonymous user.
- (You)
|
cgijoe/ppo-LunarLander-v2
|
cgijoe
| 2023-09-16T16:34:23Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-09-05T20:39:43Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -658.36 +/- 50.44
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal usage sketch (the checkpoint filename inside the repo is an assumption; verify it in the repository's file list):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Filename is assumed; check the repository's files for the actual name.
checkpoint = load_from_hub("cgijoe/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
CyberHarem/vento_of_the_front_toarumajutsunoindex
|
CyberHarem
| 2023-09-16T16:33:53Z | 0 | 0 | null |
[
"art",
"text-to-image",
"dataset:CyberHarem/vento_of_the_front_toarumajutsunoindex",
"license:mit",
"region:us"
] |
text-to-image
| 2023-09-16T16:22:55Z |
---
license: mit
datasets:
- CyberHarem/vento_of_the_front_toarumajutsunoindex
pipeline_tag: text-to-image
tags:
- art
---
# Lora of vento_of_the_front_toarumajutsunoindex
This model is trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion). And the auto-training framework is maintained by [DeepGHS Team](https://huggingface.co/deepghs).
The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11).
After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora.
For example, if you want to use the model from step 4420, you need to download `4420/vento_of_the_front_toarumajutsunoindex.pt` as the embedding and `4420/vento_of_the_front_toarumajutsunoindex.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters.
**The best step we recommend is 4420**, with the score of 0.967. The trigger words are:
1. `vento_of_the_front_toarumajutsunoindex`
2. `blonde_hair, piercing, blue_eyes`
For the following groups, use of this model is not recommended, and we express our regret:
1. Individuals who cannot tolerate any deviations from the original character design, even in the slightest detail.
2. Individuals whose application scenarios demand high accuracy in recreating character outfits.
3. Individuals who cannot accept the potential randomness in AI-generated images based on the Stable Diffusion algorithm.
4. Individuals who are not comfortable with the fully automated process of training character models using LoRA, or those who believe that training character models must be done purely through manual operations to avoid disrespecting the characters.
5. Individuals who find the generated image content offensive to their values.
These are available steps:
| Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | pattern_5 | pattern_6 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata |
|:---------|:----------|:----------------------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------|:--------------------------------------------------|:-------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------|
| 5100 | 0.844 | [Download](5100/vento_of_the_front_toarumajutsunoindex.zip) |  |  |  |  |  |  |  | [<NSFW, click to see>](5100/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5100/previews/nude.png) | [<NSFW, click to see>](5100/previews/nude2.png) |  |  |
| 4760 | 0.960 | [Download](4760/vento_of_the_front_toarumajutsunoindex.zip) |  |  |  |  |  |  |  | [<NSFW, click to see>](4760/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4760/previews/nude.png) | [<NSFW, click to see>](4760/previews/nude2.png) |  |  |
| **4420** | **0.967** | [**Download**](4420/vento_of_the_front_toarumajutsunoindex.zip) |  |  |  |  |  |  |  | [<NSFW, click to see>](4420/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4420/previews/nude.png) | [<NSFW, click to see>](4420/previews/nude2.png) |  |  |
| 4080 | 0.916 | [Download](4080/vento_of_the_front_toarumajutsunoindex.zip) |  |  |  |  |  |  |  | [<NSFW, click to see>](4080/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4080/previews/nude.png) | [<NSFW, click to see>](4080/previews/nude2.png) |  |  |
| 3740 | 0.902 | [Download](3740/vento_of_the_front_toarumajutsunoindex.zip) |  |  |  |  |  |  |  | [<NSFW, click to see>](3740/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3740/previews/nude.png) | [<NSFW, click to see>](3740/previews/nude2.png) |  |  |
| 3400 | 0.889 | [Download](3400/vento_of_the_front_toarumajutsunoindex.zip) |  |  |  |  |  |  |  | [<NSFW, click to see>](3400/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3400/previews/nude.png) | [<NSFW, click to see>](3400/previews/nude2.png) |  |  |
| 3060 | 0.767 | [Download](3060/vento_of_the_front_toarumajutsunoindex.zip) |  |  |  |  |  |  |  | [<NSFW, click to see>](3060/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3060/previews/nude.png) | [<NSFW, click to see>](3060/previews/nude2.png) |  |  |
| 2720 | 0.830 | [Download](2720/vento_of_the_front_toarumajutsunoindex.zip) |  |  |  |  |  |  |  | [<NSFW, click to see>](2720/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2720/previews/nude.png) | [<NSFW, click to see>](2720/previews/nude2.png) |  |  |
| 2380 | 0.846 | [Download](2380/vento_of_the_front_toarumajutsunoindex.zip) |  |  |  |  |  |  |  | [<NSFW, click to see>](2380/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2380/previews/nude.png) | [<NSFW, click to see>](2380/previews/nude2.png) |  |  |
| 2040 | 0.760 | [Download](2040/vento_of_the_front_toarumajutsunoindex.zip) |  |  |  |  |  |  |  | [<NSFW, click to see>](2040/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2040/previews/nude.png) | [<NSFW, click to see>](2040/previews/nude2.png) |  |  |
| 1700 | 0.782 | [Download](1700/vento_of_the_front_toarumajutsunoindex.zip) |  |  |  |  |  |  |  | [<NSFW, click to see>](1700/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1700/previews/nude.png) | [<NSFW, click to see>](1700/previews/nude2.png) |  |  |
| 1360 | 0.684 | [Download](1360/vento_of_the_front_toarumajutsunoindex.zip) |  |  |  |  |  |  |  | [<NSFW, click to see>](1360/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1360/previews/nude.png) | [<NSFW, click to see>](1360/previews/nude2.png) |  |  |
| 1020 | 0.610 | [Download](1020/vento_of_the_front_toarumajutsunoindex.zip) |  |  |  |  |  |  |  | [<NSFW, click to see>](1020/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1020/previews/nude.png) | [<NSFW, click to see>](1020/previews/nude2.png) |  |  |
| 680 | 0.469 | [Download](680/vento_of_the_front_toarumajutsunoindex.zip) |  |  |  |  |  |  |  | [<NSFW, click to see>](680/previews/bondage.png) |  |  |  | [<NSFW, click to see>](680/previews/nude.png) | [<NSFW, click to see>](680/previews/nude2.png) |  |  |
| 340 | 0.441 | [Download](340/vento_of_the_front_toarumajutsunoindex.zip) |  |  |  |  |  |  |  | [<NSFW, click to see>](340/previews/bondage.png) |  |  |  | [<NSFW, click to see>](340/previews/nude.png) | [<NSFW, click to see>](340/previews/nude2.png) |  |  |
|
winglian/omega-3b
|
winglian
| 2023-09-16T16:32:13Z | 31 | 3 |
transformers
|
[
"transformers",
"pytorch",
"mixformer-sequential",
"text-generation",
"custom_code",
"en",
"dataset:nampdn-ai/tiny-textbooks",
"dataset:nampdn-ai/tiny-lessons",
"autotrain_compatible",
"region:us"
] |
text-generation
| 2023-09-15T07:40:11Z |
---
datasets:
- nampdn-ai/tiny-textbooks
- nampdn-ai/tiny-lessons
language:
- en
---
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
# Omega 2.6B
This model is derived from phi 1.3B using layer stacking techniques to double the number of hidden layers in the model.
The model was then trained for 1 epoch on data from tiny-textbooks and tiny-lessons.
# Training
https://wandb.ai/wing-lian/phi-2x-pt-tiny
|
esperesa/pegasus-samsum
|
esperesa
| 2023-09-16T16:27:14Z | 102 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"pegasus",
"text2text-generation",
"generated_from_trainer",
"dataset:samsum",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-09-16T15:16:22Z |
---
tags:
- generated_from_trainer
datasets:
- samsum
model-index:
- name: pegasus-samsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pegasus-samsum
This model is a fine-tuned version of [google/pegasus-cnn_dailymail](https://huggingface.co/google/pegasus-cnn_dailymail) on the samsum dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4848
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.6909 | 0.54 | 500 | 1.4848 |
### Framework versions
- Transformers 4.16.2
- Pytorch 2.0.1+cu118
- Datasets 2.0.0
- Tokenizers 0.14.0
|
markredito/FilmGrain-LoRA-stablediffusion
|
markredito
| 2023-09-16T16:23:34Z | 10 | 1 |
diffusers
|
[
"diffusers",
"art",
"text-to-image",
"en",
"license:artistic-2.0",
"region:us"
] |
text-to-image
| 2023-09-09T17:22:12Z |
---
library_name: diffusers
license: artistic-2.0
pipeline_tag: text-to-image
language:
- en
tags:
- art
---
### Introducing Filmgrain LoRA for Stable Diffusion 1.5
<img src="https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/5ef5ed30-feca-46e4-b6b4-6b41ef2ed9f7/width=1024/00004-1404433137.jpeg" width="512" height="512">
<img src="https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/bc14aaa7-53c1-45ea-abbb-06ecdd011411/width=1024/00005-1404433138.jpeg" width="512" height="512">
<img src="https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/1d881292-c7f5-4155-bff3-fd951a9bf04d/width=1024/00008-1404433140.jpeg" width="512" height="512">
**Overview**
Meet our Low Rank Adapter (LoRA) for Stable Diffusion 1.5: your go-to for adding that nostalgic, film-like touch to digital images.
**Who's It For?**
If you love the "analog style" in photographs—grains, colors, and all—this one's for you.
**Features**
- Film Grain: Get that classic grainy texture.
- Slight Discoloration: Add subtle, film-like color shifts to your images.
**Recommended Settings:**
Note: This LoRA works well for portrait photography.
- Model/checkpoint to use this LoRA with: dreamshaper
- Steps: 25
- CFG scale: 7
- Sampler: Tested with Euler a and DPM++ 2M Karras
Experiment with different settings; you might get better results!
**Local Installation**
- Download and save the tensor file to your models\lora folder of your Stable Diffusion installation. If using Automatic1111 it would be here: \\a1111\\stable-diffusion-webui\\models\\Lora
- When using Automatic1111 UI, click “show/hide networks” button.
- Choose the LoRA, and it automatically adds the activation tag to your prompt.
**Alternate Download Link**
https://civitai.com/models/142795/filmgrain
|
ygaci/Taxi-v3
|
ygaci
| 2023-09-16T16:07:51Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-09-16T15:40:27Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym  # the course notebooks may use gymnasium instead

# load_from_hub is the helper defined in the Deep RL Course notebooks
model = load_from_hub(repo_id="ygaci/Taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
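If you are working outside the course notebook, `load_from_hub` can be defined along these lines (a sketch using `huggingface_hub`):
```python
import pickle

from huggingface_hub import hf_hub_download

def load_from_hub(repo_id: str, filename: str):
    # Download the pickled Q-table dict from the Hub and deserialize it
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)
```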
|
hardikpanchariya/photo-sks-hp
|
hardikpanchariya
| 2023-09-16T16:04:02Z | 0 | 0 | null |
[
"text-to-image",
"doi:10.57967/hf/1119",
"license:openrail",
"region:us"
] |
text-to-image
| 2023-09-16T15:53:24Z |
---
license: openrail
pipeline_tag: text-to-image
---
|
SuperSecureHuman/t5_base_trails
|
SuperSecureHuman
| 2023-09-16T15:52:29Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:opus_books",
"base_model:google-t5/t5-base",
"base_model:finetune:google-t5/t5-base",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-09-16T14:54:35Z |
---
license: apache-2.0
base_model: t5-base
tags:
- generated_from_trainer
datasets:
- opus_books
model-index:
- name: t5_base_trails
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5_base_trails
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on the opus_books dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
CyberHarem/shutaura_sequenzia_toarumajutsunoindex
|
CyberHarem
| 2023-09-16T15:51:16Z | 0 | 0 | null |
[
"art",
"text-to-image",
"dataset:CyberHarem/shutaura_sequenzia_toarumajutsunoindex",
"license:mit",
"region:us"
] |
text-to-image
| 2023-09-16T15:41:31Z |
---
license: mit
datasets:
- CyberHarem/shutaura_sequenzia_toarumajutsunoindex
pipeline_tag: text-to-image
tags:
- art
---
# Lora of shutaura_sequenzia_toarumajutsunoindex
This model is trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion). And the auto-training framework is maintained by [DeepGHS Team](https://huggingface.co/deepghs).
The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11).
After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora.
For example, if you want to use the model from step 3740, you need to download `3740/shutaura_sequenzia_toarumajutsunoindex.pt` as the embedding and `3740/shutaura_sequenzia_toarumajutsunoindex.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters.
**The best step we recommend is 3740**, with the score of 0.932. The trigger words are:
1. `shutaura_sequenzia_toarumajutsunoindex`
2. `black_hair, long_hair, black_eyes`
For the following groups, use of this model is not recommended, and we express our regret:
1. Individuals who cannot tolerate any deviations from the original character design, even in the slightest detail.
2. Individuals whose application scenarios demand high accuracy in recreating character outfits.
3. Individuals who cannot accept the potential randomness in AI-generated images based on the Stable Diffusion algorithm.
4. Individuals who are not comfortable with the fully automated process of training character models using LoRA, or those who believe that training character models must be done purely through manual operations to avoid disrespecting the characters.
5. Individuals who find the generated image content offensive to their values.
These are available steps:
| Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata |
|:---------|:----------|:----------------------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------|:--------------------------------------------------|:-------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------|
| 5100 | 0.893 | [Download](5100/shutaura_sequenzia_toarumajutsunoindex.zip) |  |  |  |  |  | [<NSFW, click to see>](5100/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5100/previews/nude.png) | [<NSFW, click to see>](5100/previews/nude2.png) |  |  |
| 4760 | 0.843 | [Download](4760/shutaura_sequenzia_toarumajutsunoindex.zip) |  |  |  |  |  | [<NSFW, click to see>](4760/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4760/previews/nude.png) | [<NSFW, click to see>](4760/previews/nude2.png) |  |  |
| 4420 | 0.901 | [Download](4420/shutaura_sequenzia_toarumajutsunoindex.zip) |  |  |  |  |  | [<NSFW, click to see>](4420/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4420/previews/nude.png) | [<NSFW, click to see>](4420/previews/nude2.png) |  |  |
| 4080 | 0.879 | [Download](4080/shutaura_sequenzia_toarumajutsunoindex.zip) |  |  |  |  |  | [<NSFW, click to see>](4080/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4080/previews/nude.png) | [<NSFW, click to see>](4080/previews/nude2.png) |  |  |
| **3740** | **0.932** | [**Download**](3740/shutaura_sequenzia_toarumajutsunoindex.zip) |  |  |  |  |  | [<NSFW, click to see>](3740/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3740/previews/nude.png) | [<NSFW, click to see>](3740/previews/nude2.png) |  |  |
| 3400 | 0.842 | [Download](3400/shutaura_sequenzia_toarumajutsunoindex.zip) |  |  |  |  |  | [<NSFW, click to see>](3400/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3400/previews/nude.png) | [<NSFW, click to see>](3400/previews/nude2.png) |  |  |
| 3060 | 0.899 | [Download](3060/shutaura_sequenzia_toarumajutsunoindex.zip) |  |  |  |  |  | [<NSFW, click to see>](3060/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3060/previews/nude.png) | [<NSFW, click to see>](3060/previews/nude2.png) |  |  |
| 2720 | 0.874 | [Download](2720/shutaura_sequenzia_toarumajutsunoindex.zip) |  |  |  |  |  | [<NSFW, click to see>](2720/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2720/previews/nude.png) | [<NSFW, click to see>](2720/previews/nude2.png) |  |  |
| 2380 | 0.871 | [Download](2380/shutaura_sequenzia_toarumajutsunoindex.zip) |  |  |  |  |  | [<NSFW, click to see>](2380/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2380/previews/nude.png) | [<NSFW, click to see>](2380/previews/nude2.png) |  |  |
| 2040 | 0.828 | [Download](2040/shutaura_sequenzia_toarumajutsunoindex.zip) |  |  |  |  |  | [<NSFW, click to see>](2040/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2040/previews/nude.png) | [<NSFW, click to see>](2040/previews/nude2.png) |  |  |
| 1700 | 0.874 | [Download](1700/shutaura_sequenzia_toarumajutsunoindex.zip) |  |  |  |  |  | [<NSFW, click to see>](1700/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1700/previews/nude.png) | [<NSFW, click to see>](1700/previews/nude2.png) |  |  |
| 1360 | 0.868 | [Download](1360/shutaura_sequenzia_toarumajutsunoindex.zip) |  |  |  |  |  | [<NSFW, click to see>](1360/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1360/previews/nude.png) | [<NSFW, click to see>](1360/previews/nude2.png) |  |  |
| 1020 | 0.839 | [Download](1020/shutaura_sequenzia_toarumajutsunoindex.zip) |  |  |  |  |  | [<NSFW, click to see>](1020/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1020/previews/nude.png) | [<NSFW, click to see>](1020/previews/nude2.png) |  |  |
| 680 | 0.859 | [Download](680/shutaura_sequenzia_toarumajutsunoindex.zip) |  |  |  |  |  | [<NSFW, click to see>](680/previews/bondage.png) |  |  |  | [<NSFW, click to see>](680/previews/nude.png) | [<NSFW, click to see>](680/previews/nude2.png) |  |  |
| 340 | 0.612 | [Download](340/shutaura_sequenzia_toarumajutsunoindex.zip) |  |  |  |  |  | [<NSFW, click to see>](340/previews/bondage.png) |  |  |  | [<NSFW, click to see>](340/previews/nude.png) | [<NSFW, click to see>](340/previews/nude2.png) |  |  |
|
Anastasiaps/Reinforce-Cart-Pole
|
Anastasiaps
| 2023-09-16T15:34:47Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-09-16T15:34:38Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Cart-Pole
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 452.30 +/- 97.19
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
Norah-K/PDPL_CAMeLBERT
|
Norah-K
| 2023-09-16T15:28:59Z | 108 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"bert",
"text-classification",
"autotrain",
"unk",
"dataset:NorahNasser/autotrain-data-camel_pdpl",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-09-16T15:25:01Z |
---
tags:
- autotrain
- text-classification
language:
- unk
widget:
- text: "I love AutoTrain"
datasets:
- NorahNasser/autotrain-data-camel_pdpl
co2_eq_emissions:
emissions: 0.04275306009908754
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 89564143964
- CO2 Emissions (in grams): 0.0428
## Validation Metrics
- Loss: 0.293
- Accuracy: 0.929
- Macro F1: 0.926
- Micro F1: 0.929
- Weighted F1: 0.929
- Macro Precision: 0.936
- Micro Precision: 0.929
- Weighted Precision: 0.930
- Macro Recall: 0.918
- Micro Recall: 0.929
- Weighted Recall: 0.929
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/NorahNasser/autotrain-camel_pdpl-89564143964
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("NorahNasser/autotrain-camel_pdpl-89564143964", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("NorahNasser/autotrain-camel_pdpl-89564143964", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```
|
ygaci/q-FrozenLake-v1-4x4-noSlippery
|
ygaci
| 2023-09-16T15:23:37Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-09-16T15:23:32Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing1 **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="ygaci/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
sd-concepts-library/finn-token
|
sd-concepts-library
| 2023-09-16T15:16:11Z | 0 | 0 | null |
[
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:finetune:runwayml/stable-diffusion-v1-5",
"license:mit",
"region:us"
] | null | 2023-09-16T15:16:10Z |
---
license: mit
base_model: runwayml/stable-diffusion-v1-5
---
### finn-token on Stable Diffusion
This is the `<finn-token>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as an `object`:





|
anyuanay/my_awesome_model
|
anyuanay
| 2023-09-16T15:14:44Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-09-16T14:57:41Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
model-index:
- name: my_awesome_model
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: test
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 1.0
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0002
- Accuracy: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 157 | 0.0004 | 1.0 |
| No log | 2.0 | 314 | 0.0002 | 1.0 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
DaunIDebil/Zelensky-RVC2.0
|
DaunIDebil
| 2023-09-16T15:11:22Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-09-16T15:07:23Z |
model is not mine. i found it somewhere on the internet in may.
|
kanishka/smolm-mlm-bpe-unmask-seed_222
|
kanishka
| 2023-09-16T15:09:48Z | 108 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"roberta",
"fill-mask",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-09-16T06:21:18Z |
---
base_model: models/smolm-mlm/config.json
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: smolm-mlm-bpe-unmask-seed_222
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smolm-mlm-bpe-unmask-seed_222
This model is a fine-tuned version of [models/smolm-mlm/config.json](https://huggingface.co/models/smolm-mlm/config.json) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7005
- Accuracy: 0.4481
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 512
- seed: 222
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 24000
- num_epochs: 10.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:------:|:---------------:|:--------:|
| 3.5334 | 1.0 | 11938 | 3.4896 | 0.3463 |
| 3.3402 | 2.0 | 23876 | 3.3814 | 0.3590 |
| 3.1641 | 3.0 | 35814 | 3.1702 | 0.3844 |
| 3.0325 | 4.0 | 47752 | 3.0475 | 0.4019 |
| 2.951 | 5.0 | 59690 | 2.9666 | 0.4095 |
| 2.8583 | 6.0 | 71628 | 2.8908 | 0.4201 |
| 2.7872 | 7.0 | 83566 | 2.8299 | 0.4310 |
| 2.7348 | 8.0 | 95504 | 2.7900 | 0.4335 |
| 2.6584 | 9.0 | 107442 | 2.7272 | 0.4443 |
| 2.6462 | 10.0 | 119380 | 2.6962 | 0.4501 |
### Framework versions
- Transformers 4.32.1
- Pytorch 1.13.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
CyberHarem/himegami_aisa_toarumajutsunoindex
|
CyberHarem
| 2023-09-16T15:09:18Z | 0 | 0 | null |
[
"art",
"text-to-image",
"dataset:CyberHarem/himegami_aisa_toarumajutsunoindex",
"license:mit",
"region:us"
] |
text-to-image
| 2023-08-16T08:04:57Z |
---
license: mit
datasets:
- CyberHarem/himegami_aisa_toarumajutsunoindex
pipeline_tag: text-to-image
tags:
- art
---
# Lora of himegami_aisa_toarumajutsunoindex
This model is trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion). And the auto-training framework is maintained by [DeepGHS Team](https://huggingface.co/deepghs).
The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11).
After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora.
For example, if you want to use the model from step 5040, you need to download `5040/himegami_aisa_toarumajutsunoindex.pt` as the embedding and `5040/himegami_aisa_toarumajutsunoindex.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters.
**The best step we recommend is 5040**, with the score of 0.886. The trigger words are:
1. `himegami_aisa_toarumajutsunoindex`
2. `long_hair, black_hair, bangs, blunt_bangs, purple_eyes`
For the following groups, it is not recommended to use this model and we express regret:
1. Individuals who cannot tolerate any deviations from the original character design, even in the slightest detail.
2. Individuals who are facing the application scenarios with high demands for accuracy in recreating character outfits.
3. Individuals who cannot accept the potential randomness in AI-generated images based on the Stable Diffusion algorithm.
4. Individuals who are not comfortable with the fully automated process of training character models using LoRA, or those who believe that training character models must be done purely through manual operations to avoid disrespecting the characters.
5. Individuals who finds the generated image content offensive to their values.
These are available steps:
| Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | pattern_5 | pattern_6 | pattern_7 | pattern_8 | pattern_9 | pattern_10 | pattern_11 | pattern_12 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata |
|:---------|:----------|:-----------------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-----------------------------------------|:--------------------------------------------------|:-------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------|
| 5400 | 0.848 | [Download](5400/himegami_aisa_toarumajutsunoindex.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](5400/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5400/previews/nude.png) | [<NSFW, click to see>](5400/previews/nude2.png) |  |  |
| **5040** | **0.886** | [**Download**](5040/himegami_aisa_toarumajutsunoindex.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](5040/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5040/previews/nude.png) | [<NSFW, click to see>](5040/previews/nude2.png) |  |  |
| 4680 | 0.882 | [Download](4680/himegami_aisa_toarumajutsunoindex.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4680/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4680/previews/nude.png) | [<NSFW, click to see>](4680/previews/nude2.png) |  |  |
| 4320 | 0.820 | [Download](4320/himegami_aisa_toarumajutsunoindex.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4320/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4320/previews/nude.png) | [<NSFW, click to see>](4320/previews/nude2.png) |  |  |
| 3960 | 0.862 | [Download](3960/himegami_aisa_toarumajutsunoindex.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3960/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3960/previews/nude.png) | [<NSFW, click to see>](3960/previews/nude2.png) |  |  |
| 3600 | 0.748 | [Download](3600/himegami_aisa_toarumajutsunoindex.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3600/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3600/previews/nude.png) | [<NSFW, click to see>](3600/previews/nude2.png) |  |  |
| 3240 | 0.728 | [Download](3240/himegami_aisa_toarumajutsunoindex.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3240/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3240/previews/nude.png) | [<NSFW, click to see>](3240/previews/nude2.png) |  |  |
| 2880 | 0.783 | [Download](2880/himegami_aisa_toarumajutsunoindex.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2880/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2880/previews/nude.png) | [<NSFW, click to see>](2880/previews/nude2.png) |  |  |
| 2520 | 0.724 | [Download](2520/himegami_aisa_toarumajutsunoindex.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2520/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2520/previews/nude.png) | [<NSFW, click to see>](2520/previews/nude2.png) |  |  |
| 2160 | 0.655 | [Download](2160/himegami_aisa_toarumajutsunoindex.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2160/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2160/previews/nude.png) | [<NSFW, click to see>](2160/previews/nude2.png) |  |  |
| 1800 | 0.602 | [Download](1800/himegami_aisa_toarumajutsunoindex.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1800/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1800/previews/nude.png) | [<NSFW, click to see>](1800/previews/nude2.png) |  |  |
| 1440 | 0.627 | [Download](1440/himegami_aisa_toarumajutsunoindex.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1440/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1440/previews/nude.png) | [<NSFW, click to see>](1440/previews/nude2.png) |  |  |
| 1080 | 0.689 | [Download](1080/himegami_aisa_toarumajutsunoindex.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1080/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1080/previews/nude.png) | [<NSFW, click to see>](1080/previews/nude2.png) |  |  |
| 720 | 0.521 | [Download](720/himegami_aisa_toarumajutsunoindex.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](720/previews/bondage.png) |  |  |  | [<NSFW, click to see>](720/previews/nude.png) | [<NSFW, click to see>](720/previews/nude2.png) |  |  |
| 360 | 0.481 | [Download](360/himegami_aisa_toarumajutsunoindex.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](360/previews/bondage.png) |  |  |  | [<NSFW, click to see>](360/previews/nude.png) | [<NSFW, click to see>](360/previews/nude2.png) |  |  |
|
AliBagherz/result
|
AliBagherz
| 2023-09-16T15:03:20Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:pquad",
"base_model:HooshvareLab/bert-fa-base-uncased",
"base_model:finetune:HooshvareLab/bert-fa-base-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-09-16T14:56:38Z |
---
license: apache-2.0
base_model: HooshvareLab/bert-fa-base-uncased
tags:
- generated_from_trainer
datasets:
- pquad
model-index:
- name: result
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# result
This model is a fine-tuned version of [HooshvareLab/bert-fa-base-uncased](https://huggingface.co/HooshvareLab/bert-fa-base-uncased) on the pquad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 1 | 4.3084 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
QMB15/mythomax-L2-13B-4.625bit-exl2
|
QMB15
| 2023-09-16T14:51:14Z | 5 | 4 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"en",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-09-16T14:35:16Z |
---
license: other
language:
- en
---
This is an exllama V2 quantization of https://huggingface.co/Gryphe/MythoMax-L2-13b
Uses a target bpw of 4.625.
Includes measurement.json for convenience of quantizing to other sizes.
Calibration data: https://huggingface.co/datasets/wikitext/resolve/refs%2Fconvert%2Fparquet/wikitext-2-v1/test/0000.parquet
An improved, potentially even perfected variant of MythoMix, my [MythoLogic-L2](https://huggingface.co/Gryphe/MythoLogic-L2-13b) and [Huginn](https://huggingface.co/The-Face-Of-Goonery/Huginn-13b-FP16) merge using a highly experimental tensor type merge technique. The main difference with MythoMix is that I allowed more of Huginn to intermingle with the single tensors located at the front and end of a model, resulting in increased coherency across the entire structure.
The script and the acccompanying templates I used to produce both can [be found here](https://github.com/Gryphe/BlockMerge_Gradient/tree/main/YAML).
This model is proficient at both roleplaying and storywriting due to its unique nature.
Quantized models are available from TheBloke: [GGML](https://huggingface.co/TheBloke/MythoMax-L2-13B-GGML) - [GPTQ](https://huggingface.co/TheBloke/MythoMax-L2-13B-GPTQ) (You're the best!)
## Model details
The idea behind this merge is that each layer is composed of several tensors, which are in turn responsible for specific functions. Using MythoLogic-L2's robust understanding as its input and Huginn's extensive writing capability as its output seems to have resulted in a model that exceeds at both, confirming my theory. (More details to be released at a later time)
This type of merge is incapable of being illustrated, as each of its 363 tensors had an unique ratio applied to it. As with my prior merges, gradients were part of these ratios to further finetune its behaviour.
## Prompt Format
This model primarily uses Alpaca formatting, so for optimal model performance, use:
```
<System prompt/Character Card>
### Instruction:
Your instruction or question here.
For roleplay purposes, I suggest the following - Write <CHAR NAME>'s next reply in a chat between <YOUR NAME> and <CHAR NAME>. Write a single reply only.
### Response:
```
---
license: other
---
|
ygaci/ppo-Huggy
|
ygaci
| 2023-09-16T14:26:15Z | 11 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-09-16T14:26:05Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how works ML-Agents:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: ygaci/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
CyberHarem/tsukuyomi_komoe_toarumajutsunoindex
|
CyberHarem
| 2023-09-16T14:20:13Z | 0 | 0 | null |
[
"art",
"text-to-image",
"dataset:CyberHarem/tsukuyomi_komoe_toarumajutsunoindex",
"license:mit",
"region:us"
] |
text-to-image
| 2023-08-16T07:41:46Z |
---
license: mit
datasets:
- CyberHarem/tsukuyomi_komoe_toarumajutsunoindex
pipeline_tag: text-to-image
tags:
- art
---
# Lora of tsukuyomi_komoe_toarumajutsunoindex
This model is trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion). And the auto-training framework is maintained by [DeepGHS Team](https://huggingface.co/deepghs).
The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11).
After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora.
For example, if you want to use the model from step 6380, you need to download `6380/tsukuyomi_komoe_toarumajutsunoindex.pt` as the embedding and `6380/tsukuyomi_komoe_toarumajutsunoindex.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters.
**The best step we recommend is 6380**, with the score of 0.879. The trigger words are:
1. `tsukuyomi_komoe_toarumajutsunoindex`
2. `short_hair, pink_hair, pink_eyes, open_mouth`
For the following groups, it is not recommended to use this model and we express regret:
1. Individuals who cannot tolerate any deviations from the original character design, even in the slightest detail.
2. Individuals who are facing the application scenarios with high demands for accuracy in recreating character outfits.
3. Individuals who cannot accept the potential randomness in AI-generated images based on the Stable Diffusion algorithm.
4. Individuals who are not comfortable with the fully automated process of training character models using LoRA, or those who believe that training character models must be done purely through manual operations to avoid disrespecting the characters.
5. Individuals who finds the generated image content offensive to their values.
These are available steps:
| Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | pattern_5 | pattern_6 | pattern_7 | pattern_8 | pattern_9 | pattern_10 | pattern_11 | pattern_12 | pattern_13 | pattern_14 | pattern_15 | pattern_16 | pattern_17 | pattern_18 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata |
|:---------|:----------|:-------------------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-----------------------------------------|:--------------------------------------------------|:-------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------|
| 8700 | 0.831 | [Download](8700/tsukuyomi_komoe_toarumajutsunoindex.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](8700/previews/bondage.png) |  |  |  | [<NSFW, click to see>](8700/previews/nude.png) | [<NSFW, click to see>](8700/previews/nude2.png) |  |  |
| 8120 | 0.855 | [Download](8120/tsukuyomi_komoe_toarumajutsunoindex.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](8120/previews/bondage.png) |  |  |  | [<NSFW, click to see>](8120/previews/nude.png) | [<NSFW, click to see>](8120/previews/nude2.png) |  |  |
| 7540 | 0.873 | [Download](7540/tsukuyomi_komoe_toarumajutsunoindex.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](7540/previews/bondage.png) |  |  |  | [<NSFW, click to see>](7540/previews/nude.png) | [<NSFW, click to see>](7540/previews/nude2.png) |  |  |
| 6960 | 0.868 | [Download](6960/tsukuyomi_komoe_toarumajutsunoindex.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](6960/previews/bondage.png) |  |  |  | [<NSFW, click to see>](6960/previews/nude.png) | [<NSFW, click to see>](6960/previews/nude2.png) |  |  |
| **6380** | **0.879** | [**Download**](6380/tsukuyomi_komoe_toarumajutsunoindex.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](6380/previews/bondage.png) |  |  |  | [<NSFW, click to see>](6380/previews/nude.png) | [<NSFW, click to see>](6380/previews/nude2.png) |  |  |
| 5800 | 0.838 | [Download](5800/tsukuyomi_komoe_toarumajutsunoindex.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](5800/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5800/previews/nude.png) | [<NSFW, click to see>](5800/previews/nude2.png) |  |  |
| 5220 | 0.720 | [Download](5220/tsukuyomi_komoe_toarumajutsunoindex.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](5220/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5220/previews/nude.png) | [<NSFW, click to see>](5220/previews/nude2.png) |  |  |
| 4640 | 0.811 | [Download](4640/tsukuyomi_komoe_toarumajutsunoindex.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4640/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4640/previews/nude.png) | [<NSFW, click to see>](4640/previews/nude2.png) |  |  |
| 4060 | 0.696 | [Download](4060/tsukuyomi_komoe_toarumajutsunoindex.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4060/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4060/previews/nude.png) | [<NSFW, click to see>](4060/previews/nude2.png) |  |  |
| 3480 | 0.703 | [Download](3480/tsukuyomi_komoe_toarumajutsunoindex.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3480/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3480/previews/nude.png) | [<NSFW, click to see>](3480/previews/nude2.png) |  |  |
| 2900 | 0.612 | [Download](2900/tsukuyomi_komoe_toarumajutsunoindex.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2900/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2900/previews/nude.png) | [<NSFW, click to see>](2900/previews/nude2.png) |  |  |
| 2320 | 0.400 | [Download](2320/tsukuyomi_komoe_toarumajutsunoindex.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2320/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2320/previews/nude.png) | [<NSFW, click to see>](2320/previews/nude2.png) |  |  |
| 1740 | 0.439 | [Download](1740/tsukuyomi_komoe_toarumajutsunoindex.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1740/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1740/previews/nude.png) | [<NSFW, click to see>](1740/previews/nude2.png) |  |  |
| 1160 | 0.190 | [Download](1160/tsukuyomi_komoe_toarumajutsunoindex.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1160/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1160/previews/nude.png) | [<NSFW, click to see>](1160/previews/nude2.png) |  |  |
| 580 | 0.091 | [Download](580/tsukuyomi_komoe_toarumajutsunoindex.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](580/previews/bondage.png) |  |  |  | [<NSFW, click to see>](580/previews/nude.png) | [<NSFW, click to see>](580/previews/nude2.png) |  |  |
|
shaowenchen/llama-2-7b-langchain-chat-gguf
|
shaowenchen
| 2023-09-16T14:09:07Z | 239 | 2 | null |
[
"gguf",
"meta",
"llama",
"llama-2",
"chinese",
"7b",
"text-generation",
"zh",
"en",
"license:other",
"region:us"
] |
text-generation
| 2023-09-16T07:09:33Z |
---
inference: false
language:
- zh
- en
license: other
model_creator: Photolens
model_link: https://huggingface.co/Photolens/llama-2-7b-langchain-chat
model_name: llama-2-7b-langchain-chat
model_type: llama
pipeline_tag: text-generation
quantized_by: shaowenchen
tasks:
- text2text-generation
tags:
- meta
- gguf
- llama
- llama-2
- chinese
- 7b
---
## Provided files
| Name | Quant method | Size |
| ------------------------------------- | ------------ | ------ |
| llama-2-7b-langchain-chat.Q2_K.gguf | Q2_K | 2.6 GB |
| llama-2-7b-langchain-chat.Q3_K.gguf | Q3_K | 3.1 GB |
| llama-2-7b-langchain-chat.Q3_K_L.gguf | Q3_K_L | 3.3 GB |
| llama-2-7b-langchain-chat.Q3_K_S.gguf | Q3_K_S | 2.7 GB |
| llama-2-7b-langchain-chat.Q4_0.gguf | Q4_0 | 3.6 GB |
| llama-2-7b-langchain-chat.Q4_1.gguf | Q4_1 | 3.9 GB |
| llama-2-7b-langchain-chat.Q4_K.gguf | Q4_K | 3.8 GB |
| llama-2-7b-langchain-chat.Q4_K_S.gguf | Q4_K_S | 3.6 GB |
| llama-2-7b-langchain-chat.Q5_0.gguf | Q5_0 | 4.3 GB |
| llama-2-7b-langchain-chat.Q5_1.gguf | Q5_1 | 4.7 GB |
| llama-2-7b-langchain-chat.Q5_K.gguf | Q5_K | 4.5 GB |
| llama-2-7b-langchain-chat.Q5_K_S.gguf | Q5_K_S | 4.3 GB |
| llama-2-7b-langchain-chat.Q6_K.gguf | Q6_K | 5.1 GB |
| llama-2-7b-langchain-chat.Q8_0.gguf | Q8_0 | 6.7 GB |
| llama-2-7b-langchain-chat.gguf | full | 13 GB |
Usage:
```
docker run --rm -it -p 8000:8000 -v /path/to/models:/models -e MODEL=/models/gguf-model-name.gguf hubimage/llama-cpp-python:latest
```
and you can view http://localhost:8000/docs to see the swagger UI.
## Provided images
| Name | Quant method | Size |
| ------------------------------------------------- | ------------ | ------- |
| shaowenchen/llama-2-7b-langchain-chat-gguf:Q2_K | Q2_K | 6.72 GB |
| shaowenchen/llama-2-7b-langchain-chat-gguf:Q3_K | Q3_K | 7.64 GB |
| shaowenchen/llama-2-7b-langchain-chat-gguf:Q3_K_L | Q3_K_L | 8.27 GB |
| shaowenchen/llama-2-7b-langchain-chat-gguf:Q3_K_S | Q3_K_S | 6.97 GB |
| shaowenchen/llama-2-7b-langchain-chat-gguf:Q4_0 | Q4_0 | 8.55 GB |
| shaowenchen/llama-2-7b-langchain-chat-gguf:Q4_1 | Q4_1 | 9.41 GB |
| shaowenchen/llama-2-7b-langchain-chat-gguf:Q4_K | Q4_K | 9.17 GB |
| shaowenchen/llama-2-7b-langchain-chat-gguf:Q4_K_S | Q4_K_S | 8.72 GB |
| shaowenchen/llama-2-7b-langchain-chat-gguf:Q5_0 | Q5_0 | 10.4 GB |
| shaowenchen/llama-2-7b-langchain-chat-gguf:Q5_K | Q5_K | 10.6 GB |
| shaowenchen/llama-2-7b-langchain-chat-gguf:Q5_K_S | Q5_K_S | 10.4 GB |
Usage:
```
docker run --rm -p 8000:8000 shaowenchen/llama-2-7b-langchain-chat-gguf:Q2_K
```
and you can view http://localhost:8000/docs to see the swagger UI.
|
LarryAIDraw/Fu_Hua_Azure_Empyrea_v2_0
|
LarryAIDraw
| 2023-09-16T13:59:53Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-09-16T13:56:34Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/77109/honkai-impact-3rd-fu-hua-azure-empyrea-or
|
CyberHarem/ninomiya_asuka_idolmastercinderellagirls
|
CyberHarem
| 2023-09-16T13:38:49Z | 0 | 0 | null |
[
"art",
"text-to-image",
"dataset:CyberHarem/ninomiya_asuka_idolmastercinderellagirls",
"license:mit",
"region:us"
] |
text-to-image
| 2023-09-16T13:23:12Z |
---
license: mit
datasets:
- CyberHarem/ninomiya_asuka_idolmastercinderellagirls
pipeline_tag: text-to-image
tags:
- art
---
# Lora of ninomiya_asuka_idolmastercinderellagirls
This model is trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion). And the auto-training framework is maintained by [DeepGHS Team](https://huggingface.co/deepghs).
The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11).
After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora.
For example, if you want to use the model from step 4800, you need to download `4800/ninomiya_asuka_idolmastercinderellagirls.pt` as the embedding and `4800/ninomiya_asuka_idolmastercinderellagirls.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters.
**The best step we recommend is 4800**, with the score of 0.979. The trigger words are:
1. `ninomiya_asuka_idolmastercinderellagirls`
2. `multicolored_hair, two-tone_hair, long_hair, purple_eyes, orange_hair, hair_between_eyes, smile, bangs, brown_hair`
For the following groups, it is not recommended to use this model and we express regret:
1. Individuals who cannot tolerate any deviations from the original character design, even in the slightest detail.
2. Individuals who are facing the application scenarios with high demands for accuracy in recreating character outfits.
3. Individuals who cannot accept the potential randomness in AI-generated images based on the Stable Diffusion algorithm.
4. Individuals who are not comfortable with the fully automated process of training character models using LoRA, or those who believe that training character models must be done purely through manual operations to avoid disrespecting the characters.
5. Individuals who finds the generated image content offensive to their values.
These are available steps:
| Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | pattern_5 | pattern_6 | pattern_7 | pattern_8 | pattern_9 | pattern_10 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata |
|:---------|:----------|:------------------------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-------------------------------------------------|:-----------------------------------------|:--------------------------------------------------|:-------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------|
| 7200 | 0.961 | [Download](7200/ninomiya_asuka_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](7200/previews/bondage.png) |  |  |  | [<NSFW, click to see>](7200/previews/nude.png) | [<NSFW, click to see>](7200/previews/nude2.png) |  |  |
| 6720 | 0.952 | [Download](6720/ninomiya_asuka_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](6720/previews/bondage.png) |  |  |  | [<NSFW, click to see>](6720/previews/nude.png) | [<NSFW, click to see>](6720/previews/nude2.png) |  |  |
| 6240 | 0.971 | [Download](6240/ninomiya_asuka_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](6240/previews/bondage.png) |  |  |  | [<NSFW, click to see>](6240/previews/nude.png) | [<NSFW, click to see>](6240/previews/nude2.png) |  |  |
| 5760 | 0.963 | [Download](5760/ninomiya_asuka_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](5760/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5760/previews/nude.png) | [<NSFW, click to see>](5760/previews/nude2.png) |  |  |
| 5280 | 0.956 | [Download](5280/ninomiya_asuka_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](5280/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5280/previews/nude.png) | [<NSFW, click to see>](5280/previews/nude2.png) |  |  |
| **4800** | **0.979** | [**Download**](4800/ninomiya_asuka_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4800/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4800/previews/nude.png) | [<NSFW, click to see>](4800/previews/nude2.png) |  |  |
| 4320 | 0.979 | [Download](4320/ninomiya_asuka_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4320/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4320/previews/nude.png) | [<NSFW, click to see>](4320/previews/nude2.png) |  |  |
| 3840 | 0.952 | [Download](3840/ninomiya_asuka_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3840/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3840/previews/nude.png) | [<NSFW, click to see>](3840/previews/nude2.png) |  |  |
| 3360 | 0.969 | [Download](3360/ninomiya_asuka_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3360/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3360/previews/nude.png) | [<NSFW, click to see>](3360/previews/nude2.png) |  |  |
| 2880 | 0.926 | [Download](2880/ninomiya_asuka_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2880/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2880/previews/nude.png) | [<NSFW, click to see>](2880/previews/nude2.png) |  |  |
| 2400 | 0.941 | [Download](2400/ninomiya_asuka_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2400/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2400/previews/nude.png) | [<NSFW, click to see>](2400/previews/nude2.png) |  |  |
| 1920 | 0.980 | [Download](1920/ninomiya_asuka_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1920/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1920/previews/nude.png) | [<NSFW, click to see>](1920/previews/nude2.png) |  |  |
| 1440 | 0.911 | [Download](1440/ninomiya_asuka_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1440/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1440/previews/nude.png) | [<NSFW, click to see>](1440/previews/nude2.png) |  |  |
| 960 | 0.890 | [Download](960/ninomiya_asuka_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](960/previews/bondage.png) |  |  |  | [<NSFW, click to see>](960/previews/nude.png) | [<NSFW, click to see>](960/previews/nude2.png) |  |  |
| 480 | 0.980 | [Download](480/ninomiya_asuka_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](480/previews/bondage.png) |  |  |  | [<NSFW, click to see>](480/previews/nude.png) | [<NSFW, click to see>](480/previews/nude2.png) |  |  |
|
LarryAIDraw/haruna_ba
|
LarryAIDraw
| 2023-09-16T13:30:38Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-09-16T13:30:10Z |
---
license: creativeml-openrail-m
---
|
nikhilwani/masked-language-model
|
nikhilwani
| 2023-09-16T13:18:20Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"fill-mask",
"generated_from_trainer",
"base_model:distilbert/distilroberta-base",
"base_model:finetune:distilbert/distilroberta-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-09-16T13:04:50Z |
---
license: apache-2.0
base_model: distilroberta-base
tags:
- generated_from_trainer
model-index:
- name: masked-language-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# masked-language-model
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0006
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.2449 | 1.0 | 1141 | 2.0667 |
| 2.1633 | 2.0 | 2282 | 1.9991 |
| 2.1264 | 3.0 | 3423 | 1.9840 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
kyuwon416/rl_course_vizdoom_health_gathering_supreme
|
kyuwon416
| 2023-09-16T13:12:02Z | 0 | 0 |
sample-factory
|
[
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-09-16T11:31:35Z |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 11.12 +/- 5.35
name: mean_reward
verified: false
---
A(n) **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r kyuwon416/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m .usr.local.lib.python3.10.dist-packages.colab_kernel_launcher --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m .usr.local.lib.python3.10.dist-packages.colab_kernel_launcher --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
|
CyberHarem/takitsubo_rikou_toarumajutsunoindex
|
CyberHarem
| 2023-09-16T13:08:35Z | 0 | 0 | null |
[
"art",
"text-to-image",
"dataset:CyberHarem/takitsubo_rikou_toarumajutsunoindex",
"license:mit",
"region:us"
] |
text-to-image
| 2023-08-15T23:44:24Z |
---
license: mit
datasets:
- CyberHarem/takitsubo_rikou_toarumajutsunoindex
pipeline_tag: text-to-image
tags:
- art
---
# Lora of takitsubo_rikou_toarumajutsunoindex
This model is trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion). And the auto-training framework is maintained by [DeepGHS Team](https://huggingface.co/deepghs).
The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11).
After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora.
For example, if you want to use the model from step 4080, you need to download `4080/takitsubo_rikou_toarumajutsunoindex.pt` as the embedding and `4080/takitsubo_rikou_toarumajutsunoindex.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters.
**The best step we recommend is 4080**, with the score of 0.948. The trigger words are:
1. `takitsubo_rikou_toarumajutsunoindex`
2. `short_hair, black_hair, brown_eyes, brown_hair`
For the following groups, it is not recommended to use this model and we express regret:
1. Individuals who cannot tolerate any deviations from the original character design, even in the slightest detail.
2. Individuals who are facing the application scenarios with high demands for accuracy in recreating character outfits.
3. Individuals who cannot accept the potential randomness in AI-generated images based on the Stable Diffusion algorithm.
4. Individuals who are not comfortable with the fully automated process of training character models using LoRA, or those who believe that training character models must be done purely through manual operations to avoid disrespecting the characters.
5. Individuals who finds the generated image content offensive to their values.
These are available steps:
| Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | pattern_5 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata |
|:---------|:----------|:-------------------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------|:--------------------------------------------------|:-------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------|
| 5100 | 0.921 | [Download](5100/takitsubo_rikou_toarumajutsunoindex.zip) |  |  |  |  |  |  | [<NSFW, click to see>](5100/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5100/previews/nude.png) | [<NSFW, click to see>](5100/previews/nude2.png) |  |  |
| 4760 | 0.939 | [Download](4760/takitsubo_rikou_toarumajutsunoindex.zip) |  |  |  |  |  |  | [<NSFW, click to see>](4760/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4760/previews/nude.png) | [<NSFW, click to see>](4760/previews/nude2.png) |  |  |
| 4420 | 0.937 | [Download](4420/takitsubo_rikou_toarumajutsunoindex.zip) |  |  |  |  |  |  | [<NSFW, click to see>](4420/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4420/previews/nude.png) | [<NSFW, click to see>](4420/previews/nude2.png) |  |  |
| **4080** | **0.948** | [**Download**](4080/takitsubo_rikou_toarumajutsunoindex.zip) |  |  |  |  |  |  | [<NSFW, click to see>](4080/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4080/previews/nude.png) | [<NSFW, click to see>](4080/previews/nude2.png) |  |  |
| 3740 | 0.945 | [Download](3740/takitsubo_rikou_toarumajutsunoindex.zip) |  |  |  |  |  |  | [<NSFW, click to see>](3740/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3740/previews/nude.png) | [<NSFW, click to see>](3740/previews/nude2.png) |  |  |
| 3400 | 0.799 | [Download](3400/takitsubo_rikou_toarumajutsunoindex.zip) |  |  |  |  |  |  | [<NSFW, click to see>](3400/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3400/previews/nude.png) | [<NSFW, click to see>](3400/previews/nude2.png) |  |  |
| 3060 | 0.924 | [Download](3060/takitsubo_rikou_toarumajutsunoindex.zip) |  |  |  |  |  |  | [<NSFW, click to see>](3060/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3060/previews/nude.png) | [<NSFW, click to see>](3060/previews/nude2.png) |  |  |
| 2720 | 0.941 | [Download](2720/takitsubo_rikou_toarumajutsunoindex.zip) |  |  |  |  |  |  | [<NSFW, click to see>](2720/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2720/previews/nude.png) | [<NSFW, click to see>](2720/previews/nude2.png) |  |  |
| 2380 | 0.914 | [Download](2380/takitsubo_rikou_toarumajutsunoindex.zip) |  |  |  |  |  |  | [<NSFW, click to see>](2380/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2380/previews/nude.png) | [<NSFW, click to see>](2380/previews/nude2.png) |  |  |
| 2040 | 0.939 | [Download](2040/takitsubo_rikou_toarumajutsunoindex.zip) |  |  |  |  |  |  | [<NSFW, click to see>](2040/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2040/previews/nude.png) | [<NSFW, click to see>](2040/previews/nude2.png) |  |  |
| 1700 | 0.856 | [Download](1700/takitsubo_rikou_toarumajutsunoindex.zip) |  |  |  |  |  |  | [<NSFW, click to see>](1700/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1700/previews/nude.png) | [<NSFW, click to see>](1700/previews/nude2.png) |  |  |
| 1360 | 0.838 | [Download](1360/takitsubo_rikou_toarumajutsunoindex.zip) |  |  |  |  |  |  | [<NSFW, click to see>](1360/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1360/previews/nude.png) | [<NSFW, click to see>](1360/previews/nude2.png) |  |  |
| 1020 | 0.810 | [Download](1020/takitsubo_rikou_toarumajutsunoindex.zip) |  |  |  |  |  |  | [<NSFW, click to see>](1020/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1020/previews/nude.png) | [<NSFW, click to see>](1020/previews/nude2.png) |  |  |
| 680 | 0.757 | [Download](680/takitsubo_rikou_toarumajutsunoindex.zip) |  |  |  |  |  |  | [<NSFW, click to see>](680/previews/bondage.png) |  |  |  | [<NSFW, click to see>](680/previews/nude.png) | [<NSFW, click to see>](680/previews/nude2.png) |  |  |
| 340 | 0.501 | [Download](340/takitsubo_rikou_toarumajutsunoindex.zip) |  |  |  |  |  |  | [<NSFW, click to see>](340/previews/bondage.png) |  |  |  | [<NSFW, click to see>](340/previews/nude.png) | [<NSFW, click to see>](340/previews/nude2.png) |  |  |
|
Legendary256/const-v1-llama2-13b-chat
|
Legendary256
| 2023-09-16T13:04:31Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-09-16T13:04:29Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: float16
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.5.0
- PEFT 0.5.0
|
bongo2112/sdxl-db-mbosso-headshot
|
bongo2112
| 2023-09-16T12:58:56Z | 3 | 3 |
diffusers
|
[
"diffusers",
"tensorboard",
"text-to-image",
"autotrain",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0",
"region:us"
] |
text-to-image
| 2023-09-16T12:56:52Z |
---
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: photo of mshededembosso man
tags:
- text-to-image
- diffusers
- autotrain
inference: true
---
# DreamBooth trained by AutoTrain
Text encoder was not trained.
|
VictorGil75/Modelo_Clasificacion_Taller_NoTaller_GPT4
|
VictorGil75
| 2023-09-16T12:49:40Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-09-16T12:49:31Z |
# Modelo_Clasificacion_Taller_NoTaller_GPT4
Modelo de clasificación para identificar si una revisión pertenece a un Taller o no.
Este modelo fue entrenado para clasificar reseñas en la categoría 'Taller' o 'No Taller'.
|
ByadminPresents/ru-rivet-RVC-model
|
ByadminPresents
| 2023-09-16T12:43:51Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-09-16T12:23:18Z |
RVCv2 model of Rivet from Ratchet and Clank Rift Apart (russian dub) 200 epochs
---
license: openrail
---
|
Muhammadreza/dl-test-1
|
Muhammadreza
| 2023-09-16T12:42:21Z | 3 | 1 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-09-16T12:31:46Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### dl-test-1 Dreambooth model trained by Muhammadreza with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
upro/blip
|
upro
| 2023-09-16T12:37:39Z | 101 | 2 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"blip",
"image-text-to-text",
"image-captioning",
"image-to-text",
"arxiv:2201.12086",
"license:bsd-3-clause",
"endpoints_compatible",
"region:us"
] |
image-to-text
| 2023-09-16T12:37:39Z |
---
pipeline_tag: image-to-text
tags:
- image-captioning
languages:
- en
license: bsd-3-clause
---
# BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation
Model card for image captioning pretrained on COCO dataset - base architecture (with ViT large backbone).
|  |
|:--:|
| <b> Pull figure from BLIP official repo | Image source: https://github.com/salesforce/BLIP </b>|
## TL;DR
Authors from the [paper](https://arxiv.org/abs/2201.12086) write in the abstract:
*Vision-Language Pre-training (VLP) has advanced the performance for many vision-language tasks. However, most existing pre-trained models only excel in either understanding-based tasks or generation-based tasks. Furthermore, performance improvement has been largely achieved by scaling up the dataset with noisy image-text pairs collected from the web, which is a suboptimal source of supervision. In this paper, we propose BLIP, a new VLP framework which transfers flexibly to both vision-language understanding and generation tasks. BLIP effectively utilizes the noisy web data by bootstrapping the captions, where a captioner generates synthetic captions and a filter removes the noisy ones. We achieve state-of-the-art results on a wide range of vision-language tasks, such as image-text retrieval (+2.7% in average recall@1), image captioning (+2.8% in CIDEr), and VQA (+1.6% in VQA score). BLIP also demonstrates strong generalization ability when directly transferred to videolanguage tasks in a zero-shot manner. Code, models, and datasets are released.*
## Usage
You can use this model for conditional and unconditional image captioning.
### Using the Pytorch model
#### Running the model on CPU
<details>
<summary> Click to expand </summary>
```python
import requests
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration
processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-large")
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-large")
img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')
# conditional image captioning
text = "a photography of"
inputs = processor(raw_image, text, return_tensors="pt")
out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
# unconditional image captioning
inputs = processor(raw_image, return_tensors="pt")
out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
```
</details>
#### Running the model on GPU
##### In full precision
<details>
<summary> Click to expand </summary>
```python
import requests
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration
processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-large")
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-large").to("cuda")
img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')
# conditional image captioning
text = "a photography of"
inputs = processor(raw_image, text, return_tensors="pt").to("cuda")
out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
# unconditional image captioning
inputs = processor(raw_image, return_tensors="pt").to("cuda")
out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
```
</details>
##### In half precision (`float16`)
<details>
<summary> Click to expand </summary>
```python
import torch
import requests
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration
processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-large")
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-large", torch_dtype=torch.float16).to("cuda")
img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')
# conditional image captioning
text = "a photography of"
inputs = processor(raw_image, text, return_tensors="pt").to("cuda", torch.float16)
out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
# >>> a photography of a woman and her dog
# unconditional image captioning
inputs = processor(raw_image, return_tensors="pt").to("cuda", torch.float16)
out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
# >>> a woman sitting on the beach with her dog
```
</details>
## BibTex and citation info
```
@misc{https://doi.org/10.48550/arxiv.2201.12086,
doi = {10.48550/ARXIV.2201.12086},
url = {https://arxiv.org/abs/2201.12086},
author = {Li, Junnan and Li, Dongxu and Xiong, Caiming and Hoi, Steven},
keywords = {Computer Vision and Pattern Recognition (cs.CV), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
fetiska/cold_weapon
|
fetiska
| 2023-09-16T12:18:33Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2023-09-16T12:18:30Z |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: fetiska/cold_weapon
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
DomArruda/LunarLanderV2
|
DomArruda
| 2023-09-16T12:08:18Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-09-16T12:06:48Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 262.13 +/- 20.31
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is an assumption — check the repo's file list for the exact name):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# "ppo-LunarLander-v2.zip" is an assumed filename; confirm it in the repo before use.
checkpoint = load_from_hub(repo_id="DomArruda/LunarLanderV2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
kyuwon416/dqn-SpaceInvadersNoFrameskip-v4
|
kyuwon416
| 2023-09-16T11:55:56Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-09-16T11:55:19Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 527.00 +/- 171.86
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```bash
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga kyuwon416 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```bash
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga kyuwon416 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```bash
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga kyuwon416
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
imone/CodeLlama_13B_with_EOT_token
|
imone
| 2023-09-16T11:55:50Z | 8 | 1 |
transformers
|
[
"transformers",
"pytorch",
"license:llama2",
"endpoints_compatible",
"region:us"
] | null | 2023-09-16T11:44:59Z |
---
license: llama2
---
# Code Llama 13B with End-of-turn (EOT) Token
This is the Code Llama 13B model with `<|end_of_turn|>` token added as id `32016` and other special tokens. The token input/output embedding is initialized as the mean of all existing input/output token embeddings, respectively.
## Special tokens added:
```json
{
"<|end_of_turn|>": 32016,
"<|verdict|>": 32017,
"<|PAD|>": 32018,
"<|PAD2|>": 32019,
}
```
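A hedged sketch of the mean-initialization step described above (the base checkpoint name is an assumption, and this reconstructs the description rather than any released code):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

base = "codellama/CodeLlama-13b-hf"  # assumed base checkpoint
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype=torch.float16)

new_tokens = ["<|end_of_turn|>", "<|verdict|>", "<|PAD|>", "<|PAD2|>"]
old_vocab = model.get_input_embeddings().weight.shape[0]
tokenizer.add_special_tokens({"additional_special_tokens": new_tokens})
model.resize_token_embeddings(len(tokenizer))

with torch.no_grad():
    in_emb = model.get_input_embeddings().weight    # re-fetch: resize replaces the modules
    out_emb = model.get_output_embeddings().weight
    in_emb[old_vocab:] = in_emb[:old_vocab].mean(dim=0)    # mean of existing input embeddings
    out_emb[old_vocab:] = out_emb[:old_vocab].mean(dim=0)  # mean of existing output embeddings
```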
|
open-eye/RETFound_MAE
|
open-eye
| 2023-09-16T11:52:03Z | 0 | 7 | null |
[
"Foundation model for eye",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2023-09-16T11:28:33Z |
---
tags:
- Foundation model for eye
license: cc-by-nc-4.0
---
## RETFound - A foundation model for retinal imaging
This is the official repo for RETFound, which is based on [MAE](https://github.com/facebookresearch/mae):
Please contact **ykzhoua@gmail.com** or **yukun.zhou.19@ucl.ac.uk** if you have questions.
Keras version implemented by Yuka Kihara can be found [here](https://github.com/uw-biomedical-ml/RETFound_MAE)
### Key features
- RETFound is pre-trained on 1.6 million retinal images with self-supervised learning
- RETFound has been validated in multiple disease detection tasks
- RETFound can be efficiently adapted to customised tasks
### News
- A [visualisation demo](https://github.com/rmaphoh/RETFound_MAE/blob/main/RETFound_visualize.ipynb) is added
### Install environment
1. Create environment with conda:
```bash
conda create -n retfound python=3.7.5 -y
conda activate retfound
```
2. Install dependencies
```bash
git clone https://github.com/rmaphoh/RETFound_MAE/
cd RETFound_MAE
pip install -r requirement.txt
```
### Fine-tuning with RETFound weights
To fine-tune RETFound on your own data, follow these steps:
1. Download the RETFound pre-trained weights
<table><tbody>
<!-- START TABLE -->
<!-- TABLE HEADER -->
<th valign="bottom"></th>
<th valign="bottom">ViT-Large</th>
<!-- TABLE BODY -->
<tr><td align="left">Colour fundus image</td>
<td align="center"><a href="https://drive.google.com/file/d/1l62zbWUFTlp214SvK6eMwPQZAzcwoeBE/view?usp=sharing">download</a></td>
</tr>
<!-- TABLE BODY -->
<tr><td align="left">OCT</td>
<td align="center"><a href="https://drive.google.com/file/d/1m6s7QYkjyjJDlpEuXm7Xp3PmjN-elfW2/view?usp=sharing">download</a></td>
</tr>
</tbody></table>
2. Organise your data into this directory structure (using IDRiD as an [example](Example.ipynb))
<p align="left">
<img src="./pic/file_index.jpg" width="160">
</p>
3. Start fine-tuning (using IDRiD as an example). A fine-tuned checkpoint will be saved during training, and evaluation will be run after training.
```bash
python -m torch.distributed.launch --nproc_per_node=1 --master_port=48798 main_finetune.py \
--batch_size 16 \
--world_size 1 \
--model vit_large_patch16 \
--epochs 50 \
--blr 5e-3 --layer_decay 0.65 \
--weight_decay 0.05 --drop_path 0.2 \
--nb_classes 5 \
--data_path ./IDRiD_data/ \
--task ./finetune_IDRiD/ \
--finetune ./RETFound_cfp_weights.pth
```
4. For evaluation only
```bash
python -m torch.distributed.launch --nproc_per_node=1 --master_port=48798 main_finetune.py \
--eval --batch_size 16 \
--world_size 1 \
--model vit_large_patch16 \
--epochs 50 \
--blr 5e-3 --layer_decay 0.65 \
--weight_decay 0.05 --drop_path 0.2 \
--nb_classes 5 \
--data_path ./IDRiD_data/ \
--task ./internal_IDRiD/ \
--resume ./finetune_IDRiD/checkpoint-best.pth
```
### Load the model and weights (if you want to call the model in your code)
```python
import torch
import models_vit
from util.pos_embed import interpolate_pos_embed
from timm.models.layers import trunc_normal_
# call the model
model = models_vit.__dict__['vit_large_patch16'](
num_classes=2,
drop_path_rate=0.2,
global_pool=True,
)
# load RETFound weights
checkpoint = torch.load('RETFound_cfp_weights.pth', map_location='cpu')
checkpoint_model = checkpoint['model']
state_dict = model.state_dict()
for k in ['head.weight', 'head.bias']:
if k in checkpoint_model and checkpoint_model[k].shape != state_dict[k].shape:
print(f"Removing key {k} from pretrained checkpoint")
del checkpoint_model[k]
# interpolate position embedding
interpolate_pos_embed(model, checkpoint_model)
# load pre-trained model
msg = model.load_state_dict(checkpoint_model, strict=False)
assert set(msg.missing_keys) == {'head.weight', 'head.bias', 'fc_norm.weight', 'fc_norm.bias'}
# manually initialize fc layer
trunc_normal_(model.head.weight, std=2e-5)
print("Model = %s" % str(model))
```
|
him009/bloomz_fine_tuned_marketing_email
|
him009
| 2023-09-16T11:45:40Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-09-16T11:45:39Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training (a reconstruction sketch follows the list):
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
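A hedged reconstruction of that config as code (values copied verbatim from the list; the base model for this adapter is not named in the card, so model loading is omitted):
```python
import torch
from transformers import BitsAndBytesConfig

# Each value mirrors the corresponding entry in the list above.
bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
    load_in_4bit=False,
    llm_int8_threshold=6.0,
    llm_int8_skip_modules=None,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
    bnb_4bit_quant_type="fp4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float32,
)
```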
### Framework versions
- PEFT 0.6.0.dev0
|
bongo2112/sdxl-db-harmonize-headshot
|
bongo2112
| 2023-09-16T11:43:38Z | 3 | 1 |
diffusers
|
[
"diffusers",
"tensorboard",
"text-to-image",
"autotrain",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0",
"region:us"
] |
text-to-image
| 2023-09-16T11:41:34Z |
---
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: photo of kondeharmo man
tags:
- text-to-image
- diffusers
- autotrain
inference: true
---
# DreamBooth trained by AutoTrain
Text encoder was not trained.
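A hedged loading sketch (AutoTrain DreamBooth runs for SDXL typically export LoRA weights; if this repo instead holds full pipeline weights, load it directly with `from_pretrained`):
```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
# Assumes the repo contains LoRA weights, as AutoTrain DreamBooth usually produces.
pipe.load_lora_weights("bongo2112/sdxl-db-harmonize-headshot")
image = pipe("photo of kondeharmo man, studio headshot").images[0]
image.save("headshot.png")
```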
|
raffel-22/emotion_classification_2
|
raffel-22
| 2023-09-16T11:35:36Z | 217 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-09-16T11:19:27Z |
---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: emotion_classification_2
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.51875
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# emotion_classification_2
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3274
- Accuracy: 0.5188
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 20 | 1.9337 | 0.3563 |
| No log | 2.0 | 40 | 1.7116 | 0.3375 |
| No log | 3.0 | 60 | 1.5755 | 0.4562 |
| No log | 4.0 | 80 | 1.4939 | 0.45 |
| No log | 5.0 | 100 | 1.4377 | 0.5062 |
| No log | 6.0 | 120 | 1.4363 | 0.4562 |
| No log | 7.0 | 140 | 1.3615 | 0.5125 |
| No log | 8.0 | 160 | 1.3021 | 0.5375 |
| No log | 9.0 | 180 | 1.3307 | 0.525 |
| No log | 10.0 | 200 | 1.3085 | 0.4938 |
| No log | 11.0 | 220 | 1.2798 | 0.5813 |
| No log | 12.0 | 240 | 1.2707 | 0.525 |
| No log | 13.0 | 260 | 1.2339 | 0.55 |
| No log | 14.0 | 280 | 1.3053 | 0.5437 |
| No log | 15.0 | 300 | 1.3038 | 0.4938 |
| No log | 16.0 | 320 | 1.3088 | 0.5375 |
| No log | 17.0 | 340 | 1.3336 | 0.5312 |
| No log | 18.0 | 360 | 1.3053 | 0.5 |
| No log | 19.0 | 380 | 1.2206 | 0.5687 |
| No log | 20.0 | 400 | 1.2598 | 0.5312 |
| No log | 21.0 | 420 | 1.3332 | 0.5125 |
| No log | 22.0 | 440 | 1.3388 | 0.5312 |
| No log | 23.0 | 460 | 1.3129 | 0.5563 |
| No log | 24.0 | 480 | 1.3632 | 0.5062 |
| 0.9153 | 25.0 | 500 | 1.4166 | 0.4688 |
| 0.9153 | 26.0 | 520 | 1.4094 | 0.5 |
| 0.9153 | 27.0 | 540 | 1.4294 | 0.475 |
| 0.9153 | 28.0 | 560 | 1.4937 | 0.475 |
| 0.9153 | 29.0 | 580 | 1.3897 | 0.4938 |
| 0.9153 | 30.0 | 600 | 1.4565 | 0.475 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
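A minimal inference sketch (the image path is hypothetical; the label set is defined in the repo's `config.json`):
```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="raffel-22/emotion_classification_2",
)
print(classifier("face.jpg"))  # "face.jpg" is a hypothetical local image path
```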
|
beethogedeon/truck-detection
|
beethogedeon
| 2023-09-16T11:24:49Z | 0 | 0 | null |
[
"object-detection",
"en",
"fr",
"dataset:Beetho/Trucks-Detection-Yolov8",
"license:gpl-3.0",
"region:us"
] |
object-detection
| 2023-09-16T02:23:09Z |
---
license: gpl-3.0
datasets:
- Beetho/Trucks-Detection-Yolov8
language:
- en
- fr
pipeline_tag: object-detection
---
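The card body is otherwise empty; given the YOLOv8 dataset tag, a hypothetical loading sketch (the weight filename `best.pt` is an assumption — check the repo's file list):
```python
from huggingface_hub import hf_hub_download
from ultralytics import YOLO

# "best.pt" is an assumed filename; inspect the repo to confirm what it actually ships.
weights = hf_hub_download(repo_id="beethogedeon/truck-detection", filename="best.pt")
model = YOLO(weights)
results = model.predict("trucks.jpg")  # hypothetical input image
```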
|
samkas125/bert-large-legal-sentence-classification
|
samkas125
| 2023-09-16T11:20:32Z | 202 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"bert",
"text-classification",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-09-16T07:37:20Z |
---
license: mit
---
# bert-large-legal-sentence-classification
## Model Description
**bert-large-legal-sentence-classification** is a fine-tuned `bert-large-cased` model that is ready to use for legal sentence classification into rhetorical roles. It was fine-tuned using a repository of [analyzed disability-claim decisions](https://github.com/vernrwalker/VetClaims-JSON/) issued by the Board of Veterans' Appeals ("BVA") of the U.S. Department of Veterans Affairs.
### How to use
#### With HuggingFace pipeline
```python
from transformers import AutoTokenizer, BertForSequenceClassification
from transformers import pipeline
tokenizer = AutoTokenizer.from_pretrained("samkas125/bert-large-legal-sentence-classification")
model = BertForSequenceClassification.from_pretrained('samkas125/bert-large-legal-sentence-classification')
nlp = pipeline("text-classification", model=model, tokenizer=tokenizer)
example = "The Veteran did not have a psychiatric disorder in service that was unrelated to the use of drugs."
results = nlp(example)
print(results)
```
#### Without HuggingFace pipeline
```python
from transformers import AutoTokenizer, BertForSequenceClassification
from torch import softmax, argmax
tokenizer = AutoTokenizer.from_pretrained("samkas125/bert-large-legal-sentence-classification")
model = BertForSequenceClassification.from_pretrained('samkas125/bert-large-legal-sentence-classification')
sentence = "The Veteran did not have a psychiatric disorder in service that was unrelated to the use of drugs."
encoded_input = tokenizer(sentence, return_tensors='pt')
output = model(**encoded_input)
logits = output.logits
probs = softmax(logits, dim=1)
predicted_class = argmax(probs, dim=1).item()
print(predicted_class)
```
Check the `config.json` file to map the index of a class to its name.
### Limitations and bias
This model is limited by the size of its training set of 6153 sentences. Additionally, it was only trained on disability claim decisions issued by the U.S. Department of Veterans Affairs. It may not generalize well to non-legal sentences, or sentences from other types of cases within the legal domain.
## Training data
A repository of [analyzed disability-claim decisions](https://github.com/vernrwalker/VetClaims-JSON/) issued by the Board of Veterans' Appeals ("BVA") of the U.S. Department of Veterans Affairs was used to fine-tune the `bert-large-cased` model for sequence classification. It consisted of 6153 sentences from 50 cases. The sentences were classified by the Research Laboratory for Law, Logic and Technology (LLT Lab) at the Maurice A. Deane School of Law at Hofstra University, in New York. The analysis consists of classifying sentences in those decisions according to their rhetorical role (as defined within the same repository).
|
CDS-GROUP/graph-classification
|
CDS-GROUP
| 2023-09-16T11:16:29Z | 38 | 0 |
transformers
|
[
"transformers",
"pytorch",
"graphormer",
"generated_from_trainer",
"base_model:clefourrier/graphormer-base-pcqm4mv2",
"base_model:finetune:clefourrier/graphormer-base-pcqm4mv2",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2023-09-16T11:16:17Z |
---
license: mit
base_model: clefourrier/graphormer-base-pcqm4mv2
tags:
- generated_from_trainer
model-index:
- name: graph-classification
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# graph-classification
This model is a fine-tuned version of [clefourrier/graphormer-base-pcqm4mv2](https://huggingface.co/clefourrier/graphormer-base-pcqm4mv2) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- gradient_accumulation_steps: 10
- total_train_batch_size: 1280
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.0
- Datasets 2.14.5
- Tokenizers 0.13.3
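A minimal load-only sketch (Graphormer inference additionally requires graph preprocessing with the matching data collator, which is omitted here):
```python
from transformers import GraphormerForGraphClassification

# Load the fine-tuned checkpoint; inputs must be preprocessed graphs, not raw text.
model = GraphormerForGraphClassification.from_pretrained("CDS-GROUP/graph-classification")
print(model.config.num_classes)
```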
|
kyuwon416/LunarLander-v2
|
kyuwon416
| 2023-09-16T11:07:42Z | 0 | 0 | null |
[
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-09-16T11:04:41Z |
---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -159.89 +/- 85.61
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo',
 'seed': 1,
 'torch_deterministic': True,
 'cuda': True,
 'track': False,
 'wandb_project_name': 'cleanRL',
 'wandb_entity': None,
 'capture_video': False,
 'env_id': 'LunarLander-v2',
 'total_timesteps': 50000,
 'learning_rate': 0.00025,
 'num_envs': 4,
 'num_steps': 128,
 'anneal_lr': True,
 'gae': True,
 'gamma': 0.99,
 'gae_lambda': 0.95,
 'num_minibatches': 4,
 'update_epochs': 4,
 'norm_adv': True,
 'clip_coef': 0.2,
 'clip_vloss': True,
 'ent_coef': 0.01,
 'vf_coef': 0.5,
 'max_grad_norm': 0.5,
 'target_kl': None,
 'repo_id': 'kyuwon416/LunarLander-v2-1',
 'batch_size': 512,
 'minibatch_size': 128}
```
|
CyberHarem/uiharu_kazari_toarumajutsunoindex
|
CyberHarem
| 2023-09-16T10:53:02Z | 0 | 0 | null |
[
"art",
"text-to-image",
"dataset:CyberHarem/uiharu_kazari_toarumajutsunoindex",
"license:mit",
"region:us"
] |
text-to-image
| 2023-08-15T21:30:58Z |
---
license: mit
datasets:
- CyberHarem/uiharu_kazari_toarumajutsunoindex
pipeline_tag: text-to-image
tags:
- art
---
# Lora of uiharu_kazari_toarumajutsunoindex
This model is trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion). And the auto-training framework is maintained by [DeepGHS Team](https://huggingface.co/deepghs).
The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11).
After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora.
For example, if you want to use the model from step 4080, you need to download `4080/uiharu_kazari_toarumajutsunoindex.pt` as the embedding and `4080/uiharu_kazari_toarumajutsunoindex.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters.
**The best step we recommend is 4080**, with a score of 0.974. The trigger words are:
1. `uiharu_kazari_toarumajutsunoindex`
2. `short_hair, black_hair, hair_ornament, flower, hair_flower, head_wreath, serafuku, brown_eyes`
This model is not recommended for the following groups, and we express our regret:
1. Individuals who cannot tolerate any deviation from the original character design, even in the slightest detail.
2. Individuals whose application scenarios demand high accuracy in recreating character outfits.
3. Individuals who cannot accept the potential randomness of AI-generated images based on the Stable Diffusion algorithm.
4. Individuals who are not comfortable with the fully automated process of training character models with LoRA, or who believe that character models must be trained purely by hand to avoid disrespecting the characters.
5. Individuals who find the generated image content offensive to their values.
These are available steps:
| Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | pattern_5 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata |
|:---------|:----------|:-----------------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------|:--------------------------------------------------|:-------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------|
| 5100 | 0.955 | [Download](5100/uiharu_kazari_toarumajutsunoindex.zip) |  |  |  |  |  |  | [<NSFW, click to see>](5100/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5100/previews/nude.png) | [<NSFW, click to see>](5100/previews/nude2.png) |  |  |
| 4760 | 0.946 | [Download](4760/uiharu_kazari_toarumajutsunoindex.zip) |  |  |  |  |  |  | [<NSFW, click to see>](4760/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4760/previews/nude.png) | [<NSFW, click to see>](4760/previews/nude2.png) |  |  |
| 4420 | 0.969 | [Download](4420/uiharu_kazari_toarumajutsunoindex.zip) |  |  |  |  |  |  | [<NSFW, click to see>](4420/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4420/previews/nude.png) | [<NSFW, click to see>](4420/previews/nude2.png) |  |  |
| **4080** | **0.974** | [**Download**](4080/uiharu_kazari_toarumajutsunoindex.zip) |  |  |  |  |  |  | [<NSFW, click to see>](4080/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4080/previews/nude.png) | [<NSFW, click to see>](4080/previews/nude2.png) |  |  |
| 3740 | 0.858 | [Download](3740/uiharu_kazari_toarumajutsunoindex.zip) |  |  |  |  |  |  | [<NSFW, click to see>](3740/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3740/previews/nude.png) | [<NSFW, click to see>](3740/previews/nude2.png) |  |  |
| 3400 | 0.886 | [Download](3400/uiharu_kazari_toarumajutsunoindex.zip) |  |  |  |  |  |  | [<NSFW, click to see>](3400/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3400/previews/nude.png) | [<NSFW, click to see>](3400/previews/nude2.png) |  |  |
| 3060 | 0.865 | [Download](3060/uiharu_kazari_toarumajutsunoindex.zip) |  |  |  |  |  |  | [<NSFW, click to see>](3060/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3060/previews/nude.png) | [<NSFW, click to see>](3060/previews/nude2.png) |  |  |
| 2720 | 0.895 | [Download](2720/uiharu_kazari_toarumajutsunoindex.zip) |  |  |  |  |  |  | [<NSFW, click to see>](2720/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2720/previews/nude.png) | [<NSFW, click to see>](2720/previews/nude2.png) |  |  |
| 2380 | 0.924 | [Download](2380/uiharu_kazari_toarumajutsunoindex.zip) |  |  |  |  |  |  | [<NSFW, click to see>](2380/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2380/previews/nude.png) | [<NSFW, click to see>](2380/previews/nude2.png) |  |  |
| 2040 | 0.929 | [Download](2040/uiharu_kazari_toarumajutsunoindex.zip) |  |  |  |  |  |  | [<NSFW, click to see>](2040/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2040/previews/nude.png) | [<NSFW, click to see>](2040/previews/nude2.png) |  |  |
| 1700 | 0.843 | [Download](1700/uiharu_kazari_toarumajutsunoindex.zip) |  |  |  |  |  |  | [<NSFW, click to see>](1700/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1700/previews/nude.png) | [<NSFW, click to see>](1700/previews/nude2.png) |  |  |
| 1360 | 0.806 | [Download](1360/uiharu_kazari_toarumajutsunoindex.zip) |  |  |  |  |  |  | [<NSFW, click to see>](1360/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1360/previews/nude.png) | [<NSFW, click to see>](1360/previews/nude2.png) |  |  |
| 1020 | 0.795 | [Download](1020/uiharu_kazari_toarumajutsunoindex.zip) |  |  |  |  |  |  | [<NSFW, click to see>](1020/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1020/previews/nude.png) | [<NSFW, click to see>](1020/previews/nude2.png) |  |  |
| 680 | 0.694 | [Download](680/uiharu_kazari_toarumajutsunoindex.zip) |  |  |  |  |  |  | [<NSFW, click to see>](680/previews/bondage.png) |  |  |  | [<NSFW, click to see>](680/previews/nude.png) | [<NSFW, click to see>](680/previews/nude2.png) |  |  |
| 340 | 0.614 | [Download](340/uiharu_kazari_toarumajutsunoindex.zip) |  |  |  |  |  |  | [<NSFW, click to see>](340/previews/bondage.png) |  |  |  | [<NSFW, click to see>](340/previews/nude.png) | [<NSFW, click to see>](340/previews/nude2.png) |  |  |
|
kanishka/smolm-mlm-bpe-unmask-seed_333
|
kanishka
| 2023-09-16T10:24:59Z | 115 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"fill-mask",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-09-16T08:23:14Z |
---
base_model: models/smolm-mlm/config.json
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: smolm-mlm-bpe-unmask-seed_333
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smolm-mlm-bpe-unmask-seed_333
This model is a fine-tuned version of [models/smolm-mlm/config.json](https://huggingface.co/models/smolm-mlm/config.json) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6815
- Accuracy: 0.4514
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 512
- seed: 333
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 24000
- num_epochs: 10.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:------:|:---------------:|:--------:|
| 3.5475 | 1.0 | 11938 | 3.5281 | 0.3476 |
| 3.2985 | 2.0 | 23876 | 3.3235 | 0.3647 |
| 3.1876 | 3.0 | 35814 | 3.1843 | 0.3816 |
| 3.032 | 4.0 | 47752 | 3.0760 | 0.3987 |
| 2.9248 | 5.0 | 59690 | 2.9351 | 0.4159 |
| 2.8678 | 6.0 | 71628 | 2.8834 | 0.4227 |
| 2.7793 | 7.0 | 83566 | 2.8332 | 0.4274 |
| 2.7193 | 8.0 | 95504 | 2.7603 | 0.4419 |
| 2.66 | 9.0 | 107442 | 2.7326 | 0.4453 |
| 2.6078 | 10.0 | 119380 | 2.7035 | 0.4500 |
### Framework versions
- Transformers 4.32.1
- Pytorch 1.13.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
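A minimal fill-mask sketch (assuming the tokenizer uses the RoBERTa-style `<mask>` token, which matches the `roberta` architecture tag):
```python
from transformers import pipeline

unmasker = pipeline("fill-mask", model="kanishka/smolm-mlm-bpe-unmask-seed_333")
print(unmasker("The cat sat on the <mask>."))
```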
|
CyberHarem/index_toarumajutsunoindex
|
CyberHarem
| 2023-09-16T10:10:24Z | 0 | 0 | null |
[
"art",
"text-to-image",
"dataset:CyberHarem/index_toarumajutsunoindex",
"license:mit",
"region:us"
] |
text-to-image
| 2023-08-15T19:18:09Z |
---
license: mit
datasets:
- CyberHarem/index_toarumajutsunoindex
pipeline_tag: text-to-image
tags:
- art
---
# Lora of index_toarumajutsunoindex
This model is trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion). And the auto-training framework is maintained by [DeepGHS Team](https://huggingface.co/deepghs).
The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11).
After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora.
For example, if you want to use the model from step 11700, you need to download `11700/index_toarumajutsunoindex.pt` as the embedding and `11700/index_toarumajutsunoindex.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters.
**The best step we recommend is 11700**, with a score of 0.916. The trigger words are:
1. `index_toarumajutsunoindex`
2. `long_hair, habit, nun, safety_pin, robe, green_eyes, grey_hair, blue_hair`
This model is not recommended for the following groups, and we express our regret:
1. Individuals who cannot tolerate any deviation from the original character design, even in the slightest detail.
2. Individuals whose application scenarios demand high accuracy in recreating character outfits.
3. Individuals who cannot accept the potential randomness of AI-generated images based on the Stable Diffusion algorithm.
4. Individuals who are not comfortable with the fully automated process of training character models with LoRA, or who believe that character models must be trained purely by hand to avoid disrespecting the characters.
5. Individuals who find the generated image content offensive to their values.
These are available steps:
| Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | pattern_5 | pattern_6 | pattern_7 | pattern_8 | pattern_9 | pattern_10 | pattern_11 | pattern_12 | pattern_13 | pattern_14 | pattern_15 | pattern_16 | pattern_17 | pattern_18 | pattern_19 | pattern_20 | pattern_21 | pattern_22 | pattern_23 | pattern_24 | pattern_25 | pattern_26 | pattern_27 | pattern_28 | pattern_29 | pattern_30 | pattern_31 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata |
|:----------|:----------|:----------------------------------------------------|:-------------------------------------------------|:-----------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:---------------------------------------------------|:---------------------------------------------------|:---------------------------------------------------|:---------------------------------------------------|:---------------------------------------------------|:---------------------------------------------------|:---------------------------------------------------|:---------------------------------------------------|:---------------------------------------------------|:---------------------------------------------------|:---------------------------------------------------|:---------------------------------------------------|:---------------------------------------------------|:---------------------------------------------------|:---------------------------------------------------|:---------------------------------------------------|:---------------------------------------------------|:---------------------------------------------------|:---------------------------------------------------|:---------------------------------------------------|:---------------------------------------------------|:---------------------------------------------------|:-------------------------------------------|:---------------------------------------------------|:---------------------------------------|:---------------------------------------|:---------------------------------------|:------------------------------------------------|:-------------------------------------------------|:---------------------------------------|:-------------------------------------------|
| **11700** | **0.916** | [**Download**](11700/index_toarumajutsunoindex.zip) |  | [<NSFW, click to see>](11700/previews/pattern_2.png) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](11700/previews/bondage.png) |  |  |  | [<NSFW, click to see>](11700/previews/nude.png) | [<NSFW, click to see>](11700/previews/nude2.png) |  |  |
| 10920 | 0.909 | [Download](10920/index_toarumajutsunoindex.zip) |  | [<NSFW, click to see>](10920/previews/pattern_2.png) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](10920/previews/bondage.png) |  |  |  | [<NSFW, click to see>](10920/previews/nude.png) | [<NSFW, click to see>](10920/previews/nude2.png) |  |  |
| 10140 | 0.842 | [Download](10140/index_toarumajutsunoindex.zip) |  | [<NSFW, click to see>](10140/previews/pattern_2.png) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](10140/previews/bondage.png) |  |  |  | [<NSFW, click to see>](10140/previews/nude.png) | [<NSFW, click to see>](10140/previews/nude2.png) |  |  |
| 9360 | 0.906 | [Download](9360/index_toarumajutsunoindex.zip) |  | [<NSFW, click to see>](9360/previews/pattern_2.png) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](9360/previews/bondage.png) |  |  |  | [<NSFW, click to see>](9360/previews/nude.png) | [<NSFW, click to see>](9360/previews/nude2.png) |  |  |
| 8580 | 0.896 | [Download](8580/index_toarumajutsunoindex.zip) |  | [<NSFW, click to see>](8580/previews/pattern_2.png) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](8580/previews/bondage.png) |  |  |  | [<NSFW, click to see>](8580/previews/nude.png) | [<NSFW, click to see>](8580/previews/nude2.png) |  |  |
| 7800 | 0.829 | [Download](7800/index_toarumajutsunoindex.zip) |  | [<NSFW, click to see>](7800/previews/pattern_2.png) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](7800/previews/bondage.png) |  |  |  | [<NSFW, click to see>](7800/previews/nude.png) | [<NSFW, click to see>](7800/previews/nude2.png) |  |  |
| 7020 | 0.820 | [Download](7020/index_toarumajutsunoindex.zip) |  | [<NSFW, click to see>](7020/previews/pattern_2.png) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](7020/previews/bondage.png) |  |  |  | [<NSFW, click to see>](7020/previews/nude.png) | [<NSFW, click to see>](7020/previews/nude2.png) |  |  |
| 6240 | 0.800 | [Download](6240/index_toarumajutsunoindex.zip) |  | [<NSFW, click to see>](6240/previews/pattern_2.png) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](6240/previews/bondage.png) |  |  |  | [<NSFW, click to see>](6240/previews/nude.png) | [<NSFW, click to see>](6240/previews/nude2.png) |  |  |
| 5460 | 0.823 | [Download](5460/index_toarumajutsunoindex.zip) |  | [<NSFW, click to see>](5460/previews/pattern_2.png) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](5460/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5460/previews/nude.png) | [<NSFW, click to see>](5460/previews/nude2.png) |  |  |
| 4680 | 0.848 | [Download](4680/index_toarumajutsunoindex.zip) |  | [<NSFW, click to see>](4680/previews/pattern_2.png) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4680/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4680/previews/nude.png) | [<NSFW, click to see>](4680/previews/nude2.png) |  |  |
| 3900 | 0.847 | [Download](3900/index_toarumajutsunoindex.zip) |  | [<NSFW, click to see>](3900/previews/pattern_2.png) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3900/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3900/previews/nude.png) | [<NSFW, click to see>](3900/previews/nude2.png) |  |  |
| 3120 | 0.755 | [Download](3120/index_toarumajutsunoindex.zip) |  | [<NSFW, click to see>](3120/previews/pattern_2.png) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3120/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3120/previews/nude.png) | [<NSFW, click to see>](3120/previews/nude2.png) |  |  |
| 2340 | 0.826 | [Download](2340/index_toarumajutsunoindex.zip) |  | [<NSFW, click to see>](2340/previews/pattern_2.png) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2340/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2340/previews/nude.png) | [<NSFW, click to see>](2340/previews/nude2.png) |  |  |
| 1560 | 0.806 | [Download](1560/index_toarumajutsunoindex.zip) |  | [<NSFW, click to see>](1560/previews/pattern_2.png) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1560/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1560/previews/nude.png) | [<NSFW, click to see>](1560/previews/nude2.png) |  |  |
| 780 | 0.579 | [Download](780/index_toarumajutsunoindex.zip) |  | [<NSFW, click to see>](780/previews/pattern_2.png) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](780/previews/bondage.png) |  |  |  | [<NSFW, click to see>](780/previews/nude.png) | [<NSFW, click to see>](780/previews/nude2.png) |  |  |
|
Shishir1807/CT_M2
|
Shishir1807
| 2023-09-16T09:41:37Z | 164 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt_neox",
"text-generation",
"gpt",
"llm",
"large language model",
"h2o-llmstudio",
"en",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2023-09-16T09:41:01Z |
---
language:
- en
library_name: transformers
tags:
- gpt
- llm
- large language model
- h2o-llmstudio
inference: false
thumbnail: https://h2o.ai/etc.clientlibs/h2o/clientlibs/clientlib-site/resources/images/favicon.ico
---
# Model Card
## Summary
This model was trained using [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio).
- Base model: [EleutherAI/pythia-2.8b-deduped](https://huggingface.co/EleutherAI/pythia-2.8b-deduped)
## Usage
To use the model with the `transformers` library on a machine with GPUs, first make sure you have the `transformers`, `accelerate` and `torch` libraries installed.
```bash
pip install transformers==4.29.2
pip install einops==0.6.1
pip install accelerate==0.19.0
pip install torch==2.0.0
```
```python
import torch
from transformers import pipeline
generate_text = pipeline(
model="Shishir1807/CT_M2",
torch_dtype="auto",
trust_remote_code=True,
use_fast=True,
device_map={"": "cuda:0"},
)
res = generate_text(
"Why is drinking water so healthy?",
min_new_tokens=2,
max_new_tokens=256,
do_sample=False,
num_beams=1,
temperature=float(0.0),
repetition_penalty=float(1.2),
renormalize_logits=True
)
print(res[0]["generated_text"])
```
You can print a sample prompt after the preprocessing step to see how it is fed to the tokenizer:
```python
print(generate_text.preprocess("Why is drinking water so healthy?")["prompt_text"])
```
```bash
<|prompt|>Why is drinking water so healthy?<|endoftext|><|answer|>
```
Alternatively, you can download [h2oai_pipeline.py](h2oai_pipeline.py), store it alongside your notebook, and construct the pipeline yourself from the loaded model and tokenizer. If the model and the tokenizer are fully supported in the `transformers` package, this will allow you to set `trust_remote_code=False`.
```python
import torch
from h2oai_pipeline import H2OTextGenerationPipeline
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained(
"Shishir1807/CT_M2",
use_fast=True,
padding_side="left",
trust_remote_code=True,
)
model = AutoModelForCausalLM.from_pretrained(
"Shishir1807/CT_M2",
torch_dtype="auto",
device_map={"": "cuda:0"},
trust_remote_code=True,
)
generate_text = H2OTextGenerationPipeline(model=model, tokenizer=tokenizer)
res = generate_text(
"Why is drinking water so healthy?",
min_new_tokens=2,
max_new_tokens=256,
do_sample=False,
num_beams=1,
temperature=float(0.0),
repetition_penalty=float(1.2),
renormalize_logits=True
)
print(res[0]["generated_text"])
```
You may also construct the pipeline from the loaded model and tokenizer yourself and consider the preprocessing steps:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "Shishir1807/CT_M2" # either local folder or huggingface model name
# Important: The prompt needs to be in the same format the model was trained with.
# You can find an example prompt in the experiment logs.
prompt = "<|prompt|>How are you?<|endoftext|><|answer|>"
tokenizer = AutoTokenizer.from_pretrained(
model_name,
use_fast=True,
trust_remote_code=True,
)
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map={"": "cuda:0"},
trust_remote_code=True,
)
model.cuda().eval()
inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).to("cuda")
# generate configuration can be modified to your needs
tokens = model.generate(
input_ids=inputs["input_ids"],
attention_mask=inputs["attention_mask"],
min_new_tokens=2,
max_new_tokens=256,
do_sample=False,
num_beams=1,
temperature=float(0.0),
repetition_penalty=float(1.2),
renormalize_logits=True
)[0]
tokens = tokens[inputs["input_ids"].shape[1]:]
answer = tokenizer.decode(tokens, skip_special_tokens=True)
print(answer)
```
## Quantization and sharding
You can load the models using quantization by specifying `load_in_8bit=True` or `load_in_4bit=True`. Also, sharding on multiple GPUs is possible by setting `device_map="auto"`.
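A hedged example of such a quantized, sharded load (requires the `bitsandbytes` package; 8-bit shown):
```python
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "Shishir1807/CT_M2",
    load_in_8bit=True,   # int8 quantization via bitsandbytes
    device_map="auto",   # shards layers across all visible GPUs
    trust_remote_code=True,
)
```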
## Model Architecture
```
GPTNeoXForCausalLM(
(gpt_neox): GPTNeoXModel(
(embed_in): Embedding(50304, 2560)
(layers): ModuleList(
(0-31): 32 x GPTNeoXLayer(
(input_layernorm): LayerNorm((2560,), eps=1e-05, elementwise_affine=True)
(post_attention_layernorm): LayerNorm((2560,), eps=1e-05, elementwise_affine=True)
(attention): GPTNeoXAttention(
(rotary_emb): RotaryEmbedding()
(query_key_value): Linear(in_features=2560, out_features=7680, bias=True)
(dense): Linear(in_features=2560, out_features=2560, bias=True)
)
(mlp): GPTNeoXMLP(
(dense_h_to_4h): Linear(in_features=2560, out_features=10240, bias=True)
(dense_4h_to_h): Linear(in_features=10240, out_features=2560, bias=True)
(act): GELUActivation()
)
)
)
(final_layer_norm): LayerNorm((2560,), eps=1e-05, elementwise_affine=True)
)
(embed_out): Linear(in_features=2560, out_features=50304, bias=False)
)
```
## Model Configuration
This model was trained using H2O LLM Studio and with the configuration in [cfg.yaml](cfg.yaml). Visit [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio) to learn how to train your own large language models.
## Disclaimer
Please read this disclaimer carefully before using the large language model provided in this repository. Your use of the model signifies your agreement to the following terms and conditions.
- Biases and Offensiveness: The large language model is trained on a diverse range of internet text data, which may contain biased, racist, offensive, or otherwise inappropriate content. By using this model, you acknowledge and accept that the generated content may sometimes exhibit biases or produce content that is offensive or inappropriate. The developers of this repository do not endorse, support, or promote any such content or viewpoints.
- Limitations: The large language model is an AI-based tool and not a human. It may produce incorrect, nonsensical, or irrelevant responses. It is the user's responsibility to critically evaluate the generated content and use it at their discretion.
- Use at Your Own Risk: Users of this large language model must assume full responsibility for any consequences that may arise from their use of the tool. The developers and contributors of this repository shall not be held liable for any damages, losses, or harm resulting from the use or misuse of the provided model.
- Ethical Considerations: Users are encouraged to use the large language model responsibly and ethically. By using this model, you agree not to use it for purposes that promote hate speech, discrimination, harassment, or any form of illegal or harmful activities.
- Reporting Issues: If you encounter any biased, offensive, or otherwise inappropriate content generated by the large language model, please report it to the repository maintainers through the provided channels. Your feedback will help improve the model and mitigate potential issues.
- Changes to this Disclaimer: The developers of this repository reserve the right to modify or update this disclaimer at any time without prior notice. It is the user's responsibility to periodically review the disclaimer to stay informed about any changes.
By using the large language model provided in this repository, you agree to accept and comply with the terms and conditions outlined in this disclaimer. If you do not agree with any part of this disclaimer, you should refrain from using the model and any content generated by it.
|
ceviriyap/bert_5
|
ceviriyap
| 2023-09-16T09:33:52Z | 46 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"pretraining",
"generated_from_keras_callback",
"endpoints_compatible",
"region:us"
] | null | 2023-09-16T09:26:17Z |
---
tags:
- generated_from_keras_callback
model-index:
- name: bert_5
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# bert_5
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 7.7984
- Validation Loss: 8.7049
- Epoch: 4
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': 1e-04, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 8.4283 | 8.6635 | 0 |
| 7.9611 | 8.5577 | 1 |
| 7.8389 | 8.6501 | 2 |
| 7.7437 | 8.6738 | 3 |
| 7.7984 | 8.7049 | 4 |
### Framework versions
- Transformers 4.33.1
- TensorFlow 2.10.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
kaekitsune/RVC-MMTL
|
kaekitsune
| 2023-09-16T09:33:14Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-25T21:29:45Z |
---
license: creativeml-openrail-m
---
|
Shishir1807/CT_M5
|
Shishir1807
| 2023-09-16T09:26:26Z | 165 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt_neox",
"text-generation",
"gpt",
"llm",
"large language model",
"h2o-llmstudio",
"en",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2023-09-16T09:25:51Z |
---
language:
- en
library_name: transformers
tags:
- gpt
- llm
- large language model
- h2o-llmstudio
inference: false
thumbnail: https://h2o.ai/etc.clientlibs/h2o/clientlibs/clientlib-site/resources/images/favicon.ico
---
# Model Card
## Summary
This model was trained using [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio).
- Base model: [EleutherAI/pythia-2.8b-deduped](https://huggingface.co/EleutherAI/pythia-2.8b-deduped)
## Usage
To use the model with the `transformers` library on a machine with GPUs, first make sure you have the `transformers`, `accelerate` and `torch` libraries installed.
```bash
pip install transformers==4.29.2
pip install einops==0.6.1
pip install accelerate==0.19.0
pip install torch==2.0.0
```
```python
import torch
from transformers import pipeline
generate_text = pipeline(
model="Shishir1807/CT_M5",
torch_dtype="auto",
trust_remote_code=True,
use_fast=True,
device_map={"": "cuda:0"},
)
res = generate_text(
"Why is drinking water so healthy?",
min_new_tokens=2,
max_new_tokens=256,
do_sample=False,
num_beams=1,
temperature=float(0.0),
repetition_penalty=float(1.2),
renormalize_logits=True
)
print(res[0]["generated_text"])
```
You can print a sample prompt after the preprocessing step to see how it is fed to the tokenizer:
```python
print(generate_text.preprocess("Why is drinking water so healthy?")["prompt_text"])
```
```bash
<|prompt|>Why is drinking water so healthy?<|endoftext|><|answer|>
```
Alternatively, you can download [h2oai_pipeline.py](h2oai_pipeline.py), store it alongside your notebook, and construct the pipeline yourself from the loaded model and tokenizer. If the model and the tokenizer are fully supported in the `transformers` package, this will allow you to set `trust_remote_code=False`.
```python
import torch
from h2oai_pipeline import H2OTextGenerationPipeline
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained(
"Shishir1807/CT_M5",
use_fast=True,
padding_side="left",
trust_remote_code=True,
)
model = AutoModelForCausalLM.from_pretrained(
"Shishir1807/CT_M5",
torch_dtype="auto",
device_map={"": "cuda:0"},
trust_remote_code=True,
)
generate_text = H2OTextGenerationPipeline(model=model, tokenizer=tokenizer)
res = generate_text(
"Why is drinking water so healthy?",
min_new_tokens=2,
max_new_tokens=256,
do_sample=False,
num_beams=1,
temperature=float(0.0),
repetition_penalty=float(1.2),
renormalize_logits=True
)
print(res[0]["generated_text"])
```
You may also construct the pipeline from the loaded model and tokenizer yourself and consider the preprocessing steps:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "Shishir1807/CT_M5" # either local folder or huggingface model name
# Important: The prompt needs to be in the same format the model was trained with.
# You can find an example prompt in the experiment logs.
prompt = "<|prompt|>How are you?<|endoftext|><|answer|>"
tokenizer = AutoTokenizer.from_pretrained(
model_name,
use_fast=True,
trust_remote_code=True,
)
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map={"": "cuda:0"},
trust_remote_code=True,
)
model.cuda().eval()
inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).to("cuda")
# generate configuration can be modified to your needs
tokens = model.generate(
input_ids=inputs["input_ids"],
attention_mask=inputs["attention_mask"],
min_new_tokens=2,
max_new_tokens=256,
do_sample=False,
num_beams=1,
temperature=float(0.0),
repetition_penalty=float(1.2),
renormalize_logits=True
)[0]
tokens = tokens[inputs["input_ids"].shape[1]:]
answer = tokenizer.decode(tokens, skip_special_tokens=True)
print(answer)
```
## Quantization and sharding
You can load the models using quantization by specifying ```load_in_8bit=True``` or ```load_in_4bit=True```. Also, sharding on multiple GPUs is possible by setting ```device_map="auto"```.
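For example, a minimal sketch of 4-bit loading (assumes `bitsandbytes` is installed and a `transformers` version with 4-bit support, 4.30 or newer):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# 4-bit weights; device_map="auto" lets accelerate place layers on the available GPUs.
model = AutoModelForCausalLM.from_pretrained(
    "Shishir1807/CT_M5",
    load_in_4bit=True,
    device_map="auto",
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained("Shishir1807/CT_M5", trust_remote_code=True)
```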
## Model Architecture
```
GPTNeoXForCausalLM(
(gpt_neox): GPTNeoXModel(
(embed_in): Embedding(50304, 2560)
(layers): ModuleList(
(0-31): 32 x GPTNeoXLayer(
(input_layernorm): LayerNorm((2560,), eps=1e-05, elementwise_affine=True)
(post_attention_layernorm): LayerNorm((2560,), eps=1e-05, elementwise_affine=True)
(attention): GPTNeoXAttention(
(rotary_emb): RotaryEmbedding()
(query_key_value): Linear(in_features=2560, out_features=7680, bias=True)
(dense): Linear(in_features=2560, out_features=2560, bias=True)
)
(mlp): GPTNeoXMLP(
(dense_h_to_4h): Linear(in_features=2560, out_features=10240, bias=True)
(dense_4h_to_h): Linear(in_features=10240, out_features=2560, bias=True)
(act): GELUActivation()
)
)
)
(final_layer_norm): LayerNorm((2560,), eps=1e-05, elementwise_affine=True)
)
(embed_out): Linear(in_features=2560, out_features=50304, bias=False)
)
```
## Model Configuration
This model was trained using H2O LLM Studio and with the configuration in [cfg.yaml](cfg.yaml). Visit [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio) to learn how to train your own large language models.
## Disclaimer
Please read this disclaimer carefully before using the large language model provided in this repository. Your use of the model signifies your agreement to the following terms and conditions.
- Biases and Offensiveness: The large language model is trained on a diverse range of internet text data, which may contain biased, racist, offensive, or otherwise inappropriate content. By using this model, you acknowledge and accept that the generated content may sometimes exhibit biases or produce content that is offensive or inappropriate. The developers of this repository do not endorse, support, or promote any such content or viewpoints.
- Limitations: The large language model is an AI-based tool and not a human. It may produce incorrect, nonsensical, or irrelevant responses. It is the user's responsibility to critically evaluate the generated content and use it at their discretion.
- Use at Your Own Risk: Users of this large language model must assume full responsibility for any consequences that may arise from their use of the tool. The developers and contributors of this repository shall not be held liable for any damages, losses, or harm resulting from the use or misuse of the provided model.
- Ethical Considerations: Users are encouraged to use the large language model responsibly and ethically. By using this model, you agree not to use it for purposes that promote hate speech, discrimination, harassment, or any form of illegal or harmful activities.
- Reporting Issues: If you encounter any biased, offensive, or otherwise inappropriate content generated by the large language model, please report it to the repository maintainers through the provided channels. Your feedback will help improve the model and mitigate potential issues.
- Changes to this Disclaimer: The developers of this repository reserve the right to modify or update this disclaimer at any time without prior notice. It is the user's responsibility to periodically review the disclaimer to stay informed about any changes.
By using the large language model provided in this repository, you agree to accept and comply with the terms and conditions outlined in this disclaimer. If you do not agree with any part of this disclaimer, you should refrain from using the model and any content generated by it.
|
VictorGil75/Modelo_Clasificacion_Taller_NoTaller_V3
|
VictorGil75
| 2023-09-16T09:12:17Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-09-16T09:12:11Z |
# Modelo_Clasificacion_Taller_NoTaller_V3
A classification model to identify whether a review belongs to a workshop ('Taller') or not.
This model was trained to classify reviews into the 'Taller' or 'No Taller' category.
|
kingsankha/my_awesome_model
|
kingsankha
| 2023-09-16T09:09:51Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-09-15T14:23:52Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_keras_callback
model-index:
- name: kingsankha/my_awesome_model
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# kingsankha/my_awesome_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.3223
- Validation Loss: 0.2669
- Train Accuracy: 0.8878
- Epoch: 9
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 3785, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 1.0651 | 1.0124 | 0.4824 | 0 |
| 0.9823 | 0.8702 | 0.6169 | 1 |
| 0.8404 | 0.6475 | 0.7266 | 2 |
| 0.6569 | 0.4672 | 0.8116 | 3 |
| 0.5071 | 0.3573 | 0.8487 | 4 |
| 0.4086 | 0.3027 | 0.8694 | 5 |
| 0.3471 | 0.2676 | 0.8889 | 6 |
| 0.3220 | 0.2669 | 0.8878 | 7 |
| 0.3195 | 0.2669 | 0.8878 | 8 |
| 0.3223 | 0.2669 | 0.8878 | 9 |
### Framework versions
- Transformers 4.33.2
- TensorFlow 2.13.0
- Datasets 2.14.5
- Tokenizers 0.13.3
|
CyberHarem/ootsuki_yui_idolmastercinderellagirls
|
CyberHarem
| 2023-09-16T09:05:06Z | 0 | 0 | null |
[
"art",
"text-to-image",
"dataset:CyberHarem/ootsuki_yui_idolmastercinderellagirls",
"license:mit",
"region:us"
] |
text-to-image
| 2023-09-16T08:43:01Z |
---
license: mit
datasets:
- CyberHarem/ootsuki_yui_idolmastercinderellagirls
pipeline_tag: text-to-image
tags:
- art
---
# Lora of ootsuki_yui_idolmastercinderellagirls
This model is trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion). And the auto-training framework is maintained by [DeepGHS Team](https://huggingface.co/deepghs).
The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11).
After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora.
For example, if you want to use the model from step 6760, you need to download `6760/ootsuki_yui_idolmastercinderellagirls.pt` as the embedding and `6760/ootsuki_yui_idolmastercinderellagirls.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters.
**The best step we recommend is 6760**, with the score of 0.945. The trigger words are:
1. `ootsuki_yui_idolmastercinderellagirls`
2. `blonde_hair, long_hair, blue_eyes, smile, blush, bangs, breasts, open_mouth, wavy_hair, collarbone, jewelry, teeth, large_breasts, hair_ornament`
This model is not recommended for the following groups, and we express our regret:
1. Individuals who cannot tolerate any deviations from the original character design, even in the slightest detail.
2. Individuals whose application scenarios demand high accuracy in recreating character outfits.
3. Individuals who cannot accept the potential randomness in AI-generated images based on the Stable Diffusion algorithm.
4. Individuals who are not comfortable with the fully automated process of training character models using LoRA, or who believe that character models must be trained purely through manual operations to avoid disrespecting the characters.
5. Individuals who find the generated image content offensive to their values.
These are available steps:
| Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | pattern_5 | pattern_6 | pattern_7 | pattern_8 | pattern_9 | pattern_10 | pattern_11 | pattern_12 | pattern_13 | pattern_14 | pattern_15 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata |
|:---------|:----------|:---------------------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-----------------------------------------------------|:-----------------------------------------|:--------------------------------------------------|:-------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------|
| 7800 | 0.937 | [Download](7800/ootsuki_yui_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](7800/previews/pattern_15.png) |  | [<NSFW, click to see>](7800/previews/bondage.png) |  |  |  | [<NSFW, click to see>](7800/previews/nude.png) | [<NSFW, click to see>](7800/previews/nude2.png) |  |  |
| 7280 | 0.943 | [Download](7280/ootsuki_yui_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](7280/previews/pattern_15.png) |  | [<NSFW, click to see>](7280/previews/bondage.png) |  |  |  | [<NSFW, click to see>](7280/previews/nude.png) | [<NSFW, click to see>](7280/previews/nude2.png) |  |  |
| **6760** | **0.945** | [**Download**](6760/ootsuki_yui_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](6760/previews/pattern_15.png) |  | [<NSFW, click to see>](6760/previews/bondage.png) |  |  |  | [<NSFW, click to see>](6760/previews/nude.png) | [<NSFW, click to see>](6760/previews/nude2.png) |  |  |
| 6240 | 0.884 | [Download](6240/ootsuki_yui_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](6240/previews/pattern_15.png) |  | [<NSFW, click to see>](6240/previews/bondage.png) |  |  |  | [<NSFW, click to see>](6240/previews/nude.png) | [<NSFW, click to see>](6240/previews/nude2.png) |  |  |
| 5720 | 0.892 | [Download](5720/ootsuki_yui_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](5720/previews/pattern_15.png) |  | [<NSFW, click to see>](5720/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5720/previews/nude.png) | [<NSFW, click to see>](5720/previews/nude2.png) |  |  |
| 5200 | 0.911 | [Download](5200/ootsuki_yui_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](5200/previews/pattern_15.png) |  | [<NSFW, click to see>](5200/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5200/previews/nude.png) | [<NSFW, click to see>](5200/previews/nude2.png) |  |  |
| 4680 | 0.902 | [Download](4680/ootsuki_yui_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4680/previews/pattern_15.png) |  | [<NSFW, click to see>](4680/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4680/previews/nude.png) | [<NSFW, click to see>](4680/previews/nude2.png) |  |  |
| 4160 | 0.833 | [Download](4160/ootsuki_yui_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4160/previews/pattern_15.png) |  | [<NSFW, click to see>](4160/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4160/previews/nude.png) | [<NSFW, click to see>](4160/previews/nude2.png) |  |  |
| 3640 | 0.885 | [Download](3640/ootsuki_yui_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3640/previews/pattern_15.png) |  | [<NSFW, click to see>](3640/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3640/previews/nude.png) | [<NSFW, click to see>](3640/previews/nude2.png) |  |  |
| 3120 | 0.882 | [Download](3120/ootsuki_yui_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3120/previews/pattern_15.png) |  | [<NSFW, click to see>](3120/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3120/previews/nude.png) | [<NSFW, click to see>](3120/previews/nude2.png) |  |  |
| 2600 | 0.902 | [Download](2600/ootsuki_yui_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2600/previews/pattern_15.png) |  | [<NSFW, click to see>](2600/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2600/previews/nude.png) | [<NSFW, click to see>](2600/previews/nude2.png) |  |  |
| 2080 | 0.899 | [Download](2080/ootsuki_yui_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2080/previews/pattern_15.png) |  | [<NSFW, click to see>](2080/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2080/previews/nude.png) | [<NSFW, click to see>](2080/previews/nude2.png) |  |  |
| 1560 | 0.901 | [Download](1560/ootsuki_yui_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1560/previews/pattern_15.png) |  | [<NSFW, click to see>](1560/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1560/previews/nude.png) | [<NSFW, click to see>](1560/previews/nude2.png) |  |  |
| 1040 | 0.895 | [Download](1040/ootsuki_yui_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1040/previews/pattern_15.png) |  | [<NSFW, click to see>](1040/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1040/previews/nude.png) | [<NSFW, click to see>](1040/previews/nude2.png) |  |  |
| 520 | 0.778 | [Download](520/ootsuki_yui_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](520/previews/pattern_15.png) |  | [<NSFW, click to see>](520/previews/bondage.png) |  |  |  | [<NSFW, click to see>](520/previews/nude.png) | [<NSFW, click to see>](520/previews/nude2.png) |  |  |
|
Doctor-Shotgun/CalliopeDS-L2-13B-exl2
|
Doctor-Shotgun
| 2023-09-16T09:04:14Z | 10 | 2 |
transformers
|
[
"transformers",
"llama",
"text-generation",
"llama-2",
"en",
"license:agpl-3.0",
"autotrain_compatible",
"region:us"
] |
text-generation
| 2023-09-16T02:43:51Z |
---
inference: false
language:
- en
pipeline_tag: text-generation
tags:
- llama
- llama-2
license: agpl-3.0
---
# CalliopeDS-L2-13B-exl2
Exllama v2 quant of [Doctor-Shotgun/CalliopeDS-L2-13B](https://huggingface.co/Doctor-Shotgun/CalliopeDS-L2-13B)
Branches:
- main: 4 decoder bits per weight, 6 head bits
- ideal for 12gb GPUs, or 16gb GPUs with NTK extended context or CFG
- 6.0bpw-h6: 6 decoder bits per weight, 6 head bits
- ideal for 16gb GPUs, or 24gb GPUs with NTK extended context or CFG
- 8bit-32g-h8: all tensors 8bit 32g, 8 head bits
- experimental quant, this is with exllamav2 monkeypatched to quantize all tensors to 8bit 32g
- similar in size to old GPTQ 8bit no groupsize, recommend 24gb GPU
- maxbpw-h8: ???bpw, 8 head bits
- experimental quant, this is the maximum optimized mixed quant size that the current version of exllamav2 produces
- somewhat larger than 6.0bpw but not as large as 8bit, recommend 24gb GPU
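To fetch just one of these quants, a minimal `huggingface_hub` sketch for downloading a single branch (the `local_dir` value is only an example):
```python
from huggingface_hub import snapshot_download

# Each quant lives on its own branch; pass the branch name as `revision`.
snapshot_download(
    repo_id="Doctor-Shotgun/CalliopeDS-L2-13B-exl2",
    revision="6.0bpw-h6",  # or "main", "8bit-32g-h8", "maxbpw-h8"
    local_dir="CalliopeDS-L2-13B-exl2-6.0bpw-h6",  # example path
)
```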
|
Shishir1807/CT_M4
|
Shishir1807
| 2023-09-16T09:01:56Z | 164 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt_neox",
"text-generation",
"gpt",
"llm",
"large language model",
"h2o-llmstudio",
"en",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2023-09-16T09:01:16Z |
---
language:
- en
library_name: transformers
tags:
- gpt
- llm
- large language model
- h2o-llmstudio
inference: false
thumbnail: https://h2o.ai/etc.clientlibs/h2o/clientlibs/clientlib-site/resources/images/favicon.ico
---
# Model Card
## Summary
This model was trained using [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio).
- Base model: [EleutherAI/pythia-2.8b-deduped](https://huggingface.co/EleutherAI/pythia-2.8b-deduped)
## Usage
To use the model with the `transformers` library on a machine with GPUs, first make sure you have the `transformers`, `accelerate` and `torch` libraries installed.
```bash
pip install transformers==4.29.2
pip install einops==0.6.1
pip install accelerate==0.19.0
pip install torch==2.0.0
```
```python
import torch
from transformers import pipeline
generate_text = pipeline(
model="Shishir1807/CT_M4",
torch_dtype="auto",
trust_remote_code=True,
use_fast=True,
device_map={"": "cuda:0"},
)
res = generate_text(
"Why is drinking water so healthy?",
min_new_tokens=2,
max_new_tokens=256,
do_sample=False,
num_beams=1,
temperature=float(0.0),
repetition_penalty=float(1.2),
renormalize_logits=True
)
print(res[0]["generated_text"])
```
You can print a sample prompt after the preprocessing step to see how it is fed to the tokenizer:
```python
print(generate_text.preprocess("Why is drinking water so healthy?")["prompt_text"])
```
```bash
<|prompt|>Why is drinking water so healthy?<|endoftext|><|answer|>
```
Alternatively, you can download [h2oai_pipeline.py](h2oai_pipeline.py), store it alongside your notebook, and construct the pipeline yourself from the loaded model and tokenizer. If the model and the tokenizer are fully supported in the `transformers` package, this will allow you to set `trust_remote_code=False`.
```python
import torch
from h2oai_pipeline import H2OTextGenerationPipeline
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained(
"Shishir1807/CT_M4",
use_fast=True,
padding_side="left",
trust_remote_code=True,
)
model = AutoModelForCausalLM.from_pretrained(
"Shishir1807/CT_M4",
torch_dtype="auto",
device_map={"": "cuda:0"},
trust_remote_code=True,
)
generate_text = H2OTextGenerationPipeline(model=model, tokenizer=tokenizer)
res = generate_text(
"Why is drinking water so healthy?",
min_new_tokens=2,
max_new_tokens=256,
do_sample=False,
num_beams=1,
temperature=float(0.0),
repetition_penalty=float(1.2),
renormalize_logits=True
)
print(res[0]["generated_text"])
```
You may also construct the pipeline yourself from the loaded model and tokenizer, taking the preprocessing steps into account:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "Shishir1807/CT_M4" # either local folder or huggingface model name
# Important: The prompt needs to be in the same format the model was trained with.
# You can find an example prompt in the experiment logs.
prompt = "<|prompt|>How are you?<|endoftext|><|answer|>"
tokenizer = AutoTokenizer.from_pretrained(
model_name,
use_fast=True,
trust_remote_code=True,
)
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map={"": "cuda:0"},
trust_remote_code=True,
)
model.cuda().eval()
inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).to("cuda")
# generate configuration can be modified to your needs
tokens = model.generate(
input_ids=inputs["input_ids"],
attention_mask=inputs["attention_mask"],
min_new_tokens=2,
max_new_tokens=256,
do_sample=False,
num_beams=1,
temperature=float(0.0),
repetition_penalty=float(1.2),
renormalize_logits=True
)[0]
tokens = tokens[inputs["input_ids"].shape[1]:]
answer = tokenizer.decode(tokens, skip_special_tokens=True)
print(answer)
```
## Quantization and sharding
You can load the models using quantization by specifying ```load_in_8bit=True``` or ```load_in_4bit=True```. Also, sharding on multiple GPUs is possible by setting ```device_map="auto"```.
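For example, a minimal sketch of 8-bit loading sharded across all visible GPUs (assumes `bitsandbytes` is installed):
```python
from transformers import AutoModelForCausalLM

# 8-bit weights; device_map="auto" shards layers across the available GPUs.
model = AutoModelForCausalLM.from_pretrained(
    "Shishir1807/CT_M4",
    load_in_8bit=True,
    device_map="auto",
    trust_remote_code=True,
)
```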
## Model Architecture
```
GPTNeoXForCausalLM(
(gpt_neox): GPTNeoXModel(
(embed_in): Embedding(50304, 2560)
(layers): ModuleList(
(0-31): 32 x GPTNeoXLayer(
(input_layernorm): LayerNorm((2560,), eps=1e-05, elementwise_affine=True)
(post_attention_layernorm): LayerNorm((2560,), eps=1e-05, elementwise_affine=True)
(attention): GPTNeoXAttention(
(rotary_emb): RotaryEmbedding()
(query_key_value): Linear(in_features=2560, out_features=7680, bias=True)
(dense): Linear(in_features=2560, out_features=2560, bias=True)
)
(mlp): GPTNeoXMLP(
(dense_h_to_4h): Linear(in_features=2560, out_features=10240, bias=True)
(dense_4h_to_h): Linear(in_features=10240, out_features=2560, bias=True)
(act): GELUActivation()
)
)
)
(final_layer_norm): LayerNorm((2560,), eps=1e-05, elementwise_affine=True)
)
(embed_out): Linear(in_features=2560, out_features=50304, bias=False)
)
```
## Model Configuration
This model was trained using H2O LLM Studio and with the configuration in [cfg.yaml](cfg.yaml). Visit [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio) to learn how to train your own large language models.
## Disclaimer
Please read this disclaimer carefully before using the large language model provided in this repository. Your use of the model signifies your agreement to the following terms and conditions.
- Biases and Offensiveness: The large language model is trained on a diverse range of internet text data, which may contain biased, racist, offensive, or otherwise inappropriate content. By using this model, you acknowledge and accept that the generated content may sometimes exhibit biases or produce content that is offensive or inappropriate. The developers of this repository do not endorse, support, or promote any such content or viewpoints.
- Limitations: The large language model is an AI-based tool and not a human. It may produce incorrect, nonsensical, or irrelevant responses. It is the user's responsibility to critically evaluate the generated content and use it at their discretion.
- Use at Your Own Risk: Users of this large language model must assume full responsibility for any consequences that may arise from their use of the tool. The developers and contributors of this repository shall not be held liable for any damages, losses, or harm resulting from the use or misuse of the provided model.
- Ethical Considerations: Users are encouraged to use the large language model responsibly and ethically. By using this model, you agree not to use it for purposes that promote hate speech, discrimination, harassment, or any form of illegal or harmful activities.
- Reporting Issues: If you encounter any biased, offensive, or otherwise inappropriate content generated by the large language model, please report it to the repository maintainers through the provided channels. Your feedback will help improve the model and mitigate potential issues.
- Changes to this Disclaimer: The developers of this repository reserve the right to modify or update this disclaimer at any time without prior notice. It is the user's responsibility to periodically review the disclaimer to stay informed about any changes.
By using the large language model provided in this repository, you agree to accept and comply with the terms and conditions outlined in this disclaimer. If you do not agree with any part of this disclaimer, you should refrain from using the model and any content generated by it.
|
kishanmurthy/bloom-3b-finetuned-squad-v2
|
kishanmurthy
| 2023-09-16T08:47:55Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-09-16T06:55:13Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0
|
wooingg/koalpaca-polyglot-12.8b-bill
|
wooingg
| 2023-09-16T08:44:03Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-09-16T08:43:59Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
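These values map directly onto a `BitsAndBytesConfig`; a sketch of the equivalent object (reconstructed from the list above, not taken from the training code itself):
```python
import torch
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
```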
### Framework versions
- PEFT 0.6.0.dev0
|
Yntec/aMovieX
|
Yntec
| 2023-09-16T08:29:51Z | 9,944 | 4 |
diffusers
|
[
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"MagicArt35",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-09-16T07:32:13Z |
---
license: creativeml-openrail-m
library_name: diffusers
pipeline_tag: text-to-image
tags:
- stable-diffusion
- stable-diffusion-diffusers
- diffusers
- text-to-image
- MagicArt35
---
# AmovieX
Samples and prompts:

Pretty cute girl in a future where humanity has colonized the stars, a group of explorers embarks on a journey to a distant planet, hoping to discover new forms of life and unlock the secrets of the universe. But as they descend through the planet’s thick atmosphere, they discover that the world below is more dangerous and mysterious than they could have ever imagined.

Create pretty cute girl in an otherworldly landscape inspired by a specific planet or moon in our solar system, featuring unique geological formations and extraterrestrial flora and fauna.
Original page:
https://civitai.com/models/94687/photo-movie-x
|
pralaypati/stable-diffusion-fine-tuned
|
pralaypati
| 2023-09-16T08:25:10Z | 30 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-09-16T06:52:18Z |
---
license: apache-2.0
---
A sample fine-tuned version of a pre-trained Stable Diffusion model. The model has been fine-tuned on two subjects and their corresponding prompts.
|
stanidiener/q-FrozenLake-v1-4x4-noSlippery
|
stanidiener
| 2023-09-16T08:19:15Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-09-16T08:19:12Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="stanidiener/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
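`load_from_hub` here is the small helper used in the Deep RL course rather than a published API; a minimal sketch of it, assuming the pickle holds a dict with an `env_id` key:
```python
import pickle

from huggingface_hub import hf_hub_download

def load_from_hub(repo_id: str, filename: str) -> dict:
    # Download the pickled Q-table from the Hub and deserialize it.
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)
```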
|
CyberHarem/jougasaki_rika_idolmastercinderellagirls
|
CyberHarem
| 2023-09-16T07:49:40Z | 0 | 0 | null |
[
"art",
"text-to-image",
"dataset:CyberHarem/jougasaki_rika_idolmastercinderellagirls",
"license:mit",
"region:us"
] |
text-to-image
| 2023-09-16T07:27:26Z |
---
license: mit
datasets:
- CyberHarem/jougasaki_rika_idolmastercinderellagirls
pipeline_tag: text-to-image
tags:
- art
---
# Lora of jougasaki_rika_idolmastercinderellagirls
This model is trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion). And the auto-training framework is maintained by [DeepGHS Team](https://huggingface.co/deepghs).
The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11).
After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora.
For example, if you want to use the model from step 6240, you need to download `6240/jougasaki_rika_idolmastercinderellagirls.pt` as the embedding and `6240/jougasaki_rika_idolmastercinderellagirls.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters.
**The best step we recommend is 6240**, with the score of 0.888. The trigger words are:
1. `jougasaki_rika_idolmastercinderellagirls`
2. `blonde_hair, long_hair, green_eyes, two_side_up, smile, blush, open_mouth, jewelry, bangs, hair_ornament`
This model is not recommended for the following groups, and we express our regret:
1. Individuals who cannot tolerate any deviations from the original character design, even in the slightest detail.
2. Individuals whose application scenarios demand high accuracy in recreating character outfits.
3. Individuals who cannot accept the potential randomness in AI-generated images based on the Stable Diffusion algorithm.
4. Individuals who are not comfortable with the fully automated process of training character models using LoRA, or who believe that character models must be trained purely through manual operations to avoid disrespecting the characters.
5. Individuals who find the generated image content offensive to their values.
These are available steps:
| Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | pattern_5 | pattern_6 | pattern_7 | pattern_8 | pattern_9 | pattern_10 | pattern_11 | pattern_12 | pattern_13 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata |
|:---------|:----------|:------------------------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-----------------------------------------|:--------------------------------------------------|:-------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------|
| 7800 | 0.870 | [Download](7800/jougasaki_rika_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](7800/previews/bondage.png) |  |  |  | [<NSFW, click to see>](7800/previews/nude.png) | [<NSFW, click to see>](7800/previews/nude2.png) |  |  |
| 7280 | 0.823 | [Download](7280/jougasaki_rika_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](7280/previews/bondage.png) |  |  |  | [<NSFW, click to see>](7280/previews/nude.png) | [<NSFW, click to see>](7280/previews/nude2.png) |  |  |
| 6760 | 0.807 | [Download](6760/jougasaki_rika_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](6760/previews/bondage.png) |  |  |  | [<NSFW, click to see>](6760/previews/nude.png) | [<NSFW, click to see>](6760/previews/nude2.png) |  |  |
| **6240** | **0.888** | [**Download**](6240/jougasaki_rika_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](6240/previews/bondage.png) |  |  |  | [<NSFW, click to see>](6240/previews/nude.png) | [<NSFW, click to see>](6240/previews/nude2.png) |  |  |
| 5720 | 0.887 | [Download](5720/jougasaki_rika_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](5720/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5720/previews/nude.png) | [<NSFW, click to see>](5720/previews/nude2.png) |  |  |
| 5200 | 0.867 | [Download](5200/jougasaki_rika_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](5200/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5200/previews/nude.png) | [<NSFW, click to see>](5200/previews/nude2.png) |  |  |
| 4680 | 0.880 | [Download](4680/jougasaki_rika_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4680/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4680/previews/nude.png) | [<NSFW, click to see>](4680/previews/nude2.png) |  |  |
| 4160 | 0.867 | [Download](4160/jougasaki_rika_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4160/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4160/previews/nude.png) | [<NSFW, click to see>](4160/previews/nude2.png) |  |  |
| 3640 | 0.849 | [Download](3640/jougasaki_rika_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3640/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3640/previews/nude.png) | [<NSFW, click to see>](3640/previews/nude2.png) |  |  |
| 3120 | 0.858 | [Download](3120/jougasaki_rika_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3120/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3120/previews/nude.png) | [<NSFW, click to see>](3120/previews/nude2.png) |  |  |
| 2600 | 0.864 | [Download](2600/jougasaki_rika_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2600/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2600/previews/nude.png) | [<NSFW, click to see>](2600/previews/nude2.png) |  |  |
| 2080 | 0.876 | [Download](2080/jougasaki_rika_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2080/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2080/previews/nude.png) | [<NSFW, click to see>](2080/previews/nude2.png) |  |  |
| 1560 | 0.806 | [Download](1560/jougasaki_rika_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1560/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1560/previews/nude.png) | [<NSFW, click to see>](1560/previews/nude2.png) |  |  |
| 1040 | 0.846 | [Download](1040/jougasaki_rika_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1040/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1040/previews/nude.png) | [<NSFW, click to see>](1040/previews/nude2.png) |  |  |
| 520 | 0.813 | [Download](520/jougasaki_rika_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](520/previews/bondage.png) |  |  |  | [<NSFW, click to see>](520/previews/nude.png) | [<NSFW, click to see>](520/previews/nude2.png) |  |  |
|
minhbui/viettel_v1_mix_100k
|
minhbui
| 2023-09-16T07:19:29Z | 4 | 1 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"vi",
"en",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-09-16T03:51:46Z |
---
license: llama2
language:
- vi
- en
---
Our model is fine-tuned with QLoRA on 100k samples (50k Dolphin data + 43k WebGLM data + 10k SQuAD paraphrased answers). All of the data was translated into Vietnamese.
|
thainq107/flan-t5-small-twitter-sentiment-analysis
|
thainq107
| 2023-09-16T07:14:13Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-09-16T05:28:14Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: flan-t5-small-twitter-sentiment-analysis
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# flan-t5-small-twitter-sentiment-analysis
This model is a fine-tuned version of [google/flan-t5-small](https://huggingface.co/google/flan-t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1901
- Accuracy: 0.8412
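A quick inference sketch; the input template used during fine-tuning is not documented here, so feeding the raw tweet as below is an assumption:
```python
from transformers import pipeline

classifier = pipeline(
    "text2text-generation",
    model="thainq107/flan-t5-small-twitter-sentiment-analysis",
)
# The model emits the sentiment label as generated text.
print(classifier("had a great day at the beach!")[0]["generated_text"])
```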
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.2136 | 1.0 | 1875 | 0.1923 | 0.8270 |
| 0.1971 | 2.0 | 3750 | 0.1865 | 0.8367 |
| 0.1883 | 3.0 | 5625 | 0.1830 | 0.8403 |
| 0.1846 | 4.0 | 7500 | 0.1811 | 0.8415 |
| 0.1753 | 5.0 | 9375 | 0.1793 | 0.8442 |
| 0.1704 | 6.0 | 11250 | 0.1813 | 0.8433 |
| 0.1689 | 7.0 | 13125 | 0.1815 | 0.8458 |
| 0.1648 | 8.0 | 15000 | 0.1827 | 0.8461 |
| 0.1661 | 9.0 | 16875 | 0.1821 | 0.8475 |
| 0.1618 | 10.0 | 18750 | 0.1818 | 0.8471 |
### Framework versions
- Transformers 4.27.1
- Pytorch 2.0.1
- Datasets 2.9.0
- Tokenizers 0.13.3
|
cedricsarigumba/llama2-qlora-finetunined-french
|
cedricsarigumba
| 2023-09-16T07:12:55Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-09-16T07:12:37Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.6.0.dev0
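A sketch of loading this adapter on top of its base model with `peft` (the base model name is read from the adapter config; the 4-bit settings mirror the list above):
```python
from peft import PeftConfig, PeftModel
from transformers import AutoModelForCausalLM

adapter_id = "cedricsarigumba/llama2-qlora-finetunined-french"
config = PeftConfig.from_pretrained(adapter_id)
base = AutoModelForCausalLM.from_pretrained(
    config.base_model_name_or_path,
    load_in_4bit=True,
    device_map="auto",
)
model = PeftModel.from_pretrained(base, adapter_id)
```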
|
Leekp/toonmaker3
|
Leekp
| 2023-09-16T07:02:08Z | 2 | 1 |
diffusers
|
[
"diffusers",
"text-to-image",
"autotrain",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0",
"region:us"
] |
text-to-image
| 2023-09-16T07:02:07Z |
---
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: Korean webtoon image depicting a character named fred
tags:
- text-to-image
- diffusers
- autotrain
inference: true
---
# DreamBooth trained by AutoTrain
Text encoder was not trained.
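AutoTrain DreamBooth runs for SDXL typically export LoRA weights, so inference would look roughly like the sketch below (it assumes LoRA weights sit in the repo root):
```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("Leekp/toonmaker3")  # assumes LoRA weights in the repo root
image = pipe("Korean webtoon image depicting a character named fred").images[0]
image.save("fred.png")
```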
|
thainq107/flan-t5-small-twitter-sentiment-analysis-zero-shot
|
thainq107
| 2023-09-16T07:01:14Z | 113 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-09-16T03:38:40Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: flan-t5-small-twitter-sentiment-analysis-zero-shot
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# flan-t5-small-twitter-sentiment-analysis-zero-shot
This model is a fine-tuned version of [google/flan-t5-small](https://huggingface.co/google/flan-t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1810
- Accuracy: 0.8484
It achieves the following results on the test set:
- Loss: 0.1899
- Accuracy: 0.8423
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.213 | 1.0 | 1875 | 0.1922 | 0.8268 |
| 0.1963 | 2.0 | 3750 | 0.1863 | 0.8374 |
| 0.188 | 3.0 | 5625 | 0.1817 | 0.8413 |
| 0.1835 | 4.0 | 7500 | 0.1795 | 0.8438 |
| 0.1754 | 5.0 | 9375 | 0.1786 | 0.8459 |
| 0.1715 | 6.0 | 11250 | 0.1806 | 0.8464 |
| 0.1692 | 7.0 | 13125 | 0.1799 | 0.8478 |
| 0.1646 | 8.0 | 15000 | 0.1810 | 0.8484 |
| 0.1664 | 9.0 | 16875 | 0.1810 | 0.8484 |
| 0.1622 | 10.0 | 18750 | 0.1803 | 0.8484 |
### Framework versions
- Transformers 4.27.1
- Pytorch 2.0.1
- Datasets 2.9.0
- Tokenizers 0.13.3
|
CyberHarem/musujime_awaki_toarumajutsunoindex
|
CyberHarem
| 2023-09-16T06:52:06Z | 0 | 0 | null |
[
"art",
"text-to-image",
"dataset:CyberHarem/musujime_awaki_toarumajutsunoindex",
"license:mit",
"region:us"
] |
text-to-image
| 2023-08-16T07:06:55Z |
---
license: mit
datasets:
- CyberHarem/musujime_awaki_toarumajutsunoindex
pipeline_tag: text-to-image
tags:
- art
---
# Lora of musujime_awaki_toarumajutsunoindex
This model is trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion). And the auto-training framework is maintained by [DeepGHS Team](https://huggingface.co/deepghs).
The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11).
After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora.
For example, if you want to use the model from step 4760, you need to download `4760/musujime_awaki_toarumajutsunoindex.pt` as the embedding and `4760/musujime_awaki_toarumajutsunoindex.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters.
**The best step we recommend is 4760**, with the score of 0.867. The trigger words are:
1. `musujime_awaki_toarumajutsunoindex`
2. `twintails, red_hair, long_hair, sarashi`
This model is not recommended for the following groups, and we express our regret:
1. Individuals who cannot tolerate any deviations from the original character design, even in the slightest detail.
2. Individuals whose application scenarios demand high accuracy in recreating character outfits.
3. Individuals who cannot accept the potential randomness in AI-generated images based on the Stable Diffusion algorithm.
4. Individuals who are not comfortable with the fully automated process of training character models using LoRA, or who believe that character models must be trained purely through manual operations to avoid disrespecting the characters.
5. Individuals who find the generated image content offensive to their values.
These are available steps:
| Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | pattern_5 | pattern_6 | pattern_7 | pattern_8 | pattern_9 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata |
|:---------|:----------|:------------------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------|:--------------------------------------------------|:-------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------|
| 5100 | 0.853 | [Download](5100/musujime_awaki_toarumajutsunoindex.zip) |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](5100/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5100/previews/nude.png) | [<NSFW, click to see>](5100/previews/nude2.png) |  |  |
| **4760** | **0.867** | [**Download**](4760/musujime_awaki_toarumajutsunoindex.zip) |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4760/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4760/previews/nude.png) | [<NSFW, click to see>](4760/previews/nude2.png) |  |  |
| 4420 | 0.791 | [Download](4420/musujime_awaki_toarumajutsunoindex.zip) |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4420/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4420/previews/nude.png) | [<NSFW, click to see>](4420/previews/nude2.png) |  |  |
| 4080 | 0.798 | [Download](4080/musujime_awaki_toarumajutsunoindex.zip) |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4080/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4080/previews/nude.png) | [<NSFW, click to see>](4080/previews/nude2.png) |  |  |
| 3740 | 0.858 | [Download](3740/musujime_awaki_toarumajutsunoindex.zip) |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3740/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3740/previews/nude.png) | [<NSFW, click to see>](3740/previews/nude2.png) |  |  |
| 3400 | 0.843 | [Download](3400/musujime_awaki_toarumajutsunoindex.zip) |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3400/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3400/previews/nude.png) | [<NSFW, click to see>](3400/previews/nude2.png) |  |  |
| 3060 | 0.796 | [Download](3060/musujime_awaki_toarumajutsunoindex.zip) |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3060/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3060/previews/nude.png) | [<NSFW, click to see>](3060/previews/nude2.png) |  |  |
| 2720 | 0.805 | [Download](2720/musujime_awaki_toarumajutsunoindex.zip) |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2720/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2720/previews/nude.png) | [<NSFW, click to see>](2720/previews/nude2.png) |  |  |
| 2380 | 0.788 | [Download](2380/musujime_awaki_toarumajutsunoindex.zip) |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2380/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2380/previews/nude.png) | [<NSFW, click to see>](2380/previews/nude2.png) |  |  |
| 2040 | 0.705 | [Download](2040/musujime_awaki_toarumajutsunoindex.zip) |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2040/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2040/previews/nude.png) | [<NSFW, click to see>](2040/previews/nude2.png) |  |  |
| 1700 | 0.639 | [Download](1700/musujime_awaki_toarumajutsunoindex.zip) |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1700/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1700/previews/nude.png) | [<NSFW, click to see>](1700/previews/nude2.png) |  |  |
| 1360 | 0.549 | [Download](1360/musujime_awaki_toarumajutsunoindex.zip) |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1360/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1360/previews/nude.png) | [<NSFW, click to see>](1360/previews/nude2.png) |  |  |
| 1020 | 0.558 | [Download](1020/musujime_awaki_toarumajutsunoindex.zip) |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1020/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1020/previews/nude.png) | [<NSFW, click to see>](1020/previews/nude2.png) |  |  |
| 680 | 0.408 | [Download](680/musujime_awaki_toarumajutsunoindex.zip) |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](680/previews/bondage.png) |  |  |  | [<NSFW, click to see>](680/previews/nude.png) | [<NSFW, click to see>](680/previews/nude2.png) |  |  |
| 340 | 0.259 | [Download](340/musujime_awaki_toarumajutsunoindex.zip) |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](340/previews/bondage.png) |  |  |  | [<NSFW, click to see>](340/previews/nude.png) | [<NSFW, click to see>](340/previews/nude2.png) |  |  |
|
ceviriyap/bert_4
|
ceviriyap
| 2023-09-16T06:33:15Z | 47 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"pretraining",
"generated_from_keras_callback",
"endpoints_compatible",
"region:us"
] | null | 2023-09-16T05:18:30Z |
---
tags:
- generated_from_keras_callback
model-index:
- name: bert_4
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# bert_4
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: nan
- Epoch: 39
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': 1e-04, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Epoch |
|:----------:|:-----:|
| 9.1922 | 0 |
| nan | 1 |
| nan | 2 |
| nan | 3 |
| nan | 4 |
| nan | 5 |
| nan | 6 |
| nan | 7 |
| nan | 8 |
| nan | 9 |
| nan | 10 |
| nan | 11 |
| nan | 12 |
| nan | 13 |
| nan | 14 |
| nan | 15 |
| nan | 16 |
| nan | 17 |
| nan | 18 |
| nan | 19 |
| nan | 20 |
| nan | 21 |
| nan | 22 |
| nan | 23 |
| nan | 24 |
| nan | 25 |
| nan | 26 |
| nan | 27 |
| nan | 28 |
| nan | 29 |
| nan | 30 |
| nan | 31 |
| nan | 32 |
| nan | 33 |
| nan | 34 |
| nan | 35 |
| nan | 36 |
| nan | 37 |
| nan | 38 |
| nan | 39 |
### Framework versions
- Transformers 4.33.1
- TensorFlow 2.10.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
sohug/Ashbab
|
sohug
| 2023-09-16T06:31:02Z | 1 | 0 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-09-16T05:14:14Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: a photo of a hatil bed
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - sohug/Ashbab
These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5. The weights were trained on a photo of a hatil bed using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.




LoRA for the text encoder was enabled: True.
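A sketch of applying these weights at inference time with `diffusers`; the LoRA scale knob is optional:
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("sohug/Ashbab")
# cross_attention_kwargs scales the LoRA contribution (1.0 = full strength).
image = pipe("a photo of a hatil bed", cross_attention_kwargs={"scale": 0.8}).images[0]
image.save("hatil_bed.png")
```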
|
msethi006/my_awesome_peft_model
|
msethi006
| 2023-09-16T06:30:28Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-09-16T06:30:28Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.4.0
|
Josevega69/jose69
|
Josevega69
| 2023-09-16T06:25:04Z | 195 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:beans",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-09-16T05:36:00Z |
---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_trainer
datasets:
- beans
metrics:
- accuracy
model-index:
- name: jose69
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: beans
type: beans
config: default
split: validation
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9849624060150376
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# jose69
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0328
- Accuracy: 0.9850
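A quick classification sketch (the image path is a placeholder):
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="Josevega69/jose69")
# Returns the bean-leaf disease labels with scores.
print(classifier("path/to/bean_leaf.jpg"))
```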
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1307 | 3.85 | 500 | 0.0328 | 0.9850 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
shantanudave/shantanuimagessept10
|
shantanudave
| 2023-09-16T06:23:42Z | 1 | 2 |
diffusers
|
[
"diffusers",
"text-to-image",
"autotrain",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0",
"region:us"
] |
text-to-image
| 2023-09-16T06:18:49Z |
---
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: photo of a sdaveshantanu
tags:
- text-to-image
- diffusers
- autotrain
inference: true
---
# DreamBooth trained by AutoTrain
Text encoder was not trained.
|
VictorGil75/Modelo_Clasificacion_Taller_NoTaller
|
VictorGil75
| 2023-09-16T06:21:37Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-09-16T06:21:28Z |
# Modelo_Clasificacion_Taller_NoTaller
A classification model to identify whether a review belongs to a workshop ('Taller') or not.
This model was trained to classify reviews into the 'Taller' or 'No Taller' category.
|
tensor-diffusion/chilloutmix-NI
|
tensor-diffusion
| 2023-09-16T06:18:07Z | 352 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"stable-diffusion",
"text-to-image",
"DiffusionPipeline",
"license:openrail++",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-09-16T05:57:17Z |
---
license: openrail++
pipeline_tag: text-to-image
tags:
- stable-diffusion
- text-to-image
- diffusers
- DiffusionPipeline
library_name: diffusers
---
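Given the `StableDiffusionPipeline` tag, a hedged loading sketch (an inference from the metadata, not from the original card; the prompt is illustrative):
```python
# Hedged sketch inferred from the StableDiffusionPipeline tag.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "tensor-diffusion/chilloutmix-NI", torch_dtype=torch.float16
).to("cuda")
image = pipe("a portrait photo, soft lighting").images[0]
image.save("portrait.png")
```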
|
LuisChDev/LunarLander-v2-ppo
|
LuisChDev
| 2023-09-16T06:04:45Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-09-16T06:04:20Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 256.31 +/- 20.06
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub; the filename is an assumption.
checkpoint = load_from_hub("LuisChDev/LunarLander-v2-ppo", "LunarLander-v2-ppo.zip")
model = PPO.load(checkpoint)
```
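As a hedged follow-up (not in the original card), the policy loaded above could be rolled out in a Gymnasium environment:
```python
# Hedged rollout sketch; assumes a Gymnasium-compatible SB3 version and the
# `model` loaded in the previous block.
import gymnasium as gym

env = gym.make("LunarLander-v2")
obs, _ = env.reset()
for _ in range(1000):
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, _ = env.step(action)
    if terminated or truncated:
        obs, _ = env.reset()
```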
|
MouseTrap/StyleGen-Loopster-DL
|
MouseTrap
| 2023-09-16T05:57:46Z | 2 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:riffusion/riffusion-model-v1",
"base_model:adapter:riffusion/riffusion-model-v1",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-09-16T05:50:05Z |
---
license: creativeml-openrail-m
base_model: riffusion/riffusion-model-v1
instance_prompt: Loopster style
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - MouseTrap/StyleGen-Looper
These are LoRA adaptation weights for riffusion/riffusion-model-v1. The weights were trained on Loopster style using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.
LoRA for the text encoder was enabled: False.
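A hedged usage sketch (not part of the original card): load the riffusion base model and attach these LoRA weights. The prompt is illustrative, and older DreamBooth-LoRA checkpoints may instead require `pipe.unet.load_attn_procs(...)`:
```python
# Hedged sketch: attach the LoRA weights to the riffusion base model.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "riffusion/riffusion-model-v1", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("MouseTrap/StyleGen-Loopster-DL")

image = pipe("a funky drum loop, Loopster style").images[0]
image.save("spectrogram.png")
```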
|
mbarekat/ppo-SnowballTarget
|
mbarekat
| 2023-09-16T05:51:35Z | 10 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2023-09-16T05:51:31Z |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: mbarekat/ppo-SnowballTarget
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
om-ashish-soni/pos-ner-tagging-v2
|
om-ashish-soni
| 2023-09-16T05:25:41Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"base_model:om-ashish-soni/pos-ner-tagging-v2",
"base_model:finetune:om-ashish-soni/pos-ner-tagging-v2",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-09-16T04:22:26Z |
---
license: apache-2.0
base_model: om-ashish-soni/pos-ner-tagging-v2
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: pos-ner-tagging-v2
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
config: conll2003
split: validation
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9393653920267203
- name: Recall
type: recall
value: 0.9408358887483113
- name: F1
type: f1
value: 0.9401000653531749
- name: Accuracy
type: accuracy
value: 0.9270324365691411
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pos-ner-tagging-v2
This model is a fine-tuned version of [om-ashish-soni/pos-ner-tagging-v2](https://huggingface.co/om-ashish-soni/pos-ner-tagging-v2) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6442
- Precision: 0.9394
- Recall: 0.9408
- F1: 0.9401
- Accuracy: 0.9270
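As a hedged illustration (not from the original card), inference via the token-classification pipeline might look like this; the example sentence is illustrative:
```python
# Hedged inference sketch; the input sentence is illustrative.
from transformers import pipeline

tagger = pipeline(
    "token-classification",
    model="om-ashish-soni/pos-ner-tagging-v2",
    aggregation_strategy="simple",
)
print(tagger("Hugging Face is based in New York City."))
```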
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 16
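These map roughly onto the following `TrainingArguments` sketch (illustrative, not the original training script; the output directory is assumed):
```python
# Illustrative mapping of the listed hyperparameters; not the original script.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="pos-ner-tagging-v2",  # assumed output directory
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=16,
)
```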
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.3297 | 1.0 | 1756 | 0.4190 | 0.9189 | 0.9231 | 0.9210 | 0.9051 |
| 0.2521 | 2.0 | 3512 | 0.3836 | 0.9210 | 0.9300 | 0.9255 | 0.9114 |
| 0.1932 | 3.0 | 5268 | 0.4155 | 0.9295 | 0.9338 | 0.9316 | 0.9183 |
| 0.1325 | 4.0 | 7024 | 0.3969 | 0.9328 | 0.9356 | 0.9342 | 0.9211 |
| 0.0973 | 5.0 | 8780 | 0.4247 | 0.9332 | 0.9367 | 0.9349 | 0.9222 |
| 0.0799 | 6.0 | 10536 | 0.4606 | 0.9338 | 0.9374 | 0.9356 | 0.9229 |
| 0.0554 | 7.0 | 12292 | 0.4836 | 0.9333 | 0.9379 | 0.9356 | 0.9239 |
| 0.0415 | 8.0 | 14048 | 0.5271 | 0.9361 | 0.9391 | 0.9376 | 0.9245 |
| 0.0285 | 9.0 | 15804 | 0.5363 | 0.9366 | 0.9397 | 0.9381 | 0.9253 |
| 0.022 | 10.0 | 17560 | 0.5653 | 0.9377 | 0.9396 | 0.9387 | 0.9258 |
| 0.0146 | 11.0 | 19316 | 0.5962 | 0.9374 | 0.9400 | 0.9387 | 0.9259 |
| 0.0121 | 12.0 | 21072 | 0.6061 | 0.9385 | 0.9401 | 0.9393 | 0.9266 |
| 0.0085 | 13.0 | 22828 | 0.6263 | 0.9384 | 0.9403 | 0.9394 | 0.9261 |
| 0.0062 | 14.0 | 24584 | 0.6365 | 0.9381 | 0.9399 | 0.9390 | 0.9259 |
| 0.0053 | 15.0 | 26340 | 0.6386 | 0.9384 | 0.9402 | 0.9393 | 0.9264 |
| 0.0042 | 16.0 | 28096 | 0.6442 | 0.9394 | 0.9408 | 0.9401 | 0.9270 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
CyberHarem/nitta_minami_idolmastercinderellagirls
|
CyberHarem
| 2023-09-16T05:21:04Z | 0 | 0 | null |
[
"art",
"text-to-image",
"dataset:CyberHarem/nitta_minami_idolmastercinderellagirls",
"license:mit",
"region:us"
] |
text-to-image
| 2023-09-16T05:01:57Z |
---
license: mit
datasets:
- CyberHarem/nitta_minami_idolmastercinderellagirls
pipeline_tag: text-to-image
tags:
- art
---
# Lora of nitta_minami_idolmastercinderellagirls
This model is trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion). And the auto-training framework is maintained by [DeepGHS Team](https://huggingface.co/deepghs).
The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11).
After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora.
For example, if you want to use the model from step 7000, you need to download `7000/nitta_minami_idolmastercinderellagirls.pt` as the embedding and `7000/nitta_minami_idolmastercinderellagirls.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters.
**The best step we recommend is 7000**, with the score of 0.973. The trigger words are:
1. `nitta_minami_idolmastercinderellagirls`
2. `brown_hair, brown_eyes, long_hair, smile, blush, breasts, medium_breasts`
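A hedged loading sketch follows (an assumption, not the documented workflow: HCP-Diffusion/webui-style tooling is the intended path, and the files may not convert cleanly to diffusers formats):
```python
# Hedged sketch: treats the pt file as a textual-inversion embedding and the
# safetensors file as LoRA weights; diffusers compatibility is assumed.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "Meina/MeinaMix_V11", torch_dtype=torch.float16
).to("cuda")
pipe.load_textual_inversion(
    "7000/nitta_minami_idolmastercinderellagirls.pt",
    token="nitta_minami_idolmastercinderellagirls",
)
pipe.load_lora_weights("7000/nitta_minami_idolmastercinderellagirls.safetensors")

image = pipe("nitta_minami_idolmastercinderellagirls, brown_hair, smile").images[0]
image.save("preview.png")
```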
We regret that this model is not recommended for the following groups:
1. Individuals who cannot tolerate any deviations from the original character design, even in the slightest detail.
2. Individuals who are facing the application scenarios with high demands for accuracy in recreating character outfits.
3. Individuals who cannot accept the potential randomness in AI-generated images based on the Stable Diffusion algorithm.
4. Individuals who are not comfortable with the fully automated process of training character models using LoRA, or those who believe that training character models must be done purely through manual operations to avoid disrespecting the characters.
5. Individuals who find the generated image content offensive to their values.
These are available steps:
| Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | pattern_5 | pattern_6 | pattern_7 | pattern_8 | pattern_9 | pattern_10 | pattern_11 | pattern_12 | pattern_13 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata |
|:---------|:----------|:----------------------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-----------------------------------------|:--------------------------------------------------|:-------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------|
| 7500 | 0.969 | [Download](7500/nitta_minami_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](7500/previews/bondage.png) |  |  |  | [<NSFW, click to see>](7500/previews/nude.png) | [<NSFW, click to see>](7500/previews/nude2.png) |  |  |
| **7000** | **0.973** | [**Download**](7000/nitta_minami_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](7000/previews/bondage.png) |  |  |  | [<NSFW, click to see>](7000/previews/nude.png) | [<NSFW, click to see>](7000/previews/nude2.png) |  |  |
| 6500 | 0.968 | [Download](6500/nitta_minami_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](6500/previews/bondage.png) |  |  |  | [<NSFW, click to see>](6500/previews/nude.png) | [<NSFW, click to see>](6500/previews/nude2.png) |  |  |
| 6000 | 0.971 | [Download](6000/nitta_minami_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](6000/previews/bondage.png) |  |  |  | [<NSFW, click to see>](6000/previews/nude.png) | [<NSFW, click to see>](6000/previews/nude2.png) |  |  |
| 5500 | 0.973 | [Download](5500/nitta_minami_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](5500/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5500/previews/nude.png) | [<NSFW, click to see>](5500/previews/nude2.png) |  |  |
| 5000 | 0.958 | [Download](5000/nitta_minami_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](5000/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5000/previews/nude.png) | [<NSFW, click to see>](5000/previews/nude2.png) |  |  |
| 4500 | 0.967 | [Download](4500/nitta_minami_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4500/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4500/previews/nude.png) | [<NSFW, click to see>](4500/previews/nude2.png) |  |  |
| 4000 | 0.962 | [Download](4000/nitta_minami_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4000/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4000/previews/nude.png) | [<NSFW, click to see>](4000/previews/nude2.png) |  |  |
| 3500 | 0.969 | [Download](3500/nitta_minami_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3500/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3500/previews/nude.png) | [<NSFW, click to see>](3500/previews/nude2.png) |  |  |
| 3000 | 0.965 | [Download](3000/nitta_minami_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3000/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3000/previews/nude.png) | [<NSFW, click to see>](3000/previews/nude2.png) |  |  |
| 2500 | 0.941 | [Download](2500/nitta_minami_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2500/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2500/previews/nude.png) | [<NSFW, click to see>](2500/previews/nude2.png) |  |  |
| 2000 | 0.966 | [Download](2000/nitta_minami_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2000/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2000/previews/nude.png) | [<NSFW, click to see>](2000/previews/nude2.png) |  |  |
| 1500 | 0.972 | [Download](1500/nitta_minami_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1500/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1500/previews/nude.png) | [<NSFW, click to see>](1500/previews/nude2.png) |  |  |
| 1000 | 0.956 | [Download](1000/nitta_minami_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1000/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1000/previews/nude.png) | [<NSFW, click to see>](1000/previews/nude2.png) |  |  |
| 500 | 0.926 | [Download](500/nitta_minami_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](500/previews/bondage.png) |  |  |  | [<NSFW, click to see>](500/previews/nude.png) | [<NSFW, click to see>](500/previews/nude2.png) |  |  |
|