modelId (string, 5–139 chars) | author (string, 2–42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-09-11 00:42:47) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (553 classes) | tags (list, 1 to 4.05k entries) | pipeline_tag (55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-09-11 00:42:38) | card (string, 11 to 1.01M chars)
---|---|---|---|---|---|---|---|---|---|
ghost9023/DEEPNOID-llama2-7b-PoC-Only
|
ghost9023
| 2023-09-21T10:16:48Z | 6 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-09-21T02:26:54Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
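For reference, a minimal sketch of how an equivalent 4-bit setup could be expressed with `transformers.BitsAndBytesConfig` is shown below; the base model name is a placeholder, not something stated in this card:
```python
# Hypothetical sketch: the 4-bit config listed above expressed via transformers.
# The base model name is a placeholder, not taken from this card.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
    llm_int8_threshold=6.0,
)

base_model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",  # placeholder base model
    quantization_config=bnb_config,
    device_map="auto",
)
```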
### Framework versions
- PEFT 0.5.0
|
yejeekang/qlora-koalpaca-polyglot-12.8b-50step
|
yejeekang
| 2023-09-21T10:15:07Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-09-20T05:03:58Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.6.0.dev0
|
ditobagus/image_classification
|
ditobagus
| 2023-09-21T10:13:26Z | 196 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-09-12T09:55:32Z |
---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: image_classification
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# image_classification
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 4.6845
- Accuracy: 0.0626
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
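As a rough guide, the hyperparameters above would map onto `transformers.TrainingArguments` roughly as sketched below; the model, processor, dataset, and metric objects are assumed to be defined elsewhere:
```python
# Hypothetical sketch: the hyperparameters above expressed as TrainingArguments.
# Model, image processor, datasets, and compute_accuracy are assumed to exist.
from transformers import TrainingArguments, Trainer

training_args = TrainingArguments(
    output_dir="image_classification",
    learning_rate=5e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3,
)

# trainer = Trainer(model=model, args=training_args,
#                   train_dataset=train_ds, eval_dataset=eval_ds,
#                   compute_metrics=compute_accuracy)
# trainer.train()
```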
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6177 | 1.0 | 788 | 4.5441 | 0.0572 |
| 0.6328 | 2.0 | 1576 | 4.6145 | 0.0628 |
| 0.5851 | 3.0 | 2364 | 4.6799 | 0.0648 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
CyberHarem/kudou_shinobu_idolmastercinderellagirls
|
CyberHarem
| 2023-09-21T09:57:21Z | 0 | 0 | null |
[
"art",
"text-to-image",
"dataset:CyberHarem/kudou_shinobu_idolmastercinderellagirls",
"license:mit",
"region:us"
] |
text-to-image
| 2023-09-21T09:39:20Z |
---
license: mit
datasets:
- CyberHarem/kudou_shinobu_idolmastercinderellagirls
pipeline_tag: text-to-image
tags:
- art
---
# Lora of kudou_shinobu_idolmastercinderellagirls
This model was trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion); the auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs).
The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11).
After downloading the pt and safetensors files for the specified step, you need to use them together: the pt file serves as an embedding, while the safetensors file is loaded as the LoRA.
For example, if you want to use the model from step 2040, you need to download `2040/kudou_shinobu_idolmastercinderellagirls.pt` as the embedding and `2040/kudou_shinobu_idolmastercinderellagirls.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters.
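Purely as an illustration (this is not the workflow the card describes, which targets HCP-Diffusion / WebUI-style tooling), a hedged `diffusers` sketch of loading both step-2040 files might look like the following; the base pipeline choice and local file paths are assumptions:
```python
# Illustrative sketch only: loading the step-2040 embedding and LoRA with diffusers.
# The base pipeline and local file paths are assumptions.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "Meina/MeinaMix_V11", torch_dtype=torch.float16
).to("cuda")

# The .pt file acts as a textual-inversion embedding, the .safetensors as the LoRA.
pipe.load_textual_inversion(
    "2040/kudou_shinobu_idolmastercinderellagirls.pt",
    token="kudou_shinobu_idolmastercinderellagirls",
)
pipe.load_lora_weights("2040/kudou_shinobu_idolmastercinderellagirls.safetensors")

image = pipe(
    "kudou_shinobu_idolmastercinderellagirls, brown_hair, short_hair, blue_eyes, smile"
).images[0]
```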
**The best step we recommend is 2040**, with a score of 0.956. The trigger words are:
1. `kudou_shinobu_idolmastercinderellagirls`
2. `brown_hair, short_hair, blue_eyes, smile, open_mouth, blush`
This model is not recommended for the following groups, to whom we apologize:
1. Individuals who cannot tolerate any deviations from the original character design, even in the slightest detail.
2. Individuals whose use cases demand high accuracy in recreating character outfits.
3. Individuals who cannot accept the potential randomness in AI-generated images based on the Stable Diffusion algorithm.
4. Individuals who are not comfortable with the fully automated process of training character models with LoRA, or who believe that character models must be trained purely by manual operation to avoid disrespecting the characters.
5. Individuals who find the generated image content offensive to their values.
These are available steps:
| Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata |
|:---------|:----------|:-----------------------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------|:--------------------------------------------------|:-------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------|
| 5100 | 0.949 | [Download](5100/kudou_shinobu_idolmastercinderellagirls.zip) |  |  |  |  | [<NSFW, click to see>](5100/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5100/previews/nude.png) | [<NSFW, click to see>](5100/previews/nude2.png) |  |  |
| 4760 | 0.885 | [Download](4760/kudou_shinobu_idolmastercinderellagirls.zip) |  |  |  |  | [<NSFW, click to see>](4760/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4760/previews/nude.png) | [<NSFW, click to see>](4760/previews/nude2.png) |  |  |
| 4420 | 0.938 | [Download](4420/kudou_shinobu_idolmastercinderellagirls.zip) |  |  |  |  | [<NSFW, click to see>](4420/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4420/previews/nude.png) | [<NSFW, click to see>](4420/previews/nude2.png) |  |  |
| 4080 | 0.925 | [Download](4080/kudou_shinobu_idolmastercinderellagirls.zip) |  |  |  |  | [<NSFW, click to see>](4080/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4080/previews/nude.png) | [<NSFW, click to see>](4080/previews/nude2.png) |  |  |
| 3740 | 0.940 | [Download](3740/kudou_shinobu_idolmastercinderellagirls.zip) |  |  |  |  | [<NSFW, click to see>](3740/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3740/previews/nude.png) | [<NSFW, click to see>](3740/previews/nude2.png) |  |  |
| 3400 | 0.918 | [Download](3400/kudou_shinobu_idolmastercinderellagirls.zip) |  |  |  |  | [<NSFW, click to see>](3400/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3400/previews/nude.png) | [<NSFW, click to see>](3400/previews/nude2.png) |  |  |
| 3060 | 0.948 | [Download](3060/kudou_shinobu_idolmastercinderellagirls.zip) |  |  |  |  | [<NSFW, click to see>](3060/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3060/previews/nude.png) | [<NSFW, click to see>](3060/previews/nude2.png) |  |  |
| 2720 | 0.915 | [Download](2720/kudou_shinobu_idolmastercinderellagirls.zip) |  |  |  |  | [<NSFW, click to see>](2720/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2720/previews/nude.png) | [<NSFW, click to see>](2720/previews/nude2.png) |  |  |
| 2380 | 0.942 | [Download](2380/kudou_shinobu_idolmastercinderellagirls.zip) |  |  |  |  | [<NSFW, click to see>](2380/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2380/previews/nude.png) | [<NSFW, click to see>](2380/previews/nude2.png) |  |  |
| **2040** | **0.956** | [**Download**](2040/kudou_shinobu_idolmastercinderellagirls.zip) |  |  |  |  | [<NSFW, click to see>](2040/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2040/previews/nude.png) | [<NSFW, click to see>](2040/previews/nude2.png) |  |  |
| 1700 | 0.923 | [Download](1700/kudou_shinobu_idolmastercinderellagirls.zip) |  |  |  |  | [<NSFW, click to see>](1700/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1700/previews/nude.png) | [<NSFW, click to see>](1700/previews/nude2.png) |  |  |
| 1360 | 0.919 | [Download](1360/kudou_shinobu_idolmastercinderellagirls.zip) |  |  |  |  | [<NSFW, click to see>](1360/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1360/previews/nude.png) | [<NSFW, click to see>](1360/previews/nude2.png) |  |  |
| 1020 | 0.843 | [Download](1020/kudou_shinobu_idolmastercinderellagirls.zip) |  |  |  |  | [<NSFW, click to see>](1020/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1020/previews/nude.png) | [<NSFW, click to see>](1020/previews/nude2.png) |  |  |
| 680 | 0.742 | [Download](680/kudou_shinobu_idolmastercinderellagirls.zip) |  |  |  |  | [<NSFW, click to see>](680/previews/bondage.png) |  |  |  | [<NSFW, click to see>](680/previews/nude.png) | [<NSFW, click to see>](680/previews/nude2.png) |  |  |
| 340 | 0.793 | [Download](340/kudou_shinobu_idolmastercinderellagirls.zip) |  |  |  |  | [<NSFW, click to see>](340/previews/bondage.png) |  |  |  | [<NSFW, click to see>](340/previews/nude.png) | [<NSFW, click to see>](340/previews/nude2.png) |  |  |
|
loupzeur/Pyramids
|
loupzeur
| 2023-09-21T09:56:07Z | 7 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] |
reinforcement-learning
| 2023-09-21T09:54:54Z |
---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of the official ML-Agents environments, go to https://huggingface.co/unity
2. Find your model_id: loupzeur/Pyramids
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
shareAI/CodeLlama-13b-English-Chat
|
shareAI
| 2023-09-21T09:55:56Z | 7 | 1 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"code",
"custom_code",
"en",
"dataset:shareAI/ShareGPT-Chinese-English-90k",
"dataset:shareAI/CodeChat",
"license:openrail",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-09-20T16:48:51Z |
---
license: openrail
datasets:
- shareAI/ShareGPT-Chinese-English-90k
- shareAI/CodeChat
language:
- en
library_name: transformers
tags:
- code
---
Code (just run it; the model weights will be downloaded automatically):
GitHub: https://github.com/CrazyBoyM/CodeLLaMA-chat
```python
# from Firefly
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch


def main():
    model_name = 'shareAI/CodeLLaMA-chat-13b-Chinese'
    device = 'cuda'
    max_new_tokens = 500    # max tokens for each reply
    history_max_len = 1000  # max tokens kept in the chat history
    top_p = 0.9
    temperature = 0.35
    repetition_penalty = 1.0

    model = AutoModelForCausalLM.from_pretrained(
        model_name,
        trust_remote_code=True,
        low_cpu_mem_usage=True,
        torch_dtype=torch.float16,
        device_map='auto'
    ).to(device).eval()
    tokenizer = AutoTokenizer.from_pretrained(
        model_name,
        trust_remote_code=True,
        use_fast=False
    )

    history_token_ids = torch.tensor([[]], dtype=torch.long)
    user_input = input('User:')
    while True:
        # Append the new user turn (terminated by EOS) to the running history.
        input_ids = tokenizer(user_input, return_tensors="pt", add_special_tokens=False).input_ids
        eos_token_id = torch.tensor([[tokenizer.eos_token_id]], dtype=torch.long)
        user_input_ids = torch.concat([input_ids, eos_token_id], dim=1)
        history_token_ids = torch.concat((history_token_ids, user_input_ids), dim=1)
        model_input_ids = history_token_ids[:, -history_max_len:].to(device)
        with torch.no_grad():
            outputs = model.generate(
                input_ids=model_input_ids, max_new_tokens=max_new_tokens, do_sample=True, top_p=top_p,
                temperature=temperature, repetition_penalty=repetition_penalty, eos_token_id=tokenizer.eos_token_id
            )
        # Keep only the newly generated tokens and append them to the history.
        model_input_ids_len = model_input_ids.size(1)
        response_ids = outputs[:, model_input_ids_len:]
        history_token_ids = torch.concat((history_token_ids, response_ids.cpu()), dim=1)
        response = tokenizer.batch_decode(response_ids)
        print("Bot:" + response[0].strip().replace(tokenizer.eos_token, ""))
        user_input = input('User:')


if __name__ == '__main__':
    main()
```
|
EnzoZacharias/starcoder-fine-tuned-plc_V1
|
EnzoZacharias
| 2023-09-21T09:41:57Z | 0 | 0 | null |
[
"generated_from_trainer",
"base_model:bigcode/starcoder",
"base_model:finetune:bigcode/starcoder",
"license:bigcode-openrail-m",
"region:us"
] | null | 2023-09-21T09:20:41Z |
---
license: bigcode-openrail-m
base_model: bigcode/starcoder
tags:
- generated_from_trainer
model-index:
- name: starcoder-fine-tuned-plc_V1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# starcoder-fine-tuned-plc_V1
This model is a fine-tuned version of [bigcode/starcoder](https://huggingface.co/bigcode/starcoder) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2
- training_steps: 50
### Training results
### Framework versions
- Transformers 4.34.0.dev0
- Pytorch 2.1.0.dev20230823
- Datasets 2.14.4
- Tokenizers 0.13.3
|
jimson719/phi-1_5-finetuned-gsm8k
|
jimson719
| 2023-09-21T09:26:54Z | 0 | 0 | null |
[
"generated_from_trainer",
"base_model:microsoft/phi-1_5",
"base_model:finetune:microsoft/phi-1_5",
"license:other",
"region:us"
] | null | 2023-09-21T09:07:05Z |
---
license: other
base_model: microsoft/phi-1_5
tags:
- generated_from_trainer
model-index:
- name: phi-1_5-finetuned-gsm8k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# phi-1_5-finetuned-gsm8k
This model is a fine-tuned version of [microsoft/phi-1_5](https://huggingface.co/microsoft/phi-1_5) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- training_steps: 1000
### Training results
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
hustvl/vitmatte-base-composition-1k
|
hustvl
| 2023-09-21T09:25:07Z | 14,261 | 10 |
transformers
|
[
"transformers",
"pytorch",
"vitmatte",
"vision",
"arxiv:2305.15272",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2023-09-10T07:56:12Z |
---
license: apache-2.0
tags:
- vision
---
# ViTMatte model
ViTMatte model trained on Composition-1k. It was introduced in the paper [ViTMatte: Boosting Image Matting with Pretrained Plain Vision Transformers](https://arxiv.org/abs/2305.15272) by Yao et al. and first released in [this repository](https://github.com/hustvl/ViTMatte).
Disclaimer: The team releasing ViTMatte did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
ViTMatte is a simple approach to image matting, the task of accurately estimating the foreground object in an image. The model consists of a Vision Transformer (ViT) with a lightweight head on top.
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/vitmatte_architecture.png"
alt="drawing" width="600"/>
<small> ViTMatte high-level overview. Taken from the <a href="https://arxiv.org/abs/2305.15272">original paper.</a> </small>
## Intended uses & limitations
You can use the raw model for image matting. See the [model hub](https://huggingface.co/models?search=vitmatte) to look for other
fine-tuned versions that may interest you.
### How to use
We refer to the [docs](https://huggingface.co/docs/transformers/main/en/model_doc/vitmatte#transformers.VitMatteForImageMatting.forward.example).
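For convenience, a condensed sketch of that documented usage is shown below; the input image and trimap paths are placeholders for local files of the same size:
```python
# Condensed sketch of the documented usage; image and trimap paths are placeholders.
import torch
from PIL import Image
from transformers import VitMatteImageProcessor, VitMatteForImageMatting

processor = VitMatteImageProcessor.from_pretrained("hustvl/vitmatte-base-composition-1k")
model = VitMatteForImageMatting.from_pretrained("hustvl/vitmatte-base-composition-1k")

image = Image.open("image.png").convert("RGB")   # placeholder input image
trimap = Image.open("trimap.png").convert("L")   # placeholder trimap of the same size

inputs = processor(images=image, trimaps=trimap, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

alpha_matte = outputs.alphas  # predicted per-pixel alpha values
```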
### BibTeX entry and citation info
```bibtex
@misc{yao2023vitmatte,
title={ViTMatte: Boosting Image Matting with Pretrained Plain Vision Transformers},
author={Jingfeng Yao and Xinggang Wang and Shusheng Yang and Baoyuan Wang},
year={2023},
eprint={2305.15272},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
|
Akbartus/Lora360
|
Akbartus
| 2023-09-21T09:18:27Z | 10 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"region:us"
] |
text-to-image
| 2023-08-16T05:18:38Z |
---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: 360, 360 view
widget:
- text: 360 view
inference:
parameters:
width: 768
height: 512
num_inference_steps: 100
guidance_scale: 9.0
---
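The card has no body, but the metadata above (base model, `360 view` trigger prompt, and inference parameters) suggests a standard `diffusers` LoRA workflow. A hedged sketch follows; the prompt is illustrative, and `load_lora_weights` may need an explicit `weight_name` depending on how the LoRA file is named in this repository:
```python
# Sketch based only on the front matter above: base model, "360 view" trigger,
# and the listed width/height/steps/guidance values. The prompt is illustrative.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("Akbartus/Lora360")  # may require weight_name=...

image = pipe(
    "360 view, a mountain landscape",
    width=768,
    height=512,
    num_inference_steps=100,
    guidance_scale=9.0,
).images[0]
```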
|
JoyboyXoXo/rl_course_vizdoom_health_gathering_supreme
|
JoyboyXoXo
| 2023-09-21T09:17:48Z | 0 | 0 |
sample-factory
|
[
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-09-21T09:17:39Z |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 10.70 +/- 2.25
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r JoyboyXoXo/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m <path.to.enjoy.module> --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m <path.to.train.module> --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
|
sebastiantrbl/test-DialoGPT-finetune
|
sebastiantrbl
| 2023-09-21T09:16:30Z | 207 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"conversational",
"dataset:daily_dialog",
"base_model:microsoft/DialoGPT-medium",
"base_model:finetune:microsoft/DialoGPT-medium",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-09-21T08:19:37Z |
---
license: mit
base_model: microsoft/DialoGPT-medium
tags:
- generated_from_trainer
datasets:
- daily_dialog
model-index:
- name: tmplo2wugb5
results: []
pipeline_tag: conversational
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tmplo2wugb5
This model is a fine-tuned version of [microsoft/DialoGPT-medium](https://huggingface.co/microsoft/DialoGPT-medium) on the daily_dialog dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7233
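As an illustrative sketch (not part of the original card), a single chat turn with this checkpoint could look like the following; DialoGPT-style models expect turns separated by the EOS token:
```python
# Illustrative single-turn chat sketch with this fine-tuned DialoGPT checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("sebastiantrbl/test-DialoGPT-finetune")
model = AutoModelForCausalLM.from_pretrained("sebastiantrbl/test-DialoGPT-finetune")

# DialoGPT-style inputs terminate each turn with the EOS token.
input_ids = tokenizer.encode("How was your day?" + tokenizer.eos_token, return_tensors="pt")
reply_ids = model.generate(input_ids, max_new_tokens=50, pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(reply_ids[0, input_ids.shape[-1]:], skip_special_tokens=True))
```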
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
YSKartal/bert-base-turkish-cased-turkish_offensive_trained_model
|
YSKartal
| 2023-09-21T09:14:25Z | 76 | 3 |
transformers
|
[
"transformers",
"tf",
"tensorboard",
"bert",
"text-classification",
"generated_from_keras_callback",
"base_model:dbmdz/bert-base-turkish-cased",
"base_model:finetune:dbmdz/bert-base-turkish-cased",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-10-27T12:20:20Z |
---
license: mit
tags:
- generated_from_keras_callback
base_model: dbmdz/bert-base-turkish-cased
model-index:
- name: YSKartal/bert-base-turkish-cased-turkish_offensive_trained_model
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# YSKartal/bert-base-turkish-cased-turkish_offensive_trained_model
This model is a fine-tuned version of [dbmdz/bert-base-turkish-cased](https://huggingface.co/dbmdz/bert-base-turkish-cased) on the [offenseval2020_tr](https://huggingface.co/datasets/offenseval2020_tr) dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0365
- Validation Loss: 0.4846
- Train F1: 0.6993
- Epoch: 3
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 7936, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
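The optimizer entry above corresponds to a Keras Adam optimizer driven by a linear `PolynomialDecay` schedule; a minimal reconstruction is sketched below:
```python
# Minimal reconstruction of the optimizer config listed above (TensorFlow/Keras).
import tensorflow as tf

lr_schedule = tf.keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=2e-5,
    decay_steps=7936,
    end_learning_rate=0.0,
    power=1.0,
    cycle=False,
)
optimizer = tf.keras.optimizers.Adam(
    learning_rate=lr_schedule, beta_1=0.9, beta_2=0.999, epsilon=1e-08, amsgrad=False
)
# model.compile(optimizer=optimizer, ...) would then be called on the Keras model.
```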
### Training results
| Train Loss | Validation Loss | Train F1 | Epoch |
|:----------:|:---------------:|:--------:|:-----:|
| 0.3003 | 0.2664 | 0.6971 | 0 |
| 0.1866 | 0.3018 | 0.6990 | 1 |
| 0.0860 | 0.3803 | 0.7032 | 2 |
| 0.0365 | 0.4846 | 0.6993 | 3 |
### Framework versions
- Transformers 4.23.1
- TensorFlow 2.9.2
- Datasets 2.6.1
- Tokenizers 0.13.1
|
Charishma010997/Falcon7b_finetuned
|
Charishma010997
| 2023-09-21T09:04:56Z | 0 | 0 |
peft
|
[
"peft",
"falcon",
"custom_code",
"base_model:vilsonrodrigues/falcon-7b-instruct-sharded",
"base_model:adapter:vilsonrodrigues/falcon-7b-instruct-sharded",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2023-09-17T05:40:57Z |
---
library_name: peft
base_model: vilsonrodrigues/falcon-7b-instruct-sharded
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0.dev0
|
thiru1/distilgpt2-finetuned-wikitext2
|
thiru1
| 2023-09-21T09:02:53Z | 187 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-09-21T08:22:01Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilgpt2-finetuned-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-finetuned-wikitext2
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6421
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.7602 | 1.0 | 2334 | 3.6669 |
| 3.653 | 2.0 | 4668 | 3.6472 |
| 3.6006 | 3.0 | 7002 | 3.6421 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
CyberHarem/ebihara_naho_idolmastercinderellagirls
|
CyberHarem
| 2023-09-21T09:00:22Z | 0 | 0 | null |
[
"art",
"text-to-image",
"dataset:CyberHarem/ebihara_naho_idolmastercinderellagirls",
"license:mit",
"region:us"
] |
text-to-image
| 2023-09-21T08:45:07Z |
---
license: mit
datasets:
- CyberHarem/ebihara_naho_idolmastercinderellagirls
pipeline_tag: text-to-image
tags:
- art
---
# Lora of ebihara_naho_idolmastercinderellagirls
This model was trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion); the auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs).
The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11).
After downloading the pt and safetensors files for the specified step, you need to use them together: the pt file serves as an embedding, while the safetensors file is loaded as the LoRA.
For example, if you want to use the model from step 4080, you need to download `4080/ebihara_naho_idolmastercinderellagirls.pt` as the embedding and `4080/ebihara_naho_idolmastercinderellagirls.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters.
**The best step we recommend is 4080**, with a score of 0.956. The trigger words are:
1. `ebihara_naho_idolmastercinderellagirls`
2. `black_hair, green_eyes, breasts, blush, large_breasts, smile, ponytail, cleavage, hair_ornament`
This model is not recommended for the following groups, to whom we apologize:
1. Individuals who cannot tolerate any deviations from the original character design, even in the slightest detail.
2. Individuals whose use cases demand high accuracy in recreating character outfits.
3. Individuals who cannot accept the potential randomness in AI-generated images based on the Stable Diffusion algorithm.
4. Individuals who are not comfortable with the fully automated process of training character models with LoRA, or who believe that character models must be trained purely by manual operation to avoid disrespecting the characters.
5. Individuals who find the generated image content offensive to their values.
These are available steps:
| Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata |
|:---------|:----------|:----------------------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:----------------------------------------------------|:-----------------------------------------|:--------------------------------------------------|:-------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------|
| 5100 | 0.861 | [Download](5100/ebihara_naho_idolmastercinderellagirls.zip) |  |  |  | [<NSFW, click to see>](5100/previews/pattern_4.png) |  | [<NSFW, click to see>](5100/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5100/previews/nude.png) | [<NSFW, click to see>](5100/previews/nude2.png) |  |  |
| 4760 | 0.946 | [Download](4760/ebihara_naho_idolmastercinderellagirls.zip) |  |  |  | [<NSFW, click to see>](4760/previews/pattern_4.png) |  | [<NSFW, click to see>](4760/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4760/previews/nude.png) | [<NSFW, click to see>](4760/previews/nude2.png) |  |  |
| 4420 | 0.913 | [Download](4420/ebihara_naho_idolmastercinderellagirls.zip) |  |  |  | [<NSFW, click to see>](4420/previews/pattern_4.png) |  | [<NSFW, click to see>](4420/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4420/previews/nude.png) | [<NSFW, click to see>](4420/previews/nude2.png) |  |  |
| **4080** | **0.956** | [**Download**](4080/ebihara_naho_idolmastercinderellagirls.zip) |  |  |  | [<NSFW, click to see>](4080/previews/pattern_4.png) |  | [<NSFW, click to see>](4080/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4080/previews/nude.png) | [<NSFW, click to see>](4080/previews/nude2.png) |  |  |
| 3740 | 0.948 | [Download](3740/ebihara_naho_idolmastercinderellagirls.zip) |  |  |  | [<NSFW, click to see>](3740/previews/pattern_4.png) |  | [<NSFW, click to see>](3740/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3740/previews/nude.png) | [<NSFW, click to see>](3740/previews/nude2.png) |  |  |
| 3400 | 0.914 | [Download](3400/ebihara_naho_idolmastercinderellagirls.zip) |  |  |  | [<NSFW, click to see>](3400/previews/pattern_4.png) |  | [<NSFW, click to see>](3400/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3400/previews/nude.png) | [<NSFW, click to see>](3400/previews/nude2.png) |  |  |
| 3060 | 0.937 | [Download](3060/ebihara_naho_idolmastercinderellagirls.zip) |  |  |  | [<NSFW, click to see>](3060/previews/pattern_4.png) |  | [<NSFW, click to see>](3060/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3060/previews/nude.png) | [<NSFW, click to see>](3060/previews/nude2.png) |  |  |
| 2720 | 0.845 | [Download](2720/ebihara_naho_idolmastercinderellagirls.zip) |  |  |  | [<NSFW, click to see>](2720/previews/pattern_4.png) |  | [<NSFW, click to see>](2720/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2720/previews/nude.png) | [<NSFW, click to see>](2720/previews/nude2.png) |  |  |
| 2380 | 0.904 | [Download](2380/ebihara_naho_idolmastercinderellagirls.zip) |  |  |  | [<NSFW, click to see>](2380/previews/pattern_4.png) |  | [<NSFW, click to see>](2380/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2380/previews/nude.png) | [<NSFW, click to see>](2380/previews/nude2.png) |  |  |
| 2040 | 0.904 | [Download](2040/ebihara_naho_idolmastercinderellagirls.zip) |  |  |  | [<NSFW, click to see>](2040/previews/pattern_4.png) |  | [<NSFW, click to see>](2040/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2040/previews/nude.png) | [<NSFW, click to see>](2040/previews/nude2.png) |  |  |
| 1700 | 0.926 | [Download](1700/ebihara_naho_idolmastercinderellagirls.zip) |  |  |  | [<NSFW, click to see>](1700/previews/pattern_4.png) |  | [<NSFW, click to see>](1700/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1700/previews/nude.png) | [<NSFW, click to see>](1700/previews/nude2.png) |  |  |
| 1360 | 0.940 | [Download](1360/ebihara_naho_idolmastercinderellagirls.zip) |  |  |  | [<NSFW, click to see>](1360/previews/pattern_4.png) |  | [<NSFW, click to see>](1360/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1360/previews/nude.png) | [<NSFW, click to see>](1360/previews/nude2.png) |  |  |
| 1020 | 0.942 | [Download](1020/ebihara_naho_idolmastercinderellagirls.zip) |  |  |  | [<NSFW, click to see>](1020/previews/pattern_4.png) |  | [<NSFW, click to see>](1020/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1020/previews/nude.png) | [<NSFW, click to see>](1020/previews/nude2.png) |  |  |
| 680 | 0.924 | [Download](680/ebihara_naho_idolmastercinderellagirls.zip) |  |  |  | [<NSFW, click to see>](680/previews/pattern_4.png) |  | [<NSFW, click to see>](680/previews/bondage.png) |  |  |  | [<NSFW, click to see>](680/previews/nude.png) | [<NSFW, click to see>](680/previews/nude2.png) |  |  |
| 340 | 0.912 | [Download](340/ebihara_naho_idolmastercinderellagirls.zip) |  |  |  | [<NSFW, click to see>](340/previews/pattern_4.png) |  | [<NSFW, click to see>](340/previews/bondage.png) |  |  |  | [<NSFW, click to see>](340/previews/nude.png) | [<NSFW, click to see>](340/previews/nude2.png) |  |  |
|
eunyounglee/GPT-NeoX-1.3B-2GB-Eng
|
eunyounglee
| 2023-09-21T08:58:57Z | 60 | 1 |
transformers
|
[
"transformers",
"pytorch",
"gpt_neox",
"text-generation",
"eng",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-09-21T05:43:41Z |
---
language:
- eng
pipeline_tag: text-generation
Trained: Pretrain
Config file: 1.3B
Data: English News Dataset 2GB (177MB)
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
GPT-NeoX model pretrained on a 2.06 GB English news dataset. Training took about 2 hours and 10 minutes to reach 10,000 iterations on a p3dn.24xlarge instance.
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** Eunyoung Lee
- **Model type:** GPT-NeoX
- **Language(s) (NLP):** English
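Assuming the checkpoint loads with the standard `transformers` GPT-NeoX classes (as the repository tags suggest), generation would look roughly like this; the prompt is illustrative:
```python
# Sketch assuming the checkpoint works with the standard GPT-NeoX classes.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("eunyounglee/GPT-NeoX-1.3B-2GB-Eng")
model = AutoModelForCausalLM.from_pretrained("eunyounglee/GPT-NeoX-1.3B-2GB-Eng")

inputs = tokenizer("The news today reported that", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```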
|
loupzeur/a2c-PandaReachDense-v3
|
loupzeur
| 2023-09-21T08:56:39Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-09-21T08:15:07Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v3
type: PandaReachDense-v3
metrics:
- type: mean_reward
value: -0.22 +/- 0.10
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v3**
This is a trained model of an **A2C** agent playing **PandaReachDense-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
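One possible completion of the TODO above is sketched below; the checkpoint filename follows the usual `huggingface_sb3` naming convention and is an assumption, and `panda_gym` is needed to register the environment:
```python
# Possible completion of the TODO above. The checkpoint filename is an assumption
# based on the usual huggingface_sb3 naming convention.
import gymnasium as gym
import panda_gym  # noqa: F401  # registers the PandaReachDense-v3 environment
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

checkpoint = load_from_hub(
    repo_id="loupzeur/a2c-PandaReachDense-v3",
    filename="a2c-PandaReachDense-v3.zip",
)
model = A2C.load(checkpoint)

env = gym.make("PandaReachDense-v3")
obs, _ = env.reset()
action, _ = model.predict(obs, deterministic=True)
```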
|
dsmsb/16_combo_2109_v2
|
dsmsb
| 2023-09-21T08:48:35Z | 110 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-multilingual-cased",
"base_model:finetune:google-bert/bert-base-multilingual-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-09-21T08:08:13Z |
---
license: apache-2.0
base_model: bert-base-multilingual-cased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: 16_combo_webscrap_2109_v1_addgptdf
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 16_combo_webscrap_2109_v1_addgptdf
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1495
- Accuracy: 0.9568
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 467 | 0.8510 | 0.7806 |
| 1.534 | 2.0 | 934 | 0.5037 | 0.8696 |
| 0.7131 | 3.0 | 1401 | 0.3481 | 0.9104 |
| 0.4879 | 4.0 | 1868 | 0.2717 | 0.9244 |
| 0.3665 | 5.0 | 2335 | 0.2324 | 0.9360 |
| 0.2948 | 6.0 | 2802 | 0.1949 | 0.9451 |
| 0.24 | 7.0 | 3269 | 0.1550 | 0.9566 |
| 0.1961 | 8.0 | 3736 | 0.1495 | 0.9568 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu117
- Datasets 2.14.5
- Tokenizers 0.13.3
|
elemtopos/dqn-SpaceInvadersNoFrameskip-v4
|
elemtopos
| 2023-09-21T08:46:40Z | 6 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-09-20T15:49:36Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 270.50 +/- 83.53
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga elemtopos -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga elemtopos -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga elemtopos
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 200000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
Aharneish/qa-model
|
Aharneish
| 2023-09-21T08:42:04Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"distilbert",
"question-answering",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-04-05T15:31:45Z |
---
license: apache-2.0
tags:
- generated_from_trainer
base_model: distilbert-base-uncased
model-index:
- name: qa-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# qa-model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
|
Aharneish/qa-flant5
|
Aharneish
| 2023-09-21T08:41:58Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google/flan-t5-base",
"base_model:finetune:google/flan-t5-base",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-09-09T10:07:44Z |
---
license: apache-2.0
tags:
- generated_from_trainer
base_model: google/flan-t5-base
model-index:
- name: qa-flant5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# qa-flant5
This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 1
### Training results
### Framework versions
- Transformers 4.27.2
- Pytorch 1.13.1+cu117
- Datasets 2.11.0
- Tokenizers 0.13.3
|
linoyts/huggy-lora-sdxl-v6
|
linoyts
| 2023-09-21T08:39:23Z | 206 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-09-21T08:39:09Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
pivotal_tuning: true
textual_embeddings: embeddings.pti
instance_prompt: <s0><s1>
inference: false
---
# huggy-lora-sdxl-v6 LoRA by [linoytsaban](https://replicate.com/linoytsaban)
### caption prefix: a TOK emoji, steps: 1500

## Inference with Replicate API
Grab your replicate token [here](https://replicate.com/account)
```bash
pip install replicate
export REPLICATE_API_TOKEN=r8_*************************************
```
```py
import replicate
output = replicate.run(
"linoy_lora@sha256:c659971dd2ba3789a80549674b90f69eebd865164d1219f53f96f7f7506911c1",
input={"prompt": "a hugging face emoji in the style of TOK, dressed as yoda"}
)
print(output)
```
You may also do inference via the API with Node.js or curl, and locally with COG and Docker, [check out the Replicate API page for this model](https://replicate.com/linoytsaban/linoy_lora/api)
## Inference with 🧨 diffusers
Replicate SDXL LoRAs are trained with Pivotal Tuning, which combines training a concept via Dreambooth LoRA with training a new token with Textual Inversion.
As `diffusers` doesn't yet support textual inversion for SDXL, we will use the cog-sdxl `TokenEmbeddingsHandler` class.
The trigger tokens for your prompt will be `<s0><s1>`
```shell
pip install diffusers transformers accelerate safetensors huggingface_hub
git clone https://github.com/replicate/cog-sdxl cog_sdxl
```
```py
import torch
from huggingface_hub import hf_hub_download
from diffusers import DiffusionPipeline
from cog_sdxl.dataset_and_utils import TokenEmbeddingsHandler
from diffusers.models import AutoencoderKL

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
    variant="fp16",
).to("cuda")

pipe.load_lora_weights("LinoyTsaban/huggy-lora-sdxl-v6", weight_name="lora.safetensors")

text_encoders = [pipe.text_encoder, pipe.text_encoder_2]
tokenizers = [pipe.tokenizer, pipe.tokenizer_2]

embedding_path = hf_hub_download(repo_id="LinoyTsaban/huggy-lora-sdxl-v6", filename="embeddings.pti", repo_type="model")
embhandler = TokenEmbeddingsHandler(text_encoders, tokenizers)
embhandler.load_embeddings(embedding_path)

prompt = "a hugging face emoji in the style of <s0><s1>, dressed as yoda"
images = pipe(
    prompt,
    cross_attention_kwargs={"scale": 0.8},
).images

# your output image
images[0]
```
|
EnzoZacharias/LLama2-7b-fine-tuned-plc_V1
|
EnzoZacharias
| 2023-09-21T08:37:41Z | 0 | 0 | null |
[
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-chat-hf",
"base_model:finetune:meta-llama/Llama-2-7b-chat-hf",
"region:us"
] | null | 2023-09-21T08:28:00Z |
---
base_model: meta-llama/Llama-2-7b-chat-hf
tags:
- generated_from_trainer
model-index:
- name: LLama2-7b-fine-tuned-plc_V1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# LLama2-7b-fine-tuned-plc_V1
This model is a fine-tuned version of [meta-llama/Llama-2-7b-chat-hf](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2
- training_steps: 50
### Training results
### Framework versions
- Transformers 4.34.0.dev0
- Pytorch 2.1.0.dev20230823
- Datasets 2.14.4
- Tokenizers 0.13.3
|
JoyboyXoXo/ppo-lunarlander-v3
|
JoyboyXoXo
| 2023-09-21T08:34:09Z | 0 | 0 | null |
[
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-09-21T08:34:03Z |
---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -178.93 +/- 58.48
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo'
'seed': 1
'torch_deterministic': True
'cuda': True
'track': False
'wandb_project_name': 'cleanRL'
'wandb_entity': None
'capture_video': False
'env_id': 'LunarLander-v2'
'total_timesteps': 50000
'learning_rate': 0.00025
'num_envs': 4
'num_steps': 128
'anneal_lr': True
'gae': True
'gamma': 0.99
'gae_lambda': 0.95
'num_minibatches': 4
'update_epochs': 4
'norm_adv': True
'clip_coef': 0.2
'clip_vloss': True
'ent_coef': 0.01
'vf_coef': 0.5
'max_grad_norm': 0.5
'target_kl': None
'repo_id': 'JoyboyXoXo/ppo-lunarlander-v3'
'batch_size': 512
'minibatch_size': 128}
```
|
TemporalGames/opt-1.3b-lambada_rmt_ms7_bptt7_sl2028_mt10_lTrue_LORA_cur2
|
TemporalGames
| 2023-09-21T08:28:42Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-09-21T08:28:39Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.6.0.dev0
|
McMilly/TNF-Milly
|
McMilly
| 2023-09-21T08:17:33Z | 0 | 0 | null |
[
"license:bigscience-openrail-m",
"region:us"
] | null | 2023-09-21T08:17:33Z |
---
license: bigscience-openrail-m
---
|
actualbrain/Reinforce-pixelcopter-v1
|
actualbrain
| 2023-09-21T08:17:31Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-09-21T07:50:19Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-pixelcopter-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 18.40 +/- 12.92
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
huygdng/whisper_small_tw11
|
huygdng
| 2023-09-21T08:15:03Z | 75 | 0 |
transformers
|
[
"transformers",
"pytorch",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-09-21T08:14:16Z |
---
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: whisper_small_tw11
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper_small_tw11
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0462
- Wer: 1.3691
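As an illustrative sketch (not part of the original card), transcription with the `transformers` ASR pipeline could look like this; the audio file path is a placeholder:
```python
# Minimal transcription sketch; the audio file path is a placeholder.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="huygdng/whisper_small_tw11",
)
result = asr("sample_audio.wav")  # placeholder path to a local audio file
print(result["text"])
```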
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6.25e-06
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1200
- training_steps: 2400
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 2.5528 | 2.86 | 400 | 2.8688 | 2.2350 |
| 1.5461 | 5.71 | 800 | 2.2548 | 2.1533 |
| 0.6586 | 8.57 | 1200 | 2.4110 | 1.5250 |
| 0.1633 | 11.43 | 1600 | 2.6985 | 1.4415 |
| 0.0318 | 14.29 | 2000 | 2.9465 | 1.2165 |
| 0.0119 | 17.14 | 2400 | 3.0462 | 1.3691 |
### Framework versions
- Transformers 4.34.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.14.0
|
jmbilbao25/falcon-7b-instruct-ft-adapters
|
jmbilbao25
| 2023-09-21T08:11:37Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-09-21T08:11:31Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.6.0.dev0
|
hihisu1231/mbti_230921_4
|
hihisu1231
| 2023-09-21T08:10:51Z | 140 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt_neox",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-09-21T08:06:27Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: polyglot-1.3b-koalpaca-v1.1a-rtx3090__230921_4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# polyglot-1.3b-koalpaca-v1.1a-rtx3090__230921_4
This model is a fine-tuned version of [EleutherAI/polyglot-ko-1.3b](https://huggingface.co/EleutherAI/polyglot-ko-1.3b) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu117
- Datasets 2.14.4
- Tokenizers 0.13.3
|
hbbz/cyberhbbz
|
hbbz
| 2023-09-21T08:02:30Z | 29 | 0 |
diffusers
|
[
"diffusers",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-09-21T07:59:13Z |
---
license: creativeml-openrail-m
---
|
QWW/dreambooth_beacon
|
QWW
| 2023-09-21T07:55:28Z | 29 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"base_model:CompVis/stable-diffusion-v1-4",
"base_model:finetune:CompVis/stable-diffusion-v1-4",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-09-21T07:42:22Z |
---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-4
instance_prompt: a photo of sks dog
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - QWW/dreambooth_beacon
This is a dreambooth model derived from CompVis/stable-diffusion-v1-4. The weights were trained on a photo of sks dog using [DreamBooth](https://dreambooth.github.io/).
You can find some example images in the following.
DreamBooth for the text encoder was enabled: False.
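A minimal inference sketch with 🤗 Diffusers (fp16 weights and a CUDA device are assumptions about the runtime):
```python
import torch
from diffusers import StableDiffusionPipeline

# Load the DreamBooth fine-tune and generate with the instance prompt it was trained on.
pipe = StableDiffusionPipeline.from_pretrained(
    "QWW/dreambooth_beacon", torch_dtype=torch.float16
).to("cuda")

image = pipe("a photo of sks dog", num_inference_steps=50, guidance_scale=7.5).images[0]
image.save("sks_dog.png")
```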
|
loupzeur/ppo-SnowballTarget
|
loupzeur
| 2023-09-21T07:53:51Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2023-09-21T07:52:53Z |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: loupzeur/ppo-SnowballTarget
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
AIYIYA/my_html2
|
AIYIYA
| 2023-09-21T07:50:25Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"base_model:google-bert/bert-base-chinese",
"base_model:finetune:google-bert/bert-base-chinese",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-09-21T06:37:34Z |
---
base_model: bert-base-chinese
tags:
- generated_from_keras_callback
model-index:
- name: AIYIYA/my_html2
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# AIYIYA/my_html2
This model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1581
- Train Accuracy: 0.9835
- Validation Loss: 0.1561
- Validation Accuracy: 1.0
- Epoch: 2
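A minimal inference sketch with the TensorFlow weights (the label names are not documented in this card, so only the raw class index is printed):
```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("AIYIYA/my_html2")
model = TFAutoModelForSequenceClassification.from_pretrained("AIYIYA/my_html2")

# Tokenize a Chinese example sentence and pick the highest-scoring class index.
inputs = tokenizer("这是一个测试句子", return_tensors="tf")
logits = model(**inputs).logits
print(int(tf.argmax(logits, axis=-1)[0]))
```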
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': 24, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.3969 | 0.9339 | 0.2428 | 0.9512 | 0 |
| 0.1840 | 0.9835 | 0.1561 | 1.0 | 1 |
| 0.1581 | 0.9835 | 0.1561 | 1.0 | 2 |
### Framework versions
- Transformers 4.33.2
- TensorFlow 2.13.0
- Datasets 2.14.5
- Tokenizers 0.13.3
|
Apptware/D_tell_market_falcon7b_sharded
|
Apptware
| 2023-09-21T07:47:05Z | 4 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-09-21T07:47:02Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.6.0.dev0
|
yezituan/test
|
yezituan
| 2023-09-21T07:46:42Z | 0 | 0 | null |
[
"en",
"dataset:allenai/dolma",
"license:openrail",
"region:us"
] | null | 2023-09-21T07:44:03Z |
---
license: openrail
datasets:
- allenai/dolma
language:
- en
metrics:
- accuracy
---
|
Chang-Su/llama-2-13b-chat-ko-adapter
|
Chang-Su
| 2023-09-21T07:41:56Z | 0 | 0 | null |
[
"license:cc-by-nc-sa-4.0",
"region:us"
] | null | 2023-08-09T14:11:05Z |
---
license: cc-by-nc-sa-4.0
---
|
zongxiao/distilhubert-finetuned-gtzan
|
zongxiao
| 2023-09-21T07:20:31Z | 161 | 0 |
transformers
|
[
"transformers",
"pytorch",
"hubert",
"audio-classification",
"generated_from_trainer",
"dataset:marsyas/gtzan",
"base_model:ntu-spml/distilhubert",
"base_model:finetune:ntu-spml/distilhubert",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2023-09-21T03:32:42Z |
---
license: apache-2.0
base_model: ntu-spml/distilhubert
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
metrics:
- accuracy
model-index:
- name: distilhubert-finetuned-gtzan
results:
- task:
name: Audio Classification
type: audio-classification
dataset:
name: GTZAN
type: marsyas/gtzan
config: all
split: train
args: all
metrics:
- name: Accuracy
type: accuracy
value: 0.84
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilhubert-finetuned-gtzan
This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the GTZAN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5365
- Accuracy: 0.84
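A minimal inference sketch using the audio-classification pipeline (the audio file path is a placeholder):
```python
from transformers import pipeline

classifier = pipeline("audio-classification", model="zongxiao/distilhubert-finetuned-gtzan")

# Print the top-3 predicted music genres for a local clip.
for pred in classifier("song.wav", top_k=3):
    print(f"{pred['label']}: {pred['score']:.3f}")
```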
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.002 | 1.0 | 112 | 1.8275 | 0.38 |
| 1.3205 | 2.0 | 225 | 1.1926 | 0.72 |
| 1.0811 | 3.0 | 337 | 0.9175 | 0.75 |
| 1.0449 | 4.0 | 450 | 0.8505 | 0.73 |
| 0.6167 | 5.0 | 562 | 0.6636 | 0.82 |
| 0.4868 | 6.0 | 675 | 0.7787 | 0.77 |
| 0.3014 | 7.0 | 787 | 0.5535 | 0.83 |
| 0.2111 | 8.0 | 900 | 0.5329 | 0.82 |
| 0.1308 | 9.0 | 1012 | 0.5277 | 0.85 |
| 0.0825 | 9.96 | 1120 | 0.5365 | 0.84 |
### Framework versions
- Transformers 4.34.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.14.0
|
ckeisc/lora-trained
|
ckeisc
| 2023-09-21T07:10:31Z | 2 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:SG161222/Realistic_Vision_V5.0_noVAE",
"base_model:adapter:SG161222/Realistic_Vision_V5.0_noVAE",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-09-21T05:06:28Z |
---
license: creativeml-openrail-m
base_model: SG161222/Realistic_Vision_V5.0_noVAE
instance_prompt: a photo of ch1u_bubu toddler
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - ckeisc/lora-trained
These are LoRA adaptation weights for SG161222/Realistic_Vision_V5.0_noVAE. The weights were trained on a photo of ch1u_bubu toddler using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following.




LoRA for the text encoder was enabled: False.
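A minimal sketch of applying these LoRA weights on top of the base model with 🤗 Diffusers (fp16/CUDA are runtime assumptions, and `noVAE` base checkpoints are often paired with an external VAE, which is omitted here):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "SG161222/Realistic_Vision_V5.0_noVAE", torch_dtype=torch.float16
).to("cuda")

# Load the LoRA adaptation weights from this repository.
pipe.load_lora_weights("ckeisc/lora-trained")

image = pipe("a photo of ch1u_bubu toddler", num_inference_steps=30).images[0]
image.save("ch1u_bubu.png")
```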
|
CyberHarem/wakui_rumi_idolmastercinderellagirls
|
CyberHarem
| 2023-09-21T07:09:06Z | 0 | 0 | null |
[
"art",
"text-to-image",
"dataset:CyberHarem/wakui_rumi_idolmastercinderellagirls",
"license:mit",
"region:us"
] |
text-to-image
| 2023-09-21T06:58:24Z |
---
license: mit
datasets:
- CyberHarem/wakui_rumi_idolmastercinderellagirls
pipeline_tag: text-to-image
tags:
- art
---
# Lora of wakui_rumi_idolmastercinderellagirls
This model is trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion), and the auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs).
The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11).
After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora.
For example, if you want to use the model from step 4760, you need to download `4760/wakui_rumi_idolmastercinderellagirls.pt` as the embedding and `4760/wakui_rumi_idolmastercinderellagirls.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters.
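The card targets HCP-Diffusion, but as a rough illustration of the 'embedding + LoRA together' idea, the two files can in principle be attached to a diffusers pipeline via its textual-inversion and LoRA loaders. Whether HCP-Diffusion outputs load cleanly this way, and whether the MeinaMix base repo is in diffusers format, are assumptions:
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "Meina/MeinaMix_V11", torch_dtype=torch.float16
).to("cuda")

repo = "CyberHarem/wakui_rumi_idolmastercinderellagirls"

# Load the step-4760 embedding (used as the trigger token) and the matching LoRA weights.
pipe.load_textual_inversion(
    repo,
    weight_name="4760/wakui_rumi_idolmastercinderellagirls.pt",
    token="wakui_rumi_idolmastercinderellagirls",
)
pipe.load_lora_weights(repo, weight_name="4760/wakui_rumi_idolmastercinderellagirls.safetensors")

image = pipe("wakui_rumi_idolmastercinderellagirls, short_hair, blue_hair").images[0]
image.save("preview.png")
```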
**The best step we recommend is 4760**, with a score of 0.939. The trigger words are:
1. `wakui_rumi_idolmastercinderellagirls`
2. `short_hair, blue_hair, jewelry, black_hair`
For the following groups, it is not recommended to use this model and we express regret:
1. Individuals who cannot tolerate any deviations from the original character design, even in the slightest detail.
2. Individuals whose application scenarios demand high accuracy in recreating character outfits.
3. Individuals who cannot accept the potential randomness in AI-generated images based on the Stable Diffusion algorithm.
4. Individuals who are not comfortable with the fully automated process of training character models using LoRA, or those who believe that training character models must be done purely through manual operations to avoid disrespecting the characters.
5. Individuals who find the generated image content offensive to their values.
These are available steps:
| Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | pattern_5 | pattern_6 | pattern_7 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata |
|:---------|:----------|:--------------------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------|:--------------------------------------------------|:-------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------|
| 5100 | 0.923 | [Download](5100/wakui_rumi_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  | [<NSFW, click to see>](5100/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5100/previews/nude.png) | [<NSFW, click to see>](5100/previews/nude2.png) |  |  |
| **4760** | **0.939** | [**Download**](4760/wakui_rumi_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4760/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4760/previews/nude.png) | [<NSFW, click to see>](4760/previews/nude2.png) |  |  |
| 4420 | 0.854 | [Download](4420/wakui_rumi_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4420/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4420/previews/nude.png) | [<NSFW, click to see>](4420/previews/nude2.png) |  |  |
| 4080 | 0.820 | [Download](4080/wakui_rumi_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4080/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4080/previews/nude.png) | [<NSFW, click to see>](4080/previews/nude2.png) |  |  |
| 3740 | 0.873 | [Download](3740/wakui_rumi_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3740/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3740/previews/nude.png) | [<NSFW, click to see>](3740/previews/nude2.png) |  |  |
| 3400 | 0.766 | [Download](3400/wakui_rumi_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3400/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3400/previews/nude.png) | [<NSFW, click to see>](3400/previews/nude2.png) |  |  |
| 3060 | 0.829 | [Download](3060/wakui_rumi_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3060/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3060/previews/nude.png) | [<NSFW, click to see>](3060/previews/nude2.png) |  |  |
| 2720 | 0.758 | [Download](2720/wakui_rumi_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2720/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2720/previews/nude.png) | [<NSFW, click to see>](2720/previews/nude2.png) |  |  |
| 2380 | 0.755 | [Download](2380/wakui_rumi_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2380/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2380/previews/nude.png) | [<NSFW, click to see>](2380/previews/nude2.png) |  |  |
| 2040 | 0.911 | [Download](2040/wakui_rumi_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2040/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2040/previews/nude.png) | [<NSFW, click to see>](2040/previews/nude2.png) |  |  |
| 1700 | 0.842 | [Download](1700/wakui_rumi_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1700/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1700/previews/nude.png) | [<NSFW, click to see>](1700/previews/nude2.png) |  |  |
| 1360 | 0.824 | [Download](1360/wakui_rumi_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1360/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1360/previews/nude.png) | [<NSFW, click to see>](1360/previews/nude2.png) |  |  |
| 1020 | 0.908 | [Download](1020/wakui_rumi_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1020/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1020/previews/nude.png) | [<NSFW, click to see>](1020/previews/nude2.png) |  |  |
| 680 | 0.903 | [Download](680/wakui_rumi_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  | [<NSFW, click to see>](680/previews/bondage.png) |  |  |  | [<NSFW, click to see>](680/previews/nude.png) | [<NSFW, click to see>](680/previews/nude2.png) |  |  |
| 340 | 0.898 | [Download](340/wakui_rumi_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  | [<NSFW, click to see>](340/previews/bondage.png) |  |  |  | [<NSFW, click to see>](340/previews/nude.png) | [<NSFW, click to see>](340/previews/nude2.png) |  |  |
|
turboderp/Llama2-13B-exl2
|
turboderp
| 2023-09-21T06:44:13Z | 19 | 2 | null |
[
"region:us"
] | null | 2023-09-21T06:42:13Z |
EXL2 quants of Llama2-13B
[2.50 bits per weight](https://huggingface.co/turboderp/Llama2-13B-exl2/tree/2.5bpw)
[3.00 bits per weight](https://huggingface.co/turboderp/Llama2-13B-exl2/tree/3.0bpw)
[3.50 bits per weight](https://huggingface.co/turboderp/Llama2-13B-exl2/tree/3.5bpw)
[4.00 bits per weight](https://huggingface.co/turboderp/Llama2-13B-exl2/tree/4.0bpw)
[4.65 bits per weight](https://huggingface.co/turboderp/Llama2-13B-exl2/tree/4.65bpw)
[5.00 bits per weight](https://huggingface.co/turboderp/Llama2-13B-exl2/tree/5.0bpw)
[6.00 bits per weight](https://huggingface.co/turboderp/Llama2-13B-exl2/tree/6.0bpw)
[8.00 bits per weight](https://huggingface.co/turboderp/Llama2-13B-exl2/tree/8.0bpw)
[measurement.json](https://huggingface.co/turboderp/Llama2-13B-exl2/blob/main/measurement.json)
|
h4lo/my_awesome_eli5_clm-model-text
|
h4lo
| 2023-09-21T06:41:29Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-09-21T06:18:06Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: my_awesome_eli5_clm-model-text
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_eli5_clm-model-text
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.7314
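A minimal generation sketch with the fine-tuned checkpoint (the sampling settings are arbitrary):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("h4lo/my_awesome_eli5_clm-model-text")
model = AutoModelForCausalLM.from_pretrained("h4lo/my_awesome_eli5_clm-model-text")

# Generate a continuation for a short prompt.
inputs = tokenizer("Somatic hypermutation allows the immune system to", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50, do_sample=True, top_p=0.95)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```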
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 12
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 3.8707 | 1.0 | 1133 | 3.7535 |
| 3.7616 | 2.0 | 2266 | 3.7337 |
| 3.6998 | 3.0 | 3399 | 3.7246 |
| 3.6529 | 4.0 | 4532 | 3.7209 |
| 3.6022 | 5.0 | 5665 | 3.7203 |
| 3.5724 | 6.0 | 6798 | 3.7218 |
| 3.5374 | 7.0 | 7931 | 3.7198 |
| 3.5151 | 8.0 | 9064 | 3.7240 |
| 3.5004 | 9.0 | 10197 | 3.7274 |
| 3.4857 | 10.0 | 11330 | 3.7288 |
| 3.4702 | 11.0 | 12463 | 3.7305 |
| 3.4646 | 12.0 | 13596 | 3.7314 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
uttu/llama2_dolly_20_steps
|
uttu
| 2023-09-21T06:39:32Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-09-21T06:39:26Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.6.0.dev0
|
lash/phi-1_5-finetuned-gsm8k
|
lash
| 2023-09-21T06:34:12Z | 56 | 0 |
transformers
|
[
"transformers",
"pytorch",
"mixformer-sequential",
"text-generation",
"generated_from_trainer",
"custom_code",
"base_model:microsoft/phi-1_5",
"base_model:finetune:microsoft/phi-1_5",
"license:other",
"autotrain_compatible",
"region:us"
] |
text-generation
| 2023-09-21T06:22:54Z |
---
license: other
base_model: microsoft/phi-1_5
tags:
- generated_from_trainer
model-index:
- name: phi-1_5-finetuned-gsm8k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# phi-1_5-finetuned-gsm8k
This model is a fine-tuned version of [microsoft/phi-1_5](https://huggingface.co/microsoft/phi-1_5) on the None dataset.
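A minimal generation sketch (the checkpoint uses custom model code, so `trust_remote_code=True` is required; the question/answer prompt format is an assumption):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("lash/phi-1_5-finetuned-gsm8k", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("lash/phi-1_5-finetuned-gsm8k", trust_remote_code=True)

prompt = "Question: Natalia sold 48 clips in April and half as many in May. How many clips did she sell in total?\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```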
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- training_steps: 1000
### Training results
### Framework versions
- Transformers 4.33.1
- Pytorch 2.1.0.dev20230629
- Datasets 2.14.5
- Tokenizers 0.13.3
|
Sandra26/Sandy
|
Sandra26
| 2023-09-21T06:29:44Z | 109 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-09-20T21:04:51Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: Sandy
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: mrpc
split: validation
args: mrpc
metrics:
- name: Accuracy
type: accuracy
value: 0.8406862745098039
- name: F1
type: f1
value: 0.8820326678765881
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Sandy
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6410
- Accuracy: 0.8407
- F1: 0.8820
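MRPC is a sentence-pair (paraphrase) task, so inference encodes two sentences together. A minimal sketch (label ids are assumed to follow the GLUE MRPC convention, where 1 means "paraphrase"):
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Sandra26/Sandy")
model = AutoModelForSequenceClassification.from_pretrained("Sandra26/Sandy")

s1 = "The company said the acquisition will close in the fourth quarter."
s2 = "The acquisition is expected to be completed by the end of the year."

# Encode the pair together and read off the predicted class.
inputs = tokenizer(s1, s2, return_tensors="pt")
pred = torch.argmax(model(**inputs).logits, dim=-1).item()
print(pred)
```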
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.4994 | 1.09 | 500 | 0.7821 | 0.8211 | 0.8793 |
| 0.3466 | 2.18 | 1000 | 0.6410 | 0.8407 | 0.8820 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
CyberHarem/tsuchiya_ako_idolmastercinderellagirls
|
CyberHarem
| 2023-09-21T06:19:56Z | 0 | 0 | null |
[
"art",
"text-to-image",
"dataset:CyberHarem/tsuchiya_ako_idolmastercinderellagirls",
"license:mit",
"region:us"
] |
text-to-image
| 2023-09-21T06:09:58Z |
---
license: mit
datasets:
- CyberHarem/tsuchiya_ako_idolmastercinderellagirls
pipeline_tag: text-to-image
tags:
- art
---
# Lora of tsuchiya_ako_idolmastercinderellagirls
This model is trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion), and the auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs).
The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11).
After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora.
For example, if you want to use the model from step 3060, you need to download `3060/tsuchiya_ako_idolmastercinderellagirls.pt` as the embedding and `3060/tsuchiya_ako_idolmastercinderellagirls.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters.
**The best step we recommend is 3060**, with a score of 0.968. The trigger words are:
1. `tsuchiya_ako_idolmastercinderellagirls`
2. `brown_hair, short_hair, glasses, hair_ornament, mole, hairclip, ahoge, green_eyes, smile, mole_under_mouth, open_mouth`
For the following groups, it is not recommended to use this model and we express regret:
1. Individuals who cannot tolerate any deviations from the original character design, even in the slightest detail.
2. Individuals whose application scenarios demand high accuracy in recreating character outfits.
3. Individuals who cannot accept the potential randomness in AI-generated images based on the Stable Diffusion algorithm.
4. Individuals who are not comfortable with the fully automated process of training character models using LoRA, or those who believe that training character models must be done purely through manual operations to avoid disrespecting the characters.
5. Individuals who find the generated image content offensive to their values.
These are available steps:
| Steps | Score | Download | pattern_1 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata |
|:---------|:----------|:----------------------------------------------------------------|:-----------------------------------------------|:-----------------------------------------|:--------------------------------------------------|:-----------------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------|
| 5100 | 0.960 | [Download](5100/tsuchiya_ako_idolmastercinderellagirls.zip) |  |  | [<NSFW, click to see>](5100/previews/bondage.png) | [<NSFW, click to see>](5100/previews/free.png) |  |  | [<NSFW, click to see>](5100/previews/nude.png) | [<NSFW, click to see>](5100/previews/nude2.png) |  |  |
| 4760 | 0.953 | [Download](4760/tsuchiya_ako_idolmastercinderellagirls.zip) |  |  | [<NSFW, click to see>](4760/previews/bondage.png) | [<NSFW, click to see>](4760/previews/free.png) |  |  | [<NSFW, click to see>](4760/previews/nude.png) | [<NSFW, click to see>](4760/previews/nude2.png) |  |  |
| 4420 | 0.950 | [Download](4420/tsuchiya_ako_idolmastercinderellagirls.zip) |  |  | [<NSFW, click to see>](4420/previews/bondage.png) | [<NSFW, click to see>](4420/previews/free.png) |  |  | [<NSFW, click to see>](4420/previews/nude.png) | [<NSFW, click to see>](4420/previews/nude2.png) |  |  |
| 4080 | 0.950 | [Download](4080/tsuchiya_ako_idolmastercinderellagirls.zip) |  |  | [<NSFW, click to see>](4080/previews/bondage.png) | [<NSFW, click to see>](4080/previews/free.png) |  |  | [<NSFW, click to see>](4080/previews/nude.png) | [<NSFW, click to see>](4080/previews/nude2.png) |  |  |
| 3740 | 0.953 | [Download](3740/tsuchiya_ako_idolmastercinderellagirls.zip) |  |  | [<NSFW, click to see>](3740/previews/bondage.png) | [<NSFW, click to see>](3740/previews/free.png) |  |  | [<NSFW, click to see>](3740/previews/nude.png) | [<NSFW, click to see>](3740/previews/nude2.png) |  |  |
| 3400 | 0.961 | [Download](3400/tsuchiya_ako_idolmastercinderellagirls.zip) |  |  | [<NSFW, click to see>](3400/previews/bondage.png) | [<NSFW, click to see>](3400/previews/free.png) |  |  | [<NSFW, click to see>](3400/previews/nude.png) | [<NSFW, click to see>](3400/previews/nude2.png) |  |  |
| **3060** | **0.968** | [**Download**](3060/tsuchiya_ako_idolmastercinderellagirls.zip) |  |  | [<NSFW, click to see>](3060/previews/bondage.png) | [<NSFW, click to see>](3060/previews/free.png) |  |  | [<NSFW, click to see>](3060/previews/nude.png) | [<NSFW, click to see>](3060/previews/nude2.png) |  |  |
| 2720 | 0.957 | [Download](2720/tsuchiya_ako_idolmastercinderellagirls.zip) |  |  | [<NSFW, click to see>](2720/previews/bondage.png) | [<NSFW, click to see>](2720/previews/free.png) |  |  | [<NSFW, click to see>](2720/previews/nude.png) | [<NSFW, click to see>](2720/previews/nude2.png) |  |  |
| 2380 | 0.925 | [Download](2380/tsuchiya_ako_idolmastercinderellagirls.zip) |  |  | [<NSFW, click to see>](2380/previews/bondage.png) | [<NSFW, click to see>](2380/previews/free.png) |  |  | [<NSFW, click to see>](2380/previews/nude.png) | [<NSFW, click to see>](2380/previews/nude2.png) |  |  |
| 2040 | 0.917 | [Download](2040/tsuchiya_ako_idolmastercinderellagirls.zip) |  |  | [<NSFW, click to see>](2040/previews/bondage.png) | [<NSFW, click to see>](2040/previews/free.png) |  |  | [<NSFW, click to see>](2040/previews/nude.png) | [<NSFW, click to see>](2040/previews/nude2.png) |  |  |
| 1700 | 0.880 | [Download](1700/tsuchiya_ako_idolmastercinderellagirls.zip) |  |  | [<NSFW, click to see>](1700/previews/bondage.png) | [<NSFW, click to see>](1700/previews/free.png) |  |  | [<NSFW, click to see>](1700/previews/nude.png) | [<NSFW, click to see>](1700/previews/nude2.png) |  |  |
| 1360 | 0.942 | [Download](1360/tsuchiya_ako_idolmastercinderellagirls.zip) |  |  | [<NSFW, click to see>](1360/previews/bondage.png) | [<NSFW, click to see>](1360/previews/free.png) |  |  | [<NSFW, click to see>](1360/previews/nude.png) | [<NSFW, click to see>](1360/previews/nude2.png) |  |  |
| 1020 | 0.908 | [Download](1020/tsuchiya_ako_idolmastercinderellagirls.zip) |  |  | [<NSFW, click to see>](1020/previews/bondage.png) | [<NSFW, click to see>](1020/previews/free.png) |  |  | [<NSFW, click to see>](1020/previews/nude.png) | [<NSFW, click to see>](1020/previews/nude2.png) |  |  |
| 680 | 0.915 | [Download](680/tsuchiya_ako_idolmastercinderellagirls.zip) |  |  | [<NSFW, click to see>](680/previews/bondage.png) | [<NSFW, click to see>](680/previews/free.png) |  |  | [<NSFW, click to see>](680/previews/nude.png) | [<NSFW, click to see>](680/previews/nude2.png) |  |  |
| 340 | 0.835 | [Download](340/tsuchiya_ako_idolmastercinderellagirls.zip) |  |  | [<NSFW, click to see>](340/previews/bondage.png) | [<NSFW, click to see>](340/previews/free.png) |  |  | [<NSFW, click to see>](340/previews/nude.png) | [<NSFW, click to see>](340/previews/nude2.png) |  |  |
|
Eito2023/EitisStimmen
|
Eito2023
| 2023-09-21T06:19:08Z | 0 | 0 |
nemo
|
[
"nemo",
"de",
"dataset:totally-not-an-llm/EverythingLM-data-V3",
"license:other",
"region:us"
] | null | 2023-09-21T06:13:18Z |
---
license: other
datasets:
- totally-not-an-llm/EverythingLM-data-V3
language:
- de
metrics:
- code_eval
library_name: nemo
---
|
li-ping/songgodv2
|
li-ping
| 2023-09-21T06:11:47Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-09-21T05:54:04Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.5.0.dev0
|
h4lo/my_awesome_billsum_model_0921
|
h4lo
| 2023-09-21T06:09:15Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:billsum",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-09-21T05:52:19Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- billsum
metrics:
- rouge
model-index:
- name: my_awesome_eli5_clm-model
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: billsum
type: billsum
config: default
split: ca_test
args: default
metrics:
- name: Rouge1
type: rouge
value: 0.1959
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_eli5_clm-model
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the billsum dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3071
- Rouge1: 0.1959
- Rouge2: 0.1013
- Rougel: 0.1685
- Rougelsum: 0.1683
- Gen Len: 19.0
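A minimal summarization sketch with the pipeline API (the `summarize:` prefix follows the usual T5 convention and is an assumption about how this model was trained; `truncation=True` keeps long bills within the T5 input limit):
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="h4lo/my_awesome_billsum_model_0921")

bill_text = "The bill amends the Internal Revenue Code to extend the credit for electricity produced from renewable resources."
summary = summarizer("summarize: " + bill_text, max_length=60, min_length=10, truncation=True)
print(summary[0]["summary_text"])
```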
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 62 | 2.7637 | 0.1277 | 0.0387 | 0.1065 | 0.1066 | 19.0 |
| No log | 2.0 | 124 | 2.5350 | 0.1408 | 0.0506 | 0.1165 | 0.1165 | 19.0 |
| No log | 3.0 | 186 | 2.4431 | 0.1503 | 0.0589 | 0.1245 | 0.1245 | 19.0 |
| No log | 4.0 | 248 | 2.3946 | 0.1774 | 0.0796 | 0.1502 | 0.1501 | 19.0 |
| No log | 5.0 | 310 | 2.3601 | 0.19 | 0.0939 | 0.1631 | 0.1631 | 19.0 |
| No log | 6.0 | 372 | 2.3400 | 0.1952 | 0.0993 | 0.1676 | 0.1676 | 19.0 |
| No log | 7.0 | 434 | 2.3238 | 0.196 | 0.1003 | 0.1682 | 0.1681 | 19.0 |
| No log | 8.0 | 496 | 2.3140 | 0.1973 | 0.1017 | 0.1693 | 0.1692 | 19.0 |
| 2.7599 | 9.0 | 558 | 2.3084 | 0.1957 | 0.1009 | 0.1686 | 0.1682 | 19.0 |
| 2.7599 | 10.0 | 620 | 2.3071 | 0.1959 | 0.1013 | 0.1685 | 0.1683 | 19.0 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
Spacetimetravel/autotrain-financial-conversation_financial-summary-bart-90558144325
|
Spacetimetravel
| 2023-09-21T06:00:57Z | 113 | 1 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"bart",
"text2text-generation",
"autotrain",
"summarization",
"unk",
"dataset:Spacetimetravel/autotrain-data-financial-conversation_financial-summary-bart",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
summarization
| 2023-09-21T05:59:10Z |
---
tags:
- autotrain
- summarization
language:
- unk
widget:
- text: "I love AutoTrain"
datasets:
- Spacetimetravel/autotrain-data-financial-conversation_financial-summary-bart
co2_eq_emissions:
emissions: 0.05543082382688346
---
# Model Trained Using AutoTrain
- Problem type: Summarization
- Model ID: 90558144325
- CO2 Emissions (in grams): 0.0554
## Validation Metrics
- Loss: 1.555
- Rouge1: 61.365
- Rouge2: 33.249
- RougeL: 48.538
- RougeLsum: 51.545
- Gen Len: 72.500
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/Spacetimetravel/autotrain-financial-conversation_financial-summary-bart-90558144325
```
|
0xk1h0/codegen2.5-7b-py150k-r20-QLoRA
|
0xk1h0
| 2023-09-21T06:00:16Z | 2 | 1 |
peft
|
[
"peft",
"region:us"
] | null | 2023-09-21T05:13:47Z |
---
library_name: peft
---
## Model Usage
```python
import torch
from transformers import AutoTokenizer
from peft import AutoPeftModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("Salesforce/codegen25-7b-mono", trust_remote_code=True)
tokenizer.pad_token = tokenizer.eos_token
device_map = {"":0}
model = AutoPeftModelForCausalLM.from_pretrained("0xk1h0/codegen2.5-7b-py150k-r20-QLoRA", device_map=device_map, torch_dtype=torch.bfloat16)
text ="""
# Generate AES MODE encrypt python function.
"""
inputs = tokenizer(text, return_tensors="pt").to("cuda")
outputs = model.generate(
input_ids=inputs["input_ids"].to("cuda"),
attention_mask=inputs["attention_mask"],
# max_new_tokens=50,
max_length=256,
do_sample=True,
temperature = 0.4,
top_p=0.95,
pad_token_id=tokenizer.eos_token_id
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.5.0
|
Spacetimetravel/autotrain-financial-conversation_financial-summary-t5-90557144324
|
Spacetimetravel
| 2023-09-21T05:59:15Z | 113 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"t5",
"text2text-generation",
"autotrain",
"summarization",
"unk",
"dataset:Spacetimetravel/autotrain-data-financial-conversation_financial-summary-t5",
"co2_eq_emissions",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
summarization
| 2023-09-21T05:57:40Z |
---
tags:
- autotrain
- summarization
language:
- unk
widget:
- text: "I love AutoTrain"
datasets:
- Spacetimetravel/autotrain-data-financial-conversation_financial-summary-t5
co2_eq_emissions:
emissions: 0.009489750490178377
---
# Model Trained Using AutoTrain
- Problem type: Summarization
- Model ID: 90557144324
- CO2 Emissions (in grams): 0.0095
## Validation Metrics
- Loss: 1.623
- Rouge1: 16.937
- Rouge2: 5.254
- RougeL: 16.937
- RougeLsum: 16.937
- Gen Len: 19.000
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/Spacetimetravel/autotrain-financial-conversation_financial-summary-t5-90557144324
```
|
mirfan899/urdu-distilbert-ner
|
mirfan899
| 2023-09-21T05:56:53Z | 115 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"token-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-multilingual-cased",
"base_model:finetune:distilbert/distilbert-base-multilingual-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-09-21T05:56:31Z |
---
license: apache-2.0
base_model: distilbert-base-multilingual-cased
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: urdu-distilbert-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# urdu-distilbert-ner
This model is a fine-tuned version of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1387
- Precision: 0.7575
- Recall: 0.8057
- F1: 0.7809
- Accuracy: 0.9535
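A minimal inference sketch with the token-classification pipeline (`aggregation_strategy="simple"` groups word pieces into whole entities):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="mirfan899/urdu-distilbert-ner",
    aggregation_strategy="simple",
)

# Urdu example: "The capital of Pakistan is Islamabad."
for entity in ner("پاکستان کا دارالحکومت اسلام آباد ہے۔"):
    print(entity["entity_group"], entity["word"], f"{entity['score']:.3f}")
```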
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.1637 | 1.0 | 2272 | 0.1505 | 0.7131 | 0.7800 | 0.7451 | 0.9457 |
| 0.1159 | 2.0 | 4544 | 0.1390 | 0.7377 | 0.8037 | 0.7693 | 0.9507 |
| 0.0882 | 3.0 | 6816 | 0.1387 | 0.7575 | 0.8057 | 0.7809 | 0.9535 |
### Framework versions
- Transformers 4.33.0
- Pytorch 2.0.0
- Datasets 2.14.5
- Tokenizers 0.13.3
|
actualbrain/Reinforce-CartPolev1
|
actualbrain
| 2023-09-21T05:55:54Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-09-03T11:07:02Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPolev1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
OpenDILabCommunity/Hopper-v3-DDPG
|
OpenDILabCommunity
| 2023-09-21T05:49:02Z | 0 | 0 |
pytorch
|
[
"pytorch",
"deep-reinforcement-learning",
"reinforcement-learning",
"DI-engine",
"Hopper-v3",
"en",
"license:apache-2.0",
"region:us"
] |
reinforcement-learning
| 2023-04-19T01:05:47Z |
---
language: en
license: apache-2.0
library_name: pytorch
tags:
- deep-reinforcement-learning
- reinforcement-learning
- DI-engine
- Hopper-v3
benchmark_name: OpenAI/Gym/MuJoCo
task_name: Hopper-v3
pipeline_tag: reinforcement-learning
model-index:
- name: DDPG
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: OpenAI/Gym/MuJoCo-Hopper-v3
type: OpenAI/Gym/MuJoCo-Hopper-v3
metrics:
- type: mean_reward
value: 3784.92 +/- 29.08
name: mean_reward
---
# Play **Hopper-v3** with **DDPG** Policy
## Model Description
<!-- Provide a longer summary of what this model is. -->
This is a simple **DDPG** implementation for OpenAI/Gym/MuJoCo **Hopper-v3** using the [DI-engine library](https://github.com/opendilab/di-engine) and the [DI-zoo](https://github.com/opendilab/DI-engine/tree/main/dizoo).
**DI-engine** is a Python library for solving general decision-intelligence problems, built on reinforcement learning implementations in PyTorch or JAX. It aims to standardize the reinforcement learning workflow across algorithms, benchmarks, and environments, and to support both academic research and prototype applications. In addition, self-customized training pipelines and applications can be built by reusing the different abstraction levels of the DI-engine framework.
## Model Usage
### Install the Dependencies
<details close>
<summary>(Click for Details)</summary>
```shell
# install huggingface_ding
git clone https://github.com/opendilab/huggingface_ding.git
pip3 install -e ./huggingface_ding/
# install environment dependencies if needed
sudo apt update -y && sudo apt install -y build-essential libgl1-mesa-dev libgl1-mesa-glx libglew-dev libosmesa6-dev libglfw3 libglfw3-dev libsdl2-dev libsdl2-image-dev libglm-dev libfreetype6-dev patchelf
mkdir -p ~/.mujoco
wget https://mujoco.org/download/mujoco210-linux-x86_64.tar.gz -O mujoco.tar.gz
tar -xf mujoco.tar.gz -C ~/.mujoco
echo "export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:~/.mujoco/mjpro210/bin:~/.mujoco/mujoco210/bin" >> ~/.bashrc
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:~/.mujoco/mjpro210/bin:~/.mujoco/mujoco210/bin
pip3 install "cython<3"
pip3 install DI-engine[common_env]
```
</details>
### Git Clone from Huggingface and Run the Model
<details close>
<summary>(Click for Details)</summary>
```shell
# running with trained model
python3 -u run.py
```
**run.py**
```python
from ding.bonus import DDPGAgent
from ding.config import Config
from easydict import EasyDict
import torch
# Pull model from files which are git cloned from huggingface
policy_state_dict = torch.load("pytorch_model.bin", map_location=torch.device("cpu"))
cfg = EasyDict(Config.file_to_dict("policy_config.py").cfg_dict)
# Instantiate the agent
agent = DDPGAgent(env_id="Hopper-v3", exp_name="Hopper-v3-DDPG", cfg=cfg.exp_config, policy_state_dict=policy_state_dict)
# Continue training
agent.train(step=5000)
# Render the new agent performance
agent.deploy(enable_save_replay=True)
```
</details>
### Run Model by Using Huggingface_ding
<details close>
<summary>(Click for Details)</summary>
```shell
# running with trained model
python3 -u run.py
```
**run.py**
```python
from ding.bonus import DDPGAgent
from huggingface_ding import pull_model_from_hub
# Pull model from the Hugging Face hub
policy_state_dict, cfg = pull_model_from_hub(repo_id="OpenDILabCommunity/Hopper-v3-DDPG")
# Instantiate the agent
agent = DDPGAgent(env_id="Hopper-v3", exp_name="Hopper-v3-DDPG", cfg=cfg.exp_config, policy_state_dict=policy_state_dict)
# Continue training
agent.train(step=5000)
# Render the new agent performance
agent.deploy(enable_save_replay=True)
```
</details>
## Model Training
### Train the Model and Push to Huggingface_hub
<details close>
<summary>(Click for Details)</summary>
```shell
#Training Your Own Agent
python3 -u train.py
```
**train.py**
```python
from ding.bonus import DDPGAgent
from huggingface_ding import push_model_to_hub
# Instantiate the agent
agent = DDPGAgent(env_id="Hopper-v3", exp_name="Hopper-v3-DDPG")
# Train the agent
return_ = agent.train(step=int(10000000), collector_env_num=4, evaluator_env_num=4, debug=False)
# Push model to huggingface hub
push_model_to_hub(
agent=agent.best,
env_name="OpenAI/Gym/MuJoCo",
task_name="Hopper-v3",
algo_name="DDPG",
wandb_url=return_.wandb_url,
github_repo_url="https://github.com/opendilab/DI-engine",
github_doc_model_url="https://di-engine-docs.readthedocs.io/en/latest/12_policies/ddpg.html",
github_doc_env_url="https://di-engine-docs.readthedocs.io/en/latest/13_envs/mujoco.html",
installation_guide='''
sudo apt update -y \
&& sudo apt install -y \
build-essential \
libgl1-mesa-dev \
libgl1-mesa-glx \
libglew-dev \
libosmesa6-dev \
libglfw3 \
libglfw3-dev \
libsdl2-dev \
libsdl2-image-dev \
libglm-dev \
libfreetype6-dev \
patchelf
mkdir -p ~/.mujoco
wget https://mujoco.org/download/mujoco210-linux-x86_64.tar.gz -O mujoco.tar.gz
tar -xf mujoco.tar.gz -C ~/.mujoco
echo "export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:~/.mujoco/mjpro210/bin:~/.mujoco/mujoco210/bin" >> ~/.bashrc
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:~/.mujoco/mjpro210/bin:~/.mujoco/mujoco210/bin
pip3 install "cython<3"
pip3 install DI-engine[common_env]
''',
usage_file_by_git_clone="./ddpg/hopper_ddpg_deploy.py",
usage_file_by_huggingface_ding="./ddpg/hopper_ddpg_download.py",
train_file="./ddpg/hopper_ddpg.py",
repo_id="OpenDILabCommunity/Hopper-v3-DDPG",
create_repo=False
)
```
</details>
**Configuration**
<details close>
<summary>(Click for Details)</summary>
```python
exp_config = {
'env': {
'manager': {
'episode_num': float("inf"),
'max_retry': 1,
'retry_type': 'reset',
'auto_reset': True,
'step_timeout': None,
'reset_timeout': None,
'retry_waiting_time': 0.1,
'cfg_type': 'BaseEnvManagerDict'
},
'stop_value': 6000,
'n_evaluator_episode': 8,
'env_id': 'Hopper-v3',
'norm_obs': {
'use_norm': False
},
'norm_reward': {
'use_norm': False
},
'collector_env_num': 1,
'evaluator_env_num': 8,
'env_wrapper': 'mujoco_default'
},
'policy': {
'model': {
'obs_shape': 11,
'action_shape': 3,
'twin_critic': False,
'actor_head_hidden_size': 256,
'critic_head_hidden_size': 256,
'action_space': 'regression'
},
'learn': {
'learner': {
'train_iterations': 1000000000,
'dataloader': {
'num_workers': 0
},
'log_policy': True,
'hook': {
'load_ckpt_before_run': '',
'log_show_after_iter': 100,
'save_ckpt_after_iter': 10000,
'save_ckpt_after_run': True
},
'cfg_type': 'BaseLearnerDict'
},
'update_per_collect': 1,
'batch_size': 256,
'learning_rate_actor': 0.001,
'learning_rate_critic': 0.001,
'ignore_done': False,
'target_theta': 0.005,
'discount_factor': 0.99,
'actor_update_freq': 1,
'noise': False
},
'collect': {
'collector': {},
'unroll_len': 1,
'noise_sigma': 0.1,
'n_sample': 1
},
'eval': {
'evaluator': {
'eval_freq': 5000,
'render': {
'render_freq': -1,
'mode': 'train_iter'
},
'figure_path': None,
'cfg_type': 'InteractionSerialEvaluatorDict',
'stop_value': 6000,
'n_episode': 8
}
},
'other': {
'replay_buffer': {
'replay_buffer_size': 1000000
}
},
'on_policy': False,
'cuda': True,
'multi_gpu': False,
'bp_update_sync': True,
'traj_len_inf': False,
'type': 'ddpg',
'priority': False,
'priority_IS_weight': False,
'random_collect_size': 25000,
'transition_with_policy_data': False,
'action_space': 'continuous',
'reward_batch_norm': False,
'multi_agent': False,
'cfg_type': 'DDPGPolicyDict'
},
'exp_name': 'Hopper-v3-DDPG',
'seed': 0,
'wandb_logger': {
'gradient_logger': True,
'video_logger': True,
'plot_logger': True,
'action_logger': True,
'return_logger': False
}
}
```
</details>
**Training Procedure**
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
- **Weights & Biases (wandb):** [monitor link](https://wandb.ai/zjowowen/Hopper-v3-DDPG)
## Model Information
<!-- Provide the basic links for the model. -->
- **Github Repository:** [repo link](https://github.com/opendilab/DI-engine)
- **Doc**: [DI-engine-docs Algorithm link](https://di-engine-docs.readthedocs.io/en/latest/12_policies/ddpg.html)
- **Configuration:** [config link](https://huggingface.co/OpenDILabCommunity/Hopper-v3-DDPG/blob/main/policy_config.py)
- **Demo:** [video](https://huggingface.co/OpenDILabCommunity/Hopper-v3-DDPG/blob/main/replay.mp4)
<!-- Provide the size information for the model. -->
- **Parameters total size:** 1090.03 KB
- **Last Update Date:** 2023-09-21
## Environments
<!-- Address questions around what environment the model is intended to be trained and deployed at, including the necessary information needed to be provided for future users. -->
- **Benchmark:** OpenAI/Gym/MuJoCo
- **Task:** Hopper-v3
- **Gym version:** 0.25.1
- **DI-engine version:** v0.4.9
- **PyTorch version:** 2.0.1+cu117
- **Doc**: [DI-engine-docs Environments link](https://di-engine-docs.readthedocs.io/en/latest/13_envs/mujoco.html)
|
xizhn/output_model_dir
|
xizhn
| 2023-09-21T05:38:41Z | 0 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"base_model:CompVis/stable-diffusion-v1-4",
"base_model:finetune:CompVis/stable-diffusion-v1-4",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-09-20T04:59:23Z |
---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-4
instance_prompt: a photo of sks dress
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - xizhn/output_model_dir
This is a dreambooth model derived from CompVis/stable-diffusion-v1-4. The weights were trained on a photo of sks dress using [DreamBooth](https://dreambooth.github.io/).
You can find some example images in the following.
DreamBooth for the text encoder was enabled: False.
|
li-ping/songgod
|
li-ping
| 2023-09-21T05:36:00Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-09-21T05:29:53Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.5.0.dev0
|
CyberHarem/wakiyama_tamami_idolmastercinderellagirls
|
CyberHarem
| 2023-09-21T05:29:25Z | 0 | 0 | null |
[
"art",
"text-to-image",
"dataset:CyberHarem/wakiyama_tamami_idolmastercinderellagirls",
"license:mit",
"region:us"
] |
text-to-image
| 2023-09-21T05:17:23Z |
---
license: mit
datasets:
- CyberHarem/wakiyama_tamami_idolmastercinderellagirls
pipeline_tag: text-to-image
tags:
- art
---
# Lora of wakiyama_tamami_idolmastercinderellagirls
This model is trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion), and the auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs).
The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11).
After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora.
For example, if you want to use the model from step 5100, you need to download `5100/wakiyama_tamami_idolmastercinderellagirls.pt` as the embedding and `5100/wakiyama_tamami_idolmastercinderellagirls.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters.
**The best step we recommend is 5100**, with a score of 0.955. The trigger words are:
1. `wakiyama_tamami_idolmastercinderellagirls`
2. `short_hair, ahoge, brown_hair, brown_eyes, blush, smile, open_mouth`
For the following groups, it is not recommended to use this model and we express regret:
1. Individuals who cannot tolerate any deviations from the original character design, even in the slightest detail.
2. Individuals whose application scenarios demand high accuracy in recreating character outfits.
3. Individuals who cannot accept the potential randomness in AI-generated images based on the Stable Diffusion algorithm.
4. Individuals who are not comfortable with the fully automated process of training character models using LoRA, or those who believe that training character models must be done purely through manual operations to avoid disrespecting the characters.
5. Individuals who find the generated image content offensive to their values.
These are available steps:
| Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata |
|:---------|:----------|:-------------------------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:----------------------------------------------------|:-----------------------------------------|:--------------------------------------------------|:-------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------|
| **5100** | **0.955** | [**Download**](5100/wakiyama_tamami_idolmastercinderellagirls.zip) |  |  |  | [<NSFW, click to see>](5100/previews/pattern_4.png) |  | [<NSFW, click to see>](5100/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5100/previews/nude.png) | [<NSFW, click to see>](5100/previews/nude2.png) |  |  |
| 4760 | 0.890 | [Download](4760/wakiyama_tamami_idolmastercinderellagirls.zip) |  |  |  | [<NSFW, click to see>](4760/previews/pattern_4.png) |  | [<NSFW, click to see>](4760/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4760/previews/nude.png) | [<NSFW, click to see>](4760/previews/nude2.png) |  |  |
| 4420 | 0.921 | [Download](4420/wakiyama_tamami_idolmastercinderellagirls.zip) |  |  |  | [<NSFW, click to see>](4420/previews/pattern_4.png) |  | [<NSFW, click to see>](4420/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4420/previews/nude.png) | [<NSFW, click to see>](4420/previews/nude2.png) |  |  |
| 4080 | 0.925 | [Download](4080/wakiyama_tamami_idolmastercinderellagirls.zip) |  |  |  | [<NSFW, click to see>](4080/previews/pattern_4.png) |  | [<NSFW, click to see>](4080/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4080/previews/nude.png) | [<NSFW, click to see>](4080/previews/nude2.png) |  |  |
| 3740 | 0.928 | [Download](3740/wakiyama_tamami_idolmastercinderellagirls.zip) |  |  |  | [<NSFW, click to see>](3740/previews/pattern_4.png) |  | [<NSFW, click to see>](3740/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3740/previews/nude.png) | [<NSFW, click to see>](3740/previews/nude2.png) |  |  |
| 3400 | 0.917 | [Download](3400/wakiyama_tamami_idolmastercinderellagirls.zip) |  |  |  | [<NSFW, click to see>](3400/previews/pattern_4.png) |  | [<NSFW, click to see>](3400/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3400/previews/nude.png) | [<NSFW, click to see>](3400/previews/nude2.png) |  |  |
| 3060 | 0.924 | [Download](3060/wakiyama_tamami_idolmastercinderellagirls.zip) |  |  |  | [<NSFW, click to see>](3060/previews/pattern_4.png) |  | [<NSFW, click to see>](3060/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3060/previews/nude.png) | [<NSFW, click to see>](3060/previews/nude2.png) |  |  |
| 2720 | 0.934 | [Download](2720/wakiyama_tamami_idolmastercinderellagirls.zip) |  |  |  | [<NSFW, click to see>](2720/previews/pattern_4.png) |  | [<NSFW, click to see>](2720/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2720/previews/nude.png) | [<NSFW, click to see>](2720/previews/nude2.png) |  |  |
| 2380 | 0.917 | [Download](2380/wakiyama_tamami_idolmastercinderellagirls.zip) |  |  |  | [<NSFW, click to see>](2380/previews/pattern_4.png) |  | [<NSFW, click to see>](2380/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2380/previews/nude.png) | [<NSFW, click to see>](2380/previews/nude2.png) |  |  |
| 2040 | 0.931 | [Download](2040/wakiyama_tamami_idolmastercinderellagirls.zip) |  |  |  | [<NSFW, click to see>](2040/previews/pattern_4.png) |  | [<NSFW, click to see>](2040/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2040/previews/nude.png) | [<NSFW, click to see>](2040/previews/nude2.png) |  |  |
| 1700 | 0.900 | [Download](1700/wakiyama_tamami_idolmastercinderellagirls.zip) |  |  |  | [<NSFW, click to see>](1700/previews/pattern_4.png) |  | [<NSFW, click to see>](1700/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1700/previews/nude.png) | [<NSFW, click to see>](1700/previews/nude2.png) |  |  |
| 1360 | 0.879 | [Download](1360/wakiyama_tamami_idolmastercinderellagirls.zip) |  |  |  | [<NSFW, click to see>](1360/previews/pattern_4.png) |  | [<NSFW, click to see>](1360/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1360/previews/nude.png) | [<NSFW, click to see>](1360/previews/nude2.png) |  |  |
| 1020 | 0.867 | [Download](1020/wakiyama_tamami_idolmastercinderellagirls.zip) |  |  |  | [<NSFW, click to see>](1020/previews/pattern_4.png) |  | [<NSFW, click to see>](1020/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1020/previews/nude.png) | [<NSFW, click to see>](1020/previews/nude2.png) |  |  |
| 680 | 0.823 | [Download](680/wakiyama_tamami_idolmastercinderellagirls.zip) |  |  |  | [<NSFW, click to see>](680/previews/pattern_4.png) |  | [<NSFW, click to see>](680/previews/bondage.png) |  |  |  | [<NSFW, click to see>](680/previews/nude.png) | [<NSFW, click to see>](680/previews/nude2.png) |  |  |
| 340 | 0.513 | [Download](340/wakiyama_tamami_idolmastercinderellagirls.zip) |  |  |  | [<NSFW, click to see>](340/previews/pattern_4.png) |  | [<NSFW, click to see>](340/previews/bondage.png) |  |  |  | [<NSFW, click to see>](340/previews/nude.png) | [<NSFW, click to see>](340/previews/nude2.png) |  |  |
|
edzou/bert-finetuned-ner
|
edzou
| 2023-09-21T05:22:26Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-09-20T06:58:54Z |
---
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
datasets:
- conll2003
model-index:
- name: bert-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.33.2
- Pytorch 1.12.1
- Datasets 2.14.5
- Tokenizers 0.13.3
|
trieudemo11/llama_7b_attrb_cate_4m_6
|
trieudemo11
| 2023-09-21T05:05:55Z | 8 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-09-21T05:05:37Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.6.0.dev0
|
0xk1h0/codegen1-6B-peft-qlora
|
0xk1h0
| 2023-09-21T05:02:35Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-09-18T01:28:55Z |
---
library_name: peft
base_model: Salesforce/codegen-6b-mono
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
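For reference, a minimal sketch of how the quantization config above maps onto `transformers` and `peft` at load time; the base-model id is taken from this card, and the adapter id is assumed to be this repo:
```python
# Sketch only: rebuilds the 4-bit NF4 setup listed above and attaches the adapter.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)
base = AutoModelForCausalLM.from_pretrained(
    "Salesforce/codegen-6b-mono",      # base model listed in this card
    quantization_config=bnb_config,
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "0xk1h0/codegen1-6B-peft-qlora")  # assumed adapter repo id
```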
### Framework versions
- PEFT 0.5.0
|
0xk1h0/codegen2.5-7b-py150k-r20-LoRA
|
0xk1h0
| 2023-09-21T04:58:49Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-09-21T04:48:19Z |
---
library_name: peft
---
## Model Usage
```python
import torch
import transformers
from finetune_peft import get_peft_config, PEFTArguments
from peft import get_peft_model

# Base model on the Hub and the PEFT (LoRA) checkpoint of this repo.
model_path = 'Salesforce/codegen25-7b-mono'
# peft_path = 'models/codegen25_7b/checkpoint'
peft_path = '0xk1h0/codegen25-7b-py150k-r20'
# peft_path = 'models/alpaca-llama-7b-peft/params.p'

# Load the base model on the GPU in half precision.
torch.set_default_tensor_type(torch.cuda.HalfTensor)
model = transformers.AutoModelForCausalLM.from_pretrained(model_path, trust_remote_code=True, cache_dir='models')

# Wrap the base model with a LoRA configuration.
peft_config = get_peft_config(peft_args=PEFTArguments(peft_mode="lora"))
model = get_peft_model(model, peft_config)
# model.load_state_dict(torch.load(peft_path), strict=False)
torch.set_default_tensor_type(torch.cuda.FloatTensor)

# Tokenize the prompt and sample a completion.
tokenizer = transformers.AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
batch = tokenizer("""
### Generate AES MODE encrypt function.
""", return_tensors="pt")

with torch.no_grad():
    out = model.generate(
        input_ids=batch["input_ids"],
        attention_mask=torch.ones_like(batch["input_ids"]),
        max_length=256,
        do_sample=True,
        temperature=0.4,
        top_p=0.95
    )
print(tokenizer.decode(out[0]))
```
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.5.0
|
bongo2112/sdxl-db-ommydimpos-headshot
|
bongo2112
| 2023-09-21T04:52:52Z | 3 | 1 |
diffusers
|
[
"diffusers",
"tensorboard",
"text-to-image",
"autotrain",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0",
"region:us"
] |
text-to-image
| 2023-09-20T22:50:16Z |
---
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: photo of ommydimpotz man
tags:
- text-to-image
- diffusers
- autotrain
inference: true
---
# DreamBooth trained by AutoTrain
Text encoder was not trained.
|
awrysfab/emotion_classification
|
awrysfab
| 2023-09-21T04:48:06Z | 193 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-09-21T04:34:56Z |
---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: emotion_classification
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.6
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# emotion_classification
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2383
- Accuracy: 0.6
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.0769 | 1.0 | 10 | 2.0617 | 0.1812 |
| 2.0383 | 2.0 | 20 | 2.0104 | 0.3 |
| 1.9423 | 3.0 | 30 | 1.8932 | 0.425 |
| 1.7923 | 4.0 | 40 | 1.7442 | 0.475 |
| 1.6547 | 5.0 | 50 | 1.6047 | 0.4875 |
| 1.5297 | 6.0 | 60 | 1.5184 | 0.5437 |
| 1.4345 | 7.0 | 70 | 1.4392 | 0.5625 |
| 1.337 | 8.0 | 80 | 1.3847 | 0.5875 |
| 1.2722 | 9.0 | 90 | 1.3442 | 0.55 |
| 1.217 | 10.0 | 100 | 1.3058 | 0.5625 |
| 1.1497 | 11.0 | 110 | 1.2914 | 0.55 |
| 1.0977 | 12.0 | 120 | 1.2377 | 0.6125 |
| 1.0507 | 13.0 | 130 | 1.2253 | 0.5687 |
| 1.0268 | 14.0 | 140 | 1.2269 | 0.5938 |
| 0.967 | 15.0 | 150 | 1.2260 | 0.5938 |
| 0.9269 | 16.0 | 160 | 1.2421 | 0.5687 |
| 0.9102 | 17.0 | 170 | 1.2218 | 0.5687 |
| 0.8883 | 18.0 | 180 | 1.2207 | 0.5687 |
| 0.8633 | 19.0 | 190 | 1.1933 | 0.6062 |
| 0.8557 | 20.0 | 200 | 1.1830 | 0.575 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
isanchez/text-comp
|
isanchez
| 2023-09-21T04:46:05Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"base_model:distilbert/distilroberta-base",
"base_model:finetune:distilbert/distilroberta-base",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-09-20T21:05:25Z |
---
license: apache-2.0
base_model: distilroberta-base
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: text-comp
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: mrpc
split: validation
args: mrpc
metrics:
- name: Accuracy
type: accuracy
value: 0.8357843137254902
- name: F1
type: f1
value: 0.8770642201834863
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# text-comp
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5361
- Accuracy: 0.8358
- F1: 0.8771
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.5903 | 1.09 | 500 | 0.4340 | 0.8137 | 0.8643 |
| 0.3827 | 2.18 | 1000 | 0.5361 | 0.8358 | 0.8771 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
ashishpatel26/phi-1_5-finetuned-gsm8k
|
ashishpatel26
| 2023-09-21T04:41:04Z | 0 | 0 | null |
[
"generated_from_trainer",
"base_model:microsoft/phi-1_5",
"base_model:finetune:microsoft/phi-1_5",
"license:other",
"region:us"
] | null | 2023-09-21T04:21:11Z |
---
license: other
base_model: microsoft/phi-1_5
tags:
- generated_from_trainer
model-index:
- name: phi-1_5-finetuned-gsm8k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# phi-1_5-finetuned-gsm8k
This model is a fine-tuned version of [microsoft/phi-1_5](https://huggingface.co/microsoft/phi-1_5) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- training_steps: 1000
### Training results
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
kcyu/LoRA_model_Vit-cifar_10
|
kcyu
| 2023-09-21T04:33:56Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-09-21T04:31:44Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0
|
rmuema/orca_mini_3B_test_guanaco
|
rmuema
| 2023-09-21T04:28:15Z | 2 | 1 |
peft
|
[
"peft",
"region:us"
] | null | 2023-09-15T01:45:50Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: bfloat16
- base_model: psmathur/orca_mini_3b
### Framework versions
- PEFT 0.6.0.dev0
|
Pradeep016/GAN
|
Pradeep016
| 2023-09-21T04:27:14Z | 0 | 0 |
keras
|
[
"keras",
"license:mit",
"region:us"
] | null | 2023-09-21T04:19:15Z |
---
license: mit
library_name: keras
---
|
hihisu1231/mbti_230921_2
|
hihisu1231
| 2023-09-21T04:13:04Z | 140 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt_neox",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-09-21T04:08:48Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: polyglot-1.3b-koalpaca-v1.1a-rtx3090__230921_2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# polyglot-1.3b-koalpaca-v1.1a-rtx3090__230921_2
This model is a fine-tuned version of [EleutherAI/polyglot-ko-1.3b](https://huggingface.co/EleutherAI/polyglot-ko-1.3b) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu117
- Datasets 2.14.4
- Tokenizers 0.13.3
|
CyberHarem/matsuyama_kumiko_idolmastercinderellagirls
|
CyberHarem
| 2023-09-21T03:47:28Z | 0 | 1 | null |
[
"art",
"text-to-image",
"dataset:CyberHarem/matsuyama_kumiko_idolmastercinderellagirls",
"license:mit",
"region:us"
] |
text-to-image
| 2023-09-21T03:36:56Z |
---
license: mit
datasets:
- CyberHarem/matsuyama_kumiko_idolmastercinderellagirls
pipeline_tag: text-to-image
tags:
- art
---
# Lora of matsuyama_kumiko_idolmastercinderellagirls
This model is trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion), and the auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs).
The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11).
After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora.
For example, if you want to use the model from step 2380, you need to download `2380/matsuyama_kumiko_idolmastercinderellagirls.pt` as the embedding and `2380/matsuyama_kumiko_idolmastercinderellagirls.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters.
**The best step we recommend is 2380**, with a score of 0.967. The trigger words are:
1. `matsuyama_kumiko_idolmastercinderellagirls`
2. `long_hair, brown_hair, smile, green_eyes, breasts, blush, cleavage, medium_breasts, hair_ornament`
For the following groups, this model is not recommended, and we express our regret:
1. Individuals who cannot tolerate any deviation from the original character design, even in the slightest detail.
2. Individuals whose application scenarios demand high accuracy in recreating character outfits.
3. Individuals who cannot accept the potential randomness of AI-generated images based on the Stable Diffusion algorithm.
4. Individuals who are not comfortable with the fully automated process of training character models with LoRA, or who believe that character models must be trained purely through manual operations to avoid disrespecting the characters.
5. Individuals who find the generated image content offensive to their values.
These are available steps:
| Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata |
|:---------|:----------|:--------------------------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------|:--------------------------------------------------|:-------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------|
| 5100 | 0.897 | [Download](5100/matsuyama_kumiko_idolmastercinderellagirls.zip) |  |  |  |  | [<NSFW, click to see>](5100/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5100/previews/nude.png) | [<NSFW, click to see>](5100/previews/nude2.png) |  |  |
| 4760 | 0.960 | [Download](4760/matsuyama_kumiko_idolmastercinderellagirls.zip) |  |  |  |  | [<NSFW, click to see>](4760/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4760/previews/nude.png) | [<NSFW, click to see>](4760/previews/nude2.png) |  |  |
| 4420 | 0.948 | [Download](4420/matsuyama_kumiko_idolmastercinderellagirls.zip) |  |  |  |  | [<NSFW, click to see>](4420/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4420/previews/nude.png) | [<NSFW, click to see>](4420/previews/nude2.png) |  |  |
| 4080 | 0.953 | [Download](4080/matsuyama_kumiko_idolmastercinderellagirls.zip) |  |  |  |  | [<NSFW, click to see>](4080/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4080/previews/nude.png) | [<NSFW, click to see>](4080/previews/nude2.png) |  |  |
| 3740 | 0.884 | [Download](3740/matsuyama_kumiko_idolmastercinderellagirls.zip) |  |  |  |  | [<NSFW, click to see>](3740/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3740/previews/nude.png) | [<NSFW, click to see>](3740/previews/nude2.png) |  |  |
| 3400 | 0.961 | [Download](3400/matsuyama_kumiko_idolmastercinderellagirls.zip) |  |  |  |  | [<NSFW, click to see>](3400/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3400/previews/nude.png) | [<NSFW, click to see>](3400/previews/nude2.png) |  |  |
| 3060 | 0.898 | [Download](3060/matsuyama_kumiko_idolmastercinderellagirls.zip) |  |  |  |  | [<NSFW, click to see>](3060/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3060/previews/nude.png) | [<NSFW, click to see>](3060/previews/nude2.png) |  |  |
| 2720 | 0.896 | [Download](2720/matsuyama_kumiko_idolmastercinderellagirls.zip) |  |  |  |  | [<NSFW, click to see>](2720/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2720/previews/nude.png) | [<NSFW, click to see>](2720/previews/nude2.png) |  |  |
| **2380** | **0.967** | [**Download**](2380/matsuyama_kumiko_idolmastercinderellagirls.zip) |  |  |  |  | [<NSFW, click to see>](2380/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2380/previews/nude.png) | [<NSFW, click to see>](2380/previews/nude2.png) |  |  |
| 2040 | 0.945 | [Download](2040/matsuyama_kumiko_idolmastercinderellagirls.zip) |  |  |  |  | [<NSFW, click to see>](2040/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2040/previews/nude.png) | [<NSFW, click to see>](2040/previews/nude2.png) |  |  |
| 1700 | 0.963 | [Download](1700/matsuyama_kumiko_idolmastercinderellagirls.zip) |  |  |  |  | [<NSFW, click to see>](1700/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1700/previews/nude.png) | [<NSFW, click to see>](1700/previews/nude2.png) |  |  |
| 1360 | 0.930 | [Download](1360/matsuyama_kumiko_idolmastercinderellagirls.zip) |  |  |  |  | [<NSFW, click to see>](1360/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1360/previews/nude.png) | [<NSFW, click to see>](1360/previews/nude2.png) |  |  |
| 1020 | 0.930 | [Download](1020/matsuyama_kumiko_idolmastercinderellagirls.zip) |  |  |  |  | [<NSFW, click to see>](1020/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1020/previews/nude.png) | [<NSFW, click to see>](1020/previews/nude2.png) |  |  |
| 680 | 0.920 | [Download](680/matsuyama_kumiko_idolmastercinderellagirls.zip) |  |  |  |  | [<NSFW, click to see>](680/previews/bondage.png) |  |  |  | [<NSFW, click to see>](680/previews/nude.png) | [<NSFW, click to see>](680/previews/nude2.png) |  |  |
| 340 | 0.924 | [Download](340/matsuyama_kumiko_idolmastercinderellagirls.zip) |  |  |  |  | [<NSFW, click to see>](340/previews/bondage.png) |  |  |  | [<NSFW, click to see>](340/previews/nude.png) | [<NSFW, click to see>](340/previews/nude2.png) |  |  |
|
ShaunThayil/distilbert-training-4
|
ShaunThayil
| 2023-09-21T03:43:12Z | 196 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-09-21T03:42:39Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: distilbert-training-4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-training-4
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0316
- Accuracy: 0.9944
- Precision: 0.9955
- Recall: 0.9822
- F1: 0.9888
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| No log | 0.5 | 262 | 0.0957 | 0.9817 | 0.9562 | 0.9711 | 0.9636 |
| No log | 1.0 | 524 | 0.0390 | 0.9939 | 0.9977 | 0.9778 | 0.9877 |
| 0.1008 | 1.5 | 786 | 0.0361 | 0.9944 | 0.9955 | 0.9822 | 0.9888 |
| 0.1008 | 2.0 | 1048 | 0.0385 | 0.9922 | 0.9866 | 0.9822 | 0.9844 |
| 0.0331 | 2.5 | 1310 | 0.0270 | 0.9956 | 0.9977 | 0.9844 | 0.9911 |
| 0.0331 | 2.99 | 1572 | 0.0358 | 0.9939 | 0.9955 | 0.98 | 0.9877 |
| 0.0151 | 3.49 | 1834 | 0.0292 | 0.9956 | 0.9955 | 0.9867 | 0.9911 |
| 0.0151 | 3.99 | 2096 | 0.0316 | 0.9944 | 0.9955 | 0.9822 | 0.9888 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.2.0.dev20230913+cu121
- Tokenizers 0.13.3
|
nomsgadded/opt_RestaurantReview
|
nomsgadded
| 2023-09-21T03:38:05Z | 0 | 0 |
adapter-transformers
|
[
"adapter-transformers",
"safetensors",
"opt",
"code",
"text-classification",
"en",
"region:us"
] |
text-classification
| 2023-09-20T00:06:47Z |
---
pipeline_tag: text-classification
language:
- en
metrics:
- accuracy
library_name: adapter-transformers
tags:
- code
---
This is a fine-tuned version of the facebook/opt-350m model.
The dataset is the Restaurant Review dataset from Kaggle.
How to use: the input text must be in the form of
##Rating :{text}
e.g. ##Rating :It was really nice to dine there, however the waiter is very mean.
The model then returns the rating the customer likely gave to the restaurant.
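A minimal sketch of one way to query the model, assuming (without verification) that the checkpoint loads with the standard `transformers` text-classification pipeline:
```python
from transformers import pipeline

# Hypothetical usage sketch; the prompt follows the "##Rating :" convention above.
classifier = pipeline("text-classification", model="nomsgadded/opt_RestaurantReview")
text = "##Rating :It was really nice to dine there, however the waiter is very mean."
print(classifier(text))  # predicted rating label with its score
```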
|
zfox/finetuning-sentiment-model-3000-samples
|
zfox
| 2023-09-21T03:34:37Z | 107 | 1 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"doi:10.57967/hf/1135",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-09-21T03:28:42Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: test
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.8666666666666667
- name: F1
type: f1
value: 0.8692810457516339
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3195
- Accuracy: 0.8667
- F1: 0.8693
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
Rosi-si/my_awesome_gec
|
Rosi-si
| 2023-09-21T03:23:43Z | 33 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:Unbabel/gec-t5_small",
"base_model:finetune:Unbabel/gec-t5_small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-09-21T01:37:44Z |
---
license: apache-2.0
base_model: Unbabel/gec-t5_small
tags:
- generated_from_trainer
model-index:
- name: my_awesome_gec
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_gec
This model is a fine-tuned version of [Unbabel/gec-t5_small](https://huggingface.co/Unbabel/gec-t5_small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2674
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.3667 | 1.0 | 4187 | 0.3417 |
| 0.3209 | 2.0 | 8374 | 0.2941 |
| 0.299 | 3.0 | 12561 | 0.2738 |
| 0.2904 | 4.0 | 16748 | 0.2674 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
RylonW/ppo-LunarLander-v4
|
RylonW
| 2023-09-21T03:08:54Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-09-21T03:08:37Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 286.19 +/- 10.93
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
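Until the official snippet is added, here is a minimal sketch of how such a checkpoint is typically loaded with `huggingface_sb3`; the stored filename is an assumption and may differ in this repo:
```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub (filename is an assumption).
checkpoint = load_from_hub(repo_id="RylonW/ppo-LunarLander-v4", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

# Roll out one episode in LunarLander-v2.
env = gym.make("LunarLander-v2")
obs, info = env.reset()
done = False
while not done:
    action, _states = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, info = env.step(action)
    done = terminated or truncated
env.close()
```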
|
yudiwbs/marian-finetuned-kde4-en-to-id
|
yudiwbs
| 2023-09-21T03:06:04Z | 67 | 0 |
transformers
|
[
"transformers",
"tf",
"marian",
"text2text-generation",
"generated_from_keras_callback",
"base_model:Helsinki-NLP/opus-mt-en-id",
"base_model:finetune:Helsinki-NLP/opus-mt-en-id",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-08-30T04:58:11Z |
---
license: apache-2.0
base_model: Helsinki-NLP/opus-mt-en-id
tags:
- generated_from_keras_callback
model-index:
- name: yudiwbs/marian-finetuned-kde4-en-to-id
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# yudiwbs/marian-finetuned-kde4-en-to-id
Explanation (in Indonesian): https://yudiwbs.wordpress.com/2023/09/01/fine-tune-model-machine-translation-inggris-indonesia-en-id/
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-id](https://huggingface.co/Helsinki-NLP/opus-mt-en-id) on the KDE4 dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.5779
- Validation Loss: 0.6892
- Epoch: 2
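A minimal usage sketch for English-to-Indonesian translation with the `transformers` pipeline (illustrative only; this repo stores TensorFlow weights, so TensorFlow must be available):
```python
from transformers import pipeline

# Illustrative sketch: load this fine-tuned checkpoint as a translation pipeline.
translator = pipeline("translation", model="yudiwbs/marian-finetuned-kde4-en-to-id")
print(translator("Open the configuration file and save your changes."))
```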
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': 1245, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 1.0329 | 0.7683 | 0 |
| 0.7086 | 0.7042 | 1 |
| 0.5779 | 0.6892 | 2 |
### Framework versions
- Transformers 4.32.1
- TensorFlow 2.12.0
- Datasets 2.14.4
- Tokenizers 0.13.3
|
OpenMotionLab/MotionGPT-base
|
OpenMotionLab
| 2023-09-21T03:01:56Z | 0 | 7 | null |
[
"arxiv:2306.14795",
"license:cc",
"region:us"
] | null | 2023-09-08T12:39:03Z |
---
license: cc
---
<div align= "center">
<h1> MotionGPT </h1>
</div>
<div align="center">
<h2> <a href="https://motion-gpt.github.io/">MotionGPT: Human Motion as a Foreign Language</a></h2>
<p align="center">
<a href="https://motion-gpt.github.io/">Project Page</a> •
<a href="https://arxiv.org/abs/2306.14795">Arxiv Paper</a> •
<a href="https://huggingface.co/spaces/OpenMotionLab/MotionGPT">HuggingFace Demo</a> •
<a href="#️-faq">FAQ</a> •
<a href="#-citation">Citation</a>
</p>
</div>
<div align="center">
<!-- <img src="https://cdn.discordapp.com/attachments/941582479117127680/1111543600879259749/20230526075532.png" width="350px"> -->
</div>
<!-- ### [MotionGPT: Human Motion as a Foreign Language](https://motion-gpt.github.io/) -->
<!-- ### [Project Page](https://motion-gpt.github.io/) | [Arxiv Paper](https://arxiv.org/abs/2306.14795) | [HuggingFace Demo](xxx) -->
## 🏃 Intro MotionGPT
MotionGPT is a **unified** and **user-friendly** motion-language model to learn the semantic coupling of two modalities and generate high-quality motions and text descriptions on **multiple motion tasks**.
Though the advancement of pre-trained large language models unfolds, the exploration of building a unified model for language and other multi-modal data, such as motion, remains challenging and untouched so far. Fortunately, human motion displays a semantic coupling akin to human language, often perceived as a form of body language. By fusing language data with large-scale motion models, motion-language pre-training that can enhance the performance of motion-related tasks becomes feasible. Driven by this insight, we propose MotionGPT, a unified, versatile, and user-friendly motion-language model to handle multiple motion-relevant tasks. Specifically, we employ the discrete vector quantization for human motion and transfer 3D motion into motion tokens, similar to the generation process of word tokens. Building upon this “motion vocabulary”, we perform language modeling on both motion and text in a unified manner, treating human motion as a specific language. Moreover, inspired by prompt learning, we pre-train MotionGPT with a mixture of motion-language data and fine-tune it on prompt-based question-and-answer tasks. Extensive experiments demonstrate that MotionGPT achieves state-of-the-art performances on multiple motion tasks including text-driven motion generation, motion captioning, motion prediction, and motion in-between.
MotionGPT: Human Motion as a Foreign Language - [[ArXiv](https://arxiv.org/abs/2306.14795)]
|
LeWince/training_df_fullctxt_and_sent_split_filtered_0_15_PubMedBert
|
LeWince
| 2023-09-21T02:57:43Z | 117 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-09-20T23:23:31Z |
---
tags:
- generated_from_trainer
metrics:
- rouge
- precision
- recall
- f1
model-index:
- name: training_df_fullctxt_and_sent_split_filtered_0_15_PubMedBert
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# training_df_fullctxt_and_sent_split_filtered_0_15_PubMedBert
This model is a fine-tuned version of [dmis-lab/TinyPubMedBERT-v1.0](https://huggingface.co/dmis-lab/TinyPubMedBERT-v1.0) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3031
- Rouge1: 0.8717
- Rouge2: 0.6989
- Rougel: 0.6336
- Rougelsum: 0.6336
- Exact Match: 0.0
- Precision: [0.8712936639785767, 0.9647811651229858]
- Recall: [0.8689576387405396, 0.9682695865631104]
- F1: [0.8701240420341492, 0.9665222764015198]
- Hashcode: roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.28.0)
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Exact Match | Precision | Recall | F1 | Hashcode |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:------:|:---------:|:-----------:|:----------------------------------------:|:----------------------------------------:|:----------------------------------------:|:---------------------------------------------------------:|
| 0.4001 | 1.0 | 5881 | 0.3415 | 0.6842 | 0.6047 | 0.6120 | 0.6120 | 0.0 | [0.8383916616439819, 0.960318922996521] | [0.7912731170654297, 0.963049054145813] | [0.8141512274742126, 0.9616820812225342] | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.28.0) |
| 0.3165 | 2.0 | 11762 | 0.3255 | 0.7947 | 0.6870 | 0.6369 | 0.6369 | 0.0 | [0.8562091588973999, 0.9591262340545654] | [0.841107964515686, 0.9619568586349487] | [0.8485913872718811, 0.9605394601821899] | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.28.0) |
| 0.2971 | 3.0 | 17643 | 0.3178 | 0.8168 | 0.6965 | 0.6365 | 0.6365 | 0.0 | [0.8633116483688354, 0.978273868560791] | [0.8504236936569214, 0.9788444638252258] | [0.856819212436676, 0.9785590767860413] | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.28.0) |
| 0.2853 | 4.0 | 23524 | 0.2934 | 0.8134 | 0.7020 | 0.6328 | 0.6328 | 0.0 | [0.8643838167190552, 0.9647811651229858] | [0.8536887764930725, 0.9682695865631104] | [0.859002947807312, 0.9665222764015198] | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.28.0) |
| 0.2744 | 5.0 | 29405 | 0.2968 | 0.8664 | 0.7077 | 0.6357 | 0.6357 | 0.0 | [0.8695193529129028, 0.9638710021972656] | [0.8581283688545227, 0.9666727185249329] | [0.8637862205505371, 0.9652698636054993] | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.28.0) |
| 0.2669 | 6.0 | 35286 | 0.3027 | 0.8472 | 0.6949 | 0.6378 | 0.6378 | 0.0 | [0.8685003519058228, 0.9665455222129822] | [0.8652210235595703, 0.9689881801605225] | [0.8668575882911682, 0.9677652716636658] | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.28.0) |
| 0.2595 | 7.0 | 41167 | 0.2996 | 0.8840 | 0.7193 | 0.6447 | 0.6447 | 0.0 | [0.8698508143424988, 0.9638710021972656] | [0.8639194965362549, 0.9666727185249329] | [0.8668749332427979, 0.9652698636054993] | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.28.0) |
| 0.253 | 8.0 | 47048 | 0.2972 | 0.8518 | 0.6891 | 0.6363 | 0.6363 | 0.0 | [0.8666473031044006, 0.9638710021972656] | [0.863062858581543, 0.9666727185249329] | [0.8648514151573181, 0.9652698636054993] | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.28.0) |
| 0.2481 | 9.0 | 52929 | 0.2985 | 0.8533 | 0.6843 | 0.6309 | 0.6309 | 0.0 | [0.8691736459732056, 0.9647811651229858] | [0.8661415576934814, 0.9682695865631104] | [0.8676549196243286, 0.9665222764015198] | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.28.0) |
| 0.243 | 10.0 | 58810 | 0.3031 | 0.8717 | 0.6989 | 0.6336 | 0.6336 | 0.0 | [0.8712936639785767, 0.9647811651229858] | [0.8689576387405396, 0.9682695865631104] | [0.8701240420341492, 0.9665222764015198] | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.28.0) |
### Framework versions
- Transformers 4.28.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
kcyu/Cifar100_LoRA_model_Vit-cifar_100
|
kcyu
| 2023-09-21T02:39:30Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-09-21T02:39:26Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0
|
Spacetimetravel/autotrain-financial-conversation_financial-summary-90517144315
|
Spacetimetravel
| 2023-09-21T02:27:57Z | 112 | 1 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"t5",
"text2text-generation",
"autotrain",
"summarization",
"en",
"dataset:Spacetimetravel/autotrain-data-financial-conversation_financial-summary",
"co2_eq_emissions",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
summarization
| 2023-09-21T02:27:22Z |
---
tags:
- autotrain
- summarization
language:
- en
widget:
- text: "I love AutoTrain"
datasets:
- Spacetimetravel/autotrain-data-financial-conversation_financial-summary
co2_eq_emissions:
emissions: 0.0034691778675638176
---
# Model Trained Using AutoTrain
- Problem type: Summarization
- Model ID: 90517144315
- CO2 Emissions (in grams): 0.0035
## Validation Metrics
- Loss: 2.350
- Rouge1: 13.269
- Rouge2: 6.044
- RougeL: 11.731
- RougeLsum: 13.269
- Gen Len: 19.000
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/Spacetimetravel/autotrain-financial-conversation_financial-summary-90517144315
```
|
LykosAI/Upscalers
|
LykosAI
| 2023-09-21T02:18:49Z | 0 | 0 | null |
[
"license:agpl-3.0",
"region:us"
] | null | 2023-09-20T19:00:08Z |
---
license: agpl-3.0
---
## Collection of community image upscalers
License varies by model
- Individual files will have an accompanying "ModelName - LICENSE.txt"
- Collections of files from the same source may instead have a "LICENSE.txt" in the directory
|
caochengchen/rare-puppers
|
caochengchen
| 2023-09-21T01:52:38Z | 194 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-09-21T01:52:30Z |
---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: rare-puppers
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.7313432693481445
---
# rare-puppers
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### corgi

#### samoyed

#### shiba inu

|
newronai/clma2-13b-Chat-Adapter-text2sql-numstation-3epoch
|
newronai
| 2023-09-21T01:17:46Z | 2 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-09-21T01:17:38Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.6.0.dev0
|
isashap/contexttrained-validationloss-waldomodel
|
isashap
| 2023-09-21T00:35:27Z | 33 | 0 |
peft
|
[
"peft",
"text-generation",
"region:us"
] |
text-generation
| 2023-09-21T00:05:53Z |
---
library_name: peft
pipeline_tag: text-generation
widget:
- text: "Job: Skills: Resume Point:"
---
|
leonidaster/PhotoGasmv1.0
|
leonidaster
| 2023-09-21T00:32:14Z | 0 | 1 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-09-21T00:23:04Z |
---
license: creativeml-openrail-m
---
|
tuanio/wav2vec2-large-xls-r-300m-cv_vi
|
tuanio
| 2023-09-21T00:29:24Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice_11_0",
"base_model:facebook/wav2vec2-xls-r-300m",
"base_model:finetune:facebook/wav2vec2-xls-r-300m",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-09-20T10:17:20Z |
---
license: apache-2.0
base_model: facebook/wav2vec2-xls-r-300m
tags:
- generated_from_trainer
datasets:
- common_voice_11_0
metrics:
- wer
model-index:
- name: wav2vec2-large-xls-r-300m-cv_vi
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: common_voice_11_0
type: common_voice_11_0
config: vi
split: test
args: vi
metrics:
- name: Wer
type: wer
value: 0.663156740155753
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-cv_vi
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice_11_0 dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3858
- Wer: 0.6632
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:-----:|:---------------:|:------:|
| 14.1667 | 9.2 | 200 | 4.5633 | 1.0 |
| 3.6334 | 18.39 | 400 | 3.4332 | 1.0 |
| 1.938 | 27.59 | 600 | 1.2434 | 0.7082 |
| 0.3082 | 36.78 | 800 | 1.2288 | 0.6534 |
| 0.1766 | 45.98 | 1000 | 1.2915 | 0.6500 |
| 0.1287 | 55.17 | 1200 | 1.3452 | 0.6269 |
| 0.1043 | 64.37 | 1400 | 1.4746 | 0.6395 |
| 0.0834 | 73.56 | 1600 | 1.4731 | 0.6347 |
| 0.0837 | 82.76 | 1800 | 1.5893 | 0.6493 |
| 0.0711 | 91.95 | 2000 | 1.6205 | 0.6522 |
| 0.0672 | 101.15 | 2200 | 1.5513 | 0.6503 |
| 0.0745 | 110.34 | 2400 | 1.6509 | 0.6774 |
| 0.07 | 119.54 | 2600 | 1.6779 | 0.6543 |
| 0.0492 | 128.74 | 2800 | 1.7616 | 0.6611 |
| 0.0473 | 137.93 | 3000 | 1.7885 | 0.6634 |
| 0.0535 | 147.13 | 3200 | 1.8877 | 0.6806 |
| 0.0468 | 156.32 | 3400 | 1.7766 | 0.6671 |
| 0.0386 | 165.52 | 3600 | 1.7956 | 0.6494 |
| 0.0418 | 174.71 | 3800 | 1.9402 | 0.6851 |
| 0.0426 | 183.91 | 4000 | 1.9777 | 0.6927 |
| 0.0395 | 193.1 | 4200 | 1.8733 | 0.6689 |
| 0.0392 | 202.3 | 4400 | 1.8994 | 0.6774 |
| 0.0377 | 211.49 | 4600 | 1.9983 | 0.6889 |
| 0.0354 | 220.69 | 4800 | 1.8858 | 0.6645 |
| 0.0315 | 229.89 | 5000 | 1.9716 | 0.6805 |
| 0.0312 | 239.08 | 5200 | 2.0422 | 0.6825 |
| 0.0292 | 248.28 | 5400 | 2.0780 | 0.7019 |
| 0.0283 | 257.47 | 5600 | 1.9102 | 0.6743 |
| 0.025 | 266.67 | 5800 | 1.9745 | 0.6756 |
| 0.0246 | 275.86 | 6000 | 2.1289 | 0.6918 |
| 0.0234 | 285.06 | 6200 | 2.1775 | 0.7068 |
| 0.0219 | 294.25 | 6400 | 2.1755 | 0.6935 |
| 0.0182 | 303.45 | 6600 | 2.1602 | 0.6764 |
| 0.0174 | 312.64 | 6800 | 2.1359 | 0.6596 |
| 0.0157 | 321.84 | 7000 | 2.1958 | 0.6797 |
| 0.0147 | 331.03 | 7200 | 2.1460 | 0.6657 |
| 0.0135 | 340.23 | 7400 | 2.2716 | 0.6719 |
| 0.0124 | 349.43 | 7600 | 2.3556 | 0.6762 |
| 0.0109 | 358.62 | 7800 | 2.2520 | 0.6632 |
| 0.0115 | 367.82 | 8000 | 2.3112 | 0.6802 |
| 0.0108 | 377.01 | 8200 | 2.2925 | 0.6659 |
| 0.0106 | 386.21 | 8400 | 2.2950 | 0.6726 |
| 0.0088 | 395.4 | 8600 | 2.3078 | 0.6735 |
| 0.0084 | 404.6 | 8800 | 2.3538 | 0.6723 |
| 0.0079 | 413.79 | 9000 | 2.3212 | 0.6615 |
| 0.0074 | 422.99 | 9200 | 2.3908 | 0.6774 |
| 0.0094 | 432.18 | 9400 | 2.3164 | 0.6779 |
| 0.0077 | 441.38 | 9600 | 2.3105 | 0.6649 |
| 0.0066 | 450.57 | 9800 | 2.3599 | 0.6742 |
| 0.007 | 459.77 | 10000 | 2.3675 | 0.6709 |
| 0.0056 | 468.97 | 10200 | 2.3964 | 0.6677 |
| 0.0049 | 478.16 | 10400 | 2.3858 | 0.6632 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu117
- Datasets 2.14.5
- Tokenizers 0.13.3
|
CyberHarem/yao_feifei_idolmastercinderellagirls
|
CyberHarem
| 2023-09-21T00:28:18Z | 0 | 0 | null |
[
"art",
"text-to-image",
"dataset:CyberHarem/yao_feifei_idolmastercinderellagirls",
"license:mit",
"region:us"
] |
text-to-image
| 2023-09-21T00:15:23Z |
---
license: mit
datasets:
- CyberHarem/yao_feifei_idolmastercinderellagirls
pipeline_tag: text-to-image
tags:
- art
---
# Lora of yao_feifei_idolmastercinderellagirls
This model is trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion), and the auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs).
The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11).
After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora.
For example, if you want to use the model from step 5100, you need to download `5100/yao_feifei_idolmastercinderellagirls.pt` as the embedding and `5100/yao_feifei_idolmastercinderellagirls.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters.
**The best step we recommend is 5100**, with a score of 0.948. The trigger words are:
1. `yao_feifei_idolmastercinderellagirls`
2. `green_eyes, black_hair, smile, hair_bun, double_bun, open_mouth, short_hair, hair_ornament`
For the following groups, this model is not recommended, and we express our regret:
1. Individuals who cannot tolerate any deviation from the original character design, even in the slightest detail.
2. Individuals whose application scenarios demand high accuracy in recreating character outfits.
3. Individuals who cannot accept the potential randomness of AI-generated images based on the Stable Diffusion algorithm.
4. Individuals who are not comfortable with the fully automated process of training character models with LoRA, or who believe that character models must be trained purely through manual operations to avoid disrespecting the characters.
5. Individuals who find the generated image content offensive to their values.
These are available steps:
| Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | pattern_5 | pattern_6 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata |
|:---------|:----------|:--------------------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------|:--------------------------------------------------|:-------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------|
| **5100** | **0.948** | [**Download**](5100/yao_feifei_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  | [<NSFW, click to see>](5100/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5100/previews/nude.png) | [<NSFW, click to see>](5100/previews/nude2.png) |  |  |
| 4760 | 0.894 | [Download](4760/yao_feifei_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  | [<NSFW, click to see>](4760/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4760/previews/nude.png) | [<NSFW, click to see>](4760/previews/nude2.png) |  |  |
| 4420 | 0.926 | [Download](4420/yao_feifei_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  | [<NSFW, click to see>](4420/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4420/previews/nude.png) | [<NSFW, click to see>](4420/previews/nude2.png) |  |  |
| 4080 | 0.895 | [Download](4080/yao_feifei_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  | [<NSFW, click to see>](4080/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4080/previews/nude.png) | [<NSFW, click to see>](4080/previews/nude2.png) |  |  |
| 3740 | 0.924 | [Download](3740/yao_feifei_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  | [<NSFW, click to see>](3740/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3740/previews/nude.png) | [<NSFW, click to see>](3740/previews/nude2.png) |  |  |
| 3400 | 0.921 | [Download](3400/yao_feifei_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  | [<NSFW, click to see>](3400/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3400/previews/nude.png) | [<NSFW, click to see>](3400/previews/nude2.png) |  |  |
| 3060 | 0.913 | [Download](3060/yao_feifei_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  | [<NSFW, click to see>](3060/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3060/previews/nude.png) | [<NSFW, click to see>](3060/previews/nude2.png) |  |  |
| 2720 | 0.928 | [Download](2720/yao_feifei_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  | [<NSFW, click to see>](2720/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2720/previews/nude.png) | [<NSFW, click to see>](2720/previews/nude2.png) |  |  |
| 2380 | 0.919 | [Download](2380/yao_feifei_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  | [<NSFW, click to see>](2380/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2380/previews/nude.png) | [<NSFW, click to see>](2380/previews/nude2.png) |  |  |
| 2040 | 0.860 | [Download](2040/yao_feifei_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  | [<NSFW, click to see>](2040/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2040/previews/nude.png) | [<NSFW, click to see>](2040/previews/nude2.png) |  |  |
| 1700 | 0.825 | [Download](1700/yao_feifei_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  | [<NSFW, click to see>](1700/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1700/previews/nude.png) | [<NSFW, click to see>](1700/previews/nude2.png) |  |  |
| 1360 | 0.761 | [Download](1360/yao_feifei_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  | [<NSFW, click to see>](1360/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1360/previews/nude.png) | [<NSFW, click to see>](1360/previews/nude2.png) |  |  |
| 1020 | 0.797 | [Download](1020/yao_feifei_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  | [<NSFW, click to see>](1020/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1020/previews/nude.png) | [<NSFW, click to see>](1020/previews/nude2.png) |  |  |
| 680 | 0.770 | [Download](680/yao_feifei_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  | [<NSFW, click to see>](680/previews/bondage.png) |  |  |  | [<NSFW, click to see>](680/previews/nude.png) | [<NSFW, click to see>](680/previews/nude2.png) |  |  |
| 340 | 0.718 | [Download](340/yao_feifei_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  | [<NSFW, click to see>](340/previews/bondage.png) |  |  |  | [<NSFW, click to see>](340/previews/nude.png) | [<NSFW, click to see>](340/previews/nude2.png) |  |  |
|
sandeshrajx/code-alpaca-100
|
sandeshrajx
| 2023-09-21T00:21:45Z | 3 | 0 |
peft
|
[
"peft",
"base_model:abhishek/llama-2-7b-hf-small-shards",
"base_model:adapter:abhishek/llama-2-7b-hf-small-shards",
"region:us"
] | null | 2023-08-02T05:12:04Z |
---
library_name: peft
base_model: abhishek/llama-2-7b-hf-small-shards
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
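A hedged sketch of recreating this setup and loading the adapter is shown below; the base model id is taken from the card metadata, and the generation settings are not part of this card.

```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

# A minimal sketch of the 8-bit quantization config listed above.
bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
    llm_int8_threshold=6.0,
)

# Load the base model in 8-bit, then attach the PEFT adapter from this repository.
base = AutoModelForCausalLM.from_pretrained(
    "abhishek/llama-2-7b-hf-small-shards",
    quantization_config=bnb_config,
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "sandeshrajx/code-alpaca-100")
```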
### Framework versions
- PEFT 0.5.0.dev0
|
vadimgm/lora-trained-xl
|
vadimgm
| 2023-09-21T00:14:07Z | 3 | 2 |
diffusers
|
[
"diffusers",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] |
text-to-image
| 2023-09-20T23:24:34Z |
---
license: openrail++
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of sks dog
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - vadimgm/lora-trained-xl
These are LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained on a photo of sks dog using [DreamBooth](https://dreambooth.github.io/). Some example images are shown below.




LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
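A minimal diffusers sketch for trying these weights follows; the generation settings are assumptions, and the VAE swap simply mirrors the training note above.

```python
import torch
from diffusers import StableDiffusionXLPipeline, AutoencoderKL

# A minimal sketch: load base SDXL with the fp16-fix VAE used during training,
# then apply the LoRA weights from this repository.
vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", vae=vae, torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("vadimgm/lora-trained-xl")

image = pipe("a photo of sks dog", num_inference_steps=30).images[0]
image.save("sks_dog.png")
```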
|
grace-pro/snli_test_100k
|
grace-pro
| 2023-09-21T00:07:06Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-09-20T23:16:43Z |
---
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: snli_test_100k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# snli_test_100k
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1739
- Accuracy: 0.9451
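A minimal inference sketch is shown below; the premise/hypothesis pair is illustrative, and the meaning of the returned labels depends on this checkpoint's id2label mapping, which the card does not document.

```python
from transformers import pipeline

# A minimal sketch: classify a premise/hypothesis pair with this checkpoint.
nli = pipeline("text-classification", model="grace-pro/snli_test_100k")
result = nli({"text": "A man is playing a guitar on stage.",
              "text_pair": "A person is performing music."})
print(result)  # label names (e.g. LABEL_0/1/2) come from the checkpoint config
```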
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.1736 | 1.0 | 9375 | 0.1710 | 0.9392 |
| 0.1416 | 2.0 | 18750 | 0.1747 | 0.9412 |
| 0.1057 | 3.0 | 28125 | 0.1739 | 0.9451 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
digiplay/OldFish_v1.1_personal_HDmix
|
digiplay
| 2023-09-20T23:57:19Z | 332 | 2 |
diffusers
|
[
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-09-20T19:22:02Z |
---
license: other
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---
Several merging methods were used to convert OldFish_v1.1 into a working diffusers .safetensors file.
Original Author's models page:
https://civitai.com/models/14978?modelVersionId=22052
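A minimal diffusers loading sketch is shown below; the prompt is taken from the samples that follow, and the generation settings are assumptions.

```python
import torch
from diffusers import StableDiffusionPipeline

# A minimal text-to-image sketch using this repository directly.
pipe = StableDiffusionPipeline.from_pretrained(
    "digiplay/OldFish_v1.1_personal_HDmix", torch_dtype=torch.float16
).to("cuda")

image = pipe("bright color, light color, 1girl", num_inference_steps=30).images[0]
image.save("sample.png")
```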
Sample images generated by Hugging Face's API:
bright color,light color, 1girl

1 girl, masterpiece , magazine cover,

close-up ,masterpiece,highres, highest quality,intricate detail,best texture,realistic,8k,soft light,perfect shadow, sunny,portrait,1girl,hanfu,walking,Luxury, street shot,



|
CatGalaxy/cat
|
CatGalaxy
| 2023-09-20T23:51:49Z | 0 | 0 | null |
[
"license:cc-by-nc-sa-3.0",
"region:us"
] | null | 2023-09-20T23:51:49Z |
---
license: cc-by-nc-sa-3.0
---
|
sapharos/jairo-reyes
|
sapharos
| 2023-09-20T23:38:40Z | 1 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"autotrain",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0",
"region:us"
] |
text-to-image
| 2023-09-20T19:58:33Z |
---
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: photo of JARV
tags:
- text-to-image
- diffusers
- autotrain
inference: true
---
# DreamBooth trained by AutoTrain
Text encoder was not trained.
|
bedus-creation/eng-limbu-t5-large-all-002
|
bedus-creation
| 2023-09-20T23:37:45Z | 64 | 0 |
transformers
|
[
"transformers",
"tf",
"t5",
"text2text-generation",
"generated_from_keras_callback",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-09-20T17:27:13Z |
---
license: apache-2.0
base_model: t5-small
tags:
- generated_from_keras_callback
model-index:
- name: bedus-creation/eng-limbu-t5-large-all-002
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# bedus-creation/eng-limbu-t5-large-all-002
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.8999
- Validation Loss: 2.7328
- Epoch: 279
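The checkpoint can be loaded with the TensorFlow classes in `transformers`; a minimal generation sketch follows. The expected input format (plain English sentences, with or without a task prefix) is an assumption, since the card does not document it.

```python
from transformers import AutoTokenizer, TFAutoModelForSeq2SeqLM

# A minimal sketch: generate an output sequence from an English input sentence.
tokenizer = AutoTokenizer.from_pretrained("bedus-creation/eng-limbu-t5-large-all-002")
model = TFAutoModelForSeq2SeqLM.from_pretrained("bedus-creation/eng-limbu-t5-large-all-002")

inputs = tokenizer("Good morning, how are you?", return_tensors="tf")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```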
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 7.7953 | 7.0449 | 0 |
| 7.0758 | 6.6946 | 1 |
| 6.7576 | 6.5212 | 2 |
| 6.5967 | 6.3865 | 3 |
| 6.4694 | 6.2904 | 4 |
| 6.3887 | 6.2178 | 5 |
| 6.2966 | 6.1474 | 6 |
| 6.2517 | 6.0932 | 7 |
| 6.1860 | 6.0366 | 8 |
| 6.1346 | 5.9946 | 9 |
| 6.0712 | 5.9387 | 10 |
| 6.0509 | 5.9157 | 11 |
| 6.0028 | 5.8848 | 12 |
| 5.9767 | 5.8508 | 13 |
| 5.9447 | 5.8147 | 14 |
| 5.8854 | 5.7756 | 15 |
| 5.8718 | 5.7431 | 16 |
| 5.8380 | 5.7119 | 17 |
| 5.8139 | 5.6781 | 18 |
| 5.7940 | 5.6455 | 19 |
| 5.7526 | 5.6239 | 20 |
| 5.7284 | 5.5838 | 21 |
| 5.6846 | 5.5729 | 22 |
| 5.6370 | 5.5342 | 23 |
| 5.6364 | 5.4946 | 24 |
| 5.5995 | 5.4774 | 25 |
| 5.5687 | 5.4367 | 26 |
| 5.5542 | 5.4143 | 27 |
| 5.5180 | 5.3827 | 28 |
| 5.4891 | 5.3586 | 29 |
| 5.4495 | 5.3369 | 30 |
| 5.4378 | 5.3089 | 31 |
| 5.4178 | 5.2933 | 32 |
| 5.4018 | 5.2644 | 33 |
| 5.3636 | 5.2449 | 34 |
| 5.3411 | 5.2251 | 35 |
| 5.2948 | 5.1966 | 36 |
| 5.2743 | 5.1697 | 37 |
| 5.2674 | 5.1476 | 38 |
| 5.2382 | 5.1407 | 39 |
| 5.2198 | 5.1172 | 40 |
| 5.1973 | 5.0913 | 41 |
| 5.1627 | 5.0737 | 42 |
| 5.1588 | 5.0510 | 43 |
| 5.1127 | 5.0454 | 44 |
| 5.0878 | 5.0105 | 45 |
| 5.0613 | 5.0046 | 46 |
| 5.0591 | 4.9855 | 47 |
| 5.0412 | 4.9752 | 48 |
| 4.9854 | 4.9594 | 49 |
| 4.9747 | 4.9363 | 50 |
| 4.9700 | 4.9218 | 51 |
| 4.9462 | 4.9077 | 52 |
| 4.9262 | 4.8845 | 53 |
| 4.9259 | 4.8694 | 54 |
| 4.8775 | 4.8454 | 55 |
| 4.8740 | 4.8548 | 56 |
| 4.8358 | 4.8191 | 57 |
| 4.8322 | 4.8062 | 58 |
| 4.7923 | 4.7926 | 59 |
| 4.7962 | 4.7772 | 60 |
| 4.7558 | 4.7718 | 61 |
| 4.7590 | 4.7415 | 62 |
| 4.7218 | 4.7336 | 63 |
| 4.7066 | 4.7259 | 64 |
| 4.6890 | 4.7041 | 65 |
| 4.6694 | 4.7048 | 66 |
| 4.6403 | 4.6774 | 67 |
| 4.6289 | 4.6763 | 68 |
| 4.6279 | 4.6538 | 69 |
| 4.6049 | 4.6313 | 70 |
| 4.5677 | 4.6278 | 71 |
| 4.5795 | 4.6051 | 72 |
| 4.5540 | 4.5965 | 73 |
| 4.5160 | 4.5783 | 74 |
| 4.5139 | 4.5696 | 75 |
| 4.5000 | 4.5461 | 76 |
| 4.4890 | 4.5406 | 77 |
| 4.4287 | 4.5367 | 78 |
| 4.4327 | 4.5103 | 79 |
| 4.4258 | 4.4959 | 80 |
| 4.4061 | 4.4783 | 81 |
| 4.3990 | 4.4655 | 82 |
| 4.3895 | 4.4568 | 83 |
| 4.3561 | 4.4437 | 84 |
| 4.3408 | 4.4307 | 85 |
| 4.3202 | 4.4179 | 86 |
| 4.2858 | 4.4040 | 87 |
| 4.2933 | 4.4001 | 88 |
| 4.2824 | 4.3876 | 89 |
| 4.2461 | 4.3682 | 90 |
| 4.2468 | 4.3575 | 91 |
| 4.2210 | 4.3480 | 92 |
| 4.2108 | 4.3273 | 93 |
| 4.1970 | 4.3143 | 94 |
| 4.1821 | 4.3085 | 95 |
| 4.1640 | 4.2918 | 96 |
| 4.1481 | 4.2699 | 97 |
| 4.1312 | 4.2643 | 98 |
| 4.1221 | 4.2473 | 99 |
| 4.1146 | 4.2410 | 100 |
| 4.0680 | 4.2203 | 101 |
| 4.0452 | 4.2196 | 102 |
| 4.0217 | 4.2066 | 103 |
| 4.0366 | 4.2025 | 104 |
| 4.0123 | 4.1800 | 105 |
| 3.9836 | 4.1794 | 106 |
| 3.9713 | 4.1535 | 107 |
| 3.9780 | 4.1415 | 108 |
| 3.9404 | 4.1295 | 109 |
| 3.9220 | 4.1263 | 110 |
| 3.9356 | 4.1115 | 111 |
| 3.8844 | 4.0967 | 112 |
| 3.8773 | 4.0870 | 113 |
| 3.8716 | 4.0853 | 114 |
| 3.8412 | 4.0683 | 115 |
| 3.8377 | 4.0502 | 116 |
| 3.8244 | 4.0485 | 117 |
| 3.8084 | 4.0419 | 118 |
| 3.8034 | 4.0267 | 119 |
| 3.7625 | 4.0202 | 120 |
| 3.7533 | 4.0012 | 121 |
| 3.7537 | 3.9910 | 122 |
| 3.7306 | 3.9875 | 123 |
| 3.7285 | 3.9704 | 124 |
| 3.7029 | 3.9639 | 125 |
| 3.6878 | 3.9554 | 126 |
| 3.6739 | 3.9437 | 127 |
| 3.6867 | 3.9331 | 128 |
| 3.6416 | 3.9241 | 129 |
| 3.6223 | 3.9166 | 130 |
| 3.6140 | 3.9054 | 131 |
| 3.6078 | 3.8965 | 132 |
| 3.5949 | 3.8874 | 133 |
| 3.5544 | 3.8686 | 134 |
| 3.5501 | 3.8648 | 135 |
| 3.5556 | 3.8563 | 136 |
| 3.5244 | 3.8394 | 137 |
| 3.4931 | 3.8349 | 138 |
| 3.4979 | 3.8258 | 139 |
| 3.4661 | 3.8151 | 140 |
| 3.4753 | 3.7984 | 141 |
| 3.4504 | 3.7964 | 142 |
| 3.4576 | 3.7955 | 143 |
| 3.4260 | 3.7821 | 144 |
| 3.4178 | 3.7637 | 145 |
| 3.3994 | 3.7522 | 146 |
| 3.3944 | 3.7481 | 147 |
| 3.3643 | 3.7424 | 148 |
| 3.3789 | 3.7233 | 149 |
| 3.3367 | 3.7110 | 150 |
| 3.3153 | 3.7045 | 151 |
| 3.3118 | 3.6975 | 152 |
| 3.3088 | 3.6891 | 153 |
| 3.2876 | 3.6760 | 154 |
| 3.2608 | 3.6659 | 155 |
| 3.2618 | 3.6630 | 156 |
| 3.2502 | 3.6473 | 157 |
| 3.2326 | 3.6375 | 158 |
| 3.2107 | 3.6316 | 159 |
| 3.1976 | 3.6233 | 160 |
| 3.1935 | 3.6101 | 161 |
| 3.1789 | 3.6092 | 162 |
| 3.1475 | 3.6092 | 163 |
| 3.1672 | 3.5901 | 164 |
| 3.1377 | 3.5858 | 165 |
| 3.1281 | 3.5749 | 166 |
| 3.1049 | 3.5581 | 167 |
| 3.0839 | 3.5556 | 168 |
| 3.0851 | 3.5453 | 169 |
| 3.0769 | 3.5320 | 170 |
| 3.0775 | 3.5266 | 171 |
| 3.0284 | 3.5204 | 172 |
| 3.0525 | 3.5146 | 173 |
| 3.0226 | 3.5012 | 174 |
| 2.9960 | 3.4935 | 175 |
| 2.9902 | 3.4852 | 176 |
| 2.9843 | 3.4776 | 177 |
| 2.9690 | 3.4626 | 178 |
| 2.9569 | 3.4593 | 179 |
| 2.9346 | 3.4547 | 180 |
| 2.9186 | 3.4286 | 181 |
| 2.9128 | 3.4255 | 182 |
| 2.9268 | 3.4247 | 183 |
| 2.9021 | 3.4132 | 184 |
| 2.8866 | 3.4039 | 185 |
| 2.8667 | 3.4000 | 186 |
| 2.8837 | 3.3907 | 187 |
| 2.8454 | 3.3769 | 188 |
| 2.8227 | 3.3815 | 189 |
| 2.8175 | 3.3662 | 190 |
| 2.8069 | 3.3581 | 191 |
| 2.7910 | 3.3586 | 192 |
| 2.7819 | 3.3428 | 193 |
| 2.7717 | 3.3350 | 194 |
| 2.7649 | 3.3186 | 195 |
| 2.7390 | 3.3211 | 196 |
| 2.7235 | 3.3040 | 197 |
| 2.7286 | 3.2991 | 198 |
| 2.7103 | 3.2952 | 199 |
| 2.7014 | 3.2773 | 200 |
| 2.6868 | 3.2711 | 201 |
| 2.6902 | 3.2669 | 202 |
| 2.6576 | 3.2577 | 203 |
| 2.6249 | 3.2544 | 204 |
| 2.6401 | 3.2438 | 205 |
| 2.6318 | 3.2227 | 206 |
| 2.6323 | 3.2356 | 207 |
| 2.6169 | 3.2217 | 208 |
| 2.6088 | 3.2107 | 209 |
| 2.5782 | 3.2105 | 210 |
| 2.5698 | 3.2004 | 211 |
| 2.5615 | 3.1989 | 212 |
| 2.5591 | 3.1856 | 213 |
| 2.5351 | 3.1888 | 214 |
| 2.5340 | 3.1684 | 215 |
| 2.5246 | 3.1591 | 216 |
| 2.5193 | 3.1515 | 217 |
| 2.4921 | 3.1439 | 218 |
| 2.4864 | 3.1377 | 219 |
| 2.4649 | 3.1273 | 220 |
| 2.4677 | 3.1305 | 221 |
| 2.4673 | 3.1219 | 222 |
| 2.4337 | 3.1115 | 223 |
| 2.4299 | 3.1004 | 224 |
| 2.3988 | 3.0971 | 225 |
| 2.4104 | 3.0896 | 226 |
| 2.4033 | 3.0806 | 227 |
| 2.3804 | 3.0762 | 228 |
| 2.3520 | 3.0737 | 229 |
| 2.3598 | 3.0566 | 230 |
| 2.3498 | 3.0555 | 231 |
| 2.3629 | 3.0408 | 232 |
| 2.3383 | 3.0410 | 233 |
| 2.3226 | 3.0288 | 234 |
| 2.3126 | 3.0275 | 235 |
| 2.3112 | 3.0293 | 236 |
| 2.2838 | 3.0065 | 237 |
| 2.2786 | 2.9994 | 238 |
| 2.2599 | 2.9986 | 239 |
| 2.2481 | 2.9894 | 240 |
| 2.2472 | 2.9854 | 241 |
| 2.2187 | 2.9790 | 242 |
| 2.2278 | 2.9645 | 243 |
| 2.2268 | 2.9652 | 244 |
| 2.2018 | 2.9571 | 245 |
| 2.1895 | 2.9434 | 246 |
| 2.1744 | 2.9463 | 247 |
| 2.1717 | 2.9351 | 248 |
| 2.1529 | 2.9302 | 249 |
| 2.1614 | 2.9310 | 250 |
| 2.1574 | 2.9184 | 251 |
| 2.1357 | 2.9118 | 252 |
| 2.1349 | 2.9017 | 253 |
| 2.1102 | 2.8898 | 254 |
| 2.1137 | 2.8973 | 255 |
| 2.0954 | 2.8839 | 256 |
| 2.0988 | 2.8771 | 257 |
| 2.0826 | 2.8695 | 258 |
| 2.0792 | 2.8674 | 259 |
| 2.0666 | 2.8579 | 260 |
| 2.0672 | 2.8475 | 261 |
| 2.0357 | 2.8424 | 262 |
| 2.0348 | 2.8343 | 263 |
| 2.0250 | 2.8397 | 264 |
| 2.0141 | 2.8213 | 265 |
| 2.0042 | 2.8273 | 266 |
| 2.0160 | 2.8118 | 267 |
| 1.9873 | 2.8120 | 268 |
| 1.9815 | 2.7944 | 269 |
| 1.9853 | 2.7964 | 270 |
| 1.9556 | 2.7879 | 271 |
| 1.9385 | 2.7821 | 272 |
| 1.9195 | 2.7754 | 273 |
| 1.9332 | 2.7688 | 274 |
| 1.9269 | 2.7578 | 275 |
| 1.9224 | 2.7474 | 276 |
| 1.9158 | 2.7407 | 277 |
| 1.9042 | 2.7362 | 278 |
| 1.8999 | 2.7328 | 279 |
### Framework versions
- Transformers 4.33.2
- TensorFlow 2.13.0
- Datasets 2.14.5
- Tokenizers 0.13.3
|
CyberHarem/zaizen_tokiko_idolmastercinderellagirls
|
CyberHarem
| 2023-09-20T23:36:12Z | 0 | 0 | null |
[
"art",
"text-to-image",
"dataset:CyberHarem/zaizen_tokiko_idolmastercinderellagirls",
"license:mit",
"region:us"
] |
text-to-image
| 2023-09-20T23:22:28Z |
---
license: mit
datasets:
- CyberHarem/zaizen_tokiko_idolmastercinderellagirls
pipeline_tag: text-to-image
tags:
- art
---
# Lora of zaizen_tokiko_idolmastercinderellagirls
This model was trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion); the auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs).
The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11).
After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora.
For example, if you want to use the model from step 5700, you need to download `5700/zaizen_tokiko_idolmastercinderellagirls.pt` as the embedding and `5700/zaizen_tokiko_idolmastercinderellagirls.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters.
**The best step we recommend is 5700**, with a score of 0.955. The trigger words are:
1. `zaizen_tokiko_idolmastercinderellagirls`
2. `long_hair, brown_eyes, brown_hair, breasts, jewelry, red_hair, smile`
We regret that this model is not recommended for the following groups:
1. Individuals who cannot tolerate any deviations from the original character design, even in the slightest detail.
2. Individuals whose use cases demand high accuracy in recreating character outfits.
3. Individuals who cannot accept the potential randomness in AI-generated images based on the Stable Diffusion algorithm.
4. Individuals who are not comfortable with the fully automated process of training character models using LoRA, or who believe that character models must be trained purely through manual operations to avoid disrespecting the characters.
5. Individuals who find the generated image content offensive to their values.
These are available steps:
| Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | pattern_5 | pattern_6 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata |
|:---------|:----------|:-----------------------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-------------------------------------------------|:--------------------------------------------------|:-------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------|
| **5700** | **0.955** | [**Download**](5700/zaizen_tokiko_idolmastercinderellagirls.zip) |  |  |  |  |  |  | [<NSFW, click to see>](5700/previews/bikini.png) | [<NSFW, click to see>](5700/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5700/previews/nude.png) | [<NSFW, click to see>](5700/previews/nude2.png) |  |  |
| 5320 | 0.914 | [Download](5320/zaizen_tokiko_idolmastercinderellagirls.zip) |  |  |  |  |  |  | [<NSFW, click to see>](5320/previews/bikini.png) | [<NSFW, click to see>](5320/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5320/previews/nude.png) | [<NSFW, click to see>](5320/previews/nude2.png) |  |  |
| 4940 | 0.953 | [Download](4940/zaizen_tokiko_idolmastercinderellagirls.zip) |  |  |  |  |  |  | [<NSFW, click to see>](4940/previews/bikini.png) | [<NSFW, click to see>](4940/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4940/previews/nude.png) | [<NSFW, click to see>](4940/previews/nude2.png) |  |  |
| 4560 | 0.951 | [Download](4560/zaizen_tokiko_idolmastercinderellagirls.zip) |  |  |  |  |  |  | [<NSFW, click to see>](4560/previews/bikini.png) | [<NSFW, click to see>](4560/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4560/previews/nude.png) | [<NSFW, click to see>](4560/previews/nude2.png) |  |  |
| 4180 | 0.935 | [Download](4180/zaizen_tokiko_idolmastercinderellagirls.zip) |  |  |  |  |  |  | [<NSFW, click to see>](4180/previews/bikini.png) | [<NSFW, click to see>](4180/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4180/previews/nude.png) | [<NSFW, click to see>](4180/previews/nude2.png) |  |  |
| 3800 | 0.932 | [Download](3800/zaizen_tokiko_idolmastercinderellagirls.zip) |  |  |  |  |  |  | [<NSFW, click to see>](3800/previews/bikini.png) | [<NSFW, click to see>](3800/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3800/previews/nude.png) | [<NSFW, click to see>](3800/previews/nude2.png) |  |  |
| 3420 | 0.932 | [Download](3420/zaizen_tokiko_idolmastercinderellagirls.zip) |  |  |  |  |  |  | [<NSFW, click to see>](3420/previews/bikini.png) | [<NSFW, click to see>](3420/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3420/previews/nude.png) | [<NSFW, click to see>](3420/previews/nude2.png) |  |  |
| 3040 | 0.943 | [Download](3040/zaizen_tokiko_idolmastercinderellagirls.zip) |  |  |  |  |  |  | [<NSFW, click to see>](3040/previews/bikini.png) | [<NSFW, click to see>](3040/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3040/previews/nude.png) | [<NSFW, click to see>](3040/previews/nude2.png) |  |  |
| 2660 | 0.940 | [Download](2660/zaizen_tokiko_idolmastercinderellagirls.zip) |  |  |  |  |  |  | [<NSFW, click to see>](2660/previews/bikini.png) | [<NSFW, click to see>](2660/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2660/previews/nude.png) | [<NSFW, click to see>](2660/previews/nude2.png) |  |  |
| 2280 | 0.907 | [Download](2280/zaizen_tokiko_idolmastercinderellagirls.zip) |  |  |  |  |  |  | [<NSFW, click to see>](2280/previews/bikini.png) | [<NSFW, click to see>](2280/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2280/previews/nude.png) | [<NSFW, click to see>](2280/previews/nude2.png) |  |  |
| 1900 | 0.942 | [Download](1900/zaizen_tokiko_idolmastercinderellagirls.zip) |  |  |  |  |  |  | [<NSFW, click to see>](1900/previews/bikini.png) | [<NSFW, click to see>](1900/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1900/previews/nude.png) | [<NSFW, click to see>](1900/previews/nude2.png) |  |  |
| 1520 | 0.928 | [Download](1520/zaizen_tokiko_idolmastercinderellagirls.zip) |  |  |  |  |  |  | [<NSFW, click to see>](1520/previews/bikini.png) | [<NSFW, click to see>](1520/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1520/previews/nude.png) | [<NSFW, click to see>](1520/previews/nude2.png) |  |  |
| 1140 | 0.918 | [Download](1140/zaizen_tokiko_idolmastercinderellagirls.zip) |  |  |  |  |  |  | [<NSFW, click to see>](1140/previews/bikini.png) | [<NSFW, click to see>](1140/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1140/previews/nude.png) | [<NSFW, click to see>](1140/previews/nude2.png) |  |  |
| 760 | 0.935 | [Download](760/zaizen_tokiko_idolmastercinderellagirls.zip) |  |  |  |  |  |  | [<NSFW, click to see>](760/previews/bikini.png) | [<NSFW, click to see>](760/previews/bondage.png) |  |  |  | [<NSFW, click to see>](760/previews/nude.png) | [<NSFW, click to see>](760/previews/nude2.png) |  |  |
| 380 | 0.816 | [Download](380/zaizen_tokiko_idolmastercinderellagirls.zip) |  |  |  |  |  |  | [<NSFW, click to see>](380/previews/bikini.png) | [<NSFW, click to see>](380/previews/bondage.png) |  |  |  | [<NSFW, click to see>](380/previews/nude.png) | [<NSFW, click to see>](380/previews/nude2.png) |  |  |
|
JandC/donut-base-sroie
|
JandC
| 2023-09-20T23:33:07Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vision-encoder-decoder",
"image-text-to-text",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:naver-clova-ix/donut-base",
"base_model:finetune:naver-clova-ix/donut-base",
"license:mit",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2023-09-07T00:20:48Z |
---
license: mit
base_model: naver-clova-ix/donut-base
tags:
- generated_from_trainer
datasets:
- imagefolder
model-index:
- name: donut-base-sroie
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# donut-base-sroie
This model is a fine-tuned version of [naver-clova-ix/donut-base](https://huggingface.co/naver-clova-ix/donut-base) on the imagefolder dataset.
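A minimal inference sketch is shown below; the task prompt token (`<s_sroie>`) and the input image path are assumptions, since the card does not document the training configuration.

```python
from PIL import Image
from transformers import DonutProcessor, VisionEncoderDecoderModel

# A minimal sketch: parse a receipt image with this Donut checkpoint.
processor = DonutProcessor.from_pretrained("JandC/donut-base-sroie")
model = VisionEncoderDecoderModel.from_pretrained("JandC/donut-base-sroie")

image = Image.open("receipt.png").convert("RGB")
pixel_values = processor(image, return_tensors="pt").pixel_values

# The task start token is an assumption and must match what was used in training.
task_prompt = "<s_sroie>"
decoder_input_ids = processor.tokenizer(
    task_prompt, add_special_tokens=False, return_tensors="pt"
).input_ids

outputs = model.generate(pixel_values, decoder_input_ids=decoder_input_ids, max_length=512)
sequence = processor.batch_decode(outputs)[0]
print(processor.token2json(sequence))
```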
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.34.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.14.0
|
YULU-BIKE/Shared-Ride
|
YULU-BIKE
| 2023-09-20T23:23:53Z | 3 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-09-20T23:23:07Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
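A hedged sketch of recreating this 4-bit setup and loading the adapter is shown below; the base model is not stated in this card, so `BASE_MODEL_ID` is a placeholder you must replace.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

# A minimal sketch of the nf4 quantization config listed above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)

# BASE_MODEL_ID is a placeholder; the card does not name the base model.
base = AutoModelForCausalLM.from_pretrained(
    "BASE_MODEL_ID", quantization_config=bnb_config, device_map="auto"
)
model = PeftModel.from_pretrained(base, "YULU-BIKE/Shared-Ride")
```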
### Framework versions
- PEFT 0.5.0.dev0
|
davera-017/Pixel-copter-ultimooooo
|
davera-017
| 2023-09-20T23:02:45Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-09-20T23:02:41Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Pixel-copter-ultimooooo
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 41.40 +/- 32.90
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
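For reference, a minimal sketch of the kind of policy network a Reinforce agent for this environment uses is shown below; the layer sizes are assumptions, not the exact architecture of this checkpoint.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# A minimal REINFORCE-style policy sketch: state in, action probabilities out.
class Policy(nn.Module):
    def __init__(self, state_size, action_size, hidden_size=64):
        super().__init__()
        self.fc1 = nn.Linear(state_size, hidden_size)
        self.fc2 = nn.Linear(hidden_size, action_size)

    def forward(self, x):
        x = F.relu(self.fc1(x))
        return F.softmax(self.fc2(x), dim=1)

    def act(self, state):
        # Sample an action and return its log-probability for the policy-gradient update.
        state = torch.from_numpy(state).float().unsqueeze(0)
        probs = self.forward(state)
        dist = torch.distributions.Categorical(probs)
        action = dist.sample()
        return action.item(), dist.log_prob(action)
```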
|