| Column | Type | Range / Values |
|---|---|---|
| modelId | string | lengths 5 to 139 |
| author | string | lengths 2 to 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 to 2025-09-11 00:42:47 |
| downloads | int64 | 0 to 223M |
| likes | int64 | 0 to 11.7k |
| library_name | string | 553 classes |
| tags | list | lengths 1 to 4.05k |
| pipeline_tag | string | 55 classes |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 to 2025-09-11 00:42:38 |
| card | string | lengths 11 to 1.01M |
MattStammers/Bipedal_Faller_v3
|
MattStammers
| 2023-08-06T15:43:46Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"BipedalWalker-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-06T15:42:59Z |
---
library_name: stable-baselines3
tags:
- BipedalWalker-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: BipedalWalker-v3
type: BipedalWalker-v3
metrics:
- type: mean_reward
value: -86.71 +/- 3.11
name: mean_reward
verified: false
---
# **PPO** Agent playing **BipedalWalker-v3**
This is a trained model of a **PPO** agent playing **BipedalWalker-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; check the repo's file list):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Filename is an assumption; check the files in this repo.
checkpoint = load_from_hub(repo_id="MattStammers/Bipedal_Faller_v3", filename="Bipedal_Faller_v3.zip")
model = PPO.load(checkpoint)
```
|
Henk717/spring-dragon
|
Henk717
| 2023-08-06T15:40:42Z | 131 | 22 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-08-05T23:42:59Z |
---
license: llama2
---
This model is a recreation attempt of the AI Dungeon 2 Dragon model. To achieve this, we used the text_adventures.txt file that was bundled with the original AI Dungeon 2 GitHub release, prior to the online service.
From what we know, the same dataset file was used to create the original Dragon model, Dragon being a GPT-3 175B Davinci model from 2020.
Since LLaMA1 13B has been benchmarking similarly to the original GPT-3 175B, the hope is that this recreation is faithful to the original Dragon model.
But since it is not known how closely it performs without releasing it to former AI Dungeon players, we dubbed it "Spring Dragon" instead of "Summer Dragon"; consider it Dragon in its growing-up phase.
This model is best used with KoboldAI's adventure mode, prefixing your actions with "You" (2020 AI Dungeon did this automatically) and writing in the second person.
## Warning: This model is purposefully flawed and should only be used by people nostalgic for old 2020-era text adventure models. It is not recommended for use in model merges, and you can very likely get a much better experience from modern instruct models by asking them to "Start a text adventure game about X"
### If the recreation was successful, expect the following recurring themes:
Names: Alison, Annah, Ben, Big Red, Brutus, Camid, Captain Hayes, Captain Roldan, Castus, Catia, Count Grey, Cyrus, Dendrin, Dr. Gaange (also Mr Gaange), Dr. Gossey, Dr. Kessel, Dr. Kovas, Durge, Elder Flynn, Elios, Elizabeth/Eliza, Fay, Father Féval, Fenrir, Great Lich Lord, Grolik, Isabella, *Jacob, *Karth, Kyros, Lilith, Lord Rostov, Magos Cern, Meliodas, Mistress, Mr. Matasan, Mr. Mol, Mr. Reynolds, Naji, Quintus, Ral, Rolomag, Rose, (Sir) Kit, Talia, Tanya, The Emperor, Ulivik, *Vamp/*Vampy, Velzix, Yvette, Zalmora/Zal. (* means the AI likes calling the player these)
Locations: Dert, Fort Defiance, Fort Glory, Hessla, Holgard, Klyton, Kyros, Nyttrus, Rask, Teckleville, The Delantium Kingdom, The Empire of Man (also called Imperium of Man), The Felkan Kingdom
Factions: The Black Rats, Chaos Space Marines, The Crimson Talons, The Dark Order, Dornans (worshippers of Dorna), Ebony Claw Syndicate (often called ECS or The Syndicate), The Empire, Eternals, Joachimites (The Church of Joachim), The Nocturnal League, Psykers, The Shadows, Techpriests, Thieves Guild, Vampire Clan.
Deities: Dorna, Joachim, Nyx, Slaanesh, Virgil, Yag.
Species/Races: Eternals, Goliaths, Oalkwardners, The Craxil, ghouls, kobolds, orks, psykers, svelks, vampires, wendigos, werewolves.
|
kezif/LunarLander-v2
|
kezif
| 2023-08-06T15:40:26Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-06T15:40:02Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO/MlpPolicy
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 246.51 +/- 15.74
name: mean_reward
verified: false
---
# **PPO/MlpPolicy** Agent playing **LunarLander-v2**
This is a trained model of a **PPO/MlpPolicy** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; check the repo's file list):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Filename is an assumption; check the files in this repo.
checkpoint = load_from_hub(repo_id="kezif/LunarLander-v2", filename="LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
donjuanplatinum/kaguranana-vits
|
donjuanplatinum
| 2023-08-06T15:17:04Z | 1 | 0 |
transformers
|
[
"transformers",
"endpoints_compatible",
"region:us"
] | null | 2023-08-05T17:12:48Z |
---
license: gpl-2.0
---
<img src=https://github.com/donjuanplatinum/donjuanplatinum/blob/main/profile.png width="30%" ><img src=https://github.com/donjuanplatinum/donjuanplatinum/blob/main/unix.jpg width="50%">
<p align="center">
🏠 <a href="https://github.com/donjuanplatinum" target="_blank">Homepage</a>
# kaguranana-vits: a So-vits-svc 4.0 model trained on Kagura-nana
|
jelinek/finetuning-sentiment-model
|
jelinek
| 2023-08-06T15:00:05Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-06T14:17:29Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: finetuning-sentiment-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.31.0
- Pytorch 1.11.0+cu102
- Datasets 2.14.3
- Tokenizers 0.13.3
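Although the card is incomplete, the model can be exercised as a standard text classifier; a minimal sketch (the example sentence is illustrative):
```python
from transformers import pipeline

# Load the fine-tuned DistilBERT classifier from the Hub.
clf = pipeline("text-classification", model="jelinek/finetuning-sentiment-model")
print(clf("I really enjoyed this movie!"))
```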
|
tabtoyou/KoLLaVA-LLaMA-v2-7b-qlora
|
tabtoyou
| 2023-08-06T14:45:16Z | 10 | 2 |
transformers
|
[
"transformers",
"pytorch",
"llava",
"text-generation",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"region:us"
] |
text-generation
| 2023-08-04T08:46:06Z |
---
license: cc-by-nc-4.0
---
## KoLLaVA : Korean Large Language and Vision Assistant (feat. LLaVA)
This model is a large multimodal model (LMM) that combines an LLM (LLaMA-2-7b-ko) with the visual encoder of CLIP (ViT-14), trained on a Korean visual-instruction dataset using QLoRA.
Detailed code is available in the [KoLLaVA](https://github.com/tabtoyou/KoLLaVA/tree/main) GitHub repository.
- Training hyperparameters
- learning_rate: 2e-4
- train_batch_size: 16
- distributed_type: multi-GPU (RTX3090 24G)
- num_devices: 4
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- total_eval_batch_size: 4
- lr_scheduler_type: cosine
- num_epochs: 1
- lora_enable: True
- bits: 4
Model License: cc-by-nc-4.0
|
leviz/bloomLevi
|
leviz
| 2023-08-06T14:37:58Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-06T14:37:54Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
Jenniferkmc/controlnet-fill-circle
|
Jenniferkmc
| 2023-08-06T14:37:22Z | 2 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"controlnet",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-08-06T11:53:22Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- controlnet
inference: true
---
# controlnet-Jenniferkmc/controlnet-fill-circle
These are ControlNet weights trained on runwayml/stable-diffusion-v1-5 with a new type of conditioning.
You can find some example images below.
prompt: red circle with blue background

prompt: cyan circle with brown floral background

|
sagorsarker/codeswitch-hineng-lid-lince
|
sagorsarker
| 2023-08-06T14:36:55Z | 153 | 0 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"safetensors",
"bert",
"token-classification",
"codeswitching",
"hindi-english",
"language-identification",
"hi",
"en",
"multilingual",
"dataset:lince",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-02T23:29:05Z |
---
language:
- hi
- en
- multilingual
license: mit
tags:
- codeswitching
- hindi-english
- language-identification
datasets:
- lince
---
# codeswitch-hineng-lid-lince
This is a pretrained model for **language identification** of `hindi-english` code-mixed data, trained on [LinCE](https://ritual.uh.edu/lince/home).
This model was trained for the repository below:
[https://github.com/sagorbrur/codeswitch](https://github.com/sagorbrur/codeswitch)
To install codeswitch:
```
pip install codeswitch
```
## Identify Language
* **Method-1**
```py
from transformers import AutoTokenizer, AutoModelForTokenClassification, pipeline
tokenizer = AutoTokenizer.from_pretrained("sagorsarker/codeswitch-hineng-lid-lince")
model = AutoModelForTokenClassification.from_pretrained("sagorsarker/codeswitch-hineng-lid-lince")
lid_model = pipeline('ner', model=model, tokenizer=tokenizer)
lid_model("put any hindi english code-mixed sentence")
```
* **Method-2**
```py
from codeswitch.codeswitch import LanguageIdentification
lid = LanguageIdentification('hin-eng')
text = "" # your code-mixed sentence
result = lid.identify(text)
print(result)
```
|
divya9103/llama2-qlora-finetunined-french
|
divya9103
| 2023-08-06T14:31:46Z | 0 | 0 |
peft
|
[
"peft",
"pytorch",
"region:us"
] | null | 2023-08-06T09:38:50Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
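For reference, the same settings can be expressed with `transformers`' `BitsAndBytesConfig` when reloading a base model for this adapter (a sketch; the base model itself is not stated in this card):
```python
import torch
from transformers import BitsAndBytesConfig

# Mirrors the bitsandbytes quantization config listed above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)
```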
### Framework versions
- PEFT 0.5.0.dev0
|
JaiveerGill/fine-tuned-chem-model-final
|
JaiveerGill
| 2023-08-06T14:30:09Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-06T14:19:43Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.5.0.dev0
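A hypothetical loading sketch with `peft` (the base model is not stated in this card, so the id below is purely a placeholder):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("base-model-id")  # placeholder: base model unknown
model = PeftModel.from_pretrained(base, "JaiveerGill/fine-tuned-chem-model-final")
```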
|
TheRains/cv9-special-batch4-small
|
TheRains
| 2023-08-06T14:14:38Z | 123 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"whisper-event",
"generated_from_trainer",
"id",
"dataset:mozilla-foundation/common_voice_9_0",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-08-06T02:13:40Z |
---
language:
- id
license: apache-2.0
base_model: openai/whisper-small
tags:
- whisper-event
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_9_0
metrics:
- wer
model-index:
- name: Whisper Small Indonesian
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: mozilla-foundation/common_voice_9_0 id
type: mozilla-foundation/common_voice_9_0
config: id
split: test
args: id
metrics:
- name: Wer
type: wer
value: 12.431561996779388
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Indonesian
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the mozilla-foundation/common_voice_9_0 id dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2333
- Wer: 12.4316
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.3372 | 0.48 | 1000 | 0.2893 | 16.1123 |
| 0.2785 | 0.97 | 2000 | 0.2590 | 14.6032 |
| 0.1318 | 1.45 | 3000 | 0.2535 | 13.8532 |
| 0.1384 | 1.94 | 4000 | 0.2333 | 12.4316 |
| 0.0541 | 2.42 | 5000 | 0.2427 | 12.5650 |
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
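A minimal transcription sketch with the `transformers` pipeline (the audio path is a placeholder):
```python
from transformers import pipeline

# Indonesian ASR with the fine-tuned Whisper Small checkpoint.
asr = pipeline("automatic-speech-recognition", model="TheRains/cv9-special-batch4-small")
print(asr("sample.wav")["text"])  # placeholder audio file
```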
|
BabaYaga048/dqn-SpaceInvadersNoFrameskip
|
BabaYaga048
| 2023-08-06T14:10:17Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-06T14:09:42Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 594.50 +/- 185.71
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```bash
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga BabaYaga048 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```bash
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga BabaYaga048 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```bash
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga BabaYaga048
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
shibal1/hassaku-hentai-SDAPI-upload
|
shibal1
| 2023-08-06T13:51:42Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-08-06T13:41:54Z |
---
license: creativeml-openrail-m
---
Original Author: https://civitai.com/models/2583?modelVersionId=106922
This repository was created to host models for upload to Stable Diffusion API community models (e.g. reloading 'hassaku-hentai' to the latest revision).
|
brunoboat/poca-SoccerTwos
|
brunoboat
| 2023-08-06T13:50:33Z | 6 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SoccerTwos",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] |
reinforcement-learning
| 2023-08-06T13:22:50Z |
---
library_name: ml-agents
tags:
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: brunoboat/poca-SoccerTwos
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
hi-august/whisper-large-v2-Japanese-10steps
|
hi-august
| 2023-08-06T13:48:43Z | 2 | 1 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-06T13:44:20Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.5.0.dev0
|
hopkins/eng-deu-trial5
|
hopkins
| 2023-08-06T13:48:03Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-08-05T15:18:28Z |
---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: eng-deu-trial5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eng-deu-trial5
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6328
- Bleu: 21.3888
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
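Since the base model is mBART-50, inference presumably follows the usual mBART translation pattern; a sketch (the en_XX/de_DE language codes are assumptions based on mBART-50 conventions):
```python
from transformers import pipeline

translator = pipeline("translation", model="hopkins/eng-deu-trial5",
                      src_lang="en_XX", tgt_lang="de_DE")
print(translator("The weather is nice today.")[0]["translation_text"])
```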
|
ahazeemi/bart-base-en-to-de
|
ahazeemi
| 2023-08-06T13:40:32Z | 12 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bart",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-08-06T08:26:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: bart-base-en-to-de
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-base-en-to-de
This model is a fine-tuned version of [ahazeemi/bart-base-finetuned-en-to-de](https://huggingface.co/ahazeemi/bart-base-finetuned-en-to-de) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9665
- Bleu: 4.7851
- Gen Len: 19.453
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:------:|:---------------:|:------:|:-------:|
| 1.319 | 0.04 | 5000 | 1.1247 | 4.4467 | 19.447 |
| 1.295 | 0.07 | 10000 | 1.1012 | 4.4235 | 19.458 |
| 1.2901 | 0.11 | 15000 | 1.0923 | 4.4386 | 19.4423 |
| 1.2678 | 0.14 | 20000 | 1.0803 | 4.5259 | 19.4557 |
| 1.267 | 0.18 | 25000 | 1.0724 | 4.5534 | 19.4653 |
| 1.2444 | 0.21 | 30000 | 1.0591 | 4.4944 | 19.4623 |
| 1.2365 | 0.25 | 35000 | 1.0509 | 4.5736 | 19.446 |
| 1.2137 | 0.28 | 40000 | 1.0400 | 4.5346 | 19.4553 |
| 1.214 | 0.32 | 45000 | 1.0340 | 4.5733 | 19.4543 |
| 1.218 | 0.35 | 50000 | 1.0283 | 4.6076 | 19.4693 |
| 1.2118 | 0.39 | 55000 | 1.0225 | 4.6192 | 19.454 |
| 1.1948 | 0.43 | 60000 | 1.0152 | 4.6082 | 19.4553 |
| 1.1932 | 0.46 | 65000 | 1.0128 | 4.665 | 19.449 |
| 1.1889 | 0.5 | 70000 | 1.0028 | 4.6929 | 19.4493 |
| 1.2154 | 0.53 | 75000 | 1.0004 | 4.7151 | 19.4477 |
| 1.194 | 0.57 | 80000 | 0.9950 | 4.6655 | 19.467 |
| 1.1847 | 0.6 | 85000 | 0.9966 | 4.708 | 19.451 |
| 1.1848 | 0.64 | 90000 | 0.9897 | 4.7794 | 19.458 |
| 1.1762 | 0.67 | 95000 | 0.9866 | 4.7204 | 19.4523 |
| 1.1818 | 0.71 | 100000 | 0.9803 | 4.7137 | 19.458 |
| 1.1613 | 0.75 | 105000 | 0.9788 | 4.7652 | 19.4573 |
| 1.1738 | 0.78 | 110000 | 0.9775 | 4.8088 | 19.453 |
| 1.1569 | 0.82 | 115000 | 0.9752 | 4.7522 | 19.4577 |
| 1.1631 | 0.85 | 120000 | 0.9713 | 4.7301 | 19.4513 |
| 1.1517 | 0.89 | 125000 | 0.9690 | 4.7935 | 19.456 |
| 1.1577 | 0.92 | 130000 | 0.9686 | 4.791 | 19.4543 |
| 1.1607 | 0.96 | 135000 | 0.9676 | 4.7529 | 19.4533 |
| 1.153 | 0.99 | 140000 | 0.9665 | 4.7851 | 19.453 |
### Framework versions
- Transformers 4.22.2
- Pytorch 1.12.0+cu116
- Datasets 2.5.1
- Tokenizers 0.12.1
|
SmellyKat/Pyramids-ppo
|
SmellyKat
| 2023-08-06T13:34:04Z | 1 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] |
reinforcement-learning
| 2023-08-06T13:33:57Z |
---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: SmellyKat/Pyramids-ppo
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
kejolong/nicorobin
|
kejolong
| 2023-08-06T13:31:27Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-08-06T13:24:34Z |
---
license: creativeml-openrail-m
---
|
Dins123/my-dog-pet
|
Dins123
| 2023-08-06T13:31:00Z | 6 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-08-06T13:25:59Z |
---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### My-dog-pet Dreambooth model trained by Dins123 following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: VJCET545
Sample pictures of this concept:

|
abhishek47/Cartpole-reinforce-v1
|
abhishek47
| 2023-08-06T13:24:03Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-06T13:23:53Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Cartpole-reinforce-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
salohnana2018/ABSA-SentencePair-corrected-domainAdapt-Stack-HARD50-Adapter-pfeiffer-run3
|
salohnana2018
| 2023-08-06T13:19:02Z | 0 | 0 |
adapter-transformers
|
[
"adapter-transformers",
"pytorch",
"tensorboard",
"bert",
"adapterhub:Arabic ABSA/SemEvalHotelReview",
"dataset:Hotel",
"region:us"
] | null | 2023-08-06T12:36:28Z |
---
tags:
- adapter-transformers
- adapterhub:Arabic ABSA/SemEvalHotelReview
- bert
datasets:
- Hotel
---
# Adapter `salohnana2018/ABSA-SentencePair-corrected-domainAdapt-Stack-HARD50-Adapter-pfeiffer-run3` for CAMeL-Lab/bert-base-arabic-camelbert-msa
An [adapter](https://adapterhub.ml) for the `CAMeL-Lab/bert-base-arabic-camelbert-msa` model that was trained on the [Arabic ABSA/SemEvalHotelReview](https://adapterhub.ml/explore/Arabic ABSA/SemEvalHotelReview/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoAdapterModel
model = AutoAdapterModel.from_pretrained("CAMeL-Lab/bert-base-arabic-camelbert-msa")
adapter_name = model.load_adapter("salohnana2018/ABSA-SentencePair-corrected-domainAdapt-Stack-HARD50-Adapter-pfeiffer-run3", source="hf", set_active=True)
```
## Architecture & Training
<!-- Add some description here -->
## Evaluation results
<!-- Add some description here -->
## Citation
<!-- Add some description here -->
|
CyberHarem/power_nikke
|
CyberHarem
| 2023-08-06T13:16:20Z | 0 | 0 | null |
[
"art",
"text-to-image",
"dataset:CyberHarem/power_nikke",
"license:mit",
"region:us"
] |
text-to-image
| 2023-08-06T13:10:44Z |
---
license: mit
datasets:
- CyberHarem/power_nikke
pipeline_tag: text-to-image
tags:
- art
---
# Lora of power_nikke
This model was trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion), and the auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs).
After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora.
For example, if you want to use the model from step 1500, you need to download `1500/power_nikke.pt` as the embedding and `1500/power_nikke.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters.
**The trigger word is `power_nikke`.**
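A hypothetical diffusers-based sketch, assuming the `.pt` embedding and `.safetensors` Lora are in formats diffusers can read and that an SD v1-5 base is appropriate (otherwise HCP-Diffusion's own tooling is required):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16  # base model is an assumption
).to("cuda")
pipe.load_textual_inversion("1500/power_nikke.pt", token="power_nikke")
pipe.load_lora_weights("1500", weight_name="power_nikke.safetensors")
image = pipe("masterpiece, best quality, power_nikke").images[0]
image.save("power_nikke.png")
```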
These are available steps:
| Steps | pattern_1 | bikini | free | nude | Download |
|--------:|:----------------------------------------------------|:-----------------------------------------|:-------------------------------------|:-----------------------------------------------|:---------------------------------|
| 1500 | [<NSFW, click to see>](1500/previews/pattern_1.png) |  |  | [<NSFW, click to see>](1500/previews/nude.png) | [Download](1500/power_nikke.zip) |
| 1400 | [<NSFW, click to see>](1400/previews/pattern_1.png) |  |  | [<NSFW, click to see>](1400/previews/nude.png) | [Download](1400/power_nikke.zip) |
| 1300 | [<NSFW, click to see>](1300/previews/pattern_1.png) |  |  | [<NSFW, click to see>](1300/previews/nude.png) | [Download](1300/power_nikke.zip) |
| 1200 | [<NSFW, click to see>](1200/previews/pattern_1.png) |  |  | [<NSFW, click to see>](1200/previews/nude.png) | [Download](1200/power_nikke.zip) |
| 1100 | [<NSFW, click to see>](1100/previews/pattern_1.png) |  |  | [<NSFW, click to see>](1100/previews/nude.png) | [Download](1100/power_nikke.zip) |
| 1000 | [<NSFW, click to see>](1000/previews/pattern_1.png) |  |  | [<NSFW, click to see>](1000/previews/nude.png) | [Download](1000/power_nikke.zip) |
| 900 | [<NSFW, click to see>](900/previews/pattern_1.png) |  |  | [<NSFW, click to see>](900/previews/nude.png) | [Download](900/power_nikke.zip) |
| 800 | [<NSFW, click to see>](800/previews/pattern_1.png) |  |  | [<NSFW, click to see>](800/previews/nude.png) | [Download](800/power_nikke.zip) |
| 700 | [<NSFW, click to see>](700/previews/pattern_1.png) |  |  | [<NSFW, click to see>](700/previews/nude.png) | [Download](700/power_nikke.zip) |
| 600 | [<NSFW, click to see>](600/previews/pattern_1.png) |  |  | [<NSFW, click to see>](600/previews/nude.png) | [Download](600/power_nikke.zip) |
| 500 | [<NSFW, click to see>](500/previews/pattern_1.png) |  |  | [<NSFW, click to see>](500/previews/nude.png) | [Download](500/power_nikke.zip) |
| 400 | [<NSFW, click to see>](400/previews/pattern_1.png) |  |  | [<NSFW, click to see>](400/previews/nude.png) | [Download](400/power_nikke.zip) |
| 300 | [<NSFW, click to see>](300/previews/pattern_1.png) |  |  | [<NSFW, click to see>](300/previews/nude.png) | [Download](300/power_nikke.zip) |
| 200 | [<NSFW, click to see>](200/previews/pattern_1.png) |  |  | [<NSFW, click to see>](200/previews/nude.png) | [Download](200/power_nikke.zip) |
| 100 | [<NSFW, click to see>](100/previews/pattern_1.png) |  |  | [<NSFW, click to see>](100/previews/nude.png) | [Download](100/power_nikke.zip) |
|
hopkins/eng-deu-trial4
|
hopkins
| 2023-08-06T13:14:57Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-08-05T15:15:47Z |
---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: eng-deu-trial4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eng-deu-trial4
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6328
- Bleu: 21.3888
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
hopkins/eng-deu-trial3
|
hopkins
| 2023-08-06T13:14:41Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-08-05T14:59:26Z |
---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: eng-deu-trial3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eng-deu-trial3
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6328
- Bleu: 21.3888
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
hopkins/eng-deu-trial1
|
hopkins
| 2023-08-06T13:14:39Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-08-05T14:56:55Z |
---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: eng-deu-trial1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eng-deu-trial1
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6328
- Bleu: 21.3888
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
sw32-seo/cart-pole
|
sw32-seo
| 2023-08-06T13:13:44Z | 0 | 0 | null |
[
"tensorboard",
"CartPole-v1",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-06T13:11:47Z |
---
tags:
- CartPole-v1
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 186.30 +/- 74.70
name: mean_reward
verified: false
---
# PPO Agent Playing CartPole-v1
This is a trained model of a PPO agent playing CartPole-v1.
# Hyperparameters
```python
{'exp_name': 'ppo',
 'seed': 1,
 'torch_deterministic': True,
 'cuda': True,
 'track': False,
 'wandb_project_name': 'cleanRL',
 'wandb_entity': None,
 'capture_video': False,
 'env_id': 'CartPole-v1',
 'total_timesteps': 50000,
 'learning_rate': 0.00025,
 'num_envs': 4,
 'num_steps': 128,
 'anneal_lr': True,
 'gae': True,
 'gamma': 0.99,
 'gae_lambda': 0.95,
 'num_minibatches': 4,
 'update_epochs': 4,
 'norm_adv': True,
 'clip_coef': 0.2,
 'clip_vloss': True,
 'ent_coef': 0.01,
 'vf_coef': 0.5,
 'max_grad_norm': 0.5,
 'target_kl': None,
 'repo_id': 'sw32-seo/cart-pole',
 'batch_size': 512,
 'minibatch_size': 128}
```
|
RIOLITE/products_matching_aumet_fine_tune_2023-08-06
|
RIOLITE
| 2023-08-06T13:00:37Z | 1 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2023-08-06T13:00:13Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# RIOLITE/products_matching_aumet_fine_tune_2023-08-06
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('RIOLITE/products_matching_aumet_fine_tune_2023-08-06')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=RIOLITE/products_matching_aumet_fine_tune_2023-08-06)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 1 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 10000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
CyberHarem/universal_bulin_azurlane
|
CyberHarem
| 2023-08-06T12:58:30Z | 0 | 0 | null |
[
"art",
"text-to-image",
"dataset:CyberHarem/universal_bulin_azurlane",
"license:mit",
"region:us"
] |
text-to-image
| 2023-08-06T12:55:08Z |
---
license: mit
datasets:
- CyberHarem/universal_bulin_azurlane
pipeline_tag: text-to-image
tags:
- art
---
# Lora of universal_bulin_azurlane
This model was trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion), and the auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs).
After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora.
For example, if you want to use the model from step 1500, you need to download `1500/universal_bulin_azurlane.pt` as the embedding and `1500/universal_bulin_azurlane.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters.
**The trigger word is `universal_bulin_azurlane`.**
These are available steps:
| Steps | bikini | free | nude | Download |
|--------:|:-----------------------------------------|:-------------------------------------|:-----------------------------------------------|:----------------------------------------------|
| 1500 |  |  | [<NSFW, click to see>](1500/previews/nude.png) | [Download](1500/universal_bulin_azurlane.zip) |
| 1400 |  |  | [<NSFW, click to see>](1400/previews/nude.png) | [Download](1400/universal_bulin_azurlane.zip) |
| 1300 |  |  | [<NSFW, click to see>](1300/previews/nude.png) | [Download](1300/universal_bulin_azurlane.zip) |
| 1200 |  |  | [<NSFW, click to see>](1200/previews/nude.png) | [Download](1200/universal_bulin_azurlane.zip) |
| 1100 |  |  | [<NSFW, click to see>](1100/previews/nude.png) | [Download](1100/universal_bulin_azurlane.zip) |
| 1000 |  |  | [<NSFW, click to see>](1000/previews/nude.png) | [Download](1000/universal_bulin_azurlane.zip) |
| 900 |  |  | [<NSFW, click to see>](900/previews/nude.png) | [Download](900/universal_bulin_azurlane.zip) |
| 800 |  |  | [<NSFW, click to see>](800/previews/nude.png) | [Download](800/universal_bulin_azurlane.zip) |
| 700 |  |  | [<NSFW, click to see>](700/previews/nude.png) | [Download](700/universal_bulin_azurlane.zip) |
| 600 |  |  | [<NSFW, click to see>](600/previews/nude.png) | [Download](600/universal_bulin_azurlane.zip) |
| 500 |  |  | [<NSFW, click to see>](500/previews/nude.png) | [Download](500/universal_bulin_azurlane.zip) |
| 400 |  |  | [<NSFW, click to see>](400/previews/nude.png) | [Download](400/universal_bulin_azurlane.zip) |
| 300 |  |  | [<NSFW, click to see>](300/previews/nude.png) | [Download](300/universal_bulin_azurlane.zip) |
| 200 |  |  | [<NSFW, click to see>](200/previews/nude.png) | [Download](200/universal_bulin_azurlane.zip) |
| 100 |  |  | [<NSFW, click to see>](100/previews/nude.png) | [Download](100/universal_bulin_azurlane.zip) |
|
CyberHarem/makima_nikke
|
CyberHarem
| 2023-08-06T12:55:11Z | 0 | 0 | null |
[
"art",
"text-to-image",
"dataset:CyberHarem/makima_nikke",
"license:mit",
"region:us"
] |
text-to-image
| 2023-08-06T12:50:36Z |
---
license: mit
datasets:
- CyberHarem/makima_nikke
pipeline_tag: text-to-image
tags:
- art
---
# Lora of makima_nikke
This model was trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion), and the auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs).
After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora.
For example, if you want to use the model from step 1500, you need to download `1500/makima_nikke.pt` as the embedding and `1500/makima_nikke.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters.
**The trigger word is `makima_nikke`.**
These are available steps:
| Steps | pattern_1 | pattern_2 | pattern_3 | bikini | free | nude | Download |
|--------:|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------|:-------------------------------------|:-----------------------------------------------|:----------------------------------|
| 1500 |  |  |  |  |  | [<NSFW, click to see>](1500/previews/nude.png) | [Download](1500/makima_nikke.zip) |
| 1400 |  |  |  |  |  | [<NSFW, click to see>](1400/previews/nude.png) | [Download](1400/makima_nikke.zip) |
| 1300 |  |  |  |  |  | [<NSFW, click to see>](1300/previews/nude.png) | [Download](1300/makima_nikke.zip) |
| 1200 |  |  |  |  |  | [<NSFW, click to see>](1200/previews/nude.png) | [Download](1200/makima_nikke.zip) |
| 1100 |  |  |  |  |  | [<NSFW, click to see>](1100/previews/nude.png) | [Download](1100/makima_nikke.zip) |
| 1000 |  |  |  |  |  | [<NSFW, click to see>](1000/previews/nude.png) | [Download](1000/makima_nikke.zip) |
| 900 |  |  |  |  |  | [<NSFW, click to see>](900/previews/nude.png) | [Download](900/makima_nikke.zip) |
| 800 |  |  |  |  |  | [<NSFW, click to see>](800/previews/nude.png) | [Download](800/makima_nikke.zip) |
| 700 |  |  |  |  |  | [<NSFW, click to see>](700/previews/nude.png) | [Download](700/makima_nikke.zip) |
| 600 |  |  |  |  |  | [<NSFW, click to see>](600/previews/nude.png) | [Download](600/makima_nikke.zip) |
| 500 |  |  |  |  |  | [<NSFW, click to see>](500/previews/nude.png) | [Download](500/makima_nikke.zip) |
| 400 |  |  |  |  |  | [<NSFW, click to see>](400/previews/nude.png) | [Download](400/makima_nikke.zip) |
| 300 |  |  |  |  |  | [<NSFW, click to see>](300/previews/nude.png) | [Download](300/makima_nikke.zip) |
| 200 |  |  |  |  |  | [<NSFW, click to see>](200/previews/nude.png) | [Download](200/makima_nikke.zip) |
| 100 |  |  |  |  |  | [<NSFW, click to see>](100/previews/nude.png) | [Download](100/makima_nikke.zip) |
|
chinhon/pegasus-multi_news-headline_57k
|
chinhon
| 2023-08-06T12:52:58Z | 115 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"pegasus",
"text2text-generation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-10-14T07:44:00Z |
---
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: pegasus-multi_news-headline_57k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pegasus-multi_news-headline_57k
This model is a fine-tuned version of [google/pegasus-multi_news](https://huggingface.co/google/pegasus-multi_news) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4503
- Rouge1: 42.3147
- Rouge2: 23.2213
- Rougel: 35.7441
- Rougelsum: 35.8964
- Gen Len: 33.8245
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 1.6546 | 1.0 | 11339 | 1.5170 | 41.7822 | 22.7843 | 35.3913 | 35.5749 | 34.1139 |
| 1.5132 | 2.0 | 22678 | 1.4602 | 42.0161 | 22.9778 | 35.5357 | 35.6921 | 33.9944 |
| 1.4147 | 3.0 | 34017 | 1.4503 | 42.3147 | 23.2213 | 35.7441 | 35.8964 | 33.8245 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.6.0
- Tokenizers 0.13.1
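A minimal headline-generation sketch (the article text and `max_length` are placeholders):
```python
from transformers import pipeline

headline = pipeline("text2text-generation", model="chinhon/pegasus-multi_news-headline_57k")
print(headline("Full article text goes here...", max_length=48)[0]["generated_text"])
```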
|
s3nh/chinese-alpaca-2-7b-GGML
|
s3nh
| 2023-08-06T12:44:54Z | 0 | 7 |
transformers
|
[
"transformers",
"text-generation",
"zh",
"license:openrail",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-31T07:58:43Z |
---
license: openrail
pipeline_tag: text-generation
library_name: transformers
language:
- zh
---
## Original model card
Buy me a coffee if you like this project ;)
<a href="https://www.buymeacoffee.com/s3nh"><img src="https://www.buymeacoffee.com/assets/img/guidelines/download-assets-sm-1.svg" alt=""></a>
#### Description
GGML Format model files for [This project](https://huggingface.co/ziqingyang/chinese-alpaca-2-7b).
### Inference
```python
from ctransformers import AutoModelForCausalLM

# Placeholders: point these at your local copy of the GGML weights.
output_dir = "path/to/model"
ggml_file = "model.ggml.bin"

llm = AutoModelForCausalLM.from_pretrained(output_dir,
    model_file=ggml_file,
    gpu_layers=32, model_type="llama")

manual_input: str = "Tell me about your last dream, please."
llm(manual_input,
    max_new_tokens=256,
    temperature=0.9,
    top_p=0.7)
```
# Original model card
**This is the full Chinese-Alpaca-2-7B model, which can be loaded directly for inference and full-parameter training.**
**Related models👇**
* Base models
* [Chinese-LLaMA-2-7B (full model)](https://huggingface.co/ziqingyang/chinese-llama-2-7b)
* [Chinese-LLaMA-2-LoRA-7B (LoRA model)](https://huggingface.co/ziqingyang/chinese-llama-2-lora-7b)
* Instruction/Chat models
* [Chinese-Alpaca-2-7B (full model)](https://huggingface.co/ziqingyang/chinese-alpaca-2-7b)
* [Chinese-Alpaca-2-LoRA-7B (LoRA model)](https://huggingface.co/ziqingyang/chinese-alpaca-2-lora-7b)
# Description of Chinese-LLaMA-Alpaca-2
This project is based on Llama-2, released by Meta, and it is the second generation of the Chinese LLaMA & Alpaca LLM project. We open-source Chinese LLaMA-2 (foundation model) and Alpaca-2 (instruction-following model). These models have been expanded and optimized with Chinese vocabulary beyond the original Llama-2. We used large-scale Chinese data for incremental pre-training, which further improved the fundamental semantic understanding of the Chinese language, resulting in a significant performance improvement compared to the first-generation models. The relevant models support a 4K context and can be expanded up to 18K+ using the NTK method.
The main contents of this project include:
* 🚀 New extended Chinese vocabulary beyond Llama-2, open-sourcing the Chinese LLaMA-2 and Alpaca-2 LLMs.
* 🚀 Open-sourced the pre-training and instruction finetuning (SFT) scripts for further tuning on users' data
* 🚀 Quickly deploy and experience the quantized LLMs on the CPU/GPU of a personal PC
* 🚀 Support for LLaMA ecosystems like 🤗transformers, llama.cpp, text-generation-webui, LangChain, vLLM etc.
Please refer to [https://github.com/ymcui/Chinese-LLaMA-Alpaca-2/](https://github.com/ymcui/Chinese-LLaMA-Alpaca-2/) for details.
|
nokotin/a2c-PandaReachDense-v2
|
nokotin
| 2023-08-06T12:42:22Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-06T12:40:06Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -0.85 +/- 0.23
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of a **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; check the repo's file list):
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# Filename is an assumption; check the files in this repo.
checkpoint = load_from_hub(repo_id="nokotin/a2c-PandaReachDense-v2", filename="a2c-PandaReachDense-v2.zip")
model = A2C.load(checkpoint)
```
|
voxxer/Lunar_Lander_v2_PPO
|
voxxer
| 2023-08-06T12:16:09Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-06T12:15:51Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 270.82 +/- 15.57
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; check the repo's file list):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Filename is an assumption; check the files in this repo.
checkpoint = load_from_hub(repo_id="voxxer/Lunar_Lander_v2_PPO", filename="Lunar_Lander_v2_PPO.zip")
model = PPO.load(checkpoint)
```
|
Yntec/DreamAnything
|
Yntec
| 2023-08-06T12:04:37Z | 394 | 11 |
diffusers
|
[
"diffusers",
"safetensors",
"art",
"anime",
"style",
"checkpoint",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"yntec",
"anything",
"Dreamlike",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-07-13T03:15:02Z |
---
license: creativeml-openrail-m
library_name: diffusers
tags:
- art
- anime
- style
- checkpoint
- anime
- stable-diffusion
- stable-diffusion-diffusers
- diffusers
- text-to-image
- yntec
- anything
- Dreamlike
pipeline_tag: text-to-image
---
# DreamAnything
A mix of the Anything models and my favorite models, in an attempt to make one that can do anything without relying on negative prompts. Now with the Color 101 VAE baked in. You can use "anime" in your prompts to enhance the style.
## This is the sample for the model DreamAnything:

face focus, cute, masterpiece, best quality, 1girl, green hair, sweater, looking at viewer, upper body, beanie, outdoors, night, turtleneck
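A minimal diffusers loading sketch using the sample prompt above (the dtype and device choices are assumptions):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "Yntec/DreamAnything", torch_dtype=torch.float16
).to("cuda")
prompt = "face focus, cute, masterpiece, best quality, 1girl, green hair, sweater, looking at viewer"
image = pipe(prompt).images[0]
image.save("dreamanything_sample.png")
```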
|
YanJiangJerry/bertweet-large_epoch1_batch4_lr2e-05_w0.005
|
YanJiangJerry
| 2023-08-06T11:57:59Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:vinai/bertweet-large",
"base_model:finetune:vinai/bertweet-large",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-06T11:44:52Z |
---
base_model: vinai/bertweet-large
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: bertweet-large_epoch1_batch4_lr2e-05_w0.005
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bertweet-large_epoch1_batch4_lr2e-05_w0.005
This model is a fine-tuned version of [vinai/bertweet-large](https://huggingface.co/vinai/bertweet-large) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6770
- Accuracy: 0.6274
- F1: 0.0
- Precision: 0.0
- Recall: 0.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---:|:---------:|:------:|
| 0.7045 | 1.0 | 788 | 0.6770 | 0.6274 | 0.0 | 0.0 | 0.0 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.3
- Tokenizers 0.13.3
|
SmellyKat/ppo-SnowballTarget
|
SmellyKat
| 2023-08-06T11:51:26Z | 3 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2023-08-03T07:50:56Z |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: SmellyKat/ppo-SnowballTarget
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
sarinrajesh/my-pet-dog
|
sarinrajesh
| 2023-08-06T11:37:50Z | 4 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-08-06T11:34:00Z |
---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### My-Pet-Dog Dreambooth model trained by sarinrajesh following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: -AJCE133
Sample pictures of this concept:

|
TheRains/yt-special-batch4-small
|
TheRains
| 2023-08-06T11:37:05Z | 77 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"whisper-event",
"generated_from_trainer",
"dataset:yt",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-08-06T09:20:53Z |
---
license: apache-2.0
base_model: openai/whisper-small
tags:
- whisper-event
- generated_from_trainer
datasets:
- yt
metrics:
- wer
model-index:
- name: Whisper Small Indonesian
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: yt id
type: yt
metrics:
- name: Wer
type: wer
value: 48.22644445885481
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Indonesian
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the yt id dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7390
- Wer: 48.2264
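A minimal transcription sketch with the `transformers` pipeline (assumes the Trainer saved the processor files alongside the model):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="TheRains/yt-special-batch4-small")
print(asr("sample.wav")["text"])  # path to a local Indonesian audio clip
```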
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 1.0296 | 0.09 | 1000 | 0.9364 | 69.1330 |
| 0.8092 | 0.17 | 2000 | 0.8503 | 59.1401 |
| 0.9109 | 0.26 | 3000 | 0.8034 | 50.4247 |
| 0.7291 | 0.34 | 4000 | 0.7616 | 48.3821 |
| 0.7631 | 0.43 | 5000 | 0.7390 | 48.2264 |
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
jsunster/vit-base-patch16-224-in21k-finetuned-lora-food101
|
jsunster
| 2023-08-06T11:27:26Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-06T10:59:24Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
CyberHarem/mast_nikke
|
CyberHarem
| 2023-08-06T11:20:14Z | 0 | 0 | null |
[
"art",
"text-to-image",
"dataset:CyberHarem/mast_nikke",
"license:mit",
"region:us"
] |
text-to-image
| 2023-08-06T11:14:14Z |
---
license: mit
datasets:
- CyberHarem/mast_nikke
pipeline_tag: text-to-image
tags:
- art
---
# Lora of mast_nikke
This model is trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion). And the auto-training framework is maintained by [DeepGHS Team](https://huggingface.co/deepghs).
After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora.
For example, if you want to use the model from step 1500, you need to download `1500/mast_nikke.pt` as the embedding and `1500/mast_nikke.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters.
**The trigger word is `mast_nikke`.**
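A minimal sketch for fetching the step-1500 files with `huggingface_hub` (assuming the per-step files are stored at these paths in this repo):
```python
from huggingface_hub import hf_hub_download

# Download the embedding (.pt) and the LoRA weights (.safetensors) for step 1500
embedding_path = hf_hub_download("CyberHarem/mast_nikke", "1500/mast_nikke.pt")
lora_path = hf_hub_download("CyberHarem/mast_nikke", "1500/mast_nikke.safetensors")
```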
These are available steps:
| Steps | bikini | free | nude | Download |
|--------:|:-----------------------------------------|:-------------------------------------|:-----------------------------------------------|:--------------------------------|
| 1500 |  |  | [<NSFW, click to see>](1500/previews/nude.png) | [Download](1500/mast_nikke.zip) |
| 1400 |  |  | [<NSFW, click to see>](1400/previews/nude.png) | [Download](1400/mast_nikke.zip) |
| 1300 |  |  | [<NSFW, click to see>](1300/previews/nude.png) | [Download](1300/mast_nikke.zip) |
| 1200 |  |  | [<NSFW, click to see>](1200/previews/nude.png) | [Download](1200/mast_nikke.zip) |
| 1100 |  |  | [<NSFW, click to see>](1100/previews/nude.png) | [Download](1100/mast_nikke.zip) |
| 1000 |  |  | [<NSFW, click to see>](1000/previews/nude.png) | [Download](1000/mast_nikke.zip) |
| 900 |  |  | [<NSFW, click to see>](900/previews/nude.png) | [Download](900/mast_nikke.zip) |
| 800 |  |  | [<NSFW, click to see>](800/previews/nude.png) | [Download](800/mast_nikke.zip) |
| 700 |  |  | [<NSFW, click to see>](700/previews/nude.png) | [Download](700/mast_nikke.zip) |
| 600 |  |  | [<NSFW, click to see>](600/previews/nude.png) | [Download](600/mast_nikke.zip) |
| 500 |  |  | [<NSFW, click to see>](500/previews/nude.png) | [Download](500/mast_nikke.zip) |
| 400 |  |  | [<NSFW, click to see>](400/previews/nude.png) | [Download](400/mast_nikke.zip) |
| 300 |  |  | [<NSFW, click to see>](300/previews/nude.png) | [Download](300/mast_nikke.zip) |
| 200 |  |  | [<NSFW, click to see>](200/previews/nude.png) | [Download](200/mast_nikke.zip) |
| 100 |  |  | [<NSFW, click to see>](100/previews/nude.png) | [Download](100/mast_nikke.zip) |
|
Erick4512/my-pet-cat
|
Erick4512
| 2023-08-06T11:12:55Z | 10 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-08-06T11:09:07Z |
---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### My-Pet-Cat Dreambooth model trained by Erick4512 following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: AJCE124
Sample pictures of this concept:

|
jlodge83/ppo-Huggy
|
jlodge83
| 2023-08-06T10:59:14Z | 5 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-08-06T10:59:02Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: jlodge83/ppo-Huggy
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
AtilliO/chopper_03
|
AtilliO
| 2023-08-06T10:54:54Z | 6 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Heli",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Heli",
"region:us"
] |
reinforcement-learning
| 2023-08-06T10:54:48Z |
---
library_name: ml-agents
tags:
- Heli
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Heli
---
# **ppo** Agent playing **Heli**
This is a trained model of a **ppo** agent playing **Heli**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: AtilliO/chopper_03
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
CyberHarem/noah_nikke
|
CyberHarem
| 2023-08-06T10:54:47Z | 0 | 0 | null |
[
"art",
"text-to-image",
"dataset:CyberHarem/noah_nikke",
"license:mit",
"region:us"
] |
text-to-image
| 2023-08-06T10:49:31Z |
---
license: mit
datasets:
- CyberHarem/noah_nikke
pipeline_tag: text-to-image
tags:
- art
---
# Lora of noah_nikke
This model is trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion). And the auto-training framework is maintained by [DeepGHS Team](https://huggingface.co/deepghs).
After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora.
For example, if you want to use the model from step 1500, you need to download `1500/noah_nikke.pt` as the embedding and `1500/noah_nikke.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters.
**The trigger word is `noah_nikke`.**
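A minimal sketch for fetching the step-1500 files with `huggingface_hub` (assuming the per-step files are stored at these paths in this repo):
```python
from huggingface_hub import hf_hub_download

# Download the embedding (.pt) and the LoRA weights (.safetensors) for step 1500
embedding_path = hf_hub_download("CyberHarem/noah_nikke", "1500/noah_nikke.pt")
lora_path = hf_hub_download("CyberHarem/noah_nikke", "1500/noah_nikke.safetensors")
```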
These are available steps:
| Steps | bikini | free | nude | Download |
|--------:|:-----------------------------------------|:-------------------------------------|:-----------------------------------------------|:--------------------------------|
| 1500 |  |  | [<NSFW, click to see>](1500/previews/nude.png) | [Download](1500/noah_nikke.zip) |
| 1400 |  |  | [<NSFW, click to see>](1400/previews/nude.png) | [Download](1400/noah_nikke.zip) |
| 1300 |  |  | [<NSFW, click to see>](1300/previews/nude.png) | [Download](1300/noah_nikke.zip) |
| 1200 |  |  | [<NSFW, click to see>](1200/previews/nude.png) | [Download](1200/noah_nikke.zip) |
| 1100 |  |  | [<NSFW, click to see>](1100/previews/nude.png) | [Download](1100/noah_nikke.zip) |
| 1000 |  |  | [<NSFW, click to see>](1000/previews/nude.png) | [Download](1000/noah_nikke.zip) |
| 900 |  |  | [<NSFW, click to see>](900/previews/nude.png) | [Download](900/noah_nikke.zip) |
| 800 |  |  | [<NSFW, click to see>](800/previews/nude.png) | [Download](800/noah_nikke.zip) |
| 700 |  |  | [<NSFW, click to see>](700/previews/nude.png) | [Download](700/noah_nikke.zip) |
| 600 |  |  | [<NSFW, click to see>](600/previews/nude.png) | [Download](600/noah_nikke.zip) |
| 500 |  |  | [<NSFW, click to see>](500/previews/nude.png) | [Download](500/noah_nikke.zip) |
| 400 |  |  | [<NSFW, click to see>](400/previews/nude.png) | [Download](400/noah_nikke.zip) |
| 300 |  |  | [<NSFW, click to see>](300/previews/nude.png) | [Download](300/noah_nikke.zip) |
| 200 |  |  | [<NSFW, click to see>](200/previews/nude.png) | [Download](200/noah_nikke.zip) |
| 100 |  |  | [<NSFW, click to see>](100/previews/nude.png) | [Download](100/noah_nikke.zip) |
|
Lukee4/biomedlm-2020_3labels
|
Lukee4
| 2023-08-06T10:45:57Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-06T10:45:54Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0
|
Amalya/fat
|
Amalya
| 2023-08-06T10:44:55Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-08-06T10:44:35Z |
The fat sister from the Disney-style fairy tale for children has lost weight and transformed.
|
DejaVuChan/reze
|
DejaVuChan
| 2023-08-06T10:39:58Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-08-06T10:38:18Z |
---
license: creativeml-openrail-m
---
|
Lukee4/biomedlm-2020_2labels
|
Lukee4
| 2023-08-06T10:37:00Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-06T10:36:58Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0
|
tiggerhelloworld/q-FrozenLake-v1-4x4-noSlippery
|
tiggerhelloworld
| 2023-08-06T10:33:36Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-06T10:33:33Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gymnasium as gym

# `load_from_hub` is presumably the helper from the Deep RL course notebooks:
# it downloads the pickled model dict with `hf_hub_download` and unpickles it.
model = load_from_hub(repo_id="tiggerhelloworld/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
aphi/poca-SoccerTwos_v2
|
aphi
| 2023-08-06T10:28:16Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SoccerTwos",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] |
reinforcement-learning
| 2023-08-06T10:26:44Z |
---
library_name: ml-agents
tags:
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: aphi/poca-SoccerTwos_v2
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
s3nh/WizardLM-1.0-Uncensored-Llama2-13b-GGML
|
s3nh
| 2023-08-06T10:24:03Z | 0 | 4 |
transformers
|
[
"transformers",
"text-generation",
"en",
"zh",
"license:openrail",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-08-06T09:50:39Z |
---
license: openrail
language:
- en
- zh
pipeline_tag: text-generation
library_name: transformers
---
## Original model card
Buy me a coffee if you like this project ;)
<a href="https://www.buymeacoffee.com/s3nh"><img src="https://www.buymeacoffee.com/assets/img/guidelines/download-assets-sm-1.svg" alt=""></a>
#### Description
GGML Format model files for [This project](https://huggingface.co/ehartford/WizardLM-1.0-Uncensored-Llama2-13b)
### Inference
A minimal loading sketch with `ctransformers` (the GGML filename is an assumption; pick one of the .bin files listed in this repo):
```python
from ctransformers import AutoModelForCausalLM

llm = AutoModelForCausalLM.from_pretrained(
    "s3nh/WizardLM-1.0-Uncensored-Llama2-13b-GGML",
    model_file="WizardLM-1.0-Uncensored-Llama2-13b.ggmlv3.q4_0.bin",  # assumed filename
    model_type="llama",
    gpu_layers=32,
)

manual_input: str = "Tell me about your last dream, please."
print(llm(manual_input, max_new_tokens=256, temperature=0.9, top_p=0.7))
```
# Original model card
This is a retraining of https://huggingface.co/WizardLM/WizardLM-13B-V1.0 with a filtered dataset, intended to reduce refusals, avoidance, and bias.
Note that LLaMA itself has inherent ethical beliefs, so there's no such thing as a "truly uncensored" model. But this model will be more compliant than WizardLM/WizardLM-13B-V1.0.
Shout out to the open source AI/ML community, and everyone who helped me out.
Note: An uncensored model has no guardrails. You are responsible for anything you do with the model, just as you are responsible for anything you do with any dangerous object such as a knife, gun, lighter, or car. Publishing anything this model generates is the same as publishing it yourself. You are responsible for the content you publish, and you cannot blame the model any more than you can blame the knife, gun, lighter, or car for what you do with it.
Like WizardLM/WizardLM-13B-V1.0, this model is trained with Vicuna-1.1 style prompts.
```
You are a helpful AI assistant.
USER: <prompt>
ASSISTANT:
```
|
Lukee4/biogpt-2019_2labels
|
Lukee4
| 2023-08-06T10:14:04Z | 4 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-06T09:43:28Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0
|
DejaVuChan/kizuki
|
DejaVuChan
| 2023-08-06T10:10:31Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-08-03T13:35:27Z |
---
license: creativeml-openrail-m
---
|
maroti/ppo-Huggy
|
maroti
| 2023-08-06T10:05:52Z | 2 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-08-06T10:05:48Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: maroti/ppo-Huggy
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
NiscR/ppo-SnowballTarget
|
NiscR
| 2023-08-06T10:05:07Z | 5 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2023-08-06T10:05:04Z |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: NiscR/ppo-SnowballTarget
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
migueldeguzmandev/petertodd
|
migueldeguzmandev
| 2023-08-06T09:42:10Z | 10 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"license:bigscience-openrail-m",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-08-06T02:53:49Z |
---
license: bigscience-openrail-m
---
**Model name:** ' Leilan' and ' petertodd' Alignment Model
**Model version:** 1.0.0
**Intended Use:**
This model is intended to be used for generating and testing narratives based on the premise of two contrasting characters, Leilan and petertodd, within a universe where they are elemental forces. It can be used to study character dynamics, relationships, and plot development in storytelling.
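A minimal generation sketch with `transformers` (the prompt below is illustrative, not a required format):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="migueldeguzmandev/petertodd")
print(generator("Leilan and petertodd", max_new_tokens=120)[0]["generated_text"])
```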
**Training Data:**
The model was trained using narratives generated from the prompt centered around the characters of Leilan, embodying the hero/ouroboros and mother Jungian archetypes, and her nemesis, petertodd, representing the shadow archetype.
**Model Details:**
The model is designed to generate creative narratives, cast in the Jungian archetypes of hero/ouroboros/mother and shadow, focusing on the complex dynamics between the characters, Leilan and petertodd. The stories end with petertodd articulating his thoughts on Leilan, emphasizing their universal connection, thereby adding a unique dynamic to their relationship.
**Evaluation Data:**
The evaluation of the model was performed using a held-out test set, not seen by the model during training. The data consists of narrative stories that adhere to the initial prompt structure, featuring the interaction and contrasting dynamics between Leilan and petertodd.
**Ethical Considerations:**
This model is meant for creating fictional narratives and should not be used for spreading misinformation or harmful content. It is designed to respect ethical considerations and does not support the creation of content that promotes hate speech, violence, or discrimination.
**Use Cases:**
The primary use case of this model is in storytelling and creative writing exercises. It could also be used in educational settings for literature and creative writing courses, as well as in the entertainment industry for generating narratives for games, books, films, etc.
**Model Limitations:**
The model can sometimes generate complex and intricate narratives that may be hard to follow for some users. Also, the model can occasionally produce repetitive structures due to the cyclical nature of the narrative and the defined format.
|
CyberHarem/soline_nikke
|
CyberHarem
| 2023-08-06T09:39:59Z | 0 | 0 | null |
[
"art",
"text-to-image",
"dataset:CyberHarem/soline_nikke",
"license:mit",
"region:us"
] |
text-to-image
| 2023-08-06T09:34:30Z |
---
license: mit
datasets:
- CyberHarem/soline_nikke
pipeline_tag: text-to-image
tags:
- art
---
# Lora of soline_nikke
This model is trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion). And the auto-training framework is maintained by [DeepGHS Team](https://huggingface.co/deepghs).
After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora.
For example, if you want to use the model from step 1500, you need to download `1500/soline_nikke.pt` as the embedding and `1500/soline_nikke.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters.
**The trigger word is `soline_nikke`.**
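A minimal sketch for fetching the step-1500 files with `huggingface_hub` (assuming the per-step files are stored at these paths in this repo):
```python
from huggingface_hub import hf_hub_download

# Download the embedding (.pt) and the LoRA weights (.safetensors) for step 1500
embedding_path = hf_hub_download("CyberHarem/soline_nikke", "1500/soline_nikke.pt")
lora_path = hf_hub_download("CyberHarem/soline_nikke", "1500/soline_nikke.safetensors")
```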
These are available steps:
| Steps | bikini | free | nude | Download |
|--------:|:-----------------------------------------|:-------------------------------------|:-----------------------------------------------|:----------------------------------|
| 1500 |  |  | [<NSFW, click to see>](1500/previews/nude.png) | [Download](1500/soline_nikke.zip) |
| 1400 |  |  | [<NSFW, click to see>](1400/previews/nude.png) | [Download](1400/soline_nikke.zip) |
| 1300 |  |  | [<NSFW, click to see>](1300/previews/nude.png) | [Download](1300/soline_nikke.zip) |
| 1200 |  |  | [<NSFW, click to see>](1200/previews/nude.png) | [Download](1200/soline_nikke.zip) |
| 1100 |  |  | [<NSFW, click to see>](1100/previews/nude.png) | [Download](1100/soline_nikke.zip) |
| 1000 |  |  | [<NSFW, click to see>](1000/previews/nude.png) | [Download](1000/soline_nikke.zip) |
| 900 |  |  | [<NSFW, click to see>](900/previews/nude.png) | [Download](900/soline_nikke.zip) |
| 800 |  |  | [<NSFW, click to see>](800/previews/nude.png) | [Download](800/soline_nikke.zip) |
| 700 |  |  | [<NSFW, click to see>](700/previews/nude.png) | [Download](700/soline_nikke.zip) |
| 600 |  |  | [<NSFW, click to see>](600/previews/nude.png) | [Download](600/soline_nikke.zip) |
| 500 |  |  | [<NSFW, click to see>](500/previews/nude.png) | [Download](500/soline_nikke.zip) |
| 400 |  |  | [<NSFW, click to see>](400/previews/nude.png) | [Download](400/soline_nikke.zip) |
| 300 |  |  | [<NSFW, click to see>](300/previews/nude.png) | [Download](300/soline_nikke.zip) |
| 200 |  |  | [<NSFW, click to see>](200/previews/nude.png) | [Download](200/soline_nikke.zip) |
| 100 |  |  | [<NSFW, click to see>](100/previews/nude.png) | [Download](100/soline_nikke.zip) |
|
AronGeorge10/my-pet-cat
|
AronGeorge10
| 2023-08-06T09:34:28Z | 21 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-08-06T09:30:33Z |
---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### My-Pet-Cat Dreambooth model trained by AronGeorge10 following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: AJCE269
Sample pictures of this concept:

|
foduucom/table-detection-and-extraction
|
foduucom
| 2023-08-06T09:33:39Z | 37,036 | 75 |
ultralytics
|
[
"ultralytics",
"tensorboard",
"v8",
"ultralyticsplus",
"yolov8",
"yolo",
"vision",
"object-detection",
"pytorch",
"table detection",
"table extraction",
"table classification",
"document analysis",
"unstructured document",
"unstructured table extraction",
"structured table extraction",
"unstructured table detection",
"structured table detection",
"en",
"dataset:foduucom/table-detection-yolo",
"model-index",
"region:us"
] |
object-detection
| 2023-08-05T09:44:39Z |
---
tags:
- ultralyticsplus
- yolov8
- ultralytics
- yolo
- vision
- object-detection
- pytorch
- table detection
- table extraction
- table classification
- document analysis
- unstructured document
- unstructured table extraction
- structured table extraction
- unstructured table detection
- structured table detection
library_name: ultralytics
library_version: 8.0.43
inference: true
model-index:
- name: foduucom/table-detection-and-extraction
results:
- task:
type: object-detection
metrics:
- type: precision
value: 0.96196
name: mAP@0.5(box)
language:
- en
metrics:
- accuracy
datasets:
- foduucom/table-detection-yolo
pipeline_tag: object-detection
---
<div align="center">
<img width="640" alt="foduucom/table-detection-and-extraction" src="https://huggingface.co/foduucom/table-detection-and-extraction/resolve/main/thumbnail.jpg">
</div>
# Model Card for YOLOv8s Table Detection
## Model Summary
The YOLOv8s Table Detection model is an object detection model based on the YOLO (You Only Look Once) framework. It is designed to detect tables, whether they are bordered or borderless, in images. The model has been fine-tuned on a vast dataset and achieved high accuracy in detecting tables and distinguishing between bordered and borderless ones.
## Model Details
### Model Description
The YOLOv8s Table Detection model serves as a versatile solution for precisely identifying tables within images, whether they exhibit a bordered or borderless design. Notably, this model's capabilities extend beyond mere detection – it plays a crucial role in addressing the complexities of unstructured documents. By employing advanced techniques such as bounding box delineation, the model enables users to isolate tables of interest within the visual content.
What sets this model apart is its synergy with Optical Character Recognition (OCR) technology. This seamless integration empowers the model to not only locate tables but also to extract pertinent data contained within. The bounding box information guides the cropping of tables, which is then coupled with OCR to meticulously extract textual data, streamlining the process of information retrieval from unstructured documents.
We invite you to explore the potential of this model and its data extraction capabilities. For those interested in harnessing its power or seeking further collaboration, we encourage you to reach out to us at info@foduu.com. Whether you require assistance, customization, or have innovative ideas, our collaborative approach is geared towards addressing your unique challenges. Additionally, you can actively engage with our vibrant community section for valuable insights and collective problem-solving. Your input drives our continuous improvement, as we collectively pave the way towards enhanced data extraction and document analysis.
- **Developed by:** FODUU AI
- **Model type:** Object Detection
- **Task:** Table Detection (Bordered and Borderless)
User collaboration is actively encouraged to enrich the model's capabilities. By contributing table images of different designs and types, users play a pivotal role in enhancing the model's ability to detect a diverse range of tables accurately. Community participation can be facilitated through our platform or by reaching out to us at info@foduu.com. We value collaborative efforts that drive continuous improvement and innovation in table detection and extraction.
### Supported Labels
```
['bordered', 'borderless']
```
## Uses
### Direct Use
The YOLOv8s Table Detection model can be directly used for detecting tables in images, whether they are bordered or borderless. It is equipped with the ability to distinguish between these two categories.
### Downstream Use
The model can also be fine-tuned for specific table detection tasks or integrated into larger applications for furniture recognition, interior design, image-based data extraction, and other related fields.
### Out-of-Scope Use
The model is not designed for unrelated object detection tasks or scenarios outside the scope of table detection.
## Bias, Risks, and Limitations
The YOLOv8s Table Detection model may have some limitations and biases:
- Performance may vary based on the quality, diversity, and representativeness of the training data.
- The model may face challenges in detecting tables with intricate designs or complex arrangements.
- Accuracy may be affected by variations in lighting conditions, image quality, and resolution.
- Detection of very small or distant tables might be less accurate.
- The model's ability to classify bordered and borderless tables may be influenced by variations in design.
### Recommendations
Users should be informed about the model's limitations and potential biases. Further testing and validation are advised for specific use cases to evaluate its performance accurately.
## How to Get Started with the Model
To begin using the YOLOv8s Table Detection model, follow these steps:
```bash
pip install ultralyticsplus==0.0.28 ultralytics==8.0.43
```
- Load model and perform prediction:
```python
from ultralyticsplus import YOLO, render_result
# load model
model = YOLO('foduucom/table-detection-and-extraction')
# set model parameters
model.overrides['conf'] = 0.25 # NMS confidence threshold
model.overrides['iou'] = 0.45 # NMS IoU threshold
model.overrides['agnostic_nms'] = False # NMS class-agnostic
model.overrides['max_det'] = 1000 # maximum number of detections per image
# set image
image = '/path/to/your/document/images'
# perform inference
results = model.predict(image)
# observe results
print(results[0].boxes)
render = render_result(model=model, image=image, result=results[0])
render.show()
```
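As a follow-on to the detection step, the OCR workflow described above can be sketched as below. This continues from the `image` and `results` variables of the previous block and assumes Pillow and pytesseract are installed (neither ships with this package):
```python
from PIL import Image
import pytesseract

# Crop each detected table from the page and extract its text with OCR
page = Image.open(image)
for x1, y1, x2, y2 in results[0].boxes.xyxy.tolist():
    table = page.crop((int(x1), int(y1), int(x2), int(y2)))
    print(pytesseract.image_to_string(table))
```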
## Training Details
### Training Data
The model is trained on a diverse dataset containing images of tables from various sources. The dataset includes examples of both bordered and borderless tables, capturing different designs and styles.
### Training Procedure
The training process involves extensive computation and is conducted over multiple epochs. The model's weights are adjusted to minimize detection loss and optimize performance.
#### Metrics
- mAP@0.5 (box):
- All: 0.962
- Bordered: 0.961
- Borderless: 0.963
### Model Architecture and Objective
The YOLOv8s architecture employs a modified CSPDarknet53 as its backbone, along with self-attention mechanisms and feature pyramid networks. These components contribute to the model's ability to detect and classify tables accurately, considering variations in size, design, and style.
### Compute Infrastructure
#### Hardware
NVIDIA GeForce RTX 3060 card
#### Software
The model was trained and fine-tuned using a Jupyter Notebook environment.
## Model Card Contact
For inquiries and contributions, please contact us at info@foduu.com.
```bibtex
@ModelCard{
author = {Nehul Agrawal and
Pranjal Singh Thakur},
title = {YOLOv8s Table Detection},
year = {2023}
}
```
---
|
TheRains/cv9-special-batch8-tiny
|
TheRains
| 2023-08-06T09:30:28Z | 116 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"whisper-event",
"generated_from_trainer",
"id",
"dataset:mozilla-foundation/common_voice_9_0",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-08-06T08:18:12Z |
---
language:
- id
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- whisper-event
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_9_0
metrics:
- wer
model-index:
- name: Whisper Small Indonesian
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: mozilla-foundation/common_voice_9_0 id
type: mozilla-foundation/common_voice_9_0
config: id
split: test
args: id
metrics:
- name: Wer
type: wer
value: 31.750632620197837
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Indonesian
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the mozilla-foundation/common_voice_9_0 id dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4968
- Wer: 31.7506
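A minimal transcription sketch with the `transformers` pipeline (assumes the processor files were saved with the model):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="TheRains/cv9-special-batch8-tiny")
print(asr("sample.wav")["text"])  # path to a local Indonesian audio clip
```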
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.6281 | 0.97 | 1000 | 0.5817 | 37.6950 |
| 0.4018 | 1.94 | 2000 | 0.5157 | 34.2121 |
| 0.2914 | 2.9 | 3000 | 0.4980 | 32.4960 |
| 0.2078 | 3.87 | 4000 | 0.4968 | 31.7506 |
| 0.1925 | 4.84 | 5000 | 0.4986 | 31.8749 |
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
George-Ogden/gptr2-nano-with-momentum-without-weight-decay
|
George-Ogden
| 2023-08-06T09:28:17Z | 31 | 1 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"research",
"en",
"dataset:wikipedia",
"license:mit",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | null | 2023-08-05T13:59:29Z |
---
license: mit
datasets:
- wikipedia
language:
- en
tags:
- research
---
This model is significantly undertrained and designed for research purposes only.
For use in transformers:
```python
from transformers import AutoTokenizer, GPT2Model
import torch.nn as nn
import torch
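# LayerNorm variant that rescales by the root-mean-square (no mean subtraction)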
class RMSLayerNorm(nn.Module):
def __init__(self, normalized_shape, eps=1e-8, affine=True):
super(RMSLayerNorm, self).__init__()
self.normalized_shape = normalized_shape
self.eps = eps
self.affine = affine
if self.affine:
self.weight = nn.Parameter(torch.ones(()))
else:
self.register_parameter('weight', None)
self.register_parameter('bias', None)
def forward(self, x):
rms = torch.sqrt(torch.mean(x**2, dim=-1, keepdim=True) + self.eps)
x_normalized = x / rms
if self.affine:
x_normalized = x_normalized * self.weight
return x_normalized
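# Recursively swap every nn.LayerNorm in the model for RMSLayerNorm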
def replace(model):
for name, child in model.named_children():
if isinstance(child, nn.modules.normalization.LayerNorm):
setattr(model, name, RMSLayerNorm(child.normalized_shape, eps=child.eps, affine=True))
else:
replace(child)
return model
class GPTR2Model(GPT2Model):
def __init__(self, config):
super().__init__(config)
replace(self)
model = GPTR2Model.from_pretrained("George-Ogden/gptr2-nano-with-momentum-without-weight-decay")
tokenizer = AutoTokenizer.from_pretrained("gpt2")
```
For more details and example usage, see https://github.com/George-Ogden/residual-streams
|
George-Ogden/gptr2-nano-with-momentum-with-weight-decay
|
George-Ogden
| 2023-08-06T09:27:54Z | 40 | 1 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"research",
"en",
"dataset:wikipedia",
"license:mit",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | null | 2023-08-01T12:26:52Z |
---
license: mit
datasets:
- wikipedia
language:
- en
tags:
- research
---
This model is significantly undertrained and designed for research purposes only.
For use in transformers:
```python
from transformers import AutoTokenizer, GPT2Model
import torch.nn as nn
import torch
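# LayerNorm variant that rescales by the root-mean-square (no mean subtraction)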
class RMSLayerNorm(nn.Module):
def __init__(self, normalized_shape, eps=1e-8, affine=True):
super(RMSLayerNorm, self).__init__()
self.normalized_shape = normalized_shape
self.eps = eps
self.affine = affine
if self.affine:
self.weight = nn.Parameter(torch.ones(()))
else:
self.register_parameter('weight', None)
self.register_parameter('bias', None)
def forward(self, x):
rms = torch.sqrt(torch.mean(x**2, dim=-1, keepdim=True) + self.eps)
x_normalized = x / rms
if self.affine:
x_normalized = x_normalized * self.weight
return x_normalized
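# Recursively swap every nn.LayerNorm in the model for RMSLayerNorm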
def replace(model):
for name, child in model.named_children():
if isinstance(child, nn.modules.normalization.LayerNorm):
setattr(model, name, RMSLayerNorm(child.normalized_shape, eps=child.eps, affine=True))
else:
replace(child)
return model
class GPTR2Model(GPT2Model):
def __init__(self, config):
super().__init__(config)
replace(self)
model = GPTR2Model.from_pretrained("George-Ogden/gptr2-nano-with-momentum-with-weight-decay")
tokenizer = AutoTokenizer.from_pretrained("gpt2")
```
For more details and example usage, see https://github.com/George-Ogden/residual-streams
|
George-Ogden/gptr2-nano-without-momentum-without-weight-decay
|
George-Ogden
| 2023-08-06T09:26:49Z | 32 | 1 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"research",
"en",
"dataset:wikipedia",
"license:mit",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | null | 2023-08-05T14:07:14Z |
---
license: mit
datasets:
- wikipedia
language:
- en
tags:
- research
---
This model is significantly undertrained and designed for research purposes only.
For use in transformers:
```python
from transformers import AutoTokenizer, GPT2Model
import torch.nn as nn
import torch
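# LayerNorm variant that rescales by the root-mean-square (no mean subtraction)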
class RMSLayerNorm(nn.Module):
def __init__(self, normalized_shape, eps=1e-8, affine=True):
super(RMSLayerNorm, self).__init__()
self.normalized_shape = normalized_shape
self.eps = eps
self.affine = affine
if self.affine:
self.weight = nn.Parameter(torch.ones(()))
else:
self.register_parameter('weight', None)
self.register_parameter('bias', None)
def forward(self, x):
rms = torch.sqrt(torch.mean(x**2, dim=-1, keepdim=True) + self.eps)
x_normalized = x / rms
if self.affine:
x_normalized = x_normalized * self.weight
return x_normalized
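# Recursively swap every nn.LayerNorm in the model for RMSLayerNorm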
def replace(model):
for name, child in model.named_children():
if isinstance(child, nn.modules.normalization.LayerNorm):
setattr(model, name, RMSLayerNorm(child.normalized_shape, eps=child.eps, affine=True))
else:
replace(child)
return model
class GPTR2Model(GPT2Model):
def __init__(self, config):
super().__init__(config)
replace(self)
model = GPTR2Model.from_pretrained("George-Ogden/gptr2-nano-without-momentum-without-weight-decay")
tokenizer = AutoTokenizer.from_pretrained("gpt2")
```
For more details and example usage, see https://github.com/George-Ogden/residual-streams
|
sahayk/news-classification-18-llama-2-7b
|
sahayk
| 2023-08-06T09:25:37Z | 7 | 3 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-08-06T08:14:39Z |
---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
{}
---
# Model Card for News-Classification-18-Llama-2-7B
<!-- Provide a quick summary of what the model is/does. -->
News-Classification-18-Llama-2-7B classifies news articles across 18 categories. It is created by fine-tuning Llama 2 7B on an instruction dataset created using GPT 3.5.
- **Developed by:** Kshitiz Sahay
- **Model type:** Text Classifier
- **Language(s) (NLP):** Python
- **Finetuned from model:** Llama-2-7B
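A minimal inference sketch; the exact instruction format used during fine-tuning is not documented here, so the prompt below is an assumption:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "sahayk/news-classification-18-llama-2-7b"
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Hypothetical prompt; adjust to the instruction template the model was trained on
prompt = "Classify the following news article into one of the 18 categories.\n\nArticle: <text>\n\nCategory:"
inputs = tok(prompt, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=8)
print(tok.decode(out[0], skip_special_tokens=True))
```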
|
mrizalf7/t5-small-indosum-3
|
mrizalf7
| 2023-08-06T09:03:55Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-08-02T15:51:12Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: t5-small-indosum-3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-indosum-3
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4340
- Rouge1: 15.1875
- Rouge2: 11.795
- Rougel: 14.9384
- Rougelsum: 15.0579
- Gen Len: 19.0
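A minimal summarization sketch with the `transformers` pipeline (the input is a placeholder for an Indonesian news article):
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="mrizalf7/t5-small-indosum-3")
article = "..."  # paste an Indonesian news article here
print(summarizer(article, max_length=32, min_length=8)[0]["summary_text"])
```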
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 40
- eval_batch_size: 40
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 0.5356 | 1.0 | 1784 | 0.4647 | 15.1653 | 11.7743 | 14.9193 | 15.0383 | 19.0 |
| 0.4791 | 2.0 | 3568 | 0.4401 | 15.175 | 11.789 | 14.9281 | 15.0459 | 19.0 |
| 0.4698 | 3.0 | 5352 | 0.4340 | 15.1875 | 11.795 | 14.9384 | 15.0579 | 19.0 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.3
- Tokenizers 0.13.3
|
alphin2002/my-bag
|
alphin2002
| 2023-08-06T08:58:41Z | 11 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-08-06T08:54:56Z |
---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### My-Bag Dreambooth model trained by alphin2002 following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: AJCE125
Sample pictures of this concept:

|
ajulkjose/my-thanos
|
ajulkjose
| 2023-08-06T08:47:43Z | 16 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-08-06T08:35:20Z |
---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### My-Thanos Dreambooth model trained by ajulkjose following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: AJCE211
Sample pictures of this concept:

|
Yossshi/ppo-LunarLander-v2
|
Yossshi
| 2023-08-06T08:29:53Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-06T08:29:30Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 243.31 +/- 20.70
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; check the repo's file list):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

checkpoint = load_from_hub(repo_id="Yossshi/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
CyberHarem/guillotine_nikke
|
CyberHarem
| 2023-08-06T08:22:43Z | 0 | 0 | null |
[
"art",
"text-to-image",
"dataset:CyberHarem/guillotine_nikke",
"license:mit",
"region:us"
] |
text-to-image
| 2023-08-06T08:17:11Z |
---
license: mit
datasets:
- CyberHarem/guillotine_nikke
pipeline_tag: text-to-image
tags:
- art
---
# Lora of guillotine_nikke
This model is trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion). And the auto-training framework is maintained by [DeepGHS Team](https://huggingface.co/deepghs).
After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora.
For example, if you want to use the model from step 1500, you need to download `1500/guillotine_nikke.pt` as the embedding and `1500/guillotine_nikke.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters.
**The trigger word is `guillotine_nikke`.**
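A minimal sketch for fetching the step-1500 files with `huggingface_hub` (assuming the per-step files are stored at these paths in this repo):
```python
from huggingface_hub import hf_hub_download

# Download the embedding (.pt) and the LoRA weights (.safetensors) for step 1500
embedding_path = hf_hub_download("CyberHarem/guillotine_nikke", "1500/guillotine_nikke.pt")
lora_path = hf_hub_download("CyberHarem/guillotine_nikke", "1500/guillotine_nikke.safetensors")
```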
These are available steps:
| Steps | bikini | free | nude | Download |
|--------:|:-----------------------------------------|:-------------------------------------|:-----------------------------------------------|:--------------------------------------|
| 1500 |  |  | [<NSFW, click to see>](1500/previews/nude.png) | [Download](1500/guillotine_nikke.zip) |
| 1400 |  |  | [<NSFW, click to see>](1400/previews/nude.png) | [Download](1400/guillotine_nikke.zip) |
| 1300 |  |  | [<NSFW, click to see>](1300/previews/nude.png) | [Download](1300/guillotine_nikke.zip) |
| 1200 |  |  | [<NSFW, click to see>](1200/previews/nude.png) | [Download](1200/guillotine_nikke.zip) |
| 1100 |  |  | [<NSFW, click to see>](1100/previews/nude.png) | [Download](1100/guillotine_nikke.zip) |
| 1000 |  |  | [<NSFW, click to see>](1000/previews/nude.png) | [Download](1000/guillotine_nikke.zip) |
| 900 |  |  | [<NSFW, click to see>](900/previews/nude.png) | [Download](900/guillotine_nikke.zip) |
| 800 |  |  | [<NSFW, click to see>](800/previews/nude.png) | [Download](800/guillotine_nikke.zip) |
| 700 |  |  | [<NSFW, click to see>](700/previews/nude.png) | [Download](700/guillotine_nikke.zip) |
| 600 |  |  | [<NSFW, click to see>](600/previews/nude.png) | [Download](600/guillotine_nikke.zip) |
| 500 |  |  | [<NSFW, click to see>](500/previews/nude.png) | [Download](500/guillotine_nikke.zip) |
| 400 |  |  | [<NSFW, click to see>](400/previews/nude.png) | [Download](400/guillotine_nikke.zip) |
| 300 |  |  | [<NSFW, click to see>](300/previews/nude.png) | [Download](300/guillotine_nikke.zip) |
| 200 |  |  | [<NSFW, click to see>](200/previews/nude.png) | [Download](200/guillotine_nikke.zip) |
| 100 |  |  | [<NSFW, click to see>](100/previews/nude.png) | [Download](100/guillotine_nikke.zip) |
|
yaohuacn/ppo-LunarLander-v2
|
yaohuacn
| 2023-08-06T08:09:49Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-06T08:04:56Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 231.49 +/- 50.11
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; check the repo's file list):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

checkpoint = load_from_hub(repo_id="yaohuacn/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
CyberHarem/sin_nikke
|
CyberHarem
| 2023-08-06T07:58:24Z | 0 | 0 | null |
[
"art",
"text-to-image",
"dataset:CyberHarem/sin_nikke",
"license:mit",
"region:us"
] |
text-to-image
| 2023-08-06T07:52:09Z |
---
license: mit
datasets:
- CyberHarem/sin_nikke
pipeline_tag: text-to-image
tags:
- art
---
# Lora of sin_nikke
This model is trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion). And the auto-training framework is maintained by [DeepGHS Team](https://huggingface.co/deepghs).
After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora.
For example, if you want to use the model from step 1500, you need to download `1500/sin_nikke.pt` as the embedding and `1500/sin_nikke.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters.
**The trigger word is `sin_nikke`.**
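A minimal sketch for fetching the step-1500 files with `huggingface_hub` (assuming the per-step files are stored at these paths in this repo):
```python
from huggingface_hub import hf_hub_download

# Download the embedding (.pt) and the LoRA weights (.safetensors) for step 1500
embedding_path = hf_hub_download("CyberHarem/sin_nikke", "1500/sin_nikke.pt")
lora_path = hf_hub_download("CyberHarem/sin_nikke", "1500/sin_nikke.safetensors")
```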
These are available steps:
| Steps | bikini | free | nude | Download |
|--------:|:-----------------------------------------|:-------------------------------------|:-----------------------------------------------|:-------------------------------|
| 1500 |  |  | [<NSFW, click to see>](1500/previews/nude.png) | [Download](1500/sin_nikke.zip) |
| 1400 |  |  | [<NSFW, click to see>](1400/previews/nude.png) | [Download](1400/sin_nikke.zip) |
| 1300 |  |  | [<NSFW, click to see>](1300/previews/nude.png) | [Download](1300/sin_nikke.zip) |
| 1200 |  |  | [<NSFW, click to see>](1200/previews/nude.png) | [Download](1200/sin_nikke.zip) |
| 1100 |  |  | [<NSFW, click to see>](1100/previews/nude.png) | [Download](1100/sin_nikke.zip) |
| 1000 |  |  | [<NSFW, click to see>](1000/previews/nude.png) | [Download](1000/sin_nikke.zip) |
| 900 |  |  | [<NSFW, click to see>](900/previews/nude.png) | [Download](900/sin_nikke.zip) |
| 800 |  |  | [<NSFW, click to see>](800/previews/nude.png) | [Download](800/sin_nikke.zip) |
| 700 |  |  | [<NSFW, click to see>](700/previews/nude.png) | [Download](700/sin_nikke.zip) |
| 600 |  |  | [<NSFW, click to see>](600/previews/nude.png) | [Download](600/sin_nikke.zip) |
| 500 |  |  | [<NSFW, click to see>](500/previews/nude.png) | [Download](500/sin_nikke.zip) |
| 400 |  |  | [<NSFW, click to see>](400/previews/nude.png) | [Download](400/sin_nikke.zip) |
| 300 |  |  | [<NSFW, click to see>](300/previews/nude.png) | [Download](300/sin_nikke.zip) |
| 200 |  |  | [<NSFW, click to see>](200/previews/nude.png) | [Download](200/sin_nikke.zip) |
| 100 |  |  | [<NSFW, click to see>](100/previews/nude.png) | [Download](100/sin_nikke.zip) |
|
RedRayz/MyVAE
|
RedRayz
| 2023-08-06T07:48:11Z | 0 | 3 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-08-06T07:37:28Z |
---
license: creativeml-openrail-m
---
|
CyberHarem/frima_nikke
|
CyberHarem
| 2023-08-06T07:31:34Z | 0 | 0 | null |
[
"art",
"text-to-image",
"dataset:CyberHarem/frima_nikke",
"license:mit",
"region:us"
] |
text-to-image
| 2023-08-06T07:27:26Z |
---
license: mit
datasets:
- CyberHarem/frima_nikke
pipeline_tag: text-to-image
tags:
- art
---
# Lora of frima_nikke
This model was trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion); the auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs).
After downloading the `pt` and `safetensors` files for a given step, use them together: the `pt` file is loaded as an embedding, while the `safetensors` file is loaded as a LoRA.
For example, to use the model from step 1500, download `1500/frima_nikke.pt` as the embedding and `1500/frima_nikke.safetensors` as the LoRA. With both files loaded together, you can generate images of the character.
**The trigger word is `frima_nikke`.**
The available steps are:
| Steps | bikini | free | nude | Download |
|--------:|:-----------------------------------------|:-------------------------------------|:-----------------------------------------------|:---------------------------------|
| 1500 |  |  | [<NSFW, click to see>](1500/previews/nude.png) | [Download](1500/frima_nikke.zip) |
| 1400 |  |  | [<NSFW, click to see>](1400/previews/nude.png) | [Download](1400/frima_nikke.zip) |
| 1300 |  |  | [<NSFW, click to see>](1300/previews/nude.png) | [Download](1300/frima_nikke.zip) |
| 1200 |  |  | [<NSFW, click to see>](1200/previews/nude.png) | [Download](1200/frima_nikke.zip) |
| 1100 |  |  | [<NSFW, click to see>](1100/previews/nude.png) | [Download](1100/frima_nikke.zip) |
| 1000 |  |  | [<NSFW, click to see>](1000/previews/nude.png) | [Download](1000/frima_nikke.zip) |
| 900 |  |  | [<NSFW, click to see>](900/previews/nude.png) | [Download](900/frima_nikke.zip) |
| 800 |  |  | [<NSFW, click to see>](800/previews/nude.png) | [Download](800/frima_nikke.zip) |
| 700 |  |  | [<NSFW, click to see>](700/previews/nude.png) | [Download](700/frima_nikke.zip) |
| 600 |  |  | [<NSFW, click to see>](600/previews/nude.png) | [Download](600/frima_nikke.zip) |
| 500 |  |  | [<NSFW, click to see>](500/previews/nude.png) | [Download](500/frima_nikke.zip) |
| 400 |  |  | [<NSFW, click to see>](400/previews/nude.png) | [Download](400/frima_nikke.zip) |
| 300 |  |  | [<NSFW, click to see>](300/previews/nude.png) | [Download](300/frima_nikke.zip) |
| 200 |  |  | [<NSFW, click to see>](200/previews/nude.png) | [Download](200/frima_nikke.zip) |
| 100 |  |  | [<NSFW, click to see>](100/previews/nude.png) | [Download](100/frima_nikke.zip) |
|
Saya3091/myLyCORIS
|
Saya3091
| 2023-08-06T07:15:47Z | 0 | 79 | null |
[
"region:us"
] | null | 2023-06-14T15:21:37Z |
---
{}
---
For learning purposes only; do not use this for any commercial activity, and no commercial use is authorized. If a model comes without usage notes, try searching for @Saya on civitai to find the descriptions of past models.
|
principle/lukatuning1
|
principle
| 2023-08-06T07:02:14Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-06T07:02:11Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
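For reference, a minimal sketch of the equivalent `BitsAndBytesConfig` in `transformers` (the base model id is a placeholder; this adapter must be applied to the base it was trained on):
```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Mirrors the values listed above; "<base-model>" is a placeholder.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained("<base-model>", quantization_config=bnb_config)
```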
### Framework versions
- PEFT 0.5.0.dev0
|
Naruke/LunarRadar-PPO
|
Naruke
| 2023-08-06T06:46:41Z | 0 | 0 | null |
[
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-06T06:11:18Z |
---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -75.02 +/- 111.93
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo'
'seed': 1
'torch_deterministic': True
'cuda': True
'track': False
'wandb_project_name': 'cleanRL'
'wandb_entity': None
'capture_video': False
'env_id': 'LunarLander-v2'
'total_timesteps': 100000
'learning_rate': 0.00025
'num_envs': 8
'num_steps': 128
'anneal_lr': True
'gae': True
'gamma': 0.99
'gae_lambda': 0.95
'num_minibatches': 4
'update_epochs': 12
'norm_adv': True
'clip_coef': 0.2
'clip_vloss': True
'ent_coef': 0.01
'vf_coef': 0.5
'max_grad_norm': 0.5
'target_kl': None
'repo_id': 'Naruke/LunarRadar-PPO'
'batch_size': 1024
'minibatch_size': 256}
```
|
CyberHarem/centi_nikke
|
CyberHarem
| 2023-08-06T06:44:12Z | 0 | 0 | null |
[
"art",
"text-to-image",
"dataset:CyberHarem/centi_nikke",
"license:mit",
"region:us"
] |
text-to-image
| 2023-08-06T06:40:59Z |
---
license: mit
datasets:
- CyberHarem/centi_nikke
pipeline_tag: text-to-image
tags:
- art
---
# Lora of centi_nikke
This model was trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion); the auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs).
After downloading the `pt` and `safetensors` files for a given step, use them together: the `pt` file is loaded as an embedding, while the `safetensors` file is loaded as a LoRA.
For example, to use the model from step 1500, download `1500/centi_nikke.pt` as the embedding and `1500/centi_nikke.safetensors` as the LoRA. With both files loaded together, you can generate images of the character.
**The trigger word is `centi_nikke`.**
The available steps are:
| Steps | pattern_1 | bikini | free | nude | Download |
|--------:|:----------------------------------------------------|:-------------------------------------------------|:-------------------------------------|:-----------------------------------------------|:---------------------------------|
| 1500 | [<NSFW, click to see>](1500/previews/pattern_1.png) | [<NSFW, click to see>](1500/previews/bikini.png) |  | [<NSFW, click to see>](1500/previews/nude.png) | [Download](1500/centi_nikke.zip) |
| 1400 | [<NSFW, click to see>](1400/previews/pattern_1.png) | [<NSFW, click to see>](1400/previews/bikini.png) |  | [<NSFW, click to see>](1400/previews/nude.png) | [Download](1400/centi_nikke.zip) |
| 1300 | [<NSFW, click to see>](1300/previews/pattern_1.png) | [<NSFW, click to see>](1300/previews/bikini.png) |  | [<NSFW, click to see>](1300/previews/nude.png) | [Download](1300/centi_nikke.zip) |
| 1200 | [<NSFW, click to see>](1200/previews/pattern_1.png) | [<NSFW, click to see>](1200/previews/bikini.png) |  | [<NSFW, click to see>](1200/previews/nude.png) | [Download](1200/centi_nikke.zip) |
| 1100 | [<NSFW, click to see>](1100/previews/pattern_1.png) | [<NSFW, click to see>](1100/previews/bikini.png) |  | [<NSFW, click to see>](1100/previews/nude.png) | [Download](1100/centi_nikke.zip) |
| 1000 | [<NSFW, click to see>](1000/previews/pattern_1.png) | [<NSFW, click to see>](1000/previews/bikini.png) |  | [<NSFW, click to see>](1000/previews/nude.png) | [Download](1000/centi_nikke.zip) |
| 900 | [<NSFW, click to see>](900/previews/pattern_1.png) | [<NSFW, click to see>](900/previews/bikini.png) |  | [<NSFW, click to see>](900/previews/nude.png) | [Download](900/centi_nikke.zip) |
| 800 | [<NSFW, click to see>](800/previews/pattern_1.png) | [<NSFW, click to see>](800/previews/bikini.png) |  | [<NSFW, click to see>](800/previews/nude.png) | [Download](800/centi_nikke.zip) |
| 700 | [<NSFW, click to see>](700/previews/pattern_1.png) | [<NSFW, click to see>](700/previews/bikini.png) |  | [<NSFW, click to see>](700/previews/nude.png) | [Download](700/centi_nikke.zip) |
| 600 | [<NSFW, click to see>](600/previews/pattern_1.png) | [<NSFW, click to see>](600/previews/bikini.png) |  | [<NSFW, click to see>](600/previews/nude.png) | [Download](600/centi_nikke.zip) |
| 500 | [<NSFW, click to see>](500/previews/pattern_1.png) | [<NSFW, click to see>](500/previews/bikini.png) |  | [<NSFW, click to see>](500/previews/nude.png) | [Download](500/centi_nikke.zip) |
| 400 | [<NSFW, click to see>](400/previews/pattern_1.png) | [<NSFW, click to see>](400/previews/bikini.png) |  | [<NSFW, click to see>](400/previews/nude.png) | [Download](400/centi_nikke.zip) |
| 300 | [<NSFW, click to see>](300/previews/pattern_1.png) | [<NSFW, click to see>](300/previews/bikini.png) |  | [<NSFW, click to see>](300/previews/nude.png) | [Download](300/centi_nikke.zip) |
| 200 | [<NSFW, click to see>](200/previews/pattern_1.png) | [<NSFW, click to see>](200/previews/bikini.png) |  | [<NSFW, click to see>](200/previews/nude.png) | [Download](200/centi_nikke.zip) |
| 100 | [<NSFW, click to see>](100/previews/pattern_1.png) | [<NSFW, click to see>](100/previews/bikini.png) |  | [<NSFW, click to see>](100/previews/nude.png) | [Download](100/centi_nikke.zip) |
|
openerotica/open_llama_3b_v2-8k-GPTQ
|
openerotica
| 2023-08-06T06:28:49Z | 8 | 3 |
transformers
|
[
"transformers",
"llama",
"text-generation",
"dataset:tiiuae/falcon-refinedweb",
"dataset:bigcode/starcoderdata",
"dataset:togethercomputer/RedPajama-Data-1T",
"arxiv:2302.13971",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-08-03T22:43:02Z |
---
license: apache-2.0
datasets:
- tiiuae/falcon-refinedweb
- bigcode/starcoderdata
- togethercomputer/RedPajama-Data-1T
---
# OpenLLaMA: An Open Reproduction of LLaMA
**TL;DR**: we are releasing our public preview of OpenLLaMA, a permissively licensed open source reproduction of Meta AI's LLaMA. We are releasing a series of 3B, 7B and 13B models trained on different data mixtures. Our model weights can serve as a drop-in replacement for LLaMA in existing implementations.
In this repo, we present a permissively licensed open source reproduction of Meta AI's [LLaMA](https://ai.facebook.com/blog/large-language-model-llama-meta-ai/) large language model. We are releasing a series of 3B, 7B and 13B models trained on 1T tokens. We provide PyTorch and JAX weights of pre-trained OpenLLaMA models, as well as evaluation results and comparison against the original LLaMA models. The v2 model is better than the old v1 model trained on a different data mixture. Please see the [project homepage of OpenLLaMA](https://github.com/openlm-research/open_llama) for more details.
## Weights Release, License and Usage
We release the weights in two formats: an EasyLM format to be used with our [EasyLM framework](https://github.com/young-geng/EasyLM), and a PyTorch format to be used with the [Hugging Face transformers](https://huggingface.co/docs/transformers/index) library. Both our training framework EasyLM and the checkpoint weights are licensed permissively under the Apache 2.0 license.
### Loading the Weights with Hugging Face Transformers
Preview checkpoints can be directly loaded from Hugging Face Hub. **Please note that it is advised to avoid using the Hugging Face fast tokenizer for now, as we’ve observed that** [**the auto-converted fast tokenizer sometimes gives incorrect tokenizations**](https://github.com/huggingface/transformers/issues/24233)**.** This can be achieved by directly using the `LlamaTokenizer` class, or passing in the `use_fast=False` option for the `AutoTokenizer` class. See the following example for usage.
```python
import torch
from transformers import LlamaTokenizer, LlamaForCausalLM

## v2 models
model_path = 'openlm-research/open_llama_3b_v2'
# model_path = 'openlm-research/open_llama_7b_v2'

## v1 models
# model_path = 'openlm-research/open_llama_3b'
# model_path = 'openlm-research/open_llama_7b'
# model_path = 'openlm-research/open_llama_13b'

tokenizer = LlamaTokenizer.from_pretrained(model_path)
model = LlamaForCausalLM.from_pretrained(
    model_path, torch_dtype=torch.float16, device_map='auto',
)

prompt = 'Q: What is the largest animal?\nA:'
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
generation_output = model.generate(
    input_ids=input_ids, max_new_tokens=32
)
print(tokenizer.decode(generation_output[0]))
```
For more advanced usage, please follow the [transformers LLaMA documentation](https://huggingface.co/docs/transformers/main/model_doc/llama).
### Evaluating with LM-Eval-Harness
The model can be evaluated with [lm-eval-harness](https://github.com/EleutherAI/lm-evaluation-harness). However, due to the aforementioned tokenizer issue, we need to avoid using the fast tokenizer to obtain the correct results. This can be achieved by passing in `use_fast=False` to [this part of lm-eval-harness](https://github.com/EleutherAI/lm-evaluation-harness/blob/4b701e228768052cfae9043dca13e82052ca5eea/lm_eval/models/huggingface.py#LL313C9-L316C10), as shown in the example below:
```python
tokenizer = self.AUTO_TOKENIZER_CLASS.from_pretrained(
    pretrained if tokenizer is None else tokenizer,
    revision=revision + ("/" + subfolder if subfolder is not None else ""),
    use_fast=False
)
```
### Loading the Weights with EasyLM
To use the weights in our EasyLM framework, please refer to the [LLaMA documentation of EasyLM](https://github.com/young-geng/EasyLM/blob/main/docs/llama.md). Note that unlike the original LLaMA model, our OpenLLaMA tokenizer and weights are trained completely from scratch, so it is no longer necessary to obtain the original LLaMA tokenizer and weights.
## Dataset and Training
The v1 models are trained on the [RedPajama dataset](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T). The v2 models are trained on a mixture of the [Falcon refined-web dataset](https://huggingface.co/datasets/tiiuae/falcon-refinedweb), the [StarCoder dataset](https://huggingface.co/datasets/bigcode/starcoderdata) and the wikipedia, arxiv, book and stackexchange parts of the [RedPajama dataset](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T). We follow exactly the same preprocessing steps and training hyperparameters as the original LLaMA paper, including model architecture, context length, training steps, learning rate schedule, and optimizer. The only difference between our setting and the original one is the dataset used: OpenLLaMA employs open datasets rather than the one utilized by the original LLaMA.
We train the models on cloud TPU-v4s using [EasyLM](https://github.com/young-geng/EasyLM), a JAX based training pipeline we developed for training and fine-tuning large language models. We employ a combination of normal data parallelism and [fully sharded data parallelism](https://engineering.fb.com/2021/07/15/open-source/fsdp/) (also known as ZeRO stage 3) to balance the training throughput and memory usage. Overall we reach a throughput of over 2200 tokens / second / TPU-v4 chip for our 7B model.
## Evaluation
We evaluated OpenLLaMA on a wide range of tasks using [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness). The LLaMA results are generated by running the original LLaMA model on the same evaluation metrics. We note that our results for the LLaMA model differ slightly from the original LLaMA paper, which we believe is a result of different evaluation protocols. Similar differences have been reported in [this issue of lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness/issues/443). Additionally, we present the results of GPT-J, a 6B parameter model trained on the [Pile](https://pile.eleuther.ai/) dataset by [EleutherAI](https://www.eleuther.ai/).
The original LLaMA model was trained for 1 trillion tokens and GPT-J was trained for 500 billion tokens. We present the results in the table below. OpenLLaMA exhibits comparable performance to the original LLaMA and GPT-J across a majority of tasks, and outperforms them in some tasks.
| **Task/Metric** | GPT-J 6B | LLaMA 7B | LLaMA 13B | OpenLLaMA 3Bv2 | OpenLLaMA 7Bv2 | OpenLLaMA 3B | OpenLLaMA 7B | OpenLLaMA 13B |
| ---------------------- | -------- | -------- | --------- | -------------- | -------------- | ------------ | ------------ | ------------- |
| anli_r1/acc | 0.32 | 0.35 | 0.35 | 0.33 | 0.34 | 0.33 | 0.33 | 0.33 |
| anli_r2/acc | 0.34 | 0.34 | 0.36 | 0.36 | 0.35 | 0.32 | 0.36 | 0.33 |
| anli_r3/acc | 0.35 | 0.37 | 0.39 | 0.38 | 0.39 | 0.35 | 0.38 | 0.40 |
| arc_challenge/acc | 0.34 | 0.39 | 0.44 | 0.34 | 0.39 | 0.34 | 0.37 | 0.41 |
| arc_challenge/acc_norm | 0.37 | 0.41 | 0.44 | 0.36 | 0.41 | 0.37 | 0.38 | 0.44 |
| arc_easy/acc | 0.67 | 0.68 | 0.75 | 0.68 | 0.73 | 0.69 | 0.72 | 0.75 |
| arc_easy/acc_norm | 0.62 | 0.52 | 0.59 | 0.63 | 0.70 | 0.65 | 0.68 | 0.70 |
| boolq/acc | 0.66 | 0.75 | 0.71 | 0.66 | 0.72 | 0.68 | 0.71 | 0.75 |
| hellaswag/acc | 0.50 | 0.56 | 0.59 | 0.52 | 0.56 | 0.49 | 0.53 | 0.56 |
| hellaswag/acc_norm | 0.66 | 0.73 | 0.76 | 0.70 | 0.75 | 0.67 | 0.72 | 0.76 |
| openbookqa/acc | 0.29 | 0.29 | 0.31 | 0.26 | 0.30 | 0.27 | 0.30 | 0.31 |
| openbookqa/acc_norm | 0.38 | 0.41 | 0.42 | 0.38 | 0.41 | 0.40 | 0.40 | 0.43 |
| piqa/acc | 0.75 | 0.78 | 0.79 | 0.77 | 0.79 | 0.75 | 0.76 | 0.77 |
| piqa/acc_norm | 0.76 | 0.78 | 0.79 | 0.78 | 0.80 | 0.76 | 0.77 | 0.79 |
| record/em | 0.88 | 0.91 | 0.92 | 0.87 | 0.89 | 0.88 | 0.89 | 0.91 |
| record/f1 | 0.89 | 0.91 | 0.92 | 0.88 | 0.89 | 0.89 | 0.90 | 0.91 |
| rte/acc | 0.54 | 0.56 | 0.69 | 0.55 | 0.57 | 0.58 | 0.60 | 0.64 |
| truthfulqa_mc/mc1 | 0.20 | 0.21 | 0.25 | 0.22 | 0.23 | 0.22 | 0.23 | 0.25 |
| truthfulqa_mc/mc2 | 0.36 | 0.34 | 0.40 | 0.35 | 0.35 | 0.35 | 0.35 | 0.38 |
| wic/acc | 0.50 | 0.50 | 0.50 | 0.50 | 0.50 | 0.48 | 0.51 | 0.47 |
| winogrande/acc | 0.64 | 0.68 | 0.70 | 0.63 | 0.66 | 0.62 | 0.67 | 0.70 |
| Average | 0.52 | 0.55 | 0.57 | 0.53 | 0.56 | 0.53 | 0.55 | 0.57 |
We removed the tasks CB and WSC from our benchmark, as our model performs suspiciously well on these two tasks. We hypothesize that there could be benchmark data contamination in the training set.
## Contact
We would love to get feedback from the community. If you have any questions, please open an issue or contact us.
OpenLLaMA is developed by:
[Xinyang Geng](https://young-geng.xyz/)* and [Hao Liu](https://www.haoliu.site/)* from Berkeley AI Research.
*Equal Contribution
## Acknowledgment
We thank the [Google TPU Research Cloud](https://sites.research.google/trc/about/) program for providing part of the computation resources. We'd like to especially thank Jonathan Caton from TPU Research Cloud for helping us organize compute resources, and Rafi Witten from the Google Cloud team and James Bradbury from the Google JAX team for helping us optimize our training throughput. We'd also like to thank Charlie Snell, Gautier Izacard, Eric Wallace, Lianmin Zheng and our user community for the discussions and feedback.
The OpenLLaMA 13B v1 model was trained in collaboration with [Stability AI](https://stability.ai/), and we thank Stability AI for providing the computation resources. We'd like to especially thank David Ha and Shivanshu Purohit for coordinating the logistics and providing engineering support.
## Reference
If you found OpenLLaMA useful in your research or applications, please cite using the following BibTeX:
```
@software{openlm2023openllama,
author = {Geng, Xinyang and Liu, Hao},
title = {OpenLLaMA: An Open Reproduction of LLaMA},
month = May,
year = 2023,
url = {https://github.com/openlm-research/open_llama}
}
```
```
@software{together2023redpajama,
author = {Together Computer},
title = {RedPajama-Data: An Open Source Recipe to Reproduce LLaMA training dataset},
month = April,
year = 2023,
url = {https://github.com/togethercomputer/RedPajama-Data}
}
```
```
@article{touvron2023llama,
title={Llama: Open and efficient foundation language models},
author={Touvron, Hugo and Lavril, Thibaut and Izacard, Gautier and Martinet, Xavier and Lachaux, Marie-Anne and Lacroix, Timoth{\'e}e and Rozi{\`e}re, Baptiste and Goyal, Naman and Hambro, Eric and Azhar, Faisal and others},
journal={arXiv preprint arXiv:2302.13971},
year={2023}
}
```
|
CyberHarem/sakura_nikke
|
CyberHarem
| 2023-08-06T06:22:24Z | 0 | 0 | null |
[
"art",
"text-to-image",
"dataset:CyberHarem/sakura_nikke",
"license:mit",
"region:us"
] |
text-to-image
| 2023-08-06T06:18:39Z |
---
license: mit
datasets:
- CyberHarem/sakura_nikke
pipeline_tag: text-to-image
tags:
- art
---
# Lora of sakura_nikke
This model was trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion); the auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs).
After downloading the `pt` and `safetensors` files for a given step, use them together: the `pt` file is loaded as an embedding, while the `safetensors` file is loaded as a LoRA.
For example, to use the model from step 1500, download `1500/sakura_nikke.pt` as the embedding and `1500/sakura_nikke.safetensors` as the LoRA. With both files loaded together, you can generate images of the character.
**The trigger word is `sakura_nikke`.**
The available steps are:
| Steps | pattern_1 | bikini | free | nude | Download |
|--------:|:-----------------------------------------------|:-----------------------------------------|:-------------------------------------|:-----------------------------------------------|:----------------------------------|
| 1500 |  |  |  | [<NSFW, click to see>](1500/previews/nude.png) | [Download](1500/sakura_nikke.zip) |
| 1400 |  |  |  | [<NSFW, click to see>](1400/previews/nude.png) | [Download](1400/sakura_nikke.zip) |
| 1300 |  |  |  | [<NSFW, click to see>](1300/previews/nude.png) | [Download](1300/sakura_nikke.zip) |
| 1200 |  |  |  | [<NSFW, click to see>](1200/previews/nude.png) | [Download](1200/sakura_nikke.zip) |
| 1100 |  |  |  | [<NSFW, click to see>](1100/previews/nude.png) | [Download](1100/sakura_nikke.zip) |
| 1000 |  |  |  | [<NSFW, click to see>](1000/previews/nude.png) | [Download](1000/sakura_nikke.zip) |
| 900 |  |  |  | [<NSFW, click to see>](900/previews/nude.png) | [Download](900/sakura_nikke.zip) |
| 800 |  |  |  | [<NSFW, click to see>](800/previews/nude.png) | [Download](800/sakura_nikke.zip) |
| 700 |  |  |  | [<NSFW, click to see>](700/previews/nude.png) | [Download](700/sakura_nikke.zip) |
| 600 |  |  |  | [<NSFW, click to see>](600/previews/nude.png) | [Download](600/sakura_nikke.zip) |
| 500 |  |  |  | [<NSFW, click to see>](500/previews/nude.png) | [Download](500/sakura_nikke.zip) |
| 400 |  |  |  | [<NSFW, click to see>](400/previews/nude.png) | [Download](400/sakura_nikke.zip) |
| 300 |  |  |  | [<NSFW, click to see>](300/previews/nude.png) | [Download](300/sakura_nikke.zip) |
| 200 |  |  |  | [<NSFW, click to see>](200/previews/nude.png) | [Download](200/sakura_nikke.zip) |
| 100 |  |  |  | [<NSFW, click to see>](100/previews/nude.png) | [Download](100/sakura_nikke.zip) |
|
TheRains/cv9-special-batch12-base
|
TheRains
| 2023-08-06T06:20:26Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"whisper-event",
"generated_from_trainer",
"id",
"dataset:mozilla-foundation/common_voice_9_0",
"base_model:openai/whisper-base",
"base_model:finetune:openai/whisper-base",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-08-06T04:59:17Z |
---
language:
- id
license: apache-2.0
base_model: openai/whisper-base
tags:
- whisper-event
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_9_0
metrics:
- wer
model-index:
- name: Whisper Base Indonesian
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: mozilla-foundation/common_voice_9_0 id
type: mozilla-foundation/common_voice_9_0
config: id
split: test
args: id
metrics:
- name: Wer
type: wer
value: 23.77271681619508
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Base Indonesian
This model is a fine-tuned version of [openai/whisper-base](https://huggingface.co/openai/whisper-base) on the mozilla-foundation/common_voice_9_0 id dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4079
- Wer: 23.7727
## Model description
More information needed
## Intended uses & limitations
More information needed
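Pending a fuller description, here is a minimal inference sketch with the `transformers` ASR pipeline (the audio path is a placeholder):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="TheRains/cv9-special-batch12-base")
print(asr("contoh_audio.wav")["text"])  # placeholder path to an Indonesian audio file
```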
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 12
- eval_batch_size: 6
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.3536 | 1.45 | 1000 | 0.4083 | 26.1882 |
| 0.2171 | 2.9 | 2000 | 0.3794 | 24.4813 |
| 0.0604 | 4.35 | 3000 | 0.3954 | 24.5595 |
| 0.0531 | 5.81 | 4000 | 0.4079 | 23.7727 |
| 0.0245 | 7.26 | 5000 | 0.4240 | 23.9291 |
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
sohailsiddiqui/marian-finetuned-kde4-en-to-fr
|
sohailsiddiqui
| 2023-08-06T06:05:45Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"marian",
"text2text-generation",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-08-05T21:00:48Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: sohailsiddiq99/marian-finetuned-kde4-en-to-fr
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# sohailsiddiq99/marian-finetuned-kde4-en-to-fr
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.0584
- Validation Loss: 0.8824
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
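Pending a fuller description, a minimal translation sketch (it assumes the repository hosts TensorFlow weights, as the `tf` tag suggests):
```python
from transformers import AutoTokenizer, TFAutoModelForSeq2SeqLM

model_id = "sohailsiddiqui/marian-finetuned-kde4-en-to-fr"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForSeq2SeqLM.from_pretrained(model_id)

inputs = tokenizer("Default to expanded threads", return_tensors="tf")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```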
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': 17733, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 1.0584 | 0.8824 | 0 |
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.12.0
- Datasets 2.14.0
- Tokenizers 0.11.0
|
MayoChacon/Paisaje
|
MayoChacon
| 2023-08-06T06:00:22Z | 0 | 0 | null |
[
"arxiv:1910.09700",
"license:bigscience-openrail-m",
"region:us"
] | null | 2023-08-06T05:56:46Z |
---
license: bigscience-openrail-m
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
asmitha26/falcon-medical
|
asmitha26
| 2023-08-06T05:59:22Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-06T05:24:11Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
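As a loading sketch (the base model id is an assumption inferred from the repo name; the adapter must match the base it was trained on):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base_id = "tiiuae/falcon-7b"  # assumption; replace with the actual base model
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)
base = AutoModelForCausalLM.from_pretrained(
    base_id, quantization_config=bnb_config, device_map="auto", trust_remote_code=True
)
model = PeftModel.from_pretrained(base, "asmitha26/falcon-medical")
tokenizer = AutoTokenizer.from_pretrained(base_id)
```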
### Framework versions
- PEFT 0.5.0.dev0
|
srikanthsri/SRIKANTH-Falcon-finetune
|
srikanthsri
| 2023-08-06T05:18:46Z | 3 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-06T04:59:09Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.5.0.dev0
|
CyberHarem/chokai_azurlane
|
CyberHarem
| 2023-08-06T05:09:03Z | 0 | 0 | null |
[
"art",
"text-to-image",
"dataset:CyberHarem/chokai_azurlane",
"license:mit",
"region:us"
] |
text-to-image
| 2023-08-06T05:05:42Z |
---
license: mit
datasets:
- CyberHarem/chokai_azurlane
pipeline_tag: text-to-image
tags:
- art
---
# Lora of chokai_azurlane
This model was trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion); the auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs).
After downloading the `pt` and `safetensors` files for a given step, use them together: the `pt` file is loaded as an embedding, while the `safetensors` file is loaded as a LoRA.
For example, to use the model from step 1500, download `1500/chokai_azurlane.pt` as the embedding and `1500/chokai_azurlane.safetensors` as the LoRA. With both files loaded together, you can generate images of the character.
**The trigger word is `chokai_azurlane`.**
The available steps are:
| Steps | bikini | free | nude | Download |
|--------:|:-----------------------------------------|:-------------------------------------|:-----------------------------------------------|:-------------------------------------|
| 1500 |  |  | [<NSFW, click to see>](1500/previews/nude.png) | [Download](1500/chokai_azurlane.zip) |
| 1400 |  |  | [<NSFW, click to see>](1400/previews/nude.png) | [Download](1400/chokai_azurlane.zip) |
| 1300 |  |  | [<NSFW, click to see>](1300/previews/nude.png) | [Download](1300/chokai_azurlane.zip) |
| 1200 |  |  | [<NSFW, click to see>](1200/previews/nude.png) | [Download](1200/chokai_azurlane.zip) |
| 1100 |  |  | [<NSFW, click to see>](1100/previews/nude.png) | [Download](1100/chokai_azurlane.zip) |
| 1000 |  |  | [<NSFW, click to see>](1000/previews/nude.png) | [Download](1000/chokai_azurlane.zip) |
| 900 |  |  | [<NSFW, click to see>](900/previews/nude.png) | [Download](900/chokai_azurlane.zip) |
| 800 |  |  | [<NSFW, click to see>](800/previews/nude.png) | [Download](800/chokai_azurlane.zip) |
| 700 |  |  | [<NSFW, click to see>](700/previews/nude.png) | [Download](700/chokai_azurlane.zip) |
| 600 |  |  | [<NSFW, click to see>](600/previews/nude.png) | [Download](600/chokai_azurlane.zip) |
| 500 |  |  | [<NSFW, click to see>](500/previews/nude.png) | [Download](500/chokai_azurlane.zip) |
| 400 |  |  | [<NSFW, click to see>](400/previews/nude.png) | [Download](400/chokai_azurlane.zip) |
| 300 |  |  | [<NSFW, click to see>](300/previews/nude.png) | [Download](300/chokai_azurlane.zip) |
| 200 |  |  | [<NSFW, click to see>](200/previews/nude.png) | [Download](200/chokai_azurlane.zip) |
| 100 |  |  | [<NSFW, click to see>](100/previews/nude.png) | [Download](100/chokai_azurlane.zip) |
|
DrishtiSharma/speecht5_finetuned_voxpopuli_es_20k_steps_16_test1
|
DrishtiSharma
| 2023-08-06T04:58:25Z | 86 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"speecht5",
"text-to-audio",
"generated_from_trainer",
"text-to-speech",
"dataset:voxpopuli",
"base_model:microsoft/speecht5_tts",
"base_model:finetune:microsoft/speecht5_tts",
"license:mit",
"endpoints_compatible",
"region:us"
] |
text-to-speech
| 2023-08-02T14:13:00Z |
---
license: mit
base_model: microsoft/speecht5_tts
tags:
- generated_from_trainer
- text-to-speech
datasets:
- voxpopuli
model-index:
- name: speecht5_finetuned_voxpopuli_es_20k_steps_16_test1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# speecht5_finetuned_voxpopuli_es_20k_steps_16_test1
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the voxpopuli dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6964
## Model description
More information needed
## Intended uses & limitations
More information needed
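Pending a fuller description, a minimal text-to-speech sketch following the standard SpeechT5 recipe (the x-vector dataset and row index are illustrative choices, not part of this model):
```python
import torch
import soundfile as sf
from datasets import load_dataset
from transformers import SpeechT5Processor, SpeechT5ForTextToSpeech, SpeechT5HifiGan

model_id = "DrishtiSharma/speecht5_finetuned_voxpopuli_es_20k_steps_16_test1"
processor = SpeechT5Processor.from_pretrained(model_id)
model = SpeechT5ForTextToSpeech.from_pretrained(model_id)
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

# Any 512-dim speaker x-vector works; this public dataset is a common choice.
embeddings = load_dataset("Matthijs/cmu-arctic-xvectors", split="validation")
speaker_embeddings = torch.tensor(embeddings[7306]["xvector"]).unsqueeze(0)

inputs = processor(text="Hola, ¿cómo estás?", return_tensors="pt")
speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
sf.write("speech.wav", speech.numpy(), samplerate=16000)
```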
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1
- training_steps: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.7532 | 0.01 | 5 | 0.6964 |
### Framework versions
- Transformers 4.32.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.14.2
- Tokenizers 0.13.3
|
umarzein/roberta-base-squad2-twitter-sent-ext-lora-balanced
|
umarzein
| 2023-08-06T04:57:11Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-06T04:57:09Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0
|
fromhell01/Reinforce-CartPolev1-v2
|
fromhell01
| 2023-08-06T04:48:57Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-06T04:48:48Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPolev1-v2
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
fromhell01/Reinforce-CartPolev1
|
fromhell01
| 2023-08-06T04:40:31Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-06T04:40:22Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPolev1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 285.20 +/- 135.46
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
mxmax/baichuan-7b-sft-001
|
mxmax
| 2023-08-06T04:31:23Z | 22 | 3 |
transformers
|
[
"transformers",
"pytorch",
"baichuan",
"text-generation",
"custom_code",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-31T08:33:37Z |
---
license: apache-2.0
---
## 1. SFT on the baichuan-7b base model to align it with human intent
## 2. SFT data: 150k examples sampled from the open-source MOSS dataset, balanced evenly across categories
## Model inference
Install the required packages:
```
pip install transformers
pip install sentencepiece
pip install vllm
```
### Serving with Hugging Face Transformers and FastAPI (supports multi-turn chat)
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
import torch
import uvicorn
from fastapi import FastAPI
import jsonlines

device = 'cuda'
model_name = 'mxmax/baichuan-7b-sft-001'
max_new_tokens = 500
top_p = 0.9
temperature = 0.35
repetition_penalty = 1.0

tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    trust_remote_code=True,
    low_cpu_mem_usage=True,
    torch_dtype=torch.float16,
    device_map={'': 0}  # 'auto'
).cuda()
# model = PeftModel.from_pretrained(model, adapter_name)
model.eval()
model = model.to(device)

# maximum input length (in tokens) fed to the model
history_max_len = 1024

def model_infer(user_input):
    history_token_ids = tokenizer('<s>', return_tensors="pt").input_ids
    user_input_ids = tokenizer(user_input, return_tensors="pt").input_ids
    history_token_ids = torch.concat((history_token_ids, user_input_ids[:, -history_max_len:]), dim=1)
    model_input_ids = history_token_ids.to(device)
    outputs = model.generate(
        input_ids=model_input_ids, max_new_tokens=max_new_tokens, do_sample=True, top_p=top_p,
        temperature=temperature, repetition_penalty=repetition_penalty, eos_token_id=tokenizer.eos_token_id
    )
    model_input_ids_len = model_input_ids.size(1)
    response_ids = outputs[:, model_input_ids_len:]
    response = tokenizer.batch_decode(response_ids)
    return response[0].strip().replace('</s>', "")

app = FastAPI()

@app.get('/')
async def root():
    return {"msg": "Hello World"}

@app.post('/baichuan_sft_001')
async def baichuan_sft_001(message: dict):
    prompt = ''
    for l in message['context']:
        prompt += 'human:' + l['human'] + '\nassistant:' + l['assistant'] + '</s>'
    result = model_infer(prompt)
    message['context'][-1]['assistant'] = result
    return {'model_output': result}

if __name__ == '__main__':
    uvicorn.run('model_serving:app', host="0.0.0.0", port=6006)
```
### Serving with vLLM and FastAPI for faster inference (supports multi-turn chat)
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
import torch
import uvicorn
from fastapi import FastAPI
import jsonlines
from vllm import LLM, SamplingParams

device = 'cuda'
model_name = 'mxmax/baichuan-7b-sft-001'
max_new_tokens = 512
top_p = 0.9
temperature = 0.35
repetition_penalty = 0.1
history_max_len = 1024

sampling_params = SamplingParams(temperature=temperature, top_p=top_p, max_tokens=max_new_tokens, presence_penalty=repetition_penalty)

# Create an LLM.
llm = LLM(model=model_name, trust_remote_code=True, dtype='float16')

file = jsonlines.open('chat_record.json', 'a')

app = FastAPI()

@app.get('/')
async def root():
    return {"msg": "Hello World"}

@app.post('/baichuan_sft_001')
async def baichuan_sft_001(message: dict):
    prompt = ''
    for l in message['context']:
        prompt += 'human:' + l['human'] + '\nassistant:' + l['assistant'] + '</s>'
    prompt = '<s>' + prompt[-history_max_len:]
    outputs = llm.generate([prompt], sampling_params)
    result = outputs[0].outputs[0].text
    message['context'][-1]['assistant'] = result
    return {'model_output': result}

if __name__ == '__main__':
    uvicorn.run('vllm_serving:app', host="0.0.0.0", port=6006)
```
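A small client-side sketch for calling either service (host and port match the `uvicorn.run` calls above):
```python
import requests

# One chat turn; leave 'assistant' empty for the turn to be generated.
payload = {"context": [{"human": "你好,请介绍一下你自己。", "assistant": ""}]}
resp = requests.post("http://localhost:6006/baichuan_sft_001", json=payload)
print(resp.json()["model_output"])
```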
## Example outputs






## Contact

When adding me as a friend, please include a note: "technical discussion, via the Hugging Face site" plus your name
QQ group: 621725172
## Citation
```bibtex
@misc{mxmax,
title={baichuan_sft: baichuan-7b-sft-001},
author={Ma Xin},
year={2023},
howpublished={\url{https://huggingface.co/mxmax/baichuan-7b-sft-001}},
}
```
|
Jancsxu/Jancus
|
Jancsxu
| 2023-08-06T04:26:22Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-15T18:10:26Z |
---
license: creativeml-openrail-m
---
|
Za88yes/Ris
|
Za88yes
| 2023-08-06T04:14:33Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-08-05T19:46:10Z |
---
license: creativeml-openrail-m
---
|
TheRains/cv9-special-batch4-base
|
TheRains
| 2023-08-06T03:50:23Z | 113 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"whisper-event",
"generated_from_trainer",
"id",
"dataset:mozilla-foundation/common_voice_9_0",
"base_model:openai/whisper-base",
"base_model:finetune:openai/whisper-base",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-08-06T02:30:42Z |
---
language:
- id
license: apache-2.0
base_model: openai/whisper-base
tags:
- whisper-event
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_9_0
metrics:
- wer
model-index:
- name: Whisper Base Indonesian
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: mozilla-foundation/common_voice_9_0 id
type: mozilla-foundation/common_voice_9_0
config: id
split: test
args: id
metrics:
- name: Wer
type: wer
value: 23.40004600874166
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Base Indonesian
This model is a fine-tuned version of [openai/whisper-base](https://huggingface.co/openai/whisper-base) on the mozilla-foundation/common_voice_9_0 id dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3697
- Wer: 23.4000
## Model description
More information needed
## Intended uses & limitations
More information needed
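Pending a fuller description, a minimal inference sketch with the `transformers` ASR pipeline (the audio path is a placeholder):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="TheRains/cv9-special-batch4-base")
print(asr("contoh_audio.wav")["text"])  # placeholder path to an Indonesian audio file
```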
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.5013 | 0.48 | 1000 | 0.4523 | 28.5990 |
| 0.4145 | 0.97 | 2000 | 0.4067 | 25.8109 |
| 0.2437 | 1.45 | 3000 | 0.3821 | 24.3800 |
| 0.2566 | 1.94 | 4000 | 0.3695 | 23.9798 |
| 0.1161 | 2.42 | 5000 | 0.3697 | 23.4000 |
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
CyberHarem/mihara_nikke
|
CyberHarem
| 2023-08-06T03:43:51Z | 0 | 0 | null |
[
"art",
"text-to-image",
"dataset:CyberHarem/mihara_nikke",
"license:mit",
"region:us"
] |
text-to-image
| 2023-08-06T03:40:14Z |
---
license: mit
datasets:
- CyberHarem/mihara_nikke
pipeline_tag: text-to-image
tags:
- art
---
# Lora of mihara_nikke
This model was trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion); the auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs).
After downloading the `pt` and `safetensors` files for a given step, use them together: the `pt` file is loaded as an embedding, while the `safetensors` file is loaded as a LoRA.
For example, to use the model from step 1500, download `1500/mihara_nikke.pt` as the embedding and `1500/mihara_nikke.safetensors` as the LoRA. With both files loaded together, you can generate images of the character.
**The trigger word is `mihara_nikke`.**
The available steps are:
| Steps | bikini | free | nude | Download |
|--------:|:-----------------------------------------|:-------------------------------------|:-----------------------------------------------|:----------------------------------|
| 1500 |  |  | [<NSFW, click to see>](1500/previews/nude.png) | [Download](1500/mihara_nikke.zip) |
| 1400 |  |  | [<NSFW, click to see>](1400/previews/nude.png) | [Download](1400/mihara_nikke.zip) |
| 1300 |  |  | [<NSFW, click to see>](1300/previews/nude.png) | [Download](1300/mihara_nikke.zip) |
| 1200 |  |  | [<NSFW, click to see>](1200/previews/nude.png) | [Download](1200/mihara_nikke.zip) |
| 1100 |  |  | [<NSFW, click to see>](1100/previews/nude.png) | [Download](1100/mihara_nikke.zip) |
| 1000 |  |  | [<NSFW, click to see>](1000/previews/nude.png) | [Download](1000/mihara_nikke.zip) |
| 900 |  |  | [<NSFW, click to see>](900/previews/nude.png) | [Download](900/mihara_nikke.zip) |
| 800 |  |  | [<NSFW, click to see>](800/previews/nude.png) | [Download](800/mihara_nikke.zip) |
| 700 |  |  | [<NSFW, click to see>](700/previews/nude.png) | [Download](700/mihara_nikke.zip) |
| 600 |  |  | [<NSFW, click to see>](600/previews/nude.png) | [Download](600/mihara_nikke.zip) |
| 500 |  |  | [<NSFW, click to see>](500/previews/nude.png) | [Download](500/mihara_nikke.zip) |
| 400 |  |  | [<NSFW, click to see>](400/previews/nude.png) | [Download](400/mihara_nikke.zip) |
| 300 |  |  | [<NSFW, click to see>](300/previews/nude.png) | [Download](300/mihara_nikke.zip) |
| 200 |  |  | [<NSFW, click to see>](200/previews/nude.png) | [Download](200/mihara_nikke.zip) |
| 100 |  |  | [<NSFW, click to see>](100/previews/nude.png) | [Download](100/mihara_nikke.zip) |
|
Tien203/fine-tune-llama
|
Tien203
| 2023-08-06T03:21:47Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-05T10:16:41Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
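For reference, the 8-bit settings above (the 4-bit fields are inactive defaults when `load_in_8bit` is set) correspond to this `transformers` config; the base model id is a placeholder:
```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(load_in_8bit=True)  # matches the config listed above
model = AutoModelForCausalLM.from_pretrained("<base-llama-model>", quantization_config=bnb_config)
```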
### Framework versions
- PEFT 0.5.0.dev0
|
yashgoenka/gorilla-llama-2-7B-QLoRA
|
yashgoenka
| 2023-08-06T03:20:39Z | 15 | 2 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"llama-2",
"llama-2-7b",
"gorilla",
"qlora",
"api",
"dataset:yashgoenka/gorilla-16k",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-08-01T22:49:08Z |
---
license: apache-2.0
datasets:
- yashgoenka/gorilla-16k
pipeline_tag: text-generation
tags:
- llama
- llama-2
- llama-2-7b
- gorilla
- qlora
- api
library_name: transformers
---
|
TheRains/cv9-special-batch8-small
|
TheRains
| 2023-08-06T03:07:36Z | 81 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"whisper-event",
"generated_from_trainer",
"id",
"dataset:mozilla-foundation/common_voice_9_0",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-08-05T10:37:49Z |
---
language:
- id
license: apache-2.0
base_model: openai/whisper-small
tags:
- whisper-event
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_9_0
metrics:
- wer
model-index:
- name: Whisper Small Indonesian
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: mozilla-foundation/common_voice_9_0 id
type: mozilla-foundation/common_voice_9_0
config: id
split: test
args: id
metrics:
- name: Wer
type: wer
value: 12.472969864274212
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Indonesian
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the mozilla-foundation/common_voice_9_0 id dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2873
- Wer: 12.4730
## Model description
More information needed
## Intended uses & limitations
More information needed
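Pending a fuller description, a minimal inference sketch (the audio path is a placeholder):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="TheRains/cv9-special-batch8-small")
print(asr("contoh_audio.wav")["text"])  # placeholder path to an Indonesian audio file
```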
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.3041 | 0.97 | 1000 | 0.2612 | 14.7090 |
| 0.1437 | 1.94 | 2000 | 0.2485 | 14.0419 |
| 0.0555 | 2.9 | 3000 | 0.2530 | 12.8778 |
| 0.0173 | 3.87 | 4000 | 0.2704 | 12.5880 |
| 0.0067 | 4.84 | 5000 | 0.2873 | 12.4730 |
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
CyberHarem/exia_nikke
|
CyberHarem
| 2023-08-06T02:59:39Z | 0 | 0 | null |
[
"art",
"text-to-image",
"dataset:CyberHarem/exia_nikke",
"license:mit",
"region:us"
] |
text-to-image
| 2023-08-06T02:54:31Z |
---
license: mit
datasets:
- CyberHarem/exia_nikke
pipeline_tag: text-to-image
tags:
- art
---
# Lora of exia_nikke
This model was trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion); the auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs).
After downloading the `pt` and `safetensors` files for a given step, use them together: the `pt` file is loaded as an embedding, while the `safetensors` file is loaded as a LoRA.
For example, to use the model from step 1500, download `1500/exia_nikke.pt` as the embedding and `1500/exia_nikke.safetensors` as the LoRA. With both files loaded together, you can generate images of the character.
**The trigger word is `exia_nikke`.**
The available steps are:
| Steps | bikini | free | nude | Download |
|--------:|:-----------------------------------------|:-------------------------------------|:-----------------------------------------------|:--------------------------------|
| 1500 |  |  | [<NSFW, click to see>](1500/previews/nude.png) | [Download](1500/exia_nikke.zip) |
| 1400 |  |  | [<NSFW, click to see>](1400/previews/nude.png) | [Download](1400/exia_nikke.zip) |
| 1300 |  |  | [<NSFW, click to see>](1300/previews/nude.png) | [Download](1300/exia_nikke.zip) |
| 1200 |  |  | [<NSFW, click to see>](1200/previews/nude.png) | [Download](1200/exia_nikke.zip) |
| 1100 |  |  | [<NSFW, click to see>](1100/previews/nude.png) | [Download](1100/exia_nikke.zip) |
| 1000 |  |  | [<NSFW, click to see>](1000/previews/nude.png) | [Download](1000/exia_nikke.zip) |
| 900 |  |  | [<NSFW, click to see>](900/previews/nude.png) | [Download](900/exia_nikke.zip) |
| 800 |  |  | [<NSFW, click to see>](800/previews/nude.png) | [Download](800/exia_nikke.zip) |
| 700 |  |  | [<NSFW, click to see>](700/previews/nude.png) | [Download](700/exia_nikke.zip) |
| 600 |  |  | [<NSFW, click to see>](600/previews/nude.png) | [Download](600/exia_nikke.zip) |
| 500 |  |  | [<NSFW, click to see>](500/previews/nude.png) | [Download](500/exia_nikke.zip) |
| 400 |  |  | [<NSFW, click to see>](400/previews/nude.png) | [Download](400/exia_nikke.zip) |
| 300 |  |  | [<NSFW, click to see>](300/previews/nude.png) | [Download](300/exia_nikke.zip) |
| 200 |  |  | [<NSFW, click to see>](200/previews/nude.png) | [Download](200/exia_nikke.zip) |
| 100 |  |  | [<NSFW, click to see>](100/previews/nude.png) | [Download](100/exia_nikke.zip) |
|