| modelId (string, 5–139 chars) | author (string, 2–42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 – 2025-09-11 00:42:47) | downloads (int64, 0–223M) | likes (int64, 0–11.7k) | library_name (string, 553 classes) | tags (list, 1–4.05k items) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 – 2025-09-11 00:42:38) | card (string, 11–1.01M chars) |
|---|---|---|---|---|---|---|---|---|---|
CyberHarem/yulha_nikke
|
CyberHarem
| 2023-08-06T02:37:54Z | 0 | 0 | null |
[
"art",
"text-to-image",
"dataset:CyberHarem/yulha_nikke",
"license:mit",
"region:us"
] |
text-to-image
| 2023-08-06T02:32:32Z |
---
license: mit
datasets:
- CyberHarem/yulha_nikke
pipeline_tag: text-to-image
tags:
- art
---
# Lora of yulha_nikke
This model was trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion); the auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs).
After downloading the pt and safetensors files for the chosen step, you need to use them together: the pt file is loaded as an embedding, while the safetensors file is loaded as the LoRA.
For example, to use the model from step 1500, download `1500/yulha_nikke.pt` as the embedding and `1500/yulha_nikke.safetensors` as the LoRA. With both files loaded together, you can generate images of the desired character.
**The trigger word is `yulha_nikke`.**
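If you are loading with diffusers rather than a WebUI, a minimal sketch might look like this (the base model, file paths, and diffusers compatibility with HCP-Diffusion outputs are all assumptions, not part of this card):
```python
# Hedged sketch, not from the card: loading the step-1500 pair in diffusers.
# Paths assume the files were downloaded next to the script.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16  # base model is an assumption
).to("cuda")
pipe.load_textual_inversion("1500/yulha_nikke.pt", token="yulha_nikke")  # the embedding
pipe.load_lora_weights("1500", weight_name="yulha_nikke.safetensors")    # the LoRA

image = pipe("yulha_nikke, best quality").images[0]
```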
The following steps are available:
| Steps | bikini | free | nude | Download |
|--------:|:-----------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:---------------------------------|
| 1500 |  | [<NSFW, click to see>](1500/previews/free.png) | [<NSFW, click to see>](1500/previews/nude.png) | [Download](1500/yulha_nikke.zip) |
| 1400 |  | [<NSFW, click to see>](1400/previews/free.png) | [<NSFW, click to see>](1400/previews/nude.png) | [Download](1400/yulha_nikke.zip) |
| 1300 |  | [<NSFW, click to see>](1300/previews/free.png) | [<NSFW, click to see>](1300/previews/nude.png) | [Download](1300/yulha_nikke.zip) |
| 1200 |  | [<NSFW, click to see>](1200/previews/free.png) | [<NSFW, click to see>](1200/previews/nude.png) | [Download](1200/yulha_nikke.zip) |
| 1100 |  | [<NSFW, click to see>](1100/previews/free.png) | [<NSFW, click to see>](1100/previews/nude.png) | [Download](1100/yulha_nikke.zip) |
| 1000 |  | [<NSFW, click to see>](1000/previews/free.png) | [<NSFW, click to see>](1000/previews/nude.png) | [Download](1000/yulha_nikke.zip) |
| 900 |  | [<NSFW, click to see>](900/previews/free.png) | [<NSFW, click to see>](900/previews/nude.png) | [Download](900/yulha_nikke.zip) |
| 800 |  | [<NSFW, click to see>](800/previews/free.png) | [<NSFW, click to see>](800/previews/nude.png) | [Download](800/yulha_nikke.zip) |
| 700 |  | [<NSFW, click to see>](700/previews/free.png) | [<NSFW, click to see>](700/previews/nude.png) | [Download](700/yulha_nikke.zip) |
| 600 |  | [<NSFW, click to see>](600/previews/free.png) | [<NSFW, click to see>](600/previews/nude.png) | [Download](600/yulha_nikke.zip) |
| 500 |  | [<NSFW, click to see>](500/previews/free.png) | [<NSFW, click to see>](500/previews/nude.png) | [Download](500/yulha_nikke.zip) |
| 400 |  | [<NSFW, click to see>](400/previews/free.png) | [<NSFW, click to see>](400/previews/nude.png) | [Download](400/yulha_nikke.zip) |
| 300 |  | [<NSFW, click to see>](300/previews/free.png) | [<NSFW, click to see>](300/previews/nude.png) | [Download](300/yulha_nikke.zip) |
| 200 |  | [<NSFW, click to see>](200/previews/free.png) | [<NSFW, click to see>](200/previews/nude.png) | [Download](200/yulha_nikke.zip) |
| 100 |  | [<NSFW, click to see>](100/previews/free.png) | [<NSFW, click to see>](100/previews/nude.png) | [Download](100/yulha_nikke.zip) |
|
frncscp/patacoswin_v2
|
frncscp
| 2023-08-06T02:25:14Z | 152 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"swinv2",
"image-classification",
"generated_from_trainer",
"base_model:microsoft/swinv2-tiny-patch4-window16-256",
"base_model:finetune:microsoft/swinv2-tiny-patch4-window16-256",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-08-06T01:18:22Z |
---
license: apache-2.0
base_model: microsoft/swinv2-tiny-patch4-window16-256
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: patacoswin_v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# patacoswin_v2
This model is a fine-tuned version of [microsoft/swinv2-tiny-patch4-window16-256](https://huggingface.co/microsoft/swinv2-tiny-patch4-window16-256) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0328
- Accuracy: 0.9910
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a `TrainingArguments` mapping is sketched after the list):
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
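As a rough, non-authoritative sketch, the listed values might map onto `transformers.TrainingArguments` like this (`output_dir` and the exact optimizer defaults are assumptions):
```python
from transformers import TrainingArguments

# Hedged reconstruction of the hyperparameters above; output_dir is a placeholder.
training_args = TrainingArguments(
    output_dir="patacoswin_v2",
    learning_rate=5e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    gradient_accumulation_steps=4,   # 16 * 4 = total train batch size of 64
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=10,
)
```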
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6055 | 0.95 | 13 | 0.2709 | 0.9615 |
| 0.2812 | 1.96 | 27 | 0.0866 | 0.9683 |
| 0.1426 | 2.98 | 41 | 0.0584 | 0.9796 |
| 0.07 | 4.0 | 55 | 0.0268 | 0.9932 |
| 0.0579 | 4.95 | 68 | 0.0451 | 0.9864 |
| 0.091 | 5.96 | 82 | 0.0300 | 0.9887 |
| 0.0247 | 6.98 | 96 | 0.0387 | 0.9864 |
| 0.0323 | 8.0 | 110 | 0.0456 | 0.9887 |
| 0.032 | 8.95 | 123 | 0.0475 | 0.9864 |
| 0.0187 | 9.45 | 130 | 0.0328 | 0.9910 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.3
- Tokenizers 0.13.3
|
fromhell01/dqn-SpaceInvadersNoFrameskip-v4
|
fromhell01
| 2023-08-06T02:17:50Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-06T00:53:43Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 473.50 +/- 181.08
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```bash
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga fromhell01 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```bash
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga fromhell01 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```bash
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga fromhell01
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
CyberHarem/laplace_nikke
|
CyberHarem
| 2023-08-06T01:53:06Z | 0 | 1 | null |
[
"art",
"text-to-image",
"dataset:CyberHarem/laplace_nikke",
"license:mit",
"region:us"
] |
text-to-image
| 2023-08-06T01:49:33Z |
---
license: mit
datasets:
- CyberHarem/laplace_nikke
pipeline_tag: text-to-image
tags:
- art
---
# Lora of laplace_nikke
This model was trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion); the auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs).
After downloading the pt and safetensors files for the chosen step, you need to use them together: the pt file is loaded as an embedding, while the safetensors file is loaded as the LoRA.
For example, to use the model from step 1500, download `1500/laplace_nikke.pt` as the embedding and `1500/laplace_nikke.safetensors` as the LoRA. With both files loaded together, you can generate images of the desired character.
**The trigger word is `laplace_nikke`.**
The following steps are available:
| Steps | bikini | free | nude | Download |
|--------:|:-----------------------------------------|:-------------------------------------|:-----------------------------------------------|:-----------------------------------|
| 1500 |  |  | [<NSFW, click to see>](1500/previews/nude.png) | [Download](1500/laplace_nikke.zip) |
| 1400 |  |  | [<NSFW, click to see>](1400/previews/nude.png) | [Download](1400/laplace_nikke.zip) |
| 1300 |  |  | [<NSFW, click to see>](1300/previews/nude.png) | [Download](1300/laplace_nikke.zip) |
| 1200 |  |  | [<NSFW, click to see>](1200/previews/nude.png) | [Download](1200/laplace_nikke.zip) |
| 1100 |  |  | [<NSFW, click to see>](1100/previews/nude.png) | [Download](1100/laplace_nikke.zip) |
| 1000 |  |  | [<NSFW, click to see>](1000/previews/nude.png) | [Download](1000/laplace_nikke.zip) |
| 900 |  |  | [<NSFW, click to see>](900/previews/nude.png) | [Download](900/laplace_nikke.zip) |
| 800 |  |  | [<NSFW, click to see>](800/previews/nude.png) | [Download](800/laplace_nikke.zip) |
| 700 |  |  | [<NSFW, click to see>](700/previews/nude.png) | [Download](700/laplace_nikke.zip) |
| 600 |  |  | [<NSFW, click to see>](600/previews/nude.png) | [Download](600/laplace_nikke.zip) |
| 500 |  |  | [<NSFW, click to see>](500/previews/nude.png) | [Download](500/laplace_nikke.zip) |
| 400 |  |  | [<NSFW, click to see>](400/previews/nude.png) | [Download](400/laplace_nikke.zip) |
| 300 |  |  | [<NSFW, click to see>](300/previews/nude.png) | [Download](300/laplace_nikke.zip) |
| 200 |  |  | [<NSFW, click to see>](200/previews/nude.png) | [Download](200/laplace_nikke.zip) |
| 100 |  |  | [<NSFW, click to see>](100/previews/nude.png) | [Download](100/laplace_nikke.zip) |
|
thisiskeithkwan/whisper-small-canto
|
thisiskeithkwan
| 2023-08-06T01:41:31Z | 85 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"zh",
"dataset:thisiskeithkwan/canto",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-08-05T05:41:08Z |
---
language:
- zh
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
datasets:
- thisiskeithkwan/canto
model-index:
- name: whisper-small-canto
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-small-canto
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the thisiskeithkwan/canto dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5061
- Cer: 0.4485
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a `Seq2SeqTrainingArguments` mapping is sketched after the list):
- learning_rate: 0.0003
- train_batch_size: 2
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
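A rough sketch of how these values might map onto `transformers.Seq2SeqTrainingArguments` (`output_dir` is a placeholder; the Adam settings listed above are the transformers defaults):
```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="whisper-small-canto",   # placeholder
    learning_rate=3e-4,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=1,
    seed=42,
    gradient_accumulation_steps=16,     # 2 * 16 = total train batch size of 32
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=5000,
)
```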
### Training results
| Training Loss | Epoch | Step | Validation Loss | Cer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.5909 | 0.76 | 500 | 1.6890 | 0.7769 |
| 1.2636 | 1.52 | 1000 | 1.4067 | 0.7641 |
| 0.7889 | 2.27 | 1500 | 1.3118 | 0.5474 |
| 0.6929 | 3.03 | 2000 | 1.2825 | 0.5516 |
| 0.4827 | 3.79 | 2500 | 1.2360 | 0.5446 |
| 0.236 | 4.55 | 3000 | 1.3457 | 0.5044 |
| 0.0982 | 5.31 | 3500 | 1.4736 | 0.4841 |
| 0.064 | 6.07 | 4000 | 1.5103 | 0.4809 |
| 0.035 | 6.82 | 4500 | 1.5110 | 0.4563 |
| 0.0103 | 7.58 | 5000 | 1.5061 | 0.4485 |
### Framework versions
- Transformers 4.32.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.14.3
- Tokenizers 0.13.3
|
Angry-Wizard/map-training
|
Angry-Wizard
| 2023-08-06T01:21:52Z | 40 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-08-06T01:18:35Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### Map_Training Dreambooth model trained by Angry-Wizard with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
CyberHarem/jackal_nikke
|
CyberHarem
| 2023-08-06T01:11:06Z | 0 | 0 | null |
[
"art",
"text-to-image",
"dataset:CyberHarem/jackal_nikke",
"license:mit",
"region:us"
] |
text-to-image
| 2023-08-06T01:06:58Z |
---
license: mit
datasets:
- CyberHarem/jackal_nikke
pipeline_tag: text-to-image
tags:
- art
---
# Lora of jackal_nikke
This model was trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion); the auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs).
After downloading the pt and safetensors files for the chosen step, you need to use them together: the pt file is loaded as an embedding, while the safetensors file is loaded as the LoRA.
For example, to use the model from step 1500, download `1500/jackal_nikke.pt` as the embedding and `1500/jackal_nikke.safetensors` as the LoRA. With both files loaded together, you can generate images of the desired character.
**The trigger word is `jackal_nikke`.**
The following steps are available:
| Steps | bikini | free | nude | Download |
|--------:|:-----------------------------------------|:-------------------------------------|:-----------------------------------------------|:----------------------------------|
| 1500 |  |  | [<NSFW, click to see>](1500/previews/nude.png) | [Download](1500/jackal_nikke.zip) |
| 1400 |  |  | [<NSFW, click to see>](1400/previews/nude.png) | [Download](1400/jackal_nikke.zip) |
| 1300 |  |  | [<NSFW, click to see>](1300/previews/nude.png) | [Download](1300/jackal_nikke.zip) |
| 1200 |  |  | [<NSFW, click to see>](1200/previews/nude.png) | [Download](1200/jackal_nikke.zip) |
| 1100 |  |  | [<NSFW, click to see>](1100/previews/nude.png) | [Download](1100/jackal_nikke.zip) |
| 1000 |  |  | [<NSFW, click to see>](1000/previews/nude.png) | [Download](1000/jackal_nikke.zip) |
| 900 |  |  | [<NSFW, click to see>](900/previews/nude.png) | [Download](900/jackal_nikke.zip) |
| 800 |  |  | [<NSFW, click to see>](800/previews/nude.png) | [Download](800/jackal_nikke.zip) |
| 700 |  |  | [<NSFW, click to see>](700/previews/nude.png) | [Download](700/jackal_nikke.zip) |
| 600 |  |  | [<NSFW, click to see>](600/previews/nude.png) | [Download](600/jackal_nikke.zip) |
| 500 |  |  | [<NSFW, click to see>](500/previews/nude.png) | [Download](500/jackal_nikke.zip) |
| 400 |  |  | [<NSFW, click to see>](400/previews/nude.png) | [Download](400/jackal_nikke.zip) |
| 300 |  |  | [<NSFW, click to see>](300/previews/nude.png) | [Download](300/jackal_nikke.zip) |
| 200 |  |  | [<NSFW, click to see>](200/previews/nude.png) | [Download](200/jackal_nikke.zip) |
| 100 |  |  | [<NSFW, click to see>](100/previews/nude.png) | [Download](100/jackal_nikke.zip) |
|
ymkgr/Ichikishima_Mizuha_Re_Stage
|
ymkgr
| 2023-08-06T00:33:01Z | 0 | 0 | null |
[
"anime",
"game character",
"license:wtfpl",
"region:us"
] | null | 2023-08-06T00:17:07Z |
---
license: wtfpl
tags:
- anime
- game character
metrics:
- character
---
Model type: LoRA
---
Model Details:
- From the Japanese multimedia project Re:Stage! - unit: KiRaRe - character name: Ichikishima Mizuha.
- Recommended weight: 0.8~0.9
- Trigger words (please add the \ escape before "(" and ")" yourself; the model card page does not seem to display the \ symbol next to other symbols):
ichikishima mizuha \(re:stage!\), black hair, very long hair, darkmagenta eyes, kimono \(mizuha seifuku\),
Example:



After a period of secluded training, I have reached a conclusion: the stage outfit won't be trained for now. \(--)/
|
jrzhang/CSKG_Roberta_large
|
jrzhang
| 2023-08-06T00:32:26Z | 0 | 1 | null |
[
"en",
"license:openrail++",
"region:us"
] | null | 2023-08-06T00:31:35Z |
---
license: openrail++
language:
- en
---
|
CyberHarem/liter_nikke
|
CyberHarem
| 2023-08-06T00:28:52Z | 0 | 1 | null |
[
"art",
"text-to-image",
"dataset:CyberHarem/liter_nikke",
"license:mit",
"region:us"
] |
text-to-image
| 2023-08-06T00:25:10Z |
---
license: mit
datasets:
- CyberHarem/liter_nikke
pipeline_tag: text-to-image
tags:
- art
---
# Lora of liter_nikke
This model was trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion); the auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs).
After downloading the pt and safetensors files for the chosen step, you need to use them together: the pt file is loaded as an embedding, while the safetensors file is loaded as the LoRA.
For example, to use the model from step 1500, download `1500/liter_nikke.pt` as the embedding and `1500/liter_nikke.safetensors` as the LoRA. With both files loaded together, you can generate images of the desired character.
**The trigger word is `liter_nikke`.**
The following steps are available:
| Steps | bikini | free | nude | Download |
|--------:|:-----------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:---------------------------------|
| 1500 |  | [<NSFW, click to see>](1500/previews/free.png) | [<NSFW, click to see>](1500/previews/nude.png) | [Download](1500/liter_nikke.zip) |
| 1400 |  | [<NSFW, click to see>](1400/previews/free.png) | [<NSFW, click to see>](1400/previews/nude.png) | [Download](1400/liter_nikke.zip) |
| 1300 |  | [<NSFW, click to see>](1300/previews/free.png) | [<NSFW, click to see>](1300/previews/nude.png) | [Download](1300/liter_nikke.zip) |
| 1200 |  | [<NSFW, click to see>](1200/previews/free.png) | [<NSFW, click to see>](1200/previews/nude.png) | [Download](1200/liter_nikke.zip) |
| 1100 |  | [<NSFW, click to see>](1100/previews/free.png) | [<NSFW, click to see>](1100/previews/nude.png) | [Download](1100/liter_nikke.zip) |
| 1000 |  | [<NSFW, click to see>](1000/previews/free.png) | [<NSFW, click to see>](1000/previews/nude.png) | [Download](1000/liter_nikke.zip) |
| 900 |  | [<NSFW, click to see>](900/previews/free.png) | [<NSFW, click to see>](900/previews/nude.png) | [Download](900/liter_nikke.zip) |
| 800 |  | [<NSFW, click to see>](800/previews/free.png) | [<NSFW, click to see>](800/previews/nude.png) | [Download](800/liter_nikke.zip) |
| 700 |  | [<NSFW, click to see>](700/previews/free.png) | [<NSFW, click to see>](700/previews/nude.png) | [Download](700/liter_nikke.zip) |
| 600 |  | [<NSFW, click to see>](600/previews/free.png) | [<NSFW, click to see>](600/previews/nude.png) | [Download](600/liter_nikke.zip) |
| 500 |  | [<NSFW, click to see>](500/previews/free.png) | [<NSFW, click to see>](500/previews/nude.png) | [Download](500/liter_nikke.zip) |
| 400 |  | [<NSFW, click to see>](400/previews/free.png) | [<NSFW, click to see>](400/previews/nude.png) | [Download](400/liter_nikke.zip) |
| 300 |  | [<NSFW, click to see>](300/previews/free.png) | [<NSFW, click to see>](300/previews/nude.png) | [Download](300/liter_nikke.zip) |
| 200 |  | [<NSFW, click to see>](200/previews/free.png) | [<NSFW, click to see>](200/previews/nude.png) | [Download](200/liter_nikke.zip) |
| 100 |  | [<NSFW, click to see>](100/previews/free.png) | [<NSFW, click to see>](100/previews/nude.png) | [Download](100/liter_nikke.zip) |
|
mikful/llama-v2-7b-8bit-mmlu-finetune-no-calib
|
mikful
| 2023-08-06T00:18:00Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-06T00:17:53Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training (the equivalent config object is sketched after the list):
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
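As a hedged reconstruction (the card records raw values only, so the object form and its use are assumptions), the same settings expressed as a `transformers.BitsAndBytesConfig`:
```python
from transformers import BitsAndBytesConfig

# 8-bit settings matching the list above; pass as
# quantization_config=bnb_config to AutoModelForCausalLM.from_pretrained.
bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
    llm_int8_threshold=6.0,
    llm_int8_skip_modules=None,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
)
```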
### Framework versions
- PEFT 0.4.0
|
sandeep12345/Biofilm_LLAMA_Finetune
|
sandeep12345
| 2023-08-06T00:15:06Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-06T00:01:47Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training (see the sketch after the list):
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
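A hedged sketch of the same values as a `transformers.BitsAndBytesConfig`; this nf4 4-bit setup is the usual QLoRA-style configuration, but the object form is an assumption:
```python
import torch
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)
```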
### Framework versions
- PEFT 0.5.0.dev0
|
CyberHarem/drake_nikke
|
CyberHarem
| 2023-08-06T00:06:07Z | 0 | 0 | null |
[
"art",
"text-to-image",
"dataset:CyberHarem/drake_nikke",
"license:mit",
"region:us"
] |
text-to-image
| 2023-08-06T00:02:28Z |
---
license: mit
datasets:
- CyberHarem/drake_nikke
pipeline_tag: text-to-image
tags:
- art
---
# Lora of drake_nikke
This model was trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion); the auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs).
After downloading the pt and safetensors files for the chosen step, you need to use them together: the pt file is loaded as an embedding, while the safetensors file is loaded as the LoRA.
For example, to use the model from step 1500, download `1500/drake_nikke.pt` as the embedding and `1500/drake_nikke.safetensors` as the LoRA. With both files loaded together, you can generate images of the desired character.
**The trigger word is `drake_nikke`.**
The following steps are available:
| Steps | pattern_1 | bikini | free | nude | Download |
|--------:|:-----------------------------------------------|:-----------------------------------------|:-------------------------------------|:-----------------------------------------------|:---------------------------------|
| 1500 |  |  |  | [<NSFW, click to see>](1500/previews/nude.png) | [Download](1500/drake_nikke.zip) |
| 1400 |  |  |  | [<NSFW, click to see>](1400/previews/nude.png) | [Download](1400/drake_nikke.zip) |
| 1300 |  |  |  | [<NSFW, click to see>](1300/previews/nude.png) | [Download](1300/drake_nikke.zip) |
| 1200 |  |  |  | [<NSFW, click to see>](1200/previews/nude.png) | [Download](1200/drake_nikke.zip) |
| 1100 |  |  |  | [<NSFW, click to see>](1100/previews/nude.png) | [Download](1100/drake_nikke.zip) |
| 1000 |  |  |  | [<NSFW, click to see>](1000/previews/nude.png) | [Download](1000/drake_nikke.zip) |
| 900 |  |  |  | [<NSFW, click to see>](900/previews/nude.png) | [Download](900/drake_nikke.zip) |
| 800 |  |  |  | [<NSFW, click to see>](800/previews/nude.png) | [Download](800/drake_nikke.zip) |
| 700 |  |  |  | [<NSFW, click to see>](700/previews/nude.png) | [Download](700/drake_nikke.zip) |
| 600 |  |  |  | [<NSFW, click to see>](600/previews/nude.png) | [Download](600/drake_nikke.zip) |
| 500 |  |  |  | [<NSFW, click to see>](500/previews/nude.png) | [Download](500/drake_nikke.zip) |
| 400 |  |  |  | [<NSFW, click to see>](400/previews/nude.png) | [Download](400/drake_nikke.zip) |
| 300 |  |  |  | [<NSFW, click to see>](300/previews/nude.png) | [Download](300/drake_nikke.zip) |
| 200 |  |  |  | [<NSFW, click to see>](200/previews/nude.png) | [Download](200/drake_nikke.zip) |
| 100 |  |  |  | [<NSFW, click to see>](100/previews/nude.png) | [Download](100/drake_nikke.zip) |
|
RichBro/squad_gpt2
|
RichBro
| 2023-08-05T23:46:13Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-05T04:24:54Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
CyberHarem/snow_white_nikke
|
CyberHarem
| 2023-08-05T23:43:58Z | 0 | 0 | null |
[
"art",
"text-to-image",
"dataset:CyberHarem/snow_white_nikke",
"license:mit",
"region:us"
] |
text-to-image
| 2023-08-05T23:39:49Z |
---
license: mit
datasets:
- CyberHarem/snow_white_nikke
pipeline_tag: text-to-image
tags:
- art
---
# Lora of snow_white_nikke
This model was trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion); the auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs).
After downloading the pt and safetensors files for the chosen step, you need to use them together: the pt file is loaded as an embedding, while the safetensors file is loaded as the LoRA.
For example, to use the model from step 1500, download `1500/snow_white_nikke.pt` as the embedding and `1500/snow_white_nikke.safetensors` as the LoRA. With both files loaded together, you can generate images of the desired character.
**The trigger word is `snow_white_nikke`.**
The following steps are available:
| Steps | bikini | free | nude | Download |
|--------:|:-----------------------------------------|:-------------------------------------|:-----------------------------------------------|:--------------------------------------|
| 1500 |  |  | [<NSFW, click to see>](1500/previews/nude.png) | [Download](1500/snow_white_nikke.zip) |
| 1400 |  |  | [<NSFW, click to see>](1400/previews/nude.png) | [Download](1400/snow_white_nikke.zip) |
| 1300 |  |  | [<NSFW, click to see>](1300/previews/nude.png) | [Download](1300/snow_white_nikke.zip) |
| 1200 |  |  | [<NSFW, click to see>](1200/previews/nude.png) | [Download](1200/snow_white_nikke.zip) |
| 1100 |  |  | [<NSFW, click to see>](1100/previews/nude.png) | [Download](1100/snow_white_nikke.zip) |
| 1000 |  |  | [<NSFW, click to see>](1000/previews/nude.png) | [Download](1000/snow_white_nikke.zip) |
| 900 |  |  | [<NSFW, click to see>](900/previews/nude.png) | [Download](900/snow_white_nikke.zip) |
| 800 |  |  | [<NSFW, click to see>](800/previews/nude.png) | [Download](800/snow_white_nikke.zip) |
| 700 |  |  | [<NSFW, click to see>](700/previews/nude.png) | [Download](700/snow_white_nikke.zip) |
| 600 |  |  | [<NSFW, click to see>](600/previews/nude.png) | [Download](600/snow_white_nikke.zip) |
| 500 |  |  | [<NSFW, click to see>](500/previews/nude.png) | [Download](500/snow_white_nikke.zip) |
| 400 |  |  | [<NSFW, click to see>](400/previews/nude.png) | [Download](400/snow_white_nikke.zip) |
| 300 |  |  | [<NSFW, click to see>](300/previews/nude.png) | [Download](300/snow_white_nikke.zip) |
| 200 |  |  | [<NSFW, click to see>](200/previews/nude.png) | [Download](200/snow_white_nikke.zip) |
| 100 |  |  | [<NSFW, click to see>](100/previews/nude.png) | [Download](100/snow_white_nikke.zip) |
|
timjwhite/whisper-tiny-dv
|
timjwhite
| 2023-08-05T23:43:44Z | 86 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:PolyAI/minds14",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-08-05T11:31:30Z |
---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_trainer
datasets:
- PolyAI/minds14
metrics:
- wer
model-index:
- name: whisper-tiny-dv
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: PolyAI/minds14
type: PolyAI/minds14
config: en-US
split: train[-19%:]
args: en-US
metrics:
- name: Wer
type: wer
value: 0.3484562066792691
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-tiny-dv
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the PolyAI/minds14 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7263
- Wer Ortho: 0.3483
- Wer: 0.3485
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 50
- training_steps: 1000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|
| 0.0008 | 17.24 | 500 | 0.6662 | 0.3483 | 0.3491 |
| 0.0002 | 34.48 | 1000 | 0.7263 | 0.3483 | 0.3485 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.3
- Tokenizers 0.13.3
|
sandeep12345/new_biofilm_LLM
|
sandeep12345
| 2023-08-05T23:39:11Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-05T23:38:30Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.5.0.dev0
|
sandeep1chataut/biofilm_custom_llama_finetune
|
sandeep1chataut
| 2023-08-05T23:25:17Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-05T23:24:39Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.5.0.dev0
|
Za88yes/Ri5
|
Za88yes
| 2023-08-05T23:18:05Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-08-05T18:10:21Z |
---
license: creativeml-openrail-m
---
|
CyberHarem/privaty_nikke
|
CyberHarem
| 2023-08-05T22:38:56Z | 0 | 0 | null |
[
"art",
"text-to-image",
"dataset:CyberHarem/privaty_nikke",
"license:mit",
"region:us"
] |
text-to-image
| 2023-08-05T22:35:19Z |
---
license: mit
datasets:
- CyberHarem/privaty_nikke
pipeline_tag: text-to-image
tags:
- art
---
# Lora of privaty_nikke
This model was trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion); the auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs).
After downloading the pt and safetensors files for the chosen step, you need to use them together: the pt file is loaded as an embedding, while the safetensors file is loaded as the LoRA.
For example, to use the model from step 1500, download `1500/privaty_nikke.pt` as the embedding and `1500/privaty_nikke.safetensors` as the LoRA. With both files loaded together, you can generate images of the desired character.
**The trigger word is `privaty_nikke`.**
The following steps are available:
| Steps | bikini | free | nude | Download |
|--------:|:-----------------------------------------|:-------------------------------------|:-----------------------------------------------|:-----------------------------------|
| 1500 |  |  | [<NSFW, click to see>](1500/previews/nude.png) | [Download](1500/privaty_nikke.zip) |
| 1400 |  |  | [<NSFW, click to see>](1400/previews/nude.png) | [Download](1400/privaty_nikke.zip) |
| 1300 |  |  | [<NSFW, click to see>](1300/previews/nude.png) | [Download](1300/privaty_nikke.zip) |
| 1200 |  |  | [<NSFW, click to see>](1200/previews/nude.png) | [Download](1200/privaty_nikke.zip) |
| 1100 |  |  | [<NSFW, click to see>](1100/previews/nude.png) | [Download](1100/privaty_nikke.zip) |
| 1000 |  |  | [<NSFW, click to see>](1000/previews/nude.png) | [Download](1000/privaty_nikke.zip) |
| 900 |  |  | [<NSFW, click to see>](900/previews/nude.png) | [Download](900/privaty_nikke.zip) |
| 800 |  |  | [<NSFW, click to see>](800/previews/nude.png) | [Download](800/privaty_nikke.zip) |
| 700 |  |  | [<NSFW, click to see>](700/previews/nude.png) | [Download](700/privaty_nikke.zip) |
| 600 |  |  | [<NSFW, click to see>](600/previews/nude.png) | [Download](600/privaty_nikke.zip) |
| 500 |  |  | [<NSFW, click to see>](500/previews/nude.png) | [Download](500/privaty_nikke.zip) |
| 400 |  |  | [<NSFW, click to see>](400/previews/nude.png) | [Download](400/privaty_nikke.zip) |
| 300 |  |  | [<NSFW, click to see>](300/previews/nude.png) | [Download](300/privaty_nikke.zip) |
| 200 |  |  | [<NSFW, click to see>](200/previews/nude.png) | [Download](200/privaty_nikke.zip) |
| 100 |  |  | [<NSFW, click to see>](100/previews/nude.png) | [Download](100/privaty_nikke.zip) |
|
breadlicker45/MuseNeo
|
breadlicker45
| 2023-08-05T22:28:40Z | 125 | 3 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"gpt_neo",
"text-generation",
"dataset:breadlicker45/midi-music-codes",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-12-23T19:56:48Z |
---
license: mit
datasets:
- breadlicker45/midi-music-codes
---
Use https://mrcheeze.github.io/musenet-midi/ to turn the MuseNet encoding into a MIDI file.
---
This is an 84k-step model of MuseNeo. MuseNeo was trained on 393 MIDI songs.
Here is a Python 3.9 UI to run it: https://github.com/breadbrowser/MuseNeo-ui
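A hedged sketch of sampling from the model with transformers (the seed prompt is a hypothetical placeholder; real MuseNet-style token strings come from the training data):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="breadlicker45/MuseNeo")
# Seed prompt below is hypothetical, standing in for a MuseNet encoding.
encoding = generator("2623 2619", max_new_tokens=128)[0]["generated_text"]
print(encoding)  # paste into the musenet-midi page above to get a MIDI file
```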
---
This model will not be trained any further; MusePy will be trained next.
---
|
eliorcohen/ppo-Huggy
|
eliorcohen
| 2023-08-05T22:22:03Z | 1 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-08-05T22:21:59Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: eliorcohen/ppo-Huggy
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
aphi/poca-SoccerTwos
|
aphi
| 2023-08-05T22:21:45Z | 6 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SoccerTwos",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] |
reinforcement-learning
| 2023-08-05T22:20:20Z |
---
library_name: ml-agents
tags:
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: aphi/poca-SoccerTwos
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
CyberHarem/modernia_nikke
|
CyberHarem
| 2023-08-05T22:18:28Z | 0 | 0 | null |
[
"art",
"text-to-image",
"dataset:CyberHarem/modernia_nikke",
"license:mit",
"region:us"
] |
text-to-image
| 2023-08-05T22:15:16Z |
---
license: mit
datasets:
- CyberHarem/modernia_nikke
pipeline_tag: text-to-image
tags:
- art
---
# Lora of modernia_nikke
This model was trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion); the auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs).
After downloading the pt and safetensors files for the chosen step, you need to use them together: the pt file is loaded as an embedding, while the safetensors file is loaded as the LoRA.
For example, to use the model from step 1500, download `1500/modernia_nikke.pt` as the embedding and `1500/modernia_nikke.safetensors` as the LoRA. With both files loaded together, you can generate images of the desired character.
**The trigger word is `modernia_nikke`.**
The following steps are available:
| Steps | bikini | free | nude | Download |
|--------:|:-----------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------|
| 1500 |  |  | [<NSFW, click to see>](1500/previews/nude.png) | [Download](1500/modernia_nikke.zip) |
| 1400 |  |  | [<NSFW, click to see>](1400/previews/nude.png) | [Download](1400/modernia_nikke.zip) |
| 1300 |  |  | [<NSFW, click to see>](1300/previews/nude.png) | [Download](1300/modernia_nikke.zip) |
| 1200 |  |  | [<NSFW, click to see>](1200/previews/nude.png) | [Download](1200/modernia_nikke.zip) |
| 1100 |  |  | [<NSFW, click to see>](1100/previews/nude.png) | [Download](1100/modernia_nikke.zip) |
| 1000 |  |  | [<NSFW, click to see>](1000/previews/nude.png) | [Download](1000/modernia_nikke.zip) |
| 900 |  |  | [<NSFW, click to see>](900/previews/nude.png) | [Download](900/modernia_nikke.zip) |
| 800 |  |  | [<NSFW, click to see>](800/previews/nude.png) | [Download](800/modernia_nikke.zip) |
| 700 |  |  | [<NSFW, click to see>](700/previews/nude.png) | [Download](700/modernia_nikke.zip) |
| 600 |  |  | [<NSFW, click to see>](600/previews/nude.png) | [Download](600/modernia_nikke.zip) |
| 500 |  |  | [<NSFW, click to see>](500/previews/nude.png) | [Download](500/modernia_nikke.zip) |
| 400 |  |  | [<NSFW, click to see>](400/previews/nude.png) | [Download](400/modernia_nikke.zip) |
| 300 |  |  | [<NSFW, click to see>](300/previews/nude.png) | [Download](300/modernia_nikke.zip) |
| 200 |  |  | [<NSFW, click to see>](200/previews/nude.png) | [Download](200/modernia_nikke.zip) |
| 100 |  |  | [<NSFW, click to see>](100/previews/nude.png) | [Download](100/modernia_nikke.zip) |
|
salohnana2018/ABSA-SentencePair-corrected-domainAdapt-Stack-HARD50-Adapter-pfeiffer-run2
|
salohnana2018
| 2023-08-05T22:07:34Z | 0 | 0 |
adapter-transformers
|
[
"adapter-transformers",
"pytorch",
"tensorboard",
"bert",
"adapterhub:Arabic ABSA/SemEvalHotelReview",
"dataset:Hotel",
"region:us"
] | null | 2023-08-05T21:24:15Z |
---
tags:
- adapterhub:Arabic ABSA/SemEvalHotelReview
- adapter-transformers
- bert
datasets:
- Hotel
---
# Adapter `salohnana2018/ABSA-SentencePair-corrected-domainAdapt-Stack-HARD50-Adapter-pfeiffer-run2` for CAMeL-Lab/bert-base-arabic-camelbert-msa
An [adapter](https://adapterhub.ml) for the `CAMeL-Lab/bert-base-arabic-camelbert-msa` model that was trained on the [Arabic ABSA/SemEvalHotelReview](https://adapterhub.ml/explore/Arabic ABSA/SemEvalHotelReview/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoAdapterModel
model = AutoAdapterModel.from_pretrained("CAMeL-Lab/bert-base-arabic-camelbert-msa")
adapter_name = model.load_adapter("salohnana2018/ABSA-SentencePair-corrected-domainAdapt-Stack-HARD50-Adapter-pfeiffer-run2", source="hf", set_active=True)
```
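As a hedged follow-up (not from the card), a classification forward pass with the adapter activated above might look like this; the tokenizer checkpoint and the sample sentence are assumptions:
```python
import torch
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("CAMeL-Lab/bert-base-arabic-camelbert-msa")
inputs = tokenizer("النظافة ممتازة والخدمة جيدة", return_tensors="pt")  # hypothetical hotel review
with torch.no_grad():
    outputs = model(**inputs)  # `model` from the snippet above
print(outputs.logits.argmax(dim=-1))
```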
## Architecture & Training
<!-- Add some description here -->
## Evaluation results
<!-- Add some description here -->
## Citation
<!-- Add some description here -->
|
CyberHarem/noir_nikke
|
CyberHarem
| 2023-08-05T21:36:48Z | 0 | 0 | null |
[
"art",
"text-to-image",
"dataset:CyberHarem/noir_nikke",
"license:mit",
"region:us"
] |
text-to-image
| 2023-08-05T21:33:15Z |
---
license: mit
datasets:
- CyberHarem/noir_nikke
pipeline_tag: text-to-image
tags:
- art
---
# Lora of noir_nikke
This model was trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion); the auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs).
After downloading the pt and safetensors files for the chosen step, you need to use them together: the pt file is loaded as an embedding, while the safetensors file is loaded as the LoRA.
For example, to use the model from step 1500, download `1500/noir_nikke.pt` as the embedding and `1500/noir_nikke.safetensors` as the LoRA. With both files loaded together, you can generate images of the desired character.
**The trigger word is `noir_nikke`.**
The following steps are available:
| Steps | pattern_1 | bikini | free | nude | Download |
|--------:|:-----------------------------------------------|:-----------------------------------------|:-------------------------------------|:-----------------------------------------------|:--------------------------------|
| 1500 |  |  |  | [<NSFW, click to see>](1500/previews/nude.png) | [Download](1500/noir_nikke.zip) |
| 1400 |  |  |  | [<NSFW, click to see>](1400/previews/nude.png) | [Download](1400/noir_nikke.zip) |
| 1300 |  |  |  | [<NSFW, click to see>](1300/previews/nude.png) | [Download](1300/noir_nikke.zip) |
| 1200 |  |  |  | [<NSFW, click to see>](1200/previews/nude.png) | [Download](1200/noir_nikke.zip) |
| 1100 |  |  |  | [<NSFW, click to see>](1100/previews/nude.png) | [Download](1100/noir_nikke.zip) |
| 1000 |  |  |  | [<NSFW, click to see>](1000/previews/nude.png) | [Download](1000/noir_nikke.zip) |
| 900 |  |  |  | [<NSFW, click to see>](900/previews/nude.png) | [Download](900/noir_nikke.zip) |
| 800 |  |  |  | [<NSFW, click to see>](800/previews/nude.png) | [Download](800/noir_nikke.zip) |
| 700 |  |  |  | [<NSFW, click to see>](700/previews/nude.png) | [Download](700/noir_nikke.zip) |
| 600 |  |  |  | [<NSFW, click to see>](600/previews/nude.png) | [Download](600/noir_nikke.zip) |
| 500 |  |  |  | [<NSFW, click to see>](500/previews/nude.png) | [Download](500/noir_nikke.zip) |
| 400 |  |  |  | [<NSFW, click to see>](400/previews/nude.png) | [Download](400/noir_nikke.zip) |
| 300 |  |  |  | [<NSFW, click to see>](300/previews/nude.png) | [Download](300/noir_nikke.zip) |
| 200 |  |  |  | [<NSFW, click to see>](200/previews/nude.png) | [Download](200/noir_nikke.zip) |
| 100 |  |  |  | [<NSFW, click to see>](100/previews/nude.png) | [Download](100/noir_nikke.zip) |
|
YieldInc/cacti-7b-1k
|
YieldInc
| 2023-08-05T21:23:04Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-05T21:22:40Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0
|
CyberHarem/soda_nikke
|
CyberHarem
| 2023-08-05T21:17:03Z | 0 | 0 | null |
[
"art",
"text-to-image",
"dataset:CyberHarem/soda_nikke",
"license:mit",
"region:us"
] |
text-to-image
| 2023-08-05T21:13:26Z |
---
license: mit
datasets:
- CyberHarem/soda_nikke
pipeline_tag: text-to-image
tags:
- art
---
# Lora of soda_nikke
This model was trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion); the auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs).
After downloading the pt and safetensors files for the chosen step, you need to use them together: the pt file is loaded as an embedding, while the safetensors file is loaded as the LoRA.
For example, to use the model from step 1500, download `1500/soda_nikke.pt` as the embedding and `1500/soda_nikke.safetensors` as the LoRA. With both files loaded together, you can generate images of the desired character.
**The trigger word is `soda_nikke`.**
The following steps are available:
| Steps | pattern_1 | bikini | free | nude | Download |
|--------:|:----------------------------------------------------|:-----------------------------------------|:-------------------------------------|:-----------------------------------------------|:--------------------------------|
| 1500 | [<NSFW, click to see>](1500/previews/pattern_1.png) |  |  | [<NSFW, click to see>](1500/previews/nude.png) | [Download](1500/soda_nikke.zip) |
| 1400 | [<NSFW, click to see>](1400/previews/pattern_1.png) |  |  | [<NSFW, click to see>](1400/previews/nude.png) | [Download](1400/soda_nikke.zip) |
| 1300 | [<NSFW, click to see>](1300/previews/pattern_1.png) |  |  | [<NSFW, click to see>](1300/previews/nude.png) | [Download](1300/soda_nikke.zip) |
| 1200 | [<NSFW, click to see>](1200/previews/pattern_1.png) |  |  | [<NSFW, click to see>](1200/previews/nude.png) | [Download](1200/soda_nikke.zip) |
| 1100 | [<NSFW, click to see>](1100/previews/pattern_1.png) |  |  | [<NSFW, click to see>](1100/previews/nude.png) | [Download](1100/soda_nikke.zip) |
| 1000 | [<NSFW, click to see>](1000/previews/pattern_1.png) |  |  | [<NSFW, click to see>](1000/previews/nude.png) | [Download](1000/soda_nikke.zip) |
| 900 | [<NSFW, click to see>](900/previews/pattern_1.png) |  |  | [<NSFW, click to see>](900/previews/nude.png) | [Download](900/soda_nikke.zip) |
| 800 | [<NSFW, click to see>](800/previews/pattern_1.png) |  |  | [<NSFW, click to see>](800/previews/nude.png) | [Download](800/soda_nikke.zip) |
| 700 | [<NSFW, click to see>](700/previews/pattern_1.png) |  |  | [<NSFW, click to see>](700/previews/nude.png) | [Download](700/soda_nikke.zip) |
| 600 | [<NSFW, click to see>](600/previews/pattern_1.png) |  |  | [<NSFW, click to see>](600/previews/nude.png) | [Download](600/soda_nikke.zip) |
| 500 | [<NSFW, click to see>](500/previews/pattern_1.png) |  |  | [<NSFW, click to see>](500/previews/nude.png) | [Download](500/soda_nikke.zip) |
| 400 | [<NSFW, click to see>](400/previews/pattern_1.png) |  |  | [<NSFW, click to see>](400/previews/nude.png) | [Download](400/soda_nikke.zip) |
| 300 | [<NSFW, click to see>](300/previews/pattern_1.png) |  |  | [<NSFW, click to see>](300/previews/nude.png) | [Download](300/soda_nikke.zip) |
| 200 | [<NSFW, click to see>](200/previews/pattern_1.png) |  |  | [<NSFW, click to see>](200/previews/nude.png) | [Download](200/soda_nikke.zip) |
| 100 | [<NSFW, click to see>](100/previews/pattern_1.png) |  |  | [<NSFW, click to see>](100/previews/nude.png) | [Download](100/soda_nikke.zip) |
|
Henk717/spring-dragon-qlora
|
Henk717
| 2023-08-05T21:06:52Z | 6 | 7 |
peft
|
[
"peft",
"tensorboard",
"region:us"
] | null | 2023-08-05T20:59:57Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.5.0.dev0
|
ClementXie/whisper-tiny
|
ClementXie
| 2023-08-05T20:58:22Z | 87 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:PolyAI/minds14",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-08-05T17:07:54Z |
---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_trainer
datasets:
- PolyAI/minds14
metrics:
- wer
model-index:
- name: whisper-tiny
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: PolyAI/minds14
type: PolyAI/minds14
config: en-US
split: train
args: en-US
metrics:
- name: Wer
type: wer
value: 0.3504106374657802
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-tiny
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the PolyAI/minds14 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6037
- Wer Ortho: 0.3514
- Wer: 0.3504
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 50
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|
| 0.0236 | 5.0 | 500 | 0.6037 | 0.3514 | 0.3504 |
### Framework versions
- Transformers 4.32.0.dev0
- Pytorch 1.13.1
- Datasets 2.14.3
- Tokenizers 0.13.2
|
pillocode/LunaLander-vGOD
|
pillocode
| 2023-08-05T20:40:07Z | 0 | 1 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-05T20:39:48Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 259.60 +/- 17.30
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
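Pending the author's code, a minimal working sketch (the checkpoint filename follows the usual huggingface_sb3 convention and is an assumption about this repo's layout):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub, then load it.
checkpoint = load_from_hub(
    repo_id="pillocode/LunaLander-vGOD",
    filename="ppo-LunarLander-v2.zip",  # assumed filename
)
model = PPO.load(checkpoint)
```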
|
CyberHarem/blanc_nikke
|
CyberHarem
| 2023-08-05T20:37:48Z | 0 | 0 | null |
[
"art",
"text-to-image",
"dataset:CyberHarem/blanc_nikke",
"license:mit",
"region:us"
] |
text-to-image
| 2023-08-05T20:34:33Z |
---
license: mit
datasets:
- CyberHarem/blanc_nikke
pipeline_tag: text-to-image
tags:
- art
---
# Lora of blanc_nikke
This model was trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion); the auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs).
After downloading the pt and safetensors files for the chosen step, you need to use them together: the pt file is loaded as an embedding, while the safetensors file is loaded as the LoRA.
For example, to use the model from step 1500, download `1500/blanc_nikke.pt` as the embedding and `1500/blanc_nikke.safetensors` as the LoRA. With both files loaded together, you can generate images of the desired character.
**The trigger word is `blanc_nikke`.**
The following steps are available:
| Steps | pattern_1 | bikini | free | nude | Download |
|--------:|:-----------------------------------------------|:-----------------------------------------|:-------------------------------------|:-----------------------------------------------|:---------------------------------|
| 1500 |  |  |  | [<NSFW, click to see>](1500/previews/nude.png) | [Download](1500/blanc_nikke.zip) |
| 1400 |  |  |  | [<NSFW, click to see>](1400/previews/nude.png) | [Download](1400/blanc_nikke.zip) |
| 1300 |  |  |  | [<NSFW, click to see>](1300/previews/nude.png) | [Download](1300/blanc_nikke.zip) |
| 1200 |  |  |  | [<NSFW, click to see>](1200/previews/nude.png) | [Download](1200/blanc_nikke.zip) |
| 1100 |  |  |  | [<NSFW, click to see>](1100/previews/nude.png) | [Download](1100/blanc_nikke.zip) |
| 1000 |  |  |  | [<NSFW, click to see>](1000/previews/nude.png) | [Download](1000/blanc_nikke.zip) |
| 900 |  |  |  | [<NSFW, click to see>](900/previews/nude.png) | [Download](900/blanc_nikke.zip) |
| 800 |  |  |  | [<NSFW, click to see>](800/previews/nude.png) | [Download](800/blanc_nikke.zip) |
| 700 |  |  |  | [<NSFW, click to see>](700/previews/nude.png) | [Download](700/blanc_nikke.zip) |
| 600 |  |  |  | [<NSFW, click to see>](600/previews/nude.png) | [Download](600/blanc_nikke.zip) |
| 500 |  |  |  | [<NSFW, click to see>](500/previews/nude.png) | [Download](500/blanc_nikke.zip) |
| 400 |  |  |  | [<NSFW, click to see>](400/previews/nude.png) | [Download](400/blanc_nikke.zip) |
| 300 |  |  |  | [<NSFW, click to see>](300/previews/nude.png) | [Download](300/blanc_nikke.zip) |
| 200 |  |  |  | [<NSFW, click to see>](200/previews/nude.png) | [Download](200/blanc_nikke.zip) |
| 100 |  |  |  | [<NSFW, click to see>](100/previews/nude.png) | [Download](100/blanc_nikke.zip) |
|
TransformerTales/llama-2-7b-8bit-nested
|
TransformerTales
| 2023-08-05T20:31:25Z | 22 | 2 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"region:us"
] |
text-generation
| 2023-07-31T01:20:44Z |
---
license: mit
---
I used Google Colab to quantize and nest the Llama 2 7B model. This should help those who want to run Llama 2 7B on a low-end computer.
A GPU is still recommended, though.
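A hedged loading sketch (assumes `bitsandbytes` and `accelerate` are installed; how `from_pretrained` treats the pre-quantized weights depends on how they were serialized, so verify against the repo's files):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("TransformerTales/llama-2-7b-8bit-nested")
model = AutoModelForCausalLM.from_pretrained(
    "TransformerTales/llama-2-7b-8bit-nested",
    device_map="auto",  # lets accelerate place layers on GPU/CPU as available
)
```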
|
SargeZT/controlnet-v1e-sdxl-depth
|
SargeZT
| 2023-08-05T20:10:22Z | 76 | 36 |
diffusers
|
[
"diffusers",
"safetensors",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"controlnet",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-07-29T10:16:56Z |
---
license: creativeml-openrail-m
base_model: stabilityai/stable-diffusion-xl-base-1.0
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- controlnet
inference: true
---
# controlnet-SargeZT/controlnet-v1e-sdxl-depth
These are controlnet weights trained on stabilityai/stable-diffusion-xl-base-1.0 with depth maps. Note that the input depth maps are perceptually mapped from ZoeDepth.
You can find some example images below.
prompt: nightmare construction worker, unsettling

prompt: android warrior, unsettling

## License
[SDXL 1.0 License](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0/blob/main/LICENSE.md)
|
arpan-das-astrophysics/a2c-AntBulletEnv-v0
|
arpan-das-astrophysics
| 2023-08-05T19:51:45Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-05T19:50:35Z |
---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 1294.48 +/- 215.33
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of an **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is assumed from the usual SB3 naming; check the repo's file list):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Filename assumed from the common `<algo>-<env>.zip` convention.
checkpoint = load_from_hub("arpan-das-astrophysics/a2c-AntBulletEnv-v0", "a2c-AntBulletEnv-v0.zip")
model = A2C.load(checkpoint)
```
|
o33iemars/Gpt
|
o33iemars
| 2023-08-05T19:44:14Z | 0 | 0 | null |
[
"license:bigscience-openrail-m",
"region:us"
] | null | 2023-08-05T19:41:46Z |
---
license: bigscience-openrail-m
---
|
Surya-Teja-Menta/PPO-LunarLander-v2
|
Surya-Teja-Menta
| 2023-08-05T19:41:57Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-05T19:05:58Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: MLPpolicy
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 273.64 +/- 14.87
name: mean_reward
verified: false
---
# **MLPpolicy** Agent playing **LunarLander-v2**
This is a trained model of an **MLPpolicy** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is assumed from the usual SB3 naming; check the repo's file list):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Filename assumed from the common `<algo>-<env>.zip` convention.
checkpoint = load_from_hub("Surya-Teja-Menta/PPO-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
kejolong/police
|
kejolong
| 2023-08-05T19:35:46Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-08-05T19:30:51Z |
---
license: creativeml-openrail-m
---
|
arhamk/a2c-AntBulletEnv-v0
|
arhamk
| 2023-08-05T19:27:40Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-05T19:26:33Z |
---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 925.76 +/- 168.74
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of an **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is assumed from the usual SB3 naming; check the repo's file list):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Filename assumed from the common `<algo>-<env>.zip` convention.
checkpoint = load_from_hub("arhamk/a2c-AntBulletEnv-v0", "a2c-AntBulletEnv-v0.zip")
model = A2C.load(checkpoint)
```
|
CyberHarem/brid_nikke
|
CyberHarem
| 2023-08-05T19:19:38Z | 0 | 0 | null |
[
"art",
"text-to-image",
"dataset:CyberHarem/brid_nikke",
"license:mit",
"region:us"
] |
text-to-image
| 2023-08-05T19:15:31Z |
---
license: mit
datasets:
- CyberHarem/brid_nikke
pipeline_tag: text-to-image
tags:
- art
---
# Lora of brid_nikke
This model is trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion). And the auto-training framework is maintained by [DeepGHS Team](https://huggingface.co/deepghs).
After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora.
For example, if you want to use the model from step 1500, you need to download `1500/brid_nikke.pt` as the embedding and `1500/brid_nikke.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters.
**The trigger word is `brid_nikke`.**
These are available steps:
| Steps | bikini | free | nude | Download |
|--------:|:-----------------------------------------|:-------------------------------------|:-----------------------------------------------|:--------------------------------|
| 1500 |  |  | [<NSFW, click to see>](1500/previews/nude.png) | [Download](1500/brid_nikke.zip) |
| 1400 |  |  | [<NSFW, click to see>](1400/previews/nude.png) | [Download](1400/brid_nikke.zip) |
| 1300 |  |  | [<NSFW, click to see>](1300/previews/nude.png) | [Download](1300/brid_nikke.zip) |
| 1200 |  |  | [<NSFW, click to see>](1200/previews/nude.png) | [Download](1200/brid_nikke.zip) |
| 1100 |  |  | [<NSFW, click to see>](1100/previews/nude.png) | [Download](1100/brid_nikke.zip) |
| 1000 |  |  | [<NSFW, click to see>](1000/previews/nude.png) | [Download](1000/brid_nikke.zip) |
| 900 |  |  | [<NSFW, click to see>](900/previews/nude.png) | [Download](900/brid_nikke.zip) |
| 800 |  |  | [<NSFW, click to see>](800/previews/nude.png) | [Download](800/brid_nikke.zip) |
| 700 |  |  | [<NSFW, click to see>](700/previews/nude.png) | [Download](700/brid_nikke.zip) |
| 600 |  |  | [<NSFW, click to see>](600/previews/nude.png) | [Download](600/brid_nikke.zip) |
| 500 |  |  | [<NSFW, click to see>](500/previews/nude.png) | [Download](500/brid_nikke.zip) |
| 400 |  |  | [<NSFW, click to see>](400/previews/nude.png) | [Download](400/brid_nikke.zip) |
| 300 |  |  | [<NSFW, click to see>](300/previews/nude.png) | [Download](300/brid_nikke.zip) |
| 200 |  |  | [<NSFW, click to see>](200/previews/nude.png) | [Download](200/brid_nikke.zip) |
| 100 |  |  | [<NSFW, click to see>](100/previews/nude.png) | [Download](100/brid_nikke.zip) |
|
EdJ1234/finetuned_llama
|
EdJ1234
| 2023-08-05T18:58:57Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-05T18:57:14Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training (a code sketch reconstructing it follows the list):
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
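As an illustration, the list above maps onto a `transformers` `BitsAndBytesConfig` like this (a sketch; arguments not listed keep library defaults):
```python
from transformers import BitsAndBytesConfig

# Mirrors the logged config above; pass it via `from_pretrained(..., quantization_config=bnb_config)`.
bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
    llm_int8_threshold=6.0,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
)
```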
### Framework versions
- PEFT 0.4.0
|
tilyupo/t5-base-trivia-ca2q
|
tilyupo
| 2023-08-05T18:45:13Z | 60 | 0 |
transformers
|
[
"transformers",
"tf",
"t5",
"text2text-generation",
"generated_from_keras_callback",
"base_model:google/flan-t5-base",
"base_model:finetune:google/flan-t5-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-08-04T08:15:43Z |
---
license: apache-2.0
base_model: google/flan-t5-base
tags:
- generated_from_keras_callback
model-index:
- name: t5-base-trivia-v2-ca2q
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# t5-base-trivia-v2-ca2q
This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.2541
- Validation Loss: 0.3480
- Epoch: 2
<pre>
{'eval_loss': 1.2103511095046997,
'eval_bleu': 19.63270019311908,
'eval_rouge1': 57.01,
'eval_rouge2': 33.76,
'eval_rougeL': 49.73,
'eval_rougeLsum': 49.74,
'eval_exact': 0.022446798173161014,
'eval_runtime': 224.6161,
'eval_samples_per_second': 45.816,
'eval_steps_per_second': 1.434}
</pre>
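A minimal inference sketch with the `transformers` pipeline; the checkpoint is TensorFlow (hence `framework="tf"`), and the input format below is a guess from the model name (context + answer in, question out), so check the training code for the real one:
```python
from transformers import pipeline

# Hypothetical input format: the "ca2q" name suggests context + answer -> question.
q_gen = pipeline("text2text-generation", model="tilyupo/t5-base-trivia-ca2q", framework="tf")
print(q_gen("context: The Eiffel Tower is in Paris. answer: Paris")[0]["generated_text"])
```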
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adafactor', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 0.001, 'beta_2_decay': -0.8, 'epsilon_1': 1e-30, 'epsilon_2': 0.001, 'clip_threshold': 1.0, 'relative_step': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.5159 | 0.3420 | 0 |
| 0.3061 | 0.3373 | 1 |
| 0.2541 | 0.3480 | 2 |
### Framework versions
- Transformers 4.31.0
- TensorFlow 2.12.0
- Datasets 2.14.3
- Tokenizers 0.13.3
|
VicBeltran/dqn-SpaceInvadersNoFrameskip-v4
|
VicBeltran
| 2023-08-05T18:44:33Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-05T18:41:04Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 332.50 +/- 92.99
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```bash
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga VicBeltran -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```bash
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga VicBeltran -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```bash
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga VicBeltran
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
## Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
TheRains/yt-special-batch88
|
TheRains
| 2023-08-05T18:31:52Z | 118 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:mozilla-foundation/common_voice_9_0",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-08-05T01:35:26Z |
---
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_9_0
metrics:
- wer
model-index:
- name: yt-special-batch88
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: mozilla-foundation/common_voice_9_0 id
type: mozilla-foundation/common_voice_9_0
config: id
split: train
args: id
metrics:
- name: Wer
type: wer
value: 5.357219480798112
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# yt-special-batch88
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the mozilla-foundation/common_voice_9_0 id dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2602
- Wer: 5.3572
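A minimal inference sketch using the `transformers` ASR pipeline (the audio path is a placeholder; any 16 kHz mono clip should work):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="TheRains/yt-special-batch88")
print(asr("sample.wav")["text"])  # "sample.wav" is a hypothetical local file
```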
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 37.1656 | 1.58 | 1000 | 31.4152 | 569.1440 |
| 15.0344 | 3.17 | 2000 | 13.2072 | 144.3489 |
| 7.6075 | 4.75 | 3000 | 5.8946 | 42.3836 |
| 2.5225 | 6.34 | 4000 | 2.0158 | 19.5430 |
| 0.5364 | 7.92 | 5000 | 0.2602 | 5.3572 |
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
EdJ1234/lora-peft-v1
|
EdJ1234
| 2023-08-05T18:31:10Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-04T18:52:01Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.4.0
|
openflamingo/OpenFlamingo-9B-vitl-mpt7b
|
openflamingo
| 2023-08-05T18:27:50Z | 0 | 41 | null |
[
"en",
"dataset:laion2b",
"arxiv:2308.01390",
"arxiv:2210.08402",
"arxiv:2304.06939",
"region:us"
] | null | 2023-06-13T21:22:51Z |
---
language: en
datasets:
- laion2b
---
# OpenFlamingo-9B (CLIP ViT-L/14, MPT-7B)
[Paper](https://arxiv.org/abs/2308.01390) | [Blog post](https://laion.ai/blog/open-flamingo-v2/) | [Code](https://github.com/mlfoundations/open_flamingo) | [Demo](https://huggingface.co/spaces/openflamingo/OpenFlamingo)
OpenFlamingo is an open source implementation of DeepMind's [Flamingo](https://www.deepmind.com/blog/tackling-multiple-tasks-with-a-single-visual-language-model) models.
This 9B-parameter model uses a [CLIP ViT-L/14](https://huggingface.co/openai/clip-vit-large-patch14) vision encoder and [MPT-7B](https://huggingface.co/mosaicml/mpt-7b) language model.
## Model Details
We follow the Flamingo modeling paradigm, outfitting the layers of a pretrained, frozen language model such that they cross-attend to visual features when decoding. Following Flamingo, we freeze the vision encoder and language model but train the connecting modules on web-scraped image-text sequences. Specifically, we trained this model on a mixture of [LAION-2B](https://arxiv.org/abs/2210.08402) and [Multimodal C4](https://arxiv.org/abs/2304.06939).
This model has cross-attention modules inserted in *every fourth* decoder block. It was trained using DistributedDataParallel across 64 A100 80GB GPUs at automatic BF16 mixed precision.
To use these MPT weights, OpenFlamingo must be initialized using revision `68e1a8e0ebb9b30f3c45c1ef6195980f29063ae2` of the MPT-7B modeling code. We suggest using [this copy of the model](https://huggingface.co/anas-awadalla/mpt-7b) to ensure the code is loaded at that commit.
## Uses
OpenFlamingo models process arbitrarily interleaved sequences of images and text to output text. This allows the models to accept in-context examples and undertake tasks like captioning, visual question answering, and image classification.
### Initialization
``` python
from open_flamingo import create_model_and_transforms
model, image_processor, tokenizer = create_model_and_transforms(
clip_vision_encoder_path="ViT-L-14",
clip_vision_encoder_pretrained="openai",
lang_encoder_path="anas-awadalla/mpt-7b",
tokenizer_path="anas-awadalla/mpt-7b",
cross_attn_every_n_layers=4
)
# grab model checkpoint from huggingface hub
from huggingface_hub import hf_hub_download
import torch
checkpoint_path = hf_hub_download("openflamingo/OpenFlamingo-9B-vitl-mpt7b", "checkpoint.pt")
model.load_state_dict(torch.load(checkpoint_path), strict=False)
```
### Generation example
Below is an example of generating text conditioned on interleaved images/text. In particular, let's try few-shot image captioning.
``` python
from PIL import Image
import requests
"""
Step 1: Load images
"""
demo_image_one = Image.open(
requests.get(
"http://images.cocodataset.org/val2017/000000039769.jpg", stream=True
).raw
)
demo_image_two = Image.open(
requests.get(
"http://images.cocodataset.org/test-stuff2017/000000028137.jpg",
stream=True
).raw
)
query_image = Image.open(
requests.get(
"http://images.cocodataset.org/test-stuff2017/000000028352.jpg",
stream=True
).raw
)
"""
Step 2: Preprocessing images
Details: For OpenFlamingo, we expect the image to be a torch tensor of shape
batch_size x num_media x num_frames x channels x height x width.
In this case batch_size = 1, num_media = 3, num_frames = 1,
channels = 3, height = 224, width = 224.
"""
vision_x = [image_processor(demo_image_one).unsqueeze(0), image_processor(demo_image_two).unsqueeze(0), image_processor(query_image).unsqueeze(0)]
vision_x = torch.cat(vision_x, dim=0)
vision_x = vision_x.unsqueeze(1).unsqueeze(0)
"""
Step 3: Preprocessing text
Details: In the text we expect an <image> special token to indicate where an image is.
We also expect an <|endofchunk|> special token to indicate the end of the text
portion associated with an image.
"""
tokenizer.padding_side = "left" # For generation padding tokens should be on the left
lang_x = tokenizer(
["<image>An image of two cats.<|endofchunk|><image>An image of a bathroom sink.<|endofchunk|><image>An image of"],
return_tensors="pt",
)
"""
Step 4: Generate text
"""
generated_text = model.generate(
vision_x=vision_x,
lang_x=lang_x["input_ids"],
attention_mask=lang_x["attention_mask"],
max_new_tokens=20,
num_beams=3,
)
print("Generated text: ", tokenizer.decode(generated_text[0]))
```
### Bias, Risks, and Limitations
OpenFlamingo models inherit the risks of their parent models, especially the language model. As an open-source research effort, we highly value open, accessible, reproducible multimodal model research; however, it is crucial to be aware that these models are trained on web data, have not been finetuned for safety, and thus may produce unintended, inappropriate, unreliable, and/or inaccurate outputs. Please use caution before deploying OpenFlamingo models in real applications. We also hope that OpenFlamingo enables further safety and reliability research to address these issues.
In an effort to mitigate current potential biases and harms, we have deployed a text content filter on model outputs in the OpenFlamingo demo. We continue to red-team the model to understand and improve its safety.
## Evaluation
<table>
<tr>
<th></th>
<th>0-shot</th>
<th>4-shot</th>
<th>8-shot</th>
<th>16-shot</th>
<th>32-shot</th>
</tr>
<tr>
<th>COCO (CIDEr)</th>
<td>79.5 (0.2)</td>
<td>89.0 (0.3)</td>
<td>96.3 (0.1)</td>
<td>98.8 (0.7)</td>
<td>99.5 (0.1)</td>
</tr>
<tr>
<th>VQAv2 (Accuracy)</th>
<td>50.3 (0.7)</td>
<td>50.5 (0.5)</td>
<td>52.8 (0.3)</td>
<td>52.3 (0.3)</td>
<td>50.5 (0.0)</td>
</tr>
<tr>
<th>Flickr-30K (CIDEr)</th>
<td>59.5 (1.0)</td>
<td>65.8 (0.6)</td>
<td>62.9 (1.0)</td>
<td>62.8 (1.0)</td>
<td>61.3 (0.7)</td>
</tr>
<tr>
<th>OK-VQA (Accuracy)</th>
<td>34.7 (0.1)</td>
<td>34.3 (0.1)</td>
<td>38.4 (0.0)</td>
<td>39.5 (0.1)</td>
<td>38.1 (0.0)</td>
</tr>
<tr>
<th>TextVQA (Accuracy)</th>
<td>24.2 (0.5)</td>
<td>28.2 (0.4)</td>
<td>29.1 (0.1)</td>
<td>27.3 (0.1)</td>
<td>23.8 (0.2)</td>
</tr>
<tr>
<th>Vizwiz (Accuracy)</th>
<td>17.7 (0.7)</td>
<td>23.1 (0.9)</td>
<td>31.6 (1.5)</td>
<td>38.0 (1.1)</td>
<td>40.2 (0.7)</td>
</tr>
<tr>
<th>Hateful Memes (ROC AUC)</th>
<td>50.8 (4.7)</td>
<td>47.5 (2.2)</td>
<td>45.2 (2.7)</td>
<td>46.9 (3.8)</td>
<td>52.0 (2.1)</td>
</tr>
</table>
|
openflamingo/OpenFlamingo-3B-vitl-mpt1b
|
openflamingo
| 2023-08-05T18:27:20Z | 0 | 11 | null |
[
"en",
"dataset:laion2b",
"arxiv:2308.01390",
"arxiv:2210.08402",
"arxiv:2304.06939",
"region:us"
] | null | 2023-06-13T21:22:05Z |
---
language: en
datasets:
- laion2b
---
# OpenFlamingo-3B (CLIP ViT-L/14, MPT-1B)
[Paper](https://arxiv.org/abs/2308.01390) | [Blog post](https://laion.ai/blog/open-flamingo-v2/) | [Code](https://github.com/mlfoundations/open_flamingo) | [Demo](https://huggingface.co/spaces/openflamingo/OpenFlamingo)
OpenFlamingo is an open source implementation of DeepMind's [Flamingo](https://www.deepmind.com/blog/tackling-multiple-tasks-with-a-single-visual-language-model) models.
This 3B-parameter model uses a [CLIP ViT-L/14](https://huggingface.co/openai/clip-vit-large-patch14) vision encoder and [MPT-1B](https://huggingface.co/mosaicml/mpt-1b-redpajama-200b) language model.
## Model Details
We follow the Flamingo modeling paradigm, outfitting the layers of a pretrained, frozen language model such that they cross-attend to visual features when decoding. Following Flamingo, we freeze the vision encoder and language model but train the connecting modules on web-scraped image-text sequences. Specifically, we trained this model on a mixture of [LAION-2B](https://arxiv.org/abs/2210.08402) and [Multimodal C4](https://arxiv.org/abs/2304.06939).
This model has cross-attention modules inserted in *every* decoder block. It was trained using DistributedDataParallel across 64 A100 80GB GPUs at FP32 precision.
The [MPT-1B](https://huggingface.co/mosaicml/mpt-1b-redpajama-200b) modeling code does not accept the `labels` kwarg or compute cross-entropy loss within `forward()`. To train with the OpenFlamingo codebase, we suggest a version with the `labels` kwarg [here](https://huggingface.co/anas-awadalla/mpt-1b-redpajama-200b).
## Uses
OpenFlamingo models process arbitrarily interleaved sequences of images and text to output text. This allows the models to accept in-context examples and undertake tasks like captioning, visual question answering, and image classification.
### Initialization
``` python
from open_flamingo import create_model_and_transforms
model, image_processor, tokenizer = create_model_and_transforms(
clip_vision_encoder_path="ViT-L-14",
clip_vision_encoder_pretrained="openai",
lang_encoder_path="anas-awadalla/mpt-1b-redpajama-200b",
tokenizer_path="anas-awadalla/mpt-1b-redpajama-200b",
cross_attn_every_n_layers=1
)
# grab model checkpoint from huggingface hub
from huggingface_hub import hf_hub_download
import torch
checkpoint_path = hf_hub_download("openflamingo/OpenFlamingo-3B-vitl-mpt1b", "checkpoint.pt")
model.load_state_dict(torch.load(checkpoint_path), strict=False)
```
### Generation example
Below is an example of generating text conditioned on interleaved images/text. In particular, let's try few-shot image captioning.
``` python
from PIL import Image
import requests
"""
Step 1: Load images
"""
demo_image_one = Image.open(
requests.get(
"http://images.cocodataset.org/val2017/000000039769.jpg", stream=True
).raw
)
demo_image_two = Image.open(
requests.get(
"http://images.cocodataset.org/test-stuff2017/000000028137.jpg",
stream=True
).raw
)
query_image = Image.open(
requests.get(
"http://images.cocodataset.org/test-stuff2017/000000028352.jpg",
stream=True
).raw
)
"""
Step 2: Preprocessing images
Details: For OpenFlamingo, we expect the image to be a torch tensor of shape
batch_size x num_media x num_frames x channels x height x width.
In this case batch_size = 1, num_media = 3, num_frames = 1,
channels = 3, height = 224, width = 224.
"""
vision_x = [image_processor(demo_image_one).unsqueeze(0), image_processor(demo_image_two).unsqueeze(0), image_processor(query_image).unsqueeze(0)]
vision_x = torch.cat(vision_x, dim=0)
vision_x = vision_x.unsqueeze(1).unsqueeze(0)
"""
Step 3: Preprocessing text
Details: In the text we expect an <image> special token to indicate where an image is.
We also expect an <|endofchunk|> special token to indicate the end of the text
portion associated with an image.
"""
tokenizer.padding_side = "left" # For generation padding tokens should be on the left
lang_x = tokenizer(
["<image>An image of two cats.<|endofchunk|><image>An image of a bathroom sink.<|endofchunk|><image>An image of"],
return_tensors="pt",
)
"""
Step 4: Generate text
"""
generated_text = model.generate(
vision_x=vision_x,
lang_x=lang_x["input_ids"],
attention_mask=lang_x["attention_mask"],
max_new_tokens=20,
num_beams=3,
)
print("Generated text: ", tokenizer.decode(generated_text[0]))
```
### Bias, Risks, and Limitations
OpenFlamingo models inherit the risks of their parent models, especially the language model. As an open-source research effort, we highly value open, accessible, reproducible multimodal model research; however, it is crucial to be aware that these models are trained on web data, have not been finetuned for safety, and thus may produce unintended, inappropriate, unreliable, and/or inaccurate outputs. Please use caution before deploying OpenFlamingo models in real applications. We also hope that OpenFlamingo enables further safety and reliability research to address these issues.
In an effort to mitigate current potential biases and harms, we have deployed a text content filter on model outputs in the OpenFlamingo demo. We continue to red-team the model to understand and improve its safety.
## Evaluation
<table>
<tr>
<th></th>
<th>0-shot</th>
<th>4-shot</th>
<th>8-shot</th>
<th>16-shot</th>
<th>32-shot</th>
</tr>
<tr>
<th>COCO (CIDEr)</th>
<td>74.9 (0.2)</td>
<td>77.3 (0.3)</td>
<td>85.9 (0.6)</td>
<td>89.8 (0.2)</td>
<td>93.0 (0.6)</td>
</tr>
<tr>
<th>Flickr-30K (CIDEr)</th>
<td>52.3 (1.0)</td>
<td>57.2 (0.4)</td>
<td>58.6 (1.1)</td>
<td>59.2 (0.5)</td>
<td>61.1 (1.3)</td>
</tr>
<tr>
<th>VQAv2 (Accuracy)</th>
<td>44.6 (0.7)</td>
<td>45.9 (0.7)</td>
<td>45.8 (0.5)</td>
<td>45.5 (0.2)</td>
<td>45.8 (0.4)</td>
</tr>
<tr>
<th>OK-VQA (Accuracy)</th>
<td>26.8 (0.3)</td>
<td>27.6 (0.2)</td>
<td>27.7 (0.1)</td>
<td>28.4 (0.1)</td>
<td>29.3 (0.2)</td>
</tr>
<tr>
<th>TextVQA (Accuracy)</th>
<td>22.8 (0.2)</td>
<td>25.8 (0.2)</td>
<td>24.7 (0.1)</td>
<td>25.2 (0.2)</td>
<td>26.3 (0.2)</td>
</tr>
<tr>
<th>Vizwiz (Accuracy)</th>
<td>18.3 (0.6)</td>
<td>23.3 (1.1)</td>
<td>31.8 (0.7)</td>
<td>38.4 (1.1)</td>
<td>42.1 (0.6)</td>
</tr>
<tr>
<th>Hateful Memes (ROC AUC)</th>
<td>51.4 (3.3)</td>
<td>51.4 (0.6)</td>
<td>52.1 (0.7)</td>
<td>51.6 (1.1)</td>
<td>51.6 (1.6)</td>
</tr>
</table>
|
psxjp5/mt5-small_mid_lr_mid_decay
|
psxjp5
| 2023-08-05T18:08:37Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"generated_from_trainer",
"base_model:google/mt5-small",
"base_model:finetune:google/mt5-small",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-08-05T15:21:02Z |
---
license: apache-2.0
base_model: google/mt5-small
tags:
- generated_from_trainer
metrics:
- rouge
- bleu
model-index:
- name: mt5-small_mid_lr_mid_decay
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-small_mid_lr_mid_decay
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7428
- Rouge1: 43.12
- Rouge2: 37.6639
- Rougel: 41.8367
- Rougelsum: 41.904
- Bleu: 31.957
- Gen Len: 12.1285
- Meteor: 0.3936
- No ans accuracy: 22.29
- Av cosine sim: 0.7406
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 9
- gradient_accumulation_steps: 8
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Bleu | Gen Len | Meteor | No ans accuracy | Av cosine sim |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|:-------:|:------:|:---------------:|:-------------:|
| 3.1455 | 1.0 | 175 | 0.9832 | 18.7107 | 15.4897 | 18.1977 | 18.2212 | 7.0634 | 7.6229 | 0.1626 | 22.4000 | 0.3949 |
| 1.1623 | 1.99 | 350 | 0.8542 | 38.7675 | 32.704 | 37.3557 | 37.3949 | 27.4323 | 12.5135 | 0.3487 | 17.9900 | 0.6992 |
| 0.9431 | 2.99 | 525 | 0.8017 | 41.6216 | 35.6002 | 40.2386 | 40.2881 | 30.7994 | 12.8117 | 0.3755 | 18.37 | 0.7304 |
| 0.8119 | 3.98 | 700 | 0.7787 | 43.5805 | 37.4117 | 42.1059 | 42.155 | 32.9646 | 13.2176 | 0.3947 | 17.7400 | 0.7582 |
| 0.7235 | 4.98 | 875 | 0.7477 | 43.4124 | 37.2017 | 41.8468 | 41.9097 | 32.9345 | 13.116 | 0.3946 | 18.92 | 0.7561 |
| 0.6493 | 5.97 | 1050 | 0.7266 | 40.4764 | 34.9927 | 39.0999 | 39.1711 | 29.0601 | 11.748 | 0.3687 | 22.6500 | 0.7071 |
| 0.5871 | 6.97 | 1225 | 0.7284 | 43.3812 | 37.5544 | 42.0405 | 42.0865 | 32.8345 | 12.6063 | 0.3949 | 21.05 | 0.7485 |
| 0.5453 | 7.96 | 1400 | 0.7389 | 43.4549 | 37.76 | 42.1025 | 42.215 | 32.6726 | 12.4537 | 0.3965 | 21.44 | 0.7496 |
| 0.5038 | 8.96 | 1575 | 0.7428 | 43.12 | 37.6639 | 41.8367 | 41.904 | 31.957 | 12.1285 | 0.3936 | 22.29 | 0.7406 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
xzuyn/Pygmalion-V3-6B-GGML
|
xzuyn
| 2023-08-05T18:08:10Z | 0 | 7 | null |
[
"gptj",
"gpt-j",
"region:us"
] | null | 2023-05-23T00:55:19Z |
---
tags:
- gptj
- gpt-j
---
# For use with [KoboldCPP](https://github.com/LostRuins/koboldcpp)
Original Model: https://huggingface.co/PygmalionAI/pygmalion-6b
|
CyberHarem/rapi_nikke
|
CyberHarem
| 2023-08-05T18:00:29Z | 0 | 0 | null |
[
"art",
"text-to-image",
"dataset:CyberHarem/rapi_nikke",
"license:mit",
"region:us"
] |
text-to-image
| 2023-08-05T17:56:30Z |
---
license: mit
datasets:
- CyberHarem/rapi_nikke
pipeline_tag: text-to-image
tags:
- art
---
# Lora of rapi_nikke
This model is trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion). And the auto-training framework is maintained by [DeepGHS Team](https://huggingface.co/deepghs).
After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora.
For example, if you want to use the model from step 1500, you need to download `1500/rapi_nikke.pt` as the embedding and `1500/rapi_nikke.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters.
**The trigger word is `rapi_nikke`.**
These are available steps:
| Steps | pattern_1 | pattern_2 | pattern_3 | bikini | free | nude | Download |
|--------:|:-----------------------------------------------|:----------------------------------------------------|:-----------------------------------------------|:-----------------------------------------|:-------------------------------------|:-----------------------------------------------|:--------------------------------|
| 1500 |  | [<NSFW, click to see>](1500/previews/pattern_2.png) |  |  |  | [<NSFW, click to see>](1500/previews/nude.png) | [Download](1500/rapi_nikke.zip) |
| 1400 |  | [<NSFW, click to see>](1400/previews/pattern_2.png) |  |  |  | [<NSFW, click to see>](1400/previews/nude.png) | [Download](1400/rapi_nikke.zip) |
| 1300 |  | [<NSFW, click to see>](1300/previews/pattern_2.png) |  |  |  | [<NSFW, click to see>](1300/previews/nude.png) | [Download](1300/rapi_nikke.zip) |
| 1200 |  | [<NSFW, click to see>](1200/previews/pattern_2.png) |  |  |  | [<NSFW, click to see>](1200/previews/nude.png) | [Download](1200/rapi_nikke.zip) |
| 1100 |  | [<NSFW, click to see>](1100/previews/pattern_2.png) |  |  |  | [<NSFW, click to see>](1100/previews/nude.png) | [Download](1100/rapi_nikke.zip) |
| 1000 |  | [<NSFW, click to see>](1000/previews/pattern_2.png) |  |  |  | [<NSFW, click to see>](1000/previews/nude.png) | [Download](1000/rapi_nikke.zip) |
| 900 |  | [<NSFW, click to see>](900/previews/pattern_2.png) |  |  |  | [<NSFW, click to see>](900/previews/nude.png) | [Download](900/rapi_nikke.zip) |
| 800 |  | [<NSFW, click to see>](800/previews/pattern_2.png) |  |  |  | [<NSFW, click to see>](800/previews/nude.png) | [Download](800/rapi_nikke.zip) |
| 700 |  | [<NSFW, click to see>](700/previews/pattern_2.png) |  |  |  | [<NSFW, click to see>](700/previews/nude.png) | [Download](700/rapi_nikke.zip) |
| 600 |  | [<NSFW, click to see>](600/previews/pattern_2.png) |  |  |  | [<NSFW, click to see>](600/previews/nude.png) | [Download](600/rapi_nikke.zip) |
| 500 |  | [<NSFW, click to see>](500/previews/pattern_2.png) |  |  |  | [<NSFW, click to see>](500/previews/nude.png) | [Download](500/rapi_nikke.zip) |
| 400 |  | [<NSFW, click to see>](400/previews/pattern_2.png) |  |  |  | [<NSFW, click to see>](400/previews/nude.png) | [Download](400/rapi_nikke.zip) |
| 300 |  | [<NSFW, click to see>](300/previews/pattern_2.png) |  |  |  | [<NSFW, click to see>](300/previews/nude.png) | [Download](300/rapi_nikke.zip) |
| 200 |  | [<NSFW, click to see>](200/previews/pattern_2.png) |  |  |  | [<NSFW, click to see>](200/previews/nude.png) | [Download](200/rapi_nikke.zip) |
| 100 |  | [<NSFW, click to see>](100/previews/pattern_2.png) |  |  |  | [<NSFW, click to see>](100/previews/nude.png) | [Download](100/rapi_nikke.zip) |
|
SujitShelar/bloom-1b1-lora-tagger
|
SujitShelar
| 2023-08-05T17:53:05Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-05T17:46:47Z |
---
library_name: peft
---
Followed Sam Witteveen's tutorial on YouTube for this.
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.5.0.dev0
|
arhamk/ppo-Pyramids
|
arhamk
| 2023-08-05T17:50:26Z | 1 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] |
reinforcement-learning
| 2023-08-05T17:38:13Z |
---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: arhamk/ppo-Pyramids
3. Select your *.nn* or *.onnx* file
4. Click on *Watch the agent play* 👀
|
Indra99-01/food_semeval_bigscience_bloomz-560m_PROMPT_TUNING_CAUSAL_LM_v1_50.pt
|
Indra99-01
| 2023-08-05T17:48:55Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-05T17:48:54Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0
|
Indra99-01/food_semeval_bigscience_bloomz-560m_PROMPT_TUNING_CAUSAL_LM_v1.pt50
|
Indra99-01
| 2023-08-05T17:46:53Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-05T17:46:52Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0
|
asparius/my_new_modelasd
|
asparius
| 2023-08-05T17:43:14Z | 1 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2023-08-05T17:43:03Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# asparius/my_new_modelasd
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('asparius/my_new_modelasd')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('asparius/my_new_modelasd')
model = AutoModel.from_pretrained('asparius/my_new_modelasd')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=asparius/my_new_modelasd)
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 64, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
MattStammers/a2c-PandaReachDense-v2-take2
|
MattStammers
| 2023-08-05T17:41:29Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-05T14:31:08Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -3.97 +/- 0.71
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of an **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is assumed from the usual SB3 naming; check the repo's file list):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Filename assumed from the common `<algo>-<env>.zip` convention.
checkpoint = load_from_hub("MattStammers/a2c-PandaReachDense-v2-take2", "a2c-PandaReachDense-v2.zip")
model = A2C.load(checkpoint)
```
|
CyberHarem/anis_nikke
|
CyberHarem
| 2023-08-05T17:41:09Z | 0 | 0 | null |
[
"art",
"text-to-image",
"dataset:CyberHarem/anis_nikke",
"license:mit",
"region:us"
] |
text-to-image
| 2023-08-05T17:36:33Z |
---
license: mit
datasets:
- CyberHarem/anis_nikke
pipeline_tag: text-to-image
tags:
- art
---
# Lora of anis_nikke
This model is trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion). And the auto-training framework is maintained by [DeepGHS Team](https://huggingface.co/deepghs).
After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora.
For example, if you want to use the model from step 1500, you need to download `1500/anis_nikke.pt` as the embedding and `1500/anis_nikke.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters.
**The trigger word is `anis_nikke`.**
These are available steps:
| Steps | pattern_1 | pattern_2 | pattern_3 | bikini | free | nude | Download |
|--------:|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-------------------------------------------------|:-------------------------------------|:-----------------------------------------------|:--------------------------------|
| 1500 |  |  |  | [<NSFW, click to see>](1500/previews/bikini.png) |  | [<NSFW, click to see>](1500/previews/nude.png) | [Download](1500/anis_nikke.zip) |
| 1400 |  |  |  | [<NSFW, click to see>](1400/previews/bikini.png) |  | [<NSFW, click to see>](1400/previews/nude.png) | [Download](1400/anis_nikke.zip) |
| 1300 |  |  |  | [<NSFW, click to see>](1300/previews/bikini.png) |  | [<NSFW, click to see>](1300/previews/nude.png) | [Download](1300/anis_nikke.zip) |
| 1200 |  |  |  | [<NSFW, click to see>](1200/previews/bikini.png) |  | [<NSFW, click to see>](1200/previews/nude.png) | [Download](1200/anis_nikke.zip) |
| 1100 |  |  |  | [<NSFW, click to see>](1100/previews/bikini.png) |  | [<NSFW, click to see>](1100/previews/nude.png) | [Download](1100/anis_nikke.zip) |
| 1000 |  |  |  | [<NSFW, click to see>](1000/previews/bikini.png) |  | [<NSFW, click to see>](1000/previews/nude.png) | [Download](1000/anis_nikke.zip) |
| 900 |  |  |  | [<NSFW, click to see>](900/previews/bikini.png) |  | [<NSFW, click to see>](900/previews/nude.png) | [Download](900/anis_nikke.zip) |
| 800 |  |  |  | [<NSFW, click to see>](800/previews/bikini.png) |  | [<NSFW, click to see>](800/previews/nude.png) | [Download](800/anis_nikke.zip) |
| 700 |  |  |  | [<NSFW, click to see>](700/previews/bikini.png) |  | [<NSFW, click to see>](700/previews/nude.png) | [Download](700/anis_nikke.zip) |
| 600 |  |  |  | [<NSFW, click to see>](600/previews/bikini.png) |  | [<NSFW, click to see>](600/previews/nude.png) | [Download](600/anis_nikke.zip) |
| 500 |  |  |  | [<NSFW, click to see>](500/previews/bikini.png) |  | [<NSFW, click to see>](500/previews/nude.png) | [Download](500/anis_nikke.zip) |
| 400 |  |  |  | [<NSFW, click to see>](400/previews/bikini.png) |  | [<NSFW, click to see>](400/previews/nude.png) | [Download](400/anis_nikke.zip) |
| 300 |  |  |  | [<NSFW, click to see>](300/previews/bikini.png) |  | [<NSFW, click to see>](300/previews/nude.png) | [Download](300/anis_nikke.zip) |
| 200 |  |  |  | [<NSFW, click to see>](200/previews/bikini.png) |  | [<NSFW, click to see>](200/previews/nude.png) | [Download](200/anis_nikke.zip) |
| 100 |  |  |  | [<NSFW, click to see>](100/previews/bikini.png) |  | [<NSFW, click to see>](100/previews/nude.png) | [Download](100/anis_nikke.zip) |
|
louie27/llama2-qlora-finetunined-french
|
louie27
| 2023-08-05T17:28:11Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-05T17:28:03Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.5.0.dev0
|
TheRains/cv9-special-batch12-small
|
TheRains
| 2023-08-05T17:21:46Z | 124 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"whisper-event",
"generated_from_trainer",
"id",
"dataset:mozilla-foundation/common_voice_9_0",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-08-05T14:31:27Z |
---
language:
- id
license: apache-2.0
base_model: openai/whisper-small
tags:
- whisper-event
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_9_0
metrics:
- wer
model-index:
- name: Whisper Small Indonesian
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: mozilla-foundation/common_voice_9_0 id
type: mozilla-foundation/common_voice_9_0
config: id
split: test
args: id
metrics:
- name: Wer
type: wer
value: 12.716816195077065
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Indonesian
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the mozilla-foundation/common_voice_9_0 id dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3176
- Wer: 12.7168
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 12
- eval_batch_size: 6
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.1925 | 1.45 | 1000 | 0.2543 | 14.2213 |
| 0.0624 | 2.9 | 2000 | 0.2487 | 12.8410 |
| 0.016 | 4.35 | 3000 | 0.2944 | 12.8594 |
| 0.0052 | 5.81 | 4000 | 0.3085 | 12.9653 |
| 0.0019 | 7.26 | 5000 | 0.3176 | 12.7168 |
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
CyberHarem/alice_nikke
|
CyberHarem
| 2023-08-05T17:21:15Z | 0 | 0 | null |
[
"art",
"text-to-image",
"dataset:CyberHarem/alice_nikke",
"license:mit",
"region:us"
] |
text-to-image
| 2023-08-05T17:15:18Z |
---
license: mit
datasets:
- CyberHarem/alice_nikke
pipeline_tag: text-to-image
tags:
- art
---
# Lora of alice_nikke
This model is trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion). And the auto-training framework is maintained by [DeepGHS Team](https://huggingface.co/deepghs).
After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora.
For example, if you want to use the model from step 1500, you need to download `1500/alice_nikke.pt` as the embedding and `1500/alice_nikke.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters.
**The trigger word is `alice_nikke`.**
These are available steps:
| Steps | pattern_1 | pattern_2 | pattern_3 | bikini | free | nude | Download |
|--------:|:----------------------------------------------------|:-----------------------------------------------|:----------------------------------------------------|:-------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:---------------------------------|
| 1500 | [<NSFW, click to see>](1500/previews/pattern_1.png) |  | [<NSFW, click to see>](1500/previews/pattern_3.png) | [<NSFW, click to see>](1500/previews/bikini.png) | [<NSFW, click to see>](1500/previews/free.png) | [<NSFW, click to see>](1500/previews/nude.png) | [Download](1500/alice_nikke.zip) |
| 1400 | [<NSFW, click to see>](1400/previews/pattern_1.png) |  | [<NSFW, click to see>](1400/previews/pattern_3.png) | [<NSFW, click to see>](1400/previews/bikini.png) | [<NSFW, click to see>](1400/previews/free.png) | [<NSFW, click to see>](1400/previews/nude.png) | [Download](1400/alice_nikke.zip) |
| 1300 | [<NSFW, click to see>](1300/previews/pattern_1.png) |  | [<NSFW, click to see>](1300/previews/pattern_3.png) | [<NSFW, click to see>](1300/previews/bikini.png) | [<NSFW, click to see>](1300/previews/free.png) | [<NSFW, click to see>](1300/previews/nude.png) | [Download](1300/alice_nikke.zip) |
| 1200 | [<NSFW, click to see>](1200/previews/pattern_1.png) |  | [<NSFW, click to see>](1200/previews/pattern_3.png) | [<NSFW, click to see>](1200/previews/bikini.png) | [<NSFW, click to see>](1200/previews/free.png) | [<NSFW, click to see>](1200/previews/nude.png) | [Download](1200/alice_nikke.zip) |
| 1100 | [<NSFW, click to see>](1100/previews/pattern_1.png) |  | [<NSFW, click to see>](1100/previews/pattern_3.png) | [<NSFW, click to see>](1100/previews/bikini.png) | [<NSFW, click to see>](1100/previews/free.png) | [<NSFW, click to see>](1100/previews/nude.png) | [Download](1100/alice_nikke.zip) |
| 1000 | [<NSFW, click to see>](1000/previews/pattern_1.png) |  | [<NSFW, click to see>](1000/previews/pattern_3.png) | [<NSFW, click to see>](1000/previews/bikini.png) | [<NSFW, click to see>](1000/previews/free.png) | [<NSFW, click to see>](1000/previews/nude.png) | [Download](1000/alice_nikke.zip) |
| 900 | [<NSFW, click to see>](900/previews/pattern_1.png) |  | [<NSFW, click to see>](900/previews/pattern_3.png) | [<NSFW, click to see>](900/previews/bikini.png) | [<NSFW, click to see>](900/previews/free.png) | [<NSFW, click to see>](900/previews/nude.png) | [Download](900/alice_nikke.zip) |
| 800 | [<NSFW, click to see>](800/previews/pattern_1.png) |  | [<NSFW, click to see>](800/previews/pattern_3.png) | [<NSFW, click to see>](800/previews/bikini.png) | [<NSFW, click to see>](800/previews/free.png) | [<NSFW, click to see>](800/previews/nude.png) | [Download](800/alice_nikke.zip) |
| 700 | [<NSFW, click to see>](700/previews/pattern_1.png) |  | [<NSFW, click to see>](700/previews/pattern_3.png) | [<NSFW, click to see>](700/previews/bikini.png) | [<NSFW, click to see>](700/previews/free.png) | [<NSFW, click to see>](700/previews/nude.png) | [Download](700/alice_nikke.zip) |
| 600 | [<NSFW, click to see>](600/previews/pattern_1.png) |  | [<NSFW, click to see>](600/previews/pattern_3.png) | [<NSFW, click to see>](600/previews/bikini.png) | [<NSFW, click to see>](600/previews/free.png) | [<NSFW, click to see>](600/previews/nude.png) | [Download](600/alice_nikke.zip) |
| 500 | [<NSFW, click to see>](500/previews/pattern_1.png) |  | [<NSFW, click to see>](500/previews/pattern_3.png) | [<NSFW, click to see>](500/previews/bikini.png) | [<NSFW, click to see>](500/previews/free.png) | [<NSFW, click to see>](500/previews/nude.png) | [Download](500/alice_nikke.zip) |
| 400 | [<NSFW, click to see>](400/previews/pattern_1.png) |  | [<NSFW, click to see>](400/previews/pattern_3.png) | [<NSFW, click to see>](400/previews/bikini.png) | [<NSFW, click to see>](400/previews/free.png) | [<NSFW, click to see>](400/previews/nude.png) | [Download](400/alice_nikke.zip) |
| 300 | [<NSFW, click to see>](300/previews/pattern_1.png) |  | [<NSFW, click to see>](300/previews/pattern_3.png) | [<NSFW, click to see>](300/previews/bikini.png) | [<NSFW, click to see>](300/previews/free.png) | [<NSFW, click to see>](300/previews/nude.png) | [Download](300/alice_nikke.zip) |
| 200 | [<NSFW, click to see>](200/previews/pattern_1.png) |  | [<NSFW, click to see>](200/previews/pattern_3.png) | [<NSFW, click to see>](200/previews/bikini.png) | [<NSFW, click to see>](200/previews/free.png) | [<NSFW, click to see>](200/previews/nude.png) | [Download](200/alice_nikke.zip) |
| 100 | [<NSFW, click to see>](100/previews/pattern_1.png) |  | [<NSFW, click to see>](100/previews/pattern_3.png) | [<NSFW, click to see>](100/previews/bikini.png) | [<NSFW, click to see>](100/previews/free.png) | [<NSFW, click to see>](100/previews/nude.png) | [Download](100/alice_nikke.zip) |
|
Eitanli/distilbert-qa-checkpoint-v4
|
Eitanli
| 2023-08-05T17:20:44Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-08-05T17:06:20Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilbert-qa-checkpoint-v4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-qa-checkpoint-v4
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8092
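A minimal usage sketch with the `transformers` question-answering pipeline (the question and context strings are placeholders):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="Eitanli/distilbert-qa-checkpoint-v4")
result = qa(question="What architecture is this model based on?",
            context="This model is a fine-tuned version of distilbert-base-uncased.")
print(result["answer"], result["score"])
```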
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a `TrainingArguments` sketch follows the list):
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
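As a rough illustration, these settings map onto `transformers`' `TrainingArguments` as in the sketch below (`output_dir` is an assumption; the optimizer and scheduler values listed above are the Trainer defaults made explicit):
```python
from transformers import TrainingArguments

# Sketch only: mirrors the hyperparameter list above; output_dir is an assumption
args = TrainingArguments(
    output_dir="distilbert-qa-checkpoint-v4",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=20,
)
```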
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.0541 | 1.0 | 1083 | 0.9490 |
| 0.0494 | 2.0 | 2166 | 0.9200 |
| 0.0913 | 3.0 | 3249 | 0.6719 |
| 0.0935 | 4.0 | 4332 | 0.6882 |
| 0.0768 | 5.0 | 5415 | 0.6854 |
| 0.0732 | 6.0 | 6498 | 0.7032 |
| 0.0768 | 7.0 | 7581 | 0.6902 |
| 0.0755 | 8.0 | 8664 | 0.8092 |
### Framework versions
- Transformers 4.27.2
- Pytorch 1.13.1+cu117
- Datasets 2.11.0
- Tokenizers 0.13.3
|
Chat-Error/testing01
|
Chat-Error
| 2023-08-05T16:59:01Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"llama-2",
"en",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2023-08-05T16:27:29Z |
---
inference: false
language:
- en
library_name: transformers
pipeline_tag: text-generation
tags:
- llama
- llama-2
license: other
---
# Model Card: Nous-Hermes-Llama-2-13b-LIMARP-Lora-Merged
This is a Llama 2-based model consisting of Nous Hermes Llama 2 13b (https://huggingface.co/NousResearch/Nous-Hermes-Llama2-13b) merged with LIMARP Lora (https://huggingface.co/lemonilia/limarp-llama2) using the now-updated standard lora adapter for LIMARP (July 28, 2023).
The intended objective was to combine NH-L2's reasoning and instruction-following capabilities with LIMARP's character roleplay capabilities.
added_tokens.json was padded with dummy tokens to reach 32 added tokens in order to allow GGML conversion in llama.cpp without error due to vocab size mismatch.
## Usage
Intended to be prompted either with the Alpaca instruction format of the NH-L2 base model:
```
### Instruction:
<prompt>
### Response:
<leave a newline blank for model to respond>
```
Or the LIMARP lora instruction format:
```
<<SYSTEM>>
<character card and system prompt>
<<USER>>
<prompt>
<<AIBOT>>
<leave a newline blank for model to respond>
```
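For illustration, a minimal generation sketch with `transformers`, assuming this repository hosts the merged weights and using the Alpaca format shown above:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Chat-Error/testing01"  # assumption: this repo contains the merged weights
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Alpaca-style prompt, as described above
prompt = "### Instruction:\nDescribe your character in one sentence.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```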
## Bias, Risks, and Limitations
The model will show biases similar to those observed in niche roleplaying forums on the Internet, in addition to those exhibited by the base model. It is not intended for supplying factual information or advice in any form.
## Training Details
This model is a merge. Please refer to the link repositories of the base model and lora for details.
|
nokotin/pyramids
|
nokotin
| 2023-08-05T16:46:54Z | 1 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] |
reinforcement-learning
| 2023-08-05T16:46:46Z |
---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: nokotin/pyramids
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
VicBeltran/taxi-V3-QlearningModel
|
VicBeltran
| 2023-08-05T16:46:52Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-05T16:46:50Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: taxi-V3-QlearningModel
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.48 +/- 2.69
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="VicBeltran/taxi-V3-QlearningModel", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
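The `load_from_hub` helper above is defined in the Deep RL Course notebook rather than shipped in a package; a minimal stand-in built on `huggingface_hub` might look like this (a sketch, assuming the pickle stores a dict with an `env_id` key as used above):
```python
import pickle

from huggingface_hub import hf_hub_download

def load_from_hub(repo_id: str, filename: str) -> dict:
    # Download the pickled Q-table and metadata from the Hub, then deserialize it
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)
```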
|
YoavWigelman/ppo-LunarLander-v2
|
YoavWigelman
| 2023-08-05T16:30:21Z | 1 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"endpoints_compatible",
"region:us"
] |
reinforcement-learning
| 2023-05-06T10:53:21Z |
---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 94.06 +/- 61.36
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'unit8-ppo-LunarLander-v2.2',
 'env_id': 'LunarLander-v2',
 'learning_rate': 0.00025,
 'seed': 1,
 'total_timestamps': 500000,
 'torch_deterministic': True,
 'cuda': True,
 'track': True,
 'wandb_project_name': 'ppo-implementation-details',
 'wandb_entity': None,
 'capture_video': True,
 'num_envs': 8,
 'num_steps': 256,
 'anneal_lr': True,
 'gae': True,
 'gamma': 0.99,
 'gae_lambda': 0.95,
 'num_minibatches': 8,
 'update_epochs': 8,
 'norm_adv': True,
 'clip_coef': 0.2,
 'clip_vloss': True,
 'ent_coef': 0.01,
 'vf_coef': 0.5,
 'max_grad_norm': 0.5,
 'target_kl': None,
 'repo_id': 'YoavWigelman/ppo-LunarLander-v2',
 'batch_size': 2048,
 'minibatch_size': 256}
```
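Note that the derived sizes are internally consistent with the usual CleanRL-style PPO convention: batch_size = num_envs × num_steps = 8 × 256 = 2048, and minibatch_size = batch_size / num_minibatches = 2048 / 8 = 256.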
|
w11wo/sundanese-bert-base-emotion-classifier
|
w11wo
| 2023-08-05T16:06:54Z | 114 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"safetensors",
"bert",
"text-classification",
"sundanese-bert-base-emotion-classifier",
"su",
"arxiv:1810.04805",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
language: su
tags:
- sundanese-bert-base-emotion-classifier
license: mit
widget:
- text: "Punten ini akurat ga ya sieun ihh daerah aku masuk zona merah"
---
## Sundanese BERT Base Emotion Classifier
Sundanese BERT Base Emotion Classifier is an emotion-text-classification model based on the [BERT](https://arxiv.org/abs/1810.04805) model. The model was originally the pre-trained [Sundanese BERT Base Uncased](https://hf.co/luche/bert-base-sundanese-uncased) model trained by [`@luche`](https://hf.co/luche), which is then fine-tuned on the [Sundanese Twitter dataset](https://github.com/virgantara/sundanese-twitter-dataset), consisting of Sundanese tweets.
10% of the dataset is kept for evaluation purposes. After training, the model achieved an evaluation accuracy of 96.82% and F1-macro of 96.75%.
Hugging Face's `Trainer` class from the [Transformers](https://huggingface.co/transformers) library was used to train the model. PyTorch was used as the backend framework during training, but the model remains compatible with other frameworks nonetheless.
## Model
| Model | #params | Arch. | Training/Validation data (text) |
| ---------------------------------------- | ------- | --------- | ------------------------------- |
| `sundanese-bert-base-emotion-classifier` | 110M | BERT Base | Sundanese Twitter dataset |
## Evaluation Results
The model was trained for 10 epochs and the best model was loaded at the end.
| Epoch | Training Loss | Validation Loss | Accuracy | F1 | Precision | Recall |
| ----- | ------------- | --------------- | -------- | -------- | --------- | -------- |
| 1 | 0.759800 | 0.263913 | 0.924603 | 0.925042 | 0.928426 | 0.926130 |
| 2 | 0.213100 | 0.456022 | 0.908730 | 0.906732 | 0.924141 | 0.907846 |
| 3 | 0.091900 | 0.204323 | 0.956349 | 0.955896 | 0.956226 | 0.956248 |
| 4 | 0.043800 | 0.219143 | 0.956349 | 0.955705 | 0.955848 | 0.956392 |
| 5 | 0.013700 | 0.247289 | 0.960317 | 0.959734 | 0.959477 | 0.960782 |
| 6 | 0.004800 | 0.286636 | 0.956349 | 0.955540 | 0.956519 | 0.956615 |
| 7 | 0.000200 | 0.243408 | 0.960317 | 0.959085 | 0.959145 | 0.959310 |
| 8 | 0.001500 | 0.232138 | 0.960317 | 0.959451 | 0.959427 | 0.959997 |
| 9 | 0.000100 | 0.215523 | 0.968254 | 0.967556 | 0.967192 | 0.968330 |
| 10 | 0.000100 | 0.216533 | 0.968254 | 0.967556 | 0.967192 | 0.968330 |
## How to Use
### As Text Classifier
```python
from transformers import pipeline
pretrained_name = "w11wo/sundanese-bert-base-emotion-classifier"
nlp = pipeline(
"sentiment-analysis",
model=pretrained_name,
tokenizer=pretrained_name
)
nlp("Punten ini akurat ga ya sieun ihh daerah aku masuk zona merah")
```
## Disclaimer
Do consider the biases which come from both the pre-trained BERT model and the Sundanese Twitter dataset that may be carried over into the results of this model.
## Author
Sundanese BERT Base Emotion Classifier was trained and evaluated by [Wilson Wongso](https://w11wo.github.io/). All computation and development are done on Google Colaboratory using their free GPU access.
## Citation Information
```bib
@article{rs-907893,
author = {Wongso, Wilson
and Lucky, Henry
and Suhartono, Derwin},
journal = {Journal of Big Data},
year = {2022},
month = {Feb},
day = {26},
abstract = {The Sundanese language has over 32 million speakers worldwide, but the language has reaped little to no benefits from the recent advances in natural language understanding. Like other low-resource languages, the only alternative is to fine-tune existing multilingual models. In this paper, we pre-trained three monolingual Transformer-based language models on Sundanese data. When evaluated on a downstream text classification task, we found that most of our monolingual models outperformed larger multilingual models despite the smaller overall pre-training data. In the subsequent analyses, our models benefited strongly from the Sundanese pre-training corpus size and do not exhibit socially biased behavior. We released our models for other researchers and practitioners to use.},
issn = {2693-5015},
doi = {10.21203/rs.3.rs-907893/v1},
url = {https://doi.org/10.21203/rs.3.rs-907893/v1}
}
```
|
nokotin/SnowballTarget
|
nokotin
| 2023-08-05T16:06:23Z | 3 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2023-08-05T16:06:16Z |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: nokotin/SnowballTarget
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
anniedong/projectile-flan-t5-v1
|
anniedong
| 2023-08-05T15:54:12Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-05T15:48:05Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training (a `BitsAndBytesConfig` sketch follows the list):
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
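For reference, this corresponds roughly to the following `transformers` `BitsAndBytesConfig` (a sketch; argument names mirror the list above):
```python
from transformers import BitsAndBytesConfig

# 8-bit loading config mirroring the values listed above
bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
    llm_int8_threshold=6.0,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
)
```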
### Framework versions
- PEFT 0.5.0.dev0
|
xuqinyang/baichuan-13b-chat-ggml-int4
|
xuqinyang
| 2023-08-05T15:47:28Z | 0 | 6 | null |
[
"text-generation",
"doi:10.57967/hf/0963",
"region:us"
] |
text-generation
| 2023-07-12T04:25:34Z |
---
pipeline_tag: text-generation
---
For detailed usage, see: https://github.com/ouwei2013/baichuan13b.cpp
|
hopkins/eng-deu-trial6
|
hopkins
| 2023-08-05T15:32:57Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-08-05T15:18:31Z |
---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: eng-deu-trial6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eng-deu-trial6
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6328
- Bleu: 21.3888
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
tommilyjones/bert-base-uncased-finetuned-hateful-meme
|
tommilyjones
| 2023-08-05T15:24:08Z | 108 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-05T15:18:02Z |
---
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bert-base-uncased-finetuned-hateful-meme
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-hateful-meme
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0538
- Accuracy: 0.544
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.5795 | 1.0 | 532 | 0.7869 | 0.564 |
| 0.5101 | 2.0 | 1064 | 0.8646 | 0.56 |
| 0.4455 | 3.0 | 1596 | 0.9011 | 0.538 |
| 0.3926 | 4.0 | 2128 | 1.1856 | 0.542 |
| 0.3387 | 5.0 | 2660 | 1.1351 | 0.552 |
| 0.3056 | 6.0 | 3192 | 1.3704 | 0.55 |
| 0.2942 | 7.0 | 3724 | 1.7288 | 0.538 |
| 0.2665 | 8.0 | 4256 | 1.7215 | 0.544 |
| 0.2498 | 9.0 | 4788 | 1.8634 | 0.542 |
| 0.2357 | 10.0 | 5320 | 2.0538 | 0.544 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu117
- Datasets 2.14.3
- Tokenizers 0.13.3
|
hannnnni/piggy
|
hannnnni
| 2023-08-05T15:18:03Z | 0 | 3 | null |
[
"region:us"
] | null | 2023-07-14T11:50:03Z |
# 🐖-rvc-v2-model
Previously this used a pretrained model with sovits4.1.
Retrained as an rvc-v2 model; the electronic artifacts are greatly reduced.
https://colab.research.google.com/drive/1r4IRL0UA7JEoZ0ZK8PKfMyTIBHKpyhcw
Open the Colab notebook and run the first cell.

Click the public URL.

Open the Download Model page and paste in one of the model URLs:
https://huggingface.co/hannnnni/piggy/resolve/main/tone-voice.zip
or
https://huggingface.co/hannnnni/piggy/resolve/main/dong-voice.zip
dong-voice.zip was only trained for 150 epochs; I didn't feel like training it further.
Open the Inference page and upload the audio you want to convert.
A single audio clip of about 30 seconds is recommended.

<audio controls src="https://cdn-uploads.huggingface.co/production/uploads/64a7d6cf76d0a6cbbc3fff36/zSLZrHuzxj8rrM0ICqOd1.wav"></audio>
<audio controls src="https://cdn-uploads.huggingface.co/production/uploads/64a7d6cf76d0a6cbbc3fff36/7W7pVBCAXQ842990u4ByU.wav"></audio>
<audio controls src="https://cdn-uploads.huggingface.co/production/uploads/64a7d6cf76d0a6cbbc3fff36/sNxy1oJ2_gLIzsH16Bci1.wav"></audio>
Vocal remover (separates the instrumental and vocal tracks):
https://ultimatevocalremover.com/
|
arhamk/ppo-SnowballTarget
|
arhamk
| 2023-08-05T15:17:29Z | 1 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2023-08-05T15:17:23Z |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: arhamk/ppo-SnowballTarget
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
tommilyjones/distilbert-base-uncased-finetuned-hateful-meme
|
tommilyjones
| 2023-08-05T15:16:19Z | 110 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-05T15:12:03Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-hateful-meme
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-hateful-meme
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8740
- Accuracy: 0.542
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.5786 | 1.0 | 532 | 0.7902 | 0.56 |
| 0.5077 | 2.0 | 1064 | 0.8275 | 0.566 |
| 0.4534 | 3.0 | 1596 | 0.9469 | 0.544 |
| 0.3998 | 4.0 | 2128 | 1.1139 | 0.538 |
| 0.3527 | 5.0 | 2660 | 1.2128 | 0.542 |
| 0.3219 | 6.0 | 3192 | 1.2232 | 0.546 |
| 0.3051 | 7.0 | 3724 | 1.5492 | 0.538 |
| 0.2789 | 8.0 | 4256 | 1.6341 | 0.542 |
| 0.267 | 9.0 | 4788 | 1.7046 | 0.54 |
| 0.2521 | 10.0 | 5320 | 1.8740 | 0.542 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu117
- Datasets 2.14.3
- Tokenizers 0.13.3
|
mrkushrz/Llama2_PA_FRA-UAS-FAQ-v2
|
mrkushrz
| 2023-08-05T15:11:08Z | 0 | 0 | null |
[
"tensorboard",
"generated_from_trainer",
"base_model:abhishek/llama-2-7b-hf-small-shards",
"base_model:finetune:abhishek/llama-2-7b-hf-small-shards",
"region:us"
] | null | 2023-08-04T10:19:58Z |
---
base_model: abhishek/llama-2-7b-hf-small-shards
tags:
- generated_from_trainer
model-index:
- name: Llama2_PA_FRA-UAS-FAQ-v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama2_PA_FRA-UAS-FAQ-v2
This model is a fine-tuned version of [abhishek/llama-2-7b-hf-small-shards](https://huggingface.co/abhishek/llama-2-7b-hf-small-shards) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- training_steps: 93
### Training results
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
DavidGetter1/falcon_horror_small
|
DavidGetter1
| 2023-08-05T15:01:28Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-05T15:00:50Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training (a `BitsAndBytesConfig` sketch follows the list):
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
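For reference, this corresponds roughly to the following `transformers` `BitsAndBytesConfig` (a sketch; argument names mirror the list above):
```python
import torch
from transformers import BitsAndBytesConfig

# 4-bit NF4 loading config mirroring the values listed above
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
```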
### Framework versions
- PEFT 0.5.0.dev0
|
zhyzzz/autotrain-logic_form_generation3-80243141417
|
zhyzzz
| 2023-08-05T15:00:19Z | 112 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"t5",
"text2text-generation",
"autotrain",
"summarization",
"unk",
"dataset:zhyzzz/autotrain-data-logic_form_generation3",
"co2_eq_emissions",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
summarization
| 2023-08-05T14:52:46Z |
---
tags:
- autotrain
- summarization
language:
- unk
widget:
- text: "I love AutoTrain"
datasets:
- zhyzzz/autotrain-data-logic_form_generation3
co2_eq_emissions:
emissions: 4.762311061342113
---
# Model Trained Using AutoTrain
- Problem type: Summarization
- Model ID: 80243141417
- CO2 Emissions (in grams): 4.7623
## Validation Metrics
- Loss: 0.051
- Rouge1: 75.016
- Rouge2: 71.587
- RougeL: 74.901
- RougeLsum: 74.879
- Gen Len: 16.407
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/zhyzzz/autotrain-logic_form_generation3-80243141417
```
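Equivalently from Python (a sketch using the `huggingface_hub` inference client):
```python
from huggingface_hub import InferenceClient

client = InferenceClient(token="YOUR_HUGGINGFACE_API_KEY")
result = client.summarization(
    "I love AutoTrain",
    model="zhyzzz/autotrain-logic_form_generation3-80243141417",
)
print(result)
```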
|
LinkSoul/LLaSM-Cllama2
|
LinkSoul
| 2023-08-05T14:52:34Z | 27 | 48 |
transformers
|
[
"transformers",
"pytorch",
"llaaa",
"text-generation",
"zh",
"en",
"dataset:LinkSoul/LLaSM-Audio-Instructions",
"license:openrail",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-30T02:39:03Z |
---
license: openrail
datasets:
- LinkSoul/LLaSM-Audio-Instructions
language:
- zh
- en
---
# LLaSM: Large Language and Speech Model
Open-source and commercially usable: the **bilingual Chinese-English speech-language assistant LLaSM and the Chinese-English speech SFT dataset LLaSM-Audio-Instructions**, the first open-source, commercially usable dialogue model to support Chinese-English speech-text multimodal conversation.
<!--
<div align="center">
<img src="https://huggingface.co/LinkSoul/LLaSM-Cllama2/blob/main/meta/preview.jpg" width="40%">
</div>
-->

## Basic Demo

## Try It Online
> Talk is cheap; here's the demo.
- [Demo / HuggingFace Spaces](https://huggingface.co/spaces/LinkSoul/LLaSM)
## Downloads
- Models:
  - [LLaSM-Chinese-Llama-2-7B](https://huggingface.co/LinkSoul/LLaSM-Cllama2)
  - [LLaSM-Baichuan-7B](https://huggingface.co/LinkSoul/LLaSM-Baichuan)
- Baidu Netdisk downloads:
  - [LLaSM-Chinese-Llama-2-7B](https://pan.baidu.com/s/1PaipNDfqV7f3W1-tl5rwzA?pwd=2549)
  - [LLaSM-Baichuan-7B](https://pan.baidu.com/s/1QZrXA8IJXclN77T4jM7tEw?pwd=y2p7)
- Language models:
  - [Chinese-Llama-2-7b](https://github.com/LinkSoul-AI/Chinese-Llama-2-7b)
  - [Baichuan-7B](https://huggingface.co/baichuan-inc/Baichuan-7B)
- Dataset: [LLaSM-Audio-Instructions](https://huggingface.co/datasets/LinkSoul/LLaSM-Audio-Instructions)
## Environment Setup
```shell
# clone the repository
git clone https://github.com/LinkSoul-AI/LLaSM
cd LLaSM
# install package
conda create -n llasm python=3.10 -y
conda activate llasm
pip install --upgrade pip
pip install -e .
```
## Quick Test
```shell
export LLASM_DEVICE="cuda:0"
# --llm_type accepts "Chinese_llama2" or "baichuan"
python infer.py \
    --input_audio_file PATH/TO/YOUR/AUDIO \
    --llasm_model PATH/TO/LLaSM/MODEL \
    --llasm_audio_tower PATH/TO/WHISPER/MODEL \
    --llm_type "Chinese_llama2"
```
## TODO
- Training instructions
- int4 quantization
- Docker deployment
## Related Projects
- [Chinese-Llama-2-7B](https://huggingface.co/LinkSoul/Chinese-Llama-2-7b)
- [Whisper](https://github.com/openai/whisper)
- [baichuan-inc/Baichuan-7B](https://huggingface.co/baichuan-inc/Baichuan-7B)
## License
[Apache-2.0 license](https://github.com/LinkSoul-AI/LLaSM/blob/main/LICENSE)
## WeChat Group
<!--
<img src="meta/QRcode.jpg" alt="微信交流群" width="300"/>
-->
Feel free to join our [WeChat group](meta/QRcode.jpg).
|
LinkSoul/Chinese-LLaVA-Cllama2
|
LinkSoul
| 2023-08-05T14:50:31Z | 17 | 18 |
transformers
|
[
"transformers",
"pytorch",
"llava",
"text-generation",
"zh",
"en",
"dataset:LinkSoul/Chinese-LLaVA-Vision-Instructions",
"license:openrail",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-30T04:41:44Z |
---
license: openrail
datasets:
- LinkSoul/Chinese-LLaVA-Vision-Instructions
language:
- zh
- en
---
# Chinese LLaVA
Open-source and commercially usable: the **bilingual Chinese-English vision-language assistant Chinese-LLaVA and the Chinese-English vision SFT dataset Chinese-LLaVA-Vision-Instructions**, an open-source, commercially usable dialogue model supporting Chinese-English vision-text multimodal conversation.
<!--
<p align="center">
<img src="meta/preview.jpg" width="40%">
</p>
-->

## Basic Demo

## Try It Online
> Talk is cheap; here's the demo.
- [Demo / HuggingFace Spaces](https://huggingface.co/spaces/LinkSoul/Chinese-LLaVA)
## Downloads
- Models:
  - [Chinese-LLaVA-Chinese-Llama-2-7B](https://huggingface.co/LinkSoul/Chinese-LLaVA-Cllama2)
  - [Chinese-LLaVA-Baichuan-7B](https://huggingface.co/LinkSoul/Chinese-LLaVA-Baichuan)
- Baidu Netdisk downloads:
  - [Chinese-LLaVA-Chinese-Llama-2-7B](https://pan.baidu.com/s/16e_LEacMy2bqOYanIFWy8Q?pwd=9j61)
  - [Chinese-LLaVA-Baichuan-7B](https://pan.baidu.com/s/1WuYPrIaul0i6KA-to98cHw?pwd=6jwz)
- Language models:
  - [Chinese-Llama-2-7b](https://github.com/LinkSoul-AI/Chinese-Llama-2-7b)
  - [Baichuan-7B](https://huggingface.co/baichuan-inc/Baichuan-7B)
- Dataset: [Chinese-LLaVA-Vision-Instructions](https://huggingface.co/datasets/LinkSoul/Chinese-LLaVA-Vision-Instructions)
## Environment Setup
```shell
# clone the repository
git clone https://github.com/LinkSoul-AI/Chinese-LLaVA
cd Chinese-LLaVA
# install package
conda create -n Cllava python=3.10 -y
conda activate Cllava
pip install --upgrade pip
pip install -e .
```
## Quick Test
```shell
# --llm-type accepts "Chinese_llama2" or "baichuan"
python infer.py \
    --model-name PATH/TO/THE/CHINESE_LLAVA_MODEL \
    --llm-type "Chinese_llama2" \
    --image-file PATH/TO/THE/INPUT/IMAGE \
    --query QUERY/PROMPT
```
## TODO
- Training instructions
- int4 quantization
- Docker deployment
## Related Projects
- [LLaVA](https://llava-vl.github.io/)
- [Chinese-Llama-2-7B](https://huggingface.co/LinkSoul/Chinese-Llama-2-7b)
- [baichuan-inc/Baichuan-7B](https://huggingface.co/baichuan-inc/Baichuan-7B)
## License
[Apache-2.0 license](https://github.com/LinkSoul-AI/Chinese-LLaVA/blob/main/LICENSE)
## WeChat Group
<!--
<img src=".github/QRcode.jpg" alt="微信交流群" width="300"/>
-->
Feel free to join our [WeChat group](meta/QRcode.jpg).
|
capeie/capeie-llama-openorca-lora
|
capeie
| 2023-08-05T14:46:15Z | 5 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-05T14:46:09Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.5.0.dev0
|
Lukee4/test-2019
|
Lukee4
| 2023-08-05T14:13:49Z | 2 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-05T14:13:47Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0
|
jointitor/model-2
|
jointitor
| 2023-08-05T13:47:04Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-08-05T13:37:26Z |
|
AtilliO/x02
|
AtilliO
| 2023-08-05T13:42:16Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Heli",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Heli",
"region:us"
] |
reinforcement-learning
| 2023-08-05T13:42:14Z |
---
library_name: ml-agents
tags:
- Heli
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Heli
---
# **ppo** Agent playing **Heli**
This is a trained model of a **ppo** agent playing **Heli**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: AtilliO/x02
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
halatmit/learnRL
|
halatmit
| 2023-08-05T13:27:58Z | 0 | 0 | null |
[
"license:cc-by-nc-sa-4.0",
"region:us"
] | null | 2023-08-05T13:27:58Z |
---
license: cc-by-nc-sa-4.0
---
|
abyrush/cepio48
|
abyrush
| 2023-08-05T12:54:30Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-08-05T12:54:30Z |
---
license: creativeml-openrail-m
---
|
ShynBui/s19
|
ShynBui
| 2023-08-05T12:47:52Z | 140 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad_v2",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-08-04T16:15:24Z |
---
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
datasets:
- squad_v2
model-index:
- name: s19
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# s19
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad_v2 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.3
- Tokenizers 0.13.3
|
taohoang/whisper-tiny-en-US
|
taohoang
| 2023-08-05T12:45:19Z | 88 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:PolyAI/minds14",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-08-05T12:26:21Z |
---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_trainer
datasets:
- PolyAI/minds14
metrics:
- wer
model-index:
- name: whisper-tiny-en-US
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: PolyAI/minds14
type: PolyAI/minds14
config: en-US
split: train[450:]
args: en-US
metrics:
- name: Wer
type: wer
value: 0.3435655253837072
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-tiny-en-US
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the PolyAI/minds14 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6286
- Wer Ortho: 0.3430
- Wer: 0.3436
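A quick way to try the checkpoint that produced these numbers (a minimal sketch using the `transformers` ASR pipeline; the audio path is a placeholder):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="taohoang/whisper-tiny-en-US")
print(asr("sample.wav")["text"])  # "sample.wav" is a placeholder path
```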
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 10
- training_steps: 225
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|
| 3.2798 | 0.25 | 14 | 0.9783 | 0.7218 | 0.6889 |
| 0.6283 | 0.5 | 28 | 0.5667 | 0.4479 | 0.4427 |
| 0.5574 | 0.75 | 42 | 0.5307 | 0.4812 | 0.4858 |
| 0.501 | 1.0 | 56 | 0.5130 | 0.3800 | 0.3813 |
| 0.2296 | 1.25 | 70 | 0.5057 | 0.3479 | 0.3436 |
| 0.2296 | 1.5 | 84 | 0.5515 | 0.3572 | 0.3512 |
| 0.2207 | 1.75 | 98 | 0.5356 | 0.3578 | 0.3530 |
| 0.1928 | 2.0 | 112 | 0.5288 | 0.3226 | 0.3200 |
| 0.0795 | 2.25 | 126 | 0.5532 | 0.3257 | 0.3259 |
| 0.0651 | 2.5 | 140 | 0.5833 | 0.3504 | 0.3512 |
| 0.0719 | 2.75 | 154 | 0.5931 | 0.3467 | 0.3501 |
| 0.0722 | 3.0 | 168 | 0.5994 | 0.3498 | 0.3477 |
| 0.0231 | 3.25 | 182 | 0.6030 | 0.3270 | 0.3264 |
| 0.0433 | 3.5 | 196 | 0.6059 | 0.3214 | 0.3200 |
| 0.0663 | 3.75 | 210 | 0.6262 | 0.3646 | 0.3648 |
| 0.0396 | 4.0 | 224 | 0.6286 | 0.3430 | 0.3436 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.3
- Tokenizers 0.13.3
|
sarexer/ppo-LunarLander-v2
|
sarexer
| 2023-08-05T12:39:00Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-05T12:38:37Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 83.94 +/- 131.80
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
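A minimal loading sketch, assuming the checkpoint was pushed with `huggingface_sb3` under the conventional filename `ppo-LunarLander-v2.zip` (an assumption):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Filename is an assumption based on the usual package_to_hub convention
checkpoint = load_from_hub(repo_id="sarexer/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```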
|
helamri/taxiagent
|
helamri
| 2023-08-05T12:26:18Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-05T12:26:15Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: taxiagent
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.52 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="helamri/taxiagent", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
helamri/q-FrozenLake-v1-4x4-noSlippery
|
helamri
| 2023-08-05T12:23:35Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-05T12:23:31Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="helamri/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
YanJiangJerry/bertweet-large_epoch6_batch4_lr2e-05_w0.01
|
YanJiangJerry
| 2023-08-05T12:16:14Z | 9 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:vinai/bertweet-large",
"base_model:finetune:vinai/bertweet-large",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-05T09:57:02Z |
---
base_model: vinai/bertweet-large
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: bertweet-large_epoch6_batch4_lr2e-05_w0.01
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bertweet-large_epoch6_batch4_lr2e-05_w0.01
This model is a fine-tuned version of [vinai/bertweet-large](https://huggingface.co/vinai/bertweet-large) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7423
- Accuracy: 0.6274
- F1: 0.0
- Precision: 0.0
- Recall: 0.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---:|:---------:|:------:|
| 0.6851 | 1.0 | 788 | 0.6628 | 0.6274 | 0.0 | 0.0 | 0.0 |
| 0.678 | 2.0 | 1576 | 0.6763 | 0.6274 | 0.0 | 0.0 | 0.0 |
| 0.6778 | 3.0 | 2364 | 0.6613 | 0.6274 | 0.0 | 0.0 | 0.0 |
| 0.6732 | 4.0 | 3152 | 0.7288 | 0.6274 | 0.0 | 0.0 | 0.0 |
| 0.6631 | 5.0 | 3940 | 0.6935 | 0.6274 | 0.0 | 0.0 | 0.0 |
| 0.6456 | 6.0 | 4728 | 0.7423 | 0.6274 | 0.0 | 0.0 | 0.0 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.3
- Tokenizers 0.13.3
|
Aspik101/30B-Lazarus-instruct-PL-lora_GGML
|
Aspik101
| 2023-08-05T12:12:18Z | 0 | 0 | null |
[
"facebook",
"meta",
"pytorch",
"llama",
"llama-2",
"text-generation",
"pl",
"dataset:Lajonbot/alpaca-dolly-chrisociepa-instruction-only-polish",
"license:other",
"region:us"
] |
text-generation
| 2023-08-05T11:17:09Z |
---
language:
- pl
datasets:
- Lajonbot/alpaca-dolly-chrisociepa-instruction-only-polish
license: other
model_type: llama-2
pipeline_tag: text-generation
tags:
- facebook
- meta
- pytorch
- llama
- llama-2
---
|
SaudxInu/PPO-Huggy
|
SaudxInu
| 2023-08-05T12:00:30Z | 1 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-08-05T12:00:25Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: SaudxInu/PPO-Huggy
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
fromhell01/MyQtaxi
|
fromhell01
| 2023-08-05T11:35:24Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-05T11:35:23Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: MyQtaxi
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.54 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="fromhell01/MyQtaxi", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
fromhell01/q-FrozenLake-v1-4x4-noSlippery
|
fromhell01
| 2023-08-05T11:34:15Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-05T11:34:13Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="fromhell01/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
jointitor/model-b
|
jointitor
| 2023-08-05T11:33:15Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-08-05T11:31:22Z |
|
SigmaJDN/animals
|
SigmaJDN
| 2023-08-05T11:30:00Z | 193 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-08-05T11:29:53Z |
---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: animals
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.9821428656578064
---
# animals
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### cat

#### cow

#### dog

#### horse

#### lion

|
Shivdutta/llama2-qlora-finetunined-french
|
Shivdutta
| 2023-08-05T11:23:15Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-05T11:23:07Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.5.0.dev0
|
MattStammers/a2c-PandaReachDense-v2
|
MattStammers
| 2023-08-05T11:08:19Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-05T09:41:09Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -4.37 +/- 1.32
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of a **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
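A minimal loading sketch, assuming the checkpoint filename follows the usual `huggingface_sb3` convention (an assumption) and that `panda-gym` is installed for the environment:
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Filename is an assumption based on the usual package_to_hub convention
checkpoint = load_from_hub(repo_id="MattStammers/a2c-PandaReachDense-v2", filename="a2c-PandaReachDense-v2.zip")
model = A2C.load(checkpoint)
```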
|
fromhell01/ppo-LunarLander-v2
|
fromhell01
| 2023-08-05T10:54:32Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-05T10:54:12Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 261.76 +/- 19.02
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|