| Column | Type | Range / values |
|---|---|---|
| modelId | string | length 5 to 139 |
| author | string | length 2 to 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 to 2025-09-04 06:26:56 |
| downloads | int64 | 0 to 223M |
| likes | int64 | 0 to 11.7k |
| library_name | string | 538 distinct values |
| tags | list | length 1 to 4.05k |
| pipeline_tag | string | 55 distinct values |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 to 2025-09-04 06:26:41 |
| card | string | length 11 to 1.01M |

Each record below lists these fields in this order, separated by `|`, with the full model card as the final field.
LarryAIDraw/erinanakiri1
|
LarryAIDraw
| 2023-08-22T04:31:26Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-08-21T22:07:40Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/96916/erina-nakiri-food-wars
|
LarryAIDraw/ErinaNakiri
|
LarryAIDraw
| 2023-08-22T04:30:48Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-08-21T22:03:27Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/13500/erina-nakiri-food-wars-lora
|
JordanWHLewis/base-model-with-warmup-fulldata-LR-3e4-fairseq-V1
|
JordanWHLewis
| 2023-08-22T04:17:22Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-08-21T22:55:11Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: base-model-with-warmup-fulldata-LR-3e4-fairseq-V1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# base-model-with-warmup-fulldata-LR-3e4-fairseq-V1
This model is a fine-tuned version of [facebook/wav2vec2-base-960h](https://huggingface.co/facebook/wav2vec2-base-960h) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 4.6024
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 12.0385 | 2.05 | 200 | 4.6950 |
| 3.9271 | 4.1 | 400 | 4.2602 |
| 3.8802 | 6.15 | 600 | 4.2946 |
| 3.9082 | 8.21 | 800 | 4.6729 |
| 3.8298 | 10.26 | 1000 | 5.2657 |
| 3.8352 | 12.31 | 1200 | 5.0217 |
| 3.8661 | 14.36 | 1400 | 4.6284 |
| 3.8028 | 16.41 | 1600 | 4.6804 |
| 3.8147 | 18.46 | 1800 | 4.6496 |
| 3.8209 | 20.51 | 2000 | 4.7289 |
| 3.8015 | 22.56 | 2200 | 4.7908 |
| 3.8048 | 24.62 | 2400 | 4.4793 |
| 3.7978 | 26.67 | 2600 | 4.4383 |
| 3.8001 | 28.72 | 2800 | 4.5666 |
| 3.795 | 30.77 | 3000 | 4.7433 |
| 3.7983 | 32.82 | 3200 | 4.5482 |
| 3.797 | 34.87 | 3400 | 4.5401 |
| 3.7963 | 36.92 | 3600 | 4.5661 |
| 3.7927 | 38.97 | 3800 | 4.6994 |
| 3.7938 | 41.03 | 4000 | 4.5958 |
| 4.1155 | 43.08 | 4200 | 4.6279 |
| 3.7862 | 45.13 | 4400 | 4.6126 |
| 3.7934 | 47.18 | 4600 | 4.5489 |
| 3.7851 | 49.23 | 4800 | 4.6024 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.13.2
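For reference, a minimal inference sketch with the `transformers` ASR pipeline (the audio file path is a placeholder):
```python
from transformers import pipeline

# Minimal sketch: transcribe a local audio file with the fine-tuned checkpoint.
asr = pipeline("automatic-speech-recognition", model="JordanWHLewis/base-model-with-warmup-fulldata-LR-3e4-fairseq-V1")
print(asr("sample.wav")["text"])  # "sample.wav" is a placeholder path
```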
|
thongphan061/myLLAMAsentiment
|
thongphan061
| 2023-08-22T04:17:18Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-22T04:17:16Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
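For reference, a minimal sketch of recreating this 8-bit config and attaching the adapter with `peft` (the base model is not named in this card, so the id below is only a placeholder):
```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

# Mirrors the 8-bit settings listed above (the bnb_4bit_* fields are inactive when load_in_8bit=True).
bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
    llm_int8_threshold=6.0,
    llm_int8_has_fp16_weight=False,
)

# Placeholder: substitute the base model this adapter was trained from.
base = AutoModelForCausalLM.from_pretrained("<base-model-id>", quantization_config=bnb_config, device_map="auto")
model = PeftModel.from_pretrained(base, "thongphan061/myLLAMAsentiment")
```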
### Framework versions
- PEFT 0.5.0.dev0
|
magurotaisa/ppo-LunarLander-v2_3
|
magurotaisa
| 2023-08-22T04:16:35Z | 0 | 0 | null |
[
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-22T03:44:07Z |
---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -11.27 +/- 131.03
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo',
 'seed': 1,
 'torch_deterministic': True,
 'cuda': True,
 'track': False,
 'wandb_project_name': 'cleanRL',
 'wandb_entity': None,
 'capture_video': False,
 'env_id': 'LunarLander-v2',
 'total_timesteps': 1000000,
 'learning_rate': 0.00025,
 'num_envs': 4,
 'num_steps': 128,
 'anneal_lr': True,
 'gae': True,
 'gamma': 0.99,
 'gae_lambda': 0.95,
 'num_minibatches': 4,
 'update_epochs': 4,
 'norm_adv': True,
 'clip_coef': 0.25,
 'clip_vloss': True,
 'ent_coef': 0.01,
 'vf_coef': 0.5,
 'max_grad_norm': 0.5,
 'target_kl': None,
 'repo_id': 'magurotaisa/ppo-LunarLander-v2_3',
 'batch_size': 512,
 'minibatch_size': 128}
```
|
Onutoa/20230822120451
|
Onutoa
| 2023-08-22T04:11:53Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:super_glue",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-22T03:05:08Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- super_glue
metrics:
- accuracy
model-index:
- name: '20230822120451'
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 20230822120451
This model is a fine-tuned version of [bert-large-cased](https://huggingface.co/bert-large-cased) on the super_glue dataset.
It achieves the following results on the evaluation set:
- Loss: 11.7866
- Accuracy: 0.4729
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.004
- train_batch_size: 8
- eval_batch_size: 8
- seed: 11
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 60.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| No log | 1.0 | 312 | 13.5886 | 0.5271 |
| 18.7749 | 2.0 | 624 | 13.1889 | 0.4729 |
| 18.7749 | 3.0 | 936 | 12.7687 | 0.4729 |
| 17.8689 | 4.0 | 1248 | 12.3773 | 0.4729 |
| 17.738 | 5.0 | 1560 | 12.5498 | 0.4729 |
| 17.738 | 6.0 | 1872 | 12.3920 | 0.4729 |
| 17.7159 | 7.0 | 2184 | 12.3910 | 0.4729 |
| 17.7159 | 8.0 | 2496 | 12.3585 | 0.4729 |
| 17.6431 | 9.0 | 2808 | 12.3978 | 0.4729 |
| 17.5993 | 10.0 | 3120 | 12.2603 | 0.4729 |
| 17.5993 | 11.0 | 3432 | 12.1054 | 0.4729 |
| 17.5276 | 12.0 | 3744 | 12.1379 | 0.5271 |
| 17.4675 | 13.0 | 4056 | 12.0354 | 0.5271 |
| 17.4675 | 14.0 | 4368 | 12.0828 | 0.5271 |
| 17.4824 | 15.0 | 4680 | 11.9830 | 0.5271 |
| 17.4824 | 16.0 | 4992 | 12.0574 | 0.4729 |
| 17.4065 | 17.0 | 5304 | 12.7325 | 0.5271 |
| 17.4328 | 18.0 | 5616 | 12.0570 | 0.4729 |
| 17.4328 | 19.0 | 5928 | 12.0770 | 0.4729 |
| 17.3925 | 20.0 | 6240 | 12.0314 | 0.5271 |
| 17.3467 | 21.0 | 6552 | 11.9670 | 0.5271 |
| 17.3467 | 22.0 | 6864 | 12.1346 | 0.5271 |
| 17.3575 | 23.0 | 7176 | 12.4856 | 0.4729 |
| 17.3575 | 24.0 | 7488 | 12.8699 | 0.4729 |
| 17.3374 | 25.0 | 7800 | 11.9199 | 0.5307 |
| 17.3162 | 26.0 | 8112 | 11.9558 | 0.5271 |
| 17.3162 | 27.0 | 8424 | 11.9757 | 0.5271 |
| 17.307 | 28.0 | 8736 | 12.2557 | 0.4729 |
| 17.2934 | 29.0 | 9048 | 11.8987 | 0.4729 |
| 17.2934 | 30.0 | 9360 | 12.1451 | 0.5271 |
| 17.2734 | 31.0 | 9672 | 11.9358 | 0.5271 |
| 17.2734 | 32.0 | 9984 | 11.9698 | 0.5271 |
| 17.2631 | 33.0 | 10296 | 11.9269 | 0.4729 |
| 17.2612 | 34.0 | 10608 | 11.9251 | 0.5271 |
| 17.2612 | 35.0 | 10920 | 11.9818 | 0.4729 |
| 17.2473 | 36.0 | 11232 | 12.0614 | 0.4729 |
| 17.2419 | 37.0 | 11544 | 11.8218 | 0.5271 |
| 17.2419 | 38.0 | 11856 | 11.8899 | 0.4729 |
| 17.2188 | 39.0 | 12168 | 11.8847 | 0.5271 |
| 17.2188 | 40.0 | 12480 | 11.8971 | 0.4729 |
| 17.2216 | 41.0 | 12792 | 11.8868 | 0.5271 |
| 17.2037 | 42.0 | 13104 | 11.8386 | 0.4729 |
| 17.2037 | 43.0 | 13416 | 11.8261 | 0.4729 |
| 17.2027 | 44.0 | 13728 | 11.8480 | 0.4729 |
| 17.181 | 45.0 | 14040 | 11.9217 | 0.5271 |
| 17.181 | 46.0 | 14352 | 11.8834 | 0.4729 |
| 17.1823 | 47.0 | 14664 | 11.8595 | 0.4729 |
| 17.1823 | 48.0 | 14976 | 11.8201 | 0.5271 |
| 17.1721 | 49.0 | 15288 | 11.8889 | 0.4729 |
| 17.168 | 50.0 | 15600 | 11.8029 | 0.5271 |
| 17.168 | 51.0 | 15912 | 11.8118 | 0.4729 |
| 17.1493 | 52.0 | 16224 | 11.7825 | 0.4729 |
| 17.1493 | 53.0 | 16536 | 11.8072 | 0.5271 |
| 17.1493 | 54.0 | 16848 | 11.8041 | 0.5271 |
| 17.1256 | 55.0 | 17160 | 11.8140 | 0.4729 |
| 17.1256 | 56.0 | 17472 | 11.8077 | 0.5271 |
| 17.1315 | 57.0 | 17784 | 11.8012 | 0.5271 |
| 17.1204 | 58.0 | 18096 | 11.7970 | 0.4729 |
| 17.1204 | 59.0 | 18408 | 11.7870 | 0.5271 |
| 17.1129 | 60.0 | 18720 | 11.7866 | 0.4729 |
### Framework versions
- Transformers 4.30.0
- Pytorch 2.0.1+cu117
- Datasets 2.14.4
- Tokenizers 0.13.3
|
Onutoa/20230822120608
|
Onutoa
| 2023-08-22T04:07:49Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:super_glue",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-22T03:06:26Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- super_glue
metrics:
- accuracy
model-index:
- name: '20230822120608'
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 20230822120608
This model is a fine-tuned version of [bert-large-cased](https://huggingface.co/bert-large-cased) on the super_glue dataset.
It achieves the following results on the evaluation set:
- Loss: 19.9899
- Accuracy: 0.5271
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.01
- train_batch_size: 8
- eval_batch_size: 8
- seed: 11
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 60.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| No log | 1.0 | 312 | 27.0766 | 0.5271 |
| 29.447 | 2.0 | 624 | 24.0887 | 0.4729 |
| 29.447 | 3.0 | 936 | 23.9640 | 0.5271 |
| 27.7172 | 4.0 | 1248 | 22.0260 | 0.4729 |
| 26.4345 | 5.0 | 1560 | 22.0502 | 0.4729 |
| 26.4345 | 6.0 | 1872 | 22.9337 | 0.5271 |
| 27.0832 | 7.0 | 2184 | 21.2859 | 0.5271 |
| 27.0832 | 8.0 | 2496 | 21.4709 | 0.4729 |
| 25.6523 | 9.0 | 2808 | 20.3539 | 0.5271 |
| 25.5288 | 10.0 | 3120 | 21.2982 | 0.5271 |
| 25.5288 | 11.0 | 3432 | 22.0599 | 0.5271 |
| 25.9846 | 12.0 | 3744 | 22.1000 | 0.5271 |
| 26.609 | 13.0 | 4056 | 24.1133 | 0.4729 |
| 26.609 | 14.0 | 4368 | 22.4392 | 0.4729 |
| 26.7751 | 15.0 | 4680 | 22.0514 | 0.4729 |
| 26.7751 | 16.0 | 4992 | 21.4413 | 0.5271 |
| 25.8484 | 17.0 | 5304 | 21.6759 | 0.5271 |
| 25.7937 | 18.0 | 5616 | 21.2726 | 0.5271 |
| 25.7937 | 19.0 | 5928 | 21.2489 | 0.5271 |
| 25.6479 | 20.0 | 6240 | 21.1881 | 0.5271 |
| 25.6144 | 21.0 | 6552 | 21.0354 | 0.5271 |
| 25.6144 | 22.0 | 6864 | 21.0688 | 0.4729 |
| 25.4368 | 23.0 | 7176 | 21.2154 | 0.4729 |
| 25.4368 | 24.0 | 7488 | 21.2348 | 0.4729 |
| 25.5564 | 25.0 | 7800 | 21.1510 | 0.5271 |
| 25.5495 | 26.0 | 8112 | 21.3992 | 0.5271 |
| 25.5495 | 27.0 | 8424 | 21.4035 | 0.4729 |
| 25.4536 | 28.0 | 8736 | 20.9643 | 0.5271 |
| 25.3641 | 29.0 | 9048 | 20.7780 | 0.4729 |
| 25.3641 | 30.0 | 9360 | 21.4761 | 0.5271 |
| 25.4089 | 31.0 | 9672 | 21.1053 | 0.4729 |
| 25.4089 | 32.0 | 9984 | 21.1557 | 0.5271 |
| 25.6056 | 33.0 | 10296 | 21.0180 | 0.5271 |
| 25.5078 | 34.0 | 10608 | 21.1026 | 0.4729 |
| 25.5078 | 35.0 | 10920 | 21.3723 | 0.4729 |
| 25.6607 | 36.0 | 11232 | 21.4309 | 0.4729 |
| 25.9641 | 37.0 | 11544 | 21.4083 | 0.5271 |
| 25.9641 | 38.0 | 11856 | 21.2875 | 0.5271 |
| 25.6756 | 39.0 | 12168 | 21.4538 | 0.5271 |
| 25.6756 | 40.0 | 12480 | 21.1870 | 0.4729 |
| 25.4709 | 41.0 | 12792 | 21.0796 | 0.5271 |
| 25.2913 | 42.0 | 13104 | 20.9412 | 0.5271 |
| 25.2913 | 43.0 | 13416 | 20.8932 | 0.5271 |
| 25.1541 | 44.0 | 13728 | 20.9172 | 0.4729 |
| 25.0679 | 45.0 | 14040 | 20.6787 | 0.5271 |
| 25.0679 | 46.0 | 14352 | 20.6308 | 0.4729 |
| 24.965 | 47.0 | 14664 | 20.5240 | 0.5271 |
| 24.965 | 48.0 | 14976 | 20.6378 | 0.4729 |
| 24.8969 | 49.0 | 15288 | 20.5030 | 0.4729 |
| 24.8319 | 50.0 | 15600 | 20.3257 | 0.5271 |
| 24.8319 | 51.0 | 15912 | 20.2990 | 0.5271 |
| 24.7301 | 52.0 | 16224 | 20.3661 | 0.4729 |
| 24.6644 | 53.0 | 16536 | 20.2088 | 0.5271 |
| 24.6644 | 54.0 | 16848 | 20.1543 | 0.5271 |
| 24.5917 | 55.0 | 17160 | 20.0860 | 0.4729 |
| 24.5917 | 56.0 | 17472 | 20.0672 | 0.5271 |
| 24.5505 | 57.0 | 17784 | 20.0518 | 0.5271 |
| 24.5065 | 58.0 | 18096 | 20.0036 | 0.5271 |
| 24.5065 | 59.0 | 18408 | 19.9939 | 0.5271 |
| 24.4773 | 60.0 | 18720 | 19.9899 | 0.5271 |
### Framework versions
- Transformers 4.30.0
- Pytorch 2.0.1+cu117
- Datasets 2.14.4
- Tokenizers 0.13.3
|
jimmyofdoom/Reinforce-CartPole-v1
|
jimmyofdoom
| 2023-08-22T03:50:51Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-22T03:50:43Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
dkqjrm/20230822105333
|
dkqjrm
| 2023-08-22T03:42:42Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:super_glue",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-22T01:53:52Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- super_glue
metrics:
- accuracy
model-index:
- name: '20230822105333'
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 20230822105333
This model is a fine-tuned version of [bert-large-cased](https://huggingface.co/bert-large-cased) on the super_glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3480
- Accuracy: 0.5271
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.01
- train_batch_size: 8
- eval_batch_size: 8
- seed: 11
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 60.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| No log | 1.0 | 312 | 2.0240 | 0.5271 |
| 1.1081 | 2.0 | 624 | 0.8435 | 0.5271 |
| 1.1081 | 3.0 | 936 | 0.4636 | 0.4729 |
| 1.109 | 4.0 | 1248 | 0.3964 | 0.4729 |
| 0.9629 | 5.0 | 1560 | 0.3803 | 0.5271 |
| 0.9629 | 6.0 | 1872 | 0.3630 | 0.5271 |
| 0.8211 | 7.0 | 2184 | 0.5683 | 0.5271 |
| 0.8211 | 8.0 | 2496 | 0.3645 | 0.4729 |
| 0.8143 | 9.0 | 2808 | 0.4972 | 0.5271 |
| 0.8375 | 10.0 | 3120 | 0.4557 | 0.4729 |
| 0.8375 | 11.0 | 3432 | 0.4497 | 0.5271 |
| 0.7522 | 12.0 | 3744 | 0.4278 | 0.4729 |
| 0.7584 | 13.0 | 4056 | 0.5233 | 0.5271 |
| 0.7584 | 14.0 | 4368 | 0.4097 | 0.5271 |
| 0.6684 | 15.0 | 4680 | 0.4749 | 0.4729 |
| 0.6684 | 16.0 | 4992 | 0.7626 | 0.5271 |
| 0.6637 | 17.0 | 5304 | 0.6379 | 0.5271 |
| 0.5907 | 18.0 | 5616 | 0.3496 | 0.5271 |
| 0.5907 | 19.0 | 5928 | 0.4018 | 0.5271 |
| 0.5618 | 20.0 | 6240 | 0.3606 | 0.5271 |
| 0.5539 | 21.0 | 6552 | 0.3596 | 0.4729 |
| 0.5539 | 22.0 | 6864 | 0.4662 | 0.5271 |
| 0.537 | 23.0 | 7176 | 0.3488 | 0.5271 |
| 0.537 | 24.0 | 7488 | 0.8345 | 0.4729 |
| 0.5337 | 25.0 | 7800 | 0.3486 | 0.5271 |
| 0.5058 | 26.0 | 8112 | 0.3496 | 0.5271 |
| 0.5058 | 27.0 | 8424 | 0.5283 | 0.4729 |
| 0.5239 | 28.0 | 8736 | 0.3566 | 0.5271 |
| 0.4835 | 29.0 | 9048 | 0.3810 | 0.4729 |
| 0.4835 | 30.0 | 9360 | 0.4577 | 0.5271 |
| 0.4672 | 31.0 | 9672 | 0.4612 | 0.4729 |
| 0.4672 | 32.0 | 9984 | 0.4667 | 0.5271 |
| 0.4699 | 33.0 | 10296 | 0.3585 | 0.5271 |
| 0.4637 | 34.0 | 10608 | 0.3518 | 0.5271 |
| 0.4637 | 35.0 | 10920 | 0.4995 | 0.4729 |
| 0.4539 | 36.0 | 11232 | 0.3777 | 0.4729 |
| 0.4465 | 37.0 | 11544 | 0.3492 | 0.5271 |
| 0.4465 | 38.0 | 11856 | 0.3486 | 0.5271 |
| 0.4446 | 39.0 | 12168 | 0.3482 | 0.5271 |
| 0.4446 | 40.0 | 12480 | 0.3776 | 0.4729 |
| 0.437 | 41.0 | 12792 | 0.3485 | 0.5271 |
| 0.4309 | 42.0 | 13104 | 0.3481 | 0.5271 |
| 0.4309 | 43.0 | 13416 | 0.3657 | 0.5271 |
| 0.424 | 44.0 | 13728 | 0.3484 | 0.5271 |
| 0.4165 | 45.0 | 14040 | 0.3492 | 0.5271 |
| 0.4165 | 46.0 | 14352 | 0.3706 | 0.4729 |
| 0.4206 | 47.0 | 14664 | 0.3490 | 0.5271 |
| 0.4206 | 48.0 | 14976 | 0.3510 | 0.5271 |
| 0.4202 | 49.0 | 15288 | 0.3478 | 0.5271 |
| 0.4038 | 50.0 | 15600 | 0.3621 | 0.5271 |
| 0.4038 | 51.0 | 15912 | 0.3480 | 0.5271 |
| 0.3916 | 52.0 | 16224 | 0.4587 | 0.4729 |
| 0.3901 | 53.0 | 16536 | 0.3506 | 0.5271 |
| 0.3901 | 54.0 | 16848 | 0.3545 | 0.5271 |
| 0.3805 | 55.0 | 17160 | 0.3540 | 0.4729 |
| 0.3805 | 56.0 | 17472 | 0.3626 | 0.5271 |
| 0.3781 | 57.0 | 17784 | 0.3504 | 0.5271 |
| 0.3688 | 58.0 | 18096 | 0.3478 | 0.5271 |
| 0.3688 | 59.0 | 18408 | 0.3527 | 0.5271 |
| 0.3657 | 60.0 | 18720 | 0.3480 | 0.5271 |
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
shipilovya/souls_paul_v0.0.1
|
shipilovya
| 2023-08-22T03:40:17Z | 2 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-22T03:40:06Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.4.0
|
nikinetrahutama/afx-ai-llama-chat-model-10
|
nikinetrahutama
| 2023-08-22T02:48:59Z | 1 | 0 |
peft
|
[
"peft",
"pytorch",
"llama",
"region:us"
] | null | 2023-08-22T02:40:45Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
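For reference, a minimal sketch of recreating this 4-bit NF4 config before attaching the adapter with `peft` (the base LLaMA model is not named in this card, so the id below is only a placeholder):
```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

# Mirrors the 4-bit settings listed above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# Placeholder: substitute the base model this adapter was trained from.
base = AutoModelForCausalLM.from_pretrained("<base-llama-id>", quantization_config=bnb_config, device_map="auto")
model = PeftModel.from_pretrained(base, "nikinetrahutama/afx-ai-llama-chat-model-10")
```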
### Framework versions
- PEFT 0.5.0.dev0
|
YYuX/Arknights_Skadi-the-corrupting-heart_JA_VITS
|
YYuX
| 2023-08-22T02:38:45Z | 5 | 0 |
transformers
|
[
"transformers",
"text-to-speech",
"VITS",
"ja",
"license:other",
"endpoints_compatible",
"region:us"
] |
text-to-speech
| 2023-08-22T02:30:36Z |
---
license: other
tags:
- text-to-speech
- VITS
language:
- ja
---
# Terms of Use|使用规约
By using this model, you agree to these terms of use. If you violate them, you bear the consequences.
使用本模型,视为同意使用规约。如果违反,后果自负。
1. This model is intended only for communication and learning. The dataset is copyrighted by HyperGryph.
本模型仅供交流与学习使用,数据集版权归鹰角网络所有。
2. Using this model for illegal activities, or for religious or political activities, is strictly prohibited. If you do not agree to this provision, you may not use the model.
禁止使用本模型从事违法行为与宗教、政治等活动。不同意此条则禁止使用该模型。
3. Commercial and for-profit use is prohibited. 不得商用,不得用于盈利性内容。
But the following behaviors are exceptions|但以下行为例外:
Earning subsidies from platforms through works (such as videos) created with this model is allowed, but the works must not contain commercial content such as advertisements or product promotion, and must not encourage the audience to tip or reward the creator. For example, Bilibili's creation-incentive program is allowed, but the video must not contain advertisements and the uploader must not encourage viewers or fans to "charge" (tip) them.
通过使用本模型的二创作品赚取来自平台的补贴,但稿件中不得包含广告,带货等商业内容,不得含有鼓动观众打赏的行为。例如:哔哩哔哩的创作激励是允许的。但稿件内不允许出现广告。并且UP主不得鼓励观众或粉丝为其充电。
4. Any form of secondary distribution of this model is prohibited. 禁止以任何形式二次分发本模型。
5. You bear full responsibility for any problems arising from use of this model and for any resulting consequences; the author accepts no liability. 由使用本模型导致的各种问题,需自行承担全部责任和后果!与作者无关!
# How to use?|如何使用?
1. Use MoeGoe for inference. 你可以使用MoeGoe进行推理。
2.Or you can refer to:https://github.com/Plachtaa/VITS-fast-fine-tuning/blob/main/README.md#inference-or-usage-currently-support-windows-only
或者参照:https://github.com/Plachtaa/VITS-fast-fine-tuning/blob/main/README_ZH.md#%E6%9C%AC%E5%9C%B0%E8%BF%90%E8%A1%8C%E5%92%8C%E6%8E%A8%E7%90%86
|
Eaaven/lora-trained-rev-1e-5
|
Eaaven
| 2023-08-22T02:31:28Z | 1 | 0 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:danbrown/RevAnimated-v1-2-2",
"base_model:adapter:danbrown/RevAnimated-v1-2-2",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-08-22T02:05:39Z |
---
license: creativeml-openrail-m
base_model: danbrown/RevAnimated-v1-2-2
instance_prompt: a photo of alice girl
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - Eaaven/lora-trained-rev-1e-5
These are LoRA adaptation weights for danbrown/RevAnimated-v1-2-2. The weights were trained on "a photo of alice girl" using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.




LoRA for the text encoder was enabled: False.
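A minimal inference sketch with `diffusers` (generation settings are illustrative; the prompt reuses the instance prompt above):
```python
import torch
from diffusers import StableDiffusionPipeline

# Load the base model and attach these LoRA weights on top of it.
pipe = StableDiffusionPipeline.from_pretrained("danbrown/RevAnimated-v1-2-2", torch_dtype=torch.float16).to("cuda")
pipe.load_lora_weights("Eaaven/lora-trained-rev-1e-5")
image = pipe("a photo of alice girl", num_inference_steps=30).images[0]
image.save("alice.png")
```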
|
pawankumar/bloomz-560m_PROMPT_TUNING_CAUSAL_LM
|
pawankumar
| 2023-08-22T02:27:24Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-22T02:27:21Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0
|
Siyoun/kogpt2-lora
|
Siyoun
| 2023-08-22T02:21:09Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-14T10:13:20Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0
|
akar49/deform_detr-crack-I
|
akar49
| 2023-08-22T02:14:58Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"deformable_detr",
"object-detection",
"generated_from_trainer",
"dataset:crack_detection-merged",
"base_model:facebook/deformable-detr-box-supervised",
"base_model:finetune:facebook/deformable-detr-box-supervised",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
object-detection
| 2023-08-21T22:32:01Z |
---
license: apache-2.0
base_model: facebook/deformable-detr-box-supervised
tags:
- generated_from_trainer
datasets:
- crack_detection-merged
model-index:
- name: deform_detr-crack-I
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deform_detr-crack-I
This model is a fine-tuned version of [facebook/deformable-detr-box-supervised](https://huggingface.co/facebook/deformable-detr-box-supervised) on the crack_detection-merged dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
magurotaisa/ppo-LunarLander-v2_1
|
magurotaisa
| 2023-08-22T02:12:46Z | 0 | 0 | null |
[
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-22T00:31:56Z |
---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 122.70 +/- 117.96
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo',
 'seed': 1,
 'torch_deterministic': True,
 'cuda': True,
 'track': False,
 'wandb_project_name': 'cleanRL',
 'wandb_entity': None,
 'capture_video': False,
 'env_id': 'LunarLander-v2',
 'total_timesteps': 5000000,
 'learning_rate': 0.00025,
 'num_envs': 4,
 'num_steps': 128,
 'anneal_lr': True,
 'gae': True,
 'gamma': 0.99,
 'gae_lambda': 0.95,
 'num_minibatches': 4,
 'update_epochs': 4,
 'norm_adv': True,
 'clip_coef': 0.2,
 'clip_vloss': True,
 'ent_coef': 0.01,
 'vf_coef': 0.5,
 'max_grad_norm': 0.5,
 'target_kl': None,
 'repo_id': 'magurotaisa/ppo-LunarLander-v2_1',
 'batch_size': 512,
 'minibatch_size': 128}
```
|
krishi/tartan2
|
krishi
| 2023-08-22T02:10:10Z | 29 | 1 |
diffusers
|
[
"diffusers",
"tensorboard",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:finetune:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-08-22T01:52:13Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: tartan fabric
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - krishi/tartan2
This is a DreamBooth model derived from runwayml/stable-diffusion-v1-5. The weights were trained on "tartan fabric" using [DreamBooth](https://dreambooth.github.io/).
You can find some example images below.
DreamBooth for the text encoder was enabled: False.
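A minimal inference sketch with `diffusers` (the prompt is only an example built around the instance prompt above):
```python
import torch
from diffusers import StableDiffusionPipeline

# The repo contains a full DreamBooth-fine-tuned pipeline, so it can be loaded directly.
pipe = StableDiffusionPipeline.from_pretrained("krishi/tartan2", torch_dtype=torch.float16).to("cuda")
image = pipe("a scarf made of tartan fabric", num_inference_steps=30).images[0]
image.save("tartan.png")
```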
|
tbooy/a2c-PandaReachDense-v3
|
tbooy
| 2023-08-22T01:49:44Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-22T01:44:29Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v3
type: PandaReachDense-v3
metrics:
- type: mean_reward
value: -0.26 +/- 0.11
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v3**
This is a trained model of an **A2C** agent playing **PandaReachDense-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; check the repo's files for the exact name):
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# The filename is assumed; adjust it to the .zip actually stored in the repo.
checkpoint = load_from_hub(repo_id="tbooy/a2c-PandaReachDense-v3", filename="a2c-PandaReachDense-v3.zip")
model = A2C.load(checkpoint)
```
|
Clakmann/t5-base-Clakmann-thesis-epoch10
|
Clakmann
| 2023-08-22T01:42:37Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google-t5/t5-base",
"base_model:finetune:google-t5/t5-base",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-08-21T18:46:46Z |
---
license: apache-2.0
base_model: t5-base
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: t5-base-Clakmann-thesis-epoch10
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-base-Clakmann-thesis-epoch10
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5727
- Rouge1: 0.2268
- Rouge2: 0.0853
- Rougel: 0.215
- Rougelsum: 0.2157
- Gen Len: 14.2621
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| 1.8844 | 1.0 | 5029 | 1.6766 | 0.2148 | 0.0756 | 0.2044 | 0.2045 | 13.7397 |
| 1.7073 | 2.0 | 10058 | 1.6168 | 0.2196 | 0.0792 | 0.2099 | 0.2102 | 13.8238 |
| 1.6487 | 3.0 | 15087 | 1.5948 | 0.2199 | 0.0794 | 0.209 | 0.2091 | 14.3399 |
| 1.5773 | 4.0 | 20116 | 1.5800 | 0.2252 | 0.0816 | 0.2157 | 0.2164 | 13.9383 |
| 1.5114 | 5.0 | 25145 | 1.5770 | 0.2229 | 0.0798 | 0.212 | 0.2126 | 14.2567 |
| 1.4688 | 6.0 | 30174 | 1.5703 | 0.2255 | 0.0848 | 0.2158 | 0.2164 | 13.9973 |
| 1.4283 | 7.0 | 35203 | 1.5673 | 0.2237 | 0.0834 | 0.2125 | 0.2129 | 14.0966 |
| 1.4166 | 8.0 | 40232 | 1.5702 | 0.2276 | 0.0866 | 0.2153 | 0.2159 | 14.3453 |
| 1.3978 | 9.0 | 45261 | 1.5706 | 0.2274 | 0.0864 | 0.216 | 0.2166 | 14.2272 |
| 1.3688 | 10.0 | 50290 | 1.5727 | 0.2268 | 0.0853 | 0.215 | 0.2157 | 14.2621 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
KoalaAI/Emoji-Suggester
|
KoalaAI
| 2023-08-22T01:40:43Z | 166 | 9 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"deberta-v2",
"text-classification",
"autotrain",
"emoji",
"sentiment",
"en",
"dataset:adorkin/extended_tweet_emojis",
"license:openrail",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-22T01:27:50Z |
---
tags:
- autotrain
- text-classification
- emoji
- sentiment
language:
- en
widget:
- text: I love apples
- text: I hate apples
- text: I hate it when they don't listen
- text: I hate it when they don't listen :(
- text: It's so cosy
- text: there's nothing like nature
co2_eq_emissions:
emissions: 0.6833689692559574
license: openrail
datasets:
- adorkin/extended_tweet_emojis
---
# Emoji Suggester
This model suggests emojis for a given text. It is a multi-class text-classification model that uses deberta-v3-base as a backbone.
## Training Data
The dataset this was trained on has had its emojis replaced with the Unicode characters themselves rather than an index, which previously required a separate file to map the indices.
The dataset was further modified in the following ways:
* The "US" emoji was removed, as it serves very little purpose in general conversation.
* The dataset was deduplicated.
* The number of occurrences of each emoji was roughly balanced against the others, preventing the model from becoming heavily biased toward the emojis that appear most often in the training data.
## Intended uses & limitations
This model is intended to be used for fun and entertainment purposes, such as adding emojis to social media posts, messages, or emails. It is not intended to be used for any serious or sensitive applications, such as sentiment analysis, emotion recognition, or hate speech detection. The model may not be able to handle texts that are too long, complex, or ambiguous, and may generate inappropriate or irrelevant emojis in some cases. The model may also reflect the biases and stereotypes present in the training data, such as gender, race, or culture. Users are advised to use the model with caution and discretion.
## Model Training Info
- Problem type: Multi-class Classification
- CO2 Emissions (in grams): 0.6834
## Validation Metrics
- Loss: 2.339
- Accuracy: 0.216
- Macro F1: 0.136
- Micro F1: 0.216
- Weighted F1: 0.163
- Macro Precision: 0.126
- Micro Precision: 0.216
- Weighted Precision: 0.152
- Macro Recall: 0.179
- Micro Recall: 0.216
- Weighted Recall: 0.216
## Usage
You can use cURL to access this model:
```bash
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love apples"}' https://api-inference.huggingface.co/models/KoalaAI/Emoji-Suggester
```
Or the Python API:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("KoalaAI/Emoji-Suggester", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("KoalaAI/Emoji-Suggester", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)

# Map the highest-scoring class id to its emoji label (assumes id2label is populated in the model config).
predicted_id = outputs.logits.argmax(-1).item()
print(model.config.id2label[predicted_id])
```
|
CreatorPhan/Test_Q8_16
|
CreatorPhan
| 2023-08-22T01:39:00Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-22T01:37:21Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.4.0
|
JordanWHLewis/base-model-fairseq-V1
|
JordanWHLewis
| 2023-08-22T01:23:03Z | 111 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-08-21T22:06:30Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: base-model-fairseq-V1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# base-model-fairseq-V1
This model is a fine-tuned version of [facebook/wav2vec2-base-960h](https://huggingface.co/facebook/wav2vec2-base-960h) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 4.7644
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 6.8808 | 4.04 | 200 | 4.6158 |
| 3.8357 | 8.08 | 400 | 4.5245 |
| 3.7928 | 12.12 | 600 | 4.7469 |
| 3.874 | 16.16 | 800 | 4.6975 |
| 3.7893 | 20.2 | 1000 | 4.7110 |
| 3.8368 | 24.24 | 1200 | 4.8101 |
| 3.7922 | 28.28 | 1400 | 4.7443 |
| 3.7842 | 32.32 | 1600 | 4.8832 |
| 3.7882 | 36.36 | 1800 | 4.7598 |
| 3.7777 | 40.4 | 2000 | 4.8261 |
| 3.834 | 44.44 | 2200 | 4.7754 |
| 3.7972 | 48.48 | 2400 | 4.7644 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.13.2
|
Bhanu9Prakash/whisper-small-dv
|
Bhanu9Prakash
| 2023-08-22T01:11:22Z | 75 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"dv",
"dataset:mozilla-foundation/common_voice_13_0",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-08-21T23:08:06Z |
---
language:
- dv
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_13_0
metrics:
- wer
model-index:
- name: Whisper Small Dv - Bhanu9Prakash
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 13
type: mozilla-foundation/common_voice_13_0
config: dv
split: test
args: dv
metrics:
- name: Wer
type: wer
value: 13.097680564732064
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Dv - Bhanu9Prakash
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 13 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1691
- Wer Ortho: 62.1144
- Wer: 13.0977
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 50
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:-------:|
| 0.1237 | 1.63 | 500 | 0.1691 | 62.1144 | 13.0977 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
Bhanu9Prakash/whisper-tiny-en
|
Bhanu9Prakash
| 2023-08-22T00:53:13Z | 85 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:PolyAI/minds14",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-08-21T23:45:08Z |
---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_trainer
datasets:
- PolyAI/minds14
metrics:
- wer
model-index:
- name: whisper-tiny-en
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: PolyAI/minds14
type: PolyAI/minds14
config: en-US
split: train
args: en-US
metrics:
- name: Wer
type: wer
value: 0.24361948955916474
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-tiny-en
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the PolyAI/minds14 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5973
- Wer Ortho: 0.2520
- Wer: 0.2436
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 50
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|
| 0.292 | 3.57 | 100 | 0.4495 | 0.2665 | 0.2512 |
| 0.0387 | 7.14 | 200 | 0.4957 | 0.2732 | 0.2604 |
| 0.0053 | 10.71 | 300 | 0.5469 | 0.2538 | 0.2448 |
| 0.0013 | 14.29 | 400 | 0.5758 | 0.2580 | 0.2506 |
| 0.0008 | 17.86 | 500 | 0.5973 | 0.2520 | 0.2436 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
sandychoii/audio-classification
|
sandychoii
| 2023-08-22T00:32:52Z | 161 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"hubert",
"audio-classification",
"generated_from_trainer",
"dataset:marsyas/gtzan",
"base_model:ntu-spml/distilhubert",
"base_model:finetune:ntu-spml/distilhubert",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2023-08-22T00:27:36Z |
---
license: apache-2.0
base_model: ntu-spml/distilhubert
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
metrics:
- accuracy
model-index:
- name: audio-classification
results:
- task:
name: Audio Classification
type: audio-classification
dataset:
name: GTZAN
type: marsyas/gtzan
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.94
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# audio-classification
This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the GTZAN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2132
- Accuracy: 0.94
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3207 | 1.0 | 25 | 0.2132 | 0.94 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
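A minimal inference sketch with the `transformers` audio-classification pipeline (the audio file path is a placeholder):
```python
from transformers import pipeline

# Classify a music clip into one of the GTZAN genres.
classifier = pipeline("audio-classification", model="sandychoii/audio-classification")
print(classifier("clip.wav", top_k=3))  # "clip.wav" is a placeholder path
```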
|
klosax/pythia-deduped-gguf
|
klosax
| 2023-08-22T00:31:45Z | 23 | 3 | null |
[
"gguf",
"endpoints_compatible",
"region:us"
] | null | 2023-08-22T00:06:50Z |
Source models: https://huggingface.co/EleutherAI
Converted to the latest GGML model file format, GGUF.
Warning: These models are currently not supported by llama.cpp
|
agustinl/a2c-PandaReachDense-v3
|
agustinl
| 2023-08-22T00:20:14Z | 3 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-22T00:14:50Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v3
type: PandaReachDense-v3
metrics:
- type: mean_reward
value: -0.16 +/- 0.09
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v3**
This is a trained model of an **A2C** agent playing **PandaReachDense-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; check the repo's files for the exact name):
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# The filename is assumed; adjust it to the .zip actually stored in the repo.
checkpoint = load_from_hub(repo_id="agustinl/a2c-PandaReachDense-v3", filename="a2c-PandaReachDense-v3.zip")
model = A2C.load(checkpoint)
```
|
nikinetrahutama/afx-ai-llama-chat-model-9
|
nikinetrahutama
| 2023-08-22T00:13:53Z | 0 | 0 |
peft
|
[
"peft",
"pytorch",
"llama",
"region:us"
] | null | 2023-08-21T23:47:13Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0.dev0
|
platzi/johao-vit_model
|
platzi
| 2023-08-22T00:11:58Z | 244 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:beans",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-08-21T23:35:50Z |
---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_trainer
datasets:
- beans
metrics:
- accuracy
model-index:
- name: johao-vit_model
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: beans
type: beans
config: default
split: validation
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9924812030075187
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# johao-vit_model
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0249
- Accuracy: 0.9925
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1277 | 3.85 | 500 | 0.0249 | 0.9925 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
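A minimal inference sketch with the `transformers` image-classification pipeline (the image path is a placeholder):
```python
from transformers import pipeline

# Classify a bean-leaf photo into one of the dataset's classes.
classifier = pipeline("image-classification", model="platzi/johao-vit_model")
print(classifier("leaf.jpg", top_k=3))  # "leaf.jpg" is a placeholder path
```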
|
totally-not-an-llm/EverythingLM-13b-V2-peft
|
totally-not-an-llm
| 2023-08-22T00:01:03Z | 2 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-21T23:51:37Z |
---
library_name: peft
---
Trained for 20 epochs on the V2 dataset.
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0.dev0
|
cartesinus/iva_mt_wslot-m2m100_418M-en-pl-plaintext
|
cartesinus
| 2023-08-21T23:55:44Z | 4 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"m2m_100",
"text2text-generation",
"generated_from_trainer",
"dataset:iva_mt_wslot",
"doi:10.57967/hf/1044",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-08-21T15:58:55Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- iva_mt_wslot
metrics:
- bleu
model-index:
- name: iva_mt_wslot-m2m100_418M-en-pl-plaintext_10e
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: iva_mt_wslot
type: iva_mt_wslot
config: en-pl
split: validation
args: en-pl
metrics:
- name: Bleu
type: bleu
value: 41.3124
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# iva_mt_wslot-m2m100_418M-en-pl-plaintext_10e
This model is a fine-tuned version of [facebook/m2m100_418M](https://huggingface.co/facebook/m2m100_418M) on the iva_mt_wslot dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0169
- Bleu: 41.3124
- Gen Len: 15.5197
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|
| 0.0169 | 1.0 | 5091 | 0.0162 | 36.663 | 15.6444 |
| 0.0124 | 2.0 | 10182 | 0.0151 | 38.36 | 15.6314 |
| 0.0086 | 3.0 | 15273 | 0.0150 | 39.3808 | 15.5507 |
| 0.0069 | 4.0 | 20364 | 0.0152 | 39.6307 | 15.5235 |
| 0.0049 | 5.0 | 25455 | 0.0156 | 40.4441 | 15.5911 |
| 0.0038 | 6.0 | 30546 | 0.0159 | 40.3781 | 15.47 |
| 0.0027 | 7.0 | 35637 | 0.0163 | 40.1339 | 15.4722 |
| 0.0021 | 8.0 | 40728 | 0.0166 | 41.4429 | 15.4906 |
| 0.0016 | 9.0 | 45819 | 0.0168 | 41.1024 | 15.5249 |
| 0.0012 | 10.0 | 50910 | 0.0169 | 41.3124 | 15.5197 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
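A minimal translation sketch (M2M100 needs the source language set on the tokenizer and the target language forced at generation time; the input sentence is just an example):
```python
from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer

model = M2M100ForConditionalGeneration.from_pretrained("cartesinus/iva_mt_wslot-m2m100_418M-en-pl-plaintext")
tokenizer = M2M100Tokenizer.from_pretrained("cartesinus/iva_mt_wslot-m2m100_418M-en-pl-plaintext")

tokenizer.src_lang = "en"
inputs = tokenizer("set an alarm for seven in the morning", return_tensors="pt")
generated = model.generate(**inputs, forced_bos_token_id=tokenizer.get_lang_id("pl"))
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```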
|
dn-gh/TQC-PandaReachDense-v2
|
dn-gh
| 2023-08-21T23:29:45Z | 6 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"arxiv:2106.13687",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-02-08T01:04:51Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: TQC
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -0.21 +/- 0.11
name: mean_reward
verified: false
---
# **TQC** Agent playing **PandaReachDense-v2**
This is a trained model of a **TQC** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; check the repo's files for the exact name):
```python
from sb3_contrib import TQC  # TQC is provided by the sb3-contrib package
from huggingface_sb3 import load_from_hub

# The filename is assumed; adjust it to the .zip actually stored in the repo.
checkpoint = load_from_hub(repo_id="dn-gh/TQC-PandaReachDense-v2", filename="TQC-PandaReachDense-v2.zip")
model = TQC.load(checkpoint)
```
Panda Gym environments: [arxiv.org/abs/2106.13687](https://arxiv.org/abs/2106.13687)
|
dn-gh/a2c-PandaReachDense-v2
|
dn-gh
| 2023-08-21T23:29:19Z | 5 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"arxiv:2106.13687",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-02-05T14:24:53Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -2.53 +/- 1.06
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of an **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; check the repo's files for the exact name):
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# The filename is assumed; adjust it to the .zip actually stored in the repo.
checkpoint = load_from_hub(repo_id="dn-gh/a2c-PandaReachDense-v2", filename="a2c-PandaReachDense-v2.zip")
model = A2C.load(checkpoint)
```
Panda Gym environments: [arxiv.org/abs/2106.13687](https://arxiv.org/abs/2106.13687)
|
Bhanu9Prakash/a2c-PandaReachDense-v2
|
Bhanu9Prakash
| 2023-08-21T23:24:30Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"arxiv:2106.13687",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-05-12T09:45:46Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -1.94 +/- 0.27
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of an **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; check the repo's files for the exact name):
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# The filename is assumed; adjust it to the .zip actually stored in the repo.
checkpoint = load_from_hub(repo_id="Bhanu9Prakash/a2c-PandaReachDense-v2", filename="a2c-PandaReachDense-v2.zip")
model = A2C.load(checkpoint)
```
Panda Gym environments: [arxiv.org/abs/2106.13687](https://arxiv.org/abs/2106.13687)
|
elbonaparte/asclepius_v1
|
elbonaparte
| 2023-08-21T23:21:55Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"en",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-08-18T09:30:22Z |
---
license: other
language:
- en
library_name: transformers
metrics:
- accuracy 85%
---
# Asclepius
#### Version: Initial (v1)
***
## Overview
Asclepius is a state-of-the-art Language Model developed with a special focus on healthcare queries.
It is a LLaMA-2-70b-based model fine-tuned on a broad range of publicly available healthcare data. It can give timely and consistent answers to health-related queries with good accuracy.
## Setup Guide & Usage
The model is available on this huggingface repository. It supports up to 4-bit quantization with sustained performance.
To use the model, you can use the usual Hugging Face Transformers API:
```python
from transformers import AutoTokenizer
import transformers
import torch
model = "elbonaparte/asclepius_v1"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
sequences = pipeline(
'Question: What is the most common cause of chest pain in men < 50? Answer:\n',
do_sample=True,
top_k=10,
num_return_sequences=1,
eos_token_id=tokenizer.eos_token_id,
max_length=200,
)
for seq in sequences:
print(f"Result: {seq['generated_text']}")
```
The output would be:
```
The most common causes of chest pain in men < 50 are:
1. Gastroesophageal reflux disease (GERD): This occurs when stomach acid flows back into the esophagus, causing a burning sensation in the chest (heartburn).
2. Musculoskeletal issues: Strained muscles or inflammation in the chest wall can cause chest pain, especially during physical activity or after an injury.
3. Anxiety and stress-related chest pain: Anxiety or panic attacks can cause chest tightness and discomfort, mimicking heart-related pain.
4. Costochondritis: Inflammation of the cartilage that connects the ribs to the breastbone can cause sharp chest pain, often aggravated by physical activity or deep breathing.
```
Although this is plain, unstructured text, several prompt strategies can be used to obtain structured output such as lists or JSON from the LLM.
In `asclepius_api_helpers.py` we provide several prompt-optimized functions for returning structured responses for the most common use cases: medication lists, most/least common findings, diagnostic hypotheses, etc.
These functions can be used to create a server tasked with querying the model (an API gateway) or integrated within the application (an example is provided at https://asclepiusv1.streamlit.app/).
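As an illustration only (the function below is hypothetical and is not one of the helpers in `asclepius_api_helpers.py`), a structured-output wrapper around the pipeline above might look like this:
```python
import json

def ask_structured(generator, question, keys):
    """Hypothetical sketch: prompt the model to answer as a JSON object with the given keys."""
    prompt = (
        f"Question: {question}\n"
        f"Answer strictly as a JSON object with the keys {keys} and nothing else.\nAnswer:\n"
    )
    out = generator(prompt, do_sample=False, max_length=400, num_return_sequences=1)
    completion = out[0]["generated_text"][len(prompt):]
    try:
        return json.loads(completion)
    except json.JSONDecodeError:
        return completion  # fall back to raw text if the model drifts from JSON

# Example call, reusing the `pipeline` object created in the snippet above:
# hypotheses = ask_structured(pipeline, "What are the most common causes of chest pain in men under 50?", ["causes"])
```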
#### Inference server
Alternatively, a RunPod image template pre-configured for text-generation-inference is available, allowing you to get the model up and running in a few clicks by following these steps:
1. Choose an instance to deploy. This model needs a GPU with at least 48 GB of VRAM.

2. Choose the provided "Asclepius Template" image template from the dropdown list; it should be the first entry. RunPod will provision the VM and set everything up automatically.
Downloading the model takes around 15 to 20 minutes.

3. The model server will be available 15 to 20 minutes after that. The URL will look like this: `[POD ID]-80.proxy.runpod.net`.
The POD ID is the string of text under the Pod Name in the top left.

4. An easy-to-use UI for interacting with Asclepius is available at https://asclepiusv1.streamlit.app. Change the pod ID to the running pod ID and the UI will return answers from that model.

The UI code is also available in this repository. After installing [Streamlit](https://docs.streamlit.io/library/get-started/installation), run this command in a terminal inside the repo folder: `streamlit run app.py`.
The example app and use cases interact optimally with the model using input prompts engineered within the functions found in the `asclepius_api_helpers.py` file.
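If you prefer to query the text-generation-inference server from step 3 directly instead of going through the UI, a minimal request sketch looks like this (the pod id is a placeholder and the generation parameters are illustrative):
```python
import requests

# Replace <pod-id> with the id of the running RunPod pod from step 3
url = "https://<pod-id>-80.proxy.runpod.net/generate"
payload = {
    "inputs": "Question: What is the most common cause of chest pain in men < 50? Answer:\n",
    "parameters": {"max_new_tokens": 200},
}
response = requests.post(url, json=payload, timeout=120)
print(response.json()["generated_text"])
```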
## Training Data
An extensive exploration of publicly available healthcare datasets was conducted, and the following were considered as potential inputs:
| Dataset Name | URL | Description | Availability | Type | Data Quality | Data Preparation Effort |
| :----------------------------------------- | :--------------------------------------------- | :------------------------------------------------------------- | :--------------------------------------------- | :--------------- | :----------- | :---------------------- |
| icliniq-10k | icliniq/medical_dialog | conversation between patient and doctors | Easily accessible | Real world data | 7 | 6 |
| HealthCareMagic-100k | HMC/patient_doctor_convo | conversation between patient and doctors | Easily Accessible | Real world data | 7 | 6 |
| Medical Dialog | hf/medical_dialog | conversation between patient and doctors | Easily accessible | Real world data | 8 | 5 |
| Medical Notes 40 | rungalileo/medical_transcription_4 | Hospitalist Notes (PreOp, procedure, discharge summaries, etc) | Easily accessible | Real World Data | 9 | 5 |
| (MIMIC) Indiana University Medical Reports | Indiana_University_Medical_reports_original | Radiologic Reports and Clinical notes | Easily accessible | Real World Data | 9 | 5 |
| Medical Domain | argilla/medical-domain | Clinical Notes | Easily accessible | Real World Data | 8 | 5 |
| Medical Keyword | argilla/medical-keywords | Clinical Notes | Easily accessible | Real World Data | 8 | 5 |
| Medical QA | eswardivi/medical_qa | Question answering to patient doubts | Easily accessible | Real World Data | 9 | 4 |
| Medical Transcriptions | tchebonenko/MedicalTranscriptions | Clinical Notes | Easily accessible | Real World Data | 7 | 6 |
| Synthea | synthea.mitre.org | Synthetic health data generator | Easily accessible, graph-based logic necessary | Synthetic Data | 8 | 10 |
| MIMIC IV 2.0 | | Real world medical texts | Accessible after 14-modules online training | Real World Data | 9 | 7 |
| | | | | | | |
| Medline Plus | medlineplus.gov | Curated Medical Information | Needs web scraping | Referential Data | 10 | 9 |
| CDC | cdc.gov | Curated Medical Information | Needs web scraping | Referential Data | 10 | 9 |
| National Institutes of Health | nih.gov | Curated Medical Information | Needs web scraping | Referential Data | 10 | 9 |
| WHO | who.int | Curated Medical Information | Needs web scraping | Referential Data | 10 | 9 |
| Mayo Clinic | mayoclinic.org | Curated Medical Information | Needs web scraping | Referential Data | 10 | 9 |
| Merck Manual | merckmanuals.com/professional | Curated Medical Information | Web scraping unavailable | Referential Data | 10 | 9 |
| Open Medical Terms | gamino/wiki_medical_terms | Explanation of medical terms | Easily accessible | Referential Data | 8 | 4 |
| MedQA | medalpaca/medical_meadow_medqa | Medical question answering (USMLE) | Easily accessible | Referential Data | 10 | 4 |
| USMLE SA | medalpaca/medical_meadow_usmle_self_assessment | USMLE self-assessment questions and answers | Easily accessible | Referential Data | 10 | 5 |
| PubMed Health Advice | medalpaca/medical_meadow_health_advice | Extracted data from Pubmed articles | Easily accessible | Referential Data | 8 | 6 |
| Wikidoc explanations | medalpaca/medical_meadow_wikidoc | Explanation of medical conditions and procedures | Easily accessible | Referential Data | 10 | 6 |
| Medical Flashcard | medalpaca/medical_meadow_medical_flashcards | General Healthcare questions and answers | Easily accessible | Referential Data | 8 | 6 |
| Pubmed Causal | medalpaca/medical_meadow_pubmed_causal | Causality between health events | Easily accessible | Referential Data | 9 | 7 |
| Medical Questions DB | fhirfly/medicalquestions | A dataset containing general health questions without answers | Easily accessible | Referential Data | 8 | 9 |
## Evaluation
The model evaluation is based on its consistency and accuracy in responding to healthcare inquiries.
#### - Custom QA dataset (20 questions)
(1-shot)
| Model Name | Correct Answers (%) | Total Questions |
| :-------------------- | :------------------- | :-------------- |
| GPT-4 | 18 (90%) | 20 |
| *Asclepius* | 16 (80%) | 20 |
| Llama-2 | 15 (70%) | 20 |
| Falcon-40b | 11 (55%) | 20 |
| Others (GPT-J, T5, Graph-based) | < 8 | 20 |
#### - MMLU (Professional Medicine + Clinical Knowledge + College Medicine)
(5-shot)
| Model Name | Score |
| :-------------------- | :------------------- |
| GPT-4 | 88% |
| MedPalm-2 | 88% |
| *Asclepius* | 67.2% |
| GPT-3.5 | 67.2% |
| Llama-2 | 66.8% |
#### - MedQA-USMLE (1200 USMLE-style questions)
(5-shot)
| Model Name | Score |
| :-------------------- | :------------------- |
| GPT-4 | 86.1% |
| MedPalm-2 | 79.7% |
| *Asclepius* | 60.1% |
| Llama-2 | 58.9% |
| GPT-3.5 | 53.5% |
## Limitations
Despite its impressive capabilities in answering medical questions, Asclepius occasionally errs, particularly on clinical-case questions. Consequently, output from the model should not be used without the supervision of a professional healthcare practitioner.
As a tool for physicians, it has immense potential for suggesting diagnoses and prescriptions, as well as for reviewing notes. Nevertheless, the potential for harm to patients from erroneous output needs consideration, and outcome studies should be undertaken before clinical use.
This is in agreement with the FDA's recommendations on the use of AI/ML-based Software as a Medical Device (https://www.fda.gov/medical-devices/software-medical-device-samd/artificial-intelligence-and-machine-learning-software-medical-device).
## License
Asclepius has been developed by Leonardo Canela Almeida, an independent contractor at Phire Health LLC. All rights are reserved by Phire Health LLC.
|
michael-daios/falcon-7b-800
|
michael-daios
| 2023-08-21T23:20:14Z | 0 | 0 | null |
[
"text-generation",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2023-08-12T19:32:31Z |
---
license: apache-2.0
pipeline_tag: text-generation
---
|
aant/my-car
|
aant
| 2023-08-21T23:15:13Z | 0 | 0 | null |
[
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-08-21T23:11:51Z |
---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### My-car Dreambooth model trained by aant following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: GoX19932gAS
Sample pictures of this concept:

|
PAIXAI/Astrid-1B-CPU
|
PAIXAI
| 2023-08-21T23:12:05Z | 181 | 25 |
transformers
|
[
"transformers",
"pytorch",
"gpt_neox",
"text-generation",
"gpt",
"llm",
"large language model",
"PAIX.Cloud",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-10T03:48:59Z |
---
language:
- en
library_name: transformers
tags:
- gpt
- llm
- large language model
- PAIX.Cloud
inference: true
thumbnail: https://static.wixstatic.com/media/bdee4e_8aa5cefc86024bc88f7e20e3e19d9ff3~mv2.png/v1/fill/w_192%2Ch_192%2Clg_1%2Cusm_0.66_1.00_0.01/bdee4e_8aa5cefc86024bc88f7e20e3e19d9ff3~mv2.png
---
# Model Card
## Summary
This model, Astrid-1B-CPU, is a GPT-NeoX model for causal language modeling, designed to generate human-like text.
It's part of our mission to make AI technology accessible to everyone, focusing on personalization, data privacy, and transparent AI governance.
Trained in English, it's a versatile tool for a variety of applications.
This model is one of the many models available on our platform; we currently offer 1B and 7B open-source models.
This model was trained by [PAIX.Cloud](https://www.paix.cloud/).
- Wait list: [Wait List](https://www.paix.cloud/join-waitlist)
## Usage
To use the model with the `transformers` library on a machine with GPUs, first make sure you have the `transformers`, `accelerate` and `torch` libraries installed.
```bash
pip install transformers==4.30.1
pip install accelerate==0.20.3
pip install torch==2.0.0
```
```python
import torch
from transformers import pipeline
generate_text = pipeline(
model="PAIXAI/Astrid-1B-CPU",
torch_dtype="auto",
trust_remote_code=True,
use_fast=True,
device_map={"": "cuda:0"},
)
res = generate_text(
"Why is drinking water so healthy?",
min_new_tokens=2,
max_new_tokens=256,
do_sample=False,
num_beams=1,
temperature=float(0.3),
repetition_penalty=float(1.2),
renormalize_logits=True
)
print(res[0]["generated_text"])
```
You can print a sample prompt after the preprocessing step to see how it is fed to the tokenizer:
```python
print(generate_text.preprocess("Why is drinking water so healthy?")["prompt_text"])
```
```bash
<|prompt|>Why is drinking water so healthy?<|endoftext|><|answer|>
```
Alternatively, you can download [h2oai_pipeline.py](h2oai_pipeline.py), store it alongside your notebook, and construct the pipeline yourself from the loaded model and tokenizer. If the model and the tokenizer are fully supported in the `transformers` package, this will allow you to set `trust_remote_code=False`.
```python
import torch
from h2oai_pipeline import H2OTextGenerationPipeline
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained(
"PAIXAI/Astrid-1B-CPU",
use_fast=True,
padding_side="left",
trust_remote_code=True,
)
model = AutoModelForCausalLM.from_pretrained(
"PAIXAI/Astrid-1B-CPU",
torch_dtype="auto",
device_map={"": "cuda:0"},
trust_remote_code=True,
)
generate_text = H2OTextGenerationPipeline(model=model, tokenizer=tokenizer)
res = generate_text(
"Why is drinking water so healthy?",
min_new_tokens=2,
max_new_tokens=256,
do_sample=False,
num_beams=1,
temperature=float(0.3),
repetition_penalty=float(1.2),
renormalize_logits=True
)
print(res[0]["generated_text"])
```
You may also construct the pipeline from the loaded model and tokenizer yourself and consider the preprocessing steps:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "PAIXAI/Astrid-1B-CPU" # either local folder or huggingface model name
# Important: The prompt needs to be in the same format the model was trained with.
# You can find an example prompt in the experiment logs.
prompt = "<|prompt|>How are you?<|endoftext|><|answer|>"
tokenizer = AutoTokenizer.from_pretrained(
model_name,
use_fast=True,
trust_remote_code=True,
)
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map={"": "cuda:0"},
trust_remote_code=True,
)
model.cuda().eval()
inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).to("cuda")
# generate configuration can be modified to your needs
tokens = model.generate(
**inputs,
min_new_tokens=2,
max_new_tokens=256,
do_sample=False,
num_beams=1,
temperature=float(0.3),
repetition_penalty=float(1.2),
renormalize_logits=True
)[0]
tokens = tokens[inputs["input_ids"].shape[1]:]
answer = tokenizer.decode(tokens, skip_special_tokens=True)
print(answer)
```
## Model Architecture
```
GPTNeoXForCausalLM(
(gpt_neox): GPTNeoXModel(
(embed_in): Embedding(50304, 2048)
(layers): ModuleList(
(0-15): 16 x GPTNeoXLayer(
(input_layernorm): LayerNorm((2048,), eps=1e-05, elementwise_affine=True)
(post_attention_layernorm): LayerNorm((2048,), eps=1e-05, elementwise_affine=True)
(attention): GPTNeoXAttention(
(rotary_emb): RotaryEmbedding()
(query_key_value): Linear(in_features=2048, out_features=6144, bias=True)
(dense): Linear(in_features=2048, out_features=2048, bias=True)
)
(mlp): GPTNeoXMLP(
(dense_h_to_4h): Linear(in_features=2048, out_features=8192, bias=True)
(dense_4h_to_h): Linear(in_features=8192, out_features=2048, bias=True)
(act): GELUActivation()
)
)
)
(final_layer_norm): LayerNorm((2048,), eps=1e-05, elementwise_affine=True)
)
(embed_out): Linear(in_features=2048, out_features=50304, bias=False)
)
```
## Model Configuration
This model was trained using H2O LLM Studio and with the configuration in [cfg.yaml](cfg.yaml). Visit [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio) to learn how to train your own large language models.
## Model Validation
Model validation results using [EleutherAI lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness).
```bash
CUDA_VISIBLE_DEVICES=0 python main.py --model hf-causal-experimental --model_args pretrained=PAIXAI/Astrid-1B-CPU --tasks openbookqa,arc_easy,winogrande,hellaswag,arc_challenge,piqa,boolq --device cuda &> eval.log
```
## Disclaimer
Please read this disclaimer carefully before using the large language model provided in this repository. Your use of the model signifies your agreement to the following terms and conditions.
- Biases and Offensiveness: The large language model is trained on a diverse range of internet text data, which may contain biased, racist, offensive, or otherwise inappropriate content. By using this model, you acknowledge and accept that the generated content may sometimes exhibit biases or produce content that is offensive or inappropriate. The developers of this repository do not endorse, support, or promote any such content or viewpoints.
- Limitations: The large language model is an AI-based tool and not a human. It may produce incorrect, nonsensical, or irrelevant responses. It is the user's responsibility to critically evaluate the generated content and use it at their discretion.
- Use at Your Own Risk: Users of this large language model must assume full responsibility for any consequences that may arise from their use of the tool. The developers and contributors of this repository shall not be held liable for any damages, losses, or harm resulting from the use or misuse of the provided model.
- Ethical Considerations: Users are encouraged to use the large language model responsibly and ethically. By using this model, you agree not to use it for purposes that promote hate speech, discrimination, harassment, or any form of illegal or harmful activities.
- Reporting Issues: If you encounter any biased, offensive, or otherwise inappropriate content generated by the large language model, please report it to the repository maintainers through the provided channels. Your feedback will help improve the model and mitigate potential issues.
- Changes to this Disclaimer: The developers of this repository reserve the right to modify or update this disclaimer at any time without prior notice. It is the user's responsibility to periodically review the disclaimer to stay informed about any changes.
By using the large language model provided in this repository, you agree to accept and comply with the terms and conditions outlined in this disclaimer. If you do not agree with any part of this disclaimer, you should refrain from using the model and any content generated by it.
|
rishabh063/lora-trained-xl-car
|
rishabh063
| 2023-08-21T23:10:41Z | 1 | 1 |
diffusers
|
[
"diffusers",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] |
text-to-image
| 2023-08-21T22:27:22Z |
---
license: openrail++
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of sks car
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - rishabh063/lora-trained-xl-car
These are LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained on a photo of sks car using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
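A minimal inference sketch (assuming a recent `diffusers` release with `load_lora_weights`; the prompt and settings are illustrative):
```python
import torch
from diffusers import AutoencoderKL, DiffusionPipeline

# Base SDXL pipeline with the fp16-fix VAE mentioned above, plus these LoRA weights
vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", vae=vae, torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("rishabh063/lora-trained-xl-car")
image = pipe("a photo of sks car on a mountain road", num_inference_steps=30).images[0]
image.save("sks_car.png")
```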
|
tobin2003/stuart-little
|
tobin2003
| 2023-08-21T23:06:16Z | 6 | 1 |
diffusers
|
[
"diffusers",
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-08-21T23:01:20Z |
---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### Stuart-Little Dreambooth model trained by tobin2003 following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: SJCET-200
Sample pictures of this concept:
.jpg)
|
MichaelYangCA/ppo-LunarLander-v2
|
MichaelYangCA
| 2023-08-21T23:03:04Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-21T23:02:31Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -133.08 +/- 58.68
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption based on the usual `huggingface_sb3` naming):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Checkpoint filename is assumed; adjust if the repo uses a different name
checkpoint = load_from_hub("MichaelYangCA/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
Abhishek2003/my-pet-dog-qaz
|
Abhishek2003
| 2023-08-21T23:01:58Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-08-21T22:58:03Z |
---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### My-Pet-Dog-qaz Dreambooth model trained by Abhishek2003 following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: SJCET-191
Sample pictures of this concept:
.jpg)
|
PAIXAI/Astrid-1B
|
PAIXAI
| 2023-08-21T22:54:02Z | 142 | 24 |
transformers
|
[
"transformers",
"pytorch",
"gpt_neox",
"text-generation",
"gpt",
"llm",
"large language model",
"PAIX.Cloud",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-10T03:24:42Z |
---
language:
- en
library_name: transformers
tags:
- gpt
- llm
- large language model
- PAIX.Cloud
inference: true
thumbnail: https://static.wixstatic.com/media/bdee4e_8aa5cefc86024bc88f7e20e3e19d9ff3~mv2.png/v1/fill/w_192%2Ch_192%2Clg_1%2Cusm_0.66_1.00_0.01/bdee4e_8aa5cefc86024bc88f7e20e3e19d9ff3~mv2.png
---
# Model Card
## Summary
This model, Astrid-1B, is a GPT-NeoX model for causal language modeling, designed to generate human-like text.
It's part of our mission to make AI technology accessible to everyone, focusing on personalization, data privacy, and transparent AI governance.
Trained in English, it's a versatile tool for a variety of applications.
This model is one of the many models available on our platform; we currently offer 1B and 7B open-source models.
This model was trained by [PAIX.Cloud](https://www.paix.cloud/).
- Wait list: [Wait List](https://www.paix.cloud/join-waitlist)
## Usage
To use the model with the `transformers` library on a machine with GPUs, first make sure you have the `transformers`, `accelerate` and `torch` libraries installed.
```bash
pip install transformers==4.30.1
pip install accelerate==0.20.3
pip install torch==2.0.0
```
```python
import torch
from transformers import pipeline
generate_text = pipeline(
model="PAIXAI/Astrid-1B",
torch_dtype="auto",
trust_remote_code=True,
use_fast=True,
device_map={"": "cuda:0"},
)
res = generate_text(
"Why is drinking water so healthy?",
min_new_tokens=2,
max_new_tokens=256,
do_sample=False,
num_beams=1,
temperature=float(0.3),
repetition_penalty=float(1.2),
renormalize_logits=True
)
print(res[0]["generated_text"])
```
You can print a sample prompt after the preprocessing step to see how it is fed to the tokenizer:
```python
print(generate_text.preprocess("Why is drinking water so healthy?")["prompt_text"])
```
```bash
<|prompt|>Why is drinking water so healthy?<|endoftext|><|answer|>
```
Alternatively, you can download [h2oai_pipeline.py](h2oai_pipeline.py), store it alongside your notebook, and construct the pipeline yourself from the loaded model and tokenizer. If the model and the tokenizer are fully supported in the `transformers` package, this will allow you to set `trust_remote_code=False`.
```python
import torch
from h2oai_pipeline import H2OTextGenerationPipeline
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained(
"PAIXAI/Astrid-1B",
use_fast=True,
padding_side="left",
trust_remote_code=True,
)
model = AutoModelForCausalLM.from_pretrained(
"PAIXAI/Astrid-1B",
torch_dtype="auto",
device_map={"": "cuda:0"},
trust_remote_code=True,
)
generate_text = H2OTextGenerationPipeline(model=model, tokenizer=tokenizer)
res = generate_text(
"Why is drinking water so healthy?",
min_new_tokens=2,
max_new_tokens=256,
do_sample=False,
num_beams=1,
temperature=float(0.3),
repetition_penalty=float(1.2),
renormalize_logits=True
)
print(res[0]["generated_text"])
```
You may also construct the pipeline from the loaded model and tokenizer yourself and consider the preprocessing steps:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "PAIXAI/Astrid-1B" # either local folder or huggingface model name
# Important: The prompt needs to be in the same format the model was trained with.
# You can find an example prompt in the experiment logs.
prompt = "<|prompt|>How are you?<|endoftext|><|answer|>"
tokenizer = AutoTokenizer.from_pretrained(
model_name,
use_fast=True,
trust_remote_code=True,
)
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map={"": "cuda:0"},
trust_remote_code=True,
)
model.cuda().eval()
inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).to("cuda")
# generate configuration can be modified to your needs
tokens = model.generate(
**inputs,
min_new_tokens=2,
max_new_tokens=256,
do_sample=False,
num_beams=1,
temperature=float(0.3),
repetition_penalty=float(1.2),
renormalize_logits=True
)[0]
tokens = tokens[inputs["input_ids"].shape[1]:]
answer = tokenizer.decode(tokens, skip_special_tokens=True)
print(answer)
```
## Model Architecture
```
GPTNeoXForCausalLM(
(gpt_neox): GPTNeoXModel(
(embed_in): Embedding(50304, 2048)
(layers): ModuleList(
(0-15): 16 x GPTNeoXLayer(
(input_layernorm): LayerNorm((2048,), eps=1e-05, elementwise_affine=True)
(post_attention_layernorm): LayerNorm((2048,), eps=1e-05, elementwise_affine=True)
(attention): GPTNeoXAttention(
(rotary_emb): RotaryEmbedding()
(query_key_value): Linear(in_features=2048, out_features=6144, bias=True)
(dense): Linear(in_features=2048, out_features=2048, bias=True)
)
(mlp): GPTNeoXMLP(
(dense_h_to_4h): Linear(in_features=2048, out_features=8192, bias=True)
(dense_4h_to_h): Linear(in_features=8192, out_features=2048, bias=True)
(act): GELUActivation()
)
)
)
(final_layer_norm): LayerNorm((2048,), eps=1e-05, elementwise_affine=True)
)
(embed_out): Linear(in_features=2048, out_features=50304, bias=False)
)
```
## Model Configuration
This model was trained using H2O LLM Studio and with the configuration in [cfg.yaml](cfg.yaml). Visit [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio) to learn how to train your own large language models.
## Model Validation
Model validation results using [EleutherAI lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness).
```bash
CUDA_VISIBLE_DEVICES=0 python main.py --model hf-causal-experimental --model_args pretrained=PAIXAI/Astrid-1B --tasks openbookqa,arc_easy,winogrande,hellaswag,arc_challenge,piqa,boolq --device cuda &> eval.log
```
## Disclaimer
Please read this disclaimer carefully before using the large language model provided in this repository. Your use of the model signifies your agreement to the following terms and conditions.
- Biases and Offensiveness: The large language model is trained on a diverse range of internet text data, which may contain biased, racist, offensive, or otherwise inappropriate content. By using this model, you acknowledge and accept that the generated content may sometimes exhibit biases or produce content that is offensive or inappropriate. The developers of this repository do not endorse, support, or promote any such content or viewpoints.
- Limitations: The large language model is an AI-based tool and not a human. It may produce incorrect, nonsensical, or irrelevant responses. It is the user's responsibility to critically evaluate the generated content and use it at their discretion.
- Use at Your Own Risk: Users of this large language model must assume full responsibility for any consequences that may arise from their use of the tool. The developers and contributors of this repository shall not be held liable for any damages, losses, or harm resulting from the use or misuse of the provided model.
- Ethical Considerations: Users are encouraged to use the large language model responsibly and ethically. By using this model, you agree not to use it for purposes that promote hate speech, discrimination, harassment, or any form of illegal or harmful activities.
- Reporting Issues: If you encounter any biased, offensive, or otherwise inappropriate content generated by the large language model, please report it to the repository maintainers through the provided channels. Your feedback will help improve the model and mitigate potential issues.
- Changes to this Disclaimer: The developers of this repository reserve the right to modify or update this disclaimer at any time without prior notice. It is the user's responsibility to periodically review the disclaimer to stay informed about any changes.
By using the large language model provided in this repository, you agree to accept and comply with the terms and conditions outlined in this disclaimer. If you do not agree with any part of this disclaimer, you should refrain from using the model and any content generated by it.
|
gmshuler95/ppo-LunarLander-v2
|
gmshuler95
| 2023-08-21T21:50:16Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-21T21:35:05Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 272.28 +/- 16.36
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption based on the usual `huggingface_sb3` naming):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Checkpoint filename is assumed; adjust if the repo uses a different name
checkpoint = load_from_hub("gmshuler95/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
thiagoms7/whisper-small-pt
|
thiagoms7
| 2023-08-21T21:49:26Z | 75 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"pt",
"dataset:mozilla-foundation/common_voice_13_0",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-08-18T04:03:38Z |
---
language:
- pt
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_13_0
metrics:
- wer
model-index:
- name: Whisper Small pt - thiagoms
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 13
type: mozilla-foundation/common_voice_13_0
config: pt
split: test
args: pt
metrics:
- name: Wer
type: wer
value: 302.8603818223639
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small pt - thiagoms
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 13 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2549
- Wer Ortho: 266.0002
- Wer: 302.8604
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 50
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:--------:|
| 0.2453 | 0.28 | 500 | 0.2549 | 266.0002 | 302.8604 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.0+cu117
- Datasets 2.14.4
- Tokenizers 0.13.3
|
AdirK/a2c-PandaReachDense-v3
|
AdirK
| 2023-08-21T21:39:39Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-21T21:34:00Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v3
type: PandaReachDense-v3
metrics:
- type: mean_reward
value: -0.19 +/- 0.10
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v3**
This is a trained model of an **A2C** agent playing **PandaReachDense-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption based on the usual `huggingface_sb3` naming):
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# Checkpoint filename is assumed; adjust if the repo uses a different name
checkpoint = load_from_hub("AdirK/a2c-PandaReachDense-v3", "a2c-PandaReachDense-v3.zip")
model = A2C.load(checkpoint)
```
|
mccoole/ludwig-webinar
|
mccoole
| 2023-08-21T21:37:53Z | 3 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-21T21:37:52Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.4.0
|
DeepaPeri/doctor
|
DeepaPeri
| 2023-08-21T21:24:23Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-21T21:22:10Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training (a loading sketch follows the list):
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
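A loading sketch using the equivalent `BitsAndBytesConfig` (the base model is not stated in this card, so the model id below is a placeholder):
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# 4-bit NF4 configuration matching the values listed above
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)
base = AutoModelForCausalLM.from_pretrained(
    "<base-model-id>",  # placeholder: the base model this adapter was trained on
    quantization_config=bnb_config,
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "DeepaPeri/doctor")
```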
### Framework versions
- PEFT 0.4.0
|
shahukareem/distilhubert-finetuned-gtzan
|
shahukareem
| 2023-08-21T21:23:22Z | 161 | 0 |
transformers
|
[
"transformers",
"pytorch",
"hubert",
"audio-classification",
"generated_from_trainer",
"dataset:marsyas/gtzan",
"base_model:ntu-spml/distilhubert",
"base_model:finetune:ntu-spml/distilhubert",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2023-08-21T19:40:41Z |
---
license: apache-2.0
base_model: ntu-spml/distilhubert
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
metrics:
- accuracy
model-index:
- name: distilhubert-finetuned-gtzan
results:
- task:
name: Audio Classification
type: audio-classification
dataset:
name: GTZAN
type: marsyas/gtzan
config: all
split: train
args: all
metrics:
- name: Accuracy
type: accuracy
value: 0.82
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilhubert-finetuned-gtzan
This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the GTZAN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5865
- Accuracy: 0.82
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.0044 | 1.0 | 113 | 1.7867 | 0.59 |
| 1.297 | 2.0 | 226 | 1.2304 | 0.61 |
| 1.0292 | 3.0 | 339 | 0.9026 | 0.76 |
| 0.7938 | 4.0 | 452 | 0.8488 | 0.71 |
| 0.6081 | 5.0 | 565 | 0.6756 | 0.81 |
| 0.4367 | 6.0 | 678 | 0.6714 | 0.78 |
| 0.4993 | 7.0 | 791 | 0.6104 | 0.79 |
| 0.2011 | 8.0 | 904 | 0.5946 | 0.79 |
| 0.2715 | 9.0 | 1017 | 0.5696 | 0.8 |
| 0.1691 | 10.0 | 1130 | 0.5865 | 0.82 |
### Framework versions
- Transformers 4.33.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
daochf/LudwigLlama2-PuceDS-v01
|
daochf
| 2023-08-21T21:19:07Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-21T21:19:01Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.4.0
|
Msrisrujan/xlm-roberta-base-finetuned-panx-fr
|
Msrisrujan
| 2023-08-21T21:18:03Z | 129 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-08-21T19:21:31Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-fr
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
config: PAN-X.fr
split: validation
args: PAN-X.fr
metrics:
- name: F1
type: f1
value: 0.8088309081786251
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0693
- F1: 0.8088
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.1717 | 1.0 | 191 | 0.0918 | 0.7209 |
| 0.0785 | 2.0 | 382 | 0.0725 | 0.7850 |
| 0.0609 | 3.0 | 573 | 0.0693 | 0.8088 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1
- Datasets 2.14.4
- Tokenizers 0.13.3
|
Msrisrujan/xlm-roberta-base-finetuned-panx-de-fr
|
Msrisrujan
| 2023-08-21T21:04:30Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-08-21T19:04:51Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0479
- F1: 0.7951
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.1599 | 1.0 | 715 | 0.1363 | 0.4733 |
| 0.085 | 2.0 | 1430 | 0.0582 | 0.7456 |
| 0.0529 | 3.0 | 2145 | 0.0479 | 0.7951 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1
- Datasets 2.14.4
- Tokenizers 0.13.3
|
baxterstockman/my_awesome_eli5_clm-model
|
baxterstockman
| 2023-08-21T20:57:46Z | 202 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:distilbert/distilgpt2",
"base_model:finetune:distilbert/distilgpt2",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-08-21T18:13:45Z |
---
license: apache-2.0
base_model: distilgpt2
tags:
- generated_from_trainer
model-index:
- name: my_awesome_eli5_clm-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_eli5_clm-model
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.7179
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.8626 | 1.0 | 1145 | 3.7365 |
| 3.7894 | 2.0 | 2290 | 3.7213 |
| 3.7363 | 3.0 | 3435 | 3.7179 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cpu
- Datasets 2.14.4
- Tokenizers 0.13.3
|
bigmorning/whisper_train_on_validated_cv_model__0035
|
bigmorning
| 2023-08-21T20:16:54Z | 59 | 0 |
transformers
|
[
"transformers",
"tf",
"whisper",
"automatic-speech-recognition",
"generated_from_keras_callback",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-08-21T20:16:45Z |
---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_keras_callback
model-index:
- name: whisper_train_on_validated_cv_model__0035
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_train_on_validated_cv_model__0035
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0089
- Train Accuracy: 0.0821
- Train Wermet: 5.1607
- Validation Loss: 0.5637
- Validation Accuracy: 0.0725
- Validation Wermet: 7.3612
- Epoch: 34
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| 2.2992 | 0.0331 | 6.3187 | 1.9677 | 0.0372 | 7.2782 | 0 |
| 1.7447 | 0.0431 | 5.6345 | 1.7865 | 0.0408 | 6.3969 | 1 |
| 1.5036 | 0.0480 | 5.6512 | 1.4821 | 0.0468 | 5.1810 | 2 |
| 1.0955 | 0.0567 | 4.4606 | 1.0492 | 0.0557 | 3.7886 | 3 |
| 0.7076 | 0.0654 | 4.9157 | 0.7312 | 0.0627 | 4.2416 | 4 |
| 0.4703 | 0.0710 | 4.9749 | 0.5668 | 0.0664 | 4.8883 | 5 |
| 0.3552 | 0.0737 | 5.1823 | 0.4725 | 0.0685 | 4.8141 | 6 |
| 0.2865 | 0.0754 | 4.1024 | 0.4358 | 0.0694 | 3.8550 | 7 |
| 0.2380 | 0.0765 | 3.3689 | 0.3947 | 0.0704 | 2.2953 | 8 |
| 0.1998 | 0.0775 | 2.6219 | 0.3848 | 0.0707 | 3.0529 | 9 |
| 0.1687 | 0.0782 | 2.2111 | 0.3689 | 0.0711 | 1.8146 | 10 |
| 0.1417 | 0.0789 | 2.5284 | 0.3709 | 0.0713 | 1.9439 | 11 |
| 0.1190 | 0.0795 | 2.8048 | 0.3631 | 0.0716 | 3.0845 | 12 |
| 0.0983 | 0.0800 | 3.2668 | 0.3657 | 0.0717 | 3.6423 | 13 |
| 0.0815 | 0.0804 | 3.8567 | 0.3806 | 0.0717 | 6.1506 | 14 |
| 0.0659 | 0.0808 | 5.0931 | 0.3920 | 0.0718 | 6.4431 | 15 |
| 0.0520 | 0.0812 | 5.6397 | 0.3935 | 0.0720 | 5.1514 | 16 |
| 0.0409 | 0.0814 | 5.7797 | 0.4147 | 0.0720 | 4.2822 | 17 |
| 0.0330 | 0.0816 | 5.1017 | 0.4354 | 0.0719 | 6.2876 | 18 |
| 0.0257 | 0.0818 | 6.1581 | 0.4476 | 0.0720 | 7.0531 | 19 |
| 0.0212 | 0.0819 | 6.3234 | 0.4647 | 0.0720 | 7.4961 | 20 |
| 0.0183 | 0.0820 | 5.8886 | 0.4744 | 0.0721 | 5.6633 | 21 |
| 0.0141 | 0.0821 | 6.0894 | 0.5076 | 0.0718 | 5.6186 | 22 |
| 0.0130 | 0.0821 | 5.8770 | 0.5010 | 0.0721 | 6.2209 | 23 |
| 0.0123 | 0.0821 | 5.7417 | 0.5214 | 0.0720 | 6.8845 | 24 |
| 0.0115 | 0.0821 | 5.7680 | 0.5333 | 0.0720 | 7.2049 | 25 |
| 0.0091 | 0.0822 | 5.3959 | 0.5272 | 0.0723 | 4.1630 | 26 |
| 0.0097 | 0.0821 | 5.0201 | 0.5545 | 0.0720 | 5.1619 | 27 |
| 0.0100 | 0.0821 | 5.2278 | 0.5328 | 0.0724 | 5.7914 | 28 |
| 0.0069 | 0.0822 | 4.9319 | 0.5432 | 0.0723 | 3.8214 | 29 |
| 0.0083 | 0.0822 | 4.4749 | 0.5610 | 0.0722 | 3.6943 | 30 |
| 0.0075 | 0.0822 | 4.8208 | 0.5609 | 0.0724 | 5.1153 | 31 |
| 0.0066 | 0.0822 | 4.0023 | 0.5662 | 0.0724 | 3.1397 | 32 |
| 0.0067 | 0.0822 | 4.3423 | 0.5831 | 0.0723 | 5.5127 | 33 |
| 0.0089 | 0.0821 | 5.1607 | 0.5637 | 0.0725 | 7.3612 | 34 |
### Framework versions
- Transformers 4.32.0.dev0
- TensorFlow 2.12.0
- Tokenizers 0.13.3
|
s0md3v/nudity-checker
|
s0md3v
| 2023-08-21T20:15:07Z | 0 | 12 | null |
[
"onnx",
"region:us"
] | null | 2023-06-18T10:48:26Z |
For more information, visit the GitHub repository:
https://github.com/s0md3v/ifnude
|
cartesinus/iva_mt_wslot-m2m100_418M-en-es-plaintext_10e
|
cartesinus
| 2023-08-21T20:05:14Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"m2m_100",
"text2text-generation",
"generated_from_trainer",
"dataset:iva_mt_wslot",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-08-21T16:40:37Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- iva_mt_wslot
metrics:
- bleu
model-index:
- name: iva_mt_wslot-m2m100_418M-en-es-plaintext_10e
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: iva_mt_wslot
type: iva_mt_wslot
config: en-es
split: validation
args: en-es
metrics:
- name: Bleu
type: bleu
value: 51.1501
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# iva_mt_wslot-m2m100_418M-en-es-plaintext_10e
This model is a fine-tuned version of [facebook/m2m100_418M](https://huggingface.co/facebook/m2m100_418M) on the iva_mt_wslot dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0116
- Bleu: 51.1501
- Gen Len: 12.6861
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|
| 0.012 | 1.0 | 2104 | 0.0109 | 47.9124 | 12.7523 |
| 0.0079 | 2.0 | 4208 | 0.0101 | 49.9897 | 12.6763 |
| 0.0059 | 3.0 | 6312 | 0.0101 | 50.5286 | 12.6435 |
| 0.0045 | 4.0 | 8416 | 0.0101 | 49.6821 | 12.5472 |
| 0.0033 | 5.0 | 10520 | 0.0104 | 50.3856 | 12.6638 |
| 0.0024 | 6.0 | 12624 | 0.0107 | 50.359 | 12.7418 |
| 0.0019 | 7.0 | 14728 | 0.0111 | 50.8234 | 12.709 |
| 0.0014 | 8.0 | 16832 | 0.0111 | 50.872 | 12.6671 |
| 0.0011 | 9.0 | 18936 | 0.0114 | 51.3014 | 12.6291 |
| 0.001 | 10.0 | 21040 | 0.0116 | 51.1501 | 12.6861 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
felixb85/ppo-LunarLander-v2
|
felixb85
| 2023-08-21T19:52:59Z | 0 | 0 | null |
[
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-21T19:45:05Z |
---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 57.03 +/- 121.77
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo',
 'gym_id': 'LunarLander-v2',
 'learning_rate': 0.00025,
 'seed': 1,
 'total_timesteps': 1000000,
 'torch_deterministic': True,
 'cuda': True,
 'capture_video': False,
 'num_envs': 4,
 'num_steps': 128,
 'batch_size': 512,
 'anneal_lr': True,
 'gae': True,
 'gamma': 0.99,
 'gae_lambda': 0.95,
 'num_minibatches': 4,
 'update_epochs': 4,
 'normalize_advantages': True,
 'clip_coefficient': 0.2,
 'clip_value_loss': True,
 'entropy_coefficient': 0.01,
 'vf_coefficient': 0.5,
 'max_gradient_norm': 0.5,
 'repo_id': 'felixb85/ppo-LunarLander-v2',
 'minibatch_size': 128,
 'env_id': 'LunarLander-v2'}
```
|
bigmorning/whisper_train_on_validated_cv_model__0030
|
bigmorning
| 2023-08-21T19:43:38Z | 59 | 0 |
transformers
|
[
"transformers",
"tf",
"whisper",
"automatic-speech-recognition",
"generated_from_keras_callback",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-08-21T19:43:31Z |
---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_keras_callback
model-index:
- name: whisper_train_on_validated_cv_model__0030
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_train_on_validated_cv_model__0030
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0069
- Train Accuracy: 0.0822
- Train Wermet: 4.9319
- Validation Loss: 0.5432
- Validation Accuracy: 0.0723
- Validation Wermet: 3.8214
- Epoch: 29
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| 2.2992 | 0.0331 | 6.3187 | 1.9677 | 0.0372 | 7.2782 | 0 |
| 1.7447 | 0.0431 | 5.6345 | 1.7865 | 0.0408 | 6.3969 | 1 |
| 1.5036 | 0.0480 | 5.6512 | 1.4821 | 0.0468 | 5.1810 | 2 |
| 1.0955 | 0.0567 | 4.4606 | 1.0492 | 0.0557 | 3.7886 | 3 |
| 0.7076 | 0.0654 | 4.9157 | 0.7312 | 0.0627 | 4.2416 | 4 |
| 0.4703 | 0.0710 | 4.9749 | 0.5668 | 0.0664 | 4.8883 | 5 |
| 0.3552 | 0.0737 | 5.1823 | 0.4725 | 0.0685 | 4.8141 | 6 |
| 0.2865 | 0.0754 | 4.1024 | 0.4358 | 0.0694 | 3.8550 | 7 |
| 0.2380 | 0.0765 | 3.3689 | 0.3947 | 0.0704 | 2.2953 | 8 |
| 0.1998 | 0.0775 | 2.6219 | 0.3848 | 0.0707 | 3.0529 | 9 |
| 0.1687 | 0.0782 | 2.2111 | 0.3689 | 0.0711 | 1.8146 | 10 |
| 0.1417 | 0.0789 | 2.5284 | 0.3709 | 0.0713 | 1.9439 | 11 |
| 0.1190 | 0.0795 | 2.8048 | 0.3631 | 0.0716 | 3.0845 | 12 |
| 0.0983 | 0.0800 | 3.2668 | 0.3657 | 0.0717 | 3.6423 | 13 |
| 0.0815 | 0.0804 | 3.8567 | 0.3806 | 0.0717 | 6.1506 | 14 |
| 0.0659 | 0.0808 | 5.0931 | 0.3920 | 0.0718 | 6.4431 | 15 |
| 0.0520 | 0.0812 | 5.6397 | 0.3935 | 0.0720 | 5.1514 | 16 |
| 0.0409 | 0.0814 | 5.7797 | 0.4147 | 0.0720 | 4.2822 | 17 |
| 0.0330 | 0.0816 | 5.1017 | 0.4354 | 0.0719 | 6.2876 | 18 |
| 0.0257 | 0.0818 | 6.1581 | 0.4476 | 0.0720 | 7.0531 | 19 |
| 0.0212 | 0.0819 | 6.3234 | 0.4647 | 0.0720 | 7.4961 | 20 |
| 0.0183 | 0.0820 | 5.8886 | 0.4744 | 0.0721 | 5.6633 | 21 |
| 0.0141 | 0.0821 | 6.0894 | 0.5076 | 0.0718 | 5.6186 | 22 |
| 0.0130 | 0.0821 | 5.8770 | 0.5010 | 0.0721 | 6.2209 | 23 |
| 0.0123 | 0.0821 | 5.7417 | 0.5214 | 0.0720 | 6.8845 | 24 |
| 0.0115 | 0.0821 | 5.7680 | 0.5333 | 0.0720 | 7.2049 | 25 |
| 0.0091 | 0.0822 | 5.3959 | 0.5272 | 0.0723 | 4.1630 | 26 |
| 0.0097 | 0.0821 | 5.0201 | 0.5545 | 0.0720 | 5.1619 | 27 |
| 0.0100 | 0.0821 | 5.2278 | 0.5328 | 0.0724 | 5.7914 | 28 |
| 0.0069 | 0.0822 | 4.9319 | 0.5432 | 0.0723 | 3.8214 | 29 |
### Framework versions
- Transformers 4.32.0.dev0
- TensorFlow 2.12.0
- Tokenizers 0.13.3
|
aymanosman/ppo-LunarLander-v2
|
aymanosman
| 2023-08-21T19:33:18Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-21T19:32:56Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 253.88 +/- 23.87
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption based on the usual `huggingface_sb3` naming):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Checkpoint filename is assumed; adjust if the repo uses a different name
checkpoint = load_from_hub("aymanosman/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
Vraj567/snake
|
Vraj567
| 2023-08-21T19:26:54Z | 0 | 0 | null |
[
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-08-21T19:26:04Z |
---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### snake Dreambooth model trained by Vraj567 following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: SBU16
Sample pictures of this concept:

|
YUYUE-Finnick/path_to_saved_model_prior
|
YUYUE-Finnick
| 2023-08-21T19:19:01Z | 1 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"base_model:CompVis/stable-diffusion-v1-4",
"base_model:finetune:CompVis/stable-diffusion-v1-4",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-08-21T18:52:53Z |
---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-4
instance_prompt: a photo of sks dog
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - YUYUE-Finnick/path_to_saved_model_prior
This is a dreambooth model derived from CompVis/stable-diffusion-v1-4. The weights were trained on a photo of sks dog using [DreamBooth](https://dreambooth.github.io/).
You can find some example images below.
DreamBooth for the text encoder was enabled: False.
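A minimal inference sketch (prompt and settings are illustrative):
```python
import torch
from diffusers import StableDiffusionPipeline

# Load the DreamBooth-tuned weights and generate with the instance prompt
pipe = StableDiffusionPipeline.from_pretrained(
    "YUYUE-Finnick/path_to_saved_model_prior", torch_dtype=torch.float16
).to("cuda")
image = pipe("a photo of sks dog in a bucket", num_inference_steps=50, guidance_scale=7.5).images[0]
image.save("sks_dog.png")
```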
|
bedalton/C1-V11
|
bedalton
| 2023-08-21T19:18:27Z | 0 | 0 | null |
[
"license:mit",
"region:us"
] | null | 2023-08-17T20:22:24Z |
---
license: mit
---
# Creatures 1 - SD Checkpoint - V11
This model is meant to generate scenery like that in the game Creatures 1. It is not meant to make objects or really do anything else.
Trained with Everydream2 on top of Stable Diffusion 1.5.
## Overtrained?
This model is definitely overtrained; some prompts give really bad results, such as anything involving a tree house, anything related to water, and many things to do with caves.
## Checkpoints
Included are the checkpoints at Epoch 70, 80 and 85.
Epoch 70 is the most general, but it also tends to look powdery or as if it were colored with pencils. It works well with negative prompts like "illustration" and "pencil".
At epoch 80 the objects are not powdery, but they are more yellow and saturated (i.e. overtrained) and sometimes look like cartoons. This checkpoint also tends to add branches or blobs of brown at the top of the images it generates.
Neither checkpoint is really recommended; it is best to use the extracted LoRAs.
## Extracted LORAs
Recently added: 2 LoRAs extracted from the checkpoints for epochs 70 and 80. They have no activation words.
If a scenery image lacks the C1 straight-on view, try adding "cutaway" to the prompt. Sometimes that helps; sometimes it makes things much worse.
Images, like the actual Creatures 1 world, will mostly be tinted yellow.
The LoRAs work well when paired with Dreamlike Diffusion, at around 75% strength.
Dreamlike Diffusion with this LoRA tends to add a house or building in the middle of the image, even when you do not expect one.
|
KGSAGAR/distilhubert-finetuned-gtzan
|
KGSAGAR
| 2023-08-21T19:13:57Z | 159 | 0 |
transformers
|
[
"transformers",
"pytorch",
"hubert",
"audio-classification",
"generated_from_trainer",
"dataset:marsyas/gtzan",
"base_model:ntu-spml/distilhubert",
"base_model:finetune:ntu-spml/distilhubert",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2023-08-20T20:17:56Z |
---
license: apache-2.0
base_model: ntu-spml/distilhubert
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
metrics:
- accuracy
model-index:
- name: distilhubert-finetuned-gtzan
results:
- task:
name: Audio Classification
type: audio-classification
dataset:
name: GTZAN
type: marsyas/gtzan
config: all
split: train
args: all
metrics:
- name: Accuracy
type: accuracy
value: 0.86
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilhubert-finetuned-gtzan
This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the GTZAN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5940
- Accuracy: 0.86
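For inference, here is a minimal sketch with the `transformers` audio-classification pipeline; the audio path is a placeholder:
```python
from transformers import pipeline

classifier = pipeline("audio-classification", model="KGSAGAR/distilhubert-finetuned-gtzan")
# "song.wav" is a placeholder; pass the path to any local music clip
predictions = classifier("song.wav")
print(predictions)
```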
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 11
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.9604 | 1.0 | 113 | 1.8896 | 0.47 |
| 0.9921 | 2.0 | 226 | 1.1632 | 0.65 |
| 0.9314 | 3.0 | 339 | 0.9269 | 0.73 |
| 0.7916 | 4.0 | 452 | 0.7033 | 0.84 |
| 0.4223 | 5.0 | 565 | 0.6700 | 0.79 |
| 0.2548 | 6.0 | 678 | 0.6467 | 0.85 |
| 0.2854 | 7.0 | 791 | 0.6092 | 0.82 |
| 0.1582 | 8.0 | 904 | 0.6272 | 0.86 |
| 0.1024 | 9.0 | 1017 | 0.6225 | 0.82 |
| 0.0345 | 10.0 | 1130 | 0.6064 | 0.84 |
| 0.0671 | 11.0 | 1243 | 0.5940 | 0.86 |
### Framework versions
- Transformers 4.33.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
digiplay/Neg
|
digiplay
| 2023-08-21T19:13:56Z | 0 | 2 | null |
[
"license:other",
"region:us"
] | null | 2023-07-12T00:20:10Z |
---
license: other
---
[1]**New Negative v1.4**
file name:
**kkw-new-neg-v1.0.pt**
https://civitai.com/models/101046?modelVersionId=115645
💡 Download this file and upload it to the Embeddings tab in AUTOMATIC1111,
💡 then simply use the file name in your negative prompt textbox.




https://civitai.com/models/101046?modelVersionId=115645
[2]&[3]**BadDream + UnrealisticDream**
file name:
**UnrealisticDream.pt**
**BadDream.pt**
intro:
https://civitai.com/models/72437
https://huggingface.co/digiplay/Neg/discussions/2
[4]**negative_hand-neg.pt**
intro:
https://civitai.com/models/56519/negativehand-negative-embedding
|
Waterhorse/ChessCLIP
|
Waterhorse
| 2023-08-21T19:11:04Z | 0 | 2 | null |
[
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-06-13T02:22:14Z |
---
license: apache-2.0
language:
- en
---
# ChessCLIP
ChessCLIP is a CLIP model trained to align (board, action) representations with natural language and compute their similarity for chess games.
## Model Details
- **Language(s)**: English
- **License**: Apache 2.0
- **Model Description**: A CLIP model for chess.
# Quick Start
```bash
git clone https://github.com/waterhorse1/ChessGPT
```
Clone our codebase and install all dependencies according to our README.
You can also refer to https://github.com/waterhorse1/ChessGPT/blob/main/chessclip_demo.ipynb for a demo notebook of ChessCLIP.
## Inference
```python
import sys
sys.path.append('./chessclip/src')
import torch
import io
import chess.pgn
import numpy as np
from chess_ai.feature_converter import get_lc0_input_planes_tf
from chess_ai.datasets.tfds.pgn_base import generate_examples_from_game_no_comment
import open_clip
from open_clip.factory import get_tokenizer, load_checkpoint
# init
model_name = 'chessclip-quickgelu'
model = open_clip.create_model(model_name, pretrained='openai')
tokenizer = get_tokenizer(model_name)
# load model
load_checkpoint(model, './ChessCLIP/epoch_latest.pt')
# check parameters
model.eval()
context_length = model.text.context_length
vocab_size = model.text.vocab_size
print("Model parameters:", f"{np.sum([int(np.prod(p.shape)) for p in model.parameters()]):,}")
print("Context length:", context_length)
print("Vocab size:", vocab_size)
# generate board/action embedding based on pgn string
def generate_representation_for_final(pgn):
game = chess.pgn.read_game(io.StringIO(pgn))
data = list(generate_examples_from_game_no_comment(game))[-1]
for key in data.keys():
data[key] = np.array(data[key])
board = get_lc0_input_planes_tf(data).numpy()
action = data['probs']
return board, action
# Prepare input
prompt = "Black plays Sicilian Defense"
pgn_str = '1. e4 c5'
board, action = generate_representation_for_final(pgn_str)
text_tokens = tokenizer([prompt])
image_input = torch.from_numpy(np.stack([board], axis=0))
action_input = torch.from_numpy(np.stack([action], axis=0))
# infer
with torch.no_grad():
image_features = model.encode_image((image_input, action_input)).float()
text_features = model.encode_text(text_tokens).float()
image_features /= image_features.norm(dim=-1, keepdim=True) # n * dim
text_features /= text_features.norm(dim=-1, keepdim=True)# m * dim
similarity = text_features.cpu().numpy() @ image_features.cpu().numpy().T # m * n
print(similarity)
```
## Limitations
"ChessCLIP," like other CLIP-based models, has certain limitations that need to be taken into consideration. For instance, the model may produce incorrect similarities, especially when faced with complex, ambiguous, or language inputs that fall outside its training data.
We highly appreciate contributions from individuals and organizations to enhance the model's performance and stability. Specifically, we welcome annotated data, such as annotated PGN (Portable Game Notation), which can be utilized to train a more robust and reliable CLIP model.
## Benchmark
Please refer to our [paper](https://together.xyz) and [code](https://github.com/waterhorse1/ChessGPT) for benchmark results.
|
Clakmann/t5-base-Clakmann-thesis
|
Clakmann
| 2023-08-21T18:44:29Z | 14 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google-t5/t5-base",
"base_model:finetune:google-t5/t5-base",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-08-03T23:12:32Z |
---
license: apache-2.0
base_model: t5-base
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: t5-base-Clakmann-thesis
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-base-Clakmann-thesis
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7647
- Rouge1: 19.9179
- Rouge2: 6.8159
- Rougel: 18.8425
- Rougelsum: 18.8407
- Gen Len: 14.3685
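A minimal generation sketch with `transformers`; the card does not document the training data or task prefix, so the input text below is purely illustrative:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "Clakmann/t5-base-Clakmann-thesis"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# The input below is a placeholder; replace it with text matching the fine-tuning task
inputs = tokenizer("Example input text for the fine-tuned task.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```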
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| 1.8942 | 1.0 | 5029 | 1.7647 | 19.9179 | 6.8159 | 18.8425 | 18.8407 | 14.3685 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.3
- Tokenizers 0.13.3
|
bigmorning/whisper_train_on_validated_cv_model__0020
|
bigmorning
| 2023-08-21T18:37:12Z | 59 | 0 |
transformers
|
[
"transformers",
"tf",
"whisper",
"automatic-speech-recognition",
"generated_from_keras_callback",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-08-21T18:37:04Z |
---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_keras_callback
model-index:
- name: whisper_train_on_validated_cv_model__0020
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_train_on_validated_cv_model__0020
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0257
- Train Accuracy: 0.0818
- Train Wermet: 6.1581
- Validation Loss: 0.4476
- Validation Accuracy: 0.0720
- Validation Wermet: 7.0531
- Epoch: 19
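Since this checkpoint is stored in TensorFlow format, here is a minimal transcription sketch; it assumes the processor/tokenizer files are present in the repo, and the audio path is a placeholder:
```python
import librosa  # assumption: any loader that yields a 16 kHz float array works
from transformers import WhisperProcessor, TFWhisperForConditionalGeneration

repo = "bigmorning/whisper_train_on_validated_cv_model__0020"
processor = WhisperProcessor.from_pretrained(repo)
model = TFWhisperForConditionalGeneration.from_pretrained(repo)

audio, _ = librosa.load("sample.wav", sr=16000)  # "sample.wav" is a placeholder
inputs = processor(audio, sampling_rate=16000, return_tensors="tf")
generated_ids = model.generate(inputs.input_features)
print(processor.batch_decode(generated_ids, skip_special_tokens=True))
```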
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| 2.2992 | 0.0331 | 6.3187 | 1.9677 | 0.0372 | 7.2782 | 0 |
| 1.7447 | 0.0431 | 5.6345 | 1.7865 | 0.0408 | 6.3969 | 1 |
| 1.5036 | 0.0480 | 5.6512 | 1.4821 | 0.0468 | 5.1810 | 2 |
| 1.0955 | 0.0567 | 4.4606 | 1.0492 | 0.0557 | 3.7886 | 3 |
| 0.7076 | 0.0654 | 4.9157 | 0.7312 | 0.0627 | 4.2416 | 4 |
| 0.4703 | 0.0710 | 4.9749 | 0.5668 | 0.0664 | 4.8883 | 5 |
| 0.3552 | 0.0737 | 5.1823 | 0.4725 | 0.0685 | 4.8141 | 6 |
| 0.2865 | 0.0754 | 4.1024 | 0.4358 | 0.0694 | 3.8550 | 7 |
| 0.2380 | 0.0765 | 3.3689 | 0.3947 | 0.0704 | 2.2953 | 8 |
| 0.1998 | 0.0775 | 2.6219 | 0.3848 | 0.0707 | 3.0529 | 9 |
| 0.1687 | 0.0782 | 2.2111 | 0.3689 | 0.0711 | 1.8146 | 10 |
| 0.1417 | 0.0789 | 2.5284 | 0.3709 | 0.0713 | 1.9439 | 11 |
| 0.1190 | 0.0795 | 2.8048 | 0.3631 | 0.0716 | 3.0845 | 12 |
| 0.0983 | 0.0800 | 3.2668 | 0.3657 | 0.0717 | 3.6423 | 13 |
| 0.0815 | 0.0804 | 3.8567 | 0.3806 | 0.0717 | 6.1506 | 14 |
| 0.0659 | 0.0808 | 5.0931 | 0.3920 | 0.0718 | 6.4431 | 15 |
| 0.0520 | 0.0812 | 5.6397 | 0.3935 | 0.0720 | 5.1514 | 16 |
| 0.0409 | 0.0814 | 5.7797 | 0.4147 | 0.0720 | 4.2822 | 17 |
| 0.0330 | 0.0816 | 5.1017 | 0.4354 | 0.0719 | 6.2876 | 18 |
| 0.0257 | 0.0818 | 6.1581 | 0.4476 | 0.0720 | 7.0531 | 19 |
### Framework versions
- Transformers 4.32.0.dev0
- TensorFlow 2.12.0
- Tokenizers 0.13.3
|
MijoBa/Olaus_Magnus_Woodcut
|
MijoBa
| 2023-08-21T18:32:11Z | 0 | 0 | null |
[
"en",
"license:cc",
"region:us"
] | null | 2023-08-21T18:21:43Z |
---
license: cc
language:
- en
---
|
hglong16/lunar
|
hglong16
| 2023-08-21T18:18:22Z | 4 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-21T17:26:01Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 271.87 +/- 20.76
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption; check the repo's files for the actual name):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# The .zip filename follows the usual SB3 hub convention and is an assumption
checkpoint = load_from_hub("hglong16/lunar", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
Onutoa/20230822011214
|
Onutoa
| 2023-08-21T18:08:54Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:super_glue",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-21T16:12:30Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- super_glue
metrics:
- accuracy
model-index:
- name: '20230822011214'
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 20230822011214
This model is a fine-tuned version of [bert-large-cased](https://huggingface.co/bert-large-cased) on the super_glue dataset.
It achieves the following results on the evaluation set:
- Loss: 13.1424
- Accuracy: 0.4729
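A minimal inference sketch; the card does not state which SuperGLUE task or input format was used, so the example input is purely illustrative:
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="Onutoa/20230822011214")
# The input format is an assumption; adapt it to the SuperGLUE task this model targets
print(classifier("Is the sky blue? [SEP] On a clear day the sky appears blue."))
```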
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 8
- eval_batch_size: 8
- seed: 11
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 60.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| No log | 1.0 | 312 | 34.1366 | 0.4729 |
| 34.4899 | 2.0 | 624 | 31.6158 | 0.4982 |
| 34.4899 | 3.0 | 936 | 29.7502 | 0.4765 |
| 31.3598 | 4.0 | 1248 | 29.3626 | 0.5018 |
| 29.6767 | 5.0 | 1560 | 29.1220 | 0.4729 |
| 29.6767 | 6.0 | 1872 | 28.7672 | 0.5307 |
| 29.2217 | 7.0 | 2184 | 27.2268 | 0.5126 |
| 29.2217 | 8.0 | 2496 | 23.7819 | 0.4982 |
| 27.2285 | 9.0 | 2808 | 20.2651 | 0.5271 |
| 23.6907 | 10.0 | 3120 | 17.8350 | 0.5271 |
| 23.6907 | 11.0 | 3432 | 16.7909 | 0.4729 |
| 21.0475 | 12.0 | 3744 | 16.1897 | 0.4729 |
| 20.1309 | 13.0 | 4056 | 15.7234 | 0.4729 |
| 20.1309 | 14.0 | 4368 | 15.4084 | 0.4729 |
| 19.6553 | 15.0 | 4680 | 15.1657 | 0.4729 |
| 19.6553 | 16.0 | 4992 | 14.9716 | 0.5271 |
| 19.3496 | 17.0 | 5304 | 14.7880 | 0.5271 |
| 19.122 | 18.0 | 5616 | 14.6322 | 0.4729 |
| 19.122 | 19.0 | 5928 | 14.5424 | 0.4729 |
| 18.9517 | 20.0 | 6240 | 14.4178 | 0.5271 |
| 18.7994 | 21.0 | 6552 | 14.2725 | 0.4729 |
| 18.7994 | 22.0 | 6864 | 14.2138 | 0.5271 |
| 18.6835 | 23.0 | 7176 | 14.1064 | 0.5271 |
| 18.6835 | 24.0 | 7488 | 14.0401 | 0.4729 |
| 18.59 | 25.0 | 7800 | 13.9478 | 0.4729 |
| 18.504 | 26.0 | 8112 | 13.9156 | 0.4729 |
| 18.504 | 27.0 | 8424 | 13.8335 | 0.4729 |
| 18.4387 | 28.0 | 8736 | 13.7761 | 0.4729 |
| 18.3758 | 29.0 | 9048 | 13.7312 | 0.4729 |
| 18.3758 | 30.0 | 9360 | 13.6791 | 0.4729 |
| 18.3264 | 31.0 | 9672 | 13.6458 | 0.5271 |
| 18.3264 | 32.0 | 9984 | 13.5991 | 0.4729 |
| 18.2808 | 33.0 | 10296 | 13.5762 | 0.5271 |
| 18.2355 | 34.0 | 10608 | 13.5283 | 0.4729 |
| 18.2355 | 35.0 | 10920 | 13.4919 | 0.4729 |
| 18.2071 | 36.0 | 11232 | 13.4721 | 0.4729 |
| 18.1831 | 37.0 | 11544 | 13.4375 | 0.4729 |
| 18.1831 | 38.0 | 11856 | 13.4097 | 0.5271 |
| 18.1448 | 39.0 | 12168 | 13.4004 | 0.5271 |
| 18.1448 | 40.0 | 12480 | 13.3691 | 0.5271 |
| 18.1182 | 41.0 | 12792 | 13.3430 | 0.4729 |
| 18.1006 | 42.0 | 13104 | 13.3514 | 0.4729 |
| 18.1006 | 43.0 | 13416 | 13.3017 | 0.4729 |
| 18.0785 | 44.0 | 13728 | 13.2838 | 0.4729 |
| 18.0562 | 45.0 | 14040 | 13.2687 | 0.4729 |
| 18.0562 | 46.0 | 14352 | 13.2555 | 0.4729 |
| 18.0454 | 47.0 | 14664 | 13.2510 | 0.4729 |
| 18.0454 | 48.0 | 14976 | 13.2384 | 0.5271 |
| 18.0293 | 49.0 | 15288 | 13.2096 | 0.4729 |
| 18.0221 | 50.0 | 15600 | 13.2013 | 0.4729 |
| 18.0221 | 51.0 | 15912 | 13.1936 | 0.4729 |
| 17.9969 | 52.0 | 16224 | 13.1813 | 0.4729 |
| 17.9919 | 53.0 | 16536 | 13.1736 | 0.4729 |
| 17.9919 | 54.0 | 16848 | 13.1681 | 0.5271 |
| 17.9823 | 55.0 | 17160 | 13.1559 | 0.4729 |
| 17.9823 | 56.0 | 17472 | 13.1537 | 0.4729 |
| 17.9804 | 57.0 | 17784 | 13.1490 | 0.4729 |
| 17.9743 | 58.0 | 18096 | 13.1461 | 0.4729 |
| 17.9743 | 59.0 | 18408 | 13.1429 | 0.4729 |
| 17.9703 | 60.0 | 18720 | 13.1424 | 0.4729 |
### Framework versions
- Transformers 4.30.0
- Pytorch 2.0.1+cu117
- Datasets 2.14.4
- Tokenizers 0.13.3
|
Onutoa/20230822011246
|
Onutoa
| 2023-08-21T18:05:46Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:super_glue",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-21T16:13:07Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- super_glue
metrics:
- accuracy
model-index:
- name: '20230822011246'
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 20230822011246
This model is a fine-tuned version of [bert-large-cased](https://huggingface.co/bert-large-cased) on the super_glue dataset.
It achieves the following results on the evaluation set:
- Loss: 12.0925
- Accuracy: 0.4729
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.01
- train_batch_size: 8
- eval_batch_size: 8
- seed: 11
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 60.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| No log | 1.0 | 312 | 23.5855 | 0.5271 |
| 27.3295 | 2.0 | 624 | 15.7672 | 0.4729 |
| 27.3295 | 3.0 | 936 | 14.1816 | 0.5271 |
| 19.6736 | 4.0 | 1248 | 13.5811 | 0.4729 |
| 18.8481 | 5.0 | 1560 | 13.3851 | 0.4729 |
| 18.8481 | 6.0 | 1872 | 13.0199 | 0.4729 |
| 18.5899 | 7.0 | 2184 | 12.9497 | 0.4838 |
| 18.5899 | 8.0 | 2496 | 12.9961 | 0.4729 |
| 18.473 | 9.0 | 2808 | 12.8275 | 0.4729 |
| 18.3073 | 10.0 | 3120 | 12.6992 | 0.4729 |
| 18.3073 | 11.0 | 3432 | 13.5160 | 0.5271 |
| 18.2739 | 12.0 | 3744 | 12.6731 | 0.5307 |
| 18.1236 | 13.0 | 4056 | 12.6066 | 0.4729 |
| 18.1236 | 14.0 | 4368 | 12.5802 | 0.4729 |
| 18.1096 | 15.0 | 4680 | 12.6447 | 0.5271 |
| 18.1096 | 16.0 | 4992 | 13.3094 | 0.4729 |
| 18.1134 | 17.0 | 5304 | 13.0970 | 0.5271 |
| 18.1098 | 18.0 | 5616 | 12.7293 | 0.5271 |
| 18.1098 | 19.0 | 5928 | 12.6166 | 0.5271 |
| 18.0277 | 20.0 | 6240 | 12.5606 | 0.4729 |
| 18.0289 | 21.0 | 6552 | 12.5322 | 0.4729 |
| 18.0289 | 22.0 | 6864 | 12.7341 | 0.5271 |
| 18.0223 | 23.0 | 7176 | 12.5497 | 0.4729 |
| 18.0223 | 24.0 | 7488 | 12.4199 | 0.5271 |
| 17.9317 | 25.0 | 7800 | 12.7868 | 0.5271 |
| 17.9107 | 26.0 | 8112 | 12.3295 | 0.4729 |
| 17.9107 | 27.0 | 8424 | 12.6038 | 0.4729 |
| 17.8944 | 28.0 | 8736 | 12.3329 | 0.5271 |
| 17.8667 | 29.0 | 9048 | 12.3034 | 0.5271 |
| 17.8667 | 30.0 | 9360 | 12.4605 | 0.5271 |
| 17.8228 | 31.0 | 9672 | 12.5110 | 0.4729 |
| 17.8228 | 32.0 | 9984 | 12.4227 | 0.5271 |
| 17.8006 | 33.0 | 10296 | 12.2972 | 0.4729 |
| 17.76 | 34.0 | 10608 | 12.3011 | 0.4729 |
| 17.76 | 35.0 | 10920 | 12.2179 | 0.4729 |
| 17.7564 | 36.0 | 11232 | 12.2381 | 0.4729 |
| 17.7084 | 37.0 | 11544 | 12.8747 | 0.4729 |
| 17.7084 | 38.0 | 11856 | 12.1945 | 0.4729 |
| 17.7035 | 39.0 | 12168 | 12.2180 | 0.4729 |
| 17.7035 | 40.0 | 12480 | 12.2830 | 0.4729 |
| 17.6668 | 41.0 | 12792 | 12.1857 | 0.4693 |
| 17.6396 | 42.0 | 13104 | 12.2239 | 0.5379 |
| 17.6396 | 43.0 | 13416 | 12.1584 | 0.5271 |
| 17.6452 | 44.0 | 13728 | 12.3185 | 0.4729 |
| 17.6074 | 45.0 | 14040 | 12.2421 | 0.5271 |
| 17.6074 | 46.0 | 14352 | 12.1912 | 0.4729 |
| 17.6167 | 47.0 | 14664 | 12.2022 | 0.5271 |
| 17.6167 | 48.0 | 14976 | 12.1326 | 0.4729 |
| 17.5782 | 49.0 | 15288 | 12.1550 | 0.4729 |
| 17.562 | 50.0 | 15600 | 12.2250 | 0.5271 |
| 17.562 | 51.0 | 15912 | 12.1190 | 0.4729 |
| 17.5409 | 52.0 | 16224 | 12.1505 | 0.5271 |
| 17.5211 | 53.0 | 16536 | 12.1046 | 0.4729 |
| 17.5211 | 54.0 | 16848 | 12.1132 | 0.5271 |
| 17.5043 | 55.0 | 17160 | 12.1159 | 0.4729 |
| 17.5043 | 56.0 | 17472 | 12.1085 | 0.5271 |
| 17.4952 | 57.0 | 17784 | 12.1024 | 0.4729 |
| 17.4731 | 58.0 | 18096 | 12.0955 | 0.4729 |
| 17.4731 | 59.0 | 18408 | 12.0981 | 0.5271 |
| 17.4654 | 60.0 | 18720 | 12.0925 | 0.4729 |
### Framework versions
- Transformers 4.30.0
- Pytorch 2.0.1+cu117
- Datasets 2.14.4
- Tokenizers 0.13.3
|
bigmorning/whisper_train_on_validated_cv_model__0015
|
bigmorning
| 2023-08-21T18:03:57Z | 59 | 0 |
transformers
|
[
"transformers",
"tf",
"whisper",
"automatic-speech-recognition",
"generated_from_keras_callback",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-08-21T18:03:49Z |
---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_keras_callback
model-index:
- name: whisper_train_on_validated_cv_model__0015
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_train_on_validated_cv_model__0015
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0815
- Train Accuracy: 0.0804
- Train Wermet: 3.8567
- Validation Loss: 0.3806
- Validation Accuracy: 0.0717
- Validation Wermet: 6.1506
- Epoch: 14
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| 2.2992 | 0.0331 | 6.3187 | 1.9677 | 0.0372 | 7.2782 | 0 |
| 1.7447 | 0.0431 | 5.6345 | 1.7865 | 0.0408 | 6.3969 | 1 |
| 1.5036 | 0.0480 | 5.6512 | 1.4821 | 0.0468 | 5.1810 | 2 |
| 1.0955 | 0.0567 | 4.4606 | 1.0492 | 0.0557 | 3.7886 | 3 |
| 0.7076 | 0.0654 | 4.9157 | 0.7312 | 0.0627 | 4.2416 | 4 |
| 0.4703 | 0.0710 | 4.9749 | 0.5668 | 0.0664 | 4.8883 | 5 |
| 0.3552 | 0.0737 | 5.1823 | 0.4725 | 0.0685 | 4.8141 | 6 |
| 0.2865 | 0.0754 | 4.1024 | 0.4358 | 0.0694 | 3.8550 | 7 |
| 0.2380 | 0.0765 | 3.3689 | 0.3947 | 0.0704 | 2.2953 | 8 |
| 0.1998 | 0.0775 | 2.6219 | 0.3848 | 0.0707 | 3.0529 | 9 |
| 0.1687 | 0.0782 | 2.2111 | 0.3689 | 0.0711 | 1.8146 | 10 |
| 0.1417 | 0.0789 | 2.5284 | 0.3709 | 0.0713 | 1.9439 | 11 |
| 0.1190 | 0.0795 | 2.8048 | 0.3631 | 0.0716 | 3.0845 | 12 |
| 0.0983 | 0.0800 | 3.2668 | 0.3657 | 0.0717 | 3.6423 | 13 |
| 0.0815 | 0.0804 | 3.8567 | 0.3806 | 0.0717 | 6.1506 | 14 |
### Framework versions
- Transformers 4.32.0.dev0
- TensorFlow 2.12.0
- Tokenizers 0.13.3
|
Onutoa/20230822010704
|
Onutoa
| 2023-08-21T18:03:26Z | 110 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:super_glue",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-21T16:07:21Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- super_glue
metrics:
- accuracy
model-index:
- name: '20230822010704'
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 20230822010704
This model is a fine-tuned version of [bert-large-cased](https://huggingface.co/bert-large-cased) on the super_glue dataset.
It achieves the following results on the evaluation set:
- Loss: 12.2037
- Accuracy: 0.4729
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 8
- eval_batch_size: 8
- seed: 11
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 60.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| No log | 1.0 | 312 | 33.1191 | 0.4513 |
| 33.504 | 2.0 | 624 | 30.1105 | 0.5126 |
| 33.504 | 3.0 | 936 | 28.6596 | 0.4729 |
| 29.6796 | 4.0 | 1248 | 28.1189 | 0.5018 |
| 28.1744 | 5.0 | 1560 | 24.7761 | 0.4729 |
| 28.1744 | 6.0 | 1872 | 21.9627 | 0.5235 |
| 24.4505 | 7.0 | 2184 | 19.0019 | 0.5271 |
| 24.4505 | 8.0 | 2496 | 17.1277 | 0.5271 |
| 21.3932 | 9.0 | 2808 | 16.1660 | 0.5271 |
| 19.6922 | 10.0 | 3120 | 15.5951 | 0.5271 |
| 19.6922 | 11.0 | 3432 | 15.0824 | 0.4729 |
| 18.9663 | 12.0 | 3744 | 14.8520 | 0.4729 |
| 18.4915 | 13.0 | 4056 | 14.5191 | 0.4729 |
| 18.4915 | 14.0 | 4368 | 14.2798 | 0.4729 |
| 18.1712 | 15.0 | 4680 | 14.1216 | 0.4729 |
| 18.1712 | 16.0 | 4992 | 13.9650 | 0.5271 |
| 17.9497 | 17.0 | 5304 | 13.8237 | 0.5307 |
| 17.7679 | 18.0 | 5616 | 13.7031 | 0.5271 |
| 17.7679 | 19.0 | 5928 | 13.6600 | 0.4729 |
| 17.6276 | 20.0 | 6240 | 13.4947 | 0.5271 |
| 17.4928 | 21.0 | 6552 | 13.3930 | 0.4729 |
| 17.4928 | 22.0 | 6864 | 13.3240 | 0.5271 |
| 17.3723 | 23.0 | 7176 | 13.2304 | 0.5271 |
| 17.3723 | 24.0 | 7488 | 13.1542 | 0.4729 |
| 17.2738 | 25.0 | 7800 | 13.0519 | 0.5271 |
| 17.1691 | 26.0 | 8112 | 13.0350 | 0.4729 |
| 17.1691 | 27.0 | 8424 | 12.9247 | 0.4729 |
| 17.0746 | 28.0 | 8736 | 12.8456 | 0.5126 |
| 16.9881 | 29.0 | 9048 | 12.7944 | 0.4729 |
| 16.9881 | 30.0 | 9360 | 12.7474 | 0.4729 |
| 16.9201 | 31.0 | 9672 | 12.7131 | 0.5271 |
| 16.9201 | 32.0 | 9984 | 12.6670 | 0.4729 |
| 16.8521 | 33.0 | 10296 | 12.6285 | 0.5271 |
| 16.7917 | 34.0 | 10608 | 12.5831 | 0.4729 |
| 16.7917 | 35.0 | 10920 | 12.5488 | 0.5271 |
| 16.7467 | 36.0 | 11232 | 12.5223 | 0.4729 |
| 16.7092 | 37.0 | 11544 | 12.4885 | 0.4729 |
| 16.7092 | 38.0 | 11856 | 12.4606 | 0.5271 |
| 16.6584 | 39.0 | 12168 | 12.4352 | 0.5271 |
| 16.6584 | 40.0 | 12480 | 12.4116 | 0.4729 |
| 16.6245 | 41.0 | 12792 | 12.3909 | 0.5271 |
| 16.5986 | 42.0 | 13104 | 12.4119 | 0.4729 |
| 16.5986 | 43.0 | 13416 | 12.3479 | 0.5271 |
| 16.5728 | 44.0 | 13728 | 12.3328 | 0.4729 |
| 16.5395 | 45.0 | 14040 | 12.3359 | 0.4729 |
| 16.5395 | 46.0 | 14352 | 12.3195 | 0.4729 |
| 16.5222 | 47.0 | 14664 | 12.3031 | 0.4729 |
| 16.5222 | 48.0 | 14976 | 12.2788 | 0.5271 |
| 16.5068 | 49.0 | 15288 | 12.2630 | 0.5596 |
| 16.4947 | 50.0 | 15600 | 12.2533 | 0.4729 |
| 16.4947 | 51.0 | 15912 | 12.2531 | 0.4729 |
| 16.4716 | 52.0 | 16224 | 12.2479 | 0.4729 |
| 16.4646 | 53.0 | 16536 | 12.2272 | 0.5271 |
| 16.4646 | 54.0 | 16848 | 12.2213 | 0.5271 |
| 16.4479 | 55.0 | 17160 | 12.2177 | 0.4729 |
| 16.4479 | 56.0 | 17472 | 12.2112 | 0.4765 |
| 16.447 | 57.0 | 17784 | 12.2106 | 0.4729 |
| 16.4403 | 58.0 | 18096 | 12.2055 | 0.4729 |
| 16.4403 | 59.0 | 18408 | 12.2039 | 0.4729 |
| 16.4371 | 60.0 | 18720 | 12.2037 | 0.4729 |
### Framework versions
- Transformers 4.30.0
- Pytorch 2.0.1+cu117
- Datasets 2.14.4
- Tokenizers 0.13.3
|
sarwarbeing/a2c-PandaReachDense-v3
|
sarwarbeing
| 2023-08-21T18:00:34Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-21T17:54:56Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v3
type: PandaReachDense-v3
metrics:
- type: mean_reward
value: -0.25 +/- 0.11
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v3**
This is a trained model of a **A2C** agent playing **PandaReachDense-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption; check the repo's files for the actual name):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# The .zip filename follows the usual SB3 hub convention and is an assumption
checkpoint = load_from_hub("sarwarbeing/a2c-PandaReachDense-v3", "a2c-PandaReachDense-v3.zip")
model = A2C.load(checkpoint)
```
|
VK246/IC_ver6H_coco_swin_gpt2_50B_1e
|
VK246
| 2023-08-21T17:54:15Z | 45 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vision-encoder-decoder",
"image-text-to-text",
"generated_from_trainer",
"dataset:coco",
"base_model:VK246/IC_ver6G_coco_swin_gpt2_50A_1e",
"base_model:finetune:VK246/IC_ver6G_coco_swin_gpt2_50A_1e",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2023-08-21T15:34:20Z |
---
base_model: VK246/IC_ver6G_coco_swin_gpt2_50A_1e
tags:
- generated_from_trainer
datasets:
- coco
metrics:
- rouge
model-index:
- name: IC_ver6H_coco_swin_gpt2_50B_1e
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# IC_ver6H_coco_swin_gpt2_50B_1e
This model is a fine-tuned version of [VK246/IC_ver6G_coco_swin_gpt2_50A_1e](https://huggingface.co/VK246/IC_ver6G_coco_swin_gpt2_50A_1e) on the coco dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7969
- Cider: 7.3588
- Rouge1: 41.9295
- Rouge2: 16.3455
- Rougel: 37.9811
- Rougelsum: 37.9766
- Bleu-1: 42.8743
- Bleu-2: 24.7756
- Bleu-3: 15.6692
- Bleu-4: 10.4429
- Gen Len: 11.3063
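A minimal captioning sketch with the `transformers` image-to-text pipeline; the image path is a placeholder:
```python
from transformers import pipeline

captioner = pipeline("image-to-text", model="VK246/IC_ver6H_coco_swin_gpt2_50B_1e")
# "example.jpg" is a placeholder; pass any local image path or URL
print(captioner("example.jpg"))
```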
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 96
- eval_batch_size: 96
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Cider | Rouge1 | Rouge2 | Rougel | Rougelsum | Bleu-1 | Bleu-2 | Bleu-3 | Bleu-4 | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:-------:|:-------:|:---------:|:-------:|:-------:|:-------:|:-------:|:-------:|
| 0.6159 | 0.34 | 1000 | 0.8323 | 6.7172 | 41.0274 | 15.5809 | 37.2211 | 37.2045 | 42.2207 | 24.0365 | 15.0562 | 9.9118 | 11.3063 |
| 0.6802 | 0.68 | 2000 | 0.7969 | 7.3588 | 41.9295 | 16.3455 | 37.9811 | 37.9766 | 42.8743 | 24.7756 | 15.6692 | 10.4429 | 11.3063 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
vedantmahalle21/whisper-small-dv
|
vedantmahalle21
| 2023-08-21T17:51:37Z | 79 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"dv",
"dataset:mozilla-foundation/common_voice_13_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-08-21T14:26:54Z |
---
language:
- dv
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_13_0
metrics:
- wer
model-index:
- name: Whisper Small Dv - Vedant Mahalle
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 13
type: mozilla-foundation/common_voice_13_0
config: dv
split: test
args: dv
metrics:
- name: Wer
type: wer
value: 13.266335153180094
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Dv - Vedant Mahalle
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 13 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1698
- Wer Ortho: 62.0169
- Wer: 13.2663
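A minimal transcription sketch with the `transformers` ASR pipeline; the audio path is a placeholder for any Dhivehi speech recording:
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="vedantmahalle21/whisper-small-dv")
# "sample_dv.wav" is a placeholder; pass the path to any Dhivehi speech clip
print(asr("sample_dv.wav"))
```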
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 50
- training_steps: 500
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:-------:|
| 0.1232 | 1.63 | 500 | 0.1698 | 62.0169 | 13.2663 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
peymansyh/distilhubert-finetuned-gtzan
|
peymansyh
| 2023-08-21T17:50:22Z | 159 | 0 |
transformers
|
[
"transformers",
"pytorch",
"hubert",
"audio-classification",
"generated_from_trainer",
"dataset:marsyas/gtzan",
"base_model:ntu-spml/distilhubert",
"base_model:finetune:ntu-spml/distilhubert",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2023-08-11T14:09:10Z |
---
license: apache-2.0
base_model: ntu-spml/distilhubert
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
metrics:
- accuracy
model-index:
- name: distilhubert-finetuned-gtzan-88
results:
- task:
name: Audio Classification
type: audio-classification
dataset:
name: GTZAN
type: marsyas/gtzan
config: all
split: train
args: all
metrics:
- name: Accuracy
type: accuracy
value: 0.87
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilhubert-finetuned-gtzan-88
This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the GTZAN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6139
- Accuracy: 0.87
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 8e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 12
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.0172 | 1.0 | 112 | 1.8314 | 0.37 |
| 1.5433 | 2.0 | 225 | 1.2575 | 0.5 |
| 1.1517 | 3.0 | 337 | 0.9577 | 0.7 |
| 0.904 | 4.0 | 450 | 0.7582 | 0.77 |
| 0.4788 | 5.0 | 562 | 0.7504 | 0.79 |
| 0.3843 | 6.0 | 675 | 0.6265 | 0.79 |
| 0.3683 | 7.0 | 787 | 0.6683 | 0.8 |
| 0.2278 | 8.0 | 900 | 0.8167 | 0.77 |
| 0.4534 | 9.0 | 1012 | 0.6023 | 0.83 |
| 0.2357 | 10.0 | 1125 | 0.6185 | 0.83 |
| 0.3674 | 11.0 | 1237 | 0.6079 | 0.86 |
| 0.148 | 11.95 | 1344 | 0.6139 | 0.87 |
### Framework versions
- Transformers 4.32.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
Msrisrujan/distilbert-base-uncased-finetuned-emotion
|
Msrisrujan
| 2023-08-21T17:49:34Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-20T01:12:09Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.924
- name: F1
type: f1
value: 0.9241048144634979
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2227
- Accuracy: 0.924
- F1: 0.9241
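A minimal inference sketch with the `transformers` pipeline; the example sentence is illustrative:
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification", model="Msrisrujan/distilbert-base-uncased-finetuned-emotion"
)
print(classifier("I can't believe how well this turned out, I'm so happy!"))
```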
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8661 | 1.0 | 250 | 0.3402 | 0.905 | 0.9041 |
| 0.2651 | 2.0 | 500 | 0.2227 | 0.924 | 0.9241 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1
- Datasets 2.14.4
- Tokenizers 0.13.3
|
thanhnew2001/vn-falcon-7b
|
thanhnew2001
| 2023-08-21T17:28:55Z | 7 | 0 |
peft
|
[
"peft",
"text-generation",
"region:us"
] |
text-generation
| 2023-08-03T00:47:59Z |
---
library_name: peft
pipeline_tag: text-generation
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0.dev0
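A minimal loading sketch with `peft`; the base model is read from the adapter config rather than hard-coded, since the card does not name it explicitly:
```python
from peft import PeftConfig, PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

adapter_id = "thanhnew2001/vn-falcon-7b"
config = PeftConfig.from_pretrained(adapter_id)

# trust_remote_code is an assumption for Falcon-style base models
base = AutoModelForCausalLM.from_pretrained(
    config.base_model_name_or_path, device_map="auto", trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)
model = PeftModel.from_pretrained(base, adapter_id)
```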
|
leonard-pak/ppo-SnowballTarget
|
leonard-pak
| 2023-08-21T17:27:51Z | 7 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2023-08-21T17:27:48Z |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: leonard-pak/ppo-SnowballTarget
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
thanhnew2001/vn-bloom7b1-news
|
thanhnew2001
| 2023-08-21T17:21:15Z | 30 | 0 |
peft
|
[
"peft",
"text-generation",
"region:us"
] |
text-generation
| 2023-08-07T23:56:03Z |
---
library_name: peft
pipeline_tag: text-generation
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0.dev0
|
thanhnew2001/bloom7b1_grade7_500
|
thanhnew2001
| 2023-08-21T17:16:02Z | 2 | 0 |
peft
|
[
"peft",
"text-generation",
"region:us"
] |
text-generation
| 2023-08-21T14:08:33Z |
---
library_name: peft
pipeline_tag: text-generation
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0.dev0
|
aviroes/elderly_whisper-small-fr
|
aviroes
| 2023-08-21T17:10:03Z | 85 | 0 |
transformers
|
[
"transformers",
"pytorch",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:aviroes/whisper-small-fr",
"base_model:finetune:aviroes/whisper-small-fr",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-08-19T19:17:09Z |
---
license: apache-2.0
base_model: aviroes/whisper-small-fr
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: elderly_whisper-small-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# elderly_whisper-small-fr
This model is a fine-tuned version of [aviroes/whisper-small-fr](https://huggingface.co/aviroes/whisper-small-fr) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3006
- Wer: 0.3065
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6.25e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 800
- training_steps: 5000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.5099 | 0.04 | 100 | 0.4949 | 0.2392 |
| 0.4968 | 0.08 | 200 | 0.4577 | 0.2922 |
| 0.4662 | 0.11 | 300 | 0.4336 | 0.2325 |
| 0.4241 | 0.15 | 400 | 0.4232 | 0.2545 |
| 0.3902 | 0.19 | 500 | 0.4073 | 0.3009 |
| 0.4205 | 0.23 | 600 | 0.3978 | 0.2672 |
| 0.4 | 0.27 | 700 | 0.3798 | 0.2473 |
| 0.3508 | 0.3 | 800 | 0.3860 | 0.2218 |
| 0.3601 | 0.34 | 900 | 0.3870 | 0.2509 |
| 0.3147 | 0.38 | 1000 | 0.3663 | 0.2983 |
| 0.3194 | 0.42 | 1100 | 0.3637 | 0.2285 |
| 0.3218 | 0.46 | 1200 | 0.3616 | 0.2361 |
| 0.3365 | 0.5 | 1300 | 0.3555 | 0.2091 |
| 0.3474 | 0.53 | 1400 | 0.3560 | 0.2075 |
| 0.3439 | 0.57 | 1500 | 0.3490 | 0.2228 |
| 0.3254 | 0.61 | 1600 | 0.3432 | 0.1892 |
| 0.3089 | 0.65 | 1700 | 0.3426 | 0.1979 |
| 0.3577 | 0.69 | 1800 | 0.3383 | 0.1897 |
| 0.325 | 0.72 | 1900 | 0.3402 | 0.1871 |
| 0.2855 | 0.76 | 2000 | 0.3350 | 0.2040 |
| 0.3012 | 0.8 | 2100 | 0.3309 | 0.3121 |
| 0.3677 | 0.84 | 2200 | 0.3313 | 0.2040 |
| 0.3208 | 0.88 | 2300 | 0.3301 | 0.2917 |
| 0.3459 | 0.91 | 2400 | 0.3248 | 0.2973 |
| 0.2694 | 0.95 | 2500 | 0.3146 | 0.1866 |
| 0.3347 | 0.99 | 2600 | 0.3141 | 0.1953 |
| 0.1851 | 1.03 | 2700 | 0.3159 | 0.1943 |
| 0.1691 | 1.07 | 2800 | 0.3143 | 0.1856 |
| 0.1861 | 1.1 | 2900 | 0.3135 | 0.3039 |
| 0.1525 | 1.14 | 3000 | 0.3136 | 0.3320 |
| 0.165 | 1.18 | 3100 | 0.3124 | 0.2126 |
| 0.1421 | 1.22 | 3200 | 0.3161 | 0.3565 |
| 0.1676 | 1.26 | 3300 | 0.3180 | 0.2050 |
| 0.1719 | 1.3 | 3400 | 0.3157 | 0.1984 |
| 0.1863 | 1.33 | 3500 | 0.3173 | 0.3080 |
| 0.1499 | 1.37 | 3600 | 0.3102 | 0.2438 |
| 0.1599 | 1.41 | 3700 | 0.3096 | 0.2055 |
| 0.1762 | 1.45 | 3800 | 0.3070 | 0.3157 |
| 0.1641 | 1.49 | 3900 | 0.3052 | 0.2529 |
| 0.1387 | 1.52 | 4000 | 0.3071 | 0.2009 |
| 0.1662 | 1.56 | 4100 | 0.3077 | 0.2040 |
| 0.1715 | 1.6 | 4200 | 0.3050 | 0.2315 |
| 0.1584 | 1.64 | 4300 | 0.3031 | 0.2004 |
| 0.1563 | 1.68 | 4400 | 0.3019 | 0.2035 |
| 0.1515 | 1.71 | 4500 | 0.3020 | 0.2101 |
| 0.1582 | 1.75 | 4600 | 0.3021 | 0.3075 |
| 0.1534 | 1.79 | 4700 | 0.3013 | 0.1989 |
| 0.1699 | 1.83 | 4800 | 0.3012 | 0.3055 |
| 0.1503 | 1.87 | 4900 | 0.3009 | 0.3055 |
| 0.14 | 1.9 | 5000 | 0.3006 | 0.3065 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
digitalpipelines/llama2_13b_chat_uncensored-GGML
|
digitalpipelines
| 2023-08-21T17:05:11Z | 0 | 4 | null |
[
"uncensored",
"wizard",
"vicuna",
"llama",
"en",
"dataset:digitalpipelines/wizard_vicuna_70k_uncensored",
"license:llama2",
"region:us"
] | null | 2023-08-20T14:44:43Z |
---
language:
- en
license: llama2
datasets:
- digitalpipelines/wizard_vicuna_70k_uncensored
tags:
- uncensored
- wizard
- vicuna
- llama
---
<!-- header start -->
<div style="width: 100%;">
<img src="https://digitalpipelines.net/assets/images/logos/dp_logo_transparent.png" alt="Digital Pipelines" style="width: 10%; min-width: 200px; display: block; margin: auto;">
</div>
<!-- header end -->
# Overview
Fine-tuned [Llama-2 13B](https://huggingface.co/TheBloke/Llama-2-13B-Chat-fp16) trained with an uncensored/unfiltered Wizard-Vicuna conversation dataset [digitalpipelines/wizard_vicuna_70k_uncensored](https://huggingface.co/datasets/digitalpipelines/wizard_vicuna_70k_uncensored).
A QLoRA was created, used for fine-tuning, and then merged back into the model. Note that Llama 2 retains inherited bias even though it has been fine-tuned on an uncensored dataset.
## Available versions of this model
* [GPTQ model for usage with GPU. Multiple quantisation options available.](https://huggingface.co/digitalpipelines/llama2_13b_chat_uncensored-GPTQ)
* [Various GGML model quantization sizes for CPU/GPU/Apple M1 usage.](https://huggingface.co/digitalpipelines/llama2_13b_chat_uncensored-GGML)
* [Original unquantised model](https://huggingface.co/digitalpipelines/llama2_13b_chat_uncensored)
## Prompt template: Llama-2-Chat
```
SYSTEM: You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.
USER: {prompt}
ASSISTANT:
```
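A small sketch of how this template can be filled in before being sent to the model; the system text and question below are only examples:
```python
system = (
    "You are a helpful, respectful and honest assistant. "
    "Always answer as helpfully as possible, while being safe."
)
user_prompt = "What is the capital of France?"  # example question

full_prompt = f"SYSTEM: {system}\nUSER: {user_prompt}\nASSISTANT:"
print(full_prompt)
```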
|
digitalpipelines/llama2_13b_chat_uncensored-GPTQ
|
digitalpipelines
| 2023-08-21T17:04:56Z | 9 | 3 |
transformers
|
[
"transformers",
"llama",
"text-generation",
"uncensored",
"wizard",
"vicuna",
"en",
"dataset:digitalpipelines/wizard_vicuna_70k_uncensored",
"license:llama2",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-08-20T14:44:29Z |
---
language:
- en
license: llama2
datasets:
- digitalpipelines/wizard_vicuna_70k_uncensored
tags:
- uncensored
- wizard
- vicuna
- llama
---
<!-- header start -->
<div style="width: 100%;">
<img src="https://digitalpipelines.net/assets/images/logos/dp_logo_transparent.png" alt="Digital Pipelines" style="width: 10%; min-width: 200px; display: block; margin: auto;">
</div>
<!-- header end -->
# Overview
Fine-tuned [Llama-2 13B](https://huggingface.co/TheBloke/Llama-2-13B-Chat-fp16) trained with an uncensored/unfiltered Wizard-Vicuna conversation dataset [digitalpipelines/wizard_vicuna_70k_uncensored](https://huggingface.co/datasets/digitalpipelines/wizard_vicuna_70k_uncensored).
A QLoRA was created, used for fine-tuning, and then merged back into the model. Note that Llama 2 retains inherited bias even though it has been fine-tuned on an uncensored dataset.
## Available versions of this model
* [GPTQ model for usage with GPU. Multiple quantisation options available.](https://huggingface.co/digitalpipelines/llama2_13b_chat_uncensored-GPTQ)
* [Various GGML model quantization sizes for CPU/GPU/Apple M1 usage.](https://huggingface.co/digitalpipelines/llama2_13b_chat_uncensored-GGML)
* [Original unquantised model](https://huggingface.co/digitalpipelines/llama2_13b_chat_uncensored)
## Prompt template: Llama-2-Chat
```
SYSTEM: You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.
USER: {prompt}
ASSISTANT:
```
|
BabaYaga048/poca-SoccerTwos
|
BabaYaga048
| 2023-08-21T17:02:37Z | 22 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SoccerTwos",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] |
reinforcement-learning
| 2023-08-21T17:02:11Z |
---
library_name: ml-agents
tags:
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: BabaYaga048/poca-SoccerTwos
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
simondh/q-FrozenLake-v1-4x4-noSlippery
|
simondh
| 2023-08-21T17:02:02Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-21T17:01:57Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
import gymnasium as gym  # assumption: the Deep RL course examples use Gymnasium
# `load_from_hub` is the pickle-loading helper defined in the Hugging Face Deep RL course notebook
model = load_from_hub(repo_id="simondh/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
ainjarts/new_model
|
ainjarts
| 2023-08-21T16:58:31Z | 30 | 1 |
diffusers
|
[
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"base_model:CompVis/stable-diffusion-v1-4",
"base_model:finetune:CompVis/stable-diffusion-v1-4",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-08-21T16:34:22Z |
---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-4
instance_prompt: a photo of sks person
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - ainjarts/new_model
This is a DreamBooth model derived from CompVis/stable-diffusion-v1-4. The weights were trained on a photo of sks person using [DreamBooth](https://dreambooth.github.io/).
You can find some example images below.
DreamBooth for the text encoder was enabled: True.
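As a rough usage sketch (not part of the original card), the weights can presumably be loaded with the standard diffusers pipeline:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("ainjarts/new_model", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

# The instance prompt used during training was "a photo of sks person".
image = pipe("a photo of sks person at the beach").images[0]
image.save("sks_person.png")
```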
|
Maxph2211/rl_course_vizdoom_health_gathering_supreme
|
Maxph2211
| 2023-08-21T16:51:42Z | 0 | 0 |
sample-factory
|
[
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-21T16:51:24Z |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 10.69 +/- 4.38
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r Maxph2211/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m .usr.local.lib.python3.10.dist-packages.ipykernel_launcher --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
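# Note: the module path above was captured from the notebook launcher that ran training.
# If it does not resolve locally, sample-factory's stock ViZDoom enjoy script
# (an assumption, not stated in this card) is: python -m sf_examples.vizdoom.enjoy_vizdoom <same arguments>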
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m .usr.local.lib.python3.10.dist-packages.ipykernel_launcher --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
|
Rzoro/sd-prompt-generator-gpt-neo
|
Rzoro
| 2023-08-21T16:41:12Z | 118 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt_neo",
"text-generation",
"dataset:FredZhang7/krea-ai-prompts",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-08-21T14:48:18Z |
---
datasets:
- FredZhang7/krea-ai-prompts
---
This is an experimental model created by fine-tuning the 'EleutherAI/gpt-neo-125m' model on Stable Diffusion prompt data.
The code used to train the model can be found [here](https://github.com/mandar4tech/Text_Gen/blob/main/fine-tune-gpt-neo-for-stable-diffusion-prompt-gen.ipynb).
The training notebook is based on the following [YouTube](https://www.youtube.com/watch?v=uE0_XKh2d6g) video.
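A minimal inference sketch (the sampling settings are illustrative, not taken from the original training setup):

```python
from transformers import pipeline

generator = pipeline("text-generation", model="Rzoro/sd-prompt-generator-gpt-neo")

# Expand a short idea into a longer Stable Diffusion style prompt.
out = generator("a castle in the clouds", max_new_tokens=60, do_sample=True, top_p=0.95)
print(out[0]["generated_text"])
```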
@software{gpt-neo,
author = {Black, Sid and
Gao, Leo and
Wang, Phil and
Leahy, Connor and
Biderman, Stella},
title = {{GPT-Neo: Large Scale Autoregressive Language
Modeling with Mesh-Tensorflow}},
month = mar,
year = 2021,
note = {{If you use this software, please cite it using
these metadata.}},
publisher = {Zenodo},
version = {1.0},
doi = {10.5281/zenodo.5297715},
url = {https://doi.org/10.5281/zenodo.5297715}
}
|
dkqjrm/20230821153812
|
dkqjrm
| 2023-08-21T16:35:21Z | 113 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-08-21T06:38:50Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: '20230821153812'
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 20230821153812
This model is a fine-tuned version of [bert-large-cased](https://huggingface.co/bert-large-cased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 9e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 11
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
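For orientation only, a hedged sketch of how the list above would map onto `transformers.TrainingArguments` (the output directory name is a placeholder):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="20230821153812",     # placeholder; matches the run name above
    learning_rate=9e-6,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=11,
    num_train_epochs=5.0,
    lr_scheduler_type="linear",      # the Adam betas/epsilon listed above are the defaults
)
```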
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Txinplas/Taxi-v3
|
Txinplas
| 2023-08-21T16:32:50Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-21T16:32:46Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.46 +/- 2.79
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="Txinplas/Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
rishabh063/lora-trained-xl-colab
|
rishabh063
| 2023-08-21T16:23:48Z | 1 | 1 |
diffusers
|
[
"diffusers",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] |
text-to-image
| 2023-08-21T15:29:55Z |
---
license: openrail++
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of sks dog
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - rishabh063/lora-trained-xl-colab
These are LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained on a photo of sks dog using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
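A hedged loading sketch with diffusers (dtype and prompt are illustrative; the LoRA weight file is resolved automatically from the repo):

```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# Load the LoRA adaptation weights from this repository.
pipe.load_lora_weights("rishabh063/lora-trained-xl-colab")

# The instance prompt used during training was "a photo of sks dog".
image = pipe("a photo of sks dog in a bucket", num_inference_steps=25).images[0]
image.save("sks_dog.png")
```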
|
dkqjrm/20230821153710
|
dkqjrm
| 2023-08-21T16:23:48Z | 116 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-08-21T06:37:46Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: '20230821153710'
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 20230821153710
This model is a fine-tuned version of [bert-large-cased](https://huggingface.co/bert-large-cased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 11
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
ivivnov/dqn-SpaceInvadersNoFrameskip-v4
|
ivivnov
| 2023-08-21T16:23:00Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-21T16:22:33Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 5.00 +/- 7.07
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga ivivnov -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga ivivnov -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga ivivnov
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 100000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
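Outside the RL Zoo scripts, the checkpoint can presumably be loaded directly with SB3 and `huggingface_sb3`; the file name below follows the usual `algo-env.zip` convention and is an assumption:

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import DQN

checkpoint = load_from_hub(
    repo_id="ivivnov/dqn-SpaceInvadersNoFrameskip-v4",
    filename="dqn-SpaceInvadersNoFrameskip-v4.zip",  # assumed RL Zoo naming convention
)
model = DQN.load(checkpoint)
# To evaluate, wrap the Atari env the same way as in training
# (AtariWrapper + 4-frame stacking, see the hyperparameters above).
```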
|
digiplay/SXZ_Luma_v0.98VAE
|
digiplay
| 2023-08-21T16:19:19Z | 449 | 4 |
diffusers
|
[
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-07-26T05:05:37Z |
---
license: other
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---
Model info:
https://civitai.com/models/25831?modelVersionId=68200
Sample images I made:
(generated by Hugging Face's API)

Original author's demo images:

The noEMA version is from Yntec:
https://huggingface.co/Yntec/Luma
Yntec has many cool merged models,
highly recommended to use/try. 👍😄
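A minimal, hedged diffusers loading sketch (dtype and prompt are illustrative):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "digiplay/SXZ_Luma_v0.98VAE", torch_dtype=torch.float16
).to("cuda")

image = pipe("a portrait of a girl, cinematic lighting").images[0]
image.save("luma_sample.png")
```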
|
Lv1122/aas
|
Lv1122
| 2023-08-21T16:18:44Z | 0 | 0 | null |
[
"arxiv:1910.09700",
"region:us"
] | null | 2023-08-21T16:11:10Z |
---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
{}
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
brunoboat/rl_course_vizdoom_health_gathering_supreme
|
brunoboat
| 2023-08-21T16:13:21Z | 0 | 0 |
sample-factory
|
[
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-21T16:02:21Z |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 7.27 +/- 1.57
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r brunoboat/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m .usr.local.lib.python3.10.dist-packages.ipykernel_launcher --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m .usr.local.lib.python3.10.dist-packages.ipykernel_launcher --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
|
dkqjrm/20230821153636
|
dkqjrm
| 2023-08-21T16:09:28Z | 116 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-08-21T06:37:12Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: '20230821153636'
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 20230821153636
This model is a fine-tuned version of [bert-large-cased](https://huggingface.co/bert-large-cased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 11
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|