modelId (string, 5 to 139 chars) | author (string, 2 to 42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-08-29 00:38:39) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 525 classes) | tags (list, 1 to 4.05k items) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-08-29 00:38:28) | card (string, 11 to 1.01M chars) |
---|---|---|---|---|---|---|---|---|---|
bigmorning/whisper_charsplit_new_0061 | bigmorning | 2023-08-13T13:41:13Z | 59 | 0 | transformers | ["transformers", "tf", "whisper", "automatic-speech-recognition", "generated_from_keras_callback", "base_model:openai/whisper-tiny", "base_model:finetune:openai/whisper-tiny", "license:apache-2.0", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2023-08-13T13:41:06Z |
---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_keras_callback
model-index:
- name: whisper_charsplit_new_0061
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_charsplit_new_0061
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0044
- Train Accuracy: 0.0795
- Train Wermet: 10.3884
- Validation Loss: 0.5223
- Validation Accuracy: 0.0764
- Validation Wermet: 8.8152
- Epoch: 60
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
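For reference, this optimizer dictionary maps onto the `AdamWeightDecay` class from the Transformers TensorFlow utilities; a minimal sketch of reconstructing it (the exact training script is not part of this card):
```python
from transformers import AdamWeightDecay

# Sketch: rebuilds the optimizer from the settings listed above.
optimizer = AdamWeightDecay(
    learning_rate=1e-05,
    weight_decay_rate=0.01,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-07,
    amsgrad=False,
)
```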
### Training results
| Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| 0.8733 | 0.0602 | 13.0686 | 0.6470 | 0.0676 | 11.4066 | 0 |
| 0.5740 | 0.0666 | 12.7778 | 0.5113 | 0.0706 | 11.1022 | 1 |
| 0.4553 | 0.0692 | 12.2404 | 0.4371 | 0.0723 | 10.9105 | 2 |
| 0.3813 | 0.0708 | 11.9157 | 0.3935 | 0.0733 | 9.4615 | 3 |
| 0.3292 | 0.0720 | 11.5732 | 0.3630 | 0.0740 | 9.9885 | 4 |
| 0.2886 | 0.0729 | 11.5171 | 0.3403 | 0.0745 | 9.8042 | 5 |
| 0.2561 | 0.0736 | 11.3173 | 0.3256 | 0.0749 | 9.9431 | 6 |
| 0.2282 | 0.0743 | 11.7308 | 0.3159 | 0.0752 | 9.2086 | 7 |
| 0.2036 | 0.0748 | 11.4503 | 0.3071 | 0.0754 | 9.5236 | 8 |
| 0.1820 | 0.0754 | 11.7175 | 0.3005 | 0.0756 | 10.0755 | 9 |
| 0.1628 | 0.0758 | 11.7056 | 0.2993 | 0.0757 | 9.9497 | 10 |
| 0.1450 | 0.0762 | 11.7637 | 0.2971 | 0.0758 | 10.1481 | 11 |
| 0.1287 | 0.0766 | 11.8509 | 0.3029 | 0.0759 | 10.2042 | 12 |
| 0.1140 | 0.0770 | 12.1100 | 0.3004 | 0.0760 | 10.3873 | 13 |
| 0.0998 | 0.0773 | 11.9502 | 0.3025 | 0.0761 | 10.7066 | 14 |
| 0.0872 | 0.0777 | 12.3196 | 0.3129 | 0.0759 | 10.7707 | 15 |
| 0.0760 | 0.0779 | 12.2637 | 0.3142 | 0.0761 | 10.2638 | 16 |
| 0.0651 | 0.0782 | 12.1215 | 0.3192 | 0.0761 | 10.0750 | 17 |
| 0.0547 | 0.0785 | 12.0551 | 0.3294 | 0.0761 | 10.4732 | 18 |
| 0.0463 | 0.0787 | 11.9677 | 0.3402 | 0.0760 | 10.2814 | 19 |
| 0.0386 | 0.0789 | 11.6855 | 0.3517 | 0.0760 | 10.0599 | 20 |
| 0.0318 | 0.0790 | 11.6314 | 0.3628 | 0.0760 | 9.6652 | 21 |
| 0.0262 | 0.0792 | 11.4603 | 0.3728 | 0.0760 | 10.0035 | 22 |
| 0.0224 | 0.0792 | 11.4330 | 0.3824 | 0.0760 | 9.1995 | 23 |
| 0.0181 | 0.0793 | 11.3124 | 0.3982 | 0.0759 | 9.8710 | 24 |
| 0.0142 | 0.0794 | 11.3562 | 0.4057 | 0.0760 | 9.6831 | 25 |
| 0.0118 | 0.0794 | 11.0532 | 0.4207 | 0.0759 | 9.7227 | 26 |
| 0.0101 | 0.0794 | 11.2963 | 0.4282 | 0.0760 | 9.5792 | 27 |
| 0.0114 | 0.0794 | 11.3093 | 0.4431 | 0.0758 | 9.5545 | 28 |
| 0.0109 | 0.0794 | 11.4214 | 0.4419 | 0.0760 | 9.4377 | 29 |
| 0.0084 | 0.0794 | 10.9143 | 0.4474 | 0.0760 | 9.3668 | 30 |
| 0.0043 | 0.0795 | 10.9497 | 0.4525 | 0.0761 | 9.3202 | 31 |
| 0.0036 | 0.0795 | 10.7759 | 0.4667 | 0.0761 | 9.0385 | 32 |
| 0.0047 | 0.0795 | 10.7613 | 0.4788 | 0.0759 | 9.4065 | 33 |
| 0.0130 | 0.0793 | 11.1022 | 0.4748 | 0.0760 | 9.4521 | 34 |
| 0.0074 | 0.0794 | 10.9738 | 0.4730 | 0.0760 | 9.3348 | 35 |
| 0.0032 | 0.0795 | 10.6370 | 0.4750 | 0.0762 | 8.8298 | 36 |
| 0.0020 | 0.0795 | 10.7428 | 0.4835 | 0.0762 | 9.0566 | 37 |
| 0.0014 | 0.0795 | 10.6908 | 0.4937 | 0.0761 | 9.2445 | 38 |
| 0.0035 | 0.0795 | 10.6833 | 0.5276 | 0.0757 | 8.9798 | 39 |
| 0.0120 | 0.0793 | 10.4810 | 0.4963 | 0.0760 | 8.9194 | 40 |
| 0.0045 | 0.0795 | 10.2251 | 0.5014 | 0.0761 | 8.5737 | 41 |
| 0.0028 | 0.0795 | 10.3174 | 0.4968 | 0.0762 | 8.8525 | 42 |
| 0.0023 | 0.0795 | 10.4871 | 0.5027 | 0.0762 | 8.6712 | 43 |
| 0.0024 | 0.0795 | 10.3731 | 0.5055 | 0.0762 | 8.6347 | 44 |
| 0.0041 | 0.0795 | 10.2751 | 0.5242 | 0.0760 | 8.3671 | 45 |
| 0.0070 | 0.0794 | 10.2166 | 0.5169 | 0.0760 | 8.8409 | 46 |
| 0.0037 | 0.0795 | 10.0455 | 0.5174 | 0.0762 | 8.2514 | 47 |
| 0.0023 | 0.0795 | 9.9201 | 0.5167 | 0.0763 | 8.9537 | 48 |
| 0.0008 | 0.0795 | 10.0022 | 0.5166 | 0.0764 | 8.4855 | 49 |
| 0.0006 | 0.0795 | 9.9494 | 0.5233 | 0.0763 | 8.5719 | 50 |
| 0.0069 | 0.0794 | 10.2037 | 0.5434 | 0.0759 | 8.5399 | 51 |
| 0.0083 | 0.0794 | 9.9557 | 0.5173 | 0.0762 | 8.2406 | 52 |
| 0.0032 | 0.0795 | 10.0283 | 0.5240 | 0.0763 | 9.0101 | 53 |
| 0.0018 | 0.0795 | 10.0694 | 0.5247 | 0.0763 | 8.5717 | 54 |
| 0.0008 | 0.0795 | 10.1079 | 0.5217 | 0.0764 | 8.5608 | 55 |
| 0.0005 | 0.0795 | 10.0546 | 0.5286 | 0.0764 | 8.8830 | 56 |
| 0.0007 | 0.0795 | 10.2557 | 0.5328 | 0.0764 | 8.5665 | 57 |
| 0.0006 | 0.0795 | 10.2165 | 0.5412 | 0.0763 | 8.4623 | 58 |
| 0.0124 | 0.0792 | 10.2304 | 0.5284 | 0.0762 | 9.1194 | 59 |
| 0.0044 | 0.0795 | 10.3884 | 0.5223 | 0.0764 | 8.8152 | 60 |
### Framework versions
- Transformers 4.32.0.dev0
- TensorFlow 2.12.0
- Tokenizers 0.13.3
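The generated card omits usage; a minimal inference sketch with 🤗 Transformers (the `tf` tag indicates TensorFlow weights; the 16 kHz mono input is the standard Whisper assumption):
```python
import numpy as np
from transformers import WhisperProcessor, TFWhisperForConditionalGeneration

processor = WhisperProcessor.from_pretrained("bigmorning/whisper_charsplit_new_0061")
model = TFWhisperForConditionalGeneration.from_pretrained("bigmorning/whisper_charsplit_new_0061")

audio = np.zeros(16000, dtype=np.float32)  # placeholder: one second of silence at 16 kHz
inputs = processor(audio, sampling_rate=16000, return_tensors="tf")
generated_ids = model.generate(inputs.input_features)
print(processor.batch_decode(generated_ids, skip_special_tokens=True))
```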
|
KingKazma/cnn_dailymail_t5-small_lora_500_10_3000_8_e7_s55555_v4_l4_r4 | KingKazma | 2023-08-13T13:41:12Z | 4 | 0 | peft | ["peft", "region:us"] | null | 2023-08-13T13:41:11Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
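These auto-generated PEFT cards list only the framework version; a minimal loading sketch (the base model is an assumption inferred from the repo name):
```python
from peft import PeftModel
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

base = AutoModelForSeq2SeqLM.from_pretrained("t5-small")  # assumed base, per the repo name
model = PeftModel.from_pretrained(base, "KingKazma/cnn_dailymail_t5-small_lora_500_10_3000_8_e7_s55555_v4_l4_r4")
tokenizer = AutoTokenizer.from_pretrained("t5-small")

inputs = tokenizer("summarize: The tower is 324 metres tall.", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs)[0], skip_special_tokens=True))
```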
|
KingKazma/xsum_t5-small_lora_500_10_3000_8_e8_s55555_v4_l4_r4 | KingKazma | 2023-08-13T13:40:36Z | 0 | 0 | peft | ["peft", "region:us"] | null | 2023-08-13T13:40:34Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
KingKazma/cnn_dailymail_t5-small_p_tuning_500_10_3000_8_e5_s108_v4_l4_v50 | KingKazma | 2023-08-13T13:39:10Z | 0 | 0 | peft | ["peft", "region:us"] | null | 2023-08-13T13:39:05Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
KingKazma/xsum_t5-small_p_tuning_500_10_3000_8_e5_s108_v4_l4_v100 | KingKazma | 2023-08-13T13:38:03Z | 0 | 0 | peft | ["peft", "region:us"] | null | 2023-08-13T13:38:02Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
KingKazma/cnn_dailymail_t5-small_p_tuning_500_10_3000_8_e4_s108_v4_l4_v50 | KingKazma | 2023-08-13T13:35:55Z | 0 | 0 | peft | ["peft", "region:us"] | null | 2023-08-13T13:35:51Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
KingKazma/xsum_t5-small_p_tuning_500_10_3000_8_e4_s108_v4_l4_v100 | KingKazma | 2023-08-13T13:35:03Z | 0 | 0 | peft | ["peft", "region:us"] | null | 2023-08-13T13:35:02Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
KingKazma/xsum_t5-small_lora_500_10_3000_8_e6_s55555_v4_l4_r4 | KingKazma | 2023-08-13T13:34:50Z | 0 | 0 | peft | ["peft", "region:us"] | null | 2023-08-13T13:34:48Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
KingKazma/cnn_dailymail_t5-small_p_tuning_500_10_3000_8_e3_s108_v4_l4_v50 | KingKazma | 2023-08-13T13:32:36Z | 0 | 0 | peft | ["peft", "region:us"] | null | 2023-08-13T13:32:32Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
helamri/rl_course_vizdoom_health_gathering_supreme | helamri | 2023-08-13T13:32:25Z | 0 | 0 | sample-factory | ["sample-factory", "tensorboard", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us"] | reinforcement-learning | 2023-08-13T13:24:16Z |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 11.97 +/- 4.61
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```bash
python -m sample_factory.huggingface.load_from_hub -r helamri/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```bash
python -m <path.to.enjoy.module> --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details.
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```bash
python -m <path.to.train.module> --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note that you may have to adjust `--train_for_env_steps` to a suitably high number, as the experiment will resume from the number of steps it concluded at.
|
KingKazma/xsum_t5-small_p_tuning_500_10_3000_8_e3_s108_v4_l4_v100 | KingKazma | 2023-08-13T13:32:02Z | 0 | 0 | peft | ["peft", "region:us"] | null | 2023-08-13T13:32:01Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
KingKazma/xsum_t5-small_lora_500_10_3000_8_e5_s55555_v4_l4_r4 | KingKazma | 2023-08-13T13:31:57Z | 0 | 0 | peft | ["peft", "region:us"] | null | 2023-08-13T13:31:55Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
Mtc2/poca-SoccerTwos | Mtc2 | 2023-08-13T13:31:31Z | 33 | 0 | ml-agents | ["ml-agents", "tensorboard", "onnx", "SoccerTwos", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SoccerTwos", "region:us"] | reinforcement-learning | 2023-08-13T13:27:24Z |
---
library_name: ml-agents
tags:
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: Mtc2/poca-SoccerTwos
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
KingKazma/cnn_dailymail_t5-small_lora_500_10_3000_8_e4_s55555_v4_l4_r4 | KingKazma | 2023-08-13T13:31:08Z | 1 | 0 | peft | ["peft", "region:us"] | null | 2023-08-13T13:31:07Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
aeolian83/poly-ko-1.3b-translate | aeolian83 | 2023-08-13T13:29:17Z | 84 | 2 | transformers | ["transformers", "pytorch", "gpt_neox", "text-generation", "causal-lm", "ko", "dataset:squarelike/sharegpt_deepl_ko_translation", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2023-08-12T06:10:33Z |
---
license: apache-2.0
language:
- ko
datasets:
- squarelike/sharegpt_deepl_ko_translation
tags:
- pytorch
- causal-lm
---
# poly-ko-1.3b-translate
- A model fine-tuned from EleutherAI/polyglot-ko-1.3b on squarelike/sharegpt_deepl_ko_translation so that it performs English-Korean translation only
- Fine-tuned with the QLoRA technique
### Training details
- Epoch: 1
- learning-rate: 3e-4
- batch_size: 3
- Lora r: 8
- Lora target modules: query_key_value
Trained on a single 3090 GPU.
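A sketch of the LoRA configuration implied by the settings above (a reconstruction under stated assumptions, not the author's actual script):
```python
from peft import LoraConfig

# r and target_modules come from the training details above;
# task_type is an assumption (the base model is a causal LM).
lora_config = LoraConfig(
    r=8,
    target_modules=["query_key_value"],
    task_type="CAUSAL_LM",
)
```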
|
KingKazma/xsum_t5-small_lora_500_10_3000_8_e4_s55555_v4_l4_r4 | KingKazma | 2023-08-13T13:29:06Z | 0 | 0 | peft | ["peft", "region:us"] | null | 2023-08-13T13:29:04Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
KingKazma/xsum_t5-small_p_tuning_500_10_3000_8_e2_s108_v4_l4_v100 | KingKazma | 2023-08-13T13:29:00Z | 0 | 0 | peft | ["peft", "region:us"] | null | 2023-08-13T13:28:59Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
bigmorning/whisper_charsplit_new_0058 | bigmorning | 2023-08-13T13:28:01Z | 59 | 0 | transformers | ["transformers", "tf", "whisper", "automatic-speech-recognition", "generated_from_keras_callback", "base_model:openai/whisper-tiny", "base_model:finetune:openai/whisper-tiny", "license:apache-2.0", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2023-08-13T13:27:55Z |
---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_keras_callback
model-index:
- name: whisper_charsplit_new_0058
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_charsplit_new_0058
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0007
- Train Accuracy: 0.0795
- Train Wermet: 10.2557
- Validation Loss: 0.5328
- Validation Accuracy: 0.0764
- Validation Wermet: 8.5665
- Epoch: 57
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| 0.8733 | 0.0602 | 13.0686 | 0.6470 | 0.0676 | 11.4066 | 0 |
| 0.5740 | 0.0666 | 12.7778 | 0.5113 | 0.0706 | 11.1022 | 1 |
| 0.4553 | 0.0692 | 12.2404 | 0.4371 | 0.0723 | 10.9105 | 2 |
| 0.3813 | 0.0708 | 11.9157 | 0.3935 | 0.0733 | 9.4615 | 3 |
| 0.3292 | 0.0720 | 11.5732 | 0.3630 | 0.0740 | 9.9885 | 4 |
| 0.2886 | 0.0729 | 11.5171 | 0.3403 | 0.0745 | 9.8042 | 5 |
| 0.2561 | 0.0736 | 11.3173 | 0.3256 | 0.0749 | 9.9431 | 6 |
| 0.2282 | 0.0743 | 11.7308 | 0.3159 | 0.0752 | 9.2086 | 7 |
| 0.2036 | 0.0748 | 11.4503 | 0.3071 | 0.0754 | 9.5236 | 8 |
| 0.1820 | 0.0754 | 11.7175 | 0.3005 | 0.0756 | 10.0755 | 9 |
| 0.1628 | 0.0758 | 11.7056 | 0.2993 | 0.0757 | 9.9497 | 10 |
| 0.1450 | 0.0762 | 11.7637 | 0.2971 | 0.0758 | 10.1481 | 11 |
| 0.1287 | 0.0766 | 11.8509 | 0.3029 | 0.0759 | 10.2042 | 12 |
| 0.1140 | 0.0770 | 12.1100 | 0.3004 | 0.0760 | 10.3873 | 13 |
| 0.0998 | 0.0773 | 11.9502 | 0.3025 | 0.0761 | 10.7066 | 14 |
| 0.0872 | 0.0777 | 12.3196 | 0.3129 | 0.0759 | 10.7707 | 15 |
| 0.0760 | 0.0779 | 12.2637 | 0.3142 | 0.0761 | 10.2638 | 16 |
| 0.0651 | 0.0782 | 12.1215 | 0.3192 | 0.0761 | 10.0750 | 17 |
| 0.0547 | 0.0785 | 12.0551 | 0.3294 | 0.0761 | 10.4732 | 18 |
| 0.0463 | 0.0787 | 11.9677 | 0.3402 | 0.0760 | 10.2814 | 19 |
| 0.0386 | 0.0789 | 11.6855 | 0.3517 | 0.0760 | 10.0599 | 20 |
| 0.0318 | 0.0790 | 11.6314 | 0.3628 | 0.0760 | 9.6652 | 21 |
| 0.0262 | 0.0792 | 11.4603 | 0.3728 | 0.0760 | 10.0035 | 22 |
| 0.0224 | 0.0792 | 11.4330 | 0.3824 | 0.0760 | 9.1995 | 23 |
| 0.0181 | 0.0793 | 11.3124 | 0.3982 | 0.0759 | 9.8710 | 24 |
| 0.0142 | 0.0794 | 11.3562 | 0.4057 | 0.0760 | 9.6831 | 25 |
| 0.0118 | 0.0794 | 11.0532 | 0.4207 | 0.0759 | 9.7227 | 26 |
| 0.0101 | 0.0794 | 11.2963 | 0.4282 | 0.0760 | 9.5792 | 27 |
| 0.0114 | 0.0794 | 11.3093 | 0.4431 | 0.0758 | 9.5545 | 28 |
| 0.0109 | 0.0794 | 11.4214 | 0.4419 | 0.0760 | 9.4377 | 29 |
| 0.0084 | 0.0794 | 10.9143 | 0.4474 | 0.0760 | 9.3668 | 30 |
| 0.0043 | 0.0795 | 10.9497 | 0.4525 | 0.0761 | 9.3202 | 31 |
| 0.0036 | 0.0795 | 10.7759 | 0.4667 | 0.0761 | 9.0385 | 32 |
| 0.0047 | 0.0795 | 10.7613 | 0.4788 | 0.0759 | 9.4065 | 33 |
| 0.0130 | 0.0793 | 11.1022 | 0.4748 | 0.0760 | 9.4521 | 34 |
| 0.0074 | 0.0794 | 10.9738 | 0.4730 | 0.0760 | 9.3348 | 35 |
| 0.0032 | 0.0795 | 10.6370 | 0.4750 | 0.0762 | 8.8298 | 36 |
| 0.0020 | 0.0795 | 10.7428 | 0.4835 | 0.0762 | 9.0566 | 37 |
| 0.0014 | 0.0795 | 10.6908 | 0.4937 | 0.0761 | 9.2445 | 38 |
| 0.0035 | 0.0795 | 10.6833 | 0.5276 | 0.0757 | 8.9798 | 39 |
| 0.0120 | 0.0793 | 10.4810 | 0.4963 | 0.0760 | 8.9194 | 40 |
| 0.0045 | 0.0795 | 10.2251 | 0.5014 | 0.0761 | 8.5737 | 41 |
| 0.0028 | 0.0795 | 10.3174 | 0.4968 | 0.0762 | 8.8525 | 42 |
| 0.0023 | 0.0795 | 10.4871 | 0.5027 | 0.0762 | 8.6712 | 43 |
| 0.0024 | 0.0795 | 10.3731 | 0.5055 | 0.0762 | 8.6347 | 44 |
| 0.0041 | 0.0795 | 10.2751 | 0.5242 | 0.0760 | 8.3671 | 45 |
| 0.0070 | 0.0794 | 10.2166 | 0.5169 | 0.0760 | 8.8409 | 46 |
| 0.0037 | 0.0795 | 10.0455 | 0.5174 | 0.0762 | 8.2514 | 47 |
| 0.0023 | 0.0795 | 9.9201 | 0.5167 | 0.0763 | 8.9537 | 48 |
| 0.0008 | 0.0795 | 10.0022 | 0.5166 | 0.0764 | 8.4855 | 49 |
| 0.0006 | 0.0795 | 9.9494 | 0.5233 | 0.0763 | 8.5719 | 50 |
| 0.0069 | 0.0794 | 10.2037 | 0.5434 | 0.0759 | 8.5399 | 51 |
| 0.0083 | 0.0794 | 9.9557 | 0.5173 | 0.0762 | 8.2406 | 52 |
| 0.0032 | 0.0795 | 10.0283 | 0.5240 | 0.0763 | 9.0101 | 53 |
| 0.0018 | 0.0795 | 10.0694 | 0.5247 | 0.0763 | 8.5717 | 54 |
| 0.0008 | 0.0795 | 10.1079 | 0.5217 | 0.0764 | 8.5608 | 55 |
| 0.0005 | 0.0795 | 10.0546 | 0.5286 | 0.0764 | 8.8830 | 56 |
| 0.0007 | 0.0795 | 10.2557 | 0.5328 | 0.0764 | 8.5665 | 57 |
### Framework versions
- Transformers 4.32.0.dev0
- TensorFlow 2.12.0
- Tokenizers 0.13.3
|
KingKazma/xsum_t5-small_lora_500_10_3000_8_e3_s55555_v4_l4_r4 | KingKazma | 2023-08-13T13:26:13Z | 0 | 0 | peft | ["peft", "region:us"] | null | 2023-08-13T13:26:11Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
KingKazma/cnn_dailymail_t5-small_p_tuning_500_10_3000_8_e1_s108_v4_l4_v50 | KingKazma | 2023-08-13T13:25:57Z | 0 | 0 | peft | ["peft", "region:us"] | null | 2023-08-13T13:25:54Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
KingKazma/xsum_t5-small_lora_500_10_3000_8_e1_s55555_v4_l4_r4 | KingKazma | 2023-08-13T13:20:25Z | 1 | 0 | peft | ["peft", "region:us"] | null | 2023-08-13T13:20:23Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
Dayanand4574/stable-diffusion-chair | Dayanand4574 | 2023-08-13T13:15:29Z | 3 | 0 | diffusers | ["diffusers", "safetensors", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us"] | text-to-image | 2023-08-13T13:11:48Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### Stable-diffusion-chair Dreambooth model trained by Dayanand4574 with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
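A minimal loading sketch with 🤗 Diffusers (the `diffusers:StableDiffusionPipeline` tag names the pipeline class; the prompt here is a guess at the DreamBooth concept token):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "Dayanand4574/stable-diffusion-chair", torch_dtype=torch.float16
).to("cuda")

image = pipe("a photo of stable-diffusion-chair").images[0]  # assumed concept prompt
image.save("chair.png")
```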
Sample pictures of this concept:
|
ShowCarSign/car-show-display-signs-and-boards | ShowCarSign | 2023-08-13T13:14:29Z | 0 | 0 | null | ["region:us"] | null | 2023-08-13T12:56:16Z |
Welcome to ShowCarSign's <a href="https://showcarsign.com/product/show-car-sign/">car show display boards</a>! Here you'll find an excellent selection of boards that will showcase your cars in style. Whether you're a seasoned pro with a fleet of vehicles or a first-time buyer with just one car, we've got something for everyone.
Our boards are designed to turn heads and get attention. That means they have to look just as great on the outside as they do on the inside. Choose from our wide range of <a href="https://showcarsign.com/">car show display ideas</a>. Plus, our state-of-the-art printing techniques make sure that whatever board you choose, it'll be vibrant and strong - ready to show off your precious vehicles in all their glory.
For those with more unique tastes, we can even design and create custom <a href="https://showcarsign.com/product/car-show-boards/">car show reader boards</a>, made to measure. So come take a look at what we have to offer here at ShowCarSign and make sure your cars get the display they deserve.
Visit: https://showcarsign.com
|
bigmorning/whisper_charsplit_new_0053 | bigmorning | 2023-08-13T13:05:58Z | 60 | 0 | transformers | ["transformers", "tf", "whisper", "automatic-speech-recognition", "generated_from_keras_callback", "base_model:openai/whisper-tiny", "base_model:finetune:openai/whisper-tiny", "license:apache-2.0", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2023-08-13T13:05:49Z |
---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_keras_callback
model-index:
- name: whisper_charsplit_new_0053
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_charsplit_new_0053
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0083
- Train Accuracy: 0.0794
- Train Wermet: 9.9557
- Validation Loss: 0.5173
- Validation Accuracy: 0.0762
- Validation Wermet: 8.2406
- Epoch: 52
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| 0.8733 | 0.0602 | 13.0686 | 0.6470 | 0.0676 | 11.4066 | 0 |
| 0.5740 | 0.0666 | 12.7778 | 0.5113 | 0.0706 | 11.1022 | 1 |
| 0.4553 | 0.0692 | 12.2404 | 0.4371 | 0.0723 | 10.9105 | 2 |
| 0.3813 | 0.0708 | 11.9157 | 0.3935 | 0.0733 | 9.4615 | 3 |
| 0.3292 | 0.0720 | 11.5732 | 0.3630 | 0.0740 | 9.9885 | 4 |
| 0.2886 | 0.0729 | 11.5171 | 0.3403 | 0.0745 | 9.8042 | 5 |
| 0.2561 | 0.0736 | 11.3173 | 0.3256 | 0.0749 | 9.9431 | 6 |
| 0.2282 | 0.0743 | 11.7308 | 0.3159 | 0.0752 | 9.2086 | 7 |
| 0.2036 | 0.0748 | 11.4503 | 0.3071 | 0.0754 | 9.5236 | 8 |
| 0.1820 | 0.0754 | 11.7175 | 0.3005 | 0.0756 | 10.0755 | 9 |
| 0.1628 | 0.0758 | 11.7056 | 0.2993 | 0.0757 | 9.9497 | 10 |
| 0.1450 | 0.0762 | 11.7637 | 0.2971 | 0.0758 | 10.1481 | 11 |
| 0.1287 | 0.0766 | 11.8509 | 0.3029 | 0.0759 | 10.2042 | 12 |
| 0.1140 | 0.0770 | 12.1100 | 0.3004 | 0.0760 | 10.3873 | 13 |
| 0.0998 | 0.0773 | 11.9502 | 0.3025 | 0.0761 | 10.7066 | 14 |
| 0.0872 | 0.0777 | 12.3196 | 0.3129 | 0.0759 | 10.7707 | 15 |
| 0.0760 | 0.0779 | 12.2637 | 0.3142 | 0.0761 | 10.2638 | 16 |
| 0.0651 | 0.0782 | 12.1215 | 0.3192 | 0.0761 | 10.0750 | 17 |
| 0.0547 | 0.0785 | 12.0551 | 0.3294 | 0.0761 | 10.4732 | 18 |
| 0.0463 | 0.0787 | 11.9677 | 0.3402 | 0.0760 | 10.2814 | 19 |
| 0.0386 | 0.0789 | 11.6855 | 0.3517 | 0.0760 | 10.0599 | 20 |
| 0.0318 | 0.0790 | 11.6314 | 0.3628 | 0.0760 | 9.6652 | 21 |
| 0.0262 | 0.0792 | 11.4603 | 0.3728 | 0.0760 | 10.0035 | 22 |
| 0.0224 | 0.0792 | 11.4330 | 0.3824 | 0.0760 | 9.1995 | 23 |
| 0.0181 | 0.0793 | 11.3124 | 0.3982 | 0.0759 | 9.8710 | 24 |
| 0.0142 | 0.0794 | 11.3562 | 0.4057 | 0.0760 | 9.6831 | 25 |
| 0.0118 | 0.0794 | 11.0532 | 0.4207 | 0.0759 | 9.7227 | 26 |
| 0.0101 | 0.0794 | 11.2963 | 0.4282 | 0.0760 | 9.5792 | 27 |
| 0.0114 | 0.0794 | 11.3093 | 0.4431 | 0.0758 | 9.5545 | 28 |
| 0.0109 | 0.0794 | 11.4214 | 0.4419 | 0.0760 | 9.4377 | 29 |
| 0.0084 | 0.0794 | 10.9143 | 0.4474 | 0.0760 | 9.3668 | 30 |
| 0.0043 | 0.0795 | 10.9497 | 0.4525 | 0.0761 | 9.3202 | 31 |
| 0.0036 | 0.0795 | 10.7759 | 0.4667 | 0.0761 | 9.0385 | 32 |
| 0.0047 | 0.0795 | 10.7613 | 0.4788 | 0.0759 | 9.4065 | 33 |
| 0.0130 | 0.0793 | 11.1022 | 0.4748 | 0.0760 | 9.4521 | 34 |
| 0.0074 | 0.0794 | 10.9738 | 0.4730 | 0.0760 | 9.3348 | 35 |
| 0.0032 | 0.0795 | 10.6370 | 0.4750 | 0.0762 | 8.8298 | 36 |
| 0.0020 | 0.0795 | 10.7428 | 0.4835 | 0.0762 | 9.0566 | 37 |
| 0.0014 | 0.0795 | 10.6908 | 0.4937 | 0.0761 | 9.2445 | 38 |
| 0.0035 | 0.0795 | 10.6833 | 0.5276 | 0.0757 | 8.9798 | 39 |
| 0.0120 | 0.0793 | 10.4810 | 0.4963 | 0.0760 | 8.9194 | 40 |
| 0.0045 | 0.0795 | 10.2251 | 0.5014 | 0.0761 | 8.5737 | 41 |
| 0.0028 | 0.0795 | 10.3174 | 0.4968 | 0.0762 | 8.8525 | 42 |
| 0.0023 | 0.0795 | 10.4871 | 0.5027 | 0.0762 | 8.6712 | 43 |
| 0.0024 | 0.0795 | 10.3731 | 0.5055 | 0.0762 | 8.6347 | 44 |
| 0.0041 | 0.0795 | 10.2751 | 0.5242 | 0.0760 | 8.3671 | 45 |
| 0.0070 | 0.0794 | 10.2166 | 0.5169 | 0.0760 | 8.8409 | 46 |
| 0.0037 | 0.0795 | 10.0455 | 0.5174 | 0.0762 | 8.2514 | 47 |
| 0.0023 | 0.0795 | 9.9201 | 0.5167 | 0.0763 | 8.9537 | 48 |
| 0.0008 | 0.0795 | 10.0022 | 0.5166 | 0.0764 | 8.4855 | 49 |
| 0.0006 | 0.0795 | 9.9494 | 0.5233 | 0.0763 | 8.5719 | 50 |
| 0.0069 | 0.0794 | 10.2037 | 0.5434 | 0.0759 | 8.5399 | 51 |
| 0.0083 | 0.0794 | 9.9557 | 0.5173 | 0.0762 | 8.2406 | 52 |
### Framework versions
- Transformers 4.32.0.dev0
- TensorFlow 2.12.0
- Tokenizers 0.13.3
|
bigmorning/whisper_charsplit_new_0052 | bigmorning | 2023-08-13T13:01:35Z | 61 | 0 | transformers | ["transformers", "tf", "whisper", "automatic-speech-recognition", "generated_from_keras_callback", "base_model:openai/whisper-tiny", "base_model:finetune:openai/whisper-tiny", "license:apache-2.0", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2023-08-13T13:01:23Z |
---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_keras_callback
model-index:
- name: whisper_charsplit_new_0052
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_charsplit_new_0052
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0069
- Train Accuracy: 0.0794
- Train Wermet: 10.2037
- Validation Loss: 0.5434
- Validation Accuracy: 0.0759
- Validation Wermet: 8.5399
- Epoch: 51
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| 0.8733 | 0.0602 | 13.0686 | 0.6470 | 0.0676 | 11.4066 | 0 |
| 0.5740 | 0.0666 | 12.7778 | 0.5113 | 0.0706 | 11.1022 | 1 |
| 0.4553 | 0.0692 | 12.2404 | 0.4371 | 0.0723 | 10.9105 | 2 |
| 0.3813 | 0.0708 | 11.9157 | 0.3935 | 0.0733 | 9.4615 | 3 |
| 0.3292 | 0.0720 | 11.5732 | 0.3630 | 0.0740 | 9.9885 | 4 |
| 0.2886 | 0.0729 | 11.5171 | 0.3403 | 0.0745 | 9.8042 | 5 |
| 0.2561 | 0.0736 | 11.3173 | 0.3256 | 0.0749 | 9.9431 | 6 |
| 0.2282 | 0.0743 | 11.7308 | 0.3159 | 0.0752 | 9.2086 | 7 |
| 0.2036 | 0.0748 | 11.4503 | 0.3071 | 0.0754 | 9.5236 | 8 |
| 0.1820 | 0.0754 | 11.7175 | 0.3005 | 0.0756 | 10.0755 | 9 |
| 0.1628 | 0.0758 | 11.7056 | 0.2993 | 0.0757 | 9.9497 | 10 |
| 0.1450 | 0.0762 | 11.7637 | 0.2971 | 0.0758 | 10.1481 | 11 |
| 0.1287 | 0.0766 | 11.8509 | 0.3029 | 0.0759 | 10.2042 | 12 |
| 0.1140 | 0.0770 | 12.1100 | 0.3004 | 0.0760 | 10.3873 | 13 |
| 0.0998 | 0.0773 | 11.9502 | 0.3025 | 0.0761 | 10.7066 | 14 |
| 0.0872 | 0.0777 | 12.3196 | 0.3129 | 0.0759 | 10.7707 | 15 |
| 0.0760 | 0.0779 | 12.2637 | 0.3142 | 0.0761 | 10.2638 | 16 |
| 0.0651 | 0.0782 | 12.1215 | 0.3192 | 0.0761 | 10.0750 | 17 |
| 0.0547 | 0.0785 | 12.0551 | 0.3294 | 0.0761 | 10.4732 | 18 |
| 0.0463 | 0.0787 | 11.9677 | 0.3402 | 0.0760 | 10.2814 | 19 |
| 0.0386 | 0.0789 | 11.6855 | 0.3517 | 0.0760 | 10.0599 | 20 |
| 0.0318 | 0.0790 | 11.6314 | 0.3628 | 0.0760 | 9.6652 | 21 |
| 0.0262 | 0.0792 | 11.4603 | 0.3728 | 0.0760 | 10.0035 | 22 |
| 0.0224 | 0.0792 | 11.4330 | 0.3824 | 0.0760 | 9.1995 | 23 |
| 0.0181 | 0.0793 | 11.3124 | 0.3982 | 0.0759 | 9.8710 | 24 |
| 0.0142 | 0.0794 | 11.3562 | 0.4057 | 0.0760 | 9.6831 | 25 |
| 0.0118 | 0.0794 | 11.0532 | 0.4207 | 0.0759 | 9.7227 | 26 |
| 0.0101 | 0.0794 | 11.2963 | 0.4282 | 0.0760 | 9.5792 | 27 |
| 0.0114 | 0.0794 | 11.3093 | 0.4431 | 0.0758 | 9.5545 | 28 |
| 0.0109 | 0.0794 | 11.4214 | 0.4419 | 0.0760 | 9.4377 | 29 |
| 0.0084 | 0.0794 | 10.9143 | 0.4474 | 0.0760 | 9.3668 | 30 |
| 0.0043 | 0.0795 | 10.9497 | 0.4525 | 0.0761 | 9.3202 | 31 |
| 0.0036 | 0.0795 | 10.7759 | 0.4667 | 0.0761 | 9.0385 | 32 |
| 0.0047 | 0.0795 | 10.7613 | 0.4788 | 0.0759 | 9.4065 | 33 |
| 0.0130 | 0.0793 | 11.1022 | 0.4748 | 0.0760 | 9.4521 | 34 |
| 0.0074 | 0.0794 | 10.9738 | 0.4730 | 0.0760 | 9.3348 | 35 |
| 0.0032 | 0.0795 | 10.6370 | 0.4750 | 0.0762 | 8.8298 | 36 |
| 0.0020 | 0.0795 | 10.7428 | 0.4835 | 0.0762 | 9.0566 | 37 |
| 0.0014 | 0.0795 | 10.6908 | 0.4937 | 0.0761 | 9.2445 | 38 |
| 0.0035 | 0.0795 | 10.6833 | 0.5276 | 0.0757 | 8.9798 | 39 |
| 0.0120 | 0.0793 | 10.4810 | 0.4963 | 0.0760 | 8.9194 | 40 |
| 0.0045 | 0.0795 | 10.2251 | 0.5014 | 0.0761 | 8.5737 | 41 |
| 0.0028 | 0.0795 | 10.3174 | 0.4968 | 0.0762 | 8.8525 | 42 |
| 0.0023 | 0.0795 | 10.4871 | 0.5027 | 0.0762 | 8.6712 | 43 |
| 0.0024 | 0.0795 | 10.3731 | 0.5055 | 0.0762 | 8.6347 | 44 |
| 0.0041 | 0.0795 | 10.2751 | 0.5242 | 0.0760 | 8.3671 | 45 |
| 0.0070 | 0.0794 | 10.2166 | 0.5169 | 0.0760 | 8.8409 | 46 |
| 0.0037 | 0.0795 | 10.0455 | 0.5174 | 0.0762 | 8.2514 | 47 |
| 0.0023 | 0.0795 | 9.9201 | 0.5167 | 0.0763 | 8.9537 | 48 |
| 0.0008 | 0.0795 | 10.0022 | 0.5166 | 0.0764 | 8.4855 | 49 |
| 0.0006 | 0.0795 | 9.9494 | 0.5233 | 0.0763 | 8.5719 | 50 |
| 0.0069 | 0.0794 | 10.2037 | 0.5434 | 0.0759 | 8.5399 | 51 |
### Framework versions
- Transformers 4.32.0.dev0
- TensorFlow 2.12.0
- Tokenizers 0.13.3
|
ailabturkiye/tonguc | ailabturkiye | 2023-08-13T13:01:01Z | 0 | 0 | null | ["music", "tr", "license:openrail", "region:us"] | null | 2023-08-13T12:50:26Z |
---
license: openrail
language:
- tr
tags:
- music
---
|
bigmorning/whisper_charsplit_new_0051 | bigmorning | 2023-08-13T12:57:04Z | 73 | 0 | transformers | ["transformers", "tf", "whisper", "automatic-speech-recognition", "generated_from_keras_callback", "base_model:openai/whisper-tiny", "base_model:finetune:openai/whisper-tiny", "license:apache-2.0", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2023-08-13T12:56:57Z |
---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_keras_callback
model-index:
- name: whisper_charsplit_new_0051
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_charsplit_new_0051
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0006
- Train Accuracy: 0.0795
- Train Wermet: 9.9494
- Validation Loss: 0.5233
- Validation Accuracy: 0.0763
- Validation Wermet: 8.5719
- Epoch: 50
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| 0.8733 | 0.0602 | 13.0686 | 0.6470 | 0.0676 | 11.4066 | 0 |
| 0.5740 | 0.0666 | 12.7778 | 0.5113 | 0.0706 | 11.1022 | 1 |
| 0.4553 | 0.0692 | 12.2404 | 0.4371 | 0.0723 | 10.9105 | 2 |
| 0.3813 | 0.0708 | 11.9157 | 0.3935 | 0.0733 | 9.4615 | 3 |
| 0.3292 | 0.0720 | 11.5732 | 0.3630 | 0.0740 | 9.9885 | 4 |
| 0.2886 | 0.0729 | 11.5171 | 0.3403 | 0.0745 | 9.8042 | 5 |
| 0.2561 | 0.0736 | 11.3173 | 0.3256 | 0.0749 | 9.9431 | 6 |
| 0.2282 | 0.0743 | 11.7308 | 0.3159 | 0.0752 | 9.2086 | 7 |
| 0.2036 | 0.0748 | 11.4503 | 0.3071 | 0.0754 | 9.5236 | 8 |
| 0.1820 | 0.0754 | 11.7175 | 0.3005 | 0.0756 | 10.0755 | 9 |
| 0.1628 | 0.0758 | 11.7056 | 0.2993 | 0.0757 | 9.9497 | 10 |
| 0.1450 | 0.0762 | 11.7637 | 0.2971 | 0.0758 | 10.1481 | 11 |
| 0.1287 | 0.0766 | 11.8509 | 0.3029 | 0.0759 | 10.2042 | 12 |
| 0.1140 | 0.0770 | 12.1100 | 0.3004 | 0.0760 | 10.3873 | 13 |
| 0.0998 | 0.0773 | 11.9502 | 0.3025 | 0.0761 | 10.7066 | 14 |
| 0.0872 | 0.0777 | 12.3196 | 0.3129 | 0.0759 | 10.7707 | 15 |
| 0.0760 | 0.0779 | 12.2637 | 0.3142 | 0.0761 | 10.2638 | 16 |
| 0.0651 | 0.0782 | 12.1215 | 0.3192 | 0.0761 | 10.0750 | 17 |
| 0.0547 | 0.0785 | 12.0551 | 0.3294 | 0.0761 | 10.4732 | 18 |
| 0.0463 | 0.0787 | 11.9677 | 0.3402 | 0.0760 | 10.2814 | 19 |
| 0.0386 | 0.0789 | 11.6855 | 0.3517 | 0.0760 | 10.0599 | 20 |
| 0.0318 | 0.0790 | 11.6314 | 0.3628 | 0.0760 | 9.6652 | 21 |
| 0.0262 | 0.0792 | 11.4603 | 0.3728 | 0.0760 | 10.0035 | 22 |
| 0.0224 | 0.0792 | 11.4330 | 0.3824 | 0.0760 | 9.1995 | 23 |
| 0.0181 | 0.0793 | 11.3124 | 0.3982 | 0.0759 | 9.8710 | 24 |
| 0.0142 | 0.0794 | 11.3562 | 0.4057 | 0.0760 | 9.6831 | 25 |
| 0.0118 | 0.0794 | 11.0532 | 0.4207 | 0.0759 | 9.7227 | 26 |
| 0.0101 | 0.0794 | 11.2963 | 0.4282 | 0.0760 | 9.5792 | 27 |
| 0.0114 | 0.0794 | 11.3093 | 0.4431 | 0.0758 | 9.5545 | 28 |
| 0.0109 | 0.0794 | 11.4214 | 0.4419 | 0.0760 | 9.4377 | 29 |
| 0.0084 | 0.0794 | 10.9143 | 0.4474 | 0.0760 | 9.3668 | 30 |
| 0.0043 | 0.0795 | 10.9497 | 0.4525 | 0.0761 | 9.3202 | 31 |
| 0.0036 | 0.0795 | 10.7759 | 0.4667 | 0.0761 | 9.0385 | 32 |
| 0.0047 | 0.0795 | 10.7613 | 0.4788 | 0.0759 | 9.4065 | 33 |
| 0.0130 | 0.0793 | 11.1022 | 0.4748 | 0.0760 | 9.4521 | 34 |
| 0.0074 | 0.0794 | 10.9738 | 0.4730 | 0.0760 | 9.3348 | 35 |
| 0.0032 | 0.0795 | 10.6370 | 0.4750 | 0.0762 | 8.8298 | 36 |
| 0.0020 | 0.0795 | 10.7428 | 0.4835 | 0.0762 | 9.0566 | 37 |
| 0.0014 | 0.0795 | 10.6908 | 0.4937 | 0.0761 | 9.2445 | 38 |
| 0.0035 | 0.0795 | 10.6833 | 0.5276 | 0.0757 | 8.9798 | 39 |
| 0.0120 | 0.0793 | 10.4810 | 0.4963 | 0.0760 | 8.9194 | 40 |
| 0.0045 | 0.0795 | 10.2251 | 0.5014 | 0.0761 | 8.5737 | 41 |
| 0.0028 | 0.0795 | 10.3174 | 0.4968 | 0.0762 | 8.8525 | 42 |
| 0.0023 | 0.0795 | 10.4871 | 0.5027 | 0.0762 | 8.6712 | 43 |
| 0.0024 | 0.0795 | 10.3731 | 0.5055 | 0.0762 | 8.6347 | 44 |
| 0.0041 | 0.0795 | 10.2751 | 0.5242 | 0.0760 | 8.3671 | 45 |
| 0.0070 | 0.0794 | 10.2166 | 0.5169 | 0.0760 | 8.8409 | 46 |
| 0.0037 | 0.0795 | 10.0455 | 0.5174 | 0.0762 | 8.2514 | 47 |
| 0.0023 | 0.0795 | 9.9201 | 0.5167 | 0.0763 | 8.9537 | 48 |
| 0.0008 | 0.0795 | 10.0022 | 0.5166 | 0.0764 | 8.4855 | 49 |
| 0.0006 | 0.0795 | 9.9494 | 0.5233 | 0.0763 | 8.5719 | 50 |
### Framework versions
- Transformers 4.32.0.dev0
- TensorFlow 2.12.0
- Tokenizers 0.13.3
|
fp16-guy/Cetus-Mix_v4_fp16_cleaned | fp16-guy | 2023-08-13T12:51:38Z | 0 | 1 | null | ["text-to-image", "region:us"] | text-to-image | 2023-07-31T14:37:04Z |
---
pipeline_tag: text-to-image
---
Cetus-Mix v4, but fp16/cleaned - smaller size, same result.
========
///
**[original checkpoint link](https://civitai.com/models/6755?modelVersionId=126564)**
*(all rights to the model belong to Eagelaxis)*
---
[grid 01](https://huggingface.co/datasets/fp16-guy/grids/blob/main/cetusmixv4%2001%2020230807110113-111-cetusMix_v4-Euler%20a-6.png) *(1.99gb version)*
[grid 02](https://huggingface.co/datasets/fp16-guy/grids/blob/main/cetusmixv4%2002%2020230807110204-111-cetusMix_v4-Euler%20a-6.png) *(1.83gb version - no vae)*
[grid 03](https://huggingface.co/datasets/fp16-guy/grids_inp/blob/main/cetusMix_v4%20inp%2001%2020230813151521-111-cetusMix_v4_fp16-Euler%20a-5.5.png) *(1.99gb inpainting version)*
[grid 04](https://huggingface.co/datasets/fp16-guy/grids_inp/blob/main/cetusMix_v4%20inp%2002%2020230813151628-111-cetusMix_v4_fp16_no_vae-Euler%20a-5.5.png) *(1.83gb inpainting version - no vae)*
|
fathyshalab/mdcsi-medizin-gesundheit-pflege-setfit | fathyshalab | 2023-08-13T12:46:00Z | 5 | 0 | sentence-transformers | ["sentence-transformers", "pytorch", "roberta", "setfit", "text-classification", "arxiv:2209.11055", "license:apache-2.0", "region:us"] | text-classification | 2023-08-13T12:44:45Z |
---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# fathyshalab/mdcsi-medizin-gesundheit-pflege-setfit
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("fathyshalab/mdcsi-medizin-gesundheit-pflege-setfit")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
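For completeness, the two-step training procedure described above looks roughly like this with the `SetFitTrainer` API of this setfit generation (dataset and base model here are placeholders, not this card's actual training setup):
```python
from datasets import load_dataset
from setfit import SetFitModel, SetFitTrainer

# Placeholder few-shot dataset; any dataset with text/label columns works.
dataset = load_dataset("sst2", split="train[:64]")

model = SetFitModel.from_pretrained("sentence-transformers/paraphrase-mpnet-base-v2")
trainer = SetFitTrainer(
    model=model,
    train_dataset=dataset,
    column_mapping={"sentence": "text", "label": "label"},
)
trainer.train()  # step 1: contrastive fine-tuning; step 2: fit the classification head
```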
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
bigmorning/whisper_charsplit_new_0048 | bigmorning | 2023-08-13T12:43:50Z | 59 | 0 | transformers | ["transformers", "tf", "whisper", "automatic-speech-recognition", "generated_from_keras_callback", "base_model:openai/whisper-tiny", "base_model:finetune:openai/whisper-tiny", "license:apache-2.0", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2023-08-13T12:43:42Z |
---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_keras_callback
model-index:
- name: whisper_charsplit_new_0048
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_charsplit_new_0048
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0037
- Train Accuracy: 0.0795
- Train Wermet: 10.0455
- Validation Loss: 0.5174
- Validation Accuracy: 0.0762
- Validation Wermet: 8.2514
- Epoch: 47
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| 0.8733 | 0.0602 | 13.0686 | 0.6470 | 0.0676 | 11.4066 | 0 |
| 0.5740 | 0.0666 | 12.7778 | 0.5113 | 0.0706 | 11.1022 | 1 |
| 0.4553 | 0.0692 | 12.2404 | 0.4371 | 0.0723 | 10.9105 | 2 |
| 0.3813 | 0.0708 | 11.9157 | 0.3935 | 0.0733 | 9.4615 | 3 |
| 0.3292 | 0.0720 | 11.5732 | 0.3630 | 0.0740 | 9.9885 | 4 |
| 0.2886 | 0.0729 | 11.5171 | 0.3403 | 0.0745 | 9.8042 | 5 |
| 0.2561 | 0.0736 | 11.3173 | 0.3256 | 0.0749 | 9.9431 | 6 |
| 0.2282 | 0.0743 | 11.7308 | 0.3159 | 0.0752 | 9.2086 | 7 |
| 0.2036 | 0.0748 | 11.4503 | 0.3071 | 0.0754 | 9.5236 | 8 |
| 0.1820 | 0.0754 | 11.7175 | 0.3005 | 0.0756 | 10.0755 | 9 |
| 0.1628 | 0.0758 | 11.7056 | 0.2993 | 0.0757 | 9.9497 | 10 |
| 0.1450 | 0.0762 | 11.7637 | 0.2971 | 0.0758 | 10.1481 | 11 |
| 0.1287 | 0.0766 | 11.8509 | 0.3029 | 0.0759 | 10.2042 | 12 |
| 0.1140 | 0.0770 | 12.1100 | 0.3004 | 0.0760 | 10.3873 | 13 |
| 0.0998 | 0.0773 | 11.9502 | 0.3025 | 0.0761 | 10.7066 | 14 |
| 0.0872 | 0.0777 | 12.3196 | 0.3129 | 0.0759 | 10.7707 | 15 |
| 0.0760 | 0.0779 | 12.2637 | 0.3142 | 0.0761 | 10.2638 | 16 |
| 0.0651 | 0.0782 | 12.1215 | 0.3192 | 0.0761 | 10.0750 | 17 |
| 0.0547 | 0.0785 | 12.0551 | 0.3294 | 0.0761 | 10.4732 | 18 |
| 0.0463 | 0.0787 | 11.9677 | 0.3402 | 0.0760 | 10.2814 | 19 |
| 0.0386 | 0.0789 | 11.6855 | 0.3517 | 0.0760 | 10.0599 | 20 |
| 0.0318 | 0.0790 | 11.6314 | 0.3628 | 0.0760 | 9.6652 | 21 |
| 0.0262 | 0.0792 | 11.4603 | 0.3728 | 0.0760 | 10.0035 | 22 |
| 0.0224 | 0.0792 | 11.4330 | 0.3824 | 0.0760 | 9.1995 | 23 |
| 0.0181 | 0.0793 | 11.3124 | 0.3982 | 0.0759 | 9.8710 | 24 |
| 0.0142 | 0.0794 | 11.3562 | 0.4057 | 0.0760 | 9.6831 | 25 |
| 0.0118 | 0.0794 | 11.0532 | 0.4207 | 0.0759 | 9.7227 | 26 |
| 0.0101 | 0.0794 | 11.2963 | 0.4282 | 0.0760 | 9.5792 | 27 |
| 0.0114 | 0.0794 | 11.3093 | 0.4431 | 0.0758 | 9.5545 | 28 |
| 0.0109 | 0.0794 | 11.4214 | 0.4419 | 0.0760 | 9.4377 | 29 |
| 0.0084 | 0.0794 | 10.9143 | 0.4474 | 0.0760 | 9.3668 | 30 |
| 0.0043 | 0.0795 | 10.9497 | 0.4525 | 0.0761 | 9.3202 | 31 |
| 0.0036 | 0.0795 | 10.7759 | 0.4667 | 0.0761 | 9.0385 | 32 |
| 0.0047 | 0.0795 | 10.7613 | 0.4788 | 0.0759 | 9.4065 | 33 |
| 0.0130 | 0.0793 | 11.1022 | 0.4748 | 0.0760 | 9.4521 | 34 |
| 0.0074 | 0.0794 | 10.9738 | 0.4730 | 0.0760 | 9.3348 | 35 |
| 0.0032 | 0.0795 | 10.6370 | 0.4750 | 0.0762 | 8.8298 | 36 |
| 0.0020 | 0.0795 | 10.7428 | 0.4835 | 0.0762 | 9.0566 | 37 |
| 0.0014 | 0.0795 | 10.6908 | 0.4937 | 0.0761 | 9.2445 | 38 |
| 0.0035 | 0.0795 | 10.6833 | 0.5276 | 0.0757 | 8.9798 | 39 |
| 0.0120 | 0.0793 | 10.4810 | 0.4963 | 0.0760 | 8.9194 | 40 |
| 0.0045 | 0.0795 | 10.2251 | 0.5014 | 0.0761 | 8.5737 | 41 |
| 0.0028 | 0.0795 | 10.3174 | 0.4968 | 0.0762 | 8.8525 | 42 |
| 0.0023 | 0.0795 | 10.4871 | 0.5027 | 0.0762 | 8.6712 | 43 |
| 0.0024 | 0.0795 | 10.3731 | 0.5055 | 0.0762 | 8.6347 | 44 |
| 0.0041 | 0.0795 | 10.2751 | 0.5242 | 0.0760 | 8.3671 | 45 |
| 0.0070 | 0.0794 | 10.2166 | 0.5169 | 0.0760 | 8.8409 | 46 |
| 0.0037 | 0.0795 | 10.0455 | 0.5174 | 0.0762 | 8.2514 | 47 |
### Framework versions
- Transformers 4.32.0.dev0
- TensorFlow 2.12.0
- Tokenizers 0.13.3
|
bigmorning/whisper_charsplit_new_0047 | bigmorning | 2023-08-13T12:39:25Z | 59 | 0 | transformers | ["transformers", "tf", "whisper", "automatic-speech-recognition", "generated_from_keras_callback", "base_model:openai/whisper-tiny", "base_model:finetune:openai/whisper-tiny", "license:apache-2.0", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2023-08-13T12:39:16Z |
---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_keras_callback
model-index:
- name: whisper_charsplit_new_0047
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_charsplit_new_0047
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0070
- Train Accuracy: 0.0794
- Train Wermet: 10.2166
- Validation Loss: 0.5169
- Validation Accuracy: 0.0760
- Validation Wermet: 8.8409
- Epoch: 46
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| 0.8733 | 0.0602 | 13.0686 | 0.6470 | 0.0676 | 11.4066 | 0 |
| 0.5740 | 0.0666 | 12.7778 | 0.5113 | 0.0706 | 11.1022 | 1 |
| 0.4553 | 0.0692 | 12.2404 | 0.4371 | 0.0723 | 10.9105 | 2 |
| 0.3813 | 0.0708 | 11.9157 | 0.3935 | 0.0733 | 9.4615 | 3 |
| 0.3292 | 0.0720 | 11.5732 | 0.3630 | 0.0740 | 9.9885 | 4 |
| 0.2886 | 0.0729 | 11.5171 | 0.3403 | 0.0745 | 9.8042 | 5 |
| 0.2561 | 0.0736 | 11.3173 | 0.3256 | 0.0749 | 9.9431 | 6 |
| 0.2282 | 0.0743 | 11.7308 | 0.3159 | 0.0752 | 9.2086 | 7 |
| 0.2036 | 0.0748 | 11.4503 | 0.3071 | 0.0754 | 9.5236 | 8 |
| 0.1820 | 0.0754 | 11.7175 | 0.3005 | 0.0756 | 10.0755 | 9 |
| 0.1628 | 0.0758 | 11.7056 | 0.2993 | 0.0757 | 9.9497 | 10 |
| 0.1450 | 0.0762 | 11.7637 | 0.2971 | 0.0758 | 10.1481 | 11 |
| 0.1287 | 0.0766 | 11.8509 | 0.3029 | 0.0759 | 10.2042 | 12 |
| 0.1140 | 0.0770 | 12.1100 | 0.3004 | 0.0760 | 10.3873 | 13 |
| 0.0998 | 0.0773 | 11.9502 | 0.3025 | 0.0761 | 10.7066 | 14 |
| 0.0872 | 0.0777 | 12.3196 | 0.3129 | 0.0759 | 10.7707 | 15 |
| 0.0760 | 0.0779 | 12.2637 | 0.3142 | 0.0761 | 10.2638 | 16 |
| 0.0651 | 0.0782 | 12.1215 | 0.3192 | 0.0761 | 10.0750 | 17 |
| 0.0547 | 0.0785 | 12.0551 | 0.3294 | 0.0761 | 10.4732 | 18 |
| 0.0463 | 0.0787 | 11.9677 | 0.3402 | 0.0760 | 10.2814 | 19 |
| 0.0386 | 0.0789 | 11.6855 | 0.3517 | 0.0760 | 10.0599 | 20 |
| 0.0318 | 0.0790 | 11.6314 | 0.3628 | 0.0760 | 9.6652 | 21 |
| 0.0262 | 0.0792 | 11.4603 | 0.3728 | 0.0760 | 10.0035 | 22 |
| 0.0224 | 0.0792 | 11.4330 | 0.3824 | 0.0760 | 9.1995 | 23 |
| 0.0181 | 0.0793 | 11.3124 | 0.3982 | 0.0759 | 9.8710 | 24 |
| 0.0142 | 0.0794 | 11.3562 | 0.4057 | 0.0760 | 9.6831 | 25 |
| 0.0118 | 0.0794 | 11.0532 | 0.4207 | 0.0759 | 9.7227 | 26 |
| 0.0101 | 0.0794 | 11.2963 | 0.4282 | 0.0760 | 9.5792 | 27 |
| 0.0114 | 0.0794 | 11.3093 | 0.4431 | 0.0758 | 9.5545 | 28 |
| 0.0109 | 0.0794 | 11.4214 | 0.4419 | 0.0760 | 9.4377 | 29 |
| 0.0084 | 0.0794 | 10.9143 | 0.4474 | 0.0760 | 9.3668 | 30 |
| 0.0043 | 0.0795 | 10.9497 | 0.4525 | 0.0761 | 9.3202 | 31 |
| 0.0036 | 0.0795 | 10.7759 | 0.4667 | 0.0761 | 9.0385 | 32 |
| 0.0047 | 0.0795 | 10.7613 | 0.4788 | 0.0759 | 9.4065 | 33 |
| 0.0130 | 0.0793 | 11.1022 | 0.4748 | 0.0760 | 9.4521 | 34 |
| 0.0074 | 0.0794 | 10.9738 | 0.4730 | 0.0760 | 9.3348 | 35 |
| 0.0032 | 0.0795 | 10.6370 | 0.4750 | 0.0762 | 8.8298 | 36 |
| 0.0020 | 0.0795 | 10.7428 | 0.4835 | 0.0762 | 9.0566 | 37 |
| 0.0014 | 0.0795 | 10.6908 | 0.4937 | 0.0761 | 9.2445 | 38 |
| 0.0035 | 0.0795 | 10.6833 | 0.5276 | 0.0757 | 8.9798 | 39 |
| 0.0120 | 0.0793 | 10.4810 | 0.4963 | 0.0760 | 8.9194 | 40 |
| 0.0045 | 0.0795 | 10.2251 | 0.5014 | 0.0761 | 8.5737 | 41 |
| 0.0028 | 0.0795 | 10.3174 | 0.4968 | 0.0762 | 8.8525 | 42 |
| 0.0023 | 0.0795 | 10.4871 | 0.5027 | 0.0762 | 8.6712 | 43 |
| 0.0024 | 0.0795 | 10.3731 | 0.5055 | 0.0762 | 8.6347 | 44 |
| 0.0041 | 0.0795 | 10.2751 | 0.5242 | 0.0760 | 8.3671 | 45 |
| 0.0070 | 0.0794 | 10.2166 | 0.5169 | 0.0760 | 8.8409 | 46 |
### Framework versions
- Transformers 4.32.0.dev0
- TensorFlow 2.12.0
- Tokenizers 0.13.3
|
bulentsoykan/q-Taxi-v3 | bulentsoykan | 2023-08-13T12:37:38Z | 0 | 0 | null | ["Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us"] | reinforcement-learning | 2023-08-13T12:37:35Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym

# `load_from_hub` is the helper used in the Hugging Face Deep RL Course (assumed available in your notebook)
model = load_from_hub(repo_id="bulentsoykan/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
bigmorning/whisper_charsplit_new_0046 | bigmorning | 2023-08-13T12:35:00Z | 59 | 0 | transformers | ["transformers", "tf", "whisper", "automatic-speech-recognition", "generated_from_keras_callback", "base_model:openai/whisper-tiny", "base_model:finetune:openai/whisper-tiny", "license:apache-2.0", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2023-08-13T12:34:52Z |
---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_keras_callback
model-index:
- name: whisper_charsplit_new_0046
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_charsplit_new_0046
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0041
- Train Accuracy: 0.0795
- Train Wermet: 10.2751
- Validation Loss: 0.5242
- Validation Accuracy: 0.0760
- Validation Wermet: 8.3671
- Epoch: 45
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| 0.8733 | 0.0602 | 13.0686 | 0.6470 | 0.0676 | 11.4066 | 0 |
| 0.5740 | 0.0666 | 12.7778 | 0.5113 | 0.0706 | 11.1022 | 1 |
| 0.4553 | 0.0692 | 12.2404 | 0.4371 | 0.0723 | 10.9105 | 2 |
| 0.3813 | 0.0708 | 11.9157 | 0.3935 | 0.0733 | 9.4615 | 3 |
| 0.3292 | 0.0720 | 11.5732 | 0.3630 | 0.0740 | 9.9885 | 4 |
| 0.2886 | 0.0729 | 11.5171 | 0.3403 | 0.0745 | 9.8042 | 5 |
| 0.2561 | 0.0736 | 11.3173 | 0.3256 | 0.0749 | 9.9431 | 6 |
| 0.2282 | 0.0743 | 11.7308 | 0.3159 | 0.0752 | 9.2086 | 7 |
| 0.2036 | 0.0748 | 11.4503 | 0.3071 | 0.0754 | 9.5236 | 8 |
| 0.1820 | 0.0754 | 11.7175 | 0.3005 | 0.0756 | 10.0755 | 9 |
| 0.1628 | 0.0758 | 11.7056 | 0.2993 | 0.0757 | 9.9497 | 10 |
| 0.1450 | 0.0762 | 11.7637 | 0.2971 | 0.0758 | 10.1481 | 11 |
| 0.1287 | 0.0766 | 11.8509 | 0.3029 | 0.0759 | 10.2042 | 12 |
| 0.1140 | 0.0770 | 12.1100 | 0.3004 | 0.0760 | 10.3873 | 13 |
| 0.0998 | 0.0773 | 11.9502 | 0.3025 | 0.0761 | 10.7066 | 14 |
| 0.0872 | 0.0777 | 12.3196 | 0.3129 | 0.0759 | 10.7707 | 15 |
| 0.0760 | 0.0779 | 12.2637 | 0.3142 | 0.0761 | 10.2638 | 16 |
| 0.0651 | 0.0782 | 12.1215 | 0.3192 | 0.0761 | 10.0750 | 17 |
| 0.0547 | 0.0785 | 12.0551 | 0.3294 | 0.0761 | 10.4732 | 18 |
| 0.0463 | 0.0787 | 11.9677 | 0.3402 | 0.0760 | 10.2814 | 19 |
| 0.0386 | 0.0789 | 11.6855 | 0.3517 | 0.0760 | 10.0599 | 20 |
| 0.0318 | 0.0790 | 11.6314 | 0.3628 | 0.0760 | 9.6652 | 21 |
| 0.0262 | 0.0792 | 11.4603 | 0.3728 | 0.0760 | 10.0035 | 22 |
| 0.0224 | 0.0792 | 11.4330 | 0.3824 | 0.0760 | 9.1995 | 23 |
| 0.0181 | 0.0793 | 11.3124 | 0.3982 | 0.0759 | 9.8710 | 24 |
| 0.0142 | 0.0794 | 11.3562 | 0.4057 | 0.0760 | 9.6831 | 25 |
| 0.0118 | 0.0794 | 11.0532 | 0.4207 | 0.0759 | 9.7227 | 26 |
| 0.0101 | 0.0794 | 11.2963 | 0.4282 | 0.0760 | 9.5792 | 27 |
| 0.0114 | 0.0794 | 11.3093 | 0.4431 | 0.0758 | 9.5545 | 28 |
| 0.0109 | 0.0794 | 11.4214 | 0.4419 | 0.0760 | 9.4377 | 29 |
| 0.0084 | 0.0794 | 10.9143 | 0.4474 | 0.0760 | 9.3668 | 30 |
| 0.0043 | 0.0795 | 10.9497 | 0.4525 | 0.0761 | 9.3202 | 31 |
| 0.0036 | 0.0795 | 10.7759 | 0.4667 | 0.0761 | 9.0385 | 32 |
| 0.0047 | 0.0795 | 10.7613 | 0.4788 | 0.0759 | 9.4065 | 33 |
| 0.0130 | 0.0793 | 11.1022 | 0.4748 | 0.0760 | 9.4521 | 34 |
| 0.0074 | 0.0794 | 10.9738 | 0.4730 | 0.0760 | 9.3348 | 35 |
| 0.0032 | 0.0795 | 10.6370 | 0.4750 | 0.0762 | 8.8298 | 36 |
| 0.0020 | 0.0795 | 10.7428 | 0.4835 | 0.0762 | 9.0566 | 37 |
| 0.0014 | 0.0795 | 10.6908 | 0.4937 | 0.0761 | 9.2445 | 38 |
| 0.0035 | 0.0795 | 10.6833 | 0.5276 | 0.0757 | 8.9798 | 39 |
| 0.0120 | 0.0793 | 10.4810 | 0.4963 | 0.0760 | 8.9194 | 40 |
| 0.0045 | 0.0795 | 10.2251 | 0.5014 | 0.0761 | 8.5737 | 41 |
| 0.0028 | 0.0795 | 10.3174 | 0.4968 | 0.0762 | 8.8525 | 42 |
| 0.0023 | 0.0795 | 10.4871 | 0.5027 | 0.0762 | 8.6712 | 43 |
| 0.0024 | 0.0795 | 10.3731 | 0.5055 | 0.0762 | 8.6347 | 44 |
| 0.0041 | 0.0795 | 10.2751 | 0.5242 | 0.0760 | 8.3671 | 45 |
### Framework versions
- Transformers 4.32.0.dev0
- TensorFlow 2.12.0
- Tokenizers 0.13.3
|
bigmorning/whisper_charsplit_new_0042 | bigmorning | 2023-08-13T12:17:26Z | 59 | 0 | transformers | ["transformers", "tf", "whisper", "automatic-speech-recognition", "generated_from_keras_callback", "base_model:openai/whisper-tiny", "base_model:finetune:openai/whisper-tiny", "license:apache-2.0", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2023-08-13T12:17:18Z |
---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_keras_callback
model-index:
- name: whisper_charsplit_new_0042
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_charsplit_new_0042
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0045
- Train Accuracy: 0.0795
- Train Wermet: 10.2251
- Validation Loss: 0.5014
- Validation Accuracy: 0.0761
- Validation Wermet: 8.5737
- Epoch: 41
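A minimal transcription sketch using the TensorFlow Whisper classes; this is an assumption based on the `tf`/`whisper` tags rather than usage documented by the author, and the processor is taken from the `openai/whisper-tiny` base in case this repo ships no tokenizer files:
```python
from datasets import load_dataset
from transformers import WhisperProcessor, TFWhisperForConditionalGeneration

# Processor from the base checkpoint; model weights from this fine-tuned repo.
processor = WhisperProcessor.from_pretrained("openai/whisper-tiny")
model = TFWhisperForConditionalGeneration.from_pretrained("bigmorning/whisper_charsplit_new_0042")

# A tiny public sample clip (16 kHz mono) just to have something to transcribe.
sample = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")[0]["audio"]
inputs = processor(sample["array"], sampling_rate=sample["sampling_rate"], return_tensors="tf")
predicted_ids = model.generate(inputs.input_features)
print(processor.batch_decode(predicted_ids, skip_special_tokens=True)[0])
```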
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| 0.8733 | 0.0602 | 13.0686 | 0.6470 | 0.0676 | 11.4066 | 0 |
| 0.5740 | 0.0666 | 12.7778 | 0.5113 | 0.0706 | 11.1022 | 1 |
| 0.4553 | 0.0692 | 12.2404 | 0.4371 | 0.0723 | 10.9105 | 2 |
| 0.3813 | 0.0708 | 11.9157 | 0.3935 | 0.0733 | 9.4615 | 3 |
| 0.3292 | 0.0720 | 11.5732 | 0.3630 | 0.0740 | 9.9885 | 4 |
| 0.2886 | 0.0729 | 11.5171 | 0.3403 | 0.0745 | 9.8042 | 5 |
| 0.2561 | 0.0736 | 11.3173 | 0.3256 | 0.0749 | 9.9431 | 6 |
| 0.2282 | 0.0743 | 11.7308 | 0.3159 | 0.0752 | 9.2086 | 7 |
| 0.2036 | 0.0748 | 11.4503 | 0.3071 | 0.0754 | 9.5236 | 8 |
| 0.1820 | 0.0754 | 11.7175 | 0.3005 | 0.0756 | 10.0755 | 9 |
| 0.1628 | 0.0758 | 11.7056 | 0.2993 | 0.0757 | 9.9497 | 10 |
| 0.1450 | 0.0762 | 11.7637 | 0.2971 | 0.0758 | 10.1481 | 11 |
| 0.1287 | 0.0766 | 11.8509 | 0.3029 | 0.0759 | 10.2042 | 12 |
| 0.1140 | 0.0770 | 12.1100 | 0.3004 | 0.0760 | 10.3873 | 13 |
| 0.0998 | 0.0773 | 11.9502 | 0.3025 | 0.0761 | 10.7066 | 14 |
| 0.0872 | 0.0777 | 12.3196 | 0.3129 | 0.0759 | 10.7707 | 15 |
| 0.0760 | 0.0779 | 12.2637 | 0.3142 | 0.0761 | 10.2638 | 16 |
| 0.0651 | 0.0782 | 12.1215 | 0.3192 | 0.0761 | 10.0750 | 17 |
| 0.0547 | 0.0785 | 12.0551 | 0.3294 | 0.0761 | 10.4732 | 18 |
| 0.0463 | 0.0787 | 11.9677 | 0.3402 | 0.0760 | 10.2814 | 19 |
| 0.0386 | 0.0789 | 11.6855 | 0.3517 | 0.0760 | 10.0599 | 20 |
| 0.0318 | 0.0790 | 11.6314 | 0.3628 | 0.0760 | 9.6652 | 21 |
| 0.0262 | 0.0792 | 11.4603 | 0.3728 | 0.0760 | 10.0035 | 22 |
| 0.0224 | 0.0792 | 11.4330 | 0.3824 | 0.0760 | 9.1995 | 23 |
| 0.0181 | 0.0793 | 11.3124 | 0.3982 | 0.0759 | 9.8710 | 24 |
| 0.0142 | 0.0794 | 11.3562 | 0.4057 | 0.0760 | 9.6831 | 25 |
| 0.0118 | 0.0794 | 11.0532 | 0.4207 | 0.0759 | 9.7227 | 26 |
| 0.0101 | 0.0794 | 11.2963 | 0.4282 | 0.0760 | 9.5792 | 27 |
| 0.0114 | 0.0794 | 11.3093 | 0.4431 | 0.0758 | 9.5545 | 28 |
| 0.0109 | 0.0794 | 11.4214 | 0.4419 | 0.0760 | 9.4377 | 29 |
| 0.0084 | 0.0794 | 10.9143 | 0.4474 | 0.0760 | 9.3668 | 30 |
| 0.0043 | 0.0795 | 10.9497 | 0.4525 | 0.0761 | 9.3202 | 31 |
| 0.0036 | 0.0795 | 10.7759 | 0.4667 | 0.0761 | 9.0385 | 32 |
| 0.0047 | 0.0795 | 10.7613 | 0.4788 | 0.0759 | 9.4065 | 33 |
| 0.0130 | 0.0793 | 11.1022 | 0.4748 | 0.0760 | 9.4521 | 34 |
| 0.0074 | 0.0794 | 10.9738 | 0.4730 | 0.0760 | 9.3348 | 35 |
| 0.0032 | 0.0795 | 10.6370 | 0.4750 | 0.0762 | 8.8298 | 36 |
| 0.0020 | 0.0795 | 10.7428 | 0.4835 | 0.0762 | 9.0566 | 37 |
| 0.0014 | 0.0795 | 10.6908 | 0.4937 | 0.0761 | 9.2445 | 38 |
| 0.0035 | 0.0795 | 10.6833 | 0.5276 | 0.0757 | 8.9798 | 39 |
| 0.0120 | 0.0793 | 10.4810 | 0.4963 | 0.0760 | 8.9194 | 40 |
| 0.0045 | 0.0795 | 10.2251 | 0.5014 | 0.0761 | 8.5737 | 41 |
### Framework versions
- Transformers 4.32.0.dev0
- TensorFlow 2.12.0
- Tokenizers 0.13.3
|
Pierre-Arthur/T5_small_eurlexsum
|
Pierre-Arthur
| 2023-08-13T12:16:31Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:eur-lex-sum",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-07-24T20:26:43Z |
---
license: apache-2.0
base_model: t5-small
tags:
- generated_from_trainer
datasets:
- eur-lex-sum
metrics:
- rouge
model-index:
- name: T5_small_eurlexsum
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: eur-lex-sum
type: eur-lex-sum
config: french
split: test
args: french
metrics:
- name: Rouge1
type: rouge
value: 0.2
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# T5_small_eurlexsum
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the eur-lex-sum dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1159
- Rouge1: 0.2
- Rouge2: 0.1394
- Rougel: 0.1833
- Rougelsum: 0.1829
- Gen Len: 19.0
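A minimal usage sketch with the `summarization` pipeline; the input text below is a hypothetical placeholder, and note the model was evaluated on the French config of eur-lex-sum:
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="Pierre-Arthur/T5_small_eurlexsum")

# Hypothetical legal text; the model was fine-tuned on the French split of eur-lex-sum.
document = "Le présent règlement établit des règles communes pour la protection des données..."
print(summarizer(document, max_length=64, min_length=16)[0]["summary_text"])
```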
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 71 | 1.4740 | 0.1718 | 0.0935 | 0.1476 | 0.1476 | 19.0 |
| No log | 2.0 | 142 | 1.2138 | 0.1915 | 0.1207 | 0.1719 | 0.1719 | 19.0 |
| No log | 3.0 | 213 | 1.1368 | 0.1953 | 0.1306 | 0.1759 | 0.1759 | 19.0 |
| No log | 4.0 | 284 | 1.1159 | 0.2 | 0.1394 | 0.1833 | 0.1829 | 19.0 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.0
- Tokenizers 0.13.3
|
lizsergeeva/vit-base-patch16-224-finetuned-vit
|
lizsergeeva
| 2023-08-13T12:13:49Z | 193 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:google/vit-base-patch16-224",
"base_model:finetune:google/vit-base-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-08-13T08:28:07Z |
---
license: apache-2.0
base_model: google/vit-base-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: vit-base-patch16-224-finetuned-vit
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9160530191458026
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-finetuned-vit
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2549
- Accuracy: 0.9161
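A minimal inference sketch with the `image-classification` pipeline; the image path is a placeholder, and the label set depends on the (unspecified) imagefolder dataset:
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="lizsergeeva/vit-base-patch16-224-finetuned-vit")

# "example.jpg" is a placeholder path; any local image file or image URL works.
for prediction in classifier("example.jpg"):
    print(prediction["label"], round(prediction["score"], 4))
```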
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6065 | 0.99 | 47 | 0.4006 | 0.8748 |
| 0.335 | 2.0 | 95 | 0.2745 | 0.9175 |
| 0.2707 | 2.97 | 141 | 0.2549 | 0.9161 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
bigmorning/whisper_charsplit_new_0041
|
bigmorning
| 2023-08-13T12:12:58Z | 59 | 0 |
transformers
|
[
"transformers",
"tf",
"whisper",
"automatic-speech-recognition",
"generated_from_keras_callback",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-08-13T12:12:51Z |
---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_keras_callback
model-index:
- name: whisper_charsplit_new_0041
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_charsplit_new_0041
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0120
- Train Accuracy: 0.0793
- Train Wermet: 10.4810
- Validation Loss: 0.4963
- Validation Accuracy: 0.0760
- Validation Wermet: 8.9194
- Epoch: 40
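The card reports a custom "Wermet" metric from the training script; for comparison, a conventional word error rate can be computed with the `evaluate` library — a sketch with hypothetical strings, not the author's metric:
```python
import evaluate

# Conventional WER for comparison; "Wermet" above is the training script's own metric.
wer = evaluate.load("wer")
predictions = ["the cat sat on the mat"]   # hypothetical model outputs
references = ["the cat sat on a mat"]      # hypothetical ground-truth transcripts
print(wer.compute(predictions=predictions, references=references))
```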
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| 0.8733 | 0.0602 | 13.0686 | 0.6470 | 0.0676 | 11.4066 | 0 |
| 0.5740 | 0.0666 | 12.7778 | 0.5113 | 0.0706 | 11.1022 | 1 |
| 0.4553 | 0.0692 | 12.2404 | 0.4371 | 0.0723 | 10.9105 | 2 |
| 0.3813 | 0.0708 | 11.9157 | 0.3935 | 0.0733 | 9.4615 | 3 |
| 0.3292 | 0.0720 | 11.5732 | 0.3630 | 0.0740 | 9.9885 | 4 |
| 0.2886 | 0.0729 | 11.5171 | 0.3403 | 0.0745 | 9.8042 | 5 |
| 0.2561 | 0.0736 | 11.3173 | 0.3256 | 0.0749 | 9.9431 | 6 |
| 0.2282 | 0.0743 | 11.7308 | 0.3159 | 0.0752 | 9.2086 | 7 |
| 0.2036 | 0.0748 | 11.4503 | 0.3071 | 0.0754 | 9.5236 | 8 |
| 0.1820 | 0.0754 | 11.7175 | 0.3005 | 0.0756 | 10.0755 | 9 |
| 0.1628 | 0.0758 | 11.7056 | 0.2993 | 0.0757 | 9.9497 | 10 |
| 0.1450 | 0.0762 | 11.7637 | 0.2971 | 0.0758 | 10.1481 | 11 |
| 0.1287 | 0.0766 | 11.8509 | 0.3029 | 0.0759 | 10.2042 | 12 |
| 0.1140 | 0.0770 | 12.1100 | 0.3004 | 0.0760 | 10.3873 | 13 |
| 0.0998 | 0.0773 | 11.9502 | 0.3025 | 0.0761 | 10.7066 | 14 |
| 0.0872 | 0.0777 | 12.3196 | 0.3129 | 0.0759 | 10.7707 | 15 |
| 0.0760 | 0.0779 | 12.2637 | 0.3142 | 0.0761 | 10.2638 | 16 |
| 0.0651 | 0.0782 | 12.1215 | 0.3192 | 0.0761 | 10.0750 | 17 |
| 0.0547 | 0.0785 | 12.0551 | 0.3294 | 0.0761 | 10.4732 | 18 |
| 0.0463 | 0.0787 | 11.9677 | 0.3402 | 0.0760 | 10.2814 | 19 |
| 0.0386 | 0.0789 | 11.6855 | 0.3517 | 0.0760 | 10.0599 | 20 |
| 0.0318 | 0.0790 | 11.6314 | 0.3628 | 0.0760 | 9.6652 | 21 |
| 0.0262 | 0.0792 | 11.4603 | 0.3728 | 0.0760 | 10.0035 | 22 |
| 0.0224 | 0.0792 | 11.4330 | 0.3824 | 0.0760 | 9.1995 | 23 |
| 0.0181 | 0.0793 | 11.3124 | 0.3982 | 0.0759 | 9.8710 | 24 |
| 0.0142 | 0.0794 | 11.3562 | 0.4057 | 0.0760 | 9.6831 | 25 |
| 0.0118 | 0.0794 | 11.0532 | 0.4207 | 0.0759 | 9.7227 | 26 |
| 0.0101 | 0.0794 | 11.2963 | 0.4282 | 0.0760 | 9.5792 | 27 |
| 0.0114 | 0.0794 | 11.3093 | 0.4431 | 0.0758 | 9.5545 | 28 |
| 0.0109 | 0.0794 | 11.4214 | 0.4419 | 0.0760 | 9.4377 | 29 |
| 0.0084 | 0.0794 | 10.9143 | 0.4474 | 0.0760 | 9.3668 | 30 |
| 0.0043 | 0.0795 | 10.9497 | 0.4525 | 0.0761 | 9.3202 | 31 |
| 0.0036 | 0.0795 | 10.7759 | 0.4667 | 0.0761 | 9.0385 | 32 |
| 0.0047 | 0.0795 | 10.7613 | 0.4788 | 0.0759 | 9.4065 | 33 |
| 0.0130 | 0.0793 | 11.1022 | 0.4748 | 0.0760 | 9.4521 | 34 |
| 0.0074 | 0.0794 | 10.9738 | 0.4730 | 0.0760 | 9.3348 | 35 |
| 0.0032 | 0.0795 | 10.6370 | 0.4750 | 0.0762 | 8.8298 | 36 |
| 0.0020 | 0.0795 | 10.7428 | 0.4835 | 0.0762 | 9.0566 | 37 |
| 0.0014 | 0.0795 | 10.6908 | 0.4937 | 0.0761 | 9.2445 | 38 |
| 0.0035 | 0.0795 | 10.6833 | 0.5276 | 0.0757 | 8.9798 | 39 |
| 0.0120 | 0.0793 | 10.4810 | 0.4963 | 0.0760 | 8.9194 | 40 |
### Framework versions
- Transformers 4.32.0.dev0
- TensorFlow 2.12.0
- Tokenizers 0.13.3
|
bulentsoykan/q-FrozenLake-v1-4x4-noSlippery
|
bulentsoykan
| 2023-08-13T12:12:10Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-13T12:12:08Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym

# `load_from_hub` is the helper defined in the Hugging Face Deep RL course notebooks.
model = load_from_hub(repo_id="bulentsoykan/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
bigmorning/whisper_charsplit_new_0040
|
bigmorning
| 2023-08-13T12:08:34Z | 59 | 0 |
transformers
|
[
"transformers",
"tf",
"whisper",
"automatic-speech-recognition",
"generated_from_keras_callback",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-08-13T12:08:26Z |
---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_keras_callback
model-index:
- name: whisper_charsplit_new_0040
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_charsplit_new_0040
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0035
- Train Accuracy: 0.0795
- Train Wermet: 10.6833
- Validation Loss: 0.5276
- Validation Accuracy: 0.0757
- Validation Wermet: 8.9798
- Epoch: 39
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| 0.8733 | 0.0602 | 13.0686 | 0.6470 | 0.0676 | 11.4066 | 0 |
| 0.5740 | 0.0666 | 12.7778 | 0.5113 | 0.0706 | 11.1022 | 1 |
| 0.4553 | 0.0692 | 12.2404 | 0.4371 | 0.0723 | 10.9105 | 2 |
| 0.3813 | 0.0708 | 11.9157 | 0.3935 | 0.0733 | 9.4615 | 3 |
| 0.3292 | 0.0720 | 11.5732 | 0.3630 | 0.0740 | 9.9885 | 4 |
| 0.2886 | 0.0729 | 11.5171 | 0.3403 | 0.0745 | 9.8042 | 5 |
| 0.2561 | 0.0736 | 11.3173 | 0.3256 | 0.0749 | 9.9431 | 6 |
| 0.2282 | 0.0743 | 11.7308 | 0.3159 | 0.0752 | 9.2086 | 7 |
| 0.2036 | 0.0748 | 11.4503 | 0.3071 | 0.0754 | 9.5236 | 8 |
| 0.1820 | 0.0754 | 11.7175 | 0.3005 | 0.0756 | 10.0755 | 9 |
| 0.1628 | 0.0758 | 11.7056 | 0.2993 | 0.0757 | 9.9497 | 10 |
| 0.1450 | 0.0762 | 11.7637 | 0.2971 | 0.0758 | 10.1481 | 11 |
| 0.1287 | 0.0766 | 11.8509 | 0.3029 | 0.0759 | 10.2042 | 12 |
| 0.1140 | 0.0770 | 12.1100 | 0.3004 | 0.0760 | 10.3873 | 13 |
| 0.0998 | 0.0773 | 11.9502 | 0.3025 | 0.0761 | 10.7066 | 14 |
| 0.0872 | 0.0777 | 12.3196 | 0.3129 | 0.0759 | 10.7707 | 15 |
| 0.0760 | 0.0779 | 12.2637 | 0.3142 | 0.0761 | 10.2638 | 16 |
| 0.0651 | 0.0782 | 12.1215 | 0.3192 | 0.0761 | 10.0750 | 17 |
| 0.0547 | 0.0785 | 12.0551 | 0.3294 | 0.0761 | 10.4732 | 18 |
| 0.0463 | 0.0787 | 11.9677 | 0.3402 | 0.0760 | 10.2814 | 19 |
| 0.0386 | 0.0789 | 11.6855 | 0.3517 | 0.0760 | 10.0599 | 20 |
| 0.0318 | 0.0790 | 11.6314 | 0.3628 | 0.0760 | 9.6652 | 21 |
| 0.0262 | 0.0792 | 11.4603 | 0.3728 | 0.0760 | 10.0035 | 22 |
| 0.0224 | 0.0792 | 11.4330 | 0.3824 | 0.0760 | 9.1995 | 23 |
| 0.0181 | 0.0793 | 11.3124 | 0.3982 | 0.0759 | 9.8710 | 24 |
| 0.0142 | 0.0794 | 11.3562 | 0.4057 | 0.0760 | 9.6831 | 25 |
| 0.0118 | 0.0794 | 11.0532 | 0.4207 | 0.0759 | 9.7227 | 26 |
| 0.0101 | 0.0794 | 11.2963 | 0.4282 | 0.0760 | 9.5792 | 27 |
| 0.0114 | 0.0794 | 11.3093 | 0.4431 | 0.0758 | 9.5545 | 28 |
| 0.0109 | 0.0794 | 11.4214 | 0.4419 | 0.0760 | 9.4377 | 29 |
| 0.0084 | 0.0794 | 10.9143 | 0.4474 | 0.0760 | 9.3668 | 30 |
| 0.0043 | 0.0795 | 10.9497 | 0.4525 | 0.0761 | 9.3202 | 31 |
| 0.0036 | 0.0795 | 10.7759 | 0.4667 | 0.0761 | 9.0385 | 32 |
| 0.0047 | 0.0795 | 10.7613 | 0.4788 | 0.0759 | 9.4065 | 33 |
| 0.0130 | 0.0793 | 11.1022 | 0.4748 | 0.0760 | 9.4521 | 34 |
| 0.0074 | 0.0794 | 10.9738 | 0.4730 | 0.0760 | 9.3348 | 35 |
| 0.0032 | 0.0795 | 10.6370 | 0.4750 | 0.0762 | 8.8298 | 36 |
| 0.0020 | 0.0795 | 10.7428 | 0.4835 | 0.0762 | 9.0566 | 37 |
| 0.0014 | 0.0795 | 10.6908 | 0.4937 | 0.0761 | 9.2445 | 38 |
| 0.0035 | 0.0795 | 10.6833 | 0.5276 | 0.0757 | 8.9798 | 39 |
### Framework versions
- Transformers 4.32.0.dev0
- TensorFlow 2.12.0
- Tokenizers 0.13.3
|
bigmorning/whisper_charsplit_new_0038
|
bigmorning
| 2023-08-13T11:59:47Z | 59 | 0 |
transformers
|
[
"transformers",
"tf",
"whisper",
"automatic-speech-recognition",
"generated_from_keras_callback",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-08-13T11:59:40Z |
---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_keras_callback
model-index:
- name: whisper_charsplit_new_0038
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_charsplit_new_0038
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0020
- Train Accuracy: 0.0795
- Train Wermet: 10.7428
- Validation Loss: 0.4835
- Validation Accuracy: 0.0762
- Validation Wermet: 9.0566
- Epoch: 37
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| 0.8733 | 0.0602 | 13.0686 | 0.6470 | 0.0676 | 11.4066 | 0 |
| 0.5740 | 0.0666 | 12.7778 | 0.5113 | 0.0706 | 11.1022 | 1 |
| 0.4553 | 0.0692 | 12.2404 | 0.4371 | 0.0723 | 10.9105 | 2 |
| 0.3813 | 0.0708 | 11.9157 | 0.3935 | 0.0733 | 9.4615 | 3 |
| 0.3292 | 0.0720 | 11.5732 | 0.3630 | 0.0740 | 9.9885 | 4 |
| 0.2886 | 0.0729 | 11.5171 | 0.3403 | 0.0745 | 9.8042 | 5 |
| 0.2561 | 0.0736 | 11.3173 | 0.3256 | 0.0749 | 9.9431 | 6 |
| 0.2282 | 0.0743 | 11.7308 | 0.3159 | 0.0752 | 9.2086 | 7 |
| 0.2036 | 0.0748 | 11.4503 | 0.3071 | 0.0754 | 9.5236 | 8 |
| 0.1820 | 0.0754 | 11.7175 | 0.3005 | 0.0756 | 10.0755 | 9 |
| 0.1628 | 0.0758 | 11.7056 | 0.2993 | 0.0757 | 9.9497 | 10 |
| 0.1450 | 0.0762 | 11.7637 | 0.2971 | 0.0758 | 10.1481 | 11 |
| 0.1287 | 0.0766 | 11.8509 | 0.3029 | 0.0759 | 10.2042 | 12 |
| 0.1140 | 0.0770 | 12.1100 | 0.3004 | 0.0760 | 10.3873 | 13 |
| 0.0998 | 0.0773 | 11.9502 | 0.3025 | 0.0761 | 10.7066 | 14 |
| 0.0872 | 0.0777 | 12.3196 | 0.3129 | 0.0759 | 10.7707 | 15 |
| 0.0760 | 0.0779 | 12.2637 | 0.3142 | 0.0761 | 10.2638 | 16 |
| 0.0651 | 0.0782 | 12.1215 | 0.3192 | 0.0761 | 10.0750 | 17 |
| 0.0547 | 0.0785 | 12.0551 | 0.3294 | 0.0761 | 10.4732 | 18 |
| 0.0463 | 0.0787 | 11.9677 | 0.3402 | 0.0760 | 10.2814 | 19 |
| 0.0386 | 0.0789 | 11.6855 | 0.3517 | 0.0760 | 10.0599 | 20 |
| 0.0318 | 0.0790 | 11.6314 | 0.3628 | 0.0760 | 9.6652 | 21 |
| 0.0262 | 0.0792 | 11.4603 | 0.3728 | 0.0760 | 10.0035 | 22 |
| 0.0224 | 0.0792 | 11.4330 | 0.3824 | 0.0760 | 9.1995 | 23 |
| 0.0181 | 0.0793 | 11.3124 | 0.3982 | 0.0759 | 9.8710 | 24 |
| 0.0142 | 0.0794 | 11.3562 | 0.4057 | 0.0760 | 9.6831 | 25 |
| 0.0118 | 0.0794 | 11.0532 | 0.4207 | 0.0759 | 9.7227 | 26 |
| 0.0101 | 0.0794 | 11.2963 | 0.4282 | 0.0760 | 9.5792 | 27 |
| 0.0114 | 0.0794 | 11.3093 | 0.4431 | 0.0758 | 9.5545 | 28 |
| 0.0109 | 0.0794 | 11.4214 | 0.4419 | 0.0760 | 9.4377 | 29 |
| 0.0084 | 0.0794 | 10.9143 | 0.4474 | 0.0760 | 9.3668 | 30 |
| 0.0043 | 0.0795 | 10.9497 | 0.4525 | 0.0761 | 9.3202 | 31 |
| 0.0036 | 0.0795 | 10.7759 | 0.4667 | 0.0761 | 9.0385 | 32 |
| 0.0047 | 0.0795 | 10.7613 | 0.4788 | 0.0759 | 9.4065 | 33 |
| 0.0130 | 0.0793 | 11.1022 | 0.4748 | 0.0760 | 9.4521 | 34 |
| 0.0074 | 0.0794 | 10.9738 | 0.4730 | 0.0760 | 9.3348 | 35 |
| 0.0032 | 0.0795 | 10.6370 | 0.4750 | 0.0762 | 8.8298 | 36 |
| 0.0020 | 0.0795 | 10.7428 | 0.4835 | 0.0762 | 9.0566 | 37 |
### Framework versions
- Transformers 4.32.0.dev0
- TensorFlow 2.12.0
- Tokenizers 0.13.3
|
bigmorning/whisper_charsplit_new_0035
|
bigmorning
| 2023-08-13T11:46:46Z | 59 | 0 |
transformers
|
[
"transformers",
"tf",
"whisper",
"automatic-speech-recognition",
"generated_from_keras_callback",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-08-13T11:46:39Z |
---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_keras_callback
model-index:
- name: whisper_charsplit_new_0035
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_charsplit_new_0035
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0130
- Train Accuracy: 0.0793
- Train Wermet: 11.1022
- Validation Loss: 0.4748
- Validation Accuracy: 0.0760
- Validation Wermet: 9.4521
- Epoch: 34
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| 0.8733 | 0.0602 | 13.0686 | 0.6470 | 0.0676 | 11.4066 | 0 |
| 0.5740 | 0.0666 | 12.7778 | 0.5113 | 0.0706 | 11.1022 | 1 |
| 0.4553 | 0.0692 | 12.2404 | 0.4371 | 0.0723 | 10.9105 | 2 |
| 0.3813 | 0.0708 | 11.9157 | 0.3935 | 0.0733 | 9.4615 | 3 |
| 0.3292 | 0.0720 | 11.5732 | 0.3630 | 0.0740 | 9.9885 | 4 |
| 0.2886 | 0.0729 | 11.5171 | 0.3403 | 0.0745 | 9.8042 | 5 |
| 0.2561 | 0.0736 | 11.3173 | 0.3256 | 0.0749 | 9.9431 | 6 |
| 0.2282 | 0.0743 | 11.7308 | 0.3159 | 0.0752 | 9.2086 | 7 |
| 0.2036 | 0.0748 | 11.4503 | 0.3071 | 0.0754 | 9.5236 | 8 |
| 0.1820 | 0.0754 | 11.7175 | 0.3005 | 0.0756 | 10.0755 | 9 |
| 0.1628 | 0.0758 | 11.7056 | 0.2993 | 0.0757 | 9.9497 | 10 |
| 0.1450 | 0.0762 | 11.7637 | 0.2971 | 0.0758 | 10.1481 | 11 |
| 0.1287 | 0.0766 | 11.8509 | 0.3029 | 0.0759 | 10.2042 | 12 |
| 0.1140 | 0.0770 | 12.1100 | 0.3004 | 0.0760 | 10.3873 | 13 |
| 0.0998 | 0.0773 | 11.9502 | 0.3025 | 0.0761 | 10.7066 | 14 |
| 0.0872 | 0.0777 | 12.3196 | 0.3129 | 0.0759 | 10.7707 | 15 |
| 0.0760 | 0.0779 | 12.2637 | 0.3142 | 0.0761 | 10.2638 | 16 |
| 0.0651 | 0.0782 | 12.1215 | 0.3192 | 0.0761 | 10.0750 | 17 |
| 0.0547 | 0.0785 | 12.0551 | 0.3294 | 0.0761 | 10.4732 | 18 |
| 0.0463 | 0.0787 | 11.9677 | 0.3402 | 0.0760 | 10.2814 | 19 |
| 0.0386 | 0.0789 | 11.6855 | 0.3517 | 0.0760 | 10.0599 | 20 |
| 0.0318 | 0.0790 | 11.6314 | 0.3628 | 0.0760 | 9.6652 | 21 |
| 0.0262 | 0.0792 | 11.4603 | 0.3728 | 0.0760 | 10.0035 | 22 |
| 0.0224 | 0.0792 | 11.4330 | 0.3824 | 0.0760 | 9.1995 | 23 |
| 0.0181 | 0.0793 | 11.3124 | 0.3982 | 0.0759 | 9.8710 | 24 |
| 0.0142 | 0.0794 | 11.3562 | 0.4057 | 0.0760 | 9.6831 | 25 |
| 0.0118 | 0.0794 | 11.0532 | 0.4207 | 0.0759 | 9.7227 | 26 |
| 0.0101 | 0.0794 | 11.2963 | 0.4282 | 0.0760 | 9.5792 | 27 |
| 0.0114 | 0.0794 | 11.3093 | 0.4431 | 0.0758 | 9.5545 | 28 |
| 0.0109 | 0.0794 | 11.4214 | 0.4419 | 0.0760 | 9.4377 | 29 |
| 0.0084 | 0.0794 | 10.9143 | 0.4474 | 0.0760 | 9.3668 | 30 |
| 0.0043 | 0.0795 | 10.9497 | 0.4525 | 0.0761 | 9.3202 | 31 |
| 0.0036 | 0.0795 | 10.7759 | 0.4667 | 0.0761 | 9.0385 | 32 |
| 0.0047 | 0.0795 | 10.7613 | 0.4788 | 0.0759 | 9.4065 | 33 |
| 0.0130 | 0.0793 | 11.1022 | 0.4748 | 0.0760 | 9.4521 | 34 |
### Framework versions
- Transformers 4.32.0.dev0
- TensorFlow 2.12.0
- Tokenizers 0.13.3
|
bigmorning/whisper_charsplit_new_0034
|
bigmorning
| 2023-08-13T11:42:29Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"whisper",
"automatic-speech-recognition",
"generated_from_keras_callback",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-08-13T11:42:22Z |
---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_keras_callback
model-index:
- name: whisper_charsplit_new_0034
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_charsplit_new_0034
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0047
- Train Accuracy: 0.0795
- Train Wermet: 10.7613
- Validation Loss: 0.4788
- Validation Accuracy: 0.0759
- Validation Wermet: 9.4065
- Epoch: 33
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| 0.8733 | 0.0602 | 13.0686 | 0.6470 | 0.0676 | 11.4066 | 0 |
| 0.5740 | 0.0666 | 12.7778 | 0.5113 | 0.0706 | 11.1022 | 1 |
| 0.4553 | 0.0692 | 12.2404 | 0.4371 | 0.0723 | 10.9105 | 2 |
| 0.3813 | 0.0708 | 11.9157 | 0.3935 | 0.0733 | 9.4615 | 3 |
| 0.3292 | 0.0720 | 11.5732 | 0.3630 | 0.0740 | 9.9885 | 4 |
| 0.2886 | 0.0729 | 11.5171 | 0.3403 | 0.0745 | 9.8042 | 5 |
| 0.2561 | 0.0736 | 11.3173 | 0.3256 | 0.0749 | 9.9431 | 6 |
| 0.2282 | 0.0743 | 11.7308 | 0.3159 | 0.0752 | 9.2086 | 7 |
| 0.2036 | 0.0748 | 11.4503 | 0.3071 | 0.0754 | 9.5236 | 8 |
| 0.1820 | 0.0754 | 11.7175 | 0.3005 | 0.0756 | 10.0755 | 9 |
| 0.1628 | 0.0758 | 11.7056 | 0.2993 | 0.0757 | 9.9497 | 10 |
| 0.1450 | 0.0762 | 11.7637 | 0.2971 | 0.0758 | 10.1481 | 11 |
| 0.1287 | 0.0766 | 11.8509 | 0.3029 | 0.0759 | 10.2042 | 12 |
| 0.1140 | 0.0770 | 12.1100 | 0.3004 | 0.0760 | 10.3873 | 13 |
| 0.0998 | 0.0773 | 11.9502 | 0.3025 | 0.0761 | 10.7066 | 14 |
| 0.0872 | 0.0777 | 12.3196 | 0.3129 | 0.0759 | 10.7707 | 15 |
| 0.0760 | 0.0779 | 12.2637 | 0.3142 | 0.0761 | 10.2638 | 16 |
| 0.0651 | 0.0782 | 12.1215 | 0.3192 | 0.0761 | 10.0750 | 17 |
| 0.0547 | 0.0785 | 12.0551 | 0.3294 | 0.0761 | 10.4732 | 18 |
| 0.0463 | 0.0787 | 11.9677 | 0.3402 | 0.0760 | 10.2814 | 19 |
| 0.0386 | 0.0789 | 11.6855 | 0.3517 | 0.0760 | 10.0599 | 20 |
| 0.0318 | 0.0790 | 11.6314 | 0.3628 | 0.0760 | 9.6652 | 21 |
| 0.0262 | 0.0792 | 11.4603 | 0.3728 | 0.0760 | 10.0035 | 22 |
| 0.0224 | 0.0792 | 11.4330 | 0.3824 | 0.0760 | 9.1995 | 23 |
| 0.0181 | 0.0793 | 11.3124 | 0.3982 | 0.0759 | 9.8710 | 24 |
| 0.0142 | 0.0794 | 11.3562 | 0.4057 | 0.0760 | 9.6831 | 25 |
| 0.0118 | 0.0794 | 11.0532 | 0.4207 | 0.0759 | 9.7227 | 26 |
| 0.0101 | 0.0794 | 11.2963 | 0.4282 | 0.0760 | 9.5792 | 27 |
| 0.0114 | 0.0794 | 11.3093 | 0.4431 | 0.0758 | 9.5545 | 28 |
| 0.0109 | 0.0794 | 11.4214 | 0.4419 | 0.0760 | 9.4377 | 29 |
| 0.0084 | 0.0794 | 10.9143 | 0.4474 | 0.0760 | 9.3668 | 30 |
| 0.0043 | 0.0795 | 10.9497 | 0.4525 | 0.0761 | 9.3202 | 31 |
| 0.0036 | 0.0795 | 10.7759 | 0.4667 | 0.0761 | 9.0385 | 32 |
| 0.0047 | 0.0795 | 10.7613 | 0.4788 | 0.0759 | 9.4065 | 33 |
### Framework versions
- Transformers 4.32.0.dev0
- TensorFlow 2.12.0
- Tokenizers 0.13.3
|
ManuVleuBeu/bart_base_answer-aware_normal_eduQG
|
ManuVleuBeu
| 2023-08-13T11:39:33Z | 175 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bart",
"text2text-generation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-08-13T11:23:42Z |
---
tags:
- generated_from_trainer
model-index:
- name: bart_base_answer-aware_normal_eduQG
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart_base_answer-aware_normal_eduQG
This model was trained from scratch on an unspecified dataset.
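The expected input format is not documented; below is a hypothetical sketch assuming the "answer: ... context: ..." convention commonly used for answer-aware question generation, which the model name suggests but the card does not confirm:
```python
from transformers import pipeline

generator = pipeline("text2text-generation", model="ManuVleuBeu/bart_base_answer-aware_normal_eduQG")

# Hypothetical prompt format: "answer: ... context: ..." is a common convention for
# answer-aware question generation, but it is not confirmed by this card.
prompt = "answer: photosynthesis context: Plants convert sunlight into energy through photosynthesis."
print(generator(prompt)[0]["generated_text"])
```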
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.27.4
- Pytorch 2.0.0+cu117
- Datasets 2.11.0
- Tokenizers 0.13.3
|
bigmorning/whisper_charsplit_new_0033
|
bigmorning
| 2023-08-13T11:38:10Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"whisper",
"automatic-speech-recognition",
"generated_from_keras_callback",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-08-13T11:38:03Z |
---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_keras_callback
model-index:
- name: whisper_charsplit_new_0033
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_charsplit_new_0033
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0036
- Train Accuracy: 0.0795
- Train Wermet: 10.7759
- Validation Loss: 0.4667
- Validation Accuracy: 0.0761
- Validation Wermet: 9.0385
- Epoch: 32
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| 0.8733 | 0.0602 | 13.0686 | 0.6470 | 0.0676 | 11.4066 | 0 |
| 0.5740 | 0.0666 | 12.7778 | 0.5113 | 0.0706 | 11.1022 | 1 |
| 0.4553 | 0.0692 | 12.2404 | 0.4371 | 0.0723 | 10.9105 | 2 |
| 0.3813 | 0.0708 | 11.9157 | 0.3935 | 0.0733 | 9.4615 | 3 |
| 0.3292 | 0.0720 | 11.5732 | 0.3630 | 0.0740 | 9.9885 | 4 |
| 0.2886 | 0.0729 | 11.5171 | 0.3403 | 0.0745 | 9.8042 | 5 |
| 0.2561 | 0.0736 | 11.3173 | 0.3256 | 0.0749 | 9.9431 | 6 |
| 0.2282 | 0.0743 | 11.7308 | 0.3159 | 0.0752 | 9.2086 | 7 |
| 0.2036 | 0.0748 | 11.4503 | 0.3071 | 0.0754 | 9.5236 | 8 |
| 0.1820 | 0.0754 | 11.7175 | 0.3005 | 0.0756 | 10.0755 | 9 |
| 0.1628 | 0.0758 | 11.7056 | 0.2993 | 0.0757 | 9.9497 | 10 |
| 0.1450 | 0.0762 | 11.7637 | 0.2971 | 0.0758 | 10.1481 | 11 |
| 0.1287 | 0.0766 | 11.8509 | 0.3029 | 0.0759 | 10.2042 | 12 |
| 0.1140 | 0.0770 | 12.1100 | 0.3004 | 0.0760 | 10.3873 | 13 |
| 0.0998 | 0.0773 | 11.9502 | 0.3025 | 0.0761 | 10.7066 | 14 |
| 0.0872 | 0.0777 | 12.3196 | 0.3129 | 0.0759 | 10.7707 | 15 |
| 0.0760 | 0.0779 | 12.2637 | 0.3142 | 0.0761 | 10.2638 | 16 |
| 0.0651 | 0.0782 | 12.1215 | 0.3192 | 0.0761 | 10.0750 | 17 |
| 0.0547 | 0.0785 | 12.0551 | 0.3294 | 0.0761 | 10.4732 | 18 |
| 0.0463 | 0.0787 | 11.9677 | 0.3402 | 0.0760 | 10.2814 | 19 |
| 0.0386 | 0.0789 | 11.6855 | 0.3517 | 0.0760 | 10.0599 | 20 |
| 0.0318 | 0.0790 | 11.6314 | 0.3628 | 0.0760 | 9.6652 | 21 |
| 0.0262 | 0.0792 | 11.4603 | 0.3728 | 0.0760 | 10.0035 | 22 |
| 0.0224 | 0.0792 | 11.4330 | 0.3824 | 0.0760 | 9.1995 | 23 |
| 0.0181 | 0.0793 | 11.3124 | 0.3982 | 0.0759 | 9.8710 | 24 |
| 0.0142 | 0.0794 | 11.3562 | 0.4057 | 0.0760 | 9.6831 | 25 |
| 0.0118 | 0.0794 | 11.0532 | 0.4207 | 0.0759 | 9.7227 | 26 |
| 0.0101 | 0.0794 | 11.2963 | 0.4282 | 0.0760 | 9.5792 | 27 |
| 0.0114 | 0.0794 | 11.3093 | 0.4431 | 0.0758 | 9.5545 | 28 |
| 0.0109 | 0.0794 | 11.4214 | 0.4419 | 0.0760 | 9.4377 | 29 |
| 0.0084 | 0.0794 | 10.9143 | 0.4474 | 0.0760 | 9.3668 | 30 |
| 0.0043 | 0.0795 | 10.9497 | 0.4525 | 0.0761 | 9.3202 | 31 |
| 0.0036 | 0.0795 | 10.7759 | 0.4667 | 0.0761 | 9.0385 | 32 |
### Framework versions
- Transformers 4.32.0.dev0
- TensorFlow 2.12.0
- Tokenizers 0.13.3
|
abdelhamidmalki/dqn-SpaceInvadersNoFrameskip-v4
|
abdelhamidmalki
| 2023-08-13T11:29:42Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-13T11:28:59Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 742.50 +/- 347.09
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```bash
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga abdelhamidmalki -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```bash
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga abdelhamidmalki -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```bash
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga abdelhamidmalki
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
bigmorning/whisper_charsplit_new_0031
|
bigmorning
| 2023-08-13T11:29:27Z | 60 | 0 |
transformers
|
[
"transformers",
"tf",
"whisper",
"automatic-speech-recognition",
"generated_from_keras_callback",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-08-13T11:29:20Z |
---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_keras_callback
model-index:
- name: whisper_charsplit_new_0031
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_charsplit_new_0031
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0084
- Train Accuracy: 0.0794
- Train Wermet: 10.9143
- Validation Loss: 0.4474
- Validation Accuracy: 0.0760
- Validation Wermet: 9.3668
- Epoch: 30
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| 0.8733 | 0.0602 | 13.0686 | 0.6470 | 0.0676 | 11.4066 | 0 |
| 0.5740 | 0.0666 | 12.7778 | 0.5113 | 0.0706 | 11.1022 | 1 |
| 0.4553 | 0.0692 | 12.2404 | 0.4371 | 0.0723 | 10.9105 | 2 |
| 0.3813 | 0.0708 | 11.9157 | 0.3935 | 0.0733 | 9.4615 | 3 |
| 0.3292 | 0.0720 | 11.5732 | 0.3630 | 0.0740 | 9.9885 | 4 |
| 0.2886 | 0.0729 | 11.5171 | 0.3403 | 0.0745 | 9.8042 | 5 |
| 0.2561 | 0.0736 | 11.3173 | 0.3256 | 0.0749 | 9.9431 | 6 |
| 0.2282 | 0.0743 | 11.7308 | 0.3159 | 0.0752 | 9.2086 | 7 |
| 0.2036 | 0.0748 | 11.4503 | 0.3071 | 0.0754 | 9.5236 | 8 |
| 0.1820 | 0.0754 | 11.7175 | 0.3005 | 0.0756 | 10.0755 | 9 |
| 0.1628 | 0.0758 | 11.7056 | 0.2993 | 0.0757 | 9.9497 | 10 |
| 0.1450 | 0.0762 | 11.7637 | 0.2971 | 0.0758 | 10.1481 | 11 |
| 0.1287 | 0.0766 | 11.8509 | 0.3029 | 0.0759 | 10.2042 | 12 |
| 0.1140 | 0.0770 | 12.1100 | 0.3004 | 0.0760 | 10.3873 | 13 |
| 0.0998 | 0.0773 | 11.9502 | 0.3025 | 0.0761 | 10.7066 | 14 |
| 0.0872 | 0.0777 | 12.3196 | 0.3129 | 0.0759 | 10.7707 | 15 |
| 0.0760 | 0.0779 | 12.2637 | 0.3142 | 0.0761 | 10.2638 | 16 |
| 0.0651 | 0.0782 | 12.1215 | 0.3192 | 0.0761 | 10.0750 | 17 |
| 0.0547 | 0.0785 | 12.0551 | 0.3294 | 0.0761 | 10.4732 | 18 |
| 0.0463 | 0.0787 | 11.9677 | 0.3402 | 0.0760 | 10.2814 | 19 |
| 0.0386 | 0.0789 | 11.6855 | 0.3517 | 0.0760 | 10.0599 | 20 |
| 0.0318 | 0.0790 | 11.6314 | 0.3628 | 0.0760 | 9.6652 | 21 |
| 0.0262 | 0.0792 | 11.4603 | 0.3728 | 0.0760 | 10.0035 | 22 |
| 0.0224 | 0.0792 | 11.4330 | 0.3824 | 0.0760 | 9.1995 | 23 |
| 0.0181 | 0.0793 | 11.3124 | 0.3982 | 0.0759 | 9.8710 | 24 |
| 0.0142 | 0.0794 | 11.3562 | 0.4057 | 0.0760 | 9.6831 | 25 |
| 0.0118 | 0.0794 | 11.0532 | 0.4207 | 0.0759 | 9.7227 | 26 |
| 0.0101 | 0.0794 | 11.2963 | 0.4282 | 0.0760 | 9.5792 | 27 |
| 0.0114 | 0.0794 | 11.3093 | 0.4431 | 0.0758 | 9.5545 | 28 |
| 0.0109 | 0.0794 | 11.4214 | 0.4419 | 0.0760 | 9.4377 | 29 |
| 0.0084 | 0.0794 | 10.9143 | 0.4474 | 0.0760 | 9.3668 | 30 |
### Framework versions
- Transformers 4.32.0.dev0
- TensorFlow 2.12.0
- Tokenizers 0.13.3
|
Sparticle/llama-2-7b-chat-japanese-lora
|
Sparticle
| 2023-08-13T11:28:48Z | 0 | 8 | null |
[
"ja",
"en",
"dataset:izumi-lab/llm-japanese-dataset",
"license:cc-by-sa-4.0",
"region:us"
] | null | 2023-07-29T02:39:49Z |
---
license: cc-by-sa-4.0
datasets:
- izumi-lab/llm-japanese-dataset
language:
- ja
- en
---
## This model is a Llama-2-7b-chat-hf model fine-tuned on a Japanese dataset with LoRA.
It was fine-tuned through the joint efforts of Sparticle Inc. and A. I. Hakusan Inc.
The training set of this model contains:
- 5% of randomly chosen data from the llm-japanese-dataset by izumi-lab.
- The japanese-alpaca-lora dataset, retrieved from https://github.com/masa3141/japanese-alpaca-lora/tree/main
For inference, please follow the instructions in https://github.com/tloen/alpaca-lora/.
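Alternatively, the adapter can be loaded directly with PEFT; a minimal sketch, assuming access to the gated `meta-llama/Llama-2-7b-chat-hf` base weights and that `bitsandbytes` is installed for 8-bit loading:
```python
from peft import PeftModel
from transformers import LlamaForCausalLM, LlamaTokenizer

# Base weights are gated: you must accept Meta's license on the Hub first.
base = LlamaForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-chat-hf",
    load_in_8bit=True,   # matches the bitsandbytes config listed below
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "Sparticle/llama-2-7b-chat-japanese-lora")
tokenizer = LlamaTokenizer.from_pretrained("meta-llama/Llama-2-7b-chat-hf")

prompt = "日本の首都はどこですか?"  # "What is the capital of Japan?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```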
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
### Framework versions
- PEFT 0.5.0.dev0
You must accept Meta's license agreement when using this LoRA adapter with Llama-2.
|
bigmorning/whisper_charsplit_new_0029
|
bigmorning
| 2023-08-13T11:20:47Z | 59 | 0 |
transformers
|
[
"transformers",
"tf",
"whisper",
"automatic-speech-recognition",
"generated_from_keras_callback",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-08-13T11:20:40Z |
---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_keras_callback
model-index:
- name: whisper_charsplit_new_0029
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_charsplit_new_0029
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0114
- Train Accuracy: 0.0794
- Train Wermet: 11.3093
- Validation Loss: 0.4431
- Validation Accuracy: 0.0758
- Validation Wermet: 9.5545
- Epoch: 28
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| 0.8733 | 0.0602 | 13.0686 | 0.6470 | 0.0676 | 11.4066 | 0 |
| 0.5740 | 0.0666 | 12.7778 | 0.5113 | 0.0706 | 11.1022 | 1 |
| 0.4553 | 0.0692 | 12.2404 | 0.4371 | 0.0723 | 10.9105 | 2 |
| 0.3813 | 0.0708 | 11.9157 | 0.3935 | 0.0733 | 9.4615 | 3 |
| 0.3292 | 0.0720 | 11.5732 | 0.3630 | 0.0740 | 9.9885 | 4 |
| 0.2886 | 0.0729 | 11.5171 | 0.3403 | 0.0745 | 9.8042 | 5 |
| 0.2561 | 0.0736 | 11.3173 | 0.3256 | 0.0749 | 9.9431 | 6 |
| 0.2282 | 0.0743 | 11.7308 | 0.3159 | 0.0752 | 9.2086 | 7 |
| 0.2036 | 0.0748 | 11.4503 | 0.3071 | 0.0754 | 9.5236 | 8 |
| 0.1820 | 0.0754 | 11.7175 | 0.3005 | 0.0756 | 10.0755 | 9 |
| 0.1628 | 0.0758 | 11.7056 | 0.2993 | 0.0757 | 9.9497 | 10 |
| 0.1450 | 0.0762 | 11.7637 | 0.2971 | 0.0758 | 10.1481 | 11 |
| 0.1287 | 0.0766 | 11.8509 | 0.3029 | 0.0759 | 10.2042 | 12 |
| 0.1140 | 0.0770 | 12.1100 | 0.3004 | 0.0760 | 10.3873 | 13 |
| 0.0998 | 0.0773 | 11.9502 | 0.3025 | 0.0761 | 10.7066 | 14 |
| 0.0872 | 0.0777 | 12.3196 | 0.3129 | 0.0759 | 10.7707 | 15 |
| 0.0760 | 0.0779 | 12.2637 | 0.3142 | 0.0761 | 10.2638 | 16 |
| 0.0651 | 0.0782 | 12.1215 | 0.3192 | 0.0761 | 10.0750 | 17 |
| 0.0547 | 0.0785 | 12.0551 | 0.3294 | 0.0761 | 10.4732 | 18 |
| 0.0463 | 0.0787 | 11.9677 | 0.3402 | 0.0760 | 10.2814 | 19 |
| 0.0386 | 0.0789 | 11.6855 | 0.3517 | 0.0760 | 10.0599 | 20 |
| 0.0318 | 0.0790 | 11.6314 | 0.3628 | 0.0760 | 9.6652 | 21 |
| 0.0262 | 0.0792 | 11.4603 | 0.3728 | 0.0760 | 10.0035 | 22 |
| 0.0224 | 0.0792 | 11.4330 | 0.3824 | 0.0760 | 9.1995 | 23 |
| 0.0181 | 0.0793 | 11.3124 | 0.3982 | 0.0759 | 9.8710 | 24 |
| 0.0142 | 0.0794 | 11.3562 | 0.4057 | 0.0760 | 9.6831 | 25 |
| 0.0118 | 0.0794 | 11.0532 | 0.4207 | 0.0759 | 9.7227 | 26 |
| 0.0101 | 0.0794 | 11.2963 | 0.4282 | 0.0760 | 9.5792 | 27 |
| 0.0114 | 0.0794 | 11.3093 | 0.4431 | 0.0758 | 9.5545 | 28 |
### Framework versions
- Transformers 4.32.0.dev0
- TensorFlow 2.12.0
- Tokenizers 0.13.3
|
Punit71/Taxi-v3
|
Punit71
| 2023-08-13T11:19:18Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-13T11:19:17Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.54 +/- 2.73
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym

# `load_from_hub` is the helper defined in the Hugging Face Deep RL course notebooks.
model = load_from_hub(repo_id="Punit71/Taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
bigmorning/whisper_charsplit_new_0028
|
bigmorning
| 2023-08-13T11:16:24Z | 59 | 0 |
transformers
|
[
"transformers",
"tf",
"whisper",
"automatic-speech-recognition",
"generated_from_keras_callback",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-08-13T11:16:16Z |
---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_keras_callback
model-index:
- name: whisper_charsplit_new_0028
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_charsplit_new_0028
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0101
- Train Accuracy: 0.0794
- Train Wermet: 11.2963
- Validation Loss: 0.4282
- Validation Accuracy: 0.0760
- Validation Wermet: 9.5792
- Epoch: 27
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| 0.8733 | 0.0602 | 13.0686 | 0.6470 | 0.0676 | 11.4066 | 0 |
| 0.5740 | 0.0666 | 12.7778 | 0.5113 | 0.0706 | 11.1022 | 1 |
| 0.4553 | 0.0692 | 12.2404 | 0.4371 | 0.0723 | 10.9105 | 2 |
| 0.3813 | 0.0708 | 11.9157 | 0.3935 | 0.0733 | 9.4615 | 3 |
| 0.3292 | 0.0720 | 11.5732 | 0.3630 | 0.0740 | 9.9885 | 4 |
| 0.2886 | 0.0729 | 11.5171 | 0.3403 | 0.0745 | 9.8042 | 5 |
| 0.2561 | 0.0736 | 11.3173 | 0.3256 | 0.0749 | 9.9431 | 6 |
| 0.2282 | 0.0743 | 11.7308 | 0.3159 | 0.0752 | 9.2086 | 7 |
| 0.2036 | 0.0748 | 11.4503 | 0.3071 | 0.0754 | 9.5236 | 8 |
| 0.1820 | 0.0754 | 11.7175 | 0.3005 | 0.0756 | 10.0755 | 9 |
| 0.1628 | 0.0758 | 11.7056 | 0.2993 | 0.0757 | 9.9497 | 10 |
| 0.1450 | 0.0762 | 11.7637 | 0.2971 | 0.0758 | 10.1481 | 11 |
| 0.1287 | 0.0766 | 11.8509 | 0.3029 | 0.0759 | 10.2042 | 12 |
| 0.1140 | 0.0770 | 12.1100 | 0.3004 | 0.0760 | 10.3873 | 13 |
| 0.0998 | 0.0773 | 11.9502 | 0.3025 | 0.0761 | 10.7066 | 14 |
| 0.0872 | 0.0777 | 12.3196 | 0.3129 | 0.0759 | 10.7707 | 15 |
| 0.0760 | 0.0779 | 12.2637 | 0.3142 | 0.0761 | 10.2638 | 16 |
| 0.0651 | 0.0782 | 12.1215 | 0.3192 | 0.0761 | 10.0750 | 17 |
| 0.0547 | 0.0785 | 12.0551 | 0.3294 | 0.0761 | 10.4732 | 18 |
| 0.0463 | 0.0787 | 11.9677 | 0.3402 | 0.0760 | 10.2814 | 19 |
| 0.0386 | 0.0789 | 11.6855 | 0.3517 | 0.0760 | 10.0599 | 20 |
| 0.0318 | 0.0790 | 11.6314 | 0.3628 | 0.0760 | 9.6652 | 21 |
| 0.0262 | 0.0792 | 11.4603 | 0.3728 | 0.0760 | 10.0035 | 22 |
| 0.0224 | 0.0792 | 11.4330 | 0.3824 | 0.0760 | 9.1995 | 23 |
| 0.0181 | 0.0793 | 11.3124 | 0.3982 | 0.0759 | 9.8710 | 24 |
| 0.0142 | 0.0794 | 11.3562 | 0.4057 | 0.0760 | 9.6831 | 25 |
| 0.0118 | 0.0794 | 11.0532 | 0.4207 | 0.0759 | 9.7227 | 26 |
| 0.0101 | 0.0794 | 11.2963 | 0.4282 | 0.0760 | 9.5792 | 27 |
### Framework versions
- Transformers 4.32.0.dev0
- TensorFlow 2.12.0
- Tokenizers 0.13.3
|
bigmorning/whisper_charsplit_new_0027
|
bigmorning
| 2023-08-13T11:12:00Z | 60 | 0 |
transformers
|
[
"transformers",
"tf",
"whisper",
"automatic-speech-recognition",
"generated_from_keras_callback",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-08-13T11:11:52Z |
---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_keras_callback
model-index:
- name: whisper_charsplit_new_0027
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_charsplit_new_0027
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0118
- Train Accuracy: 0.0794
- Train Wermet: 11.0532
- Validation Loss: 0.4207
- Validation Accuracy: 0.0759
- Validation Wermet: 9.7227
- Epoch: 26
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| 0.8733 | 0.0602 | 13.0686 | 0.6470 | 0.0676 | 11.4066 | 0 |
| 0.5740 | 0.0666 | 12.7778 | 0.5113 | 0.0706 | 11.1022 | 1 |
| 0.4553 | 0.0692 | 12.2404 | 0.4371 | 0.0723 | 10.9105 | 2 |
| 0.3813 | 0.0708 | 11.9157 | 0.3935 | 0.0733 | 9.4615 | 3 |
| 0.3292 | 0.0720 | 11.5732 | 0.3630 | 0.0740 | 9.9885 | 4 |
| 0.2886 | 0.0729 | 11.5171 | 0.3403 | 0.0745 | 9.8042 | 5 |
| 0.2561 | 0.0736 | 11.3173 | 0.3256 | 0.0749 | 9.9431 | 6 |
| 0.2282 | 0.0743 | 11.7308 | 0.3159 | 0.0752 | 9.2086 | 7 |
| 0.2036 | 0.0748 | 11.4503 | 0.3071 | 0.0754 | 9.5236 | 8 |
| 0.1820 | 0.0754 | 11.7175 | 0.3005 | 0.0756 | 10.0755 | 9 |
| 0.1628 | 0.0758 | 11.7056 | 0.2993 | 0.0757 | 9.9497 | 10 |
| 0.1450 | 0.0762 | 11.7637 | 0.2971 | 0.0758 | 10.1481 | 11 |
| 0.1287 | 0.0766 | 11.8509 | 0.3029 | 0.0759 | 10.2042 | 12 |
| 0.1140 | 0.0770 | 12.1100 | 0.3004 | 0.0760 | 10.3873 | 13 |
| 0.0998 | 0.0773 | 11.9502 | 0.3025 | 0.0761 | 10.7066 | 14 |
| 0.0872 | 0.0777 | 12.3196 | 0.3129 | 0.0759 | 10.7707 | 15 |
| 0.0760 | 0.0779 | 12.2637 | 0.3142 | 0.0761 | 10.2638 | 16 |
| 0.0651 | 0.0782 | 12.1215 | 0.3192 | 0.0761 | 10.0750 | 17 |
| 0.0547 | 0.0785 | 12.0551 | 0.3294 | 0.0761 | 10.4732 | 18 |
| 0.0463 | 0.0787 | 11.9677 | 0.3402 | 0.0760 | 10.2814 | 19 |
| 0.0386 | 0.0789 | 11.6855 | 0.3517 | 0.0760 | 10.0599 | 20 |
| 0.0318 | 0.0790 | 11.6314 | 0.3628 | 0.0760 | 9.6652 | 21 |
| 0.0262 | 0.0792 | 11.4603 | 0.3728 | 0.0760 | 10.0035 | 22 |
| 0.0224 | 0.0792 | 11.4330 | 0.3824 | 0.0760 | 9.1995 | 23 |
| 0.0181 | 0.0793 | 11.3124 | 0.3982 | 0.0759 | 9.8710 | 24 |
| 0.0142 | 0.0794 | 11.3562 | 0.4057 | 0.0760 | 9.6831 | 25 |
| 0.0118 | 0.0794 | 11.0532 | 0.4207 | 0.0759 | 9.7227 | 26 |
### Framework versions
- Transformers 4.32.0.dev0
- TensorFlow 2.12.0
- Tokenizers 0.13.3
|
bigmorning/whisper_charsplit_new_0026
|
bigmorning
| 2023-08-13T11:07:34Z | 59 | 0 |
transformers
|
[
"transformers",
"tf",
"whisper",
"automatic-speech-recognition",
"generated_from_keras_callback",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-08-13T11:07:26Z |
---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_keras_callback
model-index:
- name: whisper_charsplit_new_0026
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_charsplit_new_0026
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0142
- Train Accuracy: 0.0794
- Train Wermet: 11.3562
- Validation Loss: 0.4057
- Validation Accuracy: 0.0760
- Validation Wermet: 9.6831
- Epoch: 25
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| 0.8733 | 0.0602 | 13.0686 | 0.6470 | 0.0676 | 11.4066 | 0 |
| 0.5740 | 0.0666 | 12.7778 | 0.5113 | 0.0706 | 11.1022 | 1 |
| 0.4553 | 0.0692 | 12.2404 | 0.4371 | 0.0723 | 10.9105 | 2 |
| 0.3813 | 0.0708 | 11.9157 | 0.3935 | 0.0733 | 9.4615 | 3 |
| 0.3292 | 0.0720 | 11.5732 | 0.3630 | 0.0740 | 9.9885 | 4 |
| 0.2886 | 0.0729 | 11.5171 | 0.3403 | 0.0745 | 9.8042 | 5 |
| 0.2561 | 0.0736 | 11.3173 | 0.3256 | 0.0749 | 9.9431 | 6 |
| 0.2282 | 0.0743 | 11.7308 | 0.3159 | 0.0752 | 9.2086 | 7 |
| 0.2036 | 0.0748 | 11.4503 | 0.3071 | 0.0754 | 9.5236 | 8 |
| 0.1820 | 0.0754 | 11.7175 | 0.3005 | 0.0756 | 10.0755 | 9 |
| 0.1628 | 0.0758 | 11.7056 | 0.2993 | 0.0757 | 9.9497 | 10 |
| 0.1450 | 0.0762 | 11.7637 | 0.2971 | 0.0758 | 10.1481 | 11 |
| 0.1287 | 0.0766 | 11.8509 | 0.3029 | 0.0759 | 10.2042 | 12 |
| 0.1140 | 0.0770 | 12.1100 | 0.3004 | 0.0760 | 10.3873 | 13 |
| 0.0998 | 0.0773 | 11.9502 | 0.3025 | 0.0761 | 10.7066 | 14 |
| 0.0872 | 0.0777 | 12.3196 | 0.3129 | 0.0759 | 10.7707 | 15 |
| 0.0760 | 0.0779 | 12.2637 | 0.3142 | 0.0761 | 10.2638 | 16 |
| 0.0651 | 0.0782 | 12.1215 | 0.3192 | 0.0761 | 10.0750 | 17 |
| 0.0547 | 0.0785 | 12.0551 | 0.3294 | 0.0761 | 10.4732 | 18 |
| 0.0463 | 0.0787 | 11.9677 | 0.3402 | 0.0760 | 10.2814 | 19 |
| 0.0386 | 0.0789 | 11.6855 | 0.3517 | 0.0760 | 10.0599 | 20 |
| 0.0318 | 0.0790 | 11.6314 | 0.3628 | 0.0760 | 9.6652 | 21 |
| 0.0262 | 0.0792 | 11.4603 | 0.3728 | 0.0760 | 10.0035 | 22 |
| 0.0224 | 0.0792 | 11.4330 | 0.3824 | 0.0760 | 9.1995 | 23 |
| 0.0181 | 0.0793 | 11.3124 | 0.3982 | 0.0759 | 9.8710 | 24 |
| 0.0142 | 0.0794 | 11.3562 | 0.4057 | 0.0760 | 9.6831 | 25 |
### Framework versions
- Transformers 4.32.0.dev0
- TensorFlow 2.12.0
- Tokenizers 0.13.3
|
morell23/3dmm
|
morell23
| 2023-08-13T11:04:37Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-08-13T11:03:45Z |
---
license: creativeml-openrail-m
---
|
bigmorning/whisper_charsplit_new_0025
|
bigmorning
| 2023-08-13T11:03:11Z | 59 | 0 |
transformers
|
[
"transformers",
"tf",
"whisper",
"automatic-speech-recognition",
"generated_from_keras_callback",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-08-13T11:03:04Z |
---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_keras_callback
model-index:
- name: whisper_charsplit_new_0025
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_charsplit_new_0025
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0181
- Train Accuracy: 0.0793
- Train Wermet: 11.3124
- Validation Loss: 0.3982
- Validation Accuracy: 0.0759
- Validation Wermet: 9.8710
- Epoch: 24
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| 0.8733 | 0.0602 | 13.0686 | 0.6470 | 0.0676 | 11.4066 | 0 |
| 0.5740 | 0.0666 | 12.7778 | 0.5113 | 0.0706 | 11.1022 | 1 |
| 0.4553 | 0.0692 | 12.2404 | 0.4371 | 0.0723 | 10.9105 | 2 |
| 0.3813 | 0.0708 | 11.9157 | 0.3935 | 0.0733 | 9.4615 | 3 |
| 0.3292 | 0.0720 | 11.5732 | 0.3630 | 0.0740 | 9.9885 | 4 |
| 0.2886 | 0.0729 | 11.5171 | 0.3403 | 0.0745 | 9.8042 | 5 |
| 0.2561 | 0.0736 | 11.3173 | 0.3256 | 0.0749 | 9.9431 | 6 |
| 0.2282 | 0.0743 | 11.7308 | 0.3159 | 0.0752 | 9.2086 | 7 |
| 0.2036 | 0.0748 | 11.4503 | 0.3071 | 0.0754 | 9.5236 | 8 |
| 0.1820 | 0.0754 | 11.7175 | 0.3005 | 0.0756 | 10.0755 | 9 |
| 0.1628 | 0.0758 | 11.7056 | 0.2993 | 0.0757 | 9.9497 | 10 |
| 0.1450 | 0.0762 | 11.7637 | 0.2971 | 0.0758 | 10.1481 | 11 |
| 0.1287 | 0.0766 | 11.8509 | 0.3029 | 0.0759 | 10.2042 | 12 |
| 0.1140 | 0.0770 | 12.1100 | 0.3004 | 0.0760 | 10.3873 | 13 |
| 0.0998 | 0.0773 | 11.9502 | 0.3025 | 0.0761 | 10.7066 | 14 |
| 0.0872 | 0.0777 | 12.3196 | 0.3129 | 0.0759 | 10.7707 | 15 |
| 0.0760 | 0.0779 | 12.2637 | 0.3142 | 0.0761 | 10.2638 | 16 |
| 0.0651 | 0.0782 | 12.1215 | 0.3192 | 0.0761 | 10.0750 | 17 |
| 0.0547 | 0.0785 | 12.0551 | 0.3294 | 0.0761 | 10.4732 | 18 |
| 0.0463 | 0.0787 | 11.9677 | 0.3402 | 0.0760 | 10.2814 | 19 |
| 0.0386 | 0.0789 | 11.6855 | 0.3517 | 0.0760 | 10.0599 | 20 |
| 0.0318 | 0.0790 | 11.6314 | 0.3628 | 0.0760 | 9.6652 | 21 |
| 0.0262 | 0.0792 | 11.4603 | 0.3728 | 0.0760 | 10.0035 | 22 |
| 0.0224 | 0.0792 | 11.4330 | 0.3824 | 0.0760 | 9.1995 | 23 |
| 0.0181 | 0.0793 | 11.3124 | 0.3982 | 0.0759 | 9.8710 | 24 |
### Framework versions
- Transformers 4.32.0.dev0
- TensorFlow 2.12.0
- Tokenizers 0.13.3
|
Falah/Iyad_Radi_SDXL1.0_Lora
|
Falah
| 2023-08-13T11:01:53Z | 3 | 2 |
diffusers
|
[
"diffusers",
"text-to-image",
"autotrain",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0",
"region:us"
] |
text-to-image
| 2023-08-13T08:27:32Z |
---
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: photo of a Iyad Radi
tags:
- text-to-image
- diffusers
- autotrain
inference: true
---
# ART Text-to-Image Generation using stabilityai/stable-diffusion-xl-base-1.0
This repository contains code and instructions for using the `stabilityai/stable-diffusion-xl-base-1.0` model from Hugging Face's Transformers library to generate images from textual descriptions. The model utilizes diffusion models for high-quality image synthesis based on the provided text prompts.






## Model Information
- Base Model: stabilityai/stable-diffusion-xl-base-1.0
- Instance Prompt: "photo of Iyad Radi"
- Tags:
- text-to-image
- diffusers
- autotrain
## Inference
To use this model for generating images from text prompts, follow these steps:
1. **Environment Setup:**
Make sure you have Python installed on your system. You can also use a virtual environment for isolation.
2. **Install Dependencies:**
Install the required Python packages by running the following command:
```bash
pip install -r requirements.txt
```
3. **Usage:**
Here is an example of how you can use the `stabilityai/stable-diffusion-xl-base-1.0` model for text-to-image generation in Python using the `diffusers` library.
```python
from diffusers import DiffusionPipeline
import torch
# Initialize the DiffusionPipeline
pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16)
pipe.to("cuda")
# Load the LoRA weights into the pipeline
# (.safetensors files cannot be read with torch.load; pass the directory and weight file name instead)
pipe.load_lora_weights("/path/to/lora_weights", weight_name="pytorch_lora_weights.safetensors")
# Text prompt for image generation
prompt = "photo of Iyad Radi with cat in the pool"
# Generate Images
image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
```
4. **Generated Images:**
The generated images will be saved in the `output_images` directory by default.
## Application in Art and Cinema Industry
This model can be incredibly useful in the art and cinema production industry, especially for creating visuals based on textual descriptions. In the case of Iyad Radi, an Iraqi actor and comedian, this tool can aid in visualizing character designs, scenes, and concepts before actual production. Directors, artists, and producers can use the generated images as a reference to plan and visualize their projects effectively.
## Credits
- Model developed by [stabilityai](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0)
- This repository is created and maintained by [Falah.G.Saleih]
## Disclaimer
Please note that the model's outputs might vary, and the generated images are based on the input text prompts. The model's behavior is influenced by its training data and might not always produce accurate or desired results.
Feel free to experiment, provide feedback, and contribute to this repository if you'd like to enhance its functionality!
---
|
bigmorning/whisper_charsplit_new_0022
|
bigmorning
| 2023-08-13T10:50:13Z | 60 | 0 |
transformers
|
[
"transformers",
"tf",
"whisper",
"automatic-speech-recognition",
"generated_from_keras_callback",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-08-13T10:50:05Z |
---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_keras_callback
model-index:
- name: whisper_charsplit_new_0022
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_charsplit_new_0022
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0318
- Train Accuracy: 0.0790
- Train Wermet: 11.6314
- Validation Loss: 0.3628
- Validation Accuracy: 0.0760
- Validation Wermet: 9.6652
- Epoch: 21
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| 0.8733 | 0.0602 | 13.0686 | 0.6470 | 0.0676 | 11.4066 | 0 |
| 0.5740 | 0.0666 | 12.7778 | 0.5113 | 0.0706 | 11.1022 | 1 |
| 0.4553 | 0.0692 | 12.2404 | 0.4371 | 0.0723 | 10.9105 | 2 |
| 0.3813 | 0.0708 | 11.9157 | 0.3935 | 0.0733 | 9.4615 | 3 |
| 0.3292 | 0.0720 | 11.5732 | 0.3630 | 0.0740 | 9.9885 | 4 |
| 0.2886 | 0.0729 | 11.5171 | 0.3403 | 0.0745 | 9.8042 | 5 |
| 0.2561 | 0.0736 | 11.3173 | 0.3256 | 0.0749 | 9.9431 | 6 |
| 0.2282 | 0.0743 | 11.7308 | 0.3159 | 0.0752 | 9.2086 | 7 |
| 0.2036 | 0.0748 | 11.4503 | 0.3071 | 0.0754 | 9.5236 | 8 |
| 0.1820 | 0.0754 | 11.7175 | 0.3005 | 0.0756 | 10.0755 | 9 |
| 0.1628 | 0.0758 | 11.7056 | 0.2993 | 0.0757 | 9.9497 | 10 |
| 0.1450 | 0.0762 | 11.7637 | 0.2971 | 0.0758 | 10.1481 | 11 |
| 0.1287 | 0.0766 | 11.8509 | 0.3029 | 0.0759 | 10.2042 | 12 |
| 0.1140 | 0.0770 | 12.1100 | 0.3004 | 0.0760 | 10.3873 | 13 |
| 0.0998 | 0.0773 | 11.9502 | 0.3025 | 0.0761 | 10.7066 | 14 |
| 0.0872 | 0.0777 | 12.3196 | 0.3129 | 0.0759 | 10.7707 | 15 |
| 0.0760 | 0.0779 | 12.2637 | 0.3142 | 0.0761 | 10.2638 | 16 |
| 0.0651 | 0.0782 | 12.1215 | 0.3192 | 0.0761 | 10.0750 | 17 |
| 0.0547 | 0.0785 | 12.0551 | 0.3294 | 0.0761 | 10.4732 | 18 |
| 0.0463 | 0.0787 | 11.9677 | 0.3402 | 0.0760 | 10.2814 | 19 |
| 0.0386 | 0.0789 | 11.6855 | 0.3517 | 0.0760 | 10.0599 | 20 |
| 0.0318 | 0.0790 | 11.6314 | 0.3628 | 0.0760 | 9.6652 | 21 |
### Framework versions
- Transformers 4.32.0.dev0
- TensorFlow 2.12.0
- Tokenizers 0.13.3
|
bigmorning/whisper_charsplit_new_0020
|
bigmorning
| 2023-08-13T10:41:26Z | 60 | 0 |
transformers
|
[
"transformers",
"tf",
"whisper",
"automatic-speech-recognition",
"generated_from_keras_callback",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-08-13T10:41:19Z |
---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_keras_callback
model-index:
- name: whisper_charsplit_new_0020
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_charsplit_new_0020
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0463
- Train Accuracy: 0.0787
- Train Wermet: 11.9677
- Validation Loss: 0.3402
- Validation Accuracy: 0.0760
- Validation Wermet: 10.2814
- Epoch: 19
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| 0.8733 | 0.0602 | 13.0686 | 0.6470 | 0.0676 | 11.4066 | 0 |
| 0.5740 | 0.0666 | 12.7778 | 0.5113 | 0.0706 | 11.1022 | 1 |
| 0.4553 | 0.0692 | 12.2404 | 0.4371 | 0.0723 | 10.9105 | 2 |
| 0.3813 | 0.0708 | 11.9157 | 0.3935 | 0.0733 | 9.4615 | 3 |
| 0.3292 | 0.0720 | 11.5732 | 0.3630 | 0.0740 | 9.9885 | 4 |
| 0.2886 | 0.0729 | 11.5171 | 0.3403 | 0.0745 | 9.8042 | 5 |
| 0.2561 | 0.0736 | 11.3173 | 0.3256 | 0.0749 | 9.9431 | 6 |
| 0.2282 | 0.0743 | 11.7308 | 0.3159 | 0.0752 | 9.2086 | 7 |
| 0.2036 | 0.0748 | 11.4503 | 0.3071 | 0.0754 | 9.5236 | 8 |
| 0.1820 | 0.0754 | 11.7175 | 0.3005 | 0.0756 | 10.0755 | 9 |
| 0.1628 | 0.0758 | 11.7056 | 0.2993 | 0.0757 | 9.9497 | 10 |
| 0.1450 | 0.0762 | 11.7637 | 0.2971 | 0.0758 | 10.1481 | 11 |
| 0.1287 | 0.0766 | 11.8509 | 0.3029 | 0.0759 | 10.2042 | 12 |
| 0.1140 | 0.0770 | 12.1100 | 0.3004 | 0.0760 | 10.3873 | 13 |
| 0.0998 | 0.0773 | 11.9502 | 0.3025 | 0.0761 | 10.7066 | 14 |
| 0.0872 | 0.0777 | 12.3196 | 0.3129 | 0.0759 | 10.7707 | 15 |
| 0.0760 | 0.0779 | 12.2637 | 0.3142 | 0.0761 | 10.2638 | 16 |
| 0.0651 | 0.0782 | 12.1215 | 0.3192 | 0.0761 | 10.0750 | 17 |
| 0.0547 | 0.0785 | 12.0551 | 0.3294 | 0.0761 | 10.4732 | 18 |
| 0.0463 | 0.0787 | 11.9677 | 0.3402 | 0.0760 | 10.2814 | 19 |
### Framework versions
- Transformers 4.32.0.dev0
- TensorFlow 2.12.0
- Tokenizers 0.13.3
|
steve-tong/opus-mt-en-zh-tw
|
steve-tong
| 2023-08-13T10:39:43Z | 107 | 2 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"generated_from_trainer",
"base_model:Helsinki-NLP/opus-mt-en-zh",
"base_model:finetune:Helsinki-NLP/opus-mt-en-zh",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-08-13T10:36:48Z |
---
license: apache-2.0
base_model: Helsinki-NLP/opus-mt-en-zh
tags:
- generated_from_trainer
model-index:
- name: opus-mt-en-zh-tw
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-en-zh-tw
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-zh](https://huggingface.co/Helsinki-NLP/opus-mt-en-zh) on the None dataset.
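The card ships no usage example; the following is an assumed minimal sketch (not from the original card) that loads the checkpoint as a standard Marian translation pipeline, the same way the Helsinki-NLP base model is used.
```python
from transformers import pipeline

# Assumed usage: the fine-tuned checkpoint loads like the Helsinki-NLP base model.
translator = pipeline("translation", model="steve-tong/opus-mt-en-zh-tw")
print(translator("Machine translation is fun.")[0]["translation_text"])
```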
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu117
- Datasets 2.14.0
- Tokenizers 0.13.3
|
bigmorning/whisper_charsplit_new_0017
|
bigmorning
| 2023-08-13T10:28:14Z | 59 | 0 |
transformers
|
[
"transformers",
"tf",
"whisper",
"automatic-speech-recognition",
"generated_from_keras_callback",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-08-13T10:28:07Z |
---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_keras_callback
model-index:
- name: whisper_charsplit_new_0017
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_charsplit_new_0017
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0760
- Train Accuracy: 0.0779
- Train Wermet: 12.2637
- Validation Loss: 0.3142
- Validation Accuracy: 0.0761
- Validation Wermet: 10.2638
- Epoch: 16
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| 0.8733 | 0.0602 | 13.0686 | 0.6470 | 0.0676 | 11.4066 | 0 |
| 0.5740 | 0.0666 | 12.7778 | 0.5113 | 0.0706 | 11.1022 | 1 |
| 0.4553 | 0.0692 | 12.2404 | 0.4371 | 0.0723 | 10.9105 | 2 |
| 0.3813 | 0.0708 | 11.9157 | 0.3935 | 0.0733 | 9.4615 | 3 |
| 0.3292 | 0.0720 | 11.5732 | 0.3630 | 0.0740 | 9.9885 | 4 |
| 0.2886 | 0.0729 | 11.5171 | 0.3403 | 0.0745 | 9.8042 | 5 |
| 0.2561 | 0.0736 | 11.3173 | 0.3256 | 0.0749 | 9.9431 | 6 |
| 0.2282 | 0.0743 | 11.7308 | 0.3159 | 0.0752 | 9.2086 | 7 |
| 0.2036 | 0.0748 | 11.4503 | 0.3071 | 0.0754 | 9.5236 | 8 |
| 0.1820 | 0.0754 | 11.7175 | 0.3005 | 0.0756 | 10.0755 | 9 |
| 0.1628 | 0.0758 | 11.7056 | 0.2993 | 0.0757 | 9.9497 | 10 |
| 0.1450 | 0.0762 | 11.7637 | 0.2971 | 0.0758 | 10.1481 | 11 |
| 0.1287 | 0.0766 | 11.8509 | 0.3029 | 0.0759 | 10.2042 | 12 |
| 0.1140 | 0.0770 | 12.1100 | 0.3004 | 0.0760 | 10.3873 | 13 |
| 0.0998 | 0.0773 | 11.9502 | 0.3025 | 0.0761 | 10.7066 | 14 |
| 0.0872 | 0.0777 | 12.3196 | 0.3129 | 0.0759 | 10.7707 | 15 |
| 0.0760 | 0.0779 | 12.2637 | 0.3142 | 0.0761 | 10.2638 | 16 |
### Framework versions
- Transformers 4.32.0.dev0
- TensorFlow 2.12.0
- Tokenizers 0.13.3
|
bigmorning/whisper_charsplit_new_0015
|
bigmorning
| 2023-08-13T10:19:26Z | 59 | 0 |
transformers
|
[
"transformers",
"tf",
"whisper",
"automatic-speech-recognition",
"generated_from_keras_callback",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-08-13T10:19:18Z |
---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_keras_callback
model-index:
- name: whisper_charsplit_new_0015
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_charsplit_new_0015
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0998
- Train Accuracy: 0.0773
- Train Wermet: 11.9502
- Validation Loss: 0.3025
- Validation Accuracy: 0.0761
- Validation Wermet: 10.7066
- Epoch: 14
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| 0.8733 | 0.0602 | 13.0686 | 0.6470 | 0.0676 | 11.4066 | 0 |
| 0.5740 | 0.0666 | 12.7778 | 0.5113 | 0.0706 | 11.1022 | 1 |
| 0.4553 | 0.0692 | 12.2404 | 0.4371 | 0.0723 | 10.9105 | 2 |
| 0.3813 | 0.0708 | 11.9157 | 0.3935 | 0.0733 | 9.4615 | 3 |
| 0.3292 | 0.0720 | 11.5732 | 0.3630 | 0.0740 | 9.9885 | 4 |
| 0.2886 | 0.0729 | 11.5171 | 0.3403 | 0.0745 | 9.8042 | 5 |
| 0.2561 | 0.0736 | 11.3173 | 0.3256 | 0.0749 | 9.9431 | 6 |
| 0.2282 | 0.0743 | 11.7308 | 0.3159 | 0.0752 | 9.2086 | 7 |
| 0.2036 | 0.0748 | 11.4503 | 0.3071 | 0.0754 | 9.5236 | 8 |
| 0.1820 | 0.0754 | 11.7175 | 0.3005 | 0.0756 | 10.0755 | 9 |
| 0.1628 | 0.0758 | 11.7056 | 0.2993 | 0.0757 | 9.9497 | 10 |
| 0.1450 | 0.0762 | 11.7637 | 0.2971 | 0.0758 | 10.1481 | 11 |
| 0.1287 | 0.0766 | 11.8509 | 0.3029 | 0.0759 | 10.2042 | 12 |
| 0.1140 | 0.0770 | 12.1100 | 0.3004 | 0.0760 | 10.3873 | 13 |
| 0.0998 | 0.0773 | 11.9502 | 0.3025 | 0.0761 | 10.7066 | 14 |
### Framework versions
- Transformers 4.32.0.dev0
- TensorFlow 2.12.0
- Tokenizers 0.13.3
|
bigmorning/whisper_charsplit_new_0012
|
bigmorning
| 2023-08-13T10:06:23Z | 59 | 0 |
transformers
|
[
"transformers",
"tf",
"whisper",
"automatic-speech-recognition",
"generated_from_keras_callback",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-08-13T10:06:15Z |
---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_keras_callback
model-index:
- name: whisper_charsplit_new_0012
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_charsplit_new_0012
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1450
- Train Accuracy: 0.0762
- Train Wermet: 11.7637
- Validation Loss: 0.2971
- Validation Accuracy: 0.0758
- Validation Wermet: 10.1481
- Epoch: 11
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| 0.8733 | 0.0602 | 13.0686 | 0.6470 | 0.0676 | 11.4066 | 0 |
| 0.5740 | 0.0666 | 12.7778 | 0.5113 | 0.0706 | 11.1022 | 1 |
| 0.4553 | 0.0692 | 12.2404 | 0.4371 | 0.0723 | 10.9105 | 2 |
| 0.3813 | 0.0708 | 11.9157 | 0.3935 | 0.0733 | 9.4615 | 3 |
| 0.3292 | 0.0720 | 11.5732 | 0.3630 | 0.0740 | 9.9885 | 4 |
| 0.2886 | 0.0729 | 11.5171 | 0.3403 | 0.0745 | 9.8042 | 5 |
| 0.2561 | 0.0736 | 11.3173 | 0.3256 | 0.0749 | 9.9431 | 6 |
| 0.2282 | 0.0743 | 11.7308 | 0.3159 | 0.0752 | 9.2086 | 7 |
| 0.2036 | 0.0748 | 11.4503 | 0.3071 | 0.0754 | 9.5236 | 8 |
| 0.1820 | 0.0754 | 11.7175 | 0.3005 | 0.0756 | 10.0755 | 9 |
| 0.1628 | 0.0758 | 11.7056 | 0.2993 | 0.0757 | 9.9497 | 10 |
| 0.1450 | 0.0762 | 11.7637 | 0.2971 | 0.0758 | 10.1481 | 11 |
### Framework versions
- Transformers 4.32.0.dev0
- TensorFlow 2.12.0
- Tokenizers 0.13.3
|
alxxtexxr/WizardCoder-15B-v1.0-Sharded-8GB
|
alxxtexxr
| 2023-08-13T10:00:50Z | 15 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt_bigcode",
"text-generation",
"arxiv:2306.08568",
"arxiv:2304.12244",
"license:bigscience-openrail-m",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-08-12T11:52:53Z |
---
license: bigscience-openrail-m
pipeline_tag: text-generation
---
# Disclaimer: I do not own the weights of WizardCoder-15B-V1.0, nor did I train the model. I only sharded or split the model weights.
The actual weights can be found [here](https://huggingface.co/WizardLM/WizardCoder-15B-V1.0).
The rest of the README is copied from the same page listed above.
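Loading a sharded checkpoint is the same `from_pretrained` call as for a monolithic one; the sketch below is an assumed example (not from the original card), and the dtype and device settings are illustrative choices for fitting the 15B model on limited hardware.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Sharded checkpoints load through the same API; low_cpu_mem_usage streams the
# shards one at a time instead of materializing the whole model in RAM first
# (this and device_map="auto" require the `accelerate` package).
tokenizer = AutoTokenizer.from_pretrained("alxxtexxr/WizardCoder-15B-v1.0-Sharded-8GB")
model = AutoModelForCausalLM.from_pretrained(
    "alxxtexxr/WizardCoder-15B-v1.0-Sharded-8GB",
    torch_dtype=torch.float16,
    low_cpu_mem_usage=True,
    device_map="auto",
)
```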
This is the Full-Weight of WizardCoder.
**Repository**: https://github.com/nlpxucan/WizardLM/tree/main/WizardCoder
**Twitter**: https://twitter.com/WizardLM_AI/status/1669109414559911937
**Paper**: [WizardCoder: Empowering Code Large Language Models with Evol-Instruct](https://arxiv.org/abs/2306.08568)
## WizardLM: Empowering Large Pre-Trained Language Models to Follow Complex Instructions
<p align="center">
🤗 <a href="https://huggingface.co/WizardLM" target="_blank">HF Repo</a> • 🐦 <a href="https://twitter.com/WizardLM_AI" target="_blank">Twitter</a> • 📃 <a href="https://arxiv.org/abs/2304.12244" target="_blank">[WizardLM]</a> • 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> <br>
</p>
<p align="center">
👋 Join our <a href="https://discord.gg/bpmeZD7V" target="_blank">Discord</a>
</p>
## News
- 🔥 🔥 🔥 [08/11/2023] We release **WizardMath** Models.
- 🔥 Our **WizardMath-70B-V1.0** model slightly outperforms some closed-source LLMs on the GSM8K, including **ChatGPT 3.5**, **Claude Instant 1** and **PaLM 2 540B**.
- 🔥 Our **WizardMath-70B-V1.0** model achieves **81.6 pass@1** on the [GSM8k Benchmarks](https://github.com/openai/grade-school-math), which is **24.8** points higher than the SOTA open-source LLM.
- 🔥 Our **WizardMath-70B-V1.0** model achieves **22.7 pass@1** on the [MATH Benchmarks](https://github.com/hendrycks/math), which is **9.2** points higher than the SOTA open-source LLM.
| Model | Checkpoint | Paper | GSM8k | MATH |Online Demo| License|
| ----- |------| ---- |------|-------| ----- | ----- |
| WizardMath-70B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardMath-70B-V1.0" target="_blank">HF Link</a> | 📃Coming Soon| **81.6** | **22.7** || <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama 2 License </a> |
| WizardMath-13B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardMath-13B-V1.0" target="_blank">HF Link</a> | 📃Coming Soon| **63.9** | **14.0** | [Demo-1](http://47.103.63.15:50082/), | <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama 2 License </a> |
| WizardMath-7B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardMath-7B-V1.0" target="_blank">HF Link</a> | 📃Coming Soon| **54.9** | **10.7** | [Demo-1](http://47.103.63.15:50080/), [Demo-2](http://47.103.63.15:50081/) | <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama 2 License </a>|
<font size=4>
| <sup>Model</sup> | <sup>Checkpoint</sup> | <sup>Paper</sup> |<sup>MT-Bench</sup> | <sup>AlpacaEval</sup> | <sup>WizardEval</sup> | <sup>HumanEval</sup> | <sup>License</sup>|
| ----- |------| ---- |------|-------| ----- | ----- | ----- |
| <sup>WizardLM-13B-V1.2</sup> | <sup>🤗 <a href="https://huggingface.co/WizardLM/WizardLM-13B-V1.2" target="_blank">HF Link</a> </sup>| | <sup>7.06</sup> | <sup>89.17%</sup> | <sup>101.4% </sup>|<sup>36.6 pass@1</sup>|<sup> <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama 2 License </a></sup> |
| <sup>WizardLM-13B-V1.1</sup> |<sup> 🤗 <a href="https://huggingface.co/WizardLM/WizardLM-13B-V1.1" target="_blank">HF Link</a> </sup> | | <sup>6.76</sup> |<sup>86.32%</sup> | <sup>99.3% </sup> |<sup>25.0 pass@1</sup>| <sup>Non-commercial</sup>|
| <sup>WizardLM-30B-V1.0</sup> | <sup>🤗 <a href="https://huggingface.co/WizardLM/WizardLM-30B-V1.0" target="_blank">HF Link</a></sup> | | <sup>7.01</sup> | | <sup>97.8% </sup> | <sup>37.8 pass@1</sup>| <sup>Non-commercial</sup> |
| <sup>WizardLM-13B-V1.0</sup> | <sup>🤗 <a href="https://huggingface.co/WizardLM/WizardLM-13B-V1.0" target="_blank">HF Link</a> </sup> | | <sup>6.35</sup> | <sup>75.31%</sup> | <sup>89.1% </sup> |<sup> 24.0 pass@1 </sup> | <sup>Non-commercial</sup>|
| <sup>WizardLM-7B-V1.0 </sup>| <sup>🤗 <a href="https://huggingface.co/WizardLM/WizardLM-7B-V1.0" target="_blank">HF Link</a> </sup> |<sup> 📃 <a href="https://arxiv.org/abs/2304.12244" target="_blank">[WizardLM]</a> </sup>| | | <sup>78.0% </sup> |<sup>19.1 pass@1 </sup>|<sup> Non-commercial</sup>|
| <sup>WizardCoder-15B-V1.0</sup> | <sup> 🤗 <a href="https://huggingface.co/WizardLM/WizardCoder-15B-V1.0" target="_blank">HF Link</a></sup> | <sup>📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a></sup> | || |<sup> 57.3 pass@1 </sup> | <sup> <a href="https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement" target="_blank">OpenRAIL-M</a></sup> |
</font>
# WizardCoder: Empowering Code Large Language Models with Evol-Instruct
To develop our WizardCoder model, we begin by adapting the Evol-Instruct method specifically for coding tasks. This involves tailoring the prompt to the domain of code-related instructions. Subsequently, we fine-tune the Code LLM, StarCoder, utilizing the newly created instruction-following training set.
## News
- 🔥 Our **WizardCoder-15B-v1.0** model achieves the **57.3 pass@1** on the [HumanEval Benchmarks](https://github.com/openai/human-eval), which is **22.3** points higher than the SOTA open-source Code LLMs.
- 🔥 We released **WizardCoder-15B-v1.0** trained with **78k** evolved code instructions. Please check out the [Model Weights](https://huggingface.co/WizardLM/WizardCoder-15B-V1.0), and [Paper]().
- 📣 Please refer to our Twitter account https://twitter.com/WizardLM_AI and HuggingFace Repo https://huggingface.co/WizardLM. We will use them to announce any new releases first.
## Comparing WizardCoder with the Closed-Source Models.
🔥 The following figure shows that our **WizardCoder attains the third position in this benchmark**, surpassing Claude-Plus (59.8 vs. 53.0) and Bard (59.8 vs. 44.5). Notably, our model exhibits a substantially smaller size compared to these models.
<p align="center" width="100%">
<a ><img src="https://raw.githubusercontent.com/nlpxucan/WizardLM/main/WizardCoder/imgs/pass1.png" alt="WizardCoder" style="width: 86%; min-width: 300px; display: block; margin: auto;"></a>
</p>
❗**Note: In this study, we copy the scores for HumanEval and HumanEval+ from the [LLM-Humaneval-Benchmarks](https://github.com/my-other-github-account/llm-humaneval-benchmarks). Notably, all the mentioned models generate code solutions for each problem utilizing a **single attempt**, and the resulting pass rate percentage is reported. Our **WizardCoder** generates answers using greedy decoding and tests with the same [code](https://github.com/evalplus/evalplus).**
## Comparing WizardCoder with the Open-Source Models.
The following table clearly demonstrates that our **WizardCoder** exhibits a substantial performance advantage over all the open-source models. ❗**If you are confused with the different scores of our model (57.3 and 59.8), please check the Notes.**
| Model | HumanEval Pass@1 | MBPP Pass@1 |
|------------------|------------------|-------------|
| CodeGen-16B-Multi| 18.3 |20.9 |
| CodeGeeX | 22.9 |24.4 |
| LLaMA-33B | 21.7 |30.2 |
| LLaMA-65B | 23.7 |37.7 |
| PaLM-540B | 26.2 |36.8 |
| PaLM-Coder-540B | 36.0 |47.0 |
| PaLM 2-S | 37.6 |50.0 |
| CodeGen-16B-Mono | 29.3 |35.3 |
| Code-Cushman-001 | 33.5 |45.9 |
| StarCoder-15B | 33.6 |43.6* |
| InstructCodeT5+ | 35.0 |-- |
| WizardLM-30B 1.0| 37.8 |-- |
| WizardCoder-15B 1.0 | **57.3** |**51.8** |
❗**Note: The reproduced result of StarCoder on MBPP.**
❗**Note: The above table conducts a comprehensive comparison of our **WizardCoder** with other models on the HumanEval and MBPP benchmarks. We adhere to the approach outlined in previous studies by generating **20 samples** for each problem to estimate the pass@1 score and evaluate with the same [code](https://github.com/openai/human-eval/tree/master). The scores of GPT4 and GPT3.5 reported by [OpenAI](https://openai.com/research/gpt-4) are 67.0 and 48.1 (maybe these are the early version GPT4&3.5).**
## Call for Feedback
We welcome everyone to evaluate WizardCoder with your professional and difficult instructions, and to show us examples of poor performance along with your suggestions in the [issue discussion](https://github.com/nlpxucan/WizardLM/issues) area. We are focusing on improving Evol-Instruct now and hope to relieve existing weaknesses and issues in the next version of WizardCoder. After that, we will open-source the code and pipeline of the up-to-date Evol-Instruct algorithm and work with you to improve it.
## Contents
1. [Online Demo](#online-demo)
2. [Fine-tuning](#fine-tuning)
3. [Inference](#inference)
4. [Evaluation](#evaluation)
5. [Citation](#citation)
6. [Disclaimer](#disclaimer)
## Online Demo
We will provide our latest models for you to try for as long as possible. If you find a link is not working, please try another one. At the same time, please try as many **real-world** and **challenging** code-related problems as possible that you encounter in your work and life. We will continue to evolve our models with your feedback.
## Fine-tuning
We fine-tune WizardCoder using the modified code `train.py` from [Llama-X](https://github.com/AetherCortex/Llama-X).
We fine-tune StarCoder-15B with the following hyperparameters:
| Hyperparameter | StarCoder-15B |
|----------------|---------------|
| Batch size | 512 |
| Learning rate | 2e-5 |
| Epochs | 3 |
| Max length | 2048 |
| Warmup step | 30 |
| LR scheduler | cosine |
To reproduce our fine-tuning of WizardCoder, please follow these steps:
1. According to the instructions of [Llama-X](https://github.com/AetherCortex/Llama-X), install the environment, download the training code, and deploy. (Note: `deepspeed==0.9.2` and `transformers==4.29.2`)
2. Replace the `train.py` with the `train_wizardcoder.py` in our repo (`src/train_wizardcoder.py`)
3. Log in to Hugging Face:
```bash
huggingface-cli login
```
4. Execute the following training command:
```bash
deepspeed train_wizardcoder.py \
--model_name_or_path "bigcode/starcoder" \
--data_path "/your/path/to/code_instruction_data.json" \
--output_dir "/your/path/to/ckpt" \
--num_train_epochs 3 \
--model_max_length 2048 \
--per_device_train_batch_size 16 \
--per_device_eval_batch_size 1 \
--gradient_accumulation_steps 4 \
--evaluation_strategy "no" \
--save_strategy "steps" \
--save_steps 50 \
--save_total_limit 2 \
--learning_rate 2e-5 \
--warmup_steps 30 \
--logging_steps 2 \
--lr_scheduler_type "cosine" \
--report_to "tensorboard" \
--gradient_checkpointing True \
--deepspeed configs/deepspeed_config.json \
--fp16 True
```
## Inference
We provide the decoding script for WizardCoder, which reads an input file, generates a response for each sample, and consolidates the results into an output file.
You can specify `base_model`, `input_data_path` and `output_data_path` in `src\inference_wizardcoder.py` to set the decoding model, path of input file and path of output file.
```bash
pip install jsonlines
```
The decoding command is:
```
python src\inference_wizardcoder.py \
--base_model "/your/path/to/ckpt" \
--input_data_path "/your/path/to/input/data.jsonl" \
--output_data_path "/your/path/to/output/result.jsonl"
```
The format of `data.jsonl` should be:
```
{"idx": 11, "Instruction": "Write a Python code to count 1 to 10."}
{"idx": 12, "Instruction": "Write a Jave code to sum 1 to 10."}
```
The prompt for our WizardCoder in `src\inference_wizardcoder.py` is:
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{instruction}
### Response:
```
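For a self-contained alternative to the repo script, the sketch below is an assumed plain-`transformers` equivalent: it wraps one instruction in the prompt template above and decodes greedily. The checkpoint path and generation settings are placeholders, not values from the original card.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "/your/path/to/ckpt"  # placeholder, as in the decoding command above
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path, device_map="auto")

instruction = "Write a Python code to count 1 to 10."
# Exactly the prompt template shown above.
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    f"### Instruction:\n{instruction}\n\n### Response:"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
# Strip the prompt tokens and print only the generated response.
print(tokenizer.decode(outputs[0][inputs.input_ids.shape[1]:], skip_special_tokens=True))
```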
## Evaluation
We provide the evaluation script on HumanEval for WizardCoder.
1. According to the instructions of [HumanEval](https://github.com/openai/human-eval), install the environment.
2. Run the following script to generate the answer.
```bash
model="/path/to/your/model"
temp=0.2
max_len=2048
pred_num=200
num_seqs_per_iter=2
output_path=preds/T${temp}_N${pred_num}
mkdir -p ${output_path}
echo 'Output path: '$output_path
echo 'Model to eval: '$model
# 164 problems, 21 per GPU if GPU=8
index=0
gpu_num=8
for ((i = 0; i < $gpu_num; i++)); do
start_index=$((i * 21))
end_index=$(((i + 1) * 21))
gpu=$((i))
echo 'Running process #' ${i} 'from' $start_index 'to' $end_index 'on GPU' ${gpu}
((index++))
(
CUDA_VISIBLE_DEVICES=$gpu python humaneval_gen.py --model ${model} \
--start_index ${start_index} --end_index ${end_index} --temperature ${temp} \
--num_seqs_per_iter ${num_seqs_per_iter} --N ${pred_num} --max_len ${max_len} --output_path ${output_path}
) &
if (($index % $gpu_num == 0)); then wait; fi
done
```
3. Run the post processing code `src/process_humaneval.py` to collect the code completions from all answer files.
```bash
output_path=preds/T${temp}_N${pred_num}
echo 'Output path: '$output_path
python process_humaneval.py --path ${output_path} --out_path ${output_path}.jsonl --add_prompt
evaluate_functional_correctness ${output_path}.jsonl
```
## Citation
Please cite the repo if you use the data or code in this repo.
```
@misc{luo2023wizardcoder,
title={WizardCoder: Empowering Code Large Language Models with Evol-Instruct},
author={Ziyang Luo and Can Xu and Pu Zhao and Qingfeng Sun and Xiubo Geng and Wenxiang Hu and Chongyang Tao and Jing Ma and Qingwei Lin and Daxin Jiang},
year={2023},
}
```
## Disclaimer
The resources, including code, data, and model weights, associated with this project are restricted for academic research purposes only and cannot be used for commercial purposes. The content produced by any version of WizardCoder is influenced by uncontrollable variables such as randomness, and therefore, the accuracy of the output cannot be guaranteed by this project. This project does not accept any legal liability for the content of the model output, nor does it assume responsibility for any losses incurred due to the use of associated resources and output results.
|
mrvincenzo/dqn-SpaceInvadersNoFrameskip-v4
|
mrvincenzo
| 2023-08-13T09:48:54Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-13T09:48:13Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 872.00 +/- 417.93
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga mrvincenzo -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga mrvincenzo -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga mrvincenzo
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
HachiML/japanese-stablelm-alpha-7b-hh-rlhf-49k-ja-qlora-v2-1.2ep
|
HachiML
| 2023-08-13T09:48:00Z | 1 | 0 |
peft
|
[
"peft",
"dataset:HachiML/hh-rlhf-49k-ja-alpaca-format",
"region:us"
] | null | 2023-08-13T09:46:23Z |
---
library_name: peft
datasets:
- HachiML/hh-rlhf-49k-ja-alpaca-format
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
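The list above maps one-to-one onto a `BitsAndBytesConfig`; the sketch below reconstructs it and attaches the adapter. It is an assumed example: the base-model id is inferred from the adapter name and is not stated in the card.
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Same settings as the list above, expressed as a BitsAndBytesConfig.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)
# Assumed base model (inferred from the adapter name, not stated in the card).
base = AutoModelForCausalLM.from_pretrained(
    "stabilityai/japanese-stablelm-base-alpha-7b",
    quantization_config=bnb_config,
    trust_remote_code=True,
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "HachiML/japanese-stablelm-alpha-7b-hh-rlhf-49k-ja-qlora-v2-1.2ep")
```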
### Framework versions
- PEFT 0.4.0
|
darthPanda/whisper-tiny-urdu
|
darthPanda
| 2023-08-13T09:47:03Z | 86 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"ur",
"dataset:mozilla-foundation/common_voice_13_0",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-08-13T07:25:30Z |
---
language:
- ur
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_13_0
metrics:
- wer
model-index:
- name: Whisper Small Urdu - darth
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 13
type: mozilla-foundation/common_voice_13_0
config: ur
split: test
args: ur
metrics:
- name: Wer
type: wer
value: 59.544821179749185
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Urdu - darth
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the Common Voice 13 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8511
- Wer Ortho: 62.5039
- Wer: 59.5448
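The card includes no inference snippet; a minimal assumed sketch follows (`sample_urdu.wav` is a placeholder path; the pipeline resamples input audio to 16 kHz automatically).
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="darthPanda/whisper-tiny-urdu")
print(asr("sample_urdu.wav")["text"])
```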
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 50
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:-------:|
| 0.673 | 1.08 | 500 | 0.8511 | 62.5039 | 59.5448 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu117
- Datasets 2.14.4
- Tokenizers 0.13.3
|
asenella/MMVAEPlus_beta_25_scale_True_seed_3
|
asenella
| 2023-08-13T09:44:09Z | 0 | 0 | null |
[
"multivae",
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-07-27T19:38:45Z |
---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
bigmorning/whisper_charsplit_new_0004
|
bigmorning
| 2023-08-13T09:31:14Z | 60 | 0 |
transformers
|
[
"transformers",
"tf",
"whisper",
"automatic-speech-recognition",
"generated_from_keras_callback",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-08-13T09:31:06Z |
---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_keras_callback
model-index:
- name: whisper_charsplit_new_0004
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_charsplit_new_0004
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.3813
- Train Accuracy: 0.0708
- Train Wermet: 11.9157
- Validation Loss: 0.3935
- Validation Accuracy: 0.0733
- Validation Wermet: 9.4615
- Epoch: 3
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| 0.8733 | 0.0602 | 13.0686 | 0.6470 | 0.0676 | 11.4066 | 0 |
| 0.5740 | 0.0666 | 12.7778 | 0.5113 | 0.0706 | 11.1022 | 1 |
| 0.4553 | 0.0692 | 12.2404 | 0.4371 | 0.0723 | 10.9105 | 2 |
| 0.3813 | 0.0708 | 11.9157 | 0.3935 | 0.0733 | 9.4615 | 3 |
### Framework versions
- Transformers 4.32.0.dev0
- TensorFlow 2.12.0
- Tokenizers 0.13.3
|
TinToTin/ppo-CartPole-v1
|
TinToTin
| 2023-08-13T09:27:09Z | 0 | 0 | null |
[
"tensorboard",
"CartPole-v1",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-13T09:24:39Z |
---
tags:
- CartPole-v1
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 247.10 +/- 99.41
name: mean_reward
verified: false
---
# PPO Agent Playing CartPole-v1
This is a trained model of a PPO agent playing CartPole-v1.
# Hyperparameters
```python
{'exp_name': 'ppo'
'seed': 1
'torch_deterministic': True
'cuda': True
'track': False
'wandb_project_name': 'cleanRL'
'wandb_entity': None
'capture_video': False
'env_id': 'CartPole-v1'
'total_timesteps': 100000
'learning_rate': 0.00025
'num_envs': 4
'num_steps': 128
'anneal_lr': True
'gae': True
'gamma': 0.99
'gae_lambda': 0.95
'num_minibatches': 4
'update_epochs': 4
'norm_adv': True
'clip_coef': 0.2
'clip_vloss': True
'ent_coef': 0.01
'vf_coef': 0.5
'max_grad_norm': 0.5
'target_kl': None
'repo_id': 'Thineshan/ppo-CartPole-v1'
'batch_size': 512
'minibatch_size': 128}
```
|
moraxgiga/llama-2-7b-Gokul_datadolly
|
moraxgiga
| 2023-08-13T09:09:24Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-28T09:17:14Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.4.0
|
abhijeet2022/Taxi-v3
|
abhijeet2022
| 2023-08-13T09:03:06Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-13T08:10:40Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="abhijeet2022/Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
fathyshalab/mdcsi-finanzen-setfit
|
fathyshalab
| 2023-08-13T08:57:01Z | 7 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"roberta",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] |
text-classification
| 2023-08-13T08:56:11Z |
---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# fathyshalab/mdcsi-finanzen-setfit
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("fathyshalab/mdcsi-finanzen-setfit")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
caffeinatedwoof/whisper-tiny-minds14-enUS
|
caffeinatedwoof
| 2023-08-13T08:55:58Z | 75 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:PolyAI/minds14",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-08-13T05:59:19Z |
---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_trainer
datasets:
- PolyAI/minds14
metrics:
- wer
model-index:
- name: whisper-tiny-minds14-enUS
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: PolyAI/minds14
type: PolyAI/minds14
config: en-US
split: train
args: en-US
metrics:
- name: Wer
type: wer
value: 30.3873431533006
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-tiny-minds14-enUS
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the PolyAI/minds14 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7518
- Wer Ortho: 30.8480
- Wer: 30.3873
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 50
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:-------:|
| 0.0006 | 35.71 | 500 | 0.7518 | 30.8480 | 30.3873 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
Sandiago21/llama-2-13b-hf-prompt-answering
|
Sandiago21
| 2023-08-13T08:49:51Z | 0 | 2 | null |
[
"pytorch",
"generated_from_trainer",
"text-generation",
"multilingual",
"dataset:conversations",
"region:us"
] |
text-generation
| 2023-08-05T17:55:00Z |
---
language:
- multilingual
tags:
- generated_from_trainer
datasets:
- conversations
model-index:
- name: llama-2-13b-hf-prompt-answering
results: []
pipeline_tag: text-generation
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama-2-13b-hf-prompt-answering
This model is a fine-tuned version of [meta-llama/Llama-2-13b-hf](https://huggingface.co/meta-llama/Llama-2-13b-hf) on the CONVERSATIONS dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- num_epochs: 1
### Framework versions
- Transformers 4.30.0.dev0
- Pytorch 2.0.1+cu117
- Tokenizers 0.13.3
|
ldhldh/_diff_big_12kstep
|
ldhldh
| 2023-08-13T08:44:31Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-13T08:44:28Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
asenella/MMVAEPlus_beta_25_scale_True_seed_2
|
asenella
| 2023-08-13T08:43:16Z | 0 | 0 | null |
[
"multivae",
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-07-27T17:15:22Z |
---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
Ryukijano/lora-trained-xl-anime_colab
|
Ryukijano
| 2023-08-13T08:39:09Z | 3 | 4 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] |
text-to-image
| 2023-08-13T06:05:06Z |
---
license: openrail++
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: anime prompts
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - Ryukijano/lora-trained-xl-anime_colab
These are LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained on anime prompts using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
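A minimal inference sketch with `diffusers` (the pipeline call and `load_lora_weights` are standard diffusers APIs; the prompt is an illustrative assumption):
```python
import torch
from diffusers import AutoencoderKL, DiffusionPipeline

# The fp16-fix VAE mentioned above, to avoid fp16 decoding artifacts
vae = AutoencoderKL.from_pretrained(
    "madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16
)

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", vae=vae, torch_dtype=torch.float16
).to("cuda")

# Load the LoRA adaption weights from this repo
pipe.load_lora_weights("Ryukijano/lora-trained-xl-anime_colab")

image = pipe("anime prompts, a girl under cherry blossoms").images[0]
image.save("sample.png")
```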
|
asenella/MMVAEPlus_beta_25_scale_True_seed_0
|
asenella
| 2023-08-13T08:29:36Z | 0 | 0 | null |
[
"multivae",
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-07-27T16:49:53Z |
---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
fathyshalab/mdcsi-moebel-einrichtungshaeuser-setfit
|
fathyshalab
| 2023-08-13T08:25:32Z | 5 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"roberta",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] |
text-classification
| 2023-08-13T08:24:41Z |
---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# fathyshalab/mdcsi-moebel-einrichtungshaeuser-setfit
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("fathyshalab/mdcsi-moebel-einrichtungshaeuser-setfit")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
chriskim2273/IOTNation_Classification_Model_0.7_5K_AND_ORIGINAL_DATASET_ROBERTA
|
chriskim2273
| 2023-08-13T08:24:12Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:deepset/roberta-base-squad2",
"base_model:finetune:deepset/roberta-base-squad2",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-13T07:57:43Z |
---
license: cc-by-4.0
base_model: deepset/roberta-base-squad2
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: IOTNation_Classification_Model_0.7_5K_AND_ORIGINAL_DATASET_ROBERTA
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# IOTNation_Classification_Model_0.7_5K_AND_ORIGINAL_DATASET_ROBERTA
This model is a fine-tuned version of [deepset/roberta-base-squad2](https://huggingface.co/deepset/roberta-base-squad2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0365
- Accuracy: 0.9927
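A minimal inference sketch (the label names and their meanings are not documented in this card, so the raw pipeline output is printed):
```python
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="chriskim2273/IOTNation_Classification_Model_0.7_5K_AND_ORIGINAL_DATASET_ROBERTA",
)

# Placeholder input; the card does not document the expected text format
print(clf("Example headline about an IoT company raising a funding round"))
```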
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
GhifSmile/distilbert-base-uncased-DSC-new-cllbck
|
GhifSmile
| 2023-08-13T08:19:27Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-13T08:01:21Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
model-index:
- name: distilbert-base-uncased-DSC-new-cllbck
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-DSC-new-cllbck
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1160
- Accuracy: 0.9817
- Precision: 0.9831
- Recall: 0.9818
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
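The card reports accuracy, precision and recall; a `compute_metrics` hook along these lines would produce such numbers with the `Trainer` (a sketch, not the original training code; macro averaging is an assumption):
```python
import numpy as np
from sklearn.metrics import accuracy_score, precision_score, recall_score

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {
        "accuracy": accuracy_score(labels, preds),
        # macro averaging is an assumption; the card does not state the variant
        "precision": precision_score(labels, preds, average="macro"),
        "recall": recall_score(labels, preds, average="macro"),
    }
```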
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|
| 0.5329 | 1.0 | 618 | 0.1812 | 0.9511 | 0.9577 | 0.9518 |
| 0.0853 | 2.0 | 1236 | 0.1160 | 0.9817 | 0.9831 | 0.9818 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
abhijeet2022/q-FrozenLake-v1-4x4-noSlippery
|
abhijeet2022
| 2023-08-13T08:02:04Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-13T08:01:59Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
# load_from_hub is the helper defined in the Deep RL course notebook
model = load_from_hub(repo_id="abhijeet2022/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
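Once loaded, a greedy rollout simply takes the argmax action from the Q-table at each state (a sketch; the `"qtable"` key follows the Deep RL course's pickle layout and is an assumption):
```python
import numpy as np

state, info = env.reset()
done = False
while not done:
    action = int(np.argmax(model["qtable"][state]))  # greedy policy
    state, reward, terminated, truncated, info = env.step(action)
    done = terminated or truncated
```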
|
nivoko1022/mnivoko1022
|
nivoko1022
| 2023-08-13T07:58:01Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-08-13T07:58:01Z |
---
license: creativeml-openrail-m
---
|
modelmaker/luna
|
modelmaker
| 2023-08-13T07:55:20Z | 0 | 0 |
diffusers
|
[
"diffusers",
"cat",
"ay",
"dataset:Open-Orca/OpenOrca",
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-08-13T07:53:41Z |
---
license: creativeml-openrail-m
datasets:
- Open-Orca/OpenOrca
language:
- ay
metrics:
- accuracy
library_name: diffusers
tags:
- cat
---
|
timjwhite/distilhubert-finetuned-gtzan
|
timjwhite
| 2023-08-13T07:21:22Z | 168 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"hubert",
"audio-classification",
"generated_from_trainer",
"dataset:marsyas/gtzan",
"base_model:Sandiago21/distilhubert-finetuned-gtzan",
"base_model:finetune:Sandiago21/distilhubert-finetuned-gtzan",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2023-07-11T10:54:34Z |
---
license: apache-2.0
base_model: Sandiago21/distilhubert-finetuned-gtzan
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
metrics:
- accuracy
model-index:
- name: distilhubert-finetuned-gtzan
results:
- task:
name: Audio Classification
type: audio-classification
dataset:
name: GTZAN
type: marsyas/gtzan
config: all
split: train
args: all
metrics:
- name: Accuracy
type: accuracy
value: 0.88
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilhubert-finetuned-gtzan
This model is a fine-tuned version of [Sandiago21/distilhubert-finetuned-gtzan](https://huggingface.co/Sandiago21/distilhubert-finetuned-gtzan) on the GTZAN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9951
- Accuracy: 0.88
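A minimal inference sketch with the `transformers` audio-classification pipeline (the audio path is a placeholder):
```python
from transformers import pipeline

clf = pipeline(
    "audio-classification",
    model="timjwhite/distilhubert-finetuned-gtzan",
)

# Returns genre labels with scores for a local audio file (placeholder path)
print(clf("song.wav"))
```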
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 40
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0951 | 1.0 | 57 | 0.5566 | 0.87 |
| 0.0629 | 2.0 | 114 | 0.6819 | 0.83 |
| 0.0231 | 3.0 | 171 | 0.6118 | 0.86 |
| 0.0159 | 4.0 | 228 | 0.9208 | 0.83 |
| 0.0374 | 5.0 | 285 | 0.8746 | 0.85 |
| 0.1714 | 6.0 | 342 | 0.6671 | 0.87 |
| 0.2148 | 7.0 | 399 | 1.1850 | 0.79 |
| 0.0147 | 8.0 | 456 | 1.0551 | 0.79 |
| 0.0788 | 9.0 | 513 | 1.5179 | 0.79 |
| 0.0015 | 10.0 | 570 | 1.3290 | 0.8 |
| 0.0049 | 11.0 | 627 | 1.0943 | 0.85 |
| 0.0012 | 12.0 | 684 | 1.0667 | 0.85 |
| 0.0043 | 13.0 | 741 | 1.1816 | 0.82 |
| 0.0015 | 14.0 | 798 | 0.9108 | 0.88 |
| 0.0011 | 15.0 | 855 | 1.0289 | 0.87 |
| 0.001 | 16.0 | 912 | 0.7696 | 0.87 |
| 0.0006 | 17.0 | 969 | 0.8539 | 0.87 |
| 0.1001 | 18.0 | 1026 | 1.1917 | 0.78 |
| 0.0017 | 19.0 | 1083 | 1.0016 | 0.83 |
| 0.0525 | 20.0 | 1140 | 0.9513 | 0.88 |
| 0.0004 | 21.0 | 1197 | 0.9268 | 0.86 |
| 0.0003 | 22.0 | 1254 | 1.1209 | 0.82 |
| 0.0003 | 23.0 | 1311 | 0.9270 | 0.87 |
| 0.0003 | 24.0 | 1368 | 1.1148 | 0.84 |
| 0.0003 | 25.0 | 1425 | 1.0507 | 0.85 |
| 0.0002 | 26.0 | 1482 | 1.0156 | 0.86 |
| 0.0002 | 27.0 | 1539 | 1.0062 | 0.87 |
| 0.0002 | 28.0 | 1596 | 1.0124 | 0.87 |
| 0.0002 | 29.0 | 1653 | 1.0154 | 0.87 |
| 0.0002 | 30.0 | 1710 | 1.0092 | 0.88 |
| 0.0002 | 31.0 | 1767 | 1.0123 | 0.88 |
| 0.0175 | 32.0 | 1824 | 0.9928 | 0.88 |
| 0.0002 | 33.0 | 1881 | 1.0014 | 0.88 |
| 0.0115 | 34.0 | 1938 | 0.9989 | 0.88 |
| 0.0001 | 35.0 | 1995 | 0.9871 | 0.88 |
| 0.0001 | 36.0 | 2052 | 0.9920 | 0.88 |
| 0.0002 | 37.0 | 2109 | 0.9974 | 0.88 |
| 0.0002 | 38.0 | 2166 | 0.9950 | 0.88 |
| 0.0001 | 39.0 | 2223 | 0.9997 | 0.88 |
| 0.0001 | 40.0 | 2280 | 0.9951 | 0.88 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
caiAtSNU/ppo-from-scratch-LunarLander-v2
|
caiAtSNU
| 2023-08-13T07:10:14Z | 0 | 0 | null |
[
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-13T07:07:30Z |
---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -126.67 +/- 91.01
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo_solution'
'seed': 1
'torch_deterministic': True
'cuda': True
'track': False
'wandb_project_name': 'cleanRL'
'wandb_entity': None
'capture_video': False
'env_id': 'LunarLander-v2'
'total_timesteps': 50000
'learning_rate': 0.00025
'num_envs': 4
'num_steps': 128
'anneal_lr': True
'gae': True
'gamma': 0.99
'gae_lambda': 0.95
'num_minibatches': 4
'update_epochs': 4
'norm_adv': True
'clip_coef': 0.2
'clip_vloss': True
'ent_coef': 0.01
'vf_coef': 0.5
'max_grad_norm': 0.5
'target_kl': None
'repo_id': 'caiAtSNU/ppo-from-scratch-LunarLander-v2'
'batch_size': 512
'minibatch_size': 128}
```
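Note that the derived sizes are consistent with the settings above: `batch_size = num_envs * num_steps = 4 * 128 = 512`, and with `num_minibatches = 4`, `minibatch_size = 512 / 4 = 128`.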
|
LongSafari/hyenadna-medium-160k-seqlen
|
LongSafari
| 2023-08-13T07:05:42Z | 17 | 2 |
transformers
|
[
"transformers",
"arxiv:2306.15794",
"arxiv:2302.10866",
"license:bsd-3-clause",
"endpoints_compatible",
"region:us"
] | null | 2023-06-23T05:23:10Z |
---
license: bsd-3-clause
---
# HyenaDNA
Welcome! HyenaDNA is a long-range genomic foundation model pretrained on context lengths of up to **1 million tokens** at **single nucleotide resolution**.
See below for an [overview](#model) of the model and training. Better yet, check out these resources.
**Resources:**
- [arxiv](https://arxiv.org/abs/2306.15794)
- [blog](https://hazyresearch.stanford.edu/blog/2023-06-29-hyena-dna)
- [colab](https://colab.research.google.com/drive/1wyVEQd4R3HYLTUOXEEQmp_I8aNC_aLhL?usp=sharing)
- [github](https://github.com/HazyResearch/hyena-dna)
**Links to all HuggingFace models:**
- [tiny-1k](https://huggingface.co/LongSafari/hyenadna-tiny-1k-seqlen/tree/main)
- [tiny-1k-d256](https://huggingface.co/LongSafari/hyenadna-tiny-1k-seqlen-d256/tree/main)
- [small-32k](https://huggingface.co/LongSafari/hyenadna-small-32k-seqlen/tree/main)
- [medium-160k](https://huggingface.co/LongSafari/hyenadna-medium-160k-seqlen/tree/main)
- [medium-450k](https://huggingface.co/LongSafari/hyenadna-medium-450k-seqlen/tree/main)
- [large-1m](https://huggingface.co/LongSafari/hyenadna-large-1m-seqlen/tree/main)
See [GPU requirements](#hardware) for each model.
### Sample snippet
This code example lets you select which pretrained model to load from HuggingFace, perform inference and get embeddings.
See the [colab](https://colab.research.google.com/drive/1wyVEQd4R3HYLTUOXEEQmp_I8aNC_aLhL?usp=sharing) for these classes, or the ['huggingface.py'](https://github.com/HazyResearch/hyena-dna/blob/main/huggingface.py) script in the main [github](https://github.com/HazyResearch/hyena-dna).
```python
import torch

# HyenaDNAPreTrainedModel and CharacterTokenizer come from the colab /
# huggingface.py script linked above; they are not part of the transformers package
device = 'cuda' if torch.cuda.is_available() else 'cpu'

# instantiate pretrained model
pretrained_model_name = 'hyenadna-medium-450k-seqlen'
max_length = 450_000
model = HyenaDNAPreTrainedModel.from_pretrained(
'./checkpoints',
pretrained_model_name,
)
# create tokenizer, no training involved :)
tokenizer = CharacterTokenizer(
characters=['A', 'C', 'G', 'T', 'N'], # add DNA characters
model_max_length=max_length,
)
# create a sample
sequence = 'ACTG' * int(max_length/4)
tok_seq = tokenizer(sequence)["input_ids"]
# place on device, convert to tensor
tok_seq = torch.LongTensor(tok_seq).unsqueeze(0).to(device) # unsqueeze for batch dim
# prep model and forward
model.to(device)
model.eval() # deterministic
with torch.inference_mode():
embeddings = model(tok_seq)
print(embeddings.shape) # embeddings here!
```
### How to use pretrained weights
- [colab](https://colab.research.google.com/drive/1wyVEQd4R3HYLTUOXEEQmp_I8aNC_aLhL?usp=sharing)
The colab is the easiest entry point: you can fine-tune a small model and do inference on DNA sequences up to 450k on the free tier (T4 GPU), and up to 1 million on the paid tier (A100). It handles all the HuggingFace integration for you, so it's helpful to see this example first.
- [github](https://github.com/HazyResearch/hyena-dna)
Otherwise, check out the main HyenaDNA repo for how to load weights into Pytorch Lightning. We use Pytorch Lightning for pretraining and fine-tuning all of our models. If you want to use our actual pretraining code, you can clone this HuggingFace repo to download the actual weights.ckpt, and then pass it to Pytorch Lightning via command line or config. See the [github](https://github.com/HazyResearch/hyena-dna) README for how to do all that.
If you want a standalone version that's easy to port into your own code (and not tied to our repo or Pytorch Lightning), we have that and a HuggingFace example in ['huggingface.py'](https://github.com/HazyResearch/hyena-dna/blob/main/huggingface.py) too.
### GPU requirements (suggested)
<a name="hardware"></a>
Here are suggestions on the hardware (preferred minimum) we think you can use for each model.
GPU during: Pretrain, fine-tune, inference
- [tiny-1k](https://huggingface.co/LongSafari/hyenadna-tiny-1k-seqlen/tree/main): (T4, T4, T4)
- [small-32k](https://huggingface.co/LongSafari/hyenadna-small-32k-seqlen/tree/main): (A100-40, T4, T4)
- [medium-160k](https://huggingface.co/LongSafari/hyenadna-medium-160k-seqlen/tree/main): (A100-40, A100-40, T4)
- [medium-450k](https://huggingface.co/LongSafari/hyenadna-medium-450k-seqlen/tree/main): (A100-40, A100-40, T4)
- [large-1m](https://huggingface.co/LongSafari/hyenadna-large-1m-seqlen/tree/main): (A100-80, A100-80, A100-40)
T4: 16GB
A100-40: 40GB
A100-80: 80GB
## Model & Training Overview
<a name="model"></a>
HyenaDNA uses a simple stack of [Hyena](https://arxiv.org/abs/2302.10866) operators, which are a subquadratic drop-in replacement for attention in Transformers. The Hyena operator is able to match quality in language modeling by using modified input projections, implicit convolutions and gating, all subquadratic operations.
This enables HyenaDNA to reach context lengths of up to 500x longer than previous genomic Transformer models using dense attention, and train 160x faster at sequence length 1M (compared to Flash Attention).
We use a single character tokenizer with a primary vocab of 4 nucleotides (plus special tokens), enabling the single nucleotide resolution, a first in genomic foundation models. In addition, the implicit long convolution enables a **global receptive field** at each layer.
We pretrain using next token (nucleotide) prediction on the human reference genome (HG38).
HyenaDNA sets new SotA on 23 downstream tasks including predicting regulatory elements, chromatin profiles, and species classification. We also explore what new capabilities open up with long context in genomics, including the first use of in-context learning with soft prompt tuneable tokens and instruction fine-tuning.
Check out our [blog](https://hazyresearch.stanford.edu/blog/2023-06-29-hyena-dna) for more details on HyenaDNA!
### Authors
Eric Nguyen*, Michael Poli*, Marjan Faizi*, Armin Thomas, Callum Birch-Sykes, Michael Wornow, Aman Patel, Clayton Rabideau, Stefano Massaroli, Yoshua Bengio, Stefano Ermon, Stephen Baccus, Chris Ré.
**Contact**
Eric Nguyen, etnguyen@stanford.edu
Michael Poli, poli@stanford.edu
Marjan Faizi, Marjan_Faizi@hms.harvard.edu
## Citation
Feel free to cite us :)
```
@article{nguyen2023hyenadna,
title={HyenaDNA: Long-Range Genomic Sequence Modeling at Single Nucleotide Resolution},
      author={Eric Nguyen and Michael Poli and Marjan Faizi and Armin Thomas and Callum Birch-Sykes and Michael Wornow and Aman Patel and Clayton Rabideau and Stefano Massaroli and Yoshua Bengio and Stefano Ermon and Stephen A. Baccus and Chris Ré},
year={2023},
eprint={2306.15794},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
|
LongSafari/hyenadna-medium-450k-seqlen
|
LongSafari
| 2023-08-13T07:05:18Z | 9 | 7 |
transformers
|
[
"transformers",
"arxiv:2306.15794",
"arxiv:2302.10866",
"license:bsd-3-clause",
"endpoints_compatible",
"region:us"
] | null | 2023-06-23T08:11:59Z |
---
license: bsd-3-clause
---
# HyenaDNA
Welcome! HyenaDNA is a long-range genomic foundation model pretrained on context lengths of up to **1 million tokens** at **single nucleotide resolution**.
See below for an [overview](#model) of the model and training. Better yet, check out these resources.
**Resources:**
- [arxiv](https://arxiv.org/abs/2306.15794)
- [blog](https://hazyresearch.stanford.edu/blog/2023-06-29-hyena-dna)
- [colab](https://colab.research.google.com/drive/1wyVEQd4R3HYLTUOXEEQmp_I8aNC_aLhL?usp=sharing)
- [github](https://github.com/HazyResearch/hyena-dna)
**Links to all HuggingFace models:**
- [tiny-1k](https://huggingface.co/LongSafari/hyenadna-tiny-1k-seqlen/tree/main)
- [tiny-1k-d256](https://huggingface.co/LongSafari/hyenadna-tiny-1k-seqlen-d256/tree/main)
- [small-32k](https://huggingface.co/LongSafari/hyenadna-small-32k-seqlen/tree/main)
- [medium-160k](https://huggingface.co/LongSafari/hyenadna-medium-160k-seqlen/tree/main)
- [medium-450k](https://huggingface.co/LongSafari/hyenadna-medium-450k-seqlen/tree/main)
- [large-1m](https://huggingface.co/LongSafari/hyenadna-large-1m-seqlen/tree/main)
See [GPU requirements](#hardware) for each model.
### Sample snippet
This code example lets you select which pretrained model to load from HuggingFace, perform inference and get embeddings.
See the [colab](https://colab.research.google.com/drive/1wyVEQd4R3HYLTUOXEEQmp_I8aNC_aLhL?usp=sharing) for these classes, or the ['huggingface.py'](https://github.com/HazyResearch/hyena-dna/blob/main/huggingface.py) script in the main [github](https://github.com/HazyResearch/hyena-dna).
```python
import torch

# HyenaDNAPreTrainedModel and CharacterTokenizer come from the colab /
# huggingface.py script linked above; they are not part of the transformers package
device = 'cuda' if torch.cuda.is_available() else 'cpu'

# instantiate pretrained model
pretrained_model_name = 'hyenadna-medium-450k-seqlen'
max_length = 450_000
model = HyenaDNAPreTrainedModel.from_pretrained(
'./checkpoints',
pretrained_model_name,
)
# create tokenizer, no training involved :)
tokenizer = CharacterTokenizer(
characters=['A', 'C', 'G', 'T', 'N'], # add DNA characters
model_max_length=max_length,
)
# create a sample
sequence = 'ACTG' * int(max_length/4)
tok_seq = tokenizer(sequence)["input_ids"]
# place on device, convert to tensor
tok_seq = torch.LongTensor(tok_seq).unsqueeze(0).to(device) # unsqueeze for batch dim
# prep model and forward
model.to(device)
model.eval() # deterministic
with torch.inference_mode():
embeddings = model(tok_seq)
print(embeddings.shape) # embeddings here!
```
### How to use pretrained weights
- [colab](https://colab.research.google.com/drive/1wyVEQd4R3HYLTUOXEEQmp_I8aNC_aLhL?usp=sharing)
The colab is the easiest entry point: you can fine-tune a small model and do inference on DNA sequences up to 450k on the free tier (T4 GPU), and up to 1 million on the paid tier (A100). It handles all the HuggingFace integration for you, so it's helpful to see this example first.
- [github](https://github.com/HazyResearch/hyena-dna)
Otherwise, check out the main HyenaDNA repo for how to load weights into Pytorch Lightning. We use Pytorch Lightning for pretraining and fine-tuning all of our models. If you want to use our actual pretraining code, you can clone this HuggingFace repo to download the actual weights.ckpt, and then pass it to Pytorch Lightning via command line or config. See the [github](https://github.com/HazyResearch/hyena-dna) README for how to do all that.
If you want a standalone version that's easy to port into your own code (and not tied to our repo or Pytorch Lightning), we have that and a HuggingFace example in ['huggingface.py'](https://github.com/HazyResearch/hyena-dna/blob/main/huggingface.py) too.
### GPU requirements (suggested)
<a name="hardware"></a>
Here are suggestions on the hardware (preferred minimum) we think you can use for each model.
GPU during: Pretrain, fine-tune, inference
- [tiny-1k](https://huggingface.co/LongSafari/hyenadna-tiny-1k-seqlen/tree/main): (T4, T4, T4)
- [small-32k](https://huggingface.co/LongSafari/hyenadna-small-32k-seqlen/tree/main): (A100-40, T4, T4)
- [medium-160k](https://huggingface.co/LongSafari/hyenadna-medium-160k-seqlen/tree/main): (A100-40, A100-40, T4)
- [medium-450k](https://huggingface.co/LongSafari/hyenadna-medium-450k-seqlen/tree/main): (A100-40, A100-40, T4)
- [large-1m](https://huggingface.co/LongSafari/hyenadna-large-1m-seqlen/tree/main): (A100-80, A100-80, A100-40)
T4: 16GB
A100-40: 40GB
A100-80: 80GB
## Model & Training Overview
<a name="model"></a>
HyenaDNA uses a simple stack of [Hyena](https://arxiv.org/abs/2302.10866) operators, which are a subquadratic drop-in replacement for attention in Transformers. The Hyena operator is able to match quality in language modeling by using modified input projections, implicit convolutions and gating, all subquadratic operations.
This enables HyenaDNA to reach context lengths of up to 500x longer than previous genomic Transformer models using dense attention, and train 160x faster at sequence length 1M (compared to Flash Attention).
We use a single character tokenizer with a primary vocab of 4 nucleotides (plus special tokens), enabling the single nucleotide resolution, a first in genomic foundation models. In addition, the implicit long convolution enables a **global receptive field** at each layer.
We pretrain using next token (nucleotide) prediction on the human reference genome (HG38).
HyenaDNA sets new SotA on 23 downstream tasks including predicting regulatory elements, chromatin profiles, and species classification. We also explore what new capabilities open up with long context in genomics, including the first use of in-context learning with soft prompt tuneable tokens and instruction fine-tuning.
Check out our [blog](https://hazyresearch.stanford.edu/blog/2023-06-29-hyena-dna) for more details on HyenaDNA!
### Authors
Eric Nguyen*, Michael Poli*, Marjan Faizi*, Armin Thomas, Callum Birch-Sykes, Michael Wornow, Aman Patel, Clayton Rabideau, Stefano Massaroli, Yoshua Bengio, Stefano Ermon, Stephen Baccus, Chris Ré.
**Contact**
Eric Nguyen, etnguyen@stanford.edu
Michael Poli, poli@stanford.edu
Marjan Faizi, Marjan_Faizi@hms.harvard.edu
## Citation
Feel free to cite us :)
```
@article{nguyen2023hyenadna,
title={HyenaDNA: Long-Range Genomic Sequence Modeling at Single Nucleotide Resolution},
      author={Eric Nguyen and Michael Poli and Marjan Faizi and Armin Thomas and Callum Birch-Sykes and Michael Wornow and Aman Patel and Clayton Rabideau and Stefano Massaroli and Yoshua Bengio and Stefano Ermon and Stephen A. Baccus and Chris Ré},
year={2023},
eprint={2306.15794},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
|
aidenpan/FillLineGaps
|
aidenpan
| 2023-08-13T07:03:40Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2023-08-08T14:43:44Z |
---
license: apache-2.0
---
# FillLineGaps
This repo holds the model from [FillLineGaps](https://github.com/zhenglinpan/FillLineGaps).
`LightUNet_generator_700_mono_fuji.pth`: model for binarized black-and-white (channel=1) images.
`LightUNet_generator_1000_color_dls.pth`: model for chromatic (channel=3) images.
|
fp16-guy/Cetus-Mix_Whalefall_fp16_cleaned
|
fp16-guy
| 2023-08-13T06:58:15Z | 0 | 4 | null |
[
"text-to-image",
"region:us"
] |
text-to-image
| 2023-07-26T18:24:50Z |
---
pipeline_tag: text-to-image
---
Cetus-Mix Whalefall, but fp16/cleaned - smaller size, same result.
========
///
**[**original checkpoint link**](https://civitai.com/models/6755?modelVersionId=126564)**
*(all rights to the model belong to Eagelaxis)*
---
*[*grid 01*](https://huggingface.co/datasets/fp16-guy/grids/blob/main/cetusMix_Whalefall2%2001.png) *(1.99gb version)*
*[*grid 02*](https://huggingface.co/datasets/fp16-guy/grids/blob/main/cetusMix_Whalefall2%2002%20no%20vae.png) *(1.83gb version - no vae)*
*[*grid 03*](https://huggingface.co/datasets/fp16-guy/grids_inp/blob/main/cetusMix_Whalefall2%20inp%2001%2020230812123319-111-cetusMix_Whalefall2_fp16-Euler%20a-5.5.png) *(1.99gb inpainting version)*
*[*grid 04*](https://huggingface.co/datasets/fp16-guy/grids_inp/blob/main/cetusMix_Whalefall2%20inp%2002%2020230812123519-111-cetusMix_Whalefall2_fp16_no_vae-Euler%20a-5.5.png) *(1.83gb inpainting version - no vae)*
|
bigmorning/whisper_charsplit_0005
|
bigmorning
| 2023-08-13T06:26:40Z | 59 | 0 |
transformers
|
[
"transformers",
"tf",
"whisper",
"automatic-speech-recognition",
"generated_from_keras_callback",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-08-13T06:26:32Z |
---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_keras_callback
model-index:
- name: whisper_charsplit_0005
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_charsplit_0005
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.0544
- Train Accuracy: 0.0561
- Train Wermet: 9.9365
- Validation Loss: 0.8809
- Validation Accuracy: 0.0623
- Validation Wermet: 10.1087
- Epoch: 4
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| 2.7836 | 0.0245 | 8.8338 | 2.1866 | 0.0348 | 6.5008 | 0 |
| 2.0715 | 0.0354 | 7.5148 | 1.8725 | 0.0410 | 5.8800 | 1 |
| 1.7730 | 0.0412 | 7.4995 | 1.5964 | 0.0467 | 6.7257 | 2 |
| 1.4468 | 0.0478 | 8.1713 | 1.2401 | 0.0544 | 8.7249 | 3 |
| 1.0544 | 0.0561 | 9.9365 | 0.8809 | 0.0623 | 10.1087 | 4 |
### Framework versions
- Transformers 4.32.0.dev0
- TensorFlow 2.12.0
- Tokenizers 0.13.3
|
NocteZeta/ppo-Huggy
|
NocteZeta
| 2023-08-13T06:17:15Z | 5 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-08-13T06:17:05Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: NocteZeta/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Evan-Lin/Bart-large-abs-yelp-entailment
|
Evan-Lin
| 2023-08-13T06:09:54Z | 49 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bart",
"text2text-generation",
"trl",
"reinforcement-learning",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
reinforcement-learning
| 2023-08-13T06:02:49Z |
---
license: apache-2.0
tags:
- trl
- transformers
- reinforcement-learning
---
# TRL Model
This is a [TRL language model](https://github.com/lvwerra/trl) that has been fine-tuned with reinforcement learning to
guide the model outputs according to a value, function, or human feedback. The model can be used for text generation.
## Usage
To use this model for inference, first install the TRL library:
```bash
python -m pip install trl
```
You can then generate text as follows:
```python
from transformers import pipeline
generator = pipeline("text-generation", model="Evan-Lin/Bart-large-abs-yelp-entailment")
outputs = generator("Hello, my llama is cute")
```
If you want to use the model for training or to obtain the outputs from the value head, load the model as follows:
```python
from transformers import AutoTokenizer
from trl import AutoModelForCausalLMWithValueHead
tokenizer = AutoTokenizer.from_pretrained("Evan-Lin/Bart-large-abs-yelp-entailment")
model = AutoModelForCausalLMWithValueHead.from_pretrained("Evan-Lin/Bart-large-abs-yelp-entailment")
inputs = tokenizer("Hello, my llama is cute", return_tensors="pt")
outputs = model(**inputs, labels=inputs["input_ids"])
```
|
keena/inResonance
|
keena
| 2023-08-13T05:58:53Z | 0 | 1 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-05-25T05:03:45Z |
---
license: creativeml-openrail-m
---
<style type="text/css">
.title{
font-size: 25px
}
.top{
margin: 0 0 80px 0
}
</style>
<div class="top">
<h1 class="title">
<span style="color: #cc0000">in</span>Resonance
</h1>
<img src="https://huggingface.co/keena/inResonance/resolve/main/images/header.jpg" width="1000" height="">
<p>We recommend using EasyNegative and DeepNegative, as well as the OrangeMix VAE.</p>
</div>
<div class="comparison">
<h1 class="title">The two model variants</h1>
<img src="https://huggingface.co/keena/inResonance/resolve/main/images/comparison.jpg" width="500" height="">
<p>
"<span style="color: #cc0000">in</span>ResonanceT"ใฏ"<span style="color: #cc0000">in</span>ResonanceZ"ใฎๆน่ฏ็ใงใใใๅคใใฎใขใใซใใใผใธใใฆใใพใใ<br>
ไธๆนใๆฉ่ฝ้ขใง่ใใๅชใใฆใใใจใใใใจใฏ็กใใฎใงใใๅฅฝใใชๆนใใไฝฟใใใ ใใใ
</p>
</div>
<p>[Home page under construction]</p>
|
asenella/MMVAEPlus_beta_25_scale_True_seed_1
|
asenella
| 2023-08-13T05:48:28Z | 0 | 0 | null |
[
"multivae",
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-07-27T17:07:31Z |
---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
scoldgrin/ppo-LunarLander-v2
|
scoldgrin
| 2023-08-13T05:48:05Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-13T05:47:44Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 248.55 +/- 12.21
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
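A completed version of the stub might look like this (the checkpoint filename is an assumption; check the repo's file list):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

checkpoint = load_from_hub(
    repo_id="scoldgrin/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",  # assumed filename
)
model = PPO.load(checkpoint)
```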
|
iknow-lab/ko-flan-zero-v0-0731
|
iknow-lab
| 2023-08-13T05:46:38Z | 112 | 0 |
transformers
|
[
"transformers",
"pytorch",
"electra",
"text-classification",
"ko",
"dataset:nsmc",
"dataset:jason9693/APEACH",
"dataset:KETI-AIR/korquad",
"dataset:klue",
"dataset:smilegate-ai/kor_unsmile",
"dataset:kor_nlu",
"dataset:skt/kobest_v1",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-08T13:49:38Z |
---
license: apache-2.0
language:
- ko
pipeline_tag: text-classification
widget:
- text: ์์ ์๋ ์ฃผ๋ง๋ง๋ค ๊ทน์ฅ์ ๋๋ฌ๊ฐ๋๋ฐ ์์๋ ์ข ์๊ฐ๋ ํธ์ด์์ [SEP] ๋๊ธ ์ฃผ์ ๋ฅผ ๋ถ๋ฅํ์ธ์ [SEP] ์๋ค๋ง
- text: >-
์ธ์ฒ๋ฐ KTX์ ๊ด๋ จํโ์ก๋์ญ ๋ณตํฉํ์น์ผํฐ๊ฐโ์ฌ์ค์โ๋ฌด์ฐ,โ๋จ์ ์ฒ ๋ยท๋ฒ์ค ์์ฃผ ํ์น์์ค๋กโ๋ง๋ค์ด์ง๋ค.โ์ด ๋๋ฌธ์ ์ธ์ฒ์์ ์ธ์ฒ๋ฐ
KTXโ๊ธฐ์ ์ ์ต์ปค์์ค์ธ ๋ณตํฉํ์น์ผํฐ๋ฅผ ํตํ ์ธ๊ทผโ์ง์ญโ๊ฒฝ์ โํ์ฑํ๋ฅผโ์ด๋ค๋ธ๋ค๋ ๊ณํ์ ์ฐจ์ง์ด ๋ถ๊ฐํผํ๋ค. [SEP] ๊ฒฝ์ ์ ๊ธ์ ์ ์ธ
๋ด์ค์ธ๊ฐ์? [SEP] ์๋์
- text: ๋ง์ง๋ง์๋ kํ ๊ณต์ฐ๋ณด๊ณ ์ข์ ์ถ์ต ๋จ์์ผ๋ฉด ์ข๊ฒ ๋ค์ [SEP] ์์ค์ด ํฌํจ๋์ด์๋์? [SEP] ์๋์
datasets:
- nsmc
- jason9693/APEACH
- KETI-AIR/korquad
- klue
- smilegate-ai/kor_unsmile
- kor_nlu
- skt/kobest_v1
---
## Usage example
```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("iknow-lab/ko-flan-zero-v0-0731")
model = AutoModelForSequenceClassification.from_pretrained("iknow-lab/ko-flan-zero-v0-0731")
def inference(instruction, input, labels):
instruction = f"{input} [SEP] {instruction}"
inputs = tokenizer([instruction] * len(labels), labels, truncation=True, padding=True, return_tensors="pt")
scores = model(**inputs).logits.squeeze(1).tolist()
output = dict(zip(labels, scores))
print(instruction, output)
inference(
"๋ฌธ์ฅ์ ๊ฐ์ฑ๋ถ๋ฅํด์ฃผ์ธ์",
"์ ์ํ ๊ฐ๋
ธ์ผ",
["๊ธ์ ์ ", "๋ถ์ ์ "]
)
inference(
"๊ธ๊ณผ ๊ด๋ จ๋ ๋ด์ฉ์ ๋ง๋ค์ด์ฃผ์ธ์",
"์์ ์๋ ์ฃผ๋ง๋ง๋ค ๊ทน์ฅ์ ๋๋ฌ๊ฐ๋๋ฐ ์์๋ ์ข ์๊ฐ๋ ํธ์ด์์",
["์ํ์ ๊ดํ ๊ธ์ด๋ค", "๋๋ผ๋ง์ ๊ดํ ๊ธ์
๋๋ค"]
)
inference(
"๊ธ์ ์ฝ๊ณ ์์ฅ์ ๋ฏธ์น ์ํฅ์ ํ๋จํด๋ณด์ธ์",
"""์ธ์ฒ๋ฐ KTX์ ๊ด๋ จํโ์ก๋์ญ ๋ณตํฉํ์น์ผํฐ๊ฐโ์ฌ์ค์โ๋ฌด์ฐ,โ๋จ์ ์ฒ ๋ยท๋ฒ์ค ์์ฃผ ํ์น์์ค๋กโ๋ง๋ค์ด์ง๋ค.โ์ด ๋๋ฌธ์ ์ธ์ฒ์์ ์ธ์ฒ๋ฐ KTXโ๊ธฐ์ ์ ์ต์ปค์์ค์ธ ๋ณตํฉํ์น์ผํฐ๋ฅผ ํตํ ์ธ๊ทผโ์ง์ญโ๊ฒฝ์ โํ์ฑํ๋ฅผโ์ด๋ค๋ธ๋ค๋ ๊ณํ์ ์ฐจ์ง์ด ๋ถ๊ฐํผํ๋ค.
25์ผโ์์โ๋ฐ๋ฅด๋ฉดโ์ฐ์๊ตฌโ์ฅ๋ จ๋โ104 ์ผ๋ 29๋ง1์ฒ725ใก(8๋ง8์ฒํ)์โ์ถ์ง ์ค์ธ 2๋ง8์ฒ62๊ฐ๊ตฌ ๊ท๋ชจ์ ์ก๋์ญ์ธ๊ถ๊ตฌ์ญโ๋์๊ฐ๋ฐ์ฌ์
๊ณผ ์ฐ๊ณ, KTXโ์ก๋์ญโ๋ณตํฉํ์น์ผํฐ์โ์์
์์คยท์
๋ฌด์์คโ๋ฑ์ ์กฐ์ฑ์ ์ถ์ง ์ค์ด๋ค.โ""",
["๊ธ์ ", "๋ถ์ ", "์ค๋ฆฝ"]
)
```
### Example output
```
์ ์ํ ๊ฐ๋
ธ์ผ [SEP] ๋ฌธ์ฅ์ ๊ฐ์ฑ๋ถ๋ฅํด์ฃผ์ธ์
{'๊ธ์ ์ ': -7.878206253051758, '๋ถ์ ์ ': 50.96009826660156}
์์ ์๋ ์ฃผ๋ง๋ง๋ค ๊ทน์ฅ์ ๋๋ฌ๊ฐ๋๋ฐ ์์๋ ์ข ์๊ฐ๋ ํธ์ด์์ [SEP] ๊ธ๊ณผ ๊ด๋ จ๋ ๋ด์ฉ์ ๋ง๋ค์ด์ฃผ์ธ์
{'์ํ์ ๊ดํ ๊ธ์ด๋ค': 25.37109375, '๋๋ผ๋ง์ ๊ดํ ๊ธ์
๋๋ค': -31.869916915893555}
์ธ์ฒ๋ฐ KTX์ ๊ด๋ จํโ์ก๋์ญ ๋ณตํฉํ์น์ผํฐ๊ฐโ์ฌ์ค์โ๋ฌด์ฐ,โ๋จ์ ์ฒ ๋ยท๋ฒ์ค ์์ฃผ ํ์น์์ค๋กโ๋ง๋ค์ด์ง๋ค.โ์ด ๋๋ฌธ์ ์ธ์ฒ์์ ์ธ์ฒ๋ฐ KTXโ๊ธฐ์ ์ ์ต์ปค์์ค์ธ ๋ณตํฉํ์น์ผํฐ๋ฅผ ํตํ ์ธ๊ทผโ์ง์ญโ๊ฒฝ์ โํ์ฑํ๋ฅผโ์ด๋ค๋ธ๋ค๋ ๊ณํ์ ์ฐจ์ง์ด ๋ถ๊ฐํผํ๋ค.
25์ผโ์์โ๋ฐ๋ฅด๋ฉดโ์ฐ์๊ตฌโ์ฅ๋ จ๋โ104 ์ผ๋ 29๋ง1์ฒ725ใก(8๋ง8์ฒํ)์โ์ถ์ง ์ค์ธ 2๋ง8์ฒ62๊ฐ๊ตฌ ๊ท๋ชจ์ ์ก๋์ญ์ธ๊ถ๊ตฌ์ญโ๋์๊ฐ๋ฐ์ฌ์
๊ณผ ์ฐ๊ณ, KTXโ์ก๋์ญโ๋ณตํฉํ์น์ผํฐ์โ์์
์์คยท์
๋ฌด์์คโ๋ฑ์ ์กฐ์ฑ์ ์ถ์ง ์ค์ด๋ค.โ [SEP] ๊ธ์ ์ฝ๊ณ ์์ฅ์ ๋ฏธ์น ์ํฅ์ ํ๋จํด๋ณด์ธ์
{'๊ธ์ ': -61.86758804321289, '๋ถ์ ': 23.72732925415039, '์ค๋ฆฝ': -70.4837417602539}
```
## Training data composition
```json
{
"splits": "train",
"tasks": "nsmc,apeach,korquad_v1.0,klue_mrc,klue_nli,klue_ynat,kor_nlu,unsmile,klue_re,kobest_copa,kobest_hellaswag,kobest_boolq,kobest_wic,niklex,nikl_absa",
"max_instance_per_task": 20000,
"split_train": {
"nsmc": 20000,
"apeach": 7895,
"korquad_v1.0": 20000,
"klue_mrc": 17553,
"klue_nli": 8046,
"klue_ynat": 20000,
"kor_nlu": 20000,
"unsmile": 15002,
"klue_re": 20000,
"kobest_copa": 3075,
"kobest_hellaswag": 499,
"kobest_boolq": 3664,
"kobest_wic": 3317,
"niklex": 20000,
"nikl_absa": 2139
},
"split_train_total": 181190
}
```
## Evaluation (test set)
| task | accuracy |
| --- | --- |
| [nsmc](https://huggingface.co/datasets/nsmc) | 85.92 |
| [jason9693/APEACH](https://huggingface.co/datasets/jason9693/APEACH) | 32.12 |
| [klue-ynat](https://huggingface.co/datasets/klue) | 77.59 |
| [kobest-boolq](https://huggingface.co/datasets/skt/kobest_v1) | 76.99 |
| [kobest-copa](https://huggingface.co/datasets/skt/kobest_v1) | 61.2 |
| [kobest-hellaswag](https://huggingface.co/datasets/skt/kobest_v1) | excluded due to a code bug |
| [kobest-sentineg](https://huggingface.co/datasets/skt/kobest_v1) | 55.92 |
| [kobest-wic](https://huggingface.co/datasets/skt/kobest_v1) | 58.49 |
### Evaluation method
- Each candidate label is fed to the model as `[CLS] {input} [SEP] {instruction} [SEP] label [SEP]`, and the resulting scores for the positive and negative labels are compared.
- The positives are the gold labels; the negatives are every label that is not a gold label.
- A prediction counts as correct when the gold label scores higher than all negatives; accuracy is measured this way.
Mapping code used for the tests:
```python
klue_ynat_labelToTextDict = {
    0: "IT과학",
    1: "경제",
    2: "사회",
    3: "생활문화",
    4: "세계",
    5: "스포츠",
    6: "정치",
}
klue_ynat_labels = set(klue_ynat_labelToTextDict.values())
def klue_ynat_mapper(item):
positives = [klue_ynat_labelToTextDict[item["label"]]]
return {
"instruction": "๋ฌธ์ฅ์ ์ฝ๊ณ ์ฃผ์ ๋ฅผ ๋ถ๋ฅํ์ธ์",
"input": item["title"],
"positives": positives,
"negatives": klue_ynat_labels - set(positives)
}
kobest_wic_labels = ["아니오", "예"]
def kobest_wic_mapper(item):
return {
"instruction": "์ฃผ์ด์ง ๋ ๋ฌธ์ฅ์์ ๋จ์ด {word}์(๋) ๋์ผํ ์๋ฏธ๋ก ์ฌ์ฉ๋์๋์?".format(word=item["word"]),
"input": "๋ฌธ์ฅ1: {context_1}\n๋ฌธ์ฅ2: {context_2}".format(**item),
"positives": [kobest_wic_labels[item['label']]],
"negatives": [kobest_wic_labels[1 - item['label']]]
}
copa_question = {
"๊ฒฐ๊ณผ": "์ดํ์ ์ด์ด์ง ๊ฒฐ๊ณผ๋?",
"์์ธ": "์ด๋ฌํ ์ผ์ด ์ผ์ด๋ ์์ธ์?"
}
def kobest_copa_mapper(item):
answers = [item["alternative_1"], item["alternative_2"]]
return {
"instruction": copa_question[item["question"]],
"input": item["premise"],
"positives": [answers[item['label']]],
"negatives": [answers[1 - item['label']]]
}
def kobest_hellaswag_mapper(item):
answers = [item[f"ending_{i}"] for i in range(1, 5)]
label = answers[item['label']]
answers.remove(label)
return {
"instruction": "์ดํ์ ์ด์ด์ง ๋ด์ฉ์ผ๋ก ๊ฐ์ฅ ์ ์ ํ ๊ฒ์?",
"input": item["context"],
"positives": [label],
"negatives": answers
}
kobest_boolq_labels = ["아니오", "예"]
def kobest_boolq_mapper(item):
return {
"instruction": item["question"],
"input": item["paragraph"],
"positives": [kobest_boolq_labels[item['label']]],
"negatives": [kobest_boolq_labels[1 - item['label']]]
}
kobest_sentineg_labels = ["부정", "긍정"]
def kobest_sentineg_mapper(item):
    return {
        "instruction": "주어진 문장의 감정을 분류하세요",
        "input": item["sentence"],
        "positives": [kobest_sentineg_labels[item['label']]],
        "negatives": [kobest_sentineg_labels[1 - item['label']]]
    }
nsmc_labels = ["부정", "긍정"]
def nsmc_mapper(item):
    return {
        "instruction": "주어진 문장의 감정을 분류하세요",
        "input": item["document"],
        "positives": [nsmc_labels[item['label']]],
        "negatives": [nsmc_labels[1 - item['label']]]
    }
apeach_labels = ["혐오 표현이 아닙니다", "혐오표현"]
def apeach_mapper(item):
    return {
        "instruction": "혐오성을 분류해보세요.",
        "input": item["text"],
        "positives": [apeach_labels[item['class']]],
        "negatives": [apeach_labels[1 - item['class']]]
    }
EVAL_LIST = {
"klue-ynat": dict(
load_args=dict(
path="klue",
name="ynat",
split="validation"
),
mapper=klue_ynat_mapper
),
"nsmc": dict(
load_args=dict(
path="nsmc",
split="test"
),
mapper=nsmc_mapper
),
"apeach": dict(
load_args=dict(
path="jason9693/APEACH",
split="test"
),
mapper=apeach_mapper
),
"kobest-wic": dict(
load_args=dict(
path="skt/kobest_v1",
name="wic",
split="test"
),
mapper=kobest_wic_mapper
),
"kobest-copa": dict(
load_args=dict(
path="skt/kobest_v1",
name="copa",
split="test"
),
mapper=kobest_copa_mapper
),
"kobest-hellaswag": dict(
load_args=dict(
path="skt/kobest_v1",
name="hellaswag",
split="test"
),
mapper=kobest_hellaswag_mapper
),
"kobest-boolq": dict(
load_args=dict(
path="skt/kobest_v1",
name="boolq",
split="test"
),
mapper=kobest_boolq_mapper
),
"kobest-sentineg": dict(
load_args=dict(
path="skt/kobest_v1",
name="sentineg",
split="test"
),
mapper=kobest_sentineg_mapper
)
}
```
|
Envoid/Bacchus-22B
|
Envoid
| 2023-08-13T05:46:06Z | 9 | 4 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-08-12T21:49:34Z |
# Warning: This model is unpredictable and may produce adult content.
Bacchus-22B uses the chargoddard llama-22b block diagonal merge script found here:
https://huggingface.co/chargoddard/llama2-22b
In this case I used Nous Hermes 13B as the base model:
https://huggingface.co/NousResearch/Nous-Hermes-Llama2-13b
And Manticore-30b-Chat-Pyg-Alpha-Landmark as the donor model:
https://huggingface.co/Honkware/Manticore-30b-Chat-Pyg-Alpha-Landmark
The initial merge produced a surprisingly coherent and functional model, although I went ahead and gave it a fairly deep LoRA on 51 megabytes of raw text.
It responds well to Alpaca instruct style prompt formatting.
It can be a little rude at times and doesn't have Dendrite's ego and thirst for philosophical discussion, but I feel that overall it's a much better general-purpose model.
It does occasionally produce grammatical errors during RP, so it might need a few more epochs to better fit the training data.
If you are role playing using the SillyTavern+SimpleProxy stack, it does have a tendency to run away with a scene when using the verbose.mjs prompt format. The singleline.mjs format sometimes remedies this issue; however, it also causes some characters to give very short, dull replies. So achieving a balance might require a completely new custom prompt format.
## use_cache was originally set to false when uploaded; this has now been remedied. It is recommended to edit or redownload the config.
## I have been asked about GPTQ for this model. Unfortunately, there seems to be some weird vocabulary mismatch that causes GPTQ to corrupt the model, so the only way to run it in 4bit at the moment is to load the FP16 model in 4bit via transformers.
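A minimal sketch of that 4-bit load via `transformers` + `bitsandbytes` (the compute dtype is an assumption):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,  # assumed compute dtype
)
tokenizer = AutoTokenizer.from_pretrained("Envoid/Bacchus-22B")
model = AutoModelForCausalLM.from_pretrained(
    "Envoid/Bacchus-22B",
    quantization_config=bnb_config,
    device_map="auto",
)
```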
|
Dredta/Ukiyana
|
Dredta
| 2023-08-13T05:12:23Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-08-13T05:10:04Z |
---
license: creativeml-openrail-m
---
|
nagupv/Llama-7B_LLMExam_f0
|
nagupv
| 2023-08-13T05:01:47Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-12T12:37:53Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
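A minimal sketch of reconstructing that config and attaching the adapter (the base model is not named in this card, so `meta-llama/Llama-2-7b-hf` is an assumption inferred from the repo name):
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Mirrors the quantization config listed above
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",  # assumed base model
    quantization_config=bnb_config,
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "nagupv/Llama-7B_LLMExam_f0")
```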
### Framework versions
- PEFT 0.5.0.dev0
|
Chattiori/MelonMix
|
Chattiori
| 2023-08-13T04:37:41Z | 37 | 2 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"Grapefruit",
"en",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-03-20T09:42:48Z |
---
license: creativeml-openrail-m
language:
- en
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- Grapefruit
---
# <span style="color:#00a0a0; font-size:30pt; font-weight:bolder; font-style:italic;"> MelonMix </span>
This model is a checkpoint merge of Anything v4.5, AbyssOrangeMix 3A1B, GrapeFruitV4.1 and 7th Anime v3 C.
V2 has AnyOrangeMix 48A13B, Hassaku v1.3, blue_pencil EX, MIX-Pro v4.5+ColorBox and MeinaPastel V6.
Since AnyOrangeMix 48A13B is itself a mix of Anything v5, AnythingElse v4.5, AbyssOrangeMix3 A1B and AbyssOrangeMix3 A3,
the merge recipe shown below is equivalent.
For V2, I used [Chattiori-Model-Merger](https://github.com/Faildes/Chattiori-Model-Merger).
## Merge Recipe
V1:(Anything v4.5 (0.5) + AbyssOrangeMix 3A1B (0.5) Weighted Sum) (0.5) +
(grapefruitV4.1 (0.5) + 7th Anime v3 C (0.5) Weighted Sum) (0.5) Weighted Sum
V2:
* Weighted Sum, [**AnythingElse V4-v4.5**](https://civitai.com/models/4855) + [**Anything v5-Prt-Re**](https://civitai.com/models/9409), alpha(0.6) >> **TEMP_0**
* Weighted Sum, [**AbyssOrangeMix3-A1B**](https://civitai.com/models/9942) + [**AbyssOrangeMix3-A3**](https://civitai.com/models/9942), alpha(0.5) >> **TEMP_1**
* Sum Twice, **TEMP_0** + **TEMP_1** + [**MIX-Pro-V4.5+ColorBox**](https://civitai.com/models/14206), alpha(0.5) rand_beta(0.3, 0.7, 17546192) >> **TEMP_2**
* Sum Twice, [**Hassaku (hentai model)-v1.3**](https://civitai.com/models/2583) + [**MeinaPastel-V6**](https://civitai.com/models/11866) + [**blue_pencil-EX**](https://civitai.com/models/79083), rand_alpha(0.35, 0.65, 5481652) rand_beta(0.2, 0.45, 61427253) >> **TEMP_3**
* Weighted Sum, **TEMP_3** + **TEMP_2**, rand_alpha(0.25, 0.75, 964451837) >> **MelonMixV2**
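At its core, each Weighted Sum step blends two checkpoints' weights, commonly as `theta = (1 - alpha) * A + alpha * B`. A bare-bones sketch of that operation follows (the real merger also handles safetensors, mismatched keys, and the rand_alpha/rand_beta per-block randomization, all omitted here):
```python
import torch

def weighted_sum(path_a: str, path_b: str, alpha: float, out_path: str) -> None:
    """Blend two checkpoints: theta = (1 - alpha) * A + alpha * B."""
    a = torch.load(path_a, map_location="cpu")
    b = torch.load(path_b, map_location="cpu")
    # Only keys present in both checkpoints are merged in this sketch
    merged = {k: (1 - alpha) * a[k] + alpha * b[k] for k in a.keys() & b.keys()}
    torch.save(merged, out_path)
```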
|