modelId (string, 5–139 chars) | author (string, 2–42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-09-06 00:36:47) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 540 classes) | tags (list, 1 to 4.05k items) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-09-06 00:36:27) | card (string, 11 to 1.01M chars) |
---|---|---|---|---|---|---|---|---|---|
nhoxinh/56074fa1-1876-44f5-9e04-4019f008f055 | nhoxinh | 2025-01-21T12:55:18Z | 6 | 0 | peft | ["peft", "safetensors", "mistral", "axolotl", "generated_from_trainer", "base_model:unsloth/zephyr-sft", "base_model:adapter:unsloth/zephyr-sft", "license:apache-2.0", "8-bit", "bitsandbytes", "region:us"] | null | 2025-01-21T11:44:17Z |
---
library_name: peft
license: apache-2.0
base_model: unsloth/zephyr-sft
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 56074fa1-1876-44f5-9e04-4019f008f055
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/zephyr-sft
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
  - 6bb273fb8d3c0253_train_data.json
  ds_type: json
  format: custom
  path: /workspace/input_data/6bb273fb8d3c0253_train_data.json
  type:
    field_input: condition
    field_instruction: drugName
    field_output: review
    format: '{instruction} {input}'
    no_input_format: '{instruction}'
    system_format: '{system}'
    system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: nhoxinh/56074fa1-1876-44f5-9e04-4019f008f055
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/6bb273fb8d3c0253_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: f44a8599-bd2c-4b24-9468-fb17670debf8
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: f44a8599-bd2c-4b24-9468-fb17670debf8
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
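The `datasets` entry in the config above maps each training record onto a prompt with axolotl's custom format: `drugName` fills `{instruction}`, `condition` fills `{input}`, and `review` is the completion. A minimal sketch of that mapping, using a hypothetical record (not taken from the actual training data):

```python
# Sketch of the prompt construction implied by the config's custom dataset type.
# The record values below are hypothetical placeholders.
record = {"drugName": "Ibuprofen", "condition": "Headache", "review": "Worked quickly for me."}

prompt_template = "{instruction} {input}"   # format
no_input_template = "{instruction}"         # no_input_format

instruction = record["drugName"]            # field_instruction
input_text = record.get("condition")        # field_input
target = record["review"]                   # field_output (the completion to learn)

prompt = (prompt_template.format(instruction=instruction, input=input_text)
          if input_text else no_input_template.format(instruction=instruction))
print(prompt)  # -> "Ibuprofen Headache"
```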
# 56074fa1-1876-44f5-9e04-4019f008f055
This model is a fine-tuned version of [unsloth/zephyr-sft](https://huggingface.co/unsloth/zephyr-sft) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0659
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: ADAMW_BNB (8-bit AdamW from bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 7.8806 | 0.0078 | 200 | 2.0659 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
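This repository holds a LoRA adapter rather than full model weights, so inference attaches it to the base model; the config's `load_in_8bit: true` suggests 8-bit loading. A minimal sketch, assuming the adapter files are hosted at the repo id in the title and that `bitsandbytes` is installed:

```python
# Sketch: attach this LoRA adapter to its base model for inference (assumptions noted above).
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base_id = "unsloth/zephyr-sft"
adapter_id = "nhoxinh/56074fa1-1876-44f5-9e04-4019f008f055"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id,
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),  # mirrors load_in_8bit: true
    device_map="auto",
)
model = PeftModel.from_pretrained(base, adapter_id)

# Placeholder prompt in the "{instruction} {input}" shape used during training.
inputs = tokenizer("Ibuprofen Headache", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```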
|
Best000/601e1c9e-6e78-4c6c-83a7-ce96943576bf | Best000 | 2025-01-21T12:55:06Z | 11 | 0 | peft | ["peft", "safetensors", "falcon", "axolotl", "generated_from_trainer", "custom_code", "base_model:fxmarty/really-tiny-falcon-testing", "base_model:adapter:fxmarty/really-tiny-falcon-testing", "license:mit", "region:us"] | null | 2025-01-21T12:54:41Z |
---
library_name: peft
license: mit
base_model: fxmarty/really-tiny-falcon-testing
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 601e1c9e-6e78-4c6c-83a7-ce96943576bf
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: fxmarty/really-tiny-falcon-testing
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
  - 4214a6acaa4ea6d5_train_data.json
  ds_type: json
  format: custom
  path: /workspace/input_data/4214a6acaa4ea6d5_train_data.json
  type:
    field_input: tags
    field_instruction: sentences
    field_output: NER_TAGS
    format: '{instruction} {input}'
    no_input_format: '{instruction}'
    system_format: '{system}'
    system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: Best000/601e1c9e-6e78-4c6c-83a7-ce96943576bf
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/4214a6acaa4ea6d5_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 7c1cea2e-e61e-4570-af77-6f76e74a258b
wandb_project: Birthday-SN56-16-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 7c1cea2e-e61e-4570-af77-6f76e74a258b
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 601e1c9e-6e78-4c6c-83a7-ce96943576bf
This model is a fine-tuned version of [fxmarty/really-tiny-falcon-testing](https://huggingface.co/fxmarty/really-tiny-falcon-testing) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 10.8839
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: ADAMW_BNB (8-bit AdamW from bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 43.6855 | 0.0009 | 1 | 10.9245 |
| 43.7175 | 0.0028 | 3 | 10.9243 |
| 43.7176 | 0.0056 | 6 | 10.9227 |
| 43.6932 | 0.0084 | 9 | 10.8839 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
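Since the base model carries the `custom_code` tag (hence `trust_remote_code: true` in the config), loading the adapter and optionally merging it into the base weights looks roughly like the sketch below; the adapter repo id is assumed to match this card's title and the output directory name is hypothetical:

```python
# Sketch: load the LoRA adapter on top of the tiny Falcon base and merge it for
# adapter-free deployment. trust_remote_code reflects the base model's custom_code tag.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "fxmarty/really-tiny-falcon-testing"
adapter_id = "Best000/601e1c9e-6e78-4c6c-83a7-ce96943576bf"

tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
base = AutoModelForCausalLM.from_pretrained(base_id, trust_remote_code=True)

merged = PeftModel.from_pretrained(base, adapter_id).merge_and_unload()
merged.save_pretrained("falcon-tiny-merged")      # hypothetical output directory
tokenizer.save_pretrained("falcon-tiny-merged")
```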
|
sergioalves/05618dee-ab92-4757-af73-12793dbaba30 | sergioalves | 2025-01-21T12:55:00Z | 6 | 0 | peft | ["peft", "safetensors", "phi3", "axolotl", "generated_from_trainer", "custom_code", "base_model:microsoft/Phi-3-mini-128k-instruct", "base_model:adapter:microsoft/Phi-3-mini-128k-instruct", "license:mit", "region:us"] | null | 2025-01-21T12:44:20Z |
---
library_name: peft
license: mit
base_model: microsoft/Phi-3-mini-128k-instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 05618dee-ab92-4757-af73-12793dbaba30
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: microsoft/Phi-3-mini-128k-instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
  - 2285406178062357_train_data.json
  ds_type: json
  format: custom
  path: /workspace/input_data/2285406178062357_train_data.json
  type:
    field_input: code_before
    field_instruction: func_before
    field_output: code_after
    format: '{instruction} {input}'
    no_input_format: '{instruction}'
    system_format: '{system}'
    system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: 1
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: false
hub_model_id: sergioalves/05618dee-ab92-4757-af73-12793dbaba30
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
  0: 75GiB
max_steps: 30
micro_batch_size: 2
mlflow_experiment_name: /tmp/2285406178062357_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_hf
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: ad2f2b3d-aa0d-468c-9405-73b96cd163da
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: ad2f2b3d-aa0d-468c-9405-73b96cd163da
warmup_steps: 10
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 05618dee-ab92-4757-af73-12793dbaba30
This model is a fine-tuned version of [microsoft/Phi-3-mini-128k-instruct](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7229
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: ADAMW_HF (Transformers' AdamW implementation) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0020 | 1 | 0.7530 |
| 2.7448 | 0.0101 | 5 | 0.7507 |
| 3.0068 | 0.0202 | 10 | 0.7387 |
| 2.7597 | 0.0302 | 15 | 0.7309 |
| 2.8839 | 0.0403 | 20 | 0.7269 |
| 2.7014 | 0.0504 | 25 | 0.7234 |
| 2.6363 | 0.0605 | 30 | 0.7229 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
havinash-ai/5a51f800-f03b-497d-9dc7-c04b99c41fb6 | havinash-ai | 2025-01-21T12:54:31Z | 6 | 0 | peft | ["peft", "safetensors", "mistral", "axolotl", "generated_from_trainer", "base_model:Artples/L-MChat-7b", "base_model:adapter:Artples/L-MChat-7b", "license:apache-2.0", "region:us"] | null | 2025-01-21T12:53:16Z |
---
library_name: peft
license: apache-2.0
base_model: Artples/L-MChat-7b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 5a51f800-f03b-497d-9dc7-c04b99c41fb6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Artples/L-MChat-7b
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
  - df03514e65800f80_train_data.json
  ds_type: json
  format: custom
  path: /workspace/input_data/df03514e65800f80_train_data.json
  type:
    field_instruction: input
    field_output: response
    format: '{instruction}'
    no_input_format: '{instruction}'
    system_format: '{system}'
    system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: havinash-ai/5a51f800-f03b-497d-9dc7-c04b99c41fb6
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/df03514e65800f80_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
  pad_token: <|end_of_turn|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: e3508d62-5471-4cdf-8dba-5844f441931a
wandb_project: Mine-SN56-2-Gradients-On-Demand
wandb_run: your_name
wandb_runid: e3508d62-5471-4cdf-8dba-5844f441931a
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 5a51f800-f03b-497d-9dc7-c04b99c41fb6
This model is a fine-tuned version of [Artples/L-MChat-7b](https://huggingface.co/Artples/L-MChat-7b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: ADAMW_BNB (8-bit AdamW from bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0011 | 1 | nan |
| 0.0 | 0.0034 | 3 | nan |
| 0.0 | 0.0068 | 6 | nan |
| 0.0 | 0.0101 | 9 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
Best000/1220e340-6dd5-449f-955a-5c7b981b876f | Best000 | 2025-01-21T12:54:14Z | 6 | 0 | peft | ["peft", "safetensors", "gemma2", "axolotl", "generated_from_trainer", "base_model:UCLA-AGI/Gemma-2-9B-It-SPPO-Iter2", "base_model:adapter:UCLA-AGI/Gemma-2-9B-It-SPPO-Iter2", "license:gemma", "region:us"] | null | 2025-01-21T11:50:56Z |
---
library_name: peft
license: gemma
base_model: UCLA-AGI/Gemma-2-9B-It-SPPO-Iter2
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 1220e340-6dd5-449f-955a-5c7b981b876f
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: UCLA-AGI/Gemma-2-9B-It-SPPO-Iter2
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
  - 7152f650ecb2903c_train_data.json
  ds_type: json
  format: custom
  path: /workspace/input_data/7152f650ecb2903c_train_data.json
  type:
    field_instruction: pattern
    field_output: sentence
    format: '{instruction}'
    no_input_format: '{instruction}'
    system_format: '{system}'
    system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: Best000/1220e340-6dd5-449f-955a-5c7b981b876f
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/7152f650ecb2903c_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 2f783681-0954-497d-858e-f6a9740d789d
wandb_project: Birthday-SN56-16-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 2f783681-0954-497d-858e-f6a9740d789d
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 1220e340-6dd5-449f-955a-5c7b981b876f
This model is a fine-tuned version of [UCLA-AGI/Gemma-2-9B-It-SPPO-Iter2](https://huggingface.co/UCLA-AGI/Gemma-2-9B-It-SPPO-Iter2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.5726
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: ADAMW_BNB (8-bit AdamW from bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 6.1394 | 0.0000 | 1 | 5.6277 |
| 5.728 | 0.0001 | 3 | 5.5548 |
| 5.1083 | 0.0001 | 6 | 4.6567 |
| 3.2958 | 0.0002 | 9 | 3.5726 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
nhung03/e06908a5-3e83-40a2-971b-3893a66fe938 | nhung03 | 2025-01-21T12:53:53Z | 6 | 0 | peft | ["peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:JackFram/llama-160m", "base_model:adapter:JackFram/llama-160m", "license:apache-2.0", "8-bit", "bitsandbytes", "region:us"] | null | 2025-01-21T12:48:12Z |
---
library_name: peft
license: apache-2.0
base_model: JackFram/llama-160m
tags:
- axolotl
- generated_from_trainer
model-index:
- name: e06908a5-3e83-40a2-971b-3893a66fe938
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: JackFram/llama-160m
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
  - a75236d5c65ead30_train_data.json
  ds_type: json
  format: custom
  path: /workspace/input_data/a75236d5c65ead30_train_data.json
  type:
    field_input: scene_setting
    field_instruction: user_setting
    field_output: assistant_setting
    format: '{instruction} {input}'
    no_input_format: '{instruction}'
    system_format: '{system}'
    system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: nhung03/e06908a5-3e83-40a2-971b-3893a66fe938
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/a75236d5c65ead30_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
special_tokens:
  pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: a26c3dbd-260b-429a-b4e4-cbf7b2da5f3d
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: a26c3dbd-260b-429a-b4e4-cbf7b2da5f3d
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# e06908a5-3e83-40a2-971b-3893a66fe938
This model is a fine-tuned version of [JackFram/llama-160m](https://huggingface.co/JackFram/llama-160m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5639
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: ADAMW_BNB (8-bit AdamW from bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.404 | 0.0427 | 200 | 2.5639 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
Best000/7cc54379-cd55-4a1a-a934-592749a9aa76 | Best000 | 2025-01-21T12:53:40Z | 6 | 0 | peft | ["peft", "safetensors", "gemma2", "axolotl", "generated_from_trainer", "base_model:unsloth/gemma-2-2b-it", "base_model:adapter:unsloth/gemma-2-2b-it", "license:gemma", "region:us"] | null | 2025-01-21T12:52:58Z |
---
library_name: peft
license: gemma
base_model: unsloth/gemma-2-2b-it
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 7cc54379-cd55-4a1a-a934-592749a9aa76
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/gemma-2-2b-it
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
  - ddbeadb543cf2f4e_train_data.json
  ds_type: json
  format: custom
  path: /workspace/input_data/ddbeadb543cf2f4e_train_data.json
  type:
    field_instruction: instruction
    field_output: output
    format: '{instruction}'
    no_input_format: '{instruction}'
    system_format: '{system}'
    system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: Best000/7cc54379-cd55-4a1a-a934-592749a9aa76
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/ddbeadb543cf2f4e_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 42dfa003-a971-4f6d-a499-5d2f92d18baa
wandb_project: Birthday-SN56-15-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 42dfa003-a971-4f6d-a499-5d2f92d18baa
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 7cc54379-cd55-4a1a-a934-592749a9aa76
This model is a fine-tuned version of [unsloth/gemma-2-2b-it](https://huggingface.co/unsloth/gemma-2-2b-it) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.4881
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: ADAMW_BNB (8-bit AdamW from bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 4.4439 | 0.0092 | 1 | 4.2637 |
| 3.8856 | 0.0277 | 3 | 4.2370 |
| 3.7924 | 0.0554 | 6 | 3.9160 |
| 3.1065 | 0.0831 | 9 | 3.4881 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
thdihan/gemma-2b-finetuned-psych8k-1k | thdihan | 2025-01-21T12:51:49Z | 38 | 0 | transformers | ["transformers", "safetensors", "gemma", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-01-21T12:44:34Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
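Since the card leaves this section empty, here is a minimal sketch using the `transformers` text-generation pipeline; the repo id is taken from this row's metadata and the prompt is a placeholder:

```python
# Sketch: run the checkpoint with the standard text-generation pipeline.
# The prompt below is a placeholder, not an example from the training data.
from transformers import pipeline

generator = pipeline("text-generation", model="thdihan/gemma-2b-finetuned-psych8k-1k")
print(generator("I have been feeling anxious lately.", max_new_tokens=64)[0]["generated_text"])
```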
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
nhunglaaaaaaa/f773374e-60ef-4fe9-b325-4ce2a455347b | nhunglaaaaaaa | 2025-01-21T12:51:49Z | 6 | 0 | peft | ["peft", "safetensors", "gemma2", "axolotl", "generated_from_trainer", "base_model:unsloth/gemma-2-9b-it", "base_model:adapter:unsloth/gemma-2-9b-it", "license:gemma", "8-bit", "bitsandbytes", "region:us"] | null | 2025-01-21T12:16:05Z |
---
library_name: peft
license: gemma
base_model: unsloth/gemma-2-9b-it
tags:
- axolotl
- generated_from_trainer
model-index:
- name: f773374e-60ef-4fe9-b325-4ce2a455347b
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/gemma-2-9b-it
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
  - baac717caf978860_train_data.json
  ds_type: json
  format: custom
  path: /workspace/input_data/baac717caf978860_train_data.json
  type:
    field_input: chosen-r
    field_instruction: source
    field_output: chosen-refined
    format: '{instruction} {input}'
    no_input_format: '{instruction}'
    system_format: '{system}'
    system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: nhunglaaaaaaa/f773374e-60ef-4fe9-b325-4ce2a455347b
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/baac717caf978860_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 15392e77-9853-4e00-86aa-ecd75e9c25d7
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 15392e77-9853-4e00-86aa-ecd75e9c25d7
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# f773374e-60ef-4fe9-b325-4ce2a455347b
This model is a fine-tuned version of [unsloth/gemma-2-9b-it](https://huggingface.co/unsloth/gemma-2-9b-it) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8181
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: ADAMW_BNB (8-bit AdamW from bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.8422 | 0.0687 | 200 | 0.8181 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
great0001/40731a2d-6e66-4847-9aad-f6c99c65c3c0 | great0001 | 2025-01-21T12:50:20Z | 6 | 0 | peft | ["peft", "safetensors", "gemma2", "axolotl", "generated_from_trainer", "base_model:unsloth/gemma-2-2b-it", "base_model:adapter:unsloth/gemma-2-2b-it", "license:gemma", "region:us"] | null | 2025-01-21T12:48:40Z |
---
library_name: peft
license: gemma
base_model: unsloth/gemma-2-2b-it
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 40731a2d-6e66-4847-9aad-f6c99c65c3c0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/gemma-2-2b-it
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
  - ddbeadb543cf2f4e_train_data.json
  ds_type: json
  format: custom
  path: /workspace/input_data/ddbeadb543cf2f4e_train_data.json
  type:
    field_instruction: instruction
    field_output: output
    format: '{instruction}'
    no_input_format: '{instruction}'
    system_format: '{system}'
    system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: great0001/40731a2d-6e66-4847-9aad-f6c99c65c3c0
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/ddbeadb543cf2f4e_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 42dfa003-a971-4f6d-a499-5d2f92d18baa
wandb_project: Birthday-SN56-14-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 42dfa003-a971-4f6d-a499-5d2f92d18baa
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 40731a2d-6e66-4847-9aad-f6c99c65c3c0
This model is a fine-tuned version of [unsloth/gemma-2-2b-it](https://huggingface.co/unsloth/gemma-2-2b-it) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.4869
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: ADAMW_BNB (8-bit AdamW from bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 4.4439 | 0.0092 | 1 | 4.2637 |
| 3.8892 | 0.0277 | 3 | 4.2397 |
| 3.797 | 0.0554 | 6 | 3.9266 |
| 3.1095 | 0.0831 | 9 | 3.4869 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
MayBashendy/ArabicNewSplits7_usingWellWrittenEssays_FineTuningAraBERT_run2_AugV5_k8_task7_organization | MayBashendy | 2025-01-21T12:48:54Z | 5 | 0 | transformers | ["transformers", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:aubmindlab/bert-base-arabertv02", "base_model:finetune:aubmindlab/bert-base-arabertv02", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2025-01-21T12:44:58Z |
---
library_name: transformers
base_model: aubmindlab/bert-base-arabertv02
tags:
- generated_from_trainer
model-index:
- name: ArabicNewSplits7_usingWellWrittenEssays_FineTuningAraBERT_run2_AugV5_k8_task7_organization
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ArabicNewSplits7_usingWellWrittenEssays_FineTuningAraBERT_run2_AugV5_k8_task7_organization
This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9402
- Qwk: 0.2460 (quadratic weighted kappa; see the sketch below)
- Mse: 0.9402
- Rmse: 0.9697
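These metrics can be reproduced from gold labels and model predictions with scikit-learn; a minimal sketch with hypothetical arrays (not the actual evaluation data):

```python
# Sketch: compute Qwk, Mse, and Rmse from (hypothetical) labels and predictions.
import numpy as np
from sklearn.metrics import cohen_kappa_score, mean_squared_error

y_true = np.array([0, 1, 2, 1, 0, 2])   # hypothetical ordinal gold labels
y_pred = np.array([0, 2, 2, 1, 1, 2])   # hypothetical model predictions

qwk = cohen_kappa_score(y_true, y_pred, weights="quadratic")  # Qwk
mse = mean_squared_error(y_true, y_pred)                      # Mse
rmse = np.sqrt(mse)                                           # Rmse
print(qwk, mse, rmse)
```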
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:-------:|:----:|:---------------:|:-------:|:------:|:------:|
| No log | 0.1053 | 2 | 2.4641 | -0.0568 | 2.4641 | 1.5697 |
| No log | 0.2105 | 4 | 1.2791 | 0.1882 | 1.2791 | 1.1310 |
| No log | 0.3158 | 6 | 1.0031 | -0.0550 | 1.0031 | 1.0015 |
| No log | 0.4211 | 8 | 1.1737 | -0.1355 | 1.1737 | 1.0834 |
| No log | 0.5263 | 10 | 1.2532 | -0.1993 | 1.2532 | 1.1195 |
| No log | 0.6316 | 12 | 0.8472 | 0.0 | 0.8472 | 0.9204 |
| No log | 0.7368 | 14 | 0.6607 | 0.1232 | 0.6607 | 0.8128 |
| No log | 0.8421 | 16 | 0.6433 | 0.2676 | 0.6433 | 0.8021 |
| No log | 0.9474 | 18 | 0.7040 | 0.3019 | 0.7040 | 0.8391 |
| No log | 1.0526 | 20 | 0.7188 | 0.3019 | 0.7188 | 0.8478 |
| No log | 1.1579 | 22 | 0.6808 | 0.2676 | 0.6808 | 0.8251 |
| No log | 1.2632 | 24 | 0.6844 | 0.2676 | 0.6844 | 0.8273 |
| No log | 1.3684 | 26 | 0.6940 | 0.2676 | 0.6940 | 0.8331 |
| No log | 1.4737 | 28 | 0.7603 | 0.3125 | 0.7603 | 0.8720 |
| No log | 1.5789 | 30 | 0.8185 | 0.1660 | 0.8185 | 0.9047 |
| No log | 1.6842 | 32 | 0.7336 | 0.2748 | 0.7336 | 0.8565 |
| No log | 1.7895 | 34 | 0.8494 | 0.2358 | 0.8494 | 0.9217 |
| No log | 1.8947 | 36 | 1.0127 | 0.1955 | 1.0127 | 1.0063 |
| No log | 2.0 | 38 | 0.8270 | 0.1459 | 0.8270 | 0.9094 |
| No log | 2.1053 | 40 | 0.7232 | 0.1277 | 0.7232 | 0.8504 |
| No log | 2.2105 | 42 | 0.7874 | 0.2156 | 0.7874 | 0.8874 |
| No log | 2.3158 | 44 | 0.7714 | 0.1365 | 0.7714 | 0.8783 |
| No log | 2.4211 | 46 | 0.7528 | 0.0717 | 0.7528 | 0.8677 |
| No log | 2.5263 | 48 | 0.8233 | 0.2407 | 0.8233 | 0.9073 |
| No log | 2.6316 | 50 | 0.8316 | 0.2652 | 0.8316 | 0.9119 |
| No log | 2.7368 | 52 | 0.7702 | 0.1863 | 0.7702 | 0.8776 |
| No log | 2.8421 | 54 | 0.7735 | 0.2884 | 0.7735 | 0.8795 |
| No log | 2.9474 | 56 | 0.7935 | 0.3238 | 0.7935 | 0.8908 |
| No log | 3.0526 | 58 | 1.0933 | 0.1241 | 1.0933 | 1.0456 |
| No log | 3.1579 | 60 | 1.2316 | 0.1839 | 1.2316 | 1.1098 |
| No log | 3.2632 | 62 | 1.0365 | 0.2460 | 1.0365 | 1.0181 |
| No log | 3.3684 | 64 | 0.9874 | 0.1692 | 0.9874 | 0.9937 |
| No log | 3.4737 | 66 | 1.1110 | 0.2209 | 1.1110 | 1.0540 |
| No log | 3.5789 | 68 | 1.4546 | 0.1067 | 1.4546 | 1.2061 |
| No log | 3.6842 | 70 | 1.6366 | 0.1555 | 1.6366 | 1.2793 |
| No log | 3.7895 | 72 | 1.3934 | 0.1093 | 1.3934 | 1.1804 |
| No log | 3.8947 | 74 | 1.3039 | 0.1175 | 1.3039 | 1.1419 |
| No log | 4.0 | 76 | 1.1431 | 0.1394 | 1.1431 | 1.0692 |
| No log | 4.1053 | 78 | 1.1404 | 0.1976 | 1.1404 | 1.0679 |
| No log | 4.2105 | 80 | 1.3409 | 0.1568 | 1.3409 | 1.1580 |
| No log | 4.3158 | 82 | 1.1559 | 0.1618 | 1.1559 | 1.0751 |
| No log | 4.4211 | 84 | 1.2002 | 0.1799 | 1.2002 | 1.0955 |
| No log | 4.5263 | 86 | 1.5611 | 0.1169 | 1.5611 | 1.2495 |
| No log | 4.6316 | 88 | 1.9707 | 0.0421 | 1.9707 | 1.4038 |
| No log | 4.7368 | 90 | 1.9777 | 0.0421 | 1.9777 | 1.4063 |
| No log | 4.8421 | 92 | 1.7506 | 0.0589 | 1.7506 | 1.3231 |
| No log | 4.9474 | 94 | 1.5731 | 0.1195 | 1.5731 | 1.2542 |
| No log | 5.0526 | 96 | 1.4243 | 0.1093 | 1.4243 | 1.1934 |
| No log | 5.1579 | 98 | 1.3725 | 0.1093 | 1.3725 | 1.1716 |
| No log | 5.2632 | 100 | 1.6731 | 0.0689 | 1.6731 | 1.2935 |
| No log | 5.3684 | 102 | 1.6296 | 0.0300 | 1.6296 | 1.2766 |
| No log | 5.4737 | 104 | 1.1849 | 0.1029 | 1.1849 | 1.0885 |
| No log | 5.5789 | 106 | 0.9579 | 0.1661 | 0.9579 | 0.9787 |
| No log | 5.6842 | 108 | 0.9987 | 0.1603 | 0.9987 | 0.9994 |
| No log | 5.7895 | 110 | 1.2834 | 0.1458 | 1.2834 | 1.1329 |
| No log | 5.8947 | 112 | 1.4733 | 0.0803 | 1.4733 | 1.2138 |
| No log | 6.0 | 114 | 1.3218 | 0.1458 | 1.3218 | 1.1497 |
| No log | 6.1053 | 116 | 0.9666 | 0.1651 | 0.9666 | 0.9832 |
| No log | 6.2105 | 118 | 0.9062 | 0.2076 | 0.9062 | 0.9519 |
| No log | 6.3158 | 120 | 1.0268 | 0.1210 | 1.0268 | 1.0133 |
| No log | 6.4211 | 122 | 1.4521 | 0.0361 | 1.4521 | 1.2050 |
| No log | 6.5263 | 124 | 1.8056 | 0.0350 | 1.8056 | 1.3437 |
| No log | 6.6316 | 126 | 1.7978 | 0.0350 | 1.7978 | 1.3408 |
| No log | 6.7368 | 128 | 1.5457 | 0.0447 | 1.5457 | 1.2433 |
| No log | 6.8421 | 130 | 1.2205 | 0.1262 | 1.2205 | 1.1048 |
| No log | 6.9474 | 132 | 1.1007 | 0.1356 | 1.1007 | 1.0491 |
| No log | 7.0526 | 134 | 1.1800 | 0.1293 | 1.1800 | 1.0863 |
| No log | 7.1579 | 136 | 1.3351 | 0.1174 | 1.3351 | 1.1555 |
| No log | 7.2632 | 138 | 1.2657 | 0.1293 | 1.2657 | 1.1251 |
| No log | 7.3684 | 140 | 1.2024 | 0.1293 | 1.2024 | 1.0965 |
| No log | 7.4737 | 142 | 1.3602 | 0.1175 | 1.3602 | 1.1663 |
| No log | 7.5789 | 144 | 1.5870 | 0.0283 | 1.5870 | 1.2598 |
| No log | 7.6842 | 146 | 1.5829 | 0.0283 | 1.5829 | 1.2581 |
| No log | 7.7895 | 148 | 1.3217 | 0.1175 | 1.3217 | 1.1497 |
| No log | 7.8947 | 150 | 1.0634 | 0.2119 | 1.0634 | 1.0312 |
| No log | 8.0 | 152 | 1.0604 | 0.1787 | 1.0604 | 1.0298 |
| No log | 8.1053 | 154 | 1.1823 | 0.2412 | 1.1823 | 1.0873 |
| No log | 8.2105 | 156 | 1.3899 | 0.0873 | 1.3899 | 1.1789 |
| No log | 8.3158 | 158 | 1.3188 | 0.1464 | 1.3188 | 1.1484 |
| No log | 8.4211 | 160 | 1.0921 | 0.1709 | 1.0921 | 1.0450 |
| No log | 8.5263 | 162 | 0.9088 | 0.1777 | 0.9088 | 0.9533 |
| No log | 8.6316 | 164 | 0.8664 | 0.2692 | 0.8664 | 0.9308 |
| No log | 8.7368 | 166 | 0.9342 | 0.1955 | 0.9342 | 0.9665 |
| No log | 8.8421 | 168 | 1.2602 | 0.1458 | 1.2602 | 1.1226 |
| No log | 8.9474 | 170 | 1.4897 | 0.0745 | 1.4897 | 1.2205 |
| No log | 9.0526 | 172 | 1.3828 | 0.0829 | 1.3828 | 1.1759 |
| No log | 9.1579 | 174 | 1.0783 | 0.2782 | 1.0783 | 1.0384 |
| No log | 9.2632 | 176 | 0.8226 | 0.2352 | 0.8226 | 0.9070 |
| No log | 9.3684 | 178 | 0.7643 | 0.2407 | 0.7643 | 0.8743 |
| No log | 9.4737 | 180 | 0.7740 | 0.2718 | 0.7740 | 0.8798 |
| No log | 9.5789 | 182 | 0.9104 | 0.2000 | 0.9104 | 0.9541 |
| No log | 9.6842 | 184 | 1.2011 | 0.2045 | 1.2011 | 1.0959 |
| No log | 9.7895 | 186 | 1.3583 | 0.1427 | 1.3583 | 1.1655 |
| No log | 9.8947 | 188 | 1.3863 | 0.1275 | 1.3863 | 1.1774 |
| No log | 10.0 | 190 | 1.4604 | 0.1549 | 1.4604 | 1.2085 |
| No log | 10.1053 | 192 | 1.2898 | 0.1638 | 1.2898 | 1.1357 |
| No log | 10.2105 | 194 | 1.1966 | 0.1784 | 1.1966 | 1.0939 |
| No log | 10.3158 | 196 | 1.1865 | 0.1784 | 1.1865 | 1.0893 |
| No log | 10.4211 | 198 | 1.2104 | 0.1490 | 1.2104 | 1.1002 |
| No log | 10.5263 | 200 | 1.2296 | 0.1458 | 1.2296 | 1.1089 |
| No log | 10.6316 | 202 | 1.2308 | 0.1458 | 1.2308 | 1.1094 |
| No log | 10.7368 | 204 | 1.1178 | 0.1626 | 1.1178 | 1.0573 |
| No log | 10.8421 | 206 | 0.9721 | 0.1274 | 0.9721 | 0.9859 |
| No log | 10.9474 | 208 | 0.9787 | 0.1557 | 0.9787 | 0.9893 |
| No log | 11.0526 | 210 | 1.0491 | 0.0925 | 1.0491 | 1.0242 |
| No log | 11.1579 | 212 | 1.2200 | 0.1943 | 1.2200 | 1.1046 |
| No log | 11.2632 | 214 | 1.3621 | 0.1220 | 1.3621 | 1.1671 |
| No log | 11.3684 | 216 | 1.2789 | 0.1427 | 1.2789 | 1.1309 |
| No log | 11.4737 | 218 | 1.1262 | 0.2782 | 1.1262 | 1.0612 |
| No log | 11.5789 | 220 | 1.0803 | 0.1787 | 1.0803 | 1.0394 |
| No log | 11.6842 | 222 | 1.0540 | 0.1787 | 1.0540 | 1.0266 |
| No log | 11.7895 | 224 | 1.1171 | 0.1949 | 1.1171 | 1.0569 |
| No log | 11.8947 | 226 | 1.2800 | 0.0712 | 1.2800 | 1.1314 |
| No log | 12.0 | 228 | 1.3595 | 0.0419 | 1.3595 | 1.1660 |
| No log | 12.1053 | 230 | 1.3345 | 0.0694 | 1.3345 | 1.1552 |
| No log | 12.2105 | 232 | 1.5379 | 0.0832 | 1.5379 | 1.2401 |
| No log | 12.3158 | 234 | 1.7880 | 0.0932 | 1.7880 | 1.3372 |
| No log | 12.4211 | 236 | 1.6579 | 0.1549 | 1.6579 | 1.2876 |
| No log | 12.5263 | 238 | 1.3537 | 0.0952 | 1.3537 | 1.1635 |
| No log | 12.6316 | 240 | 1.1170 | 0.0448 | 1.1170 | 1.0569 |
| No log | 12.7368 | 242 | 1.0241 | 0.0448 | 1.0241 | 1.0120 |
| No log | 12.8421 | 244 | 1.0171 | 0.0799 | 1.0171 | 1.0085 |
| No log | 12.9474 | 246 | 1.1371 | 0.0585 | 1.1371 | 1.0663 |
| No log | 13.0526 | 248 | 1.2053 | 0.1205 | 1.2053 | 1.0979 |
| No log | 13.1579 | 250 | 1.1845 | 0.0538 | 1.1845 | 1.0884 |
| No log | 13.2632 | 252 | 1.0946 | 0.0982 | 1.0946 | 1.0463 |
| No log | 13.3684 | 254 | 1.1487 | 0.0982 | 1.1487 | 1.0718 |
| No log | 13.4737 | 256 | 1.3419 | 0.0694 | 1.3419 | 1.1584 |
| No log | 13.5789 | 258 | 1.5060 | 0.0086 | 1.5060 | 1.2272 |
| No log | 13.6842 | 260 | 1.4753 | 0.0086 | 1.4753 | 1.2146 |
| No log | 13.7895 | 262 | 1.2918 | 0.0459 | 1.2918 | 1.1366 |
| No log | 13.8947 | 264 | 1.2588 | 0.0761 | 1.2588 | 1.1220 |
| No log | 14.0 | 266 | 1.1447 | 0.1147 | 1.1447 | 1.0699 |
| No log | 14.1053 | 268 | 1.0228 | 0.1385 | 1.0228 | 1.0114 |
| No log | 14.2105 | 270 | 1.0093 | 0.1734 | 1.0093 | 1.0046 |
| No log | 14.3158 | 272 | 1.1433 | 0.0561 | 1.1433 | 1.0692 |
| No log | 14.4211 | 274 | 1.3866 | 0.0584 | 1.3866 | 1.1776 |
| No log | 14.5263 | 276 | 1.5194 | 0.0465 | 1.5194 | 1.2326 |
| No log | 14.6316 | 278 | 1.4404 | 0.0531 | 1.4404 | 1.2002 |
| No log | 14.7368 | 280 | 1.3060 | 0.1458 | 1.3060 | 1.1428 |
| No log | 14.8421 | 282 | 1.2034 | 0.0546 | 1.2034 | 1.0970 |
| No log | 14.9474 | 284 | 1.1476 | 0.0315 | 1.1476 | 1.0712 |
| No log | 15.0526 | 286 | 1.2144 | 0.1262 | 1.2144 | 1.1020 |
| No log | 15.1579 | 288 | 1.3063 | 0.0648 | 1.3063 | 1.1429 |
| No log | 15.2632 | 290 | 1.3645 | 0.0921 | 1.3645 | 1.1681 |
| No log | 15.3684 | 292 | 1.2482 | 0.0921 | 1.2482 | 1.1172 |
| No log | 15.4737 | 294 | 1.0915 | 0.2183 | 1.0915 | 1.0447 |
| No log | 15.5789 | 296 | 1.0100 | 0.2032 | 1.0100 | 1.0050 |
| No log | 15.6842 | 298 | 0.9617 | 0.1651 | 0.9617 | 0.9807 |
| No log | 15.7895 | 300 | 0.9237 | 0.1822 | 0.9237 | 0.9611 |
| No log | 15.8947 | 302 | 0.9719 | 0.1651 | 0.9719 | 0.9859 |
| No log | 16.0 | 304 | 1.0726 | 0.1389 | 1.0726 | 1.0357 |
| No log | 16.1053 | 306 | 1.2458 | 0.1490 | 1.2458 | 1.1162 |
| No log | 16.2105 | 308 | 1.4795 | 0.0519 | 1.4795 | 1.2164 |
| No log | 16.3158 | 310 | 1.4706 | 0.0519 | 1.4706 | 1.2127 |
| No log | 16.4211 | 312 | 1.2663 | 0.1233 | 1.2663 | 1.1253 |
| No log | 16.5263 | 314 | 1.0773 | 0.0569 | 1.0773 | 1.0379 |
| No log | 16.6316 | 316 | 1.0396 | 0.0592 | 1.0396 | 1.0196 |
| No log | 16.7368 | 318 | 1.0507 | 0.0894 | 1.0507 | 1.0251 |
| No log | 16.8421 | 320 | 1.0997 | 0.1394 | 1.0997 | 1.0487 |
| No log | 16.9474 | 322 | 1.1279 | 0.1635 | 1.1279 | 1.0620 |
| No log | 17.0526 | 324 | 1.1312 | 0.1635 | 1.1312 | 1.0636 |
| No log | 17.1579 | 326 | 1.0328 | 0.1463 | 1.0328 | 1.0163 |
| No log | 17.2632 | 328 | 0.9286 | 0.1612 | 0.9286 | 0.9636 |
| No log | 17.3684 | 330 | 0.9605 | 0.1612 | 0.9605 | 0.9800 |
| No log | 17.4737 | 332 | 1.0802 | 0.2183 | 1.0802 | 1.0393 |
| No log | 17.5789 | 334 | 1.2681 | 0.0921 | 1.2681 | 1.1261 |
| No log | 17.6842 | 336 | 1.3526 | 0.0343 | 1.3526 | 1.1630 |
| No log | 17.7895 | 338 | 1.3346 | 0.0898 | 1.3346 | 1.1553 |
| No log | 17.8947 | 340 | 1.3625 | 0.0873 | 1.3625 | 1.1673 |
| No log | 18.0 | 342 | 1.1606 | 0.1523 | 1.1606 | 1.0773 |
| No log | 18.1053 | 344 | 0.9443 | 0.1803 | 0.9443 | 0.9717 |
| No log | 18.2105 | 346 | 0.9077 | 0.2287 | 0.9077 | 0.9527 |
| No log | 18.3158 | 348 | 0.9189 | 0.2000 | 0.9189 | 0.9586 |
| No log | 18.4211 | 350 | 0.9441 | 0.2211 | 0.9441 | 0.9717 |
| No log | 18.5263 | 352 | 0.9530 | 0.2211 | 0.9530 | 0.9762 |
| No log | 18.6316 | 354 | 1.0559 | 0.2316 | 1.0559 | 1.0276 |
| No log | 18.7368 | 356 | 1.1002 | 0.2552 | 1.1002 | 1.0489 |
| No log | 18.8421 | 358 | 1.1299 | 0.1858 | 1.1299 | 1.0629 |
| No log | 18.9474 | 360 | 1.1509 | 0.1821 | 1.1509 | 1.0728 |
| No log | 19.0526 | 362 | 1.0969 | 0.1709 | 1.0969 | 1.0473 |
| No log | 19.1579 | 364 | 1.0309 | 0.1869 | 1.0309 | 1.0153 |
| No log | 19.2632 | 366 | 1.0453 | 0.1869 | 1.0453 | 1.0224 |
| No log | 19.3684 | 368 | 1.0596 | 0.2032 | 1.0596 | 1.0294 |
| No log | 19.4737 | 370 | 1.1637 | 0.1870 | 1.1637 | 1.0788 |
| No log | 19.5789 | 372 | 1.2256 | 0.1205 | 1.2256 | 1.1070 |
| No log | 19.6842 | 374 | 1.2246 | 0.1205 | 1.2246 | 1.1066 |
| No log | 19.7895 | 376 | 1.1160 | 0.2183 | 1.1160 | 1.0564 |
| No log | 19.8947 | 378 | 1.0576 | 0.1210 | 1.0576 | 1.0284 |
| No log | 20.0 | 380 | 1.0879 | 0.1463 | 1.0879 | 1.0430 |
| No log | 20.1053 | 382 | 1.1567 | 0.1463 | 1.1567 | 1.0755 |
| No log | 20.2105 | 384 | 1.2232 | 0.1262 | 1.2232 | 1.1060 |
| No log | 20.3158 | 386 | 1.2875 | 0.1233 | 1.2875 | 1.1347 |
| No log | 20.4211 | 388 | 1.3831 | 0.0343 | 1.3831 | 1.1761 |
| No log | 20.5263 | 390 | 1.4274 | 0.0708 | 1.4274 | 1.1947 |
| No log | 20.6316 | 392 | 1.3132 | 0.1233 | 1.3132 | 1.1459 |
| No log | 20.7368 | 394 | 1.1052 | 0.1747 | 1.1052 | 1.0513 |
| No log | 20.8421 | 396 | 0.9038 | 0.1651 | 0.9038 | 0.9507 |
| No log | 20.9474 | 398 | 0.8384 | 0.2632 | 0.8384 | 0.9157 |
| No log | 21.0526 | 400 | 0.8279 | 0.2063 | 0.8279 | 0.9099 |
| No log | 21.1579 | 402 | 0.8476 | 0.2297 | 0.8476 | 0.9206 |
| No log | 21.2632 | 404 | 0.9798 | 0.0616 | 0.9798 | 0.9899 |
| No log | 21.3684 | 406 | 1.2113 | 0.1591 | 1.2113 | 1.1006 |
| No log | 21.4737 | 408 | 1.4117 | 0.1093 | 1.4117 | 1.1882 |
| No log | 21.5789 | 410 | 1.5392 | 0.1285 | 1.5392 | 1.2407 |
| No log | 21.6842 | 412 | 1.4635 | 0.0766 | 1.4635 | 1.2097 |
| No log | 21.7895 | 414 | 1.1993 | 0.1324 | 1.1993 | 1.0951 |
| No log | 21.8947 | 416 | 0.9784 | 0.0896 | 0.9784 | 0.9891 |
| No log | 22.0 | 418 | 0.8859 | 0.0895 | 0.8859 | 0.9412 |
| No log | 22.1053 | 420 | 0.8847 | 0.0895 | 0.8847 | 0.9406 |
| No log | 22.2105 | 422 | 0.8522 | 0.2171 | 0.8522 | 0.9232 |
| No log | 22.3158 | 424 | 0.8241 | 0.2171 | 0.8241 | 0.9078 |
| No log | 22.4211 | 426 | 0.8996 | 0.1718 | 0.8996 | 0.9485 |
| No log | 22.5263 | 428 | 1.0471 | 0.0896 | 1.0471 | 1.0233 |
| No log | 22.6316 | 430 | 1.0850 | 0.0842 | 1.0850 | 1.0416 |
| No log | 22.7368 | 432 | 1.0406 | 0.0896 | 1.0406 | 1.0201 |
| No log | 22.8421 | 434 | 0.9429 | 0.1692 | 0.9429 | 0.9711 |
| No log | 22.9474 | 436 | 0.8892 | 0.2632 | 0.8892 | 0.9430 |
| No log | 23.0526 | 438 | 0.8751 | 0.2409 | 0.8751 | 0.9355 |
| No log | 23.1579 | 440 | 0.8877 | 0.1914 | 0.8877 | 0.9422 |
| No log | 23.2632 | 442 | 0.8993 | 0.1914 | 0.8993 | 0.9483 |
| No log | 23.3684 | 444 | 0.9416 | 0.2000 | 0.9416 | 0.9703 |
| No log | 23.4737 | 446 | 0.9549 | 0.1651 | 0.9549 | 0.9772 |
| No log | 23.5789 | 448 | 0.8887 | 0.1962 | 0.8887 | 0.9427 |
| No log | 23.6842 | 450 | 0.8707 | 0.2352 | 0.8707 | 0.9331 |
| No log | 23.7895 | 452 | 0.9281 | 0.2142 | 0.9281 | 0.9634 |
| No log | 23.8947 | 454 | 0.9651 | 0.2259 | 0.9651 | 0.9824 |
| No log | 24.0 | 456 | 0.9547 | 0.2000 | 0.9547 | 0.9771 |
| No log | 24.1053 | 458 | 0.9812 | 0.2259 | 0.9812 | 0.9905 |
| No log | 24.2105 | 460 | 0.9499 | 0.2000 | 0.9499 | 0.9746 |
| No log | 24.3158 | 462 | 0.9134 | 0.1734 | 0.9134 | 0.9557 |
| No log | 24.4211 | 464 | 0.8957 | 0.1542 | 0.8957 | 0.9464 |
| No log | 24.5263 | 466 | 0.9177 | 0.1501 | 0.9177 | 0.9580 |
| No log | 24.6316 | 468 | 0.9123 | 0.1144 | 0.9123 | 0.9552 |
| No log | 24.7368 | 470 | 0.9427 | 0.0803 | 0.9427 | 0.9709 |
| No log | 24.8421 | 472 | 0.9644 | 0.1045 | 0.9644 | 0.9820 |
| No log | 24.9474 | 474 | 0.9877 | 0.0953 | 0.9877 | 0.9938 |
| No log | 25.0526 | 476 | 0.9548 | 0.0746 | 0.9548 | 0.9771 |
| No log | 25.1579 | 478 | 0.8722 | 0.1962 | 0.8722 | 0.9339 |
| No log | 25.2632 | 480 | 0.8807 | 0.1962 | 0.8807 | 0.9385 |
| No log | 25.3684 | 482 | 0.9581 | 0.1642 | 0.9581 | 0.9788 |
| No log | 25.4737 | 484 | 0.9240 | 0.1379 | 0.9240 | 0.9612 |
| No log | 25.5789 | 486 | 0.8329 | 0.2883 | 0.8329 | 0.9126 |
| No log | 25.6842 | 488 | 0.7711 | 0.3312 | 0.7711 | 0.8781 |
| No log | 25.7895 | 490 | 0.7734 | 0.2883 | 0.7734 | 0.8794 |
| No log | 25.8947 | 492 | 0.7991 | 0.3167 | 0.7991 | 0.8939 |
| No log | 26.0 | 494 | 0.8618 | 0.3359 | 0.8618 | 0.9283 |
| No log | 26.1053 | 496 | 0.8989 | 0.3231 | 0.8989 | 0.9481 |
| No log | 26.2105 | 498 | 0.8341 | 0.3294 | 0.8341 | 0.9133 |
| 0.2424 | 26.3158 | 500 | 0.7303 | 0.3099 | 0.7303 | 0.8546 |
| 0.2424 | 26.4211 | 502 | 0.7038 | 0.2817 | 0.7038 | 0.8389 |
| 0.2424 | 26.5263 | 504 | 0.7111 | 0.3099 | 0.7111 | 0.8433 |
| 0.2424 | 26.6316 | 506 | 0.7457 | 0.3099 | 0.7457 | 0.8635 |
| 0.2424 | 26.7368 | 508 | 0.8348 | 0.3359 | 0.8348 | 0.9137 |
| 0.2424 | 26.8421 | 510 | 0.9436 | 0.2460 | 0.9436 | 0.9714 |
| 0.2424 | 26.9474 | 512 | 1.0461 | 0.2141 | 1.0461 | 1.0228 |
| 0.2424 | 27.0526 | 514 | 1.1003 | 0.1832 | 1.1003 | 1.0490 |
| 0.2424 | 27.1579 | 516 | 1.0420 | 0.2227 | 1.0420 | 1.0208 |
| 0.2424 | 27.2632 | 518 | 0.9402 | 0.2460 | 0.9402 | 0.9697 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu118
- Datasets 2.21.0
- Tokenizers 0.19.1
|
ClarenceDan/8500479a-7b5d-46f1-9208-c970e58819a2
|
ClarenceDan
| 2025-01-21T12:48:49Z | 8 | 0 |
peft
|
[
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:teknium/OpenHermes-2.5-Mistral-7B",
"base_model:adapter:teknium/OpenHermes-2.5-Mistral-7B",
"license:apache-2.0",
"region:us"
] | null | 2025-01-21T12:39:14Z |
---
library_name: peft
license: apache-2.0
base_model: teknium/OpenHermes-2.5-Mistral-7B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 8500479a-7b5d-46f1-9208-c970e58819a2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: teknium/OpenHermes-2.5-Mistral-7B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 1b5fe4b652f9222e_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/1b5fe4b652f9222e_train_data.json
type:
field_instruction: instruction
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: ClarenceDan/8500479a-7b5d-46f1-9208-c970e58819a2
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/1b5fe4b652f9222e_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: <|im_end|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 857fa1e7-73d3-440e-a388-76fc6a5b2495
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 857fa1e7-73d3-440e-a388-76fc6a5b2495
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 8500479a-7b5d-46f1-9208-c970e58819a2
This model is a fine-tuned version of [teknium/OpenHermes-2.5-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: 8-bit AdamW (bitsandbytes) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0002 | 1 | nan |
| 0.0 | 0.0006 | 3 | nan |
| 0.0 | 0.0012 | 6 | nan |
| 0.0 | 0.0018 | 9 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
ClarenceDan/8d4acffc-77b2-45e7-b2db-56fd18aa1ade
|
ClarenceDan
| 2025-01-21T12:48:20Z | 6 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:heegyu/WizardVicuna-open-llama-3b-v2",
"base_model:adapter:heegyu/WizardVicuna-open-llama-3b-v2",
"license:apache-2.0",
"region:us"
] | null | 2025-01-21T12:33:03Z |
---
library_name: peft
license: apache-2.0
base_model: heegyu/WizardVicuna-open-llama-3b-v2
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 8d4acffc-77b2-45e7-b2db-56fd18aa1ade
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: heegyu/WizardVicuna-open-llama-3b-v2
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 0eba3e80d15355a6_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/0eba3e80d15355a6_train_data.json
type:
field_input: input
field_instruction: instruction
field_output: accepted
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: ClarenceDan/8d4acffc-77b2-45e7-b2db-56fd18aa1ade
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/0eba3e80d15355a6_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 84f8a085-50df-4e7c-9e21-f8d55ac51824
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 84f8a085-50df-4e7c-9e21-f8d55ac51824
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 8d4acffc-77b2-45e7-b2db-56fd18aa1ade
This model is a fine-tuned version of [heegyu/WizardVicuna-open-llama-3b-v2](https://huggingface.co/heegyu/WizardVicuna-open-llama-3b-v2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7682
## Model description
More information needed
## Intended uses & limitations
More information needed
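A minimal usage sketch (not part of the generated card), assuming the adapter follows standard PEFT conventions and the base model fits in half precision on the available device:
```python
# Sketch: attach the LoRA adapter to its base model and generate (assumptions noted above).
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "heegyu/WizardVicuna-open-llama-3b-v2"
adapter_id = "ClarenceDan/8d4acffc-77b2-45e7-b2db-56fd18aa1ade"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16)
model = PeftModel.from_pretrained(base, adapter_id)  # attach the LoRA weights

inputs = tokenizer("Explain what a LoRA adapter does in one sentence.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```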
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: 8-bit AdamW (bitsandbytes) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.904 | 0.0002 | 1 | 0.7949 |
| 0.7439 | 0.0005 | 3 | 0.7944 |
| 0.7234 | 0.0009 | 6 | 0.7882 |
| 0.7158 | 0.0014 | 9 | 0.7682 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
Manas32122/whisper_merged_new
|
Manas32122
| 2025-01-21T12:48:01Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-01-21T12:42:08Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
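A hedged placeholder sketch, since the card gives no usage details: it assumes the repository is a standard 🤗 transformers checkpoint whose architecture can be resolved automatically. The repo name suggests a Whisper-style speech model, but that is only an assumption.
```python
# Inspect the checkpoint first, then load it with the matching Auto class.
from transformers import AutoConfig, AutoModel

repo_id = "Manas32122/whisper_merged_new"
config = AutoConfig.from_pretrained(repo_id)
print(config.architectures)  # tells you which task-specific Auto class to use

model = AutoModel.from_pretrained(repo_id)  # swap for the task-specific Auto class as needed
```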
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ryusangwon/ko_en_qe_ppo_0.9_1e-6
|
ryusangwon
| 2025-01-21T12:47:12Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"m2m_100",
"text2text-generation",
"trl",
"reinforcement-learning",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
reinforcement-learning
| 2025-01-21T12:43:47Z |
---
license: apache-2.0
tags:
- trl
- transformers
- reinforcement-learning
---
# TRL Model
This is a [TRL language model](https://github.com/huggingface/trl) that has been fine-tuned with reinforcement learning to
guide the model outputs according to a value function or human feedback. The model can be used for text generation.
## Usage
To use this model for inference, first install the TRL library:
```bash
python -m pip install trl
```
You can then generate text as follows:
```python
from transformers import pipeline
generator = pipeline("text-generation", model="ryusangwon//tmp/tmpbj0hofua/ryusangwon/ko_en_qe_ppo_0.9_1e-6")
outputs = generator("Hello, my llama is cute")
```
If you want to use the model for training or to obtain the outputs from the value head, load the model as follows:
```python
from transformers import AutoTokenizer
from trl import AutoModelForCausalLMWithValueHead
tokenizer = AutoTokenizer.from_pretrained("ryusangwon//tmp/tmpbj0hofua/ryusangwon/ko_en_qe_ppo_0.9_1e-6")
model = AutoModelForCausalLMWithValueHead.from_pretrained("ryusangwon//tmp/tmpbj0hofua/ryusangwon/ko_en_qe_ppo_0.9_1e-6")
inputs = tokenizer("Hello, my llama is cute", return_tensors="pt")
outputs = model(**inputs, labels=inputs["input_ids"])
```
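As a hedged follow-up (an addition to the card, following the same loading pattern as above), the value-head wrapper delegates generation to the underlying model, so text can also be sampled from it directly; the sampling settings below are illustrative assumptions:
```python
from transformers import AutoTokenizer
from trl import AutoModelForCausalLMWithValueHead

repo_id = "ryusangwon//tmp/tmpbj0hofua/ryusangwon/ko_en_qe_ppo_0.9_1e-6"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLMWithValueHead.from_pretrained(repo_id)

query = tokenizer("Hello, my llama is cute", return_tensors="pt")
response = model.generate(**query, max_new_tokens=32, do_sample=True, top_p=0.95)
print(tokenizer.decode(response[0], skip_special_tokens=True))
```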
|
nhung03/ba28972f-a518-42ba-8b7e-9a76b2c77273
|
nhung03
| 2025-01-21T12:45:44Z | 6 | 0 |
peft
|
[
"peft",
"safetensors",
"gemma2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/gemma-2-2b-it",
"base_model:adapter:unsloth/gemma-2-2b-it",
"license:gemma",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-21T12:39:33Z |
---
library_name: peft
license: gemma
base_model: unsloth/gemma-2-2b-it
tags:
- axolotl
- generated_from_trainer
model-index:
- name: ba28972f-a518-42ba-8b7e-9a76b2c77273
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/gemma-2-2b-it
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- ddbeadb543cf2f4e_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/ddbeadb543cf2f4e_train_data.json
type:
field_instruction: instruction
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: nhung03/ba28972f-a518-42ba-8b7e-9a76b2c77273
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/ddbeadb543cf2f4e_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 42dfa003-a971-4f6d-a499-5d2f92d18baa
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 42dfa003-a971-4f6d-a499-5d2f92d18baa
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# ba28972f-a518-42ba-8b7e-9a76b2c77273
This model is a fine-tuned version of [unsloth/gemma-2-2b-it](https://huggingface.co/unsloth/gemma-2-2b-it) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1683
## Model description
More information needed
## Intended uses & limitations
More information needed
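A hedged sketch of merging the adapter back into the base weights for plain transformers inference; this reflects an assumed usage pattern, not something stated in the card:
```python
# Sketch: merge the LoRA adapter into the base model, then run generation.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "unsloth/gemma-2-2b-it"
adapter_id = "nhung03/ba28972f-a518-42ba-8b7e-9a76b2c77273"

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16)
model = PeftModel.from_pretrained(model, adapter_id)
model = model.merge_and_unload()  # bake the adapter into the base weights

inputs = tokenizer("Write one sentence about fine-tuning.", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=48)[0], skip_special_tokens=True))
```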
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: 8-bit AdamW (bitsandbytes) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 109
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 4.3957 | 0.9977 | 108 | 3.1566 |
| 5.1265 | 1.0069 | 109 | 3.1683 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
havinash-ai/785c4b87-9684-48b5-abf4-93a55427d946
|
havinash-ai
| 2025-01-21T12:44:54Z | 7 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:lmsys/vicuna-7b-v1.5",
"base_model:adapter:lmsys/vicuna-7b-v1.5",
"license:llama2",
"region:us"
] | null | 2025-01-21T12:39:58Z |
---
library_name: peft
license: llama2
base_model: lmsys/vicuna-7b-v1.5
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 785c4b87-9684-48b5-abf4-93a55427d946
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: lmsys/vicuna-7b-v1.5
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 87ecfef6de5c4ae6_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/87ecfef6de5c4ae6_train_data.json
type:
field_input: input
field_instruction: instruction
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: havinash-ai/785c4b87-9684-48b5-abf4-93a55427d946
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/87ecfef6de5c4ae6_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 08949a5f-0b74-4dce-877f-c6b2eba8999f
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 08949a5f-0b74-4dce-877f-c6b2eba8999f
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 785c4b87-9684-48b5-abf4-93a55427d946
This model is a fine-tuned version of [lmsys/vicuna-7b-v1.5](https://huggingface.co/lmsys/vicuna-7b-v1.5) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0688
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: 8-bit AdamW (bitsandbytes) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.3716 | 0.0002 | 1 | 1.5921 |
| 1.9101 | 0.0006 | 3 | 1.5875 |
| 1.8918 | 0.0011 | 6 | 1.4767 |
| 1.4858 | 0.0017 | 9 | 1.0688 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
Alecardo/Ricardo-Fort-678f930b9d5393dc7e1a8ca9
|
Alecardo
| 2025-01-21T12:44:49Z | 110 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-01-21T12:29:02Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: ricmaiamefort
---
# Ricardo Fort 678F930B9D5393Dc7E1A8Ca9
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `ricmaiamefort` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('Alecardo/Ricardo-Fort-678f930b9d5393dc7e1a8ca9', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
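As a hedged sketch of the fusing workflow mentioned above (fusing into the base weights is optional and assumed here only for faster repeated inference; the prompt text is illustrative):
```py
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('Alecardo/Ricardo-Fort-678f930b9d5393dc7e1a8ca9', weight_name='lora.safetensors')

# Fuse the LoRA into the base weights so later calls skip the adapter overhead,
# then generate with the trigger word from the section above.
pipeline.fuse_lora()
image = pipeline('a portrait photo of ricmaiamefort on a yacht').images[0]

# The fusion can be undone if the unmodified base pipeline is needed again.
pipeline.unfuse_lora()
```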
|
tarabukinivan/88f3286d-c2d1-4d09-9e58-f6eb64e10140
|
tarabukinivan
| 2025-01-21T12:44:37Z | 6 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"base_model:adapter:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"license:apache-2.0",
"region:us"
] | null | 2025-01-21T12:30:38Z |
---
library_name: peft
license: apache-2.0
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 88f3286d-c2d1-4d09-9e58-f6eb64e10140
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 9c321e8cf88f16f0_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/9c321e8cf88f16f0_train_data.json
type:
field_input: input
field_instruction: instruction
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: null
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: false
hub_model_id: tarabukinivan/88f3286d-c2d1-4d09-9e58-f6eb64e10140
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 75GiB
max_steps: 30
micro_batch_size: 2
mlflow_experiment_name: /tmp/9c321e8cf88f16f0_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 1a9527a4-dbed-4d09-b3dc-303d2f7479cd
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 1a9527a4-dbed-4d09-b3dc-303d2f7479cd
warmup_steps: 10
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 88f3286d-c2d1-4d09-9e58-f6eb64e10140
This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4001
## Model description
More information needed
## Intended uses & limitations
More information needed
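A hedged sketch, assuming standard PEFT loading and the base model's built-in chat template; the repo ids are taken from the card above, the prompt is illustrative:
```python
# Sketch: load the LoRA adapter on top of the chat base model and generate a reply.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"
adapter_id = "tarabukinivan/88f3286d-c2d1-4d09-9e58-f6eb64e10140"

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = PeftModel.from_pretrained(
    AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16),
    adapter_id,
)

messages = [{"role": "user", "content": "Summarize what a LoRA adapter is."}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
output = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```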
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (torch) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0002 | 1 | 1.6017 |
| 1.4713 | 0.0008 | 5 | 1.5789 |
| 1.4853 | 0.0016 | 10 | 1.4876 |
| 1.3148 | 0.0024 | 15 | 1.4307 |
| 1.3637 | 0.0033 | 20 | 1.4131 |
| 1.5555 | 0.0041 | 25 | 1.4021 |
| 1.3646 | 0.0049 | 30 | 1.4001 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
MayBashendy/ArabicNewSplits7_usingWellWrittenEssays_FineTuningAraBERT_run2_AugV5_k7_task7_organization
|
MayBashendy
| 2025-01-21T12:44:35Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:aubmindlab/bert-base-arabertv02",
"base_model:finetune:aubmindlab/bert-base-arabertv02",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-01-21T12:40:22Z |
---
library_name: transformers
base_model: aubmindlab/bert-base-arabertv02
tags:
- generated_from_trainer
model-index:
- name: ArabicNewSplits7_usingWellWrittenEssays_FineTuningAraBERT_run2_AugV5_k7_task7_organization
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ArabicNewSplits7_usingWellWrittenEssays_FineTuningAraBERT_run2_AugV5_k7_task7_organization
This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7187
- Qwk: 0.3341
- Mse: 0.7187
- Rmse: 0.8478
## Model description
More information needed
## Intended uses & limitations
More information needed
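A hedged usage sketch (not part of the generated card): it assumes the checkpoint loads with the standard sequence-classification head; the meaning of the output scores is an assumption, since the card does not document the label setup.
```python
# Sketch: score a text with the fine-tuned AraBERT classifier head.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

repo_id = "MayBashendy/ArabicNewSplits7_usingWellWrittenEssays_FineTuningAraBERT_run2_AugV5_k7_task7_organization"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSequenceClassification.from_pretrained(repo_id)

text = "An example essay goes here."  # real inputs are Arabic essay texts
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
print(logits)  # how to interpret these scores depends on the (undocumented) label setup
```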
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:-------:|:----:|:---------------:|:-------:|:------:|:------:|
| No log | 0.1176 | 2 | 2.5757 | -0.0924 | 2.5757 | 1.6049 |
| No log | 0.2353 | 4 | 1.3587 | 0.0994 | 1.3587 | 1.1656 |
| No log | 0.3529 | 6 | 1.1844 | -0.2292 | 1.1844 | 1.0883 |
| No log | 0.4706 | 8 | 0.9977 | -0.0426 | 0.9977 | 0.9988 |
| No log | 0.5882 | 10 | 0.9364 | 0.1007 | 0.9364 | 0.9677 |
| No log | 0.7059 | 12 | 0.8715 | 0.1648 | 0.8715 | 0.9335 |
| No log | 0.8235 | 14 | 0.8307 | -0.0103 | 0.8307 | 0.9114 |
| No log | 0.9412 | 16 | 0.8269 | -0.0483 | 0.8269 | 0.9094 |
| No log | 1.0588 | 18 | 0.9257 | 0.0495 | 0.9257 | 0.9621 |
| No log | 1.1765 | 20 | 0.9539 | -0.0700 | 0.9539 | 0.9767 |
| No log | 1.2941 | 22 | 0.8720 | 0.0027 | 0.8720 | 0.9338 |
| No log | 1.4118 | 24 | 0.8314 | 0.0 | 0.8314 | 0.9118 |
| No log | 1.5294 | 26 | 0.8248 | 0.0 | 0.8248 | 0.9082 |
| No log | 1.6471 | 28 | 0.8307 | 0.1236 | 0.8307 | 0.9114 |
| No log | 1.7647 | 30 | 0.7798 | 0.0 | 0.7798 | 0.8831 |
| No log | 1.8824 | 32 | 0.7549 | 0.0 | 0.7549 | 0.8688 |
| No log | 2.0 | 34 | 0.7683 | 0.0 | 0.7683 | 0.8765 |
| No log | 2.1176 | 36 | 0.8061 | 0.0481 | 0.8061 | 0.8978 |
| No log | 2.2353 | 38 | 0.9177 | 0.2526 | 0.9177 | 0.9579 |
| No log | 2.3529 | 40 | 0.9241 | 0.3444 | 0.9241 | 0.9613 |
| No log | 2.4706 | 42 | 0.8782 | 0.3173 | 0.8782 | 0.9371 |
| No log | 2.5882 | 44 | 0.7992 | 0.1372 | 0.7992 | 0.8940 |
| No log | 2.7059 | 46 | 0.7477 | 0.0937 | 0.7477 | 0.8647 |
| No log | 2.8235 | 48 | 0.7057 | 0.0428 | 0.7057 | 0.8400 |
| No log | 2.9412 | 50 | 0.7487 | 0.3243 | 0.7487 | 0.8653 |
| No log | 3.0588 | 52 | 0.8144 | 0.1648 | 0.8144 | 0.9025 |
| No log | 3.1765 | 54 | 0.8334 | 0.1699 | 0.8334 | 0.9129 |
| No log | 3.2941 | 56 | 0.8386 | 0.1094 | 0.8386 | 0.9157 |
| No log | 3.4118 | 58 | 0.8747 | -0.0027 | 0.8747 | 0.9353 |
| No log | 3.5294 | 60 | 0.9345 | -0.1275 | 0.9345 | 0.9667 |
| No log | 3.6471 | 62 | 0.8860 | -0.0444 | 0.8860 | 0.9413 |
| No log | 3.7647 | 64 | 0.7937 | 0.0 | 0.7937 | 0.8909 |
| No log | 3.8824 | 66 | 0.7158 | 0.0889 | 0.7158 | 0.8460 |
| No log | 4.0 | 68 | 0.7009 | 0.0393 | 0.7009 | 0.8372 |
| No log | 4.1176 | 70 | 0.7320 | 0.0359 | 0.7320 | 0.8556 |
| No log | 4.2353 | 72 | 0.7952 | -0.0051 | 0.7952 | 0.8917 |
| No log | 4.3529 | 74 | 0.8104 | 0.0265 | 0.8104 | 0.9002 |
| No log | 4.4706 | 76 | 0.8378 | 0.0927 | 0.8378 | 0.9153 |
| No log | 4.5882 | 78 | 0.8915 | 0.0966 | 0.8915 | 0.9442 |
| No log | 4.7059 | 80 | 0.9184 | 0.1699 | 0.9184 | 0.9583 |
| No log | 4.8235 | 82 | 0.9273 | 0.2171 | 0.9273 | 0.9630 |
| No log | 4.9412 | 84 | 0.9053 | 0.1972 | 0.9053 | 0.9515 |
| No log | 5.0588 | 86 | 0.9132 | 0.0245 | 0.9132 | 0.9556 |
| No log | 5.1765 | 88 | 0.8966 | 0.0968 | 0.8966 | 0.9469 |
| No log | 5.2941 | 90 | 0.9317 | 0.1303 | 0.9317 | 0.9652 |
| No log | 5.4118 | 92 | 0.9620 | 0.2063 | 0.9620 | 0.9808 |
| No log | 5.5294 | 94 | 0.9345 | 0.2632 | 0.9345 | 0.9667 |
| No log | 5.6471 | 96 | 0.8433 | 0.3238 | 0.8433 | 0.9183 |
| No log | 5.7647 | 98 | 0.8240 | 0.2007 | 0.8240 | 0.9078 |
| No log | 5.8824 | 100 | 0.8275 | -0.0070 | 0.8275 | 0.9096 |
| No log | 6.0 | 102 | 0.8620 | 0.0362 | 0.8620 | 0.9284 |
| No log | 6.1176 | 104 | 0.8316 | 0.0697 | 0.8316 | 0.9119 |
| No log | 6.2353 | 106 | 0.8263 | 0.2345 | 0.8263 | 0.9090 |
| No log | 6.3529 | 108 | 0.8755 | 0.2604 | 0.8755 | 0.9357 |
| No log | 6.4706 | 110 | 0.8392 | 0.2171 | 0.8392 | 0.9161 |
| No log | 6.5882 | 112 | 0.8389 | 0.2063 | 0.8389 | 0.9159 |
| No log | 6.7059 | 114 | 0.8407 | 0.0 | 0.8407 | 0.9169 |
| No log | 6.8235 | 116 | 0.7234 | 0.1829 | 0.7234 | 0.8505 |
| No log | 6.9412 | 118 | 0.6556 | 0.2819 | 0.6556 | 0.8097 |
| No log | 7.0588 | 120 | 0.6881 | 0.3950 | 0.6881 | 0.8295 |
| No log | 7.1765 | 122 | 0.8563 | 0.3499 | 0.8563 | 0.9254 |
| No log | 7.2941 | 124 | 0.9866 | 0.2921 | 0.9866 | 0.9933 |
| No log | 7.4118 | 126 | 0.9981 | 0.2464 | 0.9981 | 0.9991 |
| No log | 7.5294 | 128 | 1.0640 | 0.1354 | 1.0640 | 1.0315 |
| No log | 7.6471 | 130 | 1.5347 | 0.1007 | 1.5347 | 1.2388 |
| No log | 7.7647 | 132 | 1.5733 | 0.0790 | 1.5733 | 1.2543 |
| No log | 7.8824 | 134 | 1.2396 | 0.1332 | 1.2396 | 1.1134 |
| No log | 8.0 | 136 | 0.9817 | 0.0801 | 0.9817 | 0.9908 |
| No log | 8.1176 | 138 | 0.9364 | 0.2832 | 0.9364 | 0.9677 |
| No log | 8.2353 | 140 | 0.8926 | 0.3183 | 0.8926 | 0.9448 |
| No log | 8.3529 | 142 | 0.8516 | 0.3221 | 0.8516 | 0.9228 |
| No log | 8.4706 | 144 | 0.8222 | 0.2414 | 0.8222 | 0.9068 |
| No log | 8.5882 | 146 | 0.8010 | 0.2813 | 0.8010 | 0.8950 |
| No log | 8.7059 | 148 | 0.7963 | 0.2784 | 0.7963 | 0.8924 |
| No log | 8.8235 | 150 | 0.8079 | 0.3372 | 0.8079 | 0.8989 |
| No log | 8.9412 | 152 | 0.8412 | 0.3699 | 0.8412 | 0.9171 |
| No log | 9.0588 | 154 | 0.7978 | 0.3637 | 0.7978 | 0.8932 |
| No log | 9.1765 | 156 | 0.7614 | 0.3099 | 0.7614 | 0.8726 |
| No log | 9.2941 | 158 | 0.7245 | 0.1699 | 0.7245 | 0.8512 |
| No log | 9.4118 | 160 | 0.7185 | 0.1807 | 0.7185 | 0.8477 |
| No log | 9.5294 | 162 | 0.7381 | 0.1268 | 0.7381 | 0.8592 |
| No log | 9.6471 | 164 | 0.7843 | 0.2171 | 0.7843 | 0.8856 |
| No log | 9.7647 | 166 | 0.8367 | 0.2328 | 0.8367 | 0.9147 |
| No log | 9.8824 | 168 | 0.8363 | 0.1995 | 0.8363 | 0.9145 |
| No log | 10.0 | 170 | 0.8047 | 0.2589 | 0.8047 | 0.8970 |
| No log | 10.1176 | 172 | 0.8180 | 0.0652 | 0.8180 | 0.9044 |
| No log | 10.2353 | 174 | 0.8233 | 0.0652 | 0.8233 | 0.9074 |
| No log | 10.3529 | 176 | 0.7997 | 0.2027 | 0.7997 | 0.8942 |
| No log | 10.4706 | 178 | 0.8016 | 0.3372 | 0.8016 | 0.8953 |
| No log | 10.5882 | 180 | 0.7845 | 0.3819 | 0.7845 | 0.8857 |
| No log | 10.7059 | 182 | 0.6968 | 0.3032 | 0.6968 | 0.8347 |
| No log | 10.8235 | 184 | 0.6736 | 0.3425 | 0.6736 | 0.8208 |
| No log | 10.9412 | 186 | 0.7071 | 0.3127 | 0.7071 | 0.8409 |
| No log | 11.0588 | 188 | 0.7794 | 0.3372 | 0.7794 | 0.8828 |
| No log | 11.1765 | 190 | 0.8896 | 0.3782 | 0.8896 | 0.9432 |
| No log | 11.2941 | 192 | 0.8603 | 0.4251 | 0.8603 | 0.9275 |
| No log | 11.4118 | 194 | 0.7622 | 0.2784 | 0.7622 | 0.8730 |
| No log | 11.5294 | 196 | 0.7462 | 0.2683 | 0.7462 | 0.8638 |
| No log | 11.6471 | 198 | 0.7544 | 0.2683 | 0.7544 | 0.8686 |
| No log | 11.7647 | 200 | 0.7524 | 0.2319 | 0.7524 | 0.8674 |
| No log | 11.8824 | 202 | 0.8204 | 0.4089 | 0.8204 | 0.9058 |
| No log | 12.0 | 204 | 0.8332 | 0.3590 | 0.8332 | 0.9128 |
| No log | 12.1176 | 206 | 0.8053 | 0.1264 | 0.8053 | 0.8974 |
| No log | 12.2353 | 208 | 0.8030 | 0.2043 | 0.8030 | 0.8961 |
| No log | 12.3529 | 210 | 0.8035 | 0.1051 | 0.8035 | 0.8964 |
| No log | 12.4706 | 212 | 0.7790 | 0.2158 | 0.7790 | 0.8826 |
| No log | 12.5882 | 214 | 0.7441 | 0.2319 | 0.7441 | 0.8626 |
| No log | 12.7059 | 216 | 0.7776 | 0.3894 | 0.7776 | 0.8818 |
| No log | 12.8235 | 218 | 0.7996 | 0.4014 | 0.7996 | 0.8942 |
| No log | 12.9412 | 220 | 0.7439 | 0.4052 | 0.7439 | 0.8625 |
| No log | 13.0588 | 222 | 0.7111 | 0.3471 | 0.7111 | 0.8433 |
| No log | 13.1765 | 224 | 0.6985 | 0.3341 | 0.6985 | 0.8357 |
| No log | 13.2941 | 226 | 0.7284 | 0.3545 | 0.7284 | 0.8535 |
| No log | 13.4118 | 228 | 0.8148 | 0.3372 | 0.8148 | 0.9027 |
| No log | 13.5294 | 230 | 0.9035 | 0.3519 | 0.9035 | 0.9505 |
| No log | 13.6471 | 232 | 0.8974 | 0.2754 | 0.8974 | 0.9473 |
| No log | 13.7647 | 234 | 0.8592 | 0.2847 | 0.8592 | 0.9269 |
| No log | 13.8824 | 236 | 0.8327 | 0.3737 | 0.8327 | 0.9125 |
| No log | 14.0 | 238 | 0.8219 | 0.3918 | 0.8219 | 0.9066 |
| No log | 14.1176 | 240 | 0.7738 | 0.3032 | 0.7738 | 0.8797 |
| No log | 14.2353 | 242 | 0.7455 | 0.3712 | 0.7455 | 0.8634 |
| No log | 14.3529 | 244 | 0.7875 | 0.3302 | 0.7875 | 0.8874 |
| No log | 14.4706 | 246 | 0.8593 | 0.3869 | 0.8593 | 0.9270 |
| No log | 14.5882 | 248 | 0.9272 | 0.3825 | 0.9272 | 0.9629 |
| No log | 14.7059 | 250 | 0.8637 | 0.3538 | 0.8637 | 0.9294 |
| No log | 14.8235 | 252 | 0.7736 | 0.3894 | 0.7736 | 0.8795 |
| No log | 14.9412 | 254 | 0.7035 | 0.3594 | 0.7035 | 0.8387 |
| No log | 15.0588 | 256 | 0.6780 | 0.4001 | 0.6780 | 0.8234 |
| No log | 15.1765 | 258 | 0.6778 | 0.4291 | 0.6778 | 0.8233 |
| No log | 15.2941 | 260 | 0.6805 | 0.4001 | 0.6805 | 0.8249 |
| No log | 15.4118 | 262 | 0.7081 | 0.4158 | 0.7081 | 0.8415 |
| No log | 15.5294 | 264 | 0.7434 | 0.3868 | 0.7434 | 0.8622 |
| No log | 15.6471 | 266 | 0.8074 | 0.3746 | 0.8074 | 0.8986 |
| No log | 15.7647 | 268 | 0.8077 | 0.3699 | 0.8077 | 0.8987 |
| No log | 15.8824 | 270 | 0.8001 | 0.4014 | 0.8001 | 0.8945 |
| No log | 16.0 | 272 | 0.7637 | 0.3770 | 0.7637 | 0.8739 |
| No log | 16.1176 | 274 | 0.7132 | 0.3518 | 0.7132 | 0.8445 |
| No log | 16.2353 | 276 | 0.7116 | 0.3238 | 0.7116 | 0.8436 |
| No log | 16.3529 | 278 | 0.7267 | 0.3238 | 0.7267 | 0.8525 |
| No log | 16.4706 | 280 | 0.7366 | 0.3099 | 0.7366 | 0.8583 |
| No log | 16.5882 | 282 | 0.7447 | 0.3712 | 0.7447 | 0.8630 |
| No log | 16.7059 | 284 | 0.7767 | 0.3712 | 0.7767 | 0.8813 |
| No log | 16.8235 | 286 | 0.8651 | 0.3675 | 0.8651 | 0.9301 |
| No log | 16.9412 | 288 | 0.8676 | 0.3606 | 0.8676 | 0.9315 |
| No log | 17.0588 | 290 | 0.8148 | 0.3819 | 0.8148 | 0.9027 |
| No log | 17.1765 | 292 | 0.8137 | 0.3637 | 0.8137 | 0.9021 |
| No log | 17.2941 | 294 | 0.7826 | 0.3737 | 0.7826 | 0.8847 |
| No log | 17.4118 | 296 | 0.7781 | 0.2847 | 0.7781 | 0.8821 |
| No log | 17.5294 | 298 | 0.7970 | 0.2319 | 0.7970 | 0.8927 |
| No log | 17.6471 | 300 | 0.8518 | 0.1918 | 0.8518 | 0.9230 |
| No log | 17.7647 | 302 | 0.9251 | 0.1866 | 0.9251 | 0.9618 |
| No log | 17.8824 | 304 | 0.9407 | 0.1866 | 0.9407 | 0.9699 |
| No log | 18.0 | 306 | 0.8990 | 0.1918 | 0.8990 | 0.9482 |
| No log | 18.1176 | 308 | 0.8168 | 0.2847 | 0.8168 | 0.9038 |
| No log | 18.2353 | 310 | 0.7374 | 0.3919 | 0.7374 | 0.8587 |
| No log | 18.3529 | 312 | 0.7206 | 0.3919 | 0.7206 | 0.8489 |
| No log | 18.4706 | 314 | 0.7563 | 0.4684 | 0.7563 | 0.8696 |
| No log | 18.5882 | 316 | 0.7759 | 0.4684 | 0.7759 | 0.8809 |
| No log | 18.7059 | 318 | 0.7929 | 0.4684 | 0.7929 | 0.8905 |
| No log | 18.8235 | 320 | 0.7879 | 0.3572 | 0.7879 | 0.8876 |
| No log | 18.9412 | 322 | 0.7850 | 0.3572 | 0.7850 | 0.8860 |
| No log | 19.0588 | 324 | 0.7964 | 0.4270 | 0.7964 | 0.8924 |
| No log | 19.1765 | 326 | 0.8016 | 0.3996 | 0.8016 | 0.8953 |
| No log | 19.2941 | 328 | 0.8174 | 0.3590 | 0.8174 | 0.9041 |
| No log | 19.4118 | 330 | 0.7876 | 0.4247 | 0.7876 | 0.8874 |
| No log | 19.5294 | 332 | 0.8040 | 0.3770 | 0.8040 | 0.8966 |
| No log | 19.6471 | 334 | 0.7851 | 0.3770 | 0.7851 | 0.8861 |
| No log | 19.7647 | 336 | 0.7121 | 0.4592 | 0.7121 | 0.8439 |
| No log | 19.8824 | 338 | 0.6655 | 0.2819 | 0.6655 | 0.8158 |
| No log | 20.0 | 340 | 0.6671 | 0.2819 | 0.6671 | 0.8167 |
| No log | 20.1176 | 342 | 0.6997 | 0.3782 | 0.6997 | 0.8365 |
| No log | 20.2353 | 344 | 0.7596 | 0.4076 | 0.7596 | 0.8715 |
| No log | 20.3529 | 346 | 0.8016 | 0.3372 | 0.8016 | 0.8953 |
| No log | 20.4706 | 348 | 0.8415 | 0.3372 | 0.8415 | 0.9173 |
| No log | 20.5882 | 350 | 0.8207 | 0.3519 | 0.8207 | 0.9059 |
| No log | 20.7059 | 352 | 0.8039 | 0.3519 | 0.8039 | 0.8966 |
| No log | 20.8235 | 354 | 0.7602 | 0.3544 | 0.7602 | 0.8719 |
| No log | 20.9412 | 356 | 0.7442 | 0.4190 | 0.7442 | 0.8626 |
| No log | 21.0588 | 358 | 0.7384 | 0.4167 | 0.7384 | 0.8593 |
| No log | 21.1765 | 360 | 0.7168 | 0.4479 | 0.7168 | 0.8467 |
| No log | 21.2941 | 362 | 0.7191 | 0.4479 | 0.7191 | 0.8480 |
| No log | 21.4118 | 364 | 0.7384 | 0.4576 | 0.7384 | 0.8593 |
| No log | 21.5294 | 366 | 0.7592 | 0.4167 | 0.7592 | 0.8713 |
| No log | 21.6471 | 368 | 0.7503 | 0.4576 | 0.7503 | 0.8662 |
| No log | 21.7647 | 370 | 0.7116 | 0.4576 | 0.7116 | 0.8436 |
| No log | 21.8824 | 372 | 0.6735 | 0.3755 | 0.6735 | 0.8207 |
| No log | 22.0 | 374 | 0.6573 | 0.3123 | 0.6573 | 0.8107 |
| No log | 22.1176 | 376 | 0.6633 | 0.3976 | 0.6633 | 0.8144 |
| No log | 22.2353 | 378 | 0.7019 | 0.4753 | 0.7019 | 0.8378 |
| No log | 22.3529 | 380 | 0.7806 | 0.4167 | 0.7806 | 0.8835 |
| No log | 22.4706 | 382 | 0.9162 | 0.3913 | 0.9162 | 0.9572 |
| No log | 22.5882 | 384 | 0.9838 | 0.3128 | 0.9838 | 0.9919 |
| No log | 22.7059 | 386 | 0.9348 | 0.3012 | 0.9348 | 0.9668 |
| No log | 22.8235 | 388 | 0.8444 | 0.2883 | 0.8444 | 0.9189 |
| No log | 22.9412 | 390 | 0.7869 | 0.2145 | 0.7869 | 0.8871 |
| No log | 23.0588 | 392 | 0.7715 | 0.1863 | 0.7715 | 0.8783 |
| No log | 23.1765 | 394 | 0.7584 | 0.2206 | 0.7584 | 0.8709 |
| No log | 23.2941 | 396 | 0.7867 | 0.3020 | 0.7867 | 0.8870 |
| No log | 23.4118 | 398 | 0.8898 | 0.3972 | 0.8898 | 0.9433 |
| No log | 23.5294 | 400 | 0.9919 | 0.3473 | 0.9919 | 0.9960 |
| No log | 23.6471 | 402 | 1.0320 | 0.2781 | 1.0320 | 1.0159 |
| No log | 23.7647 | 404 | 1.0073 | 0.2387 | 1.0073 | 1.0036 |
| No log | 23.8824 | 406 | 0.9323 | 0.3060 | 0.9323 | 0.9656 |
| No log | 24.0 | 408 | 0.9247 | 0.2532 | 0.9247 | 0.9616 |
| No log | 24.1176 | 410 | 0.9271 | 0.2838 | 0.9271 | 0.9629 |
| No log | 24.2353 | 412 | 0.9283 | 0.3106 | 0.9283 | 0.9635 |
| No log | 24.3529 | 414 | 0.9059 | 0.2813 | 0.9059 | 0.9518 |
| No log | 24.4706 | 416 | 0.9277 | 0.3344 | 0.9277 | 0.9632 |
| No log | 24.5882 | 418 | 0.9200 | 0.3918 | 0.9200 | 0.9591 |
| No log | 24.7059 | 420 | 0.8989 | 0.3991 | 0.8989 | 0.9481 |
| No log | 24.8235 | 422 | 0.8600 | 0.3894 | 0.8600 | 0.9274 |
| No log | 24.9412 | 424 | 0.8149 | 0.4392 | 0.8149 | 0.9027 |
| No log | 25.0588 | 426 | 0.7731 | 0.4243 | 0.7731 | 0.8793 |
| No log | 25.1765 | 428 | 0.7624 | 0.3976 | 0.7624 | 0.8731 |
| No log | 25.2941 | 430 | 0.7936 | 0.4243 | 0.7936 | 0.8909 |
| No log | 25.4118 | 432 | 0.8036 | 0.3622 | 0.8036 | 0.8964 |
| No log | 25.5294 | 434 | 0.8172 | 0.3224 | 0.8172 | 0.9040 |
| No log | 25.6471 | 436 | 0.7936 | 0.3224 | 0.7936 | 0.8908 |
| No log | 25.7647 | 438 | 0.7833 | 0.3498 | 0.7833 | 0.8851 |
| No log | 25.8824 | 440 | 0.7797 | 0.4663 | 0.7797 | 0.8830 |
| No log | 26.0 | 442 | 0.8268 | 0.3843 | 0.8268 | 0.9093 |
| No log | 26.1176 | 444 | 0.8310 | 0.3843 | 0.8310 | 0.9116 |
| No log | 26.2353 | 446 | 0.7962 | 0.4753 | 0.7962 | 0.8923 |
| No log | 26.3529 | 448 | 0.7693 | 0.4753 | 0.7693 | 0.8771 |
| No log | 26.4706 | 450 | 0.7771 | 0.4479 | 0.7771 | 0.8816 |
| No log | 26.5882 | 452 | 0.8233 | 0.4052 | 0.8233 | 0.9074 |
| No log | 26.7059 | 454 | 0.8303 | 0.3637 | 0.8303 | 0.9112 |
| No log | 26.8235 | 456 | 0.7922 | 0.4479 | 0.7922 | 0.8901 |
| No log | 26.9412 | 458 | 0.7423 | 0.3622 | 0.7423 | 0.8616 |
| No log | 27.0588 | 460 | 0.7357 | 0.2981 | 0.7357 | 0.8577 |
| No log | 27.1765 | 462 | 0.7645 | 0.3594 | 0.7645 | 0.8744 |
| No log | 27.2941 | 464 | 0.8427 | 0.3972 | 0.8427 | 0.9180 |
| No log | 27.4118 | 466 | 0.9057 | 0.3675 | 0.9057 | 0.9517 |
| No log | 27.5294 | 468 | 0.9028 | 0.3675 | 0.9028 | 0.9501 |
| No log | 27.6471 | 470 | 0.8554 | 0.3894 | 0.8554 | 0.9249 |
| No log | 27.7647 | 472 | 0.8067 | 0.4247 | 0.8067 | 0.8982 |
| No log | 27.8824 | 474 | 0.8030 | 0.4247 | 0.8030 | 0.8961 |
| No log | 28.0 | 476 | 0.7978 | 0.4247 | 0.7978 | 0.8932 |
| No log | 28.1176 | 478 | 0.8022 | 0.4052 | 0.8022 | 0.8957 |
| No log | 28.2353 | 480 | 0.8193 | 0.3972 | 0.8193 | 0.9051 |
| No log | 28.3529 | 482 | 0.8121 | 0.4479 | 0.8121 | 0.9012 |
| No log | 28.4706 | 484 | 0.8070 | 0.4479 | 0.8070 | 0.8984 |
| No log | 28.5882 | 486 | 0.7850 | 0.4219 | 0.7850 | 0.8860 |
| No log | 28.7059 | 488 | 0.7744 | 0.4502 | 0.7744 | 0.8800 |
| No log | 28.8235 | 490 | 0.7633 | 0.4502 | 0.7633 | 0.8737 |
| No log | 28.9412 | 492 | 0.7753 | 0.4774 | 0.7753 | 0.8805 |
| No log | 29.0588 | 494 | 0.8084 | 0.4076 | 0.8084 | 0.8991 |
| No log | 29.1765 | 496 | 0.8381 | 0.4330 | 0.8381 | 0.9155 |
| No log | 29.2941 | 498 | 0.8586 | 0.4409 | 0.8586 | 0.9266 |
| 0.3134 | 29.4118 | 500 | 0.8490 | 0.4224 | 0.8490 | 0.9214 |
| 0.3134 | 29.5294 | 502 | 0.8322 | 0.3972 | 0.8322 | 0.9123 |
| 0.3134 | 29.6471 | 504 | 0.7995 | 0.4479 | 0.7995 | 0.8942 |
| 0.3134 | 29.7647 | 506 | 0.7992 | 0.4845 | 0.7992 | 0.8940 |
| 0.3134 | 29.8824 | 508 | 0.8180 | 0.4414 | 0.8180 | 0.9044 |
| 0.3134 | 30.0 | 510 | 0.8297 | 0.4414 | 0.8297 | 0.9109 |
| 0.3134 | 30.1176 | 512 | 0.8231 | 0.3649 | 0.8231 | 0.9072 |
| 0.3134 | 30.2353 | 514 | 0.8386 | 0.3737 | 0.8386 | 0.9158 |
| 0.3134 | 30.3529 | 516 | 0.8447 | 0.3737 | 0.8447 | 0.9191 |
| 0.3134 | 30.4706 | 518 | 0.8344 | 0.3471 | 0.8344 | 0.9135 |
| 0.3134 | 30.5882 | 520 | 0.7964 | 0.4330 | 0.7964 | 0.8924 |
| 0.3134 | 30.7059 | 522 | 0.7581 | 0.3782 | 0.7581 | 0.8707 |
| 0.3134 | 30.8235 | 524 | 0.7483 | 0.3782 | 0.7483 | 0.8651 |
| 0.3134 | 30.9412 | 526 | 0.7706 | 0.4414 | 0.7706 | 0.8779 |
| 0.3134 | 31.0588 | 528 | 0.8221 | 0.3972 | 0.8221 | 0.9067 |
| 0.3134 | 31.1765 | 530 | 0.8522 | 0.3972 | 0.8522 | 0.9231 |
| 0.3134 | 31.2941 | 532 | 0.8332 | 0.4392 | 0.8332 | 0.9128 |
| 0.3134 | 31.4118 | 534 | 0.7971 | 0.4219 | 0.7971 | 0.8928 |
| 0.3134 | 31.5294 | 536 | 0.7689 | 0.3950 | 0.7689 | 0.8768 |
| 0.3134 | 31.6471 | 538 | 0.7299 | 0.3976 | 0.7299 | 0.8543 |
| 0.3134 | 31.7647 | 540 | 0.7136 | 0.3976 | 0.7136 | 0.8447 |
| 0.3134 | 31.8824 | 542 | 0.7187 | 0.3341 | 0.7187 | 0.8478 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu118
- Datasets 2.21.0
- Tokenizers 0.19.1
|
kostiantynk1205/2dd26e7f-9d61-472b-959a-69573b14c63f
|
kostiantynk1205
| 2025-01-21T12:44:12Z | 6 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:openlm-research/open_llama_3b",
"base_model:adapter:openlm-research/open_llama_3b",
"license:apache-2.0",
"region:us"
] | null | 2025-01-21T12:32:47Z |
---
library_name: peft
license: apache-2.0
base_model: openlm-research/open_llama_3b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 2dd26e7f-9d61-472b-959a-69573b14c63f
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: openlm-research/open_llama_3b
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 88305afcc505bc32_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/88305afcc505bc32_train_data.json
type:
field_input: context
field_instruction: question
field_output: answer
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: kostiantynk1205/2dd26e7f-9d61-472b-959a-69573b14c63f
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/88305afcc505bc32_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 3c103d36-11cb-4530-bc3a-9d1b166132e7
wandb_project: Birthday-SN56-6-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 3c103d36-11cb-4530-bc3a-9d1b166132e7
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 2dd26e7f-9d61-472b-959a-69573b14c63f
This model is a fine-tuned version of [openlm-research/open_llama_3b](https://huggingface.co/openlm-research/open_llama_3b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0551
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: 8-bit AdamW (bitsandbytes) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.8039 | 0.0001 | 1 | 3.1952 |
| 4.0102 | 0.0002 | 3 | 3.1831 |
| 2.2182 | 0.0004 | 6 | 2.9729 |
| 1.3269 | 0.0006 | 9 | 2.0551 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
thalllsssss/0f03e4cb-bf5c-44f3-871d-201307142e82
|
thalllsssss
| 2025-01-21T12:42:58Z | 7 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"base_model:adapter:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-21T12:30:41Z |
---
library_name: peft
license: apache-2.0
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 0f03e4cb-bf5c-44f3-871d-201307142e82
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 9c321e8cf88f16f0_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/9c321e8cf88f16f0_train_data.json
type:
field_input: input
field_instruction: instruction
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: thalllsssss/0f03e4cb-bf5c-44f3-871d-201307142e82
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/9c321e8cf88f16f0_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 1a9527a4-dbed-4d09-b3dc-303d2f7479cd
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 1a9527a4-dbed-4d09-b3dc-303d2f7479cd
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 0f03e4cb-bf5c-44f3-871d-201307142e82
This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1885
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.2752 | 0.0326 | 200 | 1.1885 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
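### Usage sketch
A rough way to load this adapter, not taken from the card itself: the base model and adapter ids come from the config above, while the prompt and generation settings are arbitrary assumptions.
```python
# Hedged sketch: attach the LoRA adapter to TinyLlama and run a short generation.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base, "thalllsssss/0f03e4cb-bf5c-44f3-871d-201307142e82")

inputs = tokenizer("Explain what a LoRA adapter is.", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=64)  # generation length is an arbitrary choice
print(tokenizer.decode(output[0], skip_special_tokens=True))
```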
|
fedovtt/8fc9f7fb-2a88-4444-a575-f1294e1b3b5d
|
fedovtt
| 2025-01-21T12:41:57Z | 7 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:NousResearch/Yarn-Llama-2-13b-64k",
"base_model:adapter:NousResearch/Yarn-Llama-2-13b-64k",
"region:us"
] | null | 2025-01-21T11:48:38Z |
---
library_name: peft
base_model: NousResearch/Yarn-Llama-2-13b-64k
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 8fc9f7fb-2a88-4444-a575-f1294e1b3b5d
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/Yarn-Llama-2-13b-64k
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 9eb6b8bdf2350702_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/9eb6b8bdf2350702_train_data.json
type:
field_input: ''
field_instruction: Text
field_output: Clean_Text
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: 1
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: false
hub_model_id: fedovtt/8fc9f7fb-2a88-4444-a575-f1294e1b3b5d
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 78GiB
max_steps: 30
micro_batch_size: 2
mlflow_experiment_name: /tmp/9eb6b8bdf2350702_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 2d1a479d-63b0-4baa-834e-801f81f8def7
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 2d1a479d-63b0-4baa-834e-801f81f8def7
warmup_steps: 10
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 8fc9f7fb-2a88-4444-a575-f1294e1b3b5d
This model is a fine-tuned version of [NousResearch/Yarn-Llama-2-13b-64k](https://huggingface.co/NousResearch/Yarn-Llama-2-13b-64k) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1936
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (PyTorch implementation) with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0002 | 1 | 1.5564 |
| 6.24 | 0.0012 | 5 | 1.4933 |
| 5.5815 | 0.0024 | 10 | 1.3498 |
| 4.9161 | 0.0036 | 15 | 1.2571 |
| 4.7732 | 0.0048 | 20 | 1.2136 |
| 4.7155 | 0.0061 | 25 | 1.1968 |
| 4.8142 | 0.0073 | 30 | 1.1936 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
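### Usage sketch
A loading outline, not part of the generated card: the base model ships custom modeling code, so `trust_remote_code=True` is passed exactly as in the config above. Memory requirements for a 13B, 64k-context model are well beyond what this snippet implies.
```python
# Hedged sketch: load the Yarn-Llama-2-13b-64k base with its custom code and attach the adapter.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "NousResearch/Yarn-Llama-2-13b-64k"
tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
base = AutoModelForCausalLM.from_pretrained(base_id, trust_remote_code=True)
model = PeftModel.from_pretrained(base, "fedovtt/8fc9f7fb-2a88-4444-a575-f1294e1b3b5d")
```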
|
nadejdatarabukina/2328ba71-018b-406a-9616-710264e1f406
|
nadejdatarabukina
| 2025-01-21T12:41:54Z | 6 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"base_model:adapter:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"license:apache-2.0",
"region:us"
] | null | 2025-01-21T12:30:40Z |
---
library_name: peft
license: apache-2.0
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 2328ba71-018b-406a-9616-710264e1f406
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 9c321e8cf88f16f0_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/9c321e8cf88f16f0_train_data.json
type:
field_input: input
field_instruction: instruction
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: null
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: false
hub_model_id: nadejdatarabukina/2328ba71-018b-406a-9616-710264e1f406
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 75GiB
max_steps: 30
micro_batch_size: 2
mlflow_experiment_name: /tmp/9c321e8cf88f16f0_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 1a9527a4-dbed-4d09-b3dc-303d2f7479cd
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 1a9527a4-dbed-4d09-b3dc-303d2f7479cd
warmup_steps: 10
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 2328ba71-018b-406a-9616-710264e1f406
This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4001
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (PyTorch implementation) with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0002 | 1 | 1.6018 |
| 1.4712 | 0.0008 | 5 | 1.5746 |
| 1.485 | 0.0016 | 10 | 1.4882 |
| 1.3159 | 0.0024 | 15 | 1.4275 |
| 1.3649 | 0.0033 | 20 | 1.4139 |
| 1.5573 | 0.0041 | 25 | 1.4024 |
| 1.3646 | 0.0049 | 30 | 1.4001 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
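### Merging the adapter (sketch)
One possible deployment path, not described in the card itself: merge the LoRA weights into the TinyLlama base so the result can be used without PEFT. The output directory is a hypothetical local path.
```python
# Hedged sketch: merge this card's LoRA weights into the base model and save the result.
from peft import PeftModel
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("TinyLlama/TinyLlama-1.1B-Chat-v1.0")
adapter = PeftModel.from_pretrained(base, "nadejdatarabukina/2328ba71-018b-406a-9616-710264e1f406")
merged = adapter.merge_and_unload()          # folds the LoRA deltas into the base weights
merged.save_pretrained("./merged-2328ba71")  # hypothetical output directory
```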
|
datlaaaaaaa/daac2d22-2b87-4368-98d4-0bc82576b148
|
datlaaaaaaa
| 2025-01-21T12:41:26Z | 6 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:NousResearch/Nous-Hermes-llama-2-7b",
"base_model:adapter:NousResearch/Nous-Hermes-llama-2-7b",
"license:mit",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-21T11:41:30Z |
---
library_name: peft
license: mit
base_model: NousResearch/Nous-Hermes-llama-2-7b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: daac2d22-2b87-4368-98d4-0bc82576b148
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/Nous-Hermes-llama-2-7b
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- db35a4b2827972f9_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/db35a4b2827972f9_train_data.json
type:
field_input: rejected
field_instruction: context
field_output: chosen
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: datlaaaaaaa/daac2d22-2b87-4368-98d4-0bc82576b148
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/db35a4b2827972f9_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 881827a9-7bb9-4a3a-bfa5-bc8cbc8f588f
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 881827a9-7bb9-4a3a-bfa5-bc8cbc8f588f
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# daac2d22-2b87-4368-98d4-0bc82576b148
This model is a fine-tuned version of [NousResearch/Nous-Hermes-llama-2-7b](https://huggingface.co/NousResearch/Nous-Hermes-llama-2-7b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1089
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.879 | 0.0294 | 200 | 2.1089 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
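### Usage sketch
A loading outline that mirrors the quantization used during training (`load_in_8bit: true` in the config above); it assumes a CUDA GPU with bitsandbytes installed, which is not stated in the card.
```python
# Hedged sketch: load the Nous-Hermes base in 8-bit and attach this card's LoRA adapter.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

base_id = "NousResearch/Nous-Hermes-llama-2-7b"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id,
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    device_map="auto",  # requires accelerate
)
model = PeftModel.from_pretrained(base, "datlaaaaaaa/daac2d22-2b87-4368-98d4-0bc82576b148")
```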
|
great0001/b6c1df64-4838-486b-8a93-fee44f12a3b9
|
great0001
| 2025-01-21T12:38:03Z | 6 | 0 |
peft
|
[
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/zephyr-sft",
"base_model:adapter:unsloth/zephyr-sft",
"license:apache-2.0",
"region:us"
] | null | 2025-01-21T12:16:00Z |
---
library_name: peft
license: apache-2.0
base_model: unsloth/zephyr-sft
tags:
- axolotl
- generated_from_trainer
model-index:
- name: b6c1df64-4838-486b-8a93-fee44f12a3b9
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/zephyr-sft
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 6bb273fb8d3c0253_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/6bb273fb8d3c0253_train_data.json
type:
field_input: condition
field_instruction: drugName
field_output: review
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: great0001/b6c1df64-4838-486b-8a93-fee44f12a3b9
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/6bb273fb8d3c0253_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: f44a8599-bd2c-4b24-9468-fb17670debf8
wandb_project: Mine-SN56-20-Gradients-On-Demand
wandb_run: your_name
wandb_runid: f44a8599-bd2c-4b24-9468-fb17670debf8
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# b6c1df64-4838-486b-8a93-fee44f12a3b9
This model is a fine-tuned version of [unsloth/zephyr-sft](https://huggingface.co/unsloth/zephyr-sft) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0000 | 1 | nan |
| 0.0 | 0.0001 | 3 | nan |
| 0.0 | 0.0002 | 6 | nan |
| 0.0 | 0.0004 | 9 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
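### Inspecting the adapter weights (sketch)
Because every logged loss above is `nan`, a quick sanity check on the uploaded adapter tensors may be useful. This is a hedged sketch, not part of the card; the filename `adapter_model.safetensors` is the usual PEFT artifact name and is assumed, not confirmed.
```python
# Hedged sketch: download the adapter weights and report any tensors containing NaNs.
import torch
from huggingface_hub import hf_hub_download
from safetensors.torch import load_file

path = hf_hub_download("great0001/b6c1df64-4838-486b-8a93-fee44f12a3b9",
                       "adapter_model.safetensors")  # assumed filename
state = load_file(path)
for name, tensor in state.items():
    if torch.isnan(tensor).any():
        print("NaN values found in", name)
```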
|
lesso04/41f6379f-db9e-4d10-acc0-68151277842e
|
lesso04
| 2025-01-21T12:35:42Z | 6 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:NousResearch/Nous-Hermes-llama-2-7b",
"base_model:adapter:NousResearch/Nous-Hermes-llama-2-7b",
"license:mit",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-21T11:42:25Z |
---
library_name: peft
license: mit
base_model: NousResearch/Nous-Hermes-llama-2-7b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 41f6379f-db9e-4d10-acc0-68151277842e
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/Nous-Hermes-llama-2-7b
bf16: true
chat_template: llama3
datasets:
- data_files:
- db35a4b2827972f9_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/db35a4b2827972f9_train_data.json
type:
field_input: rejected
field_instruction: context
field_output: chosen
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: 2
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
flash_attention: false
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: lesso04/41f6379f-db9e-4d10-acc0-68151277842e
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 25
micro_batch_size: 2
mlflow_experiment_name: /tmp/db35a4b2827972f9_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 881827a9-7bb9-4a3a-bfa5-bc8cbc8f588f
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 881827a9-7bb9-4a3a-bfa5-bc8cbc8f588f
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 41f6379f-db9e-4d10-acc0-68151277842e
This model is a fine-tuned version of [NousResearch/Nous-Hermes-llama-2-7b](https://huggingface.co/NousResearch/Nous-Hermes-llama-2-7b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0001 | 1 | nan |
| 0.0 | 0.0007 | 5 | nan |
| 0.0 | 0.0015 | 10 | nan |
| 0.0 | 0.0022 | 15 | nan |
| 0.0 | 0.0029 | 20 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
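### Early stopping in plain transformers (sketch)
The config above requests `early_stopping_patience: 2` with evaluation every 5 steps; the snippet below is a rough translation of that behaviour into vanilla `transformers` objects. It is an illustration only — the Trainer, model, and dataset wiring are not shown, and the card does not claim this exact setup.
```python
# Hedged sketch: TrainingArguments + EarlyStoppingCallback approximating the axolotl settings above.
from transformers import EarlyStoppingCallback, TrainingArguments

args = TrainingArguments(
    output_dir="miner_id_24",
    per_device_train_batch_size=2,
    gradient_accumulation_steps=4,
    learning_rate=2e-4,
    max_steps=25,
    warmup_steps=10,
    eval_strategy="steps",
    eval_steps=5,
    save_strategy="steps",
    save_steps=10,
    load_best_model_at_end=True,      # required for EarlyStoppingCallback
    metric_for_best_model="eval_loss",
)
stopper = EarlyStoppingCallback(early_stopping_patience=2)
# Both objects would then be passed to transformers.Trainer (not shown here).
```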
|
nhung02/6e52b841-35b7-4d2f-867b-4cc3b62567c5
|
nhung02
| 2025-01-21T12:34:55Z | 6 | 0 |
peft
|
[
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen1.5-0.5B",
"base_model:adapter:Qwen/Qwen1.5-0.5B",
"license:other",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-21T12:23:30Z |
---
library_name: peft
license: other
base_model: Qwen/Qwen1.5-0.5B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 6e52b841-35b7-4d2f-867b-4cc3b62567c5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Qwen/Qwen1.5-0.5B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 1de821793308c2b7_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/1de821793308c2b7_train_data.json
type:
field_input: context
field_instruction: question
field_output: answer
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: nhung02/6e52b841-35b7-4d2f-867b-4cc3b62567c5
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/1de821793308c2b7_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 7a0563f4-7af4-494e-9dbf-8003b312e74d
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 7a0563f4-7af4-494e-9dbf-8003b312e74d
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 6e52b841-35b7-4d2f-867b-4cc3b62567c5
This model is a fine-tuned version of [Qwen/Qwen1.5-0.5B](https://huggingface.co/Qwen/Qwen1.5-0.5B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5976
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.5455 | 0.1792 | 200 | 0.5976 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
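### Usage sketch
A rough inference example that reuses the prompt layout from the config above (`format: '{instruction} {input}'`); the question and context strings are made-up placeholders, not examples from the training data.
```python
# Hedged sketch: load the Qwen1.5-0.5B base with this adapter and query it in the training prompt format.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "Qwen/Qwen1.5-0.5B"
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = PeftModel.from_pretrained(AutoModelForCausalLM.from_pretrained(base_id),
                                  "nhung02/6e52b841-35b7-4d2f-867b-4cc3b62567c5")

prompt = "{instruction} {input}".format(
    instruction="Who wrote the report?",                          # placeholder question
    input="The report was written by the internal audit team.")   # placeholder context
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```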
|
visdata/kw5
|
visdata
| 2025-01-21T12:34:36Z | 16 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-01-21T12:29:51Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
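Since no official snippet is provided, the following is a hedged sketch using the standard `transformers` text-generation pipeline; the prompt and generation length are arbitrary, and hardware requirements are not addressed.
```python
# Hedged sketch: minimal text-generation call against this repository.
from transformers import pipeline

generator = pipeline("text-generation", model="visdata/kw5")
print(generator("Hello, my name is", max_new_tokens=32)[0]["generated_text"])
```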
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
visdata/kw6
|
visdata
| 2025-01-21T12:33:35Z | 31 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-01-21T12:29:21Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
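In the absence of an official snippet, here is a hedged loading sketch with the plain `AutoModelForCausalLM` API; `device_map="auto"` assumes `accelerate` is installed, and the prompt is a placeholder.
```python
# Hedged sketch: load the checkpoint in half precision and generate a short continuation.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "visdata/kw6"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, torch_dtype=torch.float16, device_map="auto")

inputs = tokenizer("Hello, my name is", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```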
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
VenkataRanjith/llama-3-8b-Instruct-bnb-4bit-Ranjith-coderTrainer
|
VenkataRanjith
| 2025-01-21T12:33:24Z | 20 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/llama-3-8b-Instruct-bnb-4bit",
"base_model:quantized:unsloth/llama-3-8b-Instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-01-21T12:29:29Z |
---
base_model: unsloth/llama-3-8b-Instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** VenkataRanjith
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-Instruct-bnb-4bit
This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
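Since the repository ships GGUF weights, one plausible way to run them locally is through `llama-cpp-python`; this is a hedged sketch — the GGUF filename is a guess at whatever file the repo actually contains, and the prompt is arbitrary.
```python
# Hedged sketch: run the GGUF file with llama-cpp-python.
from llama_cpp import Llama

llm = Llama(model_path="llama-3-8b-instruct-coder.gguf", n_ctx=2048)  # hypothetical local filename
result = llm("Write a Python function that reverses a string.", max_tokens=128)
print(result["choices"][0]["text"])
```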
|
ClarenceDan/7848bbb3-caf9-490d-953b-ec68eb34e4ba
|
ClarenceDan
| 2025-01-21T12:31:38Z | 6 | 0 |
peft
|
[
"peft",
"safetensors",
"falcon",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:tiiuae/falcon-rw-1b",
"base_model:adapter:tiiuae/falcon-rw-1b",
"license:apache-2.0",
"region:us"
] | null | 2025-01-21T12:22:55Z |
---
library_name: peft
license: apache-2.0
base_model: tiiuae/falcon-rw-1b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 7848bbb3-caf9-490d-953b-ec68eb34e4ba
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: tiiuae/falcon-rw-1b
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 8848939c923ff5a3_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/8848939c923ff5a3_train_data.json
type:
field_instruction: query
field_output: answer
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: ClarenceDan/7848bbb3-caf9-490d-953b-ec68eb34e4ba
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/8848939c923ff5a3_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: <|endoftext|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 5acc14af-26c3-48ba-a29c-137d3b312a22
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 5acc14af-26c3-48ba-a29c-137d3b312a22
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 7848bbb3-caf9-490d-953b-ec68eb34e4ba
This model is a fine-tuned version of [tiiuae/falcon-rw-1b](https://huggingface.co/tiiuae/falcon-rw-1b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 4.1183
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 22.7759 | 0.0001 | 1 | 4.7796 |
| 19.6007 | 0.0002 | 3 | 4.7656 |
| 16.7883 | 0.0004 | 6 | 4.6233 |
| 17.1448 | 0.0006 | 9 | 4.1183 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
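### Usage sketch
A loading outline, not part of the generated card: the pad token is set to `<|endoftext|>` to mirror the `special_tokens` block in the config above, and `trust_remote_code=True` follows the card's tags.
```python
# Hedged sketch: load falcon-rw-1b, restore the pad token used in training, and attach the adapter.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "tiiuae/falcon-rw-1b"
tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
tokenizer.pad_token = "<|endoftext|>"  # matches special_tokens in the axolotl config
base = AutoModelForCausalLM.from_pretrained(base_id, trust_remote_code=True)
model = PeftModel.from_pretrained(base, "ClarenceDan/7848bbb3-caf9-490d-953b-ec68eb34e4ba")
```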
|
MayBashendy/ArabicNewSplits7_usingWellWrittenEssays_FineTuningAraBERT_run2_AugV5_k4_task7_organization
|
MayBashendy
| 2025-01-21T12:31:01Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:aubmindlab/bert-base-arabertv02",
"base_model:finetune:aubmindlab/bert-base-arabertv02",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-01-21T12:27:04Z |
---
library_name: transformers
base_model: aubmindlab/bert-base-arabertv02
tags:
- generated_from_trainer
model-index:
- name: ArabicNewSplits7_usingWellWrittenEssays_FineTuningAraBERT_run2_AugV5_k4_task7_organization
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ArabicNewSplits7_usingWellWrittenEssays_FineTuningAraBERT_run2_AugV5_k4_task7_organization
This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7467
- Qwk: 0.0697
- Mse: 0.7467
- Rmse: 0.8641
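For readers who want to query the checkpoint behind these numbers, a hedged inference sketch follows; the Mse/Rmse/Qwk metrics suggest a single-score head, but that is an inference from the metrics rather than something the card states, and the Arabic input is a placeholder.
```python
# Hedged sketch: score a single Arabic text with this fine-tuned AraBERT checkpoint.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

repo_id = "MayBashendy/ArabicNewSplits7_usingWellWrittenEssays_FineTuningAraBERT_run2_AugV5_k4_task7_organization"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSequenceClassification.from_pretrained(repo_id)

inputs = tokenizer("هذا نص تجريبي للتقييم.", return_tensors="pt")  # placeholder Arabic sentence
with torch.no_grad():
    print(model(**inputs).logits)
```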
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:------:|
| No log | 0.2 | 2 | 2.4280 | -0.0646 | 2.4280 | 1.5582 |
| No log | 0.4 | 4 | 1.0881 | 0.2875 | 1.0881 | 1.0431 |
| No log | 0.6 | 6 | 1.0474 | -0.1517 | 1.0474 | 1.0234 |
| No log | 0.8 | 8 | 1.3691 | -0.1706 | 1.3691 | 1.1701 |
| No log | 1.0 | 10 | 1.2737 | -0.1706 | 1.2737 | 1.1286 |
| No log | 1.2 | 12 | 1.0007 | 0.0283 | 1.0007 | 1.0003 |
| No log | 1.4 | 14 | 0.9275 | 0.1183 | 0.9275 | 0.9630 |
| No log | 1.6 | 16 | 0.8221 | 0.0428 | 0.8221 | 0.9067 |
| No log | 1.8 | 18 | 0.8087 | 0.0 | 0.8087 | 0.8993 |
| No log | 2.0 | 20 | 0.7870 | 0.0 | 0.7870 | 0.8871 |
| No log | 2.2 | 22 | 0.7671 | 0.0 | 0.7671 | 0.8758 |
| No log | 2.4 | 24 | 0.7781 | 0.0 | 0.7781 | 0.8821 |
| No log | 2.6 | 26 | 0.8770 | -0.0320 | 0.8770 | 0.9365 |
| No log | 2.8 | 28 | 1.0144 | -0.0076 | 1.0144 | 1.0072 |
| No log | 3.0 | 30 | 0.8925 | -0.0700 | 0.8925 | 0.9447 |
| No log | 3.2 | 32 | 0.7964 | 0.0481 | 0.7964 | 0.8924 |
| No log | 3.4 | 34 | 0.7693 | 0.1674 | 0.7693 | 0.8771 |
| No log | 3.6 | 36 | 0.8324 | 0.2285 | 0.8324 | 0.9123 |
| No log | 3.8 | 38 | 0.8665 | 0.2319 | 0.8665 | 0.9309 |
| No log | 4.0 | 40 | 0.9179 | -0.0045 | 0.9179 | 0.9581 |
| No log | 4.2 | 42 | 1.2147 | 0.0367 | 1.2147 | 1.1021 |
| No log | 4.4 | 44 | 1.0913 | -0.0033 | 1.0913 | 1.0446 |
| No log | 4.6 | 46 | 0.8643 | 0.2063 | 0.8643 | 0.9297 |
| No log | 4.8 | 48 | 0.8581 | 0.1550 | 0.8581 | 0.9263 |
| No log | 5.0 | 50 | 0.8888 | 0.1815 | 0.8888 | 0.9428 |
| No log | 5.2 | 52 | 0.8949 | 0.1766 | 0.8949 | 0.9460 |
| No log | 5.4 | 54 | 0.8389 | 0.1699 | 0.8389 | 0.9159 |
| No log | 5.6 | 56 | 0.8780 | 0.0410 | 0.8780 | 0.9370 |
| No log | 5.8 | 58 | 1.0235 | 0.0975 | 1.0235 | 1.0117 |
| No log | 6.0 | 60 | 1.0118 | 0.1259 | 1.0118 | 1.0059 |
| No log | 6.2 | 62 | 0.8937 | 0.1498 | 0.8937 | 0.9454 |
| No log | 6.4 | 64 | 0.8858 | 0.1541 | 0.8858 | 0.9412 |
| No log | 6.6 | 66 | 0.8867 | 0.1541 | 0.8867 | 0.9416 |
| No log | 6.8 | 68 | 0.8969 | 0.0930 | 0.8969 | 0.9471 |
| No log | 7.0 | 70 | 0.8899 | 0.1760 | 0.8899 | 0.9434 |
| No log | 7.2 | 72 | 0.9062 | 0.1866 | 0.9062 | 0.9519 |
| No log | 7.4 | 74 | 1.0502 | 0.1271 | 1.0502 | 1.0248 |
| No log | 7.6 | 76 | 0.9760 | 0.1712 | 0.9760 | 0.9879 |
| No log | 7.8 | 78 | 0.8485 | 0.1303 | 0.8485 | 0.9211 |
| No log | 8.0 | 80 | 0.8510 | 0.1379 | 0.8510 | 0.9225 |
| No log | 8.2 | 82 | 0.8621 | 0.1969 | 0.8621 | 0.9285 |
| No log | 8.4 | 84 | 0.8653 | 0.2747 | 0.8653 | 0.9302 |
| No log | 8.6 | 86 | 0.8761 | 0.2987 | 0.8761 | 0.9360 |
| No log | 8.8 | 88 | 0.9020 | 0.2593 | 0.9020 | 0.9498 |
| No log | 9.0 | 90 | 0.8735 | 0.2888 | 0.8735 | 0.9346 |
| No log | 9.2 | 92 | 0.8758 | 0.2256 | 0.8758 | 0.9359 |
| No log | 9.4 | 94 | 0.8433 | 0.2936 | 0.8433 | 0.9183 |
| No log | 9.6 | 96 | 0.8244 | 0.3296 | 0.8244 | 0.9080 |
| No log | 9.8 | 98 | 0.8399 | 0.3060 | 0.8399 | 0.9165 |
| No log | 10.0 | 100 | 0.8450 | 0.3060 | 0.8450 | 0.9192 |
| No log | 10.2 | 102 | 0.8358 | 0.3478 | 0.8358 | 0.9142 |
| No log | 10.4 | 104 | 0.9045 | 0.0678 | 0.9045 | 0.9511 |
| No log | 10.6 | 106 | 0.8931 | 0.0702 | 0.8931 | 0.9450 |
| No log | 10.8 | 108 | 0.8382 | 0.1379 | 0.8382 | 0.9155 |
| No log | 11.0 | 110 | 0.8312 | 0.2475 | 0.8312 | 0.9117 |
| No log | 11.2 | 112 | 0.8191 | 0.2360 | 0.8191 | 0.9050 |
| No log | 11.4 | 114 | 0.8313 | 0.1797 | 0.8313 | 0.9118 |
| No log | 11.6 | 116 | 0.8490 | 0.1179 | 0.8490 | 0.9214 |
| No log | 11.8 | 118 | 0.8143 | 0.1179 | 0.8143 | 0.9024 |
| No log | 12.0 | 120 | 0.7943 | 0.3002 | 0.7943 | 0.8912 |
| No log | 12.2 | 122 | 0.7994 | 0.2973 | 0.7994 | 0.8941 |
| No log | 12.4 | 124 | 0.8554 | 0.2633 | 0.8554 | 0.9249 |
| No log | 12.6 | 126 | 0.8489 | 0.2633 | 0.8489 | 0.9213 |
| No log | 12.8 | 128 | 0.8310 | 0.2561 | 0.8310 | 0.9116 |
| No log | 13.0 | 130 | 0.8112 | 0.1471 | 0.8112 | 0.9006 |
| No log | 13.2 | 132 | 0.8178 | 0.1697 | 0.8178 | 0.9043 |
| No log | 13.4 | 134 | 0.9047 | 0.2899 | 0.9047 | 0.9511 |
| No log | 13.6 | 136 | 1.0909 | 0.1142 | 1.0909 | 1.0445 |
| No log | 13.8 | 138 | 1.0766 | 0.1743 | 1.0766 | 1.0376 |
| No log | 14.0 | 140 | 0.9207 | 0.2495 | 0.9207 | 0.9595 |
| No log | 14.2 | 142 | 0.8547 | 0.2072 | 0.8547 | 0.9245 |
| No log | 14.4 | 144 | 0.9249 | 0.1156 | 0.9249 | 0.9617 |
| No log | 14.6 | 146 | 0.8786 | 0.1494 | 0.8786 | 0.9373 |
| No log | 14.8 | 148 | 0.8118 | 0.1760 | 0.8118 | 0.9010 |
| No log | 15.0 | 150 | 0.8315 | 0.2261 | 0.8315 | 0.9119 |
| No log | 15.2 | 152 | 0.8498 | 0.1740 | 0.8498 | 0.9218 |
| No log | 15.4 | 154 | 0.8157 | 0.2590 | 0.8157 | 0.9031 |
| No log | 15.6 | 156 | 0.8095 | 0.2424 | 0.8095 | 0.8997 |
| No log | 15.8 | 158 | 0.8147 | 0.1870 | 0.8147 | 0.9026 |
| No log | 16.0 | 160 | 0.7790 | 0.0741 | 0.7790 | 0.8826 |
| No log | 16.2 | 162 | 0.7751 | 0.2353 | 0.7751 | 0.8804 |
| No log | 16.4 | 164 | 0.8274 | 0.2995 | 0.8274 | 0.9096 |
| No log | 16.6 | 166 | 0.8519 | 0.2521 | 0.8519 | 0.9230 |
| No log | 16.8 | 168 | 0.7997 | 0.2558 | 0.7997 | 0.8943 |
| No log | 17.0 | 170 | 0.7449 | 0.1386 | 0.7449 | 0.8631 |
| No log | 17.2 | 172 | 0.7760 | 0.1873 | 0.7760 | 0.8809 |
| No log | 17.4 | 174 | 0.7825 | 0.2926 | 0.7825 | 0.8846 |
| No log | 17.6 | 176 | 0.7185 | 0.1133 | 0.7185 | 0.8477 |
| No log | 17.8 | 178 | 0.6876 | 0.1456 | 0.6876 | 0.8292 |
| No log | 18.0 | 180 | 0.7060 | 0.2685 | 0.7060 | 0.8402 |
| No log | 18.2 | 182 | 0.7141 | 0.2471 | 0.7141 | 0.8450 |
| No log | 18.4 | 184 | 0.7117 | 0.2652 | 0.7117 | 0.8436 |
| No log | 18.6 | 186 | 0.7238 | 0.2287 | 0.7238 | 0.8508 |
| No log | 18.8 | 188 | 0.7424 | 0.2182 | 0.7424 | 0.8617 |
| No log | 19.0 | 190 | 0.7592 | 0.2132 | 0.7592 | 0.8713 |
| No log | 19.2 | 192 | 0.7723 | 0.2772 | 0.7723 | 0.8788 |
| No log | 19.4 | 194 | 0.7658 | 0.2458 | 0.7658 | 0.8751 |
| No log | 19.6 | 196 | 0.7629 | 0.2405 | 0.7629 | 0.8735 |
| No log | 19.8 | 198 | 0.7542 | 0.2749 | 0.7542 | 0.8684 |
| No log | 20.0 | 200 | 0.8205 | 0.2995 | 0.8205 | 0.9058 |
| No log | 20.2 | 202 | 0.8206 | 0.3399 | 0.8206 | 0.9059 |
| No log | 20.4 | 204 | 0.7879 | 0.2784 | 0.7879 | 0.8876 |
| No log | 20.6 | 206 | 0.7408 | 0.2589 | 0.7408 | 0.8607 |
| No log | 20.8 | 208 | 0.7042 | 0.1407 | 0.7042 | 0.8392 |
| No log | 21.0 | 210 | 0.7484 | 0.1528 | 0.7484 | 0.8651 |
| No log | 21.2 | 212 | 0.8096 | 0.2068 | 0.8096 | 0.8998 |
| No log | 21.4 | 214 | 0.7809 | 0.1716 | 0.7809 | 0.8837 |
| No log | 21.6 | 216 | 0.7705 | 0.2475 | 0.7705 | 0.8778 |
| No log | 21.8 | 218 | 0.8606 | 0.3586 | 0.8606 | 0.9277 |
| No log | 22.0 | 220 | 0.9064 | 0.3586 | 0.9064 | 0.9521 |
| No log | 22.2 | 222 | 0.8311 | 0.3590 | 0.8311 | 0.9116 |
| No log | 22.4 | 224 | 0.7556 | 0.1353 | 0.7556 | 0.8693 |
| No log | 22.6 | 226 | 0.7622 | -0.0023 | 0.7622 | 0.8731 |
| No log | 22.8 | 228 | 0.7908 | 0.1716 | 0.7908 | 0.8893 |
| No log | 23.0 | 230 | 0.7844 | 0.2349 | 0.7844 | 0.8857 |
| No log | 23.2 | 232 | 0.7792 | 0.2379 | 0.7792 | 0.8827 |
| No log | 23.4 | 234 | 0.7998 | 0.2784 | 0.7998 | 0.8943 |
| No log | 23.6 | 236 | 0.8093 | 0.2899 | 0.8093 | 0.8996 |
| No log | 23.8 | 238 | 0.7995 | 0.3127 | 0.7995 | 0.8942 |
| No log | 24.0 | 240 | 0.7712 | 0.1835 | 0.7712 | 0.8782 |
| No log | 24.2 | 242 | 0.7589 | 0.1813 | 0.7589 | 0.8711 |
| No log | 24.4 | 244 | 0.7670 | 0.1133 | 0.7670 | 0.8758 |
| No log | 24.6 | 246 | 0.7585 | 0.1850 | 0.7585 | 0.8709 |
| No log | 24.8 | 248 | 0.7605 | 0.2590 | 0.7605 | 0.8721 |
| No log | 25.0 | 250 | 0.7949 | 0.3121 | 0.7949 | 0.8916 |
| No log | 25.2 | 252 | 0.8086 | 0.3121 | 0.8086 | 0.8992 |
| No log | 25.4 | 254 | 0.7797 | 0.2161 | 0.7797 | 0.8830 |
| No log | 25.6 | 256 | 0.7776 | 0.2713 | 0.7776 | 0.8818 |
| No log | 25.8 | 258 | 0.8006 | 0.1775 | 0.8006 | 0.8948 |
| No log | 26.0 | 260 | 0.7976 | 0.2683 | 0.7976 | 0.8931 |
| No log | 26.2 | 262 | 0.8170 | 0.2445 | 0.8170 | 0.9039 |
| No log | 26.4 | 264 | 0.8895 | 0.3320 | 0.8895 | 0.9432 |
| No log | 26.6 | 266 | 0.9028 | 0.3320 | 0.9028 | 0.9501 |
| No log | 26.8 | 268 | 0.8500 | 0.3723 | 0.8500 | 0.9220 |
| No log | 27.0 | 270 | 0.7773 | 0.2237 | 0.7773 | 0.8816 |
| No log | 27.2 | 272 | 0.7466 | 0.1432 | 0.7466 | 0.8640 |
| No log | 27.4 | 274 | 0.7358 | 0.1432 | 0.7358 | 0.8578 |
| No log | 27.6 | 276 | 0.7184 | 0.1400 | 0.7184 | 0.8476 |
| No log | 27.8 | 278 | 0.7469 | 0.2913 | 0.7469 | 0.8642 |
| No log | 28.0 | 280 | 0.8341 | 0.4167 | 0.8341 | 0.9133 |
| No log | 28.2 | 282 | 0.9025 | 0.3480 | 0.9025 | 0.9500 |
| No log | 28.4 | 284 | 0.8798 | 0.3480 | 0.8798 | 0.9380 |
| No log | 28.6 | 286 | 0.7983 | 0.3305 | 0.7983 | 0.8935 |
| No log | 28.8 | 288 | 0.7546 | 0.2530 | 0.7546 | 0.8687 |
| No log | 29.0 | 290 | 0.7365 | 0.3198 | 0.7365 | 0.8582 |
| No log | 29.2 | 292 | 0.7642 | 0.3369 | 0.7642 | 0.8742 |
| No log | 29.4 | 294 | 0.7576 | 0.3369 | 0.7576 | 0.8704 |
| No log | 29.6 | 296 | 0.7301 | 0.3603 | 0.7301 | 0.8545 |
| No log | 29.8 | 298 | 0.7250 | 0.2182 | 0.7250 | 0.8515 |
| No log | 30.0 | 300 | 0.7324 | 0.2471 | 0.7324 | 0.8558 |
| No log | 30.2 | 302 | 0.7285 | 0.2471 | 0.7285 | 0.8535 |
| No log | 30.4 | 304 | 0.7248 | 0.2973 | 0.7248 | 0.8514 |
| No log | 30.6 | 306 | 0.7439 | 0.3859 | 0.7439 | 0.8625 |
| No log | 30.8 | 308 | 0.7629 | 0.3716 | 0.7629 | 0.8734 |
| No log | 31.0 | 310 | 0.7752 | 0.3093 | 0.7752 | 0.8804 |
| No log | 31.2 | 312 | 0.7833 | 0.3433 | 0.7833 | 0.8850 |
| No log | 31.4 | 314 | 0.7665 | 0.2535 | 0.7665 | 0.8755 |
| No log | 31.6 | 316 | 0.7582 | 0.2862 | 0.7582 | 0.8707 |
| No log | 31.8 | 318 | 0.7551 | 0.4081 | 0.7551 | 0.8690 |
| No log | 32.0 | 320 | 0.7494 | 0.4081 | 0.7494 | 0.8657 |
| No log | 32.2 | 322 | 0.7288 | 0.3144 | 0.7288 | 0.8537 |
| No log | 32.4 | 324 | 0.7226 | 0.2360 | 0.7226 | 0.8500 |
| No log | 32.6 | 326 | 0.7280 | 0.2392 | 0.7280 | 0.8532 |
| No log | 32.8 | 328 | 0.7445 | 0.2092 | 0.7445 | 0.8628 |
| No log | 33.0 | 330 | 0.7433 | 0.2092 | 0.7433 | 0.8622 |
| No log | 33.2 | 332 | 0.7497 | 0.3144 | 0.7497 | 0.8658 |
| No log | 33.4 | 334 | 0.7552 | 0.3088 | 0.7552 | 0.8690 |
| No log | 33.6 | 336 | 0.7637 | 0.3355 | 0.7637 | 0.8739 |
| No log | 33.8 | 338 | 0.7622 | 0.2751 | 0.7622 | 0.8730 |
| No log | 34.0 | 340 | 0.7537 | 0.3253 | 0.7537 | 0.8681 |
| No log | 34.2 | 342 | 0.7522 | 0.2621 | 0.7522 | 0.8673 |
| No log | 34.4 | 344 | 0.7579 | 0.2530 | 0.7579 | 0.8706 |
| No log | 34.6 | 346 | 0.7845 | 0.3399 | 0.7845 | 0.8857 |
| No log | 34.8 | 348 | 0.8104 | 0.3918 | 0.8104 | 0.9002 |
| No log | 35.0 | 350 | 0.8344 | 0.4167 | 0.8344 | 0.9135 |
| No log | 35.2 | 352 | 0.8210 | 0.4167 | 0.8210 | 0.9061 |
| No log | 35.4 | 354 | 0.7846 | 0.3662 | 0.7846 | 0.8858 |
| No log | 35.6 | 356 | 0.7560 | 0.2813 | 0.7560 | 0.8695 |
| No log | 35.8 | 358 | 0.7321 | 0.3551 | 0.7321 | 0.8556 |
| No log | 36.0 | 360 | 0.7285 | 0.2113 | 0.7285 | 0.8535 |
| No log | 36.2 | 362 | 0.7275 | 0.1760 | 0.7275 | 0.8529 |
| No log | 36.4 | 364 | 0.7351 | 0.2973 | 0.7351 | 0.8574 |
| No log | 36.6 | 366 | 0.7917 | 0.3121 | 0.7917 | 0.8898 |
| No log | 36.8 | 368 | 0.8597 | 0.3092 | 0.8597 | 0.9272 |
| No log | 37.0 | 370 | 0.8741 | 0.3320 | 0.8741 | 0.9349 |
| No log | 37.2 | 372 | 0.8339 | 0.3092 | 0.8339 | 0.9132 |
| No log | 37.4 | 374 | 0.7728 | 0.3088 | 0.7728 | 0.8791 |
| No log | 37.6 | 376 | 0.7437 | 0.2684 | 0.7437 | 0.8624 |
| No log | 37.8 | 378 | 0.7441 | 0.0741 | 0.7441 | 0.8626 |
| No log | 38.0 | 380 | 0.7444 | 0.1133 | 0.7444 | 0.8628 |
| No log | 38.2 | 382 | 0.7398 | 0.0330 | 0.7398 | 0.8601 |
| No log | 38.4 | 384 | 0.7364 | 0.1050 | 0.7364 | 0.8581 |
| No log | 38.6 | 386 | 0.7439 | 0.1988 | 0.7439 | 0.8625 |
| No log | 38.8 | 388 | 0.7504 | 0.2652 | 0.7504 | 0.8663 |
| No log | 39.0 | 390 | 0.7610 | 0.2590 | 0.7610 | 0.8724 |
| No log | 39.2 | 392 | 0.7666 | 0.2877 | 0.7666 | 0.8755 |
| No log | 39.4 | 394 | 0.7729 | 0.2877 | 0.7729 | 0.8791 |
| No log | 39.6 | 396 | 0.7772 | 0.2943 | 0.7772 | 0.8816 |
| No log | 39.8 | 398 | 0.7828 | 0.2327 | 0.7828 | 0.8848 |
| No log | 40.0 | 400 | 0.7829 | 0.2327 | 0.7829 | 0.8848 |
| No log | 40.2 | 402 | 0.7780 | 0.2270 | 0.7780 | 0.8820 |
| No log | 40.4 | 404 | 0.7750 | 0.2590 | 0.7750 | 0.8804 |
| No log | 40.6 | 406 | 0.7717 | 0.3224 | 0.7717 | 0.8785 |
| No log | 40.8 | 408 | 0.7689 | 0.3224 | 0.7689 | 0.8769 |
| No log | 41.0 | 410 | 0.7627 | 0.1935 | 0.7627 | 0.8733 |
| No log | 41.2 | 412 | 0.7545 | 0.1303 | 0.7545 | 0.8686 |
| No log | 41.4 | 414 | 0.7458 | 0.0652 | 0.7458 | 0.8636 |
| No log | 41.6 | 416 | 0.7394 | 0.0652 | 0.7394 | 0.8599 |
| No log | 41.8 | 418 | 0.7392 | 0.0652 | 0.7392 | 0.8598 |
| No log | 42.0 | 420 | 0.7425 | 0.1432 | 0.7425 | 0.8617 |
| No log | 42.2 | 422 | 0.7477 | 0.1697 | 0.7477 | 0.8647 |
| No log | 42.4 | 424 | 0.7587 | 0.1341 | 0.7587 | 0.8710 |
| No log | 42.6 | 426 | 0.7674 | 0.1673 | 0.7674 | 0.8760 |
| No log | 42.8 | 428 | 0.7715 | 0.1673 | 0.7715 | 0.8783 |
| No log | 43.0 | 430 | 0.7743 | 0.2023 | 0.7743 | 0.8800 |
| No log | 43.2 | 432 | 0.7788 | 0.1697 | 0.7788 | 0.8825 |
| No log | 43.4 | 434 | 0.7774 | 0.2004 | 0.7774 | 0.8817 |
| No log | 43.6 | 436 | 0.7802 | 0.1672 | 0.7802 | 0.8833 |
| No log | 43.8 | 438 | 0.7689 | 0.1697 | 0.7689 | 0.8769 |
| No log | 44.0 | 440 | 0.7487 | 0.1393 | 0.7487 | 0.8653 |
| No log | 44.2 | 442 | 0.7407 | 0.1009 | 0.7407 | 0.8607 |
| No log | 44.4 | 444 | 0.7507 | 0.1686 | 0.7507 | 0.8664 |
| No log | 44.6 | 446 | 0.7712 | 0.3471 | 0.7712 | 0.8782 |
| No log | 44.8 | 448 | 0.8306 | 0.3918 | 0.8306 | 0.9114 |
| No log | 45.0 | 450 | 0.8624 | 0.3243 | 0.8624 | 0.9286 |
| No log | 45.2 | 452 | 0.8467 | 0.3918 | 0.8467 | 0.9202 |
| No log | 45.4 | 454 | 0.8022 | 0.3996 | 0.8022 | 0.8956 |
| No log | 45.6 | 456 | 0.7728 | 0.3545 | 0.7728 | 0.8791 |
| No log | 45.8 | 458 | 0.7627 | 0.2621 | 0.7627 | 0.8733 |
| No log | 46.0 | 460 | 0.7581 | 0.1341 | 0.7581 | 0.8707 |
| No log | 46.2 | 462 | 0.7569 | 0.1341 | 0.7569 | 0.8700 |
| No log | 46.4 | 464 | 0.7586 | 0.1341 | 0.7586 | 0.8710 |
| No log | 46.6 | 466 | 0.7676 | 0.2023 | 0.7676 | 0.8761 |
| No log | 46.8 | 468 | 0.7907 | 0.3050 | 0.7907 | 0.8892 |
| No log | 47.0 | 470 | 0.8157 | 0.3196 | 0.8157 | 0.9031 |
| No log | 47.2 | 472 | 0.8223 | 0.2261 | 0.8223 | 0.9068 |
| No log | 47.4 | 474 | 0.8078 | 0.2063 | 0.8078 | 0.8988 |
| No log | 47.6 | 476 | 0.7929 | 0.2063 | 0.7929 | 0.8904 |
| No log | 47.8 | 478 | 0.7951 | 0.2063 | 0.7951 | 0.8917 |
| No log | 48.0 | 480 | 0.8073 | 0.2379 | 0.8073 | 0.8985 |
| No log | 48.2 | 482 | 0.8317 | 0.2981 | 0.8317 | 0.9120 |
| No log | 48.4 | 484 | 0.8679 | 0.3737 | 0.8679 | 0.9316 |
| No log | 48.6 | 486 | 0.8842 | 0.3544 | 0.8842 | 0.9403 |
| No log | 48.8 | 488 | 0.8807 | 0.3737 | 0.8807 | 0.9384 |
| No log | 49.0 | 490 | 0.8616 | 0.3737 | 0.8616 | 0.9282 |
| No log | 49.2 | 492 | 0.8484 | 0.3471 | 0.8484 | 0.9211 |
| No log | 49.4 | 494 | 0.8225 | 0.2319 | 0.8225 | 0.9069 |
| No log | 49.6 | 496 | 0.8099 | 0.2379 | 0.8099 | 0.9000 |
| No log | 49.8 | 498 | 0.8027 | 0.2379 | 0.8027 | 0.8960 |
| 0.2553 | 50.0 | 500 | 0.8057 | 0.2379 | 0.8057 | 0.8976 |
| 0.2553 | 50.2 | 502 | 0.8048 | 0.2379 | 0.8048 | 0.8971 |
| 0.2553 | 50.4 | 504 | 0.7961 | 0.1988 | 0.7961 | 0.8922 |
| 0.2553 | 50.6 | 506 | 0.7901 | 0.2685 | 0.7901 | 0.8889 |
| 0.2553 | 50.8 | 508 | 0.7940 | 0.2685 | 0.7940 | 0.8911 |
| 0.2553 | 51.0 | 510 | 0.8051 | 0.2847 | 0.8051 | 0.8973 |
| 0.2553 | 51.2 | 512 | 0.8122 | 0.2847 | 0.8122 | 0.9012 |
| 0.2553 | 51.4 | 514 | 0.8100 | 0.2847 | 0.8100 | 0.9000 |
| 0.2553 | 51.6 | 516 | 0.8060 | 0.2847 | 0.8060 | 0.8978 |
| 0.2553 | 51.8 | 518 | 0.8079 | 0.2847 | 0.8079 | 0.8988 |
| 0.2553 | 52.0 | 520 | 0.7941 | 0.2621 | 0.7941 | 0.8911 |
| 0.2553 | 52.2 | 522 | 0.7804 | 0.2685 | 0.7804 | 0.8834 |
| 0.2553 | 52.4 | 524 | 0.7663 | 0.2685 | 0.7663 | 0.8754 |
| 0.2553 | 52.6 | 526 | 0.7580 | 0.1737 | 0.7580 | 0.8706 |
| 0.2553 | 52.8 | 528 | 0.7571 | 0.1737 | 0.7571 | 0.8701 |
| 0.2553 | 53.0 | 530 | 0.7622 | 0.2294 | 0.7622 | 0.8730 |
| 0.2553 | 53.2 | 532 | 0.7758 | 0.2685 | 0.7758 | 0.8808 |
| 0.2553 | 53.4 | 534 | 0.8022 | 0.3088 | 0.8022 | 0.8957 |
| 0.2553 | 53.6 | 536 | 0.8215 | 0.3287 | 0.8215 | 0.9064 |
| 0.2553 | 53.8 | 538 | 0.8320 | 0.3287 | 0.8320 | 0.9121 |
| 0.2553 | 54.0 | 540 | 0.8215 | 0.3287 | 0.8215 | 0.9063 |
| 0.2553 | 54.2 | 542 | 0.7971 | 0.3688 | 0.7971 | 0.8928 |
| 0.2553 | 54.4 | 544 | 0.7801 | 0.2652 | 0.7801 | 0.8832 |
| 0.2553 | 54.6 | 546 | 0.7654 | 0.1673 | 0.7654 | 0.8749 |
| 0.2553 | 54.8 | 548 | 0.7574 | 0.0971 | 0.7574 | 0.8703 |
| 0.2553 | 55.0 | 550 | 0.7524 | 0.0971 | 0.7524 | 0.8674 |
| 0.2553 | 55.2 | 552 | 0.7489 | 0.0283 | 0.7489 | 0.8654 |
| 0.2553 | 55.4 | 554 | 0.7467 | 0.0697 | 0.7467 | 0.8641 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu118
- Datasets 2.21.0
- Tokenizers 0.19.1
|
kostiantynk/19c4fa41-fdff-4798-a49e-259eac91f176
|
kostiantynk
| 2025-01-21T12:31:01Z | 6 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:oopsung/llama2-7b-n-ox-test-v1",
"base_model:adapter:oopsung/llama2-7b-n-ox-test-v1",
"region:us"
] | null | 2025-01-21T12:24:58Z |
---
library_name: peft
base_model: oopsung/llama2-7b-n-ox-test-v1
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 19c4fa41-fdff-4798-a49e-259eac91f176
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: oopsung/llama2-7b-n-ox-test-v1
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- dddb0489dc663e1a_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/dddb0489dc663e1a_train_data.json
type:
field_input: Context
field_instruction: Question
field_output: Answers
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: kostiantynk/19c4fa41-fdff-4798-a49e-259eac91f176
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/dddb0489dc663e1a_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: bbd202cf-ffeb-42f5-82b2-0c60d893aeab
wandb_project: Birthday-SN56-7-Gradients-On-Demand
wandb_run: your_name
wandb_runid: bbd202cf-ffeb-42f5-82b2-0c60d893aeab
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 19c4fa41-fdff-4798-a49e-259eac91f176
This model is a fine-tuned version of [oopsung/llama2-7b-n-ox-test-v1](https://huggingface.co/oopsung/llama2-7b-n-ox-test-v1) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (bitsandbytes 8-bit) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0003 | 1 | nan |
| 0.0 | 0.0008 | 3 | nan |
| 0.0 | 0.0017 | 6 | nan |
| 0.0 | 0.0025 | 9 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
akh99/DeepSeek-R1-Distill-Qwen-7B_AWQ-tok-norm
|
akh99
| 2025-01-21T12:30:25Z | 141 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"awq",
"region:us"
] |
text-generation
| 2025-01-21T12:29:31Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
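The snippet below is only a minimal sketch, not verified against this repository: it assumes the checkpoint is an AWQ-quantized causal LM loadable with the standard `transformers` API (with `autoawq` and `accelerate` installed), and the repo id is taken from this card's title.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Repo id assumed from the card title; AWQ checkpoints also need the autoawq package.
model_id = "akh99/DeepSeek-R1-Distill-Qwen-7B_AWQ-tok-norm"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")  # device_map needs accelerate

# The card's tags mark the model as conversational, so use the chat template.
messages = [{"role": "user", "content": "Summarize what AWQ quantization does."}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
output_ids = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```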
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
nhung01/37ee035e-1e50-44b7-aa8a-1086267f8630
|
nhung01
| 2025-01-21T12:29:49Z | 6 | 0 |
peft
|
[
"peft",
"safetensors",
"gpt_neox",
"axolotl",
"generated_from_trainer",
"base_model:EleutherAI/pythia-1b",
"base_model:adapter:EleutherAI/pythia-1b",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-21T12:17:26Z |
---
library_name: peft
license: apache-2.0
base_model: EleutherAI/pythia-1b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 37ee035e-1e50-44b7-aa8a-1086267f8630
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: EleutherAI/pythia-1b
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- b88fbb911025d0a2_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/b88fbb911025d0a2_train_data.json
type:
field_input: Primary Keyword
field_instruction: Long Description
field_output: Position
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: nhung01/37ee035e-1e50-44b7-aa8a-1086267f8630
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/b88fbb911025d0a2_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
special_tokens:
pad_token: <|endoftext|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 9ff18266-1053-4343-a163-393cb535b12c
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 9ff18266-1053-4343-a163-393cb535b12c
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 37ee035e-1e50-44b7-aa8a-1086267f8630
This model is a fine-tuned version of [EleutherAI/pythia-1b](https://huggingface.co/EleutherAI/pythia-1b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0207
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (bitsandbytes 8-bit) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 10.0779 | 0.0119 | 200 | 2.0207 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
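A minimal inference sketch for this adapter, assuming it is loaded on top of the `EleutherAI/pythia-1b` base model listed above; the adapter repo id is the `hub_model_id` from the config, and the prompt only mirrors the config's `{instruction} {input}` format with made-up text.
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "EleutherAI/pythia-1b"
adapter_id = "nhung01/37ee035e-1e50-44b7-aa8a-1086267f8630"  # hub_model_id from the config above

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)

# Illustrative prompt in the '{instruction} {input}' shape (Long Description + Primary Keyword).
prompt = "Senior backend engineer building cloud data pipelines. python"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=16)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```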
|
josty11/roberta-optimized
|
josty11
| 2025-01-21T12:29:35Z | 23 | 0 |
transformers
|
[
"transformers",
"safetensors",
"distilbert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-01-21T12:29:09Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
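The snippet below is only a minimal sketch, not verified against this repository: it assumes the checkpoint is a standard text-classification model (the tags list `distilbert` and `text-classification`), and the label set is not documented here.
```python
from transformers import pipeline

# Repo id assumed from the card title; label names depend on the undocumented training setup.
classifier = pipeline("text-classification", model="josty11/roberta-optimized")
print(classifier("This is a sample sentence to classify."))
```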
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
lesso11/d853c065-1f63-4c2f-99c9-18ee9227719f
|
lesso11
| 2025-01-21T12:29:00Z | 6 | 0 |
peft
|
[
"peft",
"safetensors",
"phi3",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:migtissera/Tess-v2.5-Phi-3-medium-128k-14B",
"base_model:adapter:migtissera/Tess-v2.5-Phi-3-medium-128k-14B",
"license:mit",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-21T12:20:37Z |
---
library_name: peft
license: mit
base_model: migtissera/Tess-v2.5-Phi-3-medium-128k-14B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: d853c065-1f63-4c2f-99c9-18ee9227719f
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: migtissera/Tess-v2.5-Phi-3-medium-128k-14B
bf16: true
chat_template: llama3
datasets:
- data_files:
- 5fe705ae677c52cd_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/5fe705ae677c52cd_train_data.json
type:
field_input: code_before
field_instruction: func_before
field_output: code_after
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: 2
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
flash_attention: false
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: lesso11/d853c065-1f63-4c2f-99c9-18ee9227719f
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 25
micro_batch_size: 2
mlflow_experiment_name: /tmp/5fe705ae677c52cd_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: c9913a62-6036-4cc0-92bf-1f189dbde5c4
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: c9913a62-6036-4cc0-92bf-1f189dbde5c4
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# d853c065-1f63-4c2f-99c9-18ee9227719f
This model is a fine-tuned version of [migtissera/Tess-v2.5-Phi-3-medium-128k-14B](https://huggingface.co/migtissera/Tess-v2.5-Phi-3-medium-128k-14B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6170
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (bitsandbytes 8-bit) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.286 | 0.0020 | 1 | 0.6638 |
| 2.5694 | 0.0101 | 5 | 0.6637 |
| 2.2073 | 0.0202 | 10 | 0.6503 |
| 2.4329 | 0.0302 | 15 | 0.6293 |
| 1.1466 | 0.0403 | 20 | 0.6194 |
| 2.8759 | 0.0504 | 25 | 0.6170 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
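A minimal inference sketch for this adapter, assuming it is applied to the `migtissera/Tess-v2.5-Phi-3-medium-128k-14B` base model; `trust_remote_code` and 8-bit loading mirror the training config above, and the prompt is only illustrative.
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

base_id = "migtissera/Tess-v2.5-Phi-3-medium-128k-14B"
adapter_id = "lesso11/d853c065-1f63-4c2f-99c9-18ee9227719f"  # hub_model_id from the config above

tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
base = AutoModelForCausalLM.from_pretrained(
    base_id,
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),  # mirrors load_in_8bit in the config
    device_map="auto",
    trust_remote_code=True,  # the card's tags include custom_code
)
model = PeftModel.from_pretrained(base, adapter_id)

# Illustrative '{instruction} {input}' prompt built from the func_before / code_before fields.
prompt = "int add(int a, int b) { return a + b; }"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```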
|
cvoffer/ddc7dbca-7bbb-4a64-9065-11f4e2616878
|
cvoffer
| 2025-01-21T12:27:43Z | 6 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/llama-2-7b-chat",
"base_model:adapter:unsloth/llama-2-7b-chat",
"license:apache-2.0",
"region:us"
] | null | 2025-01-21T08:34:57Z |
---
library_name: peft
license: apache-2.0
base_model: unsloth/llama-2-7b-chat
tags:
- axolotl
- generated_from_trainer
model-index:
- name: ddc7dbca-7bbb-4a64-9065-11f4e2616878
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/llama-2-7b-chat
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 124bc05ddbf5ee81_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/124bc05ddbf5ee81_train_data.json
type:
field_instruction: docstring
field_output: summary
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: 1
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: cvoffer/ddc7dbca-7bbb-4a64-9065-11f4e2616878
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 78GiB
max_steps: 30
micro_batch_size: 2
mlflow_experiment_name: /tmp/124bc05ddbf5ee81_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 5fe9995d-0a95-46fa-b89c-25f97cbb6eb6
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 5fe9995d-0a95-46fa-b89c-25f97cbb6eb6
warmup_steps: 5
weight_decay: 0.001
xformers_attention: true
```
</details><br>
# ddc7dbca-7bbb-4a64-9065-11f4e2616878
This model is a fine-tuned version of [unsloth/llama-2-7b-chat](https://huggingface.co/unsloth/llama-2-7b-chat) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (torch) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0000 | 1 | nan |
| 0.0 | 0.0001 | 5 | nan |
| 0.0 | 0.0002 | 10 | nan |
| 0.0 | 0.0003 | 15 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
thaffggg/c0a3ae1e-da6c-43d4-9442-7848c0ffddeb
|
thaffggg
| 2025-01-21T12:27:07Z | 6 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"base_model:adapter:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-21T12:13:58Z |
---
library_name: peft
license: apache-2.0
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
tags:
- axolotl
- generated_from_trainer
model-index:
- name: c0a3ae1e-da6c-43d4-9442-7848c0ffddeb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- b73b66c21220caaf_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/b73b66c21220caaf_train_data.json
type:
field_input: categories
field_instruction: title
field_output: markdown
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: thaffggg/c0a3ae1e-da6c-43d4-9442-7848c0ffddeb
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/b73b66c21220caaf_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: fc2a71fc-f606-4cf6-887a-e73c961e3be1
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: fc2a71fc-f606-4cf6-887a-e73c961e3be1
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# c0a3ae1e-da6c-43d4-9442-7848c0ffddeb
This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8160
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (bitsandbytes 8-bit) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.9254 | 0.0378 | 200 | 1.8160 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
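A minimal inference sketch for this adapter, assuming it is applied to the `TinyLlama/TinyLlama-1.1B-Chat-v1.0` base model listed above; merging the LoRA weights is optional and shown only as a convenience, and the prompt is illustrative.
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"
adapter_id = "thaffggg/c0a3ae1e-da6c-43d4-9442-7848c0ffddeb"  # hub_model_id from the config above

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = PeftModel.from_pretrained(AutoModelForCausalLM.from_pretrained(base_id), adapter_id)
model = model.merge_and_unload()  # optional: fold the LoRA weights into the base model

# Illustrative '{instruction} {input}' prompt built from the title and categories fields.
prompt = "Getting started with LoRA adapters machine-learning"
inputs = tokenizer(prompt, return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```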
|
nadejdatarabukina/c663e43a-fce1-4aed-aabb-ac7bc046445f
|
nadejdatarabukina
| 2025-01-21T12:27:02Z | 6 | 0 |
peft
|
[
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:teknium/OpenHermes-2.5-Mistral-7B",
"base_model:adapter:teknium/OpenHermes-2.5-Mistral-7B",
"license:apache-2.0",
"region:us"
] | null | 2025-01-21T11:55:46Z |
---
library_name: peft
license: apache-2.0
base_model: teknium/OpenHermes-2.5-Mistral-7B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: c663e43a-fce1-4aed-aabb-ac7bc046445f
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: teknium/OpenHermes-2.5-Mistral-7B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 1b5fe4b652f9222e_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/1b5fe4b652f9222e_train_data.json
type:
field_instruction: instruction
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: null
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: false
hub_model_id: nadejdatarabukina/c663e43a-fce1-4aed-aabb-ac7bc046445f
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 75GiB
max_steps: 30
micro_batch_size: 2
mlflow_experiment_name: /tmp/1b5fe4b652f9222e_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 1024
special_tokens:
pad_token: <|im_end|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 857fa1e7-73d3-440e-a388-76fc6a5b2495
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 857fa1e7-73d3-440e-a388-76fc6a5b2495
warmup_steps: 10
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# c663e43a-fce1-4aed-aabb-ac7bc046445f
This model is a fine-tuned version of [teknium/OpenHermes-2.5-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (torch) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0002 | 1 | nan |
| 0.0 | 0.0010 | 5 | nan |
| 0.0 | 0.0019 | 10 | nan |
| 0.0 | 0.0029 | 15 | nan |
| 0.0 | 0.0039 | 20 | nan |
| 0.0 | 0.0049 | 25 | nan |
| 0.0 | 0.0058 | 30 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
filipesantoscv11/87dd10c7-f67d-4031-94e7-cba82e9cbe5a
|
filipesantoscv11
| 2025-01-21T12:26:56Z | 6 | 0 |
peft
|
[
"peft",
"safetensors",
"gemma",
"axolotl",
"generated_from_trainer",
"base_model:fxmarty/tiny-random-GemmaForCausalLM",
"base_model:adapter:fxmarty/tiny-random-GemmaForCausalLM",
"license:mit",
"region:us"
] | null | 2025-01-21T12:18:45Z |
---
library_name: peft
license: mit
base_model: fxmarty/tiny-random-GemmaForCausalLM
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 87dd10c7-f67d-4031-94e7-cba82e9cbe5a
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: fxmarty/tiny-random-GemmaForCausalLM
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- dabeeced6597d53e_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/dabeeced6597d53e_train_data.json
type:
field_instruction: sentence1_en
field_output: sentence2_en
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: 1
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: filipesantoscv11/87dd10c7-f67d-4031-94e7-cba82e9cbe5a
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 79GiB
max_steps: 30
micro_batch_size: 4
mlflow_experiment_name: /tmp/dabeeced6597d53e_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 8140ab10-5da7-47df-b106-2846f0a02738
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 8140ab10-5da7-47df-b106-2846f0a02738
warmup_steps: 5
weight_decay: 0.001
xformers_attention: true
```
</details><br>
# 87dd10c7-f67d-4031-94e7-cba82e9cbe5a
This model is a fine-tuned version of [fxmarty/tiny-random-GemmaForCausalLM](https://huggingface.co/fxmarty/tiny-random-GemmaForCausalLM) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: AdamW (torch) with betas=(0.9,0.999) and epsilon=1e-08; optimizer args: adam_beta1=0.9, adam_beta2=0.95, adam_epsilon=1e-5
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0000 | 1 | nan |
| 0.0 | 0.0002 | 5 | nan |
| 0.0 | 0.0004 | 10 | nan |
| 0.0 | 0.0006 | 15 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
dimasik87/6effa428-7d2c-4d09-8936-920f11f80aa1
|
dimasik87
| 2025-01-21T12:25:55Z | 6 | 0 |
peft
|
[
"peft",
"safetensors",
"gemma",
"axolotl",
"generated_from_trainer",
"base_model:fxmarty/tiny-random-GemmaForCausalLM",
"base_model:adapter:fxmarty/tiny-random-GemmaForCausalLM",
"license:mit",
"region:us"
] | null | 2025-01-21T12:17:08Z |
---
library_name: peft
license: mit
base_model: fxmarty/tiny-random-GemmaForCausalLM
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 6effa428-7d2c-4d09-8936-920f11f80aa1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: fxmarty/tiny-random-GemmaForCausalLM
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- dabeeced6597d53e_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/dabeeced6597d53e_train_data.json
type:
field_instruction: sentence1_en
field_output: sentence2_en
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: 1
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: dimasik87/6effa428-7d2c-4d09-8936-920f11f80aa1
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 79GiB
max_steps: 30
micro_batch_size: 4
mlflow_experiment_name: /tmp/dabeeced6597d53e_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 8140ab10-5da7-47df-b106-2846f0a02738
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 8140ab10-5da7-47df-b106-2846f0a02738
warmup_steps: 5
weight_decay: 0.001
xformers_attention: true
```
</details><br>
# 6effa428-7d2c-4d09-8936-920f11f80aa1
This model is a fine-tuned version of [fxmarty/tiny-random-GemmaForCausalLM](https://huggingface.co/fxmarty/tiny-random-GemmaForCausalLM) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: AdamW (torch) with betas=(0.9,0.999) and epsilon=1e-08; optimizer args: adam_beta1=0.9, adam_beta2=0.95, adam_epsilon=1e-5
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0000 | 1 | nan |
| 0.0 | 0.0002 | 5 | nan |
| 0.0 | 0.0004 | 10 | nan |
| 0.0 | 0.0006 | 15 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
mrHungddddh/ee0ee47e-2d5b-4367-8e93-9087d184a92e
|
mrHungddddh
| 2025-01-21T12:25:51Z | 6 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/tinyllama",
"base_model:adapter:unsloth/tinyllama",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-21T12:17:08Z |
---
library_name: peft
license: apache-2.0
base_model: unsloth/tinyllama
tags:
- axolotl
- generated_from_trainer
model-index:
- name: ee0ee47e-2d5b-4367-8e93-9087d184a92e
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/tinyllama
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 487571a5edd806c1_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/487571a5edd806c1_train_data.json
type:
field_input: strategy
field_instruction: original_text
field_output: reframed_text
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: mrHungddddh/ee0ee47e-2d5b-4367-8e93-9087d184a92e
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/487571a5edd806c1_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 37054b4e-d0d5-4aaa-9839-6cdc15c24dbf
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 37054b4e-d0d5-4aaa-9839-6cdc15c24dbf
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# ee0ee47e-2d5b-4367-8e93-9087d184a92e
This model is a fine-tuned version of [unsloth/tinyllama](https://huggingface.co/unsloth/tinyllama) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6298
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (bitsandbytes 8-bit) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 3.7227 | 0.2042 | 200 | 3.6298 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
tarabukinivan/1066e8a2-3893-4cd1-8fd8-e1b505b5a1b3
|
tarabukinivan
| 2025-01-21T12:25:00Z | 6 | 0 |
peft
|
[
"peft",
"safetensors",
"opt",
"axolotl",
"generated_from_trainer",
"base_model:facebook/opt-125m",
"base_model:adapter:facebook/opt-125m",
"license:other",
"region:us"
] | null | 2025-01-21T12:20:06Z |
---
library_name: peft
license: other
base_model: facebook/opt-125m
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 1066e8a2-3893-4cd1-8fd8-e1b505b5a1b3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: facebook/opt-125m
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 807edbe01d3143fb_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/807edbe01d3143fb_train_data.json
type:
field_input: question
field_instruction: answer
field_output: context
field_system: distractors
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: null
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: false
hub_model_id: tarabukinivan/1066e8a2-3893-4cd1-8fd8-e1b505b5a1b3
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 75GiB
max_steps: 30
micro_batch_size: 2
mlflow_experiment_name: /tmp/807edbe01d3143fb_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 87d40317-ca50-4c35-ad9f-1a82b7dfae06
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 87d40317-ca50-4c35-ad9f-1a82b7dfae06
warmup_steps: 10
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 1066e8a2-3893-4cd1-8fd8-e1b505b5a1b3
This model is a fine-tuned version of [facebook/opt-125m](https://huggingface.co/facebook/opt-125m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2501
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (torch) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0001 | 1 | 3.4432 |
| 14.4001 | 0.0007 | 5 | 3.4202 |
| 13.9917 | 0.0014 | 10 | 3.3622 |
| 13.5667 | 0.0021 | 15 | 3.3026 |
| 13.4465 | 0.0028 | 20 | 3.2633 |
| 13.4339 | 0.0035 | 25 | 3.2522 |
| 13.0761 | 0.0042 | 30 | 3.2501 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
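A minimal inference sketch for this adapter; `AutoPeftModelForCausalLM` resolves the `facebook/opt-125m` base model from the adapter config, the adapter repo id is the `hub_model_id` from the config above, and the prompt is only illustrative.
```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

adapter_id = "tarabukinivan/1066e8a2-3893-4cd1-8fd8-e1b505b5a1b3"  # hub_model_id from the config above

model = AutoPeftModelForCausalLM.from_pretrained(adapter_id)  # loads facebook/opt-125m plus the LoRA weights
tokenizer = AutoTokenizer.from_pretrained("facebook/opt-125m")

inputs = tokenizer("Paris is the capital of", return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=16)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```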
|
Ancastal/mistral-7b-literary-creamt-ita-v2
|
Ancastal
| 2025-01-21T12:24:44Z | 18 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"trl",
"sft",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2025-01-21T12:22:09Z |
---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
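In the absence of author-provided code, a minimal, hedged sketch of standard 🤗 Transformers text-generation usage follows; the repo id is taken from this card, while the prompt and generation settings are illustrative assumptions, not the authors' recommended setup.
```python
# Hedged sketch: load the checkpoint and generate text with Transformers.
# The repo id comes from this card; the prompt and generation settings are illustrative.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Ancastal/mistral-7b-literary-creamt-ita-v2"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Translate the following sentence into Italian:"  # illustrative prompt
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```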
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
kokovova/a68a6605-dbd4-4a06-94e6-cf4a93ba65b4
|
kokovova
| 2025-01-21T12:22:52Z | 5 | 0 |
peft
|
[
"peft",
"safetensors",
"opt",
"axolotl",
"generated_from_trainer",
"base_model:facebook/opt-125m",
"base_model:adapter:facebook/opt-125m",
"license:other",
"region:us"
] | null | 2025-01-21T12:20:06Z |
---
library_name: peft
license: other
base_model: facebook/opt-125m
tags:
- axolotl
- generated_from_trainer
model-index:
- name: a68a6605-dbd4-4a06-94e6-cf4a93ba65b4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: facebook/opt-125m
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 807edbe01d3143fb_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/807edbe01d3143fb_train_data.json
type:
field_input: question
field_instruction: answer
field_output: context
field_system: distractors
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: 1
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: kokovova/a68a6605-dbd4-4a06-94e6-cf4a93ba65b4
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 79GiB
max_steps: 30
micro_batch_size: 4
mlflow_experiment_name: /tmp/807edbe01d3143fb_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 87d40317-ca50-4c35-ad9f-1a82b7dfae06
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 87d40317-ca50-4c35-ad9f-1a82b7dfae06
warmup_steps: 5
weight_decay: 0.001
xformers_attention: true
```
</details><br>
# a68a6605-dbd4-4a06-94e6-cf4a93ba65b4
This model is a fine-tuned version of [facebook/opt-125m](https://huggingface.co/facebook/opt-125m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.3495
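Since this repository hosts a LoRA adapter rather than full model weights, one plausible way to use it is to attach it to the facebook/opt-125m base with PEFT; the sketch below is an assumption-laden example (repo ids from this card, prompt illustrative), not an official usage recipe.
```python
# Hedged sketch: attach this LoRA adapter to its facebook/opt-125m base with PEFT.
# Repo ids come from this card; the prompt is illustrative.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "facebook/opt-125m"
adapter_id = "kokovova/a68a6605-dbd4-4a06-94e6-cf4a93ba65b4"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base_model, adapter_id)

inputs = tokenizer("Example prompt:", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```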
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0003 | 1 | 3.5459 |
| 12.8135 | 0.0014 | 5 | 3.5057 |
| 13.1425 | 0.0028 | 10 | 3.4401 |
| 13.0512 | 0.0042 | 15 | 3.3813 |
| 12.9854 | 0.0056 | 20 | 3.3556 |
| 13.2067 | 0.0070 | 25 | 3.3501 |
| 12.8119 | 0.0084 | 30 | 3.3495 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
ClarenceDan/55ee309d-0518-4ff9-9817-91bd856ea2b9
|
ClarenceDan
| 2025-01-21T12:21:37Z | 6 | 0 |
peft
|
[
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen1.5-14B-Chat",
"base_model:adapter:Qwen/Qwen1.5-14B-Chat",
"license:other",
"region:us"
] | null | 2025-01-21T12:19:59Z |
---
library_name: peft
license: other
base_model: Qwen/Qwen1.5-14B-Chat
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 55ee309d-0518-4ff9-9817-91bd856ea2b9
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Qwen/Qwen1.5-14B-Chat
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- a6fd907479ec4d1c_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/a6fd907479ec4d1c_train_data.json
type:
field_instruction: text
field_output: title
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: ClarenceDan/55ee309d-0518-4ff9-9817-91bd856ea2b9
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/a6fd907479ec4d1c_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: fded3919-5ba0-4f07-810d-e0ea78b083dd
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: fded3919-5ba0-4f07-810d-e0ea78b083dd
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 55ee309d-0518-4ff9-9817-91bd856ea2b9
This model is a fine-tuned version of [Qwen/Qwen1.5-14B-Chat](https://huggingface.co/Qwen/Qwen1.5-14B-Chat) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.5376
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 3.8175 | 0.0028 | 1 | 3.9889 |
| 3.7012 | 0.0085 | 3 | 3.9837 |
| 4.6257 | 0.0171 | 6 | 3.9136 |
| 3.7991 | 0.0256 | 9 | 3.5376 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
lesso15/c922ade5-6e42-482f-8b7f-f2fb56073597
|
lesso15
| 2025-01-21T12:21:27Z | 6 | 0 |
peft
|
[
"peft",
"safetensors",
"gpt_neox",
"axolotl",
"generated_from_trainer",
"base_model:EleutherAI/pythia-14m",
"base_model:adapter:EleutherAI/pythia-14m",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-21T12:20:52Z |
---
library_name: peft
base_model: EleutherAI/pythia-14m
tags:
- axolotl
- generated_from_trainer
model-index:
- name: c922ade5-6e42-482f-8b7f-f2fb56073597
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: EleutherAI/pythia-14m
bf16: auto
chat_template: llama3
datasets:
- data_files:
- eb2c9ecdcfd4c8a6_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/eb2c9ecdcfd4c8a6_train_data.json
type:
field_instruction: image
field_output: description
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: 2
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: true
gradient_checkpointing: false
group_by_length: false
hub_model_id: lesso15/c922ade5-6e42-482f-8b7f-f2fb56073597
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_steps: 25
micro_batch_size: 2
mlflow_experiment_name: /tmp/eb2c9ecdcfd4c8a6_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 1024
special_tokens:
pad_token: <|endoftext|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 99d526df-9766-4d1f-80b3-c918100a230c
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 99d526df-9766-4d1f-80b3-c918100a230c
warmup_steps: 10
weight_decay: 0.01
xformers_attention: null
```
</details><br>
# c922ade5-6e42-482f-8b7f-f2fb56073597
This model is a fine-tuned version of [EleutherAI/pythia-14m](https://huggingface.co/EleutherAI/pythia-14m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0091
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0012 | 1 | 5.6256 |
| 5.6383 | 0.0058 | 5 | 5.3829 |
| 4.8286 | 0.0117 | 10 | 4.4874 |
| 3.6903 | 0.0175 | 15 | 3.6561 |
| 3.5467 | 0.0234 | 20 | 3.1049 |
| 2.657 | 0.0292 | 25 | 3.0091 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
demohong/b7c5fd23-cbe4-469e-bdc5-e977a9051452
|
demohong
| 2025-01-21T12:21:27Z | 6 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:codellama/CodeLlama-7b-Instruct-hf",
"base_model:adapter:codellama/CodeLlama-7b-Instruct-hf",
"license:llama2",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-21T11:54:20Z |
---
library_name: peft
license: llama2
base_model: codellama/CodeLlama-7b-Instruct-hf
tags:
- axolotl
- generated_from_trainer
model-index:
- name: b7c5fd23-cbe4-469e-bdc5-e977a9051452
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: codellama/CodeLlama-7b-Instruct-hf
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- e61b15027cdb8f0f_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/e61b15027cdb8f0f_train_data.json
type:
field_instruction: text_description
field_output: text
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: demohong/b7c5fd23-cbe4-469e-bdc5-e977a9051452
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/e61b15027cdb8f0f_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 52dcb611-f58d-420b-a954-552a3249dfec
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 52dcb611-f58d-420b-a954-552a3249dfec
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# b7c5fd23-cbe4-469e-bdc5-e977a9051452
This model is a fine-tuned version of [codellama/CodeLlama-7b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-7b-Instruct-hf) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1081
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.9469 | 0.0800 | 200 | 2.1081 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
minsangK/20250120-bge-m3-8192-bs-4-1-epoch-5e-6-hn-2
|
minsangK
| 2025-01-21T12:21:19Z | 7 | 0 |
transformers
|
[
"transformers",
"safetensors",
"xlm-roberta",
"feature-extraction",
"generated_from_trainer",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2025-01-21T05:05:07Z |
---
library_name: transformers
tags:
- generated_from_trainer
model-index:
- name: 20250120-bge-m3-8192-bs-4-1-epoch-5e-6-hn-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 20250120-bge-m3-8192-bs-4-1-epoch-5e-6-hn-2
This model was trained from scratch on an unknown dataset.
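As a hedged sketch only: the pipeline tag indicates feature extraction, so embeddings could be pulled from the encoder as below. CLS pooling with L2 normalization is an assumption (common for BGE-style retrievers), not documented behavior of this checkpoint.
```python
# Hedged sketch: extract sentence embeddings from this XLM-RoBERTa encoder.
# CLS pooling with L2 normalization is an assumption, not documented for this model.
import torch
from transformers import AutoModel, AutoTokenizer

model_id = "minsangK/20250120-bge-m3-8192-bs-4-1-epoch-5e-6-hn-2"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

sentences = ["example query", "example passage"]
inputs = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state
embeddings = torch.nn.functional.normalize(hidden[:, 0], dim=-1)  # CLS pooling
print(embeddings @ embeddings.T)  # cosine similarities
```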
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- total_train_batch_size: 16
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.44.2
- Pytorch 2.2.2
- Datasets 2.19.0
- Tokenizers 0.19.1
|
MayBashendy/ArabicNewSplits7_usingWellWrittenEssays_FineTuningAraBERT_run2_AugV5_k2_task7_organization
|
MayBashendy
| 2025-01-21T12:21:14Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:aubmindlab/bert-base-arabertv02",
"base_model:finetune:aubmindlab/bert-base-arabertv02",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-01-21T12:17:18Z |
---
library_name: transformers
base_model: aubmindlab/bert-base-arabertv02
tags:
- generated_from_trainer
model-index:
- name: ArabicNewSplits7_usingWellWrittenEssays_FineTuningAraBERT_run2_AugV5_k2_task7_organization
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ArabicNewSplits7_usingWellWrittenEssays_FineTuningAraBERT_run2_AugV5_k2_task7_organization
This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0238
- Qwk: 0.2898
- Mse: 1.0238
- Rmse: 1.0119
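Because the card reports Qwk/MSE/RMSE, the classification head appears to act as a single-value organization scorer; the following is a hedged inference sketch (repo id from this card, the input text is a placeholder), not an author-confirmed usage pattern.
```python
# Hedged sketch: score an Arabic essay for organization with this fine-tuned AraBERT.
# The Qwk/MSE/RMSE metrics above suggest a regression-style head; the input is a placeholder.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "MayBashendy/ArabicNewSplits7_usingWellWrittenEssays_FineTuningAraBERT_run2_AugV5_k2_task7_organization"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

text = "نص المقال هنا"  # placeholder essay text
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
print(logits)  # interpret as the predicted organization score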
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:------:|
| No log | 0.4 | 2 | 2.5043 | -0.0788 | 2.5043 | 1.5825 |
| No log | 0.8 | 4 | 1.1496 | 0.1284 | 1.1496 | 1.0722 |
| No log | 1.2 | 6 | 0.8398 | 0.0535 | 0.8398 | 0.9164 |
| No log | 1.6 | 8 | 0.8665 | 0.0313 | 0.8665 | 0.9309 |
| No log | 2.0 | 10 | 0.9418 | 0.1181 | 0.9418 | 0.9705 |
| No log | 2.4 | 12 | 0.8754 | 0.1268 | 0.8754 | 0.9356 |
| No log | 2.8 | 14 | 0.7780 | 0.0804 | 0.7780 | 0.8821 |
| No log | 3.2 | 16 | 0.7806 | 0.0444 | 0.7806 | 0.8835 |
| No log | 3.6 | 18 | 0.7911 | 0.0444 | 0.7911 | 0.8895 |
| No log | 4.0 | 20 | 0.8116 | 0.0481 | 0.8116 | 0.9009 |
| No log | 4.4 | 22 | 0.7852 | 0.0444 | 0.7852 | 0.8861 |
| No log | 4.8 | 24 | 0.7556 | 0.1187 | 0.7556 | 0.8693 |
| No log | 5.2 | 26 | 0.7550 | 0.1094 | 0.7550 | 0.8689 |
| No log | 5.6 | 28 | 0.7873 | 0.2285 | 0.7873 | 0.8873 |
| No log | 6.0 | 30 | 0.8195 | 0.1867 | 0.8195 | 0.9053 |
| No log | 6.4 | 32 | 0.7905 | 0.1584 | 0.7905 | 0.8891 |
| No log | 6.8 | 34 | 0.8006 | 0.1542 | 0.8006 | 0.8947 |
| No log | 7.2 | 36 | 0.8280 | 0.1946 | 0.8280 | 0.9100 |
| No log | 7.6 | 38 | 1.0096 | 0.1501 | 1.0096 | 1.0048 |
| No log | 8.0 | 40 | 1.0556 | 0.1867 | 1.0556 | 1.0274 |
| No log | 8.4 | 42 | 0.9515 | 0.0241 | 0.9515 | 0.9754 |
| No log | 8.8 | 44 | 0.8984 | 0.1289 | 0.8984 | 0.9478 |
| No log | 9.2 | 46 | 0.9670 | 0.1385 | 0.9670 | 0.9834 |
| No log | 9.6 | 48 | 1.0413 | 0.2119 | 1.0413 | 1.0204 |
| No log | 10.0 | 50 | 1.1556 | 0.1115 | 1.1556 | 1.0750 |
| No log | 10.4 | 52 | 1.1839 | 0.0845 | 1.1839 | 1.0881 |
| No log | 10.8 | 54 | 1.2097 | 0.0686 | 1.2097 | 1.0998 |
| No log | 11.2 | 56 | 1.2016 | 0.0686 | 1.2016 | 1.0962 |
| No log | 11.6 | 58 | 1.1247 | 0.0713 | 1.1247 | 1.0605 |
| No log | 12.0 | 60 | 1.0039 | 0.1775 | 1.0039 | 1.0020 |
| No log | 12.4 | 62 | 0.9073 | 0.2239 | 0.9073 | 0.9525 |
| No log | 12.8 | 64 | 1.0174 | 0.1014 | 1.0174 | 1.0087 |
| No log | 13.2 | 66 | 1.2606 | 0.1176 | 1.2606 | 1.1228 |
| No log | 13.6 | 68 | 1.3323 | 0.1479 | 1.3323 | 1.1542 |
| No log | 14.0 | 70 | 1.3229 | 0.0704 | 1.3229 | 1.1502 |
| No log | 14.4 | 72 | 1.0368 | 0.1843 | 1.0368 | 1.0182 |
| No log | 14.8 | 74 | 0.9368 | 0.2124 | 0.9368 | 0.9679 |
| No log | 15.2 | 76 | 0.9609 | 0.2076 | 0.9609 | 0.9803 |
| No log | 15.6 | 78 | 1.1684 | 0.1086 | 1.1684 | 1.0809 |
| No log | 16.0 | 80 | 1.2916 | 0.1417 | 1.2916 | 1.1365 |
| No log | 16.4 | 82 | 1.2723 | 0.1145 | 1.2723 | 1.1279 |
| No log | 16.8 | 84 | 1.1180 | 0.2070 | 1.1180 | 1.0573 |
| No log | 17.2 | 86 | 0.9465 | 0.3134 | 0.9465 | 0.9729 |
| No log | 17.6 | 88 | 0.9163 | 0.3194 | 0.9163 | 0.9572 |
| No log | 18.0 | 90 | 1.0376 | 0.2209 | 1.0376 | 1.0186 |
| No log | 18.4 | 92 | 1.2168 | 0.2070 | 1.2168 | 1.1031 |
| No log | 18.8 | 94 | 1.2754 | 0.0727 | 1.2754 | 1.1293 |
| No log | 19.2 | 96 | 1.2909 | 0.1200 | 1.2909 | 1.1362 |
| No log | 19.6 | 98 | 1.2740 | 0.2138 | 1.2740 | 1.1287 |
| No log | 20.0 | 100 | 1.0757 | 0.2354 | 1.0757 | 1.0372 |
| No log | 20.4 | 102 | 0.9648 | 0.3134 | 0.9648 | 0.9823 |
| No log | 20.8 | 104 | 0.9900 | 0.2537 | 0.9900 | 0.9950 |
| No log | 21.2 | 106 | 1.0386 | 0.1549 | 1.0386 | 1.0191 |
| No log | 21.6 | 108 | 1.1480 | 0.1206 | 1.1480 | 1.0714 |
| No log | 22.0 | 110 | 1.2393 | 0.1254 | 1.2393 | 1.1133 |
| No log | 22.4 | 112 | 1.3027 | 0.1198 | 1.3027 | 1.1414 |
| No log | 22.8 | 114 | 1.2327 | 0.1473 | 1.2327 | 1.1102 |
| No log | 23.2 | 116 | 1.0803 | 0.1176 | 1.0803 | 1.0394 |
| No log | 23.6 | 118 | 1.0485 | 0.2330 | 1.0485 | 1.0240 |
| No log | 24.0 | 120 | 1.0691 | 0.2545 | 1.0691 | 1.0340 |
| No log | 24.4 | 122 | 1.1256 | 0.2580 | 1.1256 | 1.0610 |
| No log | 24.8 | 124 | 1.1114 | 0.1764 | 1.1114 | 1.0542 |
| No log | 25.2 | 126 | 1.0668 | 0.1576 | 1.0668 | 1.0328 |
| No log | 25.6 | 128 | 0.9976 | 0.1726 | 0.9976 | 0.9988 |
| No log | 26.0 | 130 | 0.9405 | 0.2389 | 0.9405 | 0.9698 |
| No log | 26.4 | 132 | 0.9616 | 0.2507 | 0.9616 | 0.9806 |
| No log | 26.8 | 134 | 0.9992 | 0.2872 | 0.9992 | 0.9996 |
| No log | 27.2 | 136 | 0.9222 | 0.3022 | 0.9222 | 0.9603 |
| No log | 27.6 | 138 | 0.8651 | 0.3319 | 0.8651 | 0.9301 |
| No log | 28.0 | 140 | 0.8733 | 0.2328 | 0.8733 | 0.9345 |
| No log | 28.4 | 142 | 0.8902 | 0.2440 | 0.8902 | 0.9435 |
| No log | 28.8 | 144 | 0.9273 | 0.2116 | 0.9273 | 0.9629 |
| No log | 29.2 | 146 | 0.9950 | 0.1672 | 0.9950 | 0.9975 |
| No log | 29.6 | 148 | 1.0746 | 0.1238 | 1.0746 | 1.0366 |
| No log | 30.0 | 150 | 1.2142 | 0.1583 | 1.2142 | 1.1019 |
| No log | 30.4 | 152 | 1.2794 | 0.1174 | 1.2794 | 1.1311 |
| No log | 30.8 | 154 | 1.2352 | 0.1956 | 1.2352 | 1.1114 |
| No log | 31.2 | 156 | 1.1476 | 0.1884 | 1.1476 | 1.0712 |
| No log | 31.6 | 158 | 1.0861 | 0.1451 | 1.0861 | 1.0422 |
| No log | 32.0 | 160 | 1.0964 | 0.0648 | 1.0964 | 1.0471 |
| No log | 32.4 | 162 | 1.1565 | 0.0872 | 1.1565 | 1.0754 |
| No log | 32.8 | 164 | 1.1985 | 0.1320 | 1.1985 | 1.0948 |
| No log | 33.2 | 166 | 1.1957 | 0.1550 | 1.1957 | 1.0935 |
| No log | 33.6 | 168 | 1.1242 | 0.1774 | 1.1242 | 1.0603 |
| No log | 34.0 | 170 | 1.0184 | 0.2166 | 1.0184 | 1.0092 |
| No log | 34.4 | 172 | 1.0672 | 0.2031 | 1.0672 | 1.0330 |
| No log | 34.8 | 174 | 1.0674 | 0.2961 | 1.0674 | 1.0332 |
| No log | 35.2 | 176 | 1.0550 | 0.2288 | 1.0550 | 1.0272 |
| No log | 35.6 | 178 | 1.0939 | 0.1618 | 1.0939 | 1.0459 |
| No log | 36.0 | 180 | 1.1366 | 0.0953 | 1.1366 | 1.0661 |
| No log | 36.4 | 182 | 1.1833 | 0.1618 | 1.1833 | 1.0878 |
| No log | 36.8 | 184 | 1.2085 | 0.2247 | 1.2085 | 1.0993 |
| No log | 37.2 | 186 | 1.1489 | 0.2330 | 1.1489 | 1.0719 |
| No log | 37.6 | 188 | 1.0133 | 0.3059 | 1.0133 | 1.0066 |
| No log | 38.0 | 190 | 0.9738 | 0.3110 | 0.9738 | 0.9868 |
| No log | 38.4 | 192 | 0.9567 | 0.2554 | 0.9567 | 0.9781 |
| No log | 38.8 | 194 | 1.0198 | 0.2554 | 1.0198 | 1.0099 |
| No log | 39.2 | 196 | 1.0940 | 0.2192 | 1.0940 | 1.0459 |
| No log | 39.6 | 198 | 1.0901 | 0.1726 | 1.0901 | 1.0441 |
| No log | 40.0 | 200 | 1.0486 | 0.2323 | 1.0486 | 1.0240 |
| No log | 40.4 | 202 | 1.0287 | 0.2507 | 1.0287 | 1.0142 |
| No log | 40.8 | 204 | 1.0958 | 0.2330 | 1.0958 | 1.0468 |
| No log | 41.2 | 206 | 1.1211 | 0.2330 | 1.1211 | 1.0588 |
| No log | 41.6 | 208 | 1.1175 | 0.2330 | 1.1175 | 1.0571 |
| No log | 42.0 | 210 | 1.0222 | 0.2461 | 1.0222 | 1.0110 |
| No log | 42.4 | 212 | 0.9571 | 0.2934 | 0.9571 | 0.9783 |
| No log | 42.8 | 214 | 0.9372 | 0.2934 | 0.9372 | 0.9681 |
| No log | 43.2 | 216 | 0.9523 | 0.2830 | 0.9523 | 0.9759 |
| No log | 43.6 | 218 | 0.9940 | 0.2682 | 0.9940 | 0.9970 |
| No log | 44.0 | 220 | 1.0150 | 0.2682 | 1.0150 | 1.0075 |
| No log | 44.4 | 222 | 0.9591 | 0.2898 | 0.9591 | 0.9793 |
| No log | 44.8 | 224 | 0.9037 | 0.3579 | 0.9037 | 0.9506 |
| No log | 45.2 | 226 | 0.8545 | 0.3777 | 0.8545 | 0.9244 |
| No log | 45.6 | 228 | 0.8533 | 0.3777 | 0.8533 | 0.9237 |
| No log | 46.0 | 230 | 0.8554 | 0.3409 | 0.8554 | 0.9249 |
| No log | 46.4 | 232 | 0.7829 | 0.3494 | 0.7829 | 0.8848 |
| No log | 46.8 | 234 | 0.7568 | 0.3737 | 0.7568 | 0.8700 |
| No log | 47.2 | 236 | 0.7869 | 0.3425 | 0.7869 | 0.8871 |
| No log | 47.6 | 238 | 0.8789 | 0.3579 | 0.8789 | 0.9375 |
| No log | 48.0 | 240 | 0.9512 | 0.3161 | 0.9512 | 0.9753 |
| No log | 48.4 | 242 | 0.9939 | 0.2982 | 0.9939 | 0.9969 |
| No log | 48.8 | 244 | 0.9759 | 0.2982 | 0.9759 | 0.9879 |
| No log | 49.2 | 246 | 0.9258 | 0.3161 | 0.9258 | 0.9622 |
| No log | 49.6 | 248 | 0.8924 | 0.3269 | 0.8924 | 0.9447 |
| No log | 50.0 | 250 | 0.8939 | 0.3269 | 0.8939 | 0.9455 |
| No log | 50.4 | 252 | 0.8985 | 0.3214 | 0.8985 | 0.9479 |
| No log | 50.8 | 254 | 0.9009 | 0.3359 | 0.9009 | 0.9492 |
| No log | 51.2 | 256 | 0.9503 | 0.3302 | 0.9503 | 0.9748 |
| No log | 51.6 | 258 | 1.0464 | 0.2643 | 1.0464 | 1.0229 |
| No log | 52.0 | 260 | 1.1513 | 0.2518 | 1.1513 | 1.0730 |
| No log | 52.4 | 262 | 1.1727 | 0.2518 | 1.1727 | 1.0829 |
| No log | 52.8 | 264 | 1.1205 | 0.2843 | 1.1205 | 1.0585 |
| No log | 53.2 | 266 | 1.0426 | 0.3010 | 1.0426 | 1.0211 |
| No log | 53.6 | 268 | 0.9458 | 0.3110 | 0.9458 | 0.9725 |
| No log | 54.0 | 270 | 0.9039 | 0.3110 | 0.9039 | 0.9507 |
| No log | 54.4 | 272 | 0.8963 | 0.3214 | 0.8963 | 0.9467 |
| No log | 54.8 | 274 | 0.9164 | 0.2682 | 0.9164 | 0.9573 |
| No log | 55.2 | 276 | 0.9534 | 0.2682 | 0.9534 | 0.9764 |
| No log | 55.6 | 278 | 1.0053 | 0.2682 | 1.0053 | 1.0026 |
| No log | 56.0 | 280 | 1.0373 | 0.2682 | 1.0373 | 1.0185 |
| No log | 56.4 | 282 | 1.0683 | 0.2682 | 1.0683 | 1.0336 |
| No log | 56.8 | 284 | 1.0886 | 0.2850 | 1.0886 | 1.0433 |
| No log | 57.2 | 286 | 1.1172 | 0.2687 | 1.1172 | 1.0570 |
| No log | 57.6 | 288 | 1.1151 | 0.2687 | 1.1151 | 1.0560 |
| No log | 58.0 | 290 | 1.0486 | 0.2682 | 1.0486 | 1.0240 |
| No log | 58.4 | 292 | 0.9765 | 0.2934 | 0.9765 | 0.9882 |
| No log | 58.8 | 294 | 0.9303 | 0.2754 | 0.9303 | 0.9645 |
| No log | 59.2 | 296 | 0.8948 | 0.2807 | 0.8948 | 0.9459 |
| No log | 59.6 | 298 | 0.9041 | 0.2754 | 0.9041 | 0.9508 |
| No log | 60.0 | 300 | 0.9135 | 0.3217 | 0.9135 | 0.9558 |
| No log | 60.4 | 302 | 0.9702 | 0.2999 | 0.9702 | 0.9850 |
| No log | 60.8 | 304 | 1.0872 | 0.2732 | 1.0872 | 1.0427 |
| No log | 61.2 | 306 | 1.1963 | 0.2601 | 1.1963 | 1.0937 |
| No log | 61.6 | 308 | 1.2333 | 0.2319 | 1.2333 | 1.1105 |
| No log | 62.0 | 310 | 1.2058 | 0.2319 | 1.2058 | 1.0981 |
| No log | 62.4 | 312 | 1.1687 | 0.2643 | 1.1687 | 1.0811 |
| No log | 62.8 | 314 | 1.1065 | 0.2330 | 1.1065 | 1.0519 |
| No log | 63.2 | 316 | 1.0488 | 0.2416 | 1.0488 | 1.0241 |
| No log | 63.6 | 318 | 1.0236 | 0.2461 | 1.0236 | 1.0117 |
| No log | 64.0 | 320 | 1.0416 | 0.2461 | 1.0416 | 1.0206 |
| No log | 64.4 | 322 | 1.0806 | 0.2312 | 1.0806 | 1.0395 |
| No log | 64.8 | 324 | 1.0938 | 0.2312 | 1.0938 | 1.0458 |
| No log | 65.2 | 326 | 1.0957 | 0.2312 | 1.0957 | 1.0468 |
| No log | 65.6 | 328 | 1.1094 | 0.2231 | 1.1094 | 1.0533 |
| No log | 66.0 | 330 | 1.0897 | 0.2372 | 1.0897 | 1.0439 |
| No log | 66.4 | 332 | 1.0418 | 0.2507 | 1.0418 | 1.0207 |
| No log | 66.8 | 334 | 1.0109 | 0.2554 | 1.0109 | 1.0054 |
| No log | 67.2 | 336 | 0.9840 | 0.2537 | 0.9840 | 0.9919 |
| No log | 67.6 | 338 | 0.9836 | 0.2728 | 0.9836 | 0.9918 |
| No log | 68.0 | 340 | 1.0211 | 0.2507 | 1.0211 | 1.0105 |
| No log | 68.4 | 342 | 1.0395 | 0.2461 | 1.0395 | 1.0196 |
| No log | 68.8 | 344 | 1.0772 | 0.2635 | 1.0772 | 1.0379 |
| No log | 69.2 | 346 | 1.1149 | 0.2590 | 1.1149 | 1.0559 |
| No log | 69.6 | 348 | 1.1039 | 0.2590 | 1.1039 | 1.0507 |
| No log | 70.0 | 350 | 1.0739 | 0.2372 | 1.0739 | 1.0363 |
| No log | 70.4 | 352 | 1.0404 | 0.2507 | 1.0404 | 1.0200 |
| No log | 70.8 | 354 | 0.9935 | 0.2881 | 0.9935 | 0.9968 |
| No log | 71.2 | 356 | 0.9676 | 0.2781 | 0.9676 | 0.9837 |
| No log | 71.6 | 358 | 0.9752 | 0.3052 | 0.9752 | 0.9875 |
| No log | 72.0 | 360 | 0.9769 | 0.3052 | 0.9769 | 0.9884 |
| No log | 72.4 | 362 | 1.0081 | 0.3052 | 1.0081 | 1.0041 |
| No log | 72.8 | 364 | 1.0776 | 0.2524 | 1.0776 | 1.0381 |
| No log | 73.2 | 366 | 1.1453 | 0.1943 | 1.1453 | 1.0702 |
| No log | 73.6 | 368 | 1.1644 | 0.1873 | 1.1644 | 1.0791 |
| No log | 74.0 | 370 | 1.1727 | 0.1873 | 1.1727 | 1.0829 |
| No log | 74.4 | 372 | 1.1770 | 0.1873 | 1.1770 | 1.0849 |
| No log | 74.8 | 374 | 1.1415 | 0.1873 | 1.1415 | 1.0684 |
| No log | 75.2 | 376 | 1.1092 | 0.2115 | 1.1092 | 1.0532 |
| No log | 75.6 | 378 | 1.0808 | 0.2898 | 1.0808 | 1.0396 |
| No log | 76.0 | 380 | 1.0451 | 0.2948 | 1.0451 | 1.0223 |
| No log | 76.4 | 382 | 1.0226 | 0.2806 | 1.0226 | 1.0112 |
| No log | 76.8 | 384 | 1.0140 | 0.3029 | 1.0140 | 1.0070 |
| No log | 77.2 | 386 | 1.0344 | 0.3161 | 1.0344 | 1.0171 |
| No log | 77.6 | 388 | 1.0730 | 0.2802 | 1.0730 | 1.0359 |
| No log | 78.0 | 390 | 1.0829 | 0.2687 | 1.0829 | 1.0406 |
| No log | 78.4 | 392 | 1.0565 | 0.2802 | 1.0565 | 1.0279 |
| No log | 78.8 | 394 | 1.0195 | 0.3110 | 1.0195 | 1.0097 |
| No log | 79.2 | 396 | 0.9883 | 0.3082 | 0.9883 | 0.9941 |
| No log | 79.6 | 398 | 0.9735 | 0.3082 | 0.9735 | 0.9867 |
| No log | 80.0 | 400 | 0.9817 | 0.3082 | 0.9817 | 0.9908 |
| No log | 80.4 | 402 | 0.9895 | 0.3082 | 0.9895 | 0.9947 |
| No log | 80.8 | 404 | 1.0064 | 0.3082 | 1.0064 | 1.0032 |
| No log | 81.2 | 406 | 1.0443 | 0.3110 | 1.0443 | 1.0219 |
| No log | 81.6 | 408 | 1.0929 | 0.2601 | 1.0929 | 1.0454 |
| No log | 82.0 | 410 | 1.1361 | 0.2319 | 1.1361 | 1.0659 |
| No log | 82.4 | 412 | 1.1534 | 0.2280 | 1.1534 | 1.0740 |
| No log | 82.8 | 414 | 1.1552 | 0.2280 | 1.1552 | 1.0748 |
| No log | 83.2 | 416 | 1.1361 | 0.2280 | 1.1361 | 1.0659 |
| No log | 83.6 | 418 | 1.1090 | 0.2319 | 1.1090 | 1.0531 |
| No log | 84.0 | 420 | 1.0682 | 0.2643 | 1.0682 | 1.0335 |
| No log | 84.4 | 422 | 1.0314 | 0.2898 | 1.0314 | 1.0156 |
| No log | 84.8 | 424 | 1.0096 | 0.2577 | 1.0096 | 1.0048 |
| No log | 85.2 | 426 | 1.0108 | 0.2806 | 1.0108 | 1.0054 |
| No log | 85.6 | 428 | 1.0292 | 0.2948 | 1.0292 | 1.0145 |
| No log | 86.0 | 430 | 1.0497 | 0.2850 | 1.0497 | 1.0245 |
| No log | 86.4 | 432 | 1.0606 | 0.2545 | 1.0606 | 1.0298 |
| No log | 86.8 | 434 | 1.0645 | 0.2590 | 1.0645 | 1.0317 |
| No log | 87.2 | 436 | 1.0567 | 0.2590 | 1.0567 | 1.0279 |
| No log | 87.6 | 438 | 1.0379 | 0.2590 | 1.0379 | 1.0188 |
| No log | 88.0 | 440 | 1.0118 | 0.2756 | 1.0118 | 1.0059 |
| No log | 88.4 | 442 | 0.9808 | 0.3029 | 0.9808 | 0.9903 |
| No log | 88.8 | 444 | 0.9686 | 0.3029 | 0.9686 | 0.9842 |
| No log | 89.2 | 446 | 0.9716 | 0.3029 | 0.9716 | 0.9857 |
| No log | 89.6 | 448 | 0.9889 | 0.3029 | 0.9889 | 0.9944 |
| No log | 90.0 | 450 | 1.0088 | 0.2977 | 1.0088 | 1.0044 |
| No log | 90.4 | 452 | 1.0223 | 0.2898 | 1.0223 | 1.0111 |
| No log | 90.8 | 454 | 1.0382 | 0.2898 | 1.0382 | 1.0189 |
| No log | 91.2 | 456 | 1.0432 | 0.2590 | 1.0432 | 1.0214 |
| No log | 91.6 | 458 | 1.0437 | 0.2898 | 1.0437 | 1.0216 |
| No log | 92.0 | 460 | 1.0541 | 0.2590 | 1.0541 | 1.0267 |
| No log | 92.4 | 462 | 1.0675 | 0.2590 | 1.0675 | 1.0332 |
| No log | 92.8 | 464 | 1.0758 | 0.2590 | 1.0758 | 1.0372 |
| No log | 93.2 | 466 | 1.0764 | 0.2481 | 1.0764 | 1.0375 |
| No log | 93.6 | 468 | 1.0694 | 0.2590 | 1.0694 | 1.0341 |
| No log | 94.0 | 470 | 1.0591 | 0.2898 | 1.0591 | 1.0291 |
| No log | 94.4 | 472 | 1.0470 | 0.2898 | 1.0470 | 1.0232 |
| No log | 94.8 | 474 | 1.0427 | 0.2898 | 1.0427 | 1.0211 |
| No log | 95.2 | 476 | 1.0401 | 0.2898 | 1.0401 | 1.0199 |
| No log | 95.6 | 478 | 1.0364 | 0.2898 | 1.0364 | 1.0180 |
| No log | 96.0 | 480 | 1.0309 | 0.2898 | 1.0309 | 1.0153 |
| No log | 96.4 | 482 | 1.0296 | 0.2898 | 1.0296 | 1.0147 |
| No log | 96.8 | 484 | 1.0257 | 0.2898 | 1.0257 | 1.0128 |
| No log | 97.2 | 486 | 1.0236 | 0.2898 | 1.0236 | 1.0117 |
| No log | 97.6 | 488 | 1.0233 | 0.2898 | 1.0233 | 1.0116 |
| No log | 98.0 | 490 | 1.0236 | 0.2898 | 1.0236 | 1.0117 |
| No log | 98.4 | 492 | 1.0240 | 0.2898 | 1.0240 | 1.0119 |
| No log | 98.8 | 494 | 1.0237 | 0.2898 | 1.0237 | 1.0118 |
| No log | 99.2 | 496 | 1.0235 | 0.2898 | 1.0235 | 1.0117 |
| No log | 99.6 | 498 | 1.0237 | 0.2898 | 1.0237 | 1.0118 |
| 0.1613 | 100.0 | 500 | 1.0238 | 0.2898 | 1.0238 | 1.0119 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu118
- Datasets 2.21.0
- Tokenizers 0.19.1
|
mamung/345fef6a-8237-4ee7-82b7-ba614660cdd1
|
mamung
| 2025-01-21T12:20:18Z | 14 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:trl-internal-testing/tiny-random-LlamaForCausalLM",
"base_model:adapter:trl-internal-testing/tiny-random-LlamaForCausalLM",
"region:us"
] | null | 2025-01-21T12:18:20Z |
---
library_name: peft
base_model: trl-internal-testing/tiny-random-LlamaForCausalLM
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 345fef6a-8237-4ee7-82b7-ba614660cdd1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: trl-internal-testing/tiny-random-LlamaForCausalLM
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 570f06fa330a02a8_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/570f06fa330a02a8_train_data.json
type:
field_input: starter_code
field_instruction: question_content
field_output: test
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 256
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 32
gradient_checkpointing: true
group_by_length: false
hub_model_id: mamung/345fef6a-8237-4ee7-82b7-ba614660cdd1
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lora_target_modules:
- q_proj
- k_proj
- v_proj
- o_proj
lr_scheduler: cosine
max_grad_norm: 2
max_steps: 100
micro_batch_size: 2
mlflow_experiment_name: /tmp/570f06fa330a02a8_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1.0e-05
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 2048
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: eddysang
wandb_mode: online
wandb_name: 835a7d05-70b3-4946-9a54-04ee779c4f19
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 835a7d05-70b3-4946-9a54-04ee779c4f19
warmup_steps: 20
weight_decay: 0.02
xformers_attention: false
```
</details><br>
# 345fef6a-8237-4ee7-82b7-ba614660cdd1
This model is a fine-tuned version of [trl-internal-testing/tiny-random-LlamaForCausalLM](https://huggingface.co/trl-internal-testing/tiny-random-LlamaForCausalLM) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 10.2883
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 32
- total_train_batch_size: 64
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-05
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 20
- training_steps: 86
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0350 | 1 | 10.3792 |
| 10.3758 | 0.2801 | 8 | 10.3774 |
| 10.3712 | 0.5602 | 16 | 10.3714 |
| 10.3635 | 0.8403 | 24 | 10.3584 |
| 13.5544 | 1.1368 | 32 | 10.3373 |
| 10.2418 | 1.4168 | 40 | 10.3136 |
| 10.2948 | 1.6969 | 48 | 10.2994 |
| 10.258 | 1.9770 | 56 | 10.2935 |
| 10.1202 | 2.2735 | 64 | 10.2904 |
| 10.3454 | 2.5536 | 72 | 10.2889 |
| 10.0236 | 2.8337 | 80 | 10.2883 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
tolgaakar/Mistral-Small-Instruct-2409-FP8-Dynamic
|
tolgaakar
| 2025-01-21T12:20:13Z | 28 | 0 | null |
[
"safetensors",
"mistral",
"base_model:mistralai/Mistral-Small-Instruct-2409",
"base_model:quantized:mistralai/Mistral-Small-Instruct-2409",
"license:other",
"compressed-tensors",
"region:us"
] | null | 2025-01-21T11:40:26Z |
---
license: other
license_name: mrl
license_link: https://mistral.ai/licenses/MRL-0.1.md
base_model:
- mistralai/Mistral-Small-Instruct-2409
---
This model was converted to FP8 format from mistralai/Mistral-Small-Instruct-2409 using the llmcompressor library by vLLM. Refer to the [original model card](https://huggingface.co/mistralai/Mistral-Small-Instruct-2409) for more details on the model.
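A minimal, hedged sketch of serving this FP8 checkpoint with vLLM (which reads compressed-tensors quantization configs); the prompt and sampling settings are illustrative assumptions.
```python
# Hedged sketch: serve this FP8-quantized checkpoint with vLLM.
# The prompt and sampling settings are illustrative.
from vllm import LLM, SamplingParams

llm = LLM(model="tolgaakar/Mistral-Small-Instruct-2409-FP8-Dynamic")
params = SamplingParams(max_tokens=128, temperature=0.7)
outputs = llm.generate(["Write a short haiku about compression."], params)
print(outputs[0].outputs[0].text)
```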
|
chauhoang/e3c40096-d899-4c00-83fe-a263498a511e
|
chauhoang
| 2025-01-21T12:20:02Z | 6 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v0.6",
"base_model:adapter:TinyLlama/TinyLlama-1.1B-Chat-v0.6",
"license:apache-2.0",
"region:us"
] | null | 2025-01-21T12:16:49Z |
---
library_name: peft
license: apache-2.0
base_model: TinyLlama/TinyLlama-1.1B-Chat-v0.6
tags:
- axolotl
- generated_from_trainer
model-index:
- name: e3c40096-d899-4c00-83fe-a263498a511e
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: TinyLlama/TinyLlama-1.1B-Chat-v0.6
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 5b44368ea0d7d142_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/5b44368ea0d7d142_train_data.json
type:
field_input: aspect
field_instruction: document
field_output: summary
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 5
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: chauhoang/e3c40096-d899-4c00-83fe-a263498a511e
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 5
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/5b44368ea0d7d142_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 074be582-c2ba-4209-b6c2-0d56b0df9cdc
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 074be582-c2ba-4209-b6c2-0d56b0df9cdc
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# e3c40096-d899-4c00-83fe-a263498a511e
This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-Chat-v0.6](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v0.6) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2210
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0047 | 1 | 2.4446 |
| 2.5709 | 0.0468 | 10 | 2.4348 |
| 2.3494 | 0.0936 | 20 | 2.3377 |
| 2.227 | 0.1404 | 30 | 2.2550 |
| 2.2299 | 0.1871 | 40 | 2.2244 |
| 2.3316 | 0.2339 | 50 | 2.2210 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
trenden/51365222-6e8d-44e5-b9fb-c99768921c36
|
trenden
| 2025-01-21T12:18:36Z | 6 | 0 |
peft
|
[
"peft",
"safetensors",
"phi3",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:microsoft/Phi-3-mini-128k-instruct",
"base_model:adapter:microsoft/Phi-3-mini-128k-instruct",
"license:mit",
"region:us"
] | null | 2025-01-21T12:05:23Z |
---
library_name: peft
license: mit
base_model: microsoft/Phi-3-mini-128k-instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 51365222-6e8d-44e5-b9fb-c99768921c36
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: microsoft/Phi-3-mini-128k-instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- d23a80b910821333_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/d23a80b910821333_train_data.json
type:
field_input: input
field_instruction: instruction
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: trenden/51365222-6e8d-44e5-b9fb-c99768921c36
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/d23a80b910821333_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: c7400a48-f57f-4a5f-8c57-bf09a3ce88d3
wandb_project: Birthday-SN56-3-Gradients-On-Demand
wandb_run: your_name
wandb_runid: c7400a48-f57f-4a5f-8c57-bf09a3ce88d3
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 51365222-6e8d-44e5-b9fb-c99768921c36
This model is a fine-tuned version of [microsoft/Phi-3-mini-128k-instruct](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4281
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 10.658 | 0.0001 | 1 | 2.5837 |
| 10.4572 | 0.0002 | 3 | 2.5780 |
| 9.8738 | 0.0004 | 6 | 2.5291 |
| 8.7801 | 0.0006 | 9 | 2.4281 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
MayBashendy/ArabicNewSplits7_usingWellWrittenEssays_FineTuningAraBERT_run2_AugV5_k1_task7_organization
|
MayBashendy
| 2025-01-21T12:16:56Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:aubmindlab/bert-base-arabertv02",
"base_model:finetune:aubmindlab/bert-base-arabertv02",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-01-21T12:14:27Z |
---
library_name: transformers
base_model: aubmindlab/bert-base-arabertv02
tags:
- generated_from_trainer
model-index:
- name: ArabicNewSplits7_usingWellWrittenEssays_FineTuningAraBERT_run2_AugV5_k1_task7_organization
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ArabicNewSplits7_usingWellWrittenEssays_FineTuningAraBERT_run2_AugV5_k1_task7_organization
This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7404
- Qwk: 0.3238
- Mse: 0.7404
- Rmse: 0.8605
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:-------:|:----:|:---------------:|:-------:|:------:|:------:|
| No log | 0.6667 | 2 | 2.6632 | -0.1213 | 2.6632 | 1.6319 |
| No log | 1.3333 | 4 | 1.3882 | 0.1265 | 1.3882 | 1.1782 |
| No log | 2.0 | 6 | 1.1319 | -0.0970 | 1.1319 | 1.0639 |
| No log | 2.6667 | 8 | 1.0927 | -0.0810 | 1.0927 | 1.0453 |
| No log | 3.3333 | 10 | 0.9247 | 0.0058 | 0.9247 | 0.9616 |
| No log | 4.0 | 12 | 0.8601 | 0.0236 | 0.8601 | 0.9274 |
| No log | 4.6667 | 14 | 0.8247 | 0.0393 | 0.8247 | 0.9081 |
| No log | 5.3333 | 16 | 0.7599 | 0.0 | 0.7599 | 0.8717 |
| No log | 6.0 | 18 | 0.7561 | 0.0481 | 0.7561 | 0.8696 |
| No log | 6.6667 | 20 | 0.7398 | 0.0937 | 0.7398 | 0.8601 |
| No log | 7.3333 | 22 | 0.6974 | 0.0889 | 0.6974 | 0.8351 |
| No log | 8.0 | 24 | 0.7067 | 0.0840 | 0.7067 | 0.8407 |
| No log | 8.6667 | 26 | 0.7288 | 0.1139 | 0.7288 | 0.8537 |
| No log | 9.3333 | 28 | 0.7213 | 0.0327 | 0.7213 | 0.8493 |
| No log | 10.0 | 30 | 0.6967 | 0.0393 | 0.6967 | 0.8347 |
| No log | 10.6667 | 32 | 0.6970 | 0.0846 | 0.6970 | 0.8349 |
| No log | 11.3333 | 34 | 0.6807 | 0.1327 | 0.6807 | 0.8251 |
| No log | 12.0 | 36 | 0.6864 | 0.2783 | 0.6864 | 0.8285 |
| No log | 12.6667 | 38 | 0.7758 | 0.1341 | 0.7758 | 0.8808 |
| No log | 13.3333 | 40 | 1.0549 | 0.1650 | 1.0549 | 1.0271 |
| No log | 14.0 | 42 | 1.0926 | 0.1753 | 1.0926 | 1.0453 |
| No log | 14.6667 | 44 | 0.9507 | 0.1323 | 0.9507 | 0.9750 |
| No log | 15.3333 | 46 | 0.9933 | 0.2153 | 0.9933 | 0.9966 |
| No log | 16.0 | 48 | 1.0836 | 0.1926 | 1.0836 | 1.0410 |
| No log | 16.6667 | 50 | 1.1782 | 0.1077 | 1.1782 | 1.0854 |
| No log | 17.3333 | 52 | 1.0397 | 0.1339 | 1.0397 | 1.0196 |
| No log | 18.0 | 54 | 0.8137 | 0.2715 | 0.8137 | 0.9021 |
| No log | 18.6667 | 56 | 0.7808 | 0.3541 | 0.7808 | 0.8836 |
| No log | 19.3333 | 58 | 0.7654 | 0.3622 | 0.7654 | 0.8749 |
| No log | 20.0 | 60 | 0.9714 | 0.2239 | 0.9714 | 0.9856 |
| No log | 20.6667 | 62 | 1.0619 | 0.2507 | 1.0619 | 1.0305 |
| No log | 21.3333 | 64 | 1.0113 | 0.1856 | 1.0113 | 1.0056 |
| No log | 22.0 | 66 | 0.8359 | 0.2662 | 0.8359 | 0.9143 |
| No log | 22.6667 | 68 | 0.7538 | 0.2414 | 0.7538 | 0.8682 |
| No log | 23.3333 | 70 | 0.7411 | 0.2237 | 0.7411 | 0.8609 |
| No log | 24.0 | 72 | 0.8121 | 0.2096 | 0.8121 | 0.9012 |
| No log | 24.6667 | 74 | 0.9212 | 0.2253 | 0.9212 | 0.9598 |
| No log | 25.3333 | 76 | 0.9368 | 0.3586 | 0.9368 | 0.9679 |
| No log | 26.0 | 78 | 0.8852 | 0.3344 | 0.8852 | 0.9409 |
| No log | 26.6667 | 80 | 0.8121 | 0.3261 | 0.8121 | 0.9012 |
| No log | 27.3333 | 82 | 0.8265 | 0.3918 | 0.8265 | 0.9091 |
| No log | 28.0 | 84 | 0.8877 | 0.3234 | 0.8877 | 0.9422 |
| No log | 28.6667 | 86 | 0.8820 | 0.3099 | 0.8820 | 0.9392 |
| No log | 29.3333 | 88 | 0.8031 | 0.3196 | 0.8031 | 0.8962 |
| No log | 30.0 | 90 | 0.8070 | 0.3127 | 0.8070 | 0.8983 |
| No log | 30.6667 | 92 | 0.9023 | 0.2967 | 0.9023 | 0.9499 |
| No log | 31.3333 | 94 | 0.9381 | 0.2692 | 0.9381 | 0.9686 |
| No log | 32.0 | 96 | 0.8218 | 0.2967 | 0.8218 | 0.9066 |
| No log | 32.6667 | 98 | 0.6896 | 0.2099 | 0.6896 | 0.8304 |
| No log | 33.3333 | 100 | 0.6649 | 0.2317 | 0.6649 | 0.8154 |
| No log | 34.0 | 102 | 0.6654 | 0.2476 | 0.6654 | 0.8157 |
| No log | 34.6667 | 104 | 0.7198 | 0.3840 | 0.7198 | 0.8484 |
| No log | 35.3333 | 106 | 0.7595 | 0.4020 | 0.7595 | 0.8715 |
| No log | 36.0 | 108 | 0.7477 | 0.3498 | 0.7477 | 0.8647 |
| No log | 36.6667 | 110 | 0.7214 | 0.3572 | 0.7214 | 0.8494 |
| No log | 37.3333 | 112 | 0.7180 | 0.2379 | 0.7180 | 0.8474 |
| No log | 38.0 | 114 | 0.7629 | 0.1972 | 0.7629 | 0.8735 |
| No log | 38.6667 | 116 | 0.7824 | 0.1972 | 0.7824 | 0.8845 |
| No log | 39.3333 | 118 | 0.7750 | 0.1972 | 0.7750 | 0.8804 |
| No log | 40.0 | 120 | 0.7531 | 0.2652 | 0.7531 | 0.8678 |
| No log | 40.6667 | 122 | 0.7773 | 0.2171 | 0.7773 | 0.8817 |
| No log | 41.3333 | 124 | 0.8507 | 0.2692 | 0.8507 | 0.9223 |
| No log | 42.0 | 126 | 0.9124 | 0.3234 | 0.9124 | 0.9552 |
| No log | 42.6667 | 128 | 0.8637 | 0.2967 | 0.8637 | 0.9293 |
| No log | 43.3333 | 130 | 0.8389 | 0.2967 | 0.8389 | 0.9159 |
| No log | 44.0 | 132 | 0.7617 | 0.2498 | 0.7617 | 0.8727 |
| No log | 44.6667 | 134 | 0.7038 | 0.3341 | 0.7038 | 0.8390 |
| No log | 45.3333 | 136 | 0.6932 | 0.2981 | 0.6932 | 0.8326 |
| No log | 46.0 | 138 | 0.6988 | 0.3894 | 0.6988 | 0.8359 |
| No log | 46.6667 | 140 | 0.7067 | 0.3894 | 0.7067 | 0.8406 |
| No log | 47.3333 | 142 | 0.7387 | 0.3399 | 0.7387 | 0.8595 |
| No log | 48.0 | 144 | 0.7523 | 0.3590 | 0.7523 | 0.8673 |
| No log | 48.6667 | 146 | 0.7570 | 0.3590 | 0.7570 | 0.8701 |
| No log | 49.3333 | 148 | 0.8021 | 0.3940 | 0.8021 | 0.8956 |
| No log | 50.0 | 150 | 0.7996 | 0.4329 | 0.7996 | 0.8942 |
| No log | 50.6667 | 152 | 0.7466 | 0.3590 | 0.7466 | 0.8640 |
| No log | 51.3333 | 154 | 0.6695 | 0.3788 | 0.6695 | 0.8182 |
| No log | 52.0 | 156 | 0.6423 | 0.3070 | 0.6423 | 0.8015 |
| No log | 52.6667 | 158 | 0.6455 | 0.3336 | 0.6455 | 0.8034 |
| No log | 53.3333 | 160 | 0.6411 | 0.3070 | 0.6411 | 0.8007 |
| No log | 54.0 | 162 | 0.6462 | 0.3599 | 0.6462 | 0.8039 |
| No log | 54.6667 | 164 | 0.6690 | 0.3524 | 0.6690 | 0.8179 |
| No log | 55.3333 | 166 | 0.6858 | 0.3267 | 0.6858 | 0.8281 |
| No log | 56.0 | 168 | 0.7004 | 0.3545 | 0.7004 | 0.8369 |
| No log | 56.6667 | 170 | 0.7119 | 0.3545 | 0.7119 | 0.8437 |
| No log | 57.3333 | 172 | 0.7268 | 0.3060 | 0.7268 | 0.8525 |
| No log | 58.0 | 174 | 0.7169 | 0.3127 | 0.7169 | 0.8467 |
| No log | 58.6667 | 176 | 0.6937 | 0.3498 | 0.6937 | 0.8329 |
| No log | 59.3333 | 178 | 0.6644 | 0.3524 | 0.6644 | 0.8151 |
| No log | 60.0 | 180 | 0.6653 | 0.3170 | 0.6653 | 0.8157 |
| No log | 60.6667 | 182 | 0.6667 | 0.3481 | 0.6667 | 0.8165 |
| No log | 61.3333 | 184 | 0.6775 | 0.3860 | 0.6775 | 0.8231 |
| No log | 62.0 | 186 | 0.6774 | 0.3806 | 0.6774 | 0.8230 |
| No log | 62.6667 | 188 | 0.6643 | 0.2652 | 0.6643 | 0.8150 |
| No log | 63.3333 | 190 | 0.6663 | 0.3280 | 0.6663 | 0.8163 |
| No log | 64.0 | 192 | 0.6705 | 0.3280 | 0.6705 | 0.8188 |
| No log | 64.6667 | 194 | 0.6854 | 0.2973 | 0.6854 | 0.8279 |
| No log | 65.3333 | 196 | 0.6973 | 0.3524 | 0.6973 | 0.8350 |
| No log | 66.0 | 198 | 0.7181 | 0.3840 | 0.7181 | 0.8474 |
| No log | 66.6667 | 200 | 0.7369 | 0.3425 | 0.7369 | 0.8584 |
| No log | 67.3333 | 202 | 0.7561 | 0.3060 | 0.7561 | 0.8695 |
| No log | 68.0 | 204 | 0.7591 | 0.2754 | 0.7591 | 0.8712 |
| No log | 68.6667 | 206 | 0.7362 | 0.3060 | 0.7362 | 0.8580 |
| No log | 69.3333 | 208 | 0.7107 | 0.2558 | 0.7107 | 0.8430 |
| No log | 70.0 | 210 | 0.7025 | 0.3267 | 0.7025 | 0.8382 |
| No log | 70.6667 | 212 | 0.6982 | 0.3267 | 0.6982 | 0.8356 |
| No log | 71.3333 | 214 | 0.7089 | 0.3267 | 0.7089 | 0.8420 |
| No log | 72.0 | 216 | 0.7317 | 0.3545 | 0.7317 | 0.8554 |
| No log | 72.6667 | 218 | 0.7634 | 0.2754 | 0.7634 | 0.8737 |
| No log | 73.3333 | 220 | 0.8069 | 0.3169 | 0.8069 | 0.8983 |
| No log | 74.0 | 222 | 0.8195 | 0.3675 | 0.8195 | 0.9053 |
| No log | 74.6667 | 224 | 0.8127 | 0.3564 | 0.8127 | 0.9015 |
| No log | 75.3333 | 226 | 0.8149 | 0.3564 | 0.8149 | 0.9027 |
| No log | 76.0 | 228 | 0.7888 | 0.3032 | 0.7888 | 0.8881 |
| No log | 76.6667 | 230 | 0.7572 | 0.3518 | 0.7572 | 0.8702 |
| No log | 77.3333 | 232 | 0.7508 | 0.3518 | 0.7508 | 0.8665 |
| No log | 78.0 | 234 | 0.7350 | 0.3312 | 0.7350 | 0.8573 |
| No log | 78.6667 | 236 | 0.7308 | 0.3312 | 0.7308 | 0.8549 |
| No log | 79.3333 | 238 | 0.7419 | 0.3594 | 0.7419 | 0.8613 |
| No log | 80.0 | 240 | 0.7610 | 0.3518 | 0.7610 | 0.8724 |
| No log | 80.6667 | 242 | 0.7882 | 0.3302 | 0.7882 | 0.8878 |
| No log | 81.3333 | 244 | 0.8140 | 0.3564 | 0.8140 | 0.9022 |
| No log | 82.0 | 246 | 0.8316 | 0.3819 | 0.8316 | 0.9119 |
| No log | 82.6667 | 248 | 0.8344 | 0.3819 | 0.8344 | 0.9135 |
| No log | 83.3333 | 250 | 0.8211 | 0.3819 | 0.8211 | 0.9062 |
| No log | 84.0 | 252 | 0.7963 | 0.3564 | 0.7963 | 0.8924 |
| No log | 84.6667 | 254 | 0.7777 | 0.3302 | 0.7777 | 0.8819 |
| No log | 85.3333 | 256 | 0.7631 | 0.3444 | 0.7631 | 0.8736 |
| No log | 86.0 | 258 | 0.7530 | 0.3444 | 0.7530 | 0.8678 |
| No log | 86.6667 | 260 | 0.7475 | 0.3444 | 0.7475 | 0.8646 |
| No log | 87.3333 | 262 | 0.7487 | 0.3444 | 0.7487 | 0.8652 |
| No log | 88.0 | 264 | 0.7545 | 0.3032 | 0.7545 | 0.8686 |
| No log | 88.6667 | 266 | 0.7650 | 0.3032 | 0.7650 | 0.8747 |
| No log | 89.3333 | 268 | 0.7701 | 0.3032 | 0.7701 | 0.8776 |
| No log | 90.0 | 270 | 0.7715 | 0.3032 | 0.7715 | 0.8783 |
| No log | 90.6667 | 272 | 0.7700 | 0.3032 | 0.7700 | 0.8775 |
| No log | 91.3333 | 274 | 0.7710 | 0.3032 | 0.7710 | 0.8781 |
| No log | 92.0 | 276 | 0.7701 | 0.3032 | 0.7701 | 0.8775 |
| No log | 92.6667 | 278 | 0.7653 | 0.3032 | 0.7653 | 0.8748 |
| No log | 93.3333 | 280 | 0.7617 | 0.3032 | 0.7617 | 0.8727 |
| No log | 94.0 | 282 | 0.7548 | 0.3444 | 0.7548 | 0.8688 |
| No log | 94.6667 | 284 | 0.7498 | 0.3167 | 0.7498 | 0.8659 |
| No log | 95.3333 | 286 | 0.7451 | 0.3238 | 0.7451 | 0.8632 |
| No log | 96.0 | 288 | 0.7421 | 0.3238 | 0.7421 | 0.8615 |
| No log | 96.6667 | 290 | 0.7399 | 0.3238 | 0.7399 | 0.8602 |
| No log | 97.3333 | 292 | 0.7388 | 0.3238 | 0.7388 | 0.8595 |
| No log | 98.0 | 294 | 0.7384 | 0.3238 | 0.7384 | 0.8593 |
| No log | 98.6667 | 296 | 0.7393 | 0.3238 | 0.7393 | 0.8598 |
| No log | 99.3333 | 298 | 0.7400 | 0.3238 | 0.7400 | 0.8603 |
| No log | 100.0 | 300 | 0.7404 | 0.3238 | 0.7404 | 0.8605 |
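
The QWK, MSE and RMSE columns above are quadratic weighted kappa, mean squared error and root mean squared error on the validation split; in this run the validation loss is the MSE itself, and RMSE is simply its square root. A minimal sketch of how these three metrics can be computed with scikit-learn (the label arrays below are illustrative, not taken from this run):

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score, mean_squared_error

# Illustrative integer labels on an ordinal scale; the real values come from the eval split.
y_true = np.array([0, 1, 2, 2, 3, 1])
y_pred = np.array([0, 2, 2, 1, 3, 1])

qwk = cohen_kappa_score(y_true, y_pred, weights="quadratic")  # quadratic weighted kappa
mse = mean_squared_error(y_true, y_pred)                      # mean squared error
rmse = np.sqrt(mse)                                           # root mean squared error

print(f"QWK={qwk:.4f}  MSE={mse:.4f}  RMSE={rmse:.4f}")
```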
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu118
- Datasets 2.21.0
- Tokenizers 0.19.1
|
nhung02/909af27c-2380-46a2-abbd-cd15fe4d4de0
|
nhung02
| 2025-01-21T12:16:53Z | 8 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:heegyu/WizardVicuna-open-llama-3b-v2",
"base_model:adapter:heegyu/WizardVicuna-open-llama-3b-v2",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-21T11:51:36Z |
---
library_name: peft
license: apache-2.0
base_model: heegyu/WizardVicuna-open-llama-3b-v2
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 909af27c-2380-46a2-abbd-cd15fe4d4de0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: heegyu/WizardVicuna-open-llama-3b-v2
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 0eba3e80d15355a6_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/0eba3e80d15355a6_train_data.json
type:
field_input: input
field_instruction: instruction
field_output: accepted
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: nhung02/909af27c-2380-46a2-abbd-cd15fe4d4de0
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/0eba3e80d15355a6_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 84f8a085-50df-4e7c-9e21-f8d55ac51824
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 84f8a085-50df-4e7c-9e21-f8d55ac51824
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 909af27c-2380-46a2-abbd-cd15fe4d4de0
This model is a fine-tuned version of [heegyu/WizardVicuna-open-llama-3b-v2](https://huggingface.co/heegyu/WizardVicuna-open-llama-3b-v2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7229
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.6899 | 0.0313 | 200 | 0.7229 |
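
Since the artifact published here is a LoRA adapter rather than a merged model, inference requires loading the base model first and then attaching the adapter. A minimal sketch with `transformers` and `peft`; the prompt and generation settings are illustrative:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "heegyu/WizardVicuna-open-llama-3b-v2"
adapter_id = "nhung02/909af27c-2380-46a2-abbd-cd15fe4d4de0"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")  # requires accelerate
model = PeftModel.from_pretrained(base, adapter_id)  # attach the LoRA weights

inputs = tokenizer("Explain what a LoRA adapter is.", return_tensors="pt").to(base.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```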
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
ivangrapher/744e6875-2fc5-49e8-9a0e-2975ca5870ac
|
ivangrapher
| 2025-01-21T12:15:54Z | 6 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/llama-2-7b-chat",
"base_model:adapter:unsloth/llama-2-7b-chat",
"license:apache-2.0",
"region:us"
] | null | 2025-01-21T08:34:31Z |
---
library_name: peft
license: apache-2.0
base_model: unsloth/llama-2-7b-chat
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 744e6875-2fc5-49e8-9a0e-2975ca5870ac
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/llama-2-7b-chat
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 124bc05ddbf5ee81_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/124bc05ddbf5ee81_train_data.json
type:
field_instruction: docstring
field_output: summary
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: null
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: false
hub_model_id: ivangrapher/744e6875-2fc5-49e8-9a0e-2975ca5870ac
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 75GiB
max_steps: 30
micro_batch_size: 2
mlflow_experiment_name: /tmp/124bc05ddbf5ee81_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 15
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 5fe9995d-0a95-46fa-b89c-25f97cbb6eb6
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 5fe9995d-0a95-46fa-b89c-25f97cbb6eb6
warmup_steps: 15
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 744e6875-2fc5-49e8-9a0e-2975ca5870ac
This model is a fine-tuned version of [unsloth/llama-2-7b-chat](https://huggingface.co/unsloth/llama-2-7b-chat) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 15
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0000 | 1 | nan |
| 0.0 | 0.0001 | 5 | nan |
| 0.0 | 0.0002 | 10 | nan |
| 0.0 | 0.0003 | 15 | nan |
| 0.0 | 0.0003 | 20 | nan |
| 0.0 | 0.0004 | 25 | nan |
| 0.0 | 0.0005 | 30 | nan |
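
Every validation loss above is `nan` and the logged training loss collapsed to 0.0, so the saved adapter is unlikely to be useful as-is. One generic way to catch this early is a small `TrainerCallback` that halts the run as soon as a logged loss stops being finite; this is a sketch added here for illustration, not part of the original training setup:

```python
import math
from transformers import TrainerCallback

class StopOnNanLoss(TrainerCallback):
    """Stop training as soon as a non-finite train or eval loss shows up in the logs."""

    def on_log(self, args, state, control, logs=None, **kwargs):
        for key in ("loss", "eval_loss"):
            value = (logs or {}).get(key)
            if value is not None and not math.isfinite(value):
                print(f"Non-finite {key}={value} at step {state.global_step}; stopping.")
                control.should_training_stop = True
        return control

# Usage (assuming a Trainer is already configured):
# trainer = Trainer(..., callbacks=[StopOnNanLoss()])
```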
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
kokovova/28cfc357-be2e-4eb7-87e6-ede5f44cf913
|
kokovova
| 2025-01-21T12:15:28Z | 5 | 0 |
peft
|
[
"peft",
"safetensors",
"gpt_neox",
"axolotl",
"generated_from_trainer",
"base_model:databricks/dolly-v2-3b",
"base_model:adapter:databricks/dolly-v2-3b",
"license:mit",
"region:us"
] | null | 2025-01-21T11:32:50Z |
---
library_name: peft
license: mit
base_model: databricks/dolly-v2-3b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 28cfc357-be2e-4eb7-87e6-ede5f44cf913
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: databricks/dolly-v2-3b
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- d4ad1f4ec6a1fae0_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/d4ad1f4ec6a1fae0_train_data.json
type:
field_instruction: Patient
field_output: Description
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: 1
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: kokovova/28cfc357-be2e-4eb7-87e6-ede5f44cf913
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 79GiB
max_steps: 30
micro_batch_size: 4
mlflow_experiment_name: /tmp/d4ad1f4ec6a1fae0_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 6874924e-0eae-4909-b19a-0c7087adfd79
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 6874924e-0eae-4909-b19a-0c7087adfd79
warmup_steps: 5
weight_decay: 0.001
xformers_attention: true
```
</details><br>
# 28cfc357-be2e-4eb7-87e6-ede5f44cf913
This model is a fine-tuned version of [databricks/dolly-v2-3b](https://huggingface.co/databricks/dolly-v2-3b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0565
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0001 | 1 | 3.7270 |
| 13.4442 | 0.0003 | 5 | 3.4392 |
| 12.3538 | 0.0007 | 10 | 3.1945 |
| 11.8357 | 0.0010 | 15 | 3.1070 |
| 11.9725 | 0.0013 | 20 | 3.0723 |
| 12.3078 | 0.0016 | 25 | 3.0603 |
| 12.0133 | 0.0020 | 30 | 3.0565 |
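
The config's `optim_args` request AdamW with betas=(0.9, 0.95) and eps=1e-5 rather than the library defaults reported in the auto-generated line above. A minimal sketch of that optimizer in plain PyTorch; the model here is a stand-in, since the real run optimizes the LoRA-wrapped dolly-v2-3b:

```python
import torch

# Hypothetical stand-in; in the real run this is the LoRA-wrapped dolly-v2-3b.
model = torch.nn.Linear(16, 16)

optimizer = torch.optim.AdamW(
    model.parameters(),
    lr=2e-4,             # learning_rate from the config
    betas=(0.9, 0.95),   # adam_beta1 / adam_beta2 from optim_args
    eps=1e-5,            # adam_epsilon from optim_args
    weight_decay=0.001,  # weight_decay from the config
)
```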
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
ClarenceDan/7a6c4aff-60cf-4296-906d-a04240417885
|
ClarenceDan
| 2025-01-21T12:14:35Z | 8 | 0 |
peft
|
[
"peft",
"safetensors",
"bloom",
"axolotl",
"generated_from_trainer",
"base_model:bigscience/bloom-560m",
"base_model:adapter:bigscience/bloom-560m",
"license:bigscience-bloom-rail-1.0",
"region:us"
] | null | 2025-01-21T12:14:06Z |
---
library_name: peft
license: bigscience-bloom-rail-1.0
base_model: bigscience/bloom-560m
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 7a6c4aff-60cf-4296-906d-a04240417885
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: bigscience/bloom-560m
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 5c4eef0d51e921ea_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/5c4eef0d51e921ea_train_data.json
type:
field_input: world_literals
field_instruction: logical_form_pretty
field_output: question
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: ClarenceDan/7a6c4aff-60cf-4296-906d-a04240417885
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/5c4eef0d51e921ea_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 847dcfd1-dbaf-4b00-af61-47e0ea3d66d1
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 847dcfd1-dbaf-4b00-af61-47e0ea3d66d1
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 7a6c4aff-60cf-4296-906d-a04240417885
This model is a fine-tuned version of [bigscience/bloom-560m](https://huggingface.co/bigscience/bloom-560m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1212
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 14.3622 | 0.0033 | 1 | 3.6128 |
| 14.3937 | 0.0098 | 3 | 3.5999 |
| 14.6205 | 0.0197 | 6 | 3.4420 |
| 12.5179 | 0.0295 | 9 | 3.1212 |
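
The config above flattens each JSON record into a single prompt via `format: '{instruction} {input}'`, with `logical_form_pretty` as the instruction, `world_literals` as the input and `question` as the target. A rough sketch of how such templating might render a record; the sample record itself is invented for illustration:

```python
# Field names follow the axolotl config; the record contents are invented.
record = {
    "logical_form_pretty": "count(intersection(cube, red))",
    "world_literals": "{'cube': ['a', 'b'], 'red': ['b', 'c']}",
    "question": "How many red cubes are there?",
}

prompt_format = "{instruction} {input}"   # format
no_input_format = "{instruction}"         # no_input_format

instruction = record["logical_form_pretty"]  # field_instruction
inp = record.get("world_literals")           # field_input
target = record["question"]                  # field_output

prompt = (prompt_format.format(instruction=instruction, input=inp)
          if inp else no_input_format.format(instruction=instruction))
print(prompt, "->", target)
```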
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
mrHungddddh/fbb70a8d-5819-43ab-ad46-15fd560412cd
|
mrHungddddh
| 2025-01-21T12:13:42Z | 6 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Llama-3.1-Storm-8B",
"base_model:adapter:unsloth/Llama-3.1-Storm-8B",
"license:llama3.1",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-21T11:41:48Z |
---
library_name: peft
license: llama3.1
base_model: unsloth/Llama-3.1-Storm-8B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: fbb70a8d-5819-43ab-ad46-15fd560412cd
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Llama-3.1-Storm-8B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- dc5c201d257f4800_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/dc5c201d257f4800_train_data.json
type:
field_instruction: instruction
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: mrHungddddh/fbb70a8d-5819-43ab-ad46-15fd560412cd
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/dc5c201d257f4800_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 1fb620f4-588e-4556-8dd0-8ed7c42fd6cc
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 1fb620f4-588e-4556-8dd0-8ed7c42fd6cc
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# fbb70a8d-5819-43ab-ad46-15fd560412cd
This model is a fine-tuned version of [unsloth/Llama-3.1-Storm-8B](https://huggingface.co/unsloth/Llama-3.1-Storm-8B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6449
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.39 | 0.0681 | 200 | 0.6449 |
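
With `load_in_8bit: true` the base model is quantized to 8-bit via bitsandbytes before the LoRA layers are attached. A minimal sketch of that loading step with `transformers`; adapter attachment and training are omitted:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

base_id = "unsloth/Llama-3.1-Storm-8B"

bnb_config = BitsAndBytesConfig(load_in_8bit=True)  # 8-bit weights via bitsandbytes
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(
    base_id,
    quantization_config=bnb_config,
    device_map="auto",  # requires accelerate
)
```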
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
vertings6/46326f04-0a71-4c3b-a8ef-4c0ff1144789
|
vertings6
| 2025-01-21T12:13:16Z | 6 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:NousResearch/Yarn-Llama-2-7b-64k",
"base_model:adapter:NousResearch/Yarn-Llama-2-7b-64k",
"region:us"
] | null | 2025-01-21T12:12:14Z |
---
library_name: peft
base_model: NousResearch/Yarn-Llama-2-7b-64k
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 46326f04-0a71-4c3b-a8ef-4c0ff1144789
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/Yarn-Llama-2-7b-64k
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 6f632f47d3ee06ff_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/6f632f47d3ee06ff_train_data.json
type:
field_input: choices
field_instruction: question
field_output: answer
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: 1
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: vertings6/46326f04-0a71-4c3b-a8ef-4c0ff1144789
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 79GiB
max_steps: 30
micro_batch_size: 4
mlflow_experiment_name: /tmp/6f632f47d3ee06ff_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: a2153675-d455-4bd4-a862-69a06baed90a
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: a2153675-d455-4bd4-a862-69a06baed90a
warmup_steps: 5
weight_decay: 0.001
xformers_attention: true
```
</details><br>
# 46326f04-0a71-4c3b-a8ef-4c0ff1144789
This model is a fine-tuned version of [NousResearch/Yarn-Llama-2-7b-64k](https://huggingface.co/NousResearch/Yarn-Llama-2-7b-64k) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2940
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 7
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.1538 | 1 | 1.5155 |
| 6.176 | 0.7692 | 5 | 1.2940 |
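
The config enables early stopping with `early_stopping_patience: 1` and evaluation every 5 steps. A hedged sketch of how that might be wired up with the Hugging Face `Trainer`; only the arguments relevant to evaluation, checkpointing and early stopping are shown:

```python
from transformers import EarlyStoppingCallback, Trainer, TrainingArguments

args = TrainingArguments(
    output_dir="miner_id_24",
    eval_strategy="steps",
    eval_steps=5,                       # eval_steps from the config
    save_steps=10,                      # save_steps from the config
    load_best_model_at_end=True,        # required by EarlyStoppingCallback
    metric_for_best_model="eval_loss",
    greater_is_better=False,
    max_steps=30,                       # max_steps from the config
)

# trainer = Trainer(model=model, args=args, train_dataset=train_ds, eval_dataset=eval_ds,
#                   callbacks=[EarlyStoppingCallback(early_stopping_patience=1)])
```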
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
nhung01/eaf0f1fd-8467-470a-aa52-e26bbce9c105
|
nhung01
| 2025-01-21T12:11:02Z | 6 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:codellama/CodeLlama-7b-Instruct-hf",
"base_model:adapter:codellama/CodeLlama-7b-Instruct-hf",
"license:llama2",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-21T11:54:14Z |
---
library_name: peft
license: llama2
base_model: codellama/CodeLlama-7b-Instruct-hf
tags:
- axolotl
- generated_from_trainer
model-index:
- name: eaf0f1fd-8467-470a-aa52-e26bbce9c105
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: codellama/CodeLlama-7b-Instruct-hf
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- e61b15027cdb8f0f_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/e61b15027cdb8f0f_train_data.json
type:
field_instruction: text_description
field_output: text
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: nhung01/eaf0f1fd-8467-470a-aa52-e26bbce9c105
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/e61b15027cdb8f0f_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 52dcb611-f58d-420b-a954-552a3249dfec
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 52dcb611-f58d-420b-a954-552a3249dfec
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# eaf0f1fd-8467-470a-aa52-e26bbce9c105
This model is a fine-tuned version of [codellama/CodeLlama-7b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-7b-Instruct-hf) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1119
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.9689 | 0.0800 | 200 | 2.1119 |
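
The run uses a cosine learning-rate schedule with 5 warmup steps over 200 training steps. A small sketch of the equivalent schedule built directly with `transformers` utilities; the optimizer and its parameters are stand-ins:

```python
import torch
from transformers import get_cosine_schedule_with_warmup

# Stand-in parameter; the real run optimizes the LoRA weights of CodeLlama-7b-Instruct.
params = [torch.nn.Parameter(torch.zeros(1))]
optimizer = torch.optim.AdamW(params, lr=5e-5)  # learning_rate from the config

scheduler = get_cosine_schedule_with_warmup(
    optimizer,
    num_warmup_steps=5,      # warmup_steps
    num_training_steps=200,  # max_steps
)

lrs = []
for _ in range(200):
    optimizer.step()
    scheduler.step()
    lrs.append(scheduler.get_last_lr()[0])
print(f"peak lr={max(lrs):.2e}, final lr={lrs[-1]:.2e}")
```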
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
thaffggg/91ed7274-5dc1-4bef-99e1-48c9fd775ff6
|
thaffggg
| 2025-01-21T12:09:40Z | 6 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:heegyu/WizardVicuna-open-llama-3b-v2",
"base_model:adapter:heegyu/WizardVicuna-open-llama-3b-v2",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-21T11:51:42Z |
---
library_name: peft
license: apache-2.0
base_model: heegyu/WizardVicuna-open-llama-3b-v2
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 91ed7274-5dc1-4bef-99e1-48c9fd775ff6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: heegyu/WizardVicuna-open-llama-3b-v2
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 0eba3e80d15355a6_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/0eba3e80d15355a6_train_data.json
type:
field_input: input
field_instruction: instruction
field_output: accepted
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: thaffggg/91ed7274-5dc1-4bef-99e1-48c9fd775ff6
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/0eba3e80d15355a6_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 84f8a085-50df-4e7c-9e21-f8d55ac51824
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 84f8a085-50df-4e7c-9e21-f8d55ac51824
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 91ed7274-5dc1-4bef-99e1-48c9fd775ff6
This model is a fine-tuned version of [heegyu/WizardVicuna-open-llama-3b-v2](https://huggingface.co/heegyu/WizardVicuna-open-llama-3b-v2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7231
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.6918 | 0.0313 | 200 | 0.7231 |
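
The adapter hyperparameters in the config (`lora_r: 8`, `lora_alpha: 16`, `lora_dropout: 0.05`, `lora_target_linear: true`) translate roughly into a PEFT `LoraConfig`. The `target_modules` list below is an assumption: `lora_target_linear: true` asks axolotl to target every linear projection, which it resolves automatically at run time.

```python
from peft import LoraConfig

lora_config = LoraConfig(
    r=8,              # lora_r
    lora_alpha=16,    # lora_alpha
    lora_dropout=0.05,  # lora_dropout
    bias="none",
    task_type="CAUSAL_LM",
    # Assumed projection names for a LLaMA-style model; the real run lets axolotl
    # pick all linear layers via lora_target_linear: true.
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)
```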
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
philip-hightech/9312526f-695d-4bea-b99b-212790fd97bb
|
philip-hightech
| 2025-01-21T12:09:21Z | 6 | 0 |
peft
|
[
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:Intel/neural-chat-7b-v3-3",
"base_model:adapter:Intel/neural-chat-7b-v3-3",
"license:apache-2.0",
"region:us"
] | null | 2025-01-21T12:07:17Z |
---
library_name: peft
license: apache-2.0
base_model: Intel/neural-chat-7b-v3-3
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 9312526f-695d-4bea-b99b-212790fd97bb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Intel/neural-chat-7b-v3-3
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- b5e06bf0e602bd38_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/b5e06bf0e602bd38_train_data.json
type:
field_instruction: section
field_output: text
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: philip-hightech/9312526f-695d-4bea-b99b-212790fd97bb
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/b5e06bf0e602bd38_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 5312563a-16b4-452e-84f7-611f95b514ff
wandb_project: Mine-SN56-21-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 5312563a-16b4-452e-84f7-611f95b514ff
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 9312526f-695d-4bea-b99b-212790fd97bb
This model is a fine-tuned version of [Intel/neural-chat-7b-v3-3](https://huggingface.co/Intel/neural-chat-7b-v3-3) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0010 | 1 | nan |
| 0.0 | 0.0030 | 3 | nan |
| 0.0 | 0.0059 | 6 | nan |
| 0.0 | 0.0089 | 9 | nan |
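
The effective batch size of 8 above comes from a micro-batch of 2 combined with 4 gradient-accumulation steps. A generic sketch of that accumulation pattern in plain PyTorch; the model and data are placeholders, not the actual training pipeline:

```python
import torch

model = torch.nn.Linear(8, 1)  # placeholder for the LoRA-wrapped model
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-4)
accumulation_steps = 4         # gradient_accumulation_steps
# micro_batch_size = 2, so each tensor below holds two examples
micro_batches = [(torch.randn(2, 8), torch.randn(2, 1)) for _ in range(8)]

optimizer.zero_grad()
for step, (x, y) in enumerate(micro_batches, start=1):
    loss = torch.nn.functional.mse_loss(model(x), y)
    (loss / accumulation_steps).backward()  # scale so gradients average over the macro batch
    if step % accumulation_steps == 0:
        optimizer.step()                    # one update per 2 x 4 = 8 examples
        optimizer.zero_grad()
```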
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
kostiantynk1205/96658c36-8901-4700-8dc4-7caa48990751
|
kostiantynk1205
| 2025-01-21T12:09:11Z | 6 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:oopsung/llama2-7b-n-ox-test-v1",
"base_model:adapter:oopsung/llama2-7b-n-ox-test-v1",
"region:us"
] | null | 2025-01-21T12:03:08Z |
---
library_name: peft
base_model: oopsung/llama2-7b-n-ox-test-v1
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 96658c36-8901-4700-8dc4-7caa48990751
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: oopsung/llama2-7b-n-ox-test-v1
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- dddb0489dc663e1a_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/dddb0489dc663e1a_train_data.json
type:
field_input: Context
field_instruction: Question
field_output: Answers
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: kostiantynk1205/96658c36-8901-4700-8dc4-7caa48990751
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/dddb0489dc663e1a_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: bbd202cf-ffeb-42f5-82b2-0c60d893aeab
wandb_project: Birthday-SN56-23-Gradients-On-Demand
wandb_run: your_name
wandb_runid: bbd202cf-ffeb-42f5-82b2-0c60d893aeab
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 96658c36-8901-4700-8dc4-7caa48990751
This model is a fine-tuned version of [oopsung/llama2-7b-n-ox-test-v1](https://huggingface.co/oopsung/llama2-7b-n-ox-test-v1) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0003 | 1 | nan |
| 0.0 | 0.0008 | 3 | nan |
| 0.0 | 0.0017 | 6 | nan |
| 0.0 | 0.0025 | 9 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
ClarenceDan/fd941b3a-7d6d-4a90-a386-08b3ed99ba71
|
ClarenceDan
| 2025-01-21T12:08:08Z | 6 | 0 |
peft
|
[
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:Artples/L-MChat-7b",
"base_model:adapter:Artples/L-MChat-7b",
"license:apache-2.0",
"region:us"
] | null | 2025-01-21T12:06:27Z |
---
library_name: peft
license: apache-2.0
base_model: Artples/L-MChat-7b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: fd941b3a-7d6d-4a90-a386-08b3ed99ba71
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Artples/L-MChat-7b
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- df03514e65800f80_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/df03514e65800f80_train_data.json
type:
field_instruction: input
field_output: response
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: ClarenceDan/fd941b3a-7d6d-4a90-a386-08b3ed99ba71
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/df03514e65800f80_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: <|end_of_turn|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: e3508d62-5471-4cdf-8dba-5844f441931a
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: e3508d62-5471-4cdf-8dba-5844f441931a
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# fd941b3a-7d6d-4a90-a386-08b3ed99ba71
This model is a fine-tuned version of [Artples/L-MChat-7b](https://huggingface.co/Artples/L-MChat-7b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0011 | 1 | nan |
| 0.0 | 0.0034 | 3 | nan |
| 0.0 | 0.0068 | 6 | nan |
| 0.0 | 0.0101 | 9 | nan |
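
The config sets `pad_token: <|end_of_turn|>`, reusing a token that presumably already exists in the L-MChat tokenizer's vocabulary instead of adding a new one. A hedged sketch of the same assignment outside axolotl:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Artples/L-MChat-7b")

# Reuse <|end_of_turn|> as the padding token. This assumes the token is already in the
# vocabulary; if it were not, add_special_tokens plus an embedding resize would be needed.
tokenizer.pad_token = "<|end_of_turn|>"
print(tokenizer.pad_token, tokenizer.pad_token_id)
```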
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
havinash-ai/0c7eaea0-bf84-47bb-838e-c5ec19a67fe9
|
havinash-ai
| 2025-01-21T12:08:02Z | 5 | 0 |
peft
|
[
"peft",
"safetensors",
"gpt_neox",
"axolotl",
"generated_from_trainer",
"base_model:databricks/dolly-v2-3b",
"base_model:adapter:databricks/dolly-v2-3b",
"license:mit",
"region:us"
] | null | 2025-01-21T11:50:34Z |
---
library_name: peft
license: mit
base_model: databricks/dolly-v2-3b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 0c7eaea0-bf84-47bb-838e-c5ec19a67fe9
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: databricks/dolly-v2-3b
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- d4ad1f4ec6a1fae0_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/d4ad1f4ec6a1fae0_train_data.json
type:
field_instruction: Patient
field_output: Description
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: havinash-ai/0c7eaea0-bf84-47bb-838e-c5ec19a67fe9
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/d4ad1f4ec6a1fae0_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 6874924e-0eae-4909-b19a-0c7087adfd79
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 6874924e-0eae-4909-b19a-0c7087adfd79
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 0c7eaea0-bf84-47bb-838e-c5ec19a67fe9
This model is a fine-tuned version of [databricks/dolly-v2-3b](https://huggingface.co/databricks/dolly-v2-3b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1139
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 14.3662 | 0.0000 | 1 | 4.0295 |
| 13.6342 | 0.0001 | 3 | 4.0051 |
| 18.6691 | 0.0002 | 6 | 3.7901 |
| 13.7199 | 0.0003 | 9 | 3.1139 |
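
With `val_set_size: 0.05`, five percent of the JSON records are held out to produce the validation losses above. A rough equivalent using the `datasets` library; the file path follows the config but is not included here, and reusing seed 42 for the split is an assumption:

```python
from datasets import load_dataset

# Path taken from the axolotl config; the file itself is not part of this card.
ds = load_dataset("json", data_files="d4ad1f4ec6a1fae0_train_data.json")["train"]

# test_size mirrors val_set_size; whether axolotl uses the training seed here is assumed.
split = ds.train_test_split(test_size=0.05, seed=42)
train_ds, eval_ds = split["train"], split["test"]
print(len(train_ds), len(eval_ds))
```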
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
nhungphammmmm/a916d37c-5230-43a3-ad27-5b9669db8192
|
nhungphammmmm
| 2025-01-21T12:07:53Z | 6 | 0 |
peft
|
[
"peft",
"safetensors",
"phi3",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:microsoft/Phi-3-mini-128k-instruct",
"base_model:adapter:microsoft/Phi-3-mini-128k-instruct",
"license:mit",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-21T11:49:16Z |
---
library_name: peft
license: mit
base_model: microsoft/Phi-3-mini-128k-instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: a916d37c-5230-43a3-ad27-5b9669db8192
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: microsoft/Phi-3-mini-128k-instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- d23a80b910821333_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/d23a80b910821333_train_data.json
type:
field_input: input
field_instruction: instruction
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: nhungphammmmm/a916d37c-5230-43a3-ad27-5b9669db8192
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/d23a80b910821333_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: c7400a48-f57f-4a5f-8c57-bf09a3ce88d3
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: c7400a48-f57f-4a5f-8c57-bf09a3ce88d3
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# a916d37c-5230-43a3-ad27-5b9669db8192
This model is a fine-tuned version of [microsoft/Phi-3-mini-128k-instruct](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1734
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 8.2072 | 0.0143 | 200 | 2.1734 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
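Since the config above loads the base model in 8-bit, a plausible way to run the adapter is to quantize at inference time as well. A sketch, assuming the adapter was pushed to the `hub_model_id` given in the config:
```python
# Sketch: 8-bit base model + LoRA adapter (assumes the adapter lives at the
# hub_model_id from the axolotl config above).
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base_id = "microsoft/Phi-3-mini-128k-instruct"
adapter_id = "nhungphammmmm/a916d37c-5230-43a3-ad27-5b9669db8192"

tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
base = AutoModelForCausalLM.from_pretrained(
    base_id,
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    device_map="auto",
    trust_remote_code=True,
)
model = PeftModel.from_pretrained(base, adapter_id)

prompt = "Summarize the instruction-following task in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```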
|
hongngo/d131082f-bd87-4d91-b9ee-d2089181769a
|
hongngo
| 2025-01-21T12:04:36Z | 6 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Llama-3.1-Storm-8B",
"base_model:adapter:unsloth/Llama-3.1-Storm-8B",
"license:llama3.1",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-21T11:41:40Z |
---
library_name: peft
license: llama3.1
base_model: unsloth/Llama-3.1-Storm-8B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: d131082f-bd87-4d91-b9ee-d2089181769a
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Llama-3.1-Storm-8B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- dc5c201d257f4800_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/dc5c201d257f4800_train_data.json
type:
field_instruction: instruction
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: hongngo/d131082f-bd87-4d91-b9ee-d2089181769a
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/dc5c201d257f4800_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 1fb620f4-588e-4556-8dd0-8ed7c42fd6cc
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 1fb620f4-588e-4556-8dd0-8ed7c42fd6cc
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# d131082f-bd87-4d91-b9ee-d2089181769a
This model is a fine-tuned version of [unsloth/Llama-3.1-Storm-8B](https://huggingface.co/unsloth/Llama-3.1-Storm-8B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6455
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.3866 | 0.0681 | 200 | 0.6455 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
trangtrannnnn/91e46b57-4c5a-411c-9cd3-585cb158ce5e
|
trangtrannnnn
| 2025-01-21T12:03:49Z | 5 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:NousResearch/Nous-Hermes-llama-2-7b",
"base_model:adapter:NousResearch/Nous-Hermes-llama-2-7b",
"license:mit",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-21T11:41:25Z |
---
library_name: peft
license: mit
base_model: NousResearch/Nous-Hermes-llama-2-7b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 91e46b57-4c5a-411c-9cd3-585cb158ce5e
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/Nous-Hermes-llama-2-7b
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- db35a4b2827972f9_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/db35a4b2827972f9_train_data.json
type:
field_input: rejected
field_instruction: context
field_output: chosen
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: trangtrannnnn/91e46b57-4c5a-411c-9cd3-585cb158ce5e
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/db35a4b2827972f9_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 881827a9-7bb9-4a3a-bfa5-bc8cbc8f588f
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 881827a9-7bb9-4a3a-bfa5-bc8cbc8f588f
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 91e46b57-4c5a-411c-9cd3-585cb158ce5e
This model is a fine-tuned version of [NousResearch/Nous-Hermes-llama-2-7b](https://huggingface.co/NousResearch/Nous-Hermes-llama-2-7b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1095
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.8807 | 0.0294 | 200 | 2.1095 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
lesso02/afd91423-4ee5-4fb8-a211-976981cf01f6
|
lesso02
| 2025-01-21T12:03:00Z | 6 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:oopsung/llama2-7b-n-ox-test-v1",
"base_model:adapter:oopsung/llama2-7b-n-ox-test-v1",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-21T11:38:23Z |
---
library_name: peft
base_model: oopsung/llama2-7b-n-ox-test-v1
tags:
- axolotl
- generated_from_trainer
model-index:
- name: afd91423-4ee5-4fb8-a211-976981cf01f6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: oopsung/llama2-7b-n-ox-test-v1
bf16: true
chat_template: llama3
datasets:
- data_files:
- dddb0489dc663e1a_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/dddb0489dc663e1a_train_data.json
type:
field_input: Context
field_instruction: Question
field_output: Answers
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: 2
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
flash_attention: false
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: lesso02/afd91423-4ee5-4fb8-a211-976981cf01f6
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 25
micro_batch_size: 2
mlflow_experiment_name: /tmp/dddb0489dc663e1a_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: bbd202cf-ffeb-42f5-82b2-0c60d893aeab
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: bbd202cf-ffeb-42f5-82b2-0c60d893aeab
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# afd91423-4ee5-4fb8-a211-976981cf01f6
This model is a fine-tuned version of [oopsung/llama2-7b-n-ox-test-v1](https://huggingface.co/oopsung/llama2-7b-n-ox-test-v1) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0003 | 1 | nan |
| 0.0 | 0.0014 | 5 | nan |
| 0.0 | 0.0028 | 10 | nan |
| 0.0 | 0.0042 | 15 | nan |
| 0.0 | 0.0056 | 20 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
kostiantynk1205/524eee36-38c7-4c32-811d-314e799d4341
|
kostiantynk1205
| 2025-01-21T12:01:21Z | 5 | 0 |
peft
|
[
"peft",
"safetensors",
"gpt_neox",
"axolotl",
"generated_from_trainer",
"base_model:databricks/dolly-v2-3b",
"base_model:adapter:databricks/dolly-v2-3b",
"license:mit",
"region:us"
] | null | 2025-01-21T11:44:21Z |
---
library_name: peft
license: mit
base_model: databricks/dolly-v2-3b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 524eee36-38c7-4c32-811d-314e799d4341
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: databricks/dolly-v2-3b
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- d4ad1f4ec6a1fae0_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/d4ad1f4ec6a1fae0_train_data.json
type:
field_instruction: Patient
field_output: Description
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: kostiantynk1205/524eee36-38c7-4c32-811d-314e799d4341
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/d4ad1f4ec6a1fae0_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 6874924e-0eae-4909-b19a-0c7087adfd79
wandb_project: Birthday-SN56-6-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 6874924e-0eae-4909-b19a-0c7087adfd79
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 524eee36-38c7-4c32-811d-314e799d4341
This model is a fine-tuned version of [databricks/dolly-v2-3b](https://huggingface.co/databricks/dolly-v2-3b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1108
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 14.3662 | 0.0000 | 1 | 4.0295 |
| 13.7001 | 0.0001 | 3 | 4.0041 |
| 18.6084 | 0.0002 | 6 | 3.7846 |
| 13.6494 | 0.0003 | 9 | 3.1108 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
nblinh/7f6d68fe-a293-4c30-b25c-143527739229
|
nblinh
| 2025-01-21T12:00:24Z | 6 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:defog/sqlcoder-7b-2",
"base_model:adapter:defog/sqlcoder-7b-2",
"license:cc-by-sa-4.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-21T11:35:38Z |
---
library_name: peft
license: cc-by-sa-4.0
base_model: defog/sqlcoder-7b-2
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 7f6d68fe-a293-4c30-b25c-143527739229
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: defog/sqlcoder-7b-2
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- fdd56d09ce656747_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/fdd56d09ce656747_train_data.json
type:
field_instruction: INSTRUCTION
field_output: RESPONSE
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: nblinh/7f6d68fe-a293-4c30-b25c-143527739229
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/fdd56d09ce656747_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: fecef9ac-e0fb-4174-87a6-ec0f3fcd1777
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: fecef9ac-e0fb-4174-87a6-ec0f3fcd1777
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 7f6d68fe-a293-4c30-b25c-143527739229
This model is a fine-tuned version of [defog/sqlcoder-7b-2](https://huggingface.co/defog/sqlcoder-7b-2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5545
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.59 | 0.1960 | 200 | 0.5545 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
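If lower inference latency matters, the LoRA weights can be merged into the base model after loading. A sketch, assuming the adapter was pushed to the `hub_model_id` from the config above:
```python
# Sketch: load the adapter onto defog/sqlcoder-7b-2 and merge it for plain
# transformers inference (assumes the adapter is at the config's hub_model_id).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "defog/sqlcoder-7b-2"
adapter_id = "nblinh/7f6d68fe-a293-4c30-b25c-143527739229"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id).merge_and_unload()  # bakes the LoRA deltas into the base weights

prompt = "Write a SQL query that returns the ten most recent orders."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```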
|
ClarenceDan/1b016248-9d07-4011-a03a-975b03b2ce03
|
ClarenceDan
| 2025-01-21T12:00:14Z | 6 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Llama-3.1-Storm-8B",
"base_model:adapter:unsloth/Llama-3.1-Storm-8B",
"license:llama3.1",
"region:us"
] | null | 2025-01-21T11:56:03Z |
---
library_name: peft
license: llama3.1
base_model: unsloth/Llama-3.1-Storm-8B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 1b016248-9d07-4011-a03a-975b03b2ce03
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Llama-3.1-Storm-8B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- dc5c201d257f4800_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/dc5c201d257f4800_train_data.json
type:
field_instruction: instruction
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: ClarenceDan/1b016248-9d07-4011-a03a-975b03b2ce03
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/dc5c201d257f4800_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 1fb620f4-588e-4556-8dd0-8ed7c42fd6cc
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 1fb620f4-588e-4556-8dd0-8ed7c42fd6cc
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 1b016248-9d07-4011-a03a-975b03b2ce03
This model is a fine-tuned version of [unsloth/Llama-3.1-Storm-8B](https://huggingface.co/unsloth/Llama-3.1-Storm-8B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0003 | 1 | nan |
| 0.0 | 0.0010 | 3 | nan |
| 0.0 | 0.0020 | 6 | nan |
| 0.0 | 0.0031 | 9 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
havinash-ai/c0a35a3f-01eb-453e-b74f-a4f270d22a82
|
havinash-ai
| 2025-01-21T11:59:58Z | 6 | 0 |
peft
|
[
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:Intel/neural-chat-7b-v3-3",
"base_model:adapter:Intel/neural-chat-7b-v3-3",
"license:apache-2.0",
"region:us"
] | null | 2025-01-21T11:57:56Z |
---
library_name: peft
license: apache-2.0
base_model: Intel/neural-chat-7b-v3-3
tags:
- axolotl
- generated_from_trainer
model-index:
- name: c0a35a3f-01eb-453e-b74f-a4f270d22a82
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Intel/neural-chat-7b-v3-3
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- b5e06bf0e602bd38_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/b5e06bf0e602bd38_train_data.json
type:
field_instruction: section
field_output: text
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: havinash-ai/c0a35a3f-01eb-453e-b74f-a4f270d22a82
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/b5e06bf0e602bd38_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 5312563a-16b4-452e-84f7-611f95b514ff
wandb_project: Mine-SN56-2-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 5312563a-16b4-452e-84f7-611f95b514ff
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# c0a35a3f-01eb-453e-b74f-a4f270d22a82
This model is a fine-tuned version of [Intel/neural-chat-7b-v3-3](https://huggingface.co/Intel/neural-chat-7b-v3-3) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0010 | 1 | nan |
| 0.0 | 0.0030 | 3 | nan |
| 0.0 | 0.0059 | 6 | nan |
| 0.0 | 0.0089 | 9 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
JetBrains-Research/deepseek-coder-1.3b-instruct-comment-resolution
|
JetBrains-Research
| 2025-01-21T11:59:54Z | 63 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-12-02T17:35:56Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
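As a hedged starting point (no usage snippet is documented), the sketch below assumes the standard transformers text-generation API and a deepseek-coder-style instruct prompt:
```python
# Hedged sketch: standard transformers text-generation usage (assumption; the
# card does not document the intended prompt format).
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="JetBrains-Research/deepseek-coder-1.3b-instruct-comment-resolution",
    device_map="auto",
)
prompt = "Resolve the following code review comment:\n# reviewer: please rename `tmp` to something descriptive\n"
result = pipe(prompt, max_new_tokens=64, do_sample=False)
print(result[0]["generated_text"])
```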
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
MayBashendy/ArabicNewSplits7_usingWellWrittenEssays_FineTuningAraBERT_run1_AugV5_k17_task7_organization
|
MayBashendy
| 2025-01-21T11:59:01Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:aubmindlab/bert-base-arabertv02",
"base_model:finetune:aubmindlab/bert-base-arabertv02",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-01-21T11:55:01Z |
---
library_name: transformers
base_model: aubmindlab/bert-base-arabertv02
tags:
- generated_from_trainer
model-index:
- name: ArabicNewSplits7_usingWellWrittenEssays_FineTuningAraBERT_run1_AugV5_k17_task7_organization
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ArabicNewSplits7_usingWellWrittenEssays_FineTuningAraBERT_run1_AugV5_k17_task7_organization
This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0676
- Qwk: 0.2683
- Mse: 1.0676
- Rmse: 1.0333
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:------:|
| No log | 0.05 | 2 | 2.7815 | -0.0481 | 2.7815 | 1.6678 |
| No log | 0.1 | 4 | 1.7927 | 0.0061 | 1.7927 | 1.3389 |
| No log | 0.15 | 6 | 2.0211 | -0.1653 | 2.0211 | 1.4217 |
| No log | 0.2 | 8 | 1.3569 | -0.1328 | 1.3569 | 1.1648 |
| No log | 0.25 | 10 | 1.0144 | 0.0054 | 1.0144 | 1.0072 |
| No log | 0.3 | 12 | 0.9010 | 0.1461 | 0.9010 | 0.9492 |
| No log | 0.35 | 14 | 0.9014 | 0.1534 | 0.9014 | 0.9494 |
| No log | 0.4 | 16 | 0.8922 | 0.1636 | 0.8922 | 0.9445 |
| No log | 0.45 | 18 | 0.8487 | 0.0679 | 0.8487 | 0.9212 |
| No log | 0.5 | 20 | 0.9147 | 0.1511 | 0.9147 | 0.9564 |
| No log | 0.55 | 22 | 1.0285 | 0.1259 | 1.0285 | 1.0141 |
| No log | 0.6 | 24 | 1.0813 | 0.0986 | 1.0813 | 1.0398 |
| No log | 0.65 | 26 | 0.8837 | 0.2132 | 0.8837 | 0.9401 |
| No log | 0.7 | 28 | 0.7961 | 0.0937 | 0.7961 | 0.8922 |
| No log | 0.75 | 30 | 0.7689 | 0.0481 | 0.7689 | 0.8769 |
| No log | 0.8 | 32 | 0.7462 | 0.0481 | 0.7462 | 0.8638 |
| No log | 0.85 | 34 | 0.7381 | 0.0884 | 0.7381 | 0.8591 |
| No log | 0.9 | 36 | 0.7520 | 0.0 | 0.7520 | 0.8672 |
| No log | 0.95 | 38 | 0.7783 | 0.0481 | 0.7783 | 0.8822 |
| No log | 1.0 | 40 | 0.7709 | 0.0 | 0.7709 | 0.8780 |
| No log | 1.05 | 42 | 0.7433 | 0.0 | 0.7433 | 0.8622 |
| No log | 1.1 | 44 | 0.7383 | 0.0 | 0.7383 | 0.8592 |
| No log | 1.15 | 46 | 0.7397 | 0.0 | 0.7397 | 0.8601 |
| No log | 1.2 | 48 | 0.7337 | 0.0884 | 0.7337 | 0.8566 |
| No log | 1.25 | 50 | 0.7301 | 0.1236 | 0.7301 | 0.8544 |
| No log | 1.3 | 52 | 0.7242 | 0.1456 | 0.7242 | 0.8510 |
| No log | 1.35 | 54 | 0.7434 | 0.1807 | 0.7434 | 0.8622 |
| No log | 1.4 | 56 | 0.7364 | 0.1508 | 0.7364 | 0.8581 |
| No log | 1.45 | 58 | 0.7273 | 0.1187 | 0.7273 | 0.8528 |
| No log | 1.5 | 60 | 0.7258 | 0.0840 | 0.7258 | 0.8520 |
| No log | 1.55 | 62 | 0.7335 | 0.0444 | 0.7335 | 0.8564 |
| No log | 1.6 | 64 | 0.7513 | 0.0937 | 0.7513 | 0.8668 |
| No log | 1.65 | 66 | 0.7398 | 0.0481 | 0.7398 | 0.8601 |
| No log | 1.7 | 68 | 0.7442 | 0.0 | 0.7442 | 0.8627 |
| No log | 1.75 | 70 | 0.7480 | 0.0 | 0.7480 | 0.8649 |
| No log | 1.8 | 72 | 0.7431 | -0.0027 | 0.7431 | 0.8620 |
| No log | 1.85 | 74 | 0.7483 | 0.0893 | 0.7483 | 0.8651 |
| No log | 1.9 | 76 | 0.7506 | 0.0026 | 0.7506 | 0.8664 |
| No log | 1.95 | 78 | 0.7455 | 0.0026 | 0.7455 | 0.8634 |
| No log | 2.0 | 80 | 0.7296 | 0.0764 | 0.7296 | 0.8542 |
| No log | 2.05 | 82 | 0.7244 | 0.0410 | 0.7244 | 0.8511 |
| No log | 2.1 | 84 | 0.7185 | 0.0481 | 0.7185 | 0.8476 |
| No log | 2.15 | 86 | 0.7201 | 0.0481 | 0.7201 | 0.8486 |
| No log | 2.2 | 88 | 0.7684 | 0.0688 | 0.7684 | 0.8766 |
| No log | 2.25 | 90 | 0.8471 | -0.0047 | 0.8471 | 0.9204 |
| No log | 2.3 | 92 | 0.9219 | 0.0336 | 0.9219 | 0.9602 |
| No log | 2.35 | 94 | 0.8593 | 0.0661 | 0.8593 | 0.9270 |
| No log | 2.4 | 96 | 0.7927 | 0.1448 | 0.7927 | 0.8903 |
| No log | 2.45 | 98 | 0.7396 | 0.2158 | 0.7396 | 0.8600 |
| No log | 2.5 | 100 | 0.7441 | 0.2158 | 0.7441 | 0.8626 |
| No log | 2.55 | 102 | 0.7275 | 0.1867 | 0.7275 | 0.8530 |
| No log | 2.6 | 104 | 0.7325 | 0.2509 | 0.7325 | 0.8559 |
| No log | 2.65 | 106 | 0.7702 | 0.2218 | 0.7702 | 0.8776 |
| No log | 2.7 | 108 | 0.7711 | 0.2158 | 0.7711 | 0.8781 |
| No log | 2.75 | 110 | 0.7585 | 0.2158 | 0.7585 | 0.8709 |
| No log | 2.8 | 112 | 0.7625 | 0.2158 | 0.7625 | 0.8732 |
| No log | 2.85 | 114 | 0.7762 | 0.2413 | 0.7762 | 0.8810 |
| No log | 2.9 | 116 | 0.7775 | 0.1901 | 0.7775 | 0.8818 |
| No log | 2.95 | 118 | 0.7895 | 0.2847 | 0.7895 | 0.8886 |
| No log | 3.0 | 120 | 0.7612 | 0.1624 | 0.7612 | 0.8724 |
| No log | 3.05 | 122 | 0.7445 | 0.2158 | 0.7445 | 0.8629 |
| No log | 3.1 | 124 | 0.7593 | 0.1010 | 0.7593 | 0.8714 |
| No log | 3.15 | 126 | 0.8076 | 0.0971 | 0.8076 | 0.8986 |
| No log | 3.2 | 128 | 0.7975 | 0.0971 | 0.7975 | 0.8930 |
| No log | 3.25 | 130 | 0.7766 | 0.0697 | 0.7766 | 0.8812 |
| No log | 3.3 | 132 | 0.7984 | 0.1051 | 0.7984 | 0.8935 |
| No log | 3.35 | 134 | 0.9101 | 0.2149 | 0.9101 | 0.9540 |
| No log | 3.4 | 136 | 1.0361 | 0.2521 | 1.0361 | 1.0179 |
| No log | 3.45 | 138 | 1.0476 | 0.2364 | 1.0476 | 1.0235 |
| No log | 3.5 | 140 | 0.9632 | 0.1995 | 0.9632 | 0.9814 |
| No log | 3.55 | 142 | 0.9064 | 0.0584 | 0.9064 | 0.9521 |
| No log | 3.6 | 144 | 0.8631 | 0.0697 | 0.8631 | 0.9290 |
| No log | 3.65 | 146 | 0.9331 | 0.0975 | 0.9331 | 0.9660 |
| No log | 3.7 | 148 | 0.9420 | 0.0856 | 0.9420 | 0.9706 |
| No log | 3.75 | 150 | 0.9851 | 0.2193 | 0.9851 | 0.9925 |
| No log | 3.8 | 152 | 0.9770 | 0.2892 | 0.9770 | 0.9885 |
| No log | 3.85 | 154 | 0.9237 | 0.2439 | 0.9237 | 0.9611 |
| No log | 3.9 | 156 | 0.8649 | 0.2943 | 0.8649 | 0.9300 |
| No log | 3.95 | 158 | 0.8627 | 0.3369 | 0.8627 | 0.9288 |
| No log | 4.0 | 160 | 0.9110 | 0.2912 | 0.9110 | 0.9544 |
| No log | 4.05 | 162 | 0.8747 | 0.3115 | 0.8747 | 0.9353 |
| No log | 4.1 | 164 | 0.8485 | 0.3157 | 0.8485 | 0.9211 |
| No log | 4.15 | 166 | 0.8876 | 0.2059 | 0.8876 | 0.9421 |
| No log | 4.2 | 168 | 0.8396 | 0.2662 | 0.8396 | 0.9163 |
| No log | 4.25 | 170 | 0.7145 | 0.3020 | 0.7145 | 0.8453 |
| No log | 4.3 | 172 | 0.6763 | 0.1829 | 0.6763 | 0.8224 |
| No log | 4.35 | 174 | 0.6848 | 0.2181 | 0.6848 | 0.8275 |
| No log | 4.4 | 176 | 0.7393 | 0.4052 | 0.7393 | 0.8598 |
| No log | 4.45 | 178 | 0.8544 | 0.4251 | 0.8544 | 0.9243 |
| No log | 4.5 | 180 | 0.8607 | 0.3754 | 0.8607 | 0.9277 |
| No log | 4.55 | 182 | 0.8286 | 0.4251 | 0.8286 | 0.9103 |
| No log | 4.6 | 184 | 0.7767 | 0.3167 | 0.7767 | 0.8813 |
| No log | 4.65 | 186 | 0.7748 | 0.3167 | 0.7748 | 0.8802 |
| No log | 4.7 | 188 | 0.7632 | 0.3622 | 0.7632 | 0.8736 |
| No log | 4.75 | 190 | 0.7500 | 0.3341 | 0.7500 | 0.8660 |
| No log | 4.8 | 192 | 0.7483 | 0.2950 | 0.7483 | 0.8651 |
| No log | 4.85 | 194 | 0.7769 | 0.4642 | 0.7769 | 0.8814 |
| No log | 4.9 | 196 | 0.7819 | 0.5120 | 0.7819 | 0.8843 |
| No log | 4.95 | 198 | 0.8057 | 0.3789 | 0.8057 | 0.8976 |
| No log | 5.0 | 200 | 0.8061 | 0.2950 | 0.8061 | 0.8978 |
| No log | 5.05 | 202 | 0.8570 | 0.2967 | 0.8570 | 0.9257 |
| No log | 5.1 | 204 | 0.8946 | 0.4462 | 0.8946 | 0.9458 |
| No log | 5.15 | 206 | 0.8111 | 0.3372 | 0.8111 | 0.9006 |
| No log | 5.2 | 208 | 0.7667 | 0.2847 | 0.7667 | 0.8756 |
| No log | 5.25 | 210 | 0.8306 | 0.4247 | 0.8306 | 0.9114 |
| No log | 5.3 | 212 | 0.9020 | 0.3333 | 0.9020 | 0.9498 |
| No log | 5.35 | 214 | 0.9648 | 0.3727 | 0.9648 | 0.9822 |
| No log | 5.4 | 216 | 0.9330 | 0.3012 | 0.9330 | 0.9659 |
| No log | 5.45 | 218 | 0.9431 | 0.2779 | 0.9431 | 0.9712 |
| No log | 5.5 | 220 | 0.8969 | 0.1029 | 0.8969 | 0.9471 |
| No log | 5.55 | 222 | 0.8776 | 0.1918 | 0.8776 | 0.9368 |
| No log | 5.6 | 224 | 0.8238 | 0.1935 | 0.8238 | 0.9076 |
| No log | 5.65 | 226 | 0.7614 | 0.3127 | 0.7614 | 0.8726 |
| No log | 5.7 | 228 | 0.7729 | 0.3399 | 0.7729 | 0.8791 |
| No log | 5.75 | 230 | 0.8301 | 0.3425 | 0.8301 | 0.9111 |
| No log | 5.8 | 232 | 0.9000 | 0.3579 | 0.9000 | 0.9487 |
| No log | 5.85 | 234 | 0.9826 | 0.2886 | 0.9826 | 0.9913 |
| No log | 5.9 | 236 | 1.0048 | 0.3059 | 1.0048 | 1.0024 |
| No log | 5.95 | 238 | 0.9560 | 0.2886 | 0.9560 | 0.9778 |
| No log | 6.0 | 240 | 0.9846 | 0.2886 | 0.9846 | 0.9923 |
| No log | 6.05 | 242 | 1.0053 | 0.3247 | 1.0053 | 1.0026 |
| No log | 6.1 | 244 | 0.8946 | 0.2923 | 0.8946 | 0.9458 |
| No log | 6.15 | 246 | 0.8653 | 0.2518 | 0.8653 | 0.9302 |
| No log | 6.2 | 248 | 0.8387 | 0.3127 | 0.8387 | 0.9158 |
| No log | 6.25 | 250 | 0.8439 | 0.3060 | 0.8439 | 0.9187 |
| No log | 6.3 | 252 | 0.8582 | 0.2632 | 0.8582 | 0.9264 |
| No log | 6.35 | 254 | 0.9131 | 0.4113 | 0.9131 | 0.9556 |
| No log | 6.4 | 256 | 0.9065 | 0.4113 | 0.9065 | 0.9521 |
| No log | 6.45 | 258 | 0.8590 | 0.4462 | 0.8590 | 0.9268 |
| No log | 6.5 | 260 | 0.8227 | 0.3169 | 0.8227 | 0.9070 |
| No log | 6.55 | 262 | 0.8932 | 0.3371 | 0.8932 | 0.9451 |
| No log | 6.6 | 264 | 1.0092 | 0.2802 | 1.0092 | 1.0046 |
| No log | 6.65 | 266 | 1.0151 | 0.2926 | 1.0151 | 1.0075 |
| No log | 6.7 | 268 | 0.9591 | 0.3417 | 0.9591 | 0.9794 |
| No log | 6.75 | 270 | 0.9389 | 0.3579 | 0.9389 | 0.9690 |
| No log | 6.8 | 272 | 0.9699 | 0.3302 | 0.9699 | 0.9848 |
| No log | 6.85 | 274 | 1.0013 | 0.2501 | 1.0013 | 1.0007 |
| No log | 6.9 | 276 | 1.0755 | 0.2264 | 1.0755 | 1.0371 |
| No log | 6.95 | 278 | 1.1285 | 0.2264 | 1.1285 | 1.0623 |
| No log | 7.0 | 280 | 1.0357 | 0.2796 | 1.0357 | 1.0177 |
| No log | 7.05 | 282 | 0.9956 | 0.3608 | 0.9956 | 0.9978 |
| No log | 7.1 | 284 | 0.9900 | 0.3557 | 0.9900 | 0.9950 |
| No log | 7.15 | 286 | 0.9507 | 0.3557 | 0.9507 | 0.9751 |
| No log | 7.2 | 288 | 0.8655 | 0.4862 | 0.8655 | 0.9303 |
| No log | 7.25 | 290 | 0.8401 | 0.4144 | 0.8401 | 0.9166 |
| No log | 7.3 | 292 | 0.8795 | 0.4462 | 0.8795 | 0.9378 |
| No log | 7.35 | 294 | 1.0027 | 0.2732 | 1.0027 | 1.0014 |
| No log | 7.4 | 296 | 1.0642 | 0.1981 | 1.0642 | 1.0316 |
| No log | 7.45 | 298 | 1.0066 | 0.3114 | 1.0066 | 1.0033 |
| No log | 7.5 | 300 | 0.8917 | 0.4541 | 0.8917 | 0.9443 |
| No log | 7.55 | 302 | 0.8794 | 0.5077 | 0.8794 | 0.9378 |
| No log | 7.6 | 304 | 0.9474 | 0.4044 | 0.9474 | 0.9733 |
| No log | 7.65 | 306 | 0.9984 | 0.3031 | 0.9984 | 0.9992 |
| No log | 7.7 | 308 | 0.9133 | 0.4114 | 0.9133 | 0.9557 |
| No log | 7.75 | 310 | 0.7844 | 0.2605 | 0.7844 | 0.8857 |
| No log | 7.8 | 312 | 0.7335 | 0.2809 | 0.7335 | 0.8564 |
| No log | 7.85 | 314 | 0.7312 | 0.2809 | 0.7312 | 0.8551 |
| No log | 7.9 | 316 | 0.7909 | 0.3121 | 0.7909 | 0.8893 |
| No log | 7.95 | 318 | 1.0015 | 0.2389 | 1.0015 | 1.0007 |
| No log | 8.0 | 320 | 1.2296 | 0.1391 | 1.2296 | 1.1089 |
| No log | 8.05 | 322 | 1.4373 | 0.1122 | 1.4373 | 1.1989 |
| No log | 8.1 | 324 | 1.4303 | 0.1122 | 1.4303 | 1.1960 |
| No log | 8.15 | 326 | 1.2038 | 0.1654 | 1.2038 | 1.0972 |
| No log | 8.2 | 328 | 0.9614 | 0.2075 | 0.9614 | 0.9805 |
| No log | 8.25 | 330 | 0.8355 | 0.3564 | 0.8355 | 0.9141 |
| No log | 8.3 | 332 | 0.8083 | 0.4329 | 0.8083 | 0.8991 |
| No log | 8.35 | 334 | 0.8333 | 0.4644 | 0.8333 | 0.9129 |
| No log | 8.4 | 336 | 0.8453 | 0.4627 | 0.8453 | 0.9194 |
| No log | 8.45 | 338 | 0.8036 | 0.4167 | 0.8036 | 0.8965 |
| No log | 8.5 | 340 | 0.7977 | 0.3399 | 0.7977 | 0.8931 |
| No log | 8.55 | 342 | 0.8403 | 0.3544 | 0.8403 | 0.9167 |
| No log | 8.6 | 344 | 0.8785 | 0.3121 | 0.8785 | 0.9373 |
| No log | 8.65 | 346 | 0.9207 | 0.3207 | 0.9207 | 0.9595 |
| No log | 8.7 | 348 | 0.9339 | 0.2669 | 0.9339 | 0.9664 |
| No log | 8.75 | 350 | 0.8966 | 0.3677 | 0.8966 | 0.9469 |
| No log | 8.8 | 352 | 0.8942 | 0.3329 | 0.8942 | 0.9456 |
| No log | 8.85 | 354 | 0.9491 | 0.3560 | 0.9491 | 0.9742 |
| No log | 8.9 | 356 | 1.1092 | 0.2520 | 1.1092 | 1.0532 |
| No log | 8.95 | 358 | 1.2857 | 0.2197 | 1.2857 | 1.1339 |
| No log | 9.0 | 360 | 1.3019 | 0.1793 | 1.3019 | 1.1410 |
| No log | 9.05 | 362 | 1.1466 | 0.2191 | 1.1466 | 1.0708 |
| No log | 9.1 | 364 | 0.9258 | 0.4113 | 0.9258 | 0.9622 |
| No log | 9.15 | 366 | 0.8345 | 0.3746 | 0.8345 | 0.9135 |
| No log | 9.2 | 368 | 0.8240 | 0.3972 | 0.8240 | 0.9077 |
| No log | 9.25 | 370 | 0.8596 | 0.4462 | 0.8596 | 0.9271 |
| No log | 9.3 | 372 | 0.9124 | 0.4008 | 0.9124 | 0.9552 |
| No log | 9.35 | 374 | 0.9869 | 0.3359 | 0.9869 | 0.9934 |
| No log | 9.4 | 376 | 1.0105 | 0.2926 | 1.0105 | 1.0052 |
| No log | 9.45 | 378 | 0.9363 | 0.4328 | 0.9363 | 0.9676 |
| No log | 9.5 | 380 | 0.8247 | 0.3564 | 0.8247 | 0.9081 |
| No log | 9.55 | 382 | 0.7809 | 0.2589 | 0.7809 | 0.8837 |
| No log | 9.6 | 384 | 0.7887 | 0.2589 | 0.7887 | 0.8881 |
| No log | 9.65 | 386 | 0.8558 | 0.3564 | 0.8558 | 0.9251 |
| No log | 9.7 | 388 | 1.0092 | 0.2659 | 1.0092 | 1.0046 |
| No log | 9.75 | 390 | 1.1964 | 0.1805 | 1.1964 | 1.0938 |
| No log | 9.8 | 392 | 1.2204 | 0.1479 | 1.2204 | 1.1047 |
| No log | 9.85 | 394 | 1.1127 | 0.2264 | 1.1127 | 1.0548 |
| No log | 9.9 | 396 | 0.9637 | 0.3516 | 0.9637 | 0.9817 |
| No log | 9.95 | 398 | 0.8570 | 0.4144 | 0.8570 | 0.9257 |
| No log | 10.0 | 400 | 0.8032 | 0.3099 | 0.8032 | 0.8962 |
| No log | 10.05 | 402 | 0.8018 | 0.2527 | 0.8018 | 0.8954 |
| No log | 10.1 | 404 | 0.8452 | 0.3564 | 0.8452 | 0.9193 |
| No log | 10.15 | 406 | 0.9750 | 0.2651 | 0.9750 | 0.9874 |
| No log | 10.2 | 408 | 1.1752 | 0.1508 | 1.1752 | 1.0841 |
| No log | 10.25 | 410 | 1.2968 | 0.1522 | 1.2968 | 1.1388 |
| No log | 10.3 | 412 | 1.3569 | 0.1414 | 1.3569 | 1.1649 |
| No log | 10.35 | 414 | 1.2820 | 0.1549 | 1.2820 | 1.1323 |
| No log | 10.4 | 416 | 1.1277 | 0.1568 | 1.1277 | 1.0619 |
| No log | 10.45 | 418 | 0.9659 | 0.3761 | 0.9659 | 0.9828 |
| No log | 10.5 | 420 | 0.8109 | 0.4247 | 0.8109 | 0.9005 |
| No log | 10.55 | 422 | 0.7443 | 0.2558 | 0.7443 | 0.8628 |
| No log | 10.6 | 424 | 0.7236 | 0.2261 | 0.7236 | 0.8506 |
| No log | 10.65 | 426 | 0.7324 | 0.2261 | 0.7324 | 0.8558 |
| No log | 10.7 | 428 | 0.7808 | 0.3399 | 0.7808 | 0.8836 |
| No log | 10.75 | 430 | 0.8909 | 0.4404 | 0.8909 | 0.9439 |
| No log | 10.8 | 432 | 1.0132 | 0.4044 | 1.0132 | 1.0066 |
| No log | 10.85 | 434 | 1.0463 | 0.2264 | 1.0463 | 1.0229 |
| No log | 10.9 | 436 | 1.0307 | 0.1858 | 1.0307 | 1.0152 |
| No log | 10.95 | 438 | 0.9524 | 0.2510 | 0.9524 | 0.9759 |
| No log | 11.0 | 440 | 0.9381 | 0.2866 | 0.9381 | 0.9686 |
| No log | 11.05 | 442 | 0.9587 | 0.2460 | 0.9587 | 0.9791 |
| No log | 11.1 | 444 | 0.9440 | 0.2510 | 0.9440 | 0.9716 |
| No log | 11.15 | 446 | 0.9761 | 0.1747 | 0.9761 | 0.9880 |
| No log | 11.2 | 448 | 1.1083 | 0.1870 | 1.1083 | 1.0527 |
| No log | 11.25 | 450 | 1.1684 | 0.1203 | 1.1684 | 1.0809 |
| No log | 11.3 | 452 | 1.1929 | 0.1530 | 1.1929 | 1.0922 |
| No log | 11.35 | 454 | 1.1310 | 0.1671 | 1.1310 | 1.0635 |
| No log | 11.4 | 456 | 1.0862 | 0.0922 | 1.0862 | 1.0422 |
| No log | 11.45 | 458 | 1.0900 | 0.1463 | 1.0900 | 1.0440 |
| No log | 11.5 | 460 | 1.0844 | 0.1671 | 1.0844 | 1.0414 |
| No log | 11.55 | 462 | 1.0999 | 0.1909 | 1.0999 | 1.0488 |
| No log | 11.6 | 464 | 1.1240 | 0.2020 | 1.1240 | 1.0602 |
| No log | 11.65 | 466 | 1.0884 | 0.2020 | 1.0884 | 1.0433 |
| No log | 11.7 | 468 | 1.0469 | 0.2732 | 1.0469 | 1.0232 |
| No log | 11.75 | 470 | 1.0083 | 0.2271 | 1.0083 | 1.0041 |
| No log | 11.8 | 472 | 0.9795 | 0.2119 | 0.9795 | 0.9897 |
| No log | 11.85 | 474 | 0.9908 | 0.2703 | 0.9908 | 0.9954 |
| No log | 11.9 | 476 | 1.0087 | 0.2886 | 1.0087 | 1.0044 |
| No log | 11.95 | 478 | 1.0151 | 0.2552 | 1.0151 | 1.0075 |
| No log | 12.0 | 480 | 1.0537 | 0.2045 | 1.0537 | 1.0265 |
| No log | 12.05 | 482 | 1.0959 | 0.2006 | 1.0959 | 1.0468 |
| No log | 12.1 | 484 | 1.0750 | 0.2006 | 1.0750 | 1.0368 |
| No log | 12.15 | 486 | 1.0557 | 0.2833 | 1.0557 | 1.0275 |
| No log | 12.2 | 488 | 1.0304 | 0.2886 | 1.0304 | 1.0151 |
| No log | 12.25 | 490 | 1.0110 | 0.2995 | 1.0110 | 1.0055 |
| No log | 12.3 | 492 | 0.9812 | 0.3110 | 0.9812 | 0.9906 |
| No log | 12.35 | 494 | 0.9318 | 0.3606 | 0.9318 | 0.9653 |
| No log | 12.4 | 496 | 0.9165 | 0.3359 | 0.9165 | 0.9573 |
| No log | 12.45 | 498 | 0.9682 | 0.2487 | 0.9682 | 0.9840 |
| 0.3594 | 12.5 | 500 | 1.0243 | 0.3193 | 1.0243 | 1.0121 |
| 0.3594 | 12.55 | 502 | 0.9971 | 0.3193 | 0.9971 | 0.9985 |
| 0.3594 | 12.6 | 504 | 0.9262 | 0.3846 | 0.9262 | 0.9624 |
| 0.3594 | 12.65 | 506 | 0.9074 | 0.3918 | 0.9074 | 0.9526 |
| 0.3594 | 12.7 | 508 | 0.9062 | 0.3991 | 0.9062 | 0.9520 |
| 0.3594 | 12.75 | 510 | 0.9505 | 0.2703 | 0.9505 | 0.9750 |
| 0.3594 | 12.8 | 512 | 1.0398 | 0.2683 | 1.0398 | 1.0197 |
| 0.3594 | 12.85 | 514 | 1.1322 | 0.2059 | 1.1322 | 1.0640 |
| 0.3594 | 12.9 | 516 | 1.1282 | 0.2059 | 1.1282 | 1.0622 |
| 0.3594 | 12.95 | 518 | 1.0676 | 0.2683 | 1.0676 | 1.0333 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu118
- Datasets 2.21.0
- Tokenizers 0.19.1
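For orientation, a hedged inference sketch: the Qwk/MSE/RMSE metrics above suggest a single-score (regression-style) head, so the example reads the raw logit as the predicted organization score. This is an assumption, not documented behavior.
```python
# Hedged sketch: score a text for "organization" (assumes a single-logit,
# regression-style classification head, consistent with the MSE/RMSE metrics).
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "MayBashendy/ArabicNewSplits7_usingWellWrittenEssays_FineTuningAraBERT_run1_AugV5_k17_task7_organization"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

text = "هذا نص قصير لتجربة تقييم تنظيم الكتابة."  # short sample essay text
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    score = model(**inputs).logits.squeeze().item()
print(f"predicted organization score: {score:.3f}")
```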
|
trenden/1c28c7f4-a783-47ef-8d8c-acd4814f8fca
|
trenden
| 2025-01-21T11:56:54Z | 5 | 0 |
peft
|
[
"peft",
"safetensors",
"gpt_neox",
"axolotl",
"generated_from_trainer",
"base_model:databricks/dolly-v2-3b",
"base_model:adapter:databricks/dolly-v2-3b",
"license:mit",
"region:us"
] | null | 2025-01-21T11:41:06Z |
---
library_name: peft
license: mit
base_model: databricks/dolly-v2-3b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 1c28c7f4-a783-47ef-8d8c-acd4814f8fca
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: databricks/dolly-v2-3b
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- d4ad1f4ec6a1fae0_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/d4ad1f4ec6a1fae0_train_data.json
type:
field_instruction: Patient
field_output: Description
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: trenden/1c28c7f4-a783-47ef-8d8c-acd4814f8fca
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/d4ad1f4ec6a1fae0_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 6874924e-0eae-4909-b19a-0c7087adfd79
wandb_project: Birthday-SN56-26-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 6874924e-0eae-4909-b19a-0c7087adfd79
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 1c28c7f4-a783-47ef-8d8c-acd4814f8fca
This model is a fine-tuned version of [databricks/dolly-v2-3b](https://huggingface.co/databricks/dolly-v2-3b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1110
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 14.3662 | 0.0000 | 1 | 4.0295 |
| 13.7863 | 0.0001 | 3 | 4.0044 |
| 18.6008 | 0.0002 | 6 | 3.7874 |
| 13.7364 | 0.0003 | 9 | 3.1110 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
JuniperChinenye/wakeupvalis8
|
JuniperChinenye
| 2025-01-21T11:56:08Z | 30 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-01-21T11:53:32Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
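Since the card provides no details beyond the repository id and its text-generation/conversational tags, the snippet below is only a hedged, generic starting point; the prompt and generation settings are placeholders.

```python
# Generic starting point (assumptions only: the repo id and text-generation tag come from this card).
from transformers import pipeline

generator = pipeline("text-generation", model="JuniperChinenye/wakeupvalis8", device_map="auto")
prompt = "Hello, what can you do?"  # placeholder prompt; a chat template may apply given the conversational tag
print(generator(prompt, max_new_tokens=64)[0]["generated_text"])
```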
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
lesso14/66972514-8697-43de-9112-4681ac299b37
|
lesso14
| 2025-01-21T11:55:37Z | 6 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Phi-3.5-mini-instruct",
"base_model:adapter:unsloth/Phi-3.5-mini-instruct",
"license:mit",
"region:us"
] | null | 2025-01-21T11:50:52Z |
---
library_name: peft
license: mit
base_model: unsloth/Phi-3.5-mini-instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 66972514-8697-43de-9112-4681ac299b37
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Phi-3.5-mini-instruct
bf16: true
chat_template: llama3
datasets:
- data_files:
- 223b73ed5c333b1a_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/223b73ed5c333b1a_train_data.json
type:
field_instruction: category
field_output: prompt
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: 2
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
flash_attention: false
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: lesso14/66972514-8697-43de-9112-4681ac299b37
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 25
micro_batch_size: 2
mlflow_experiment_name: /tmp/223b73ed5c333b1a_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_hf
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: a7378db3-707d-4698-9a3f-5c035ca3db01
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: a7378db3-707d-4698-9a3f-5c035ca3db01
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 66972514-8697-43de-9112-4681ac299b37
This model is a fine-tuned version of [unsloth/Phi-3.5-mini-instruct](https://huggingface.co/unsloth/Phi-3.5-mini-instruct) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (Hugging Face implementation) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0009 | 1 | nan |
| 0.0 | 0.0043 | 5 | nan |
| 0.0 | 0.0086 | 10 | nan |
| 0.0 | 0.0129 | 15 | nan |
| 0.0 | 0.0172 | 20 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
lesso09/31427da4-2cde-4f4b-abb3-2871ae1631e1
|
lesso09
| 2025-01-21T11:55:15Z | 11 | 0 |
peft
|
[
"peft",
"safetensors",
"opt",
"axolotl",
"generated_from_trainer",
"base_model:facebook/opt-125m",
"base_model:adapter:facebook/opt-125m",
"license:other",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-21T11:53:11Z |
---
library_name: peft
license: other
base_model: facebook/opt-125m
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 31427da4-2cde-4f4b-abb3-2871ae1631e1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: facebook/opt-125m
bf16: true
chat_template: llama3
datasets:
- data_files:
- ac24df8c526e3c85_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/ac24df8c526e3c85_train_data.json
type:
field_input: vw_text
field_instruction: id
field_output: raw_text
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: 2
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
flash_attention: false
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: lesso09/31427da4-2cde-4f4b-abb3-2871ae1631e1
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 25
micro_batch_size: 2
mlflow_experiment_name: /tmp/ac24df8c526e3c85_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 722a0b59-9cc9-4456-b05d-e688625587ce
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 722a0b59-9cc9-4456-b05d-e688625587ce
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 31427da4-2cde-4f4b-abb3-2871ae1631e1
This model is a fine-tuned version of [facebook/opt-125m](https://huggingface.co/facebook/opt-125m) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7460
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 13.0725 | 0.0011 | 1 | 3.1971 |
| 12.7422 | 0.0055 | 5 | 3.1474 |
| 12.8721 | 0.0110 | 10 | 2.9925 |
| 11.728 | 0.0165 | 15 | 2.8301 |
| 11.6738 | 0.0221 | 20 | 2.7617 |
| 11.938 | 0.0276 | 25 | 2.7460 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
simshelper/task-1-google-gemma-7b
|
simshelper
| 2025-01-21T11:55:11Z | 412 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:google/gemma-7b",
"base_model:adapter:google/gemma-7b",
"region:us"
] | null | 2025-01-12T21:23:12Z |
---
base_model: google/gemma-7b
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
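No usage details are given, so the following is only a sketch of loading this PEFT adapter together with its `google/gemma-7b` base; access to the gated base model and the `peft` library are assumed, and the prompt is a placeholder.

```python
# Hedged sketch: load the adapter and its base model in one step with PEFT's Auto class.
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

adapter_id = "simshelper/task-1-google-gemma-7b"
model = AutoPeftModelForCausalLM.from_pretrained(adapter_id, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b")

inputs = tokenizer("Write a one-sentence greeting.", return_tensors="pt").to(model.device)  # placeholder prompt
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0], skip_special_tokens=True))
```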
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.13.2
|
mrcuddle/Ministral-Instruct-2410-8B-DPO-RP
|
mrcuddle
| 2025-01-21T11:54:32Z | 173 | 1 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"mistral",
"text-generation",
"autotrain",
"text-generation-inference",
"conversational",
"en",
"dataset:athirdpath/DPO_Pairs-Roleplay-Alpaca-NSFW",
"base_model:mistralai/Ministral-8B-Instruct-2410",
"base_model:finetune:mistralai/Ministral-8B-Instruct-2410",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-01-21T02:03:02Z |
---
tags:
- autotrain
- text-generation-inference
- text-generation
library_name: transformers
base_model: mistralai/Ministral-8B-Instruct-2410
widget:
- messages:
- role: user
content: What is your favorite condiment?
license: other
datasets:
- athirdpath/DPO_Pairs-Roleplay-Alpaca-NSFW
language:
- en
---
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "mrcuddle/Ministral-Instruct-2410-8B-DPO-RP"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
{"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to('cuda'))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
```
|
AmberYifan/Llama-2-7b-sft-peers-pool
|
AmberYifan
| 2025-01-21T11:54:06Z | 7 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"trl",
"dpo",
"conversational",
"arxiv:2305.18290",
"base_model:AmberYifan/llama2-7b-sft-ultrachat-safeRLHF",
"base_model:finetune:AmberYifan/llama2-7b-sft-ultrachat-safeRLHF",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-01-17T09:20:27Z |
---
base_model: AmberYifan/llama2-7b-sft-ultrachat-safeRLHF
library_name: transformers
model_name: Llama-2-7b-sft-peers-pool
tags:
- generated_from_trainer
- trl
- dpo
licence: license
---
# Model Card for Llama-2-7b-sft-peers-pool
This model is a fine-tuned version of [AmberYifan/llama2-7b-sft-ultrachat-safeRLHF](https://huggingface.co/AmberYifan/llama2-7b-sft-ultrachat-safeRLHF).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="AmberYifan/Llama-2-7b-sft-peers-pool", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/yifanwang/huggingface/runs/jbqvd8s4)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
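The exact training script is not included in this card, so the following is only an illustrative sketch of what a DPO run with TRL typically looks like; the preference dataset and hyperparameters are placeholders, and API details may differ slightly across TRL versions.

```python
# Illustrative DPO sketch (placeholder dataset and hyperparameters, not the actual training script).
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base_id = "AmberYifan/llama2-7b-sft-ultrachat-safeRLHF"
model = AutoModelForCausalLM.from_pretrained(base_id)
tokenizer = AutoTokenizer.from_pretrained(base_id)

# Any preference dataset with prompt/chosen/rejected pairs works; this one is a stand-in.
train_dataset = load_dataset("trl-lib/ultrafeedback_binarized", split="train")

args = DPOConfig(output_dir="Llama-2-7b-sft-peers-pool", beta=0.1, per_device_train_batch_size=1)
trainer = DPOTrainer(model=model, args=args, train_dataset=train_dataset, processing_class=tokenizer)
trainer.train()
```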
### Framework versions
- TRL: 0.12.2
- Transformers: 4.46.3
- Pytorch: 2.5.1+cu118
- Datasets: 3.2.0
- Tokenizers: 0.20.3
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
nblinh63/e25aed5f-5c03-4efe-9836-f29232e954f5
|
nblinh63
| 2025-01-21T11:53:50Z | 7 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/SmolLM2-1.7B-Instruct",
"base_model:adapter:unsloth/SmolLM2-1.7B-Instruct",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-21T11:41:18Z |
---
library_name: peft
license: apache-2.0
base_model: unsloth/SmolLM2-1.7B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: e25aed5f-5c03-4efe-9836-f29232e954f5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/SmolLM2-1.7B-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 4ae6450c41448135_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/4ae6450c41448135_train_data.json
type:
field_input: arguments
field_instruction: instruction
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: nblinh63/e25aed5f-5c03-4efe-9836-f29232e954f5
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/4ae6450c41448135_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 82c1cb78-ced4-49a5-85a8-db01b7542ac0
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 82c1cb78-ced4-49a5-85a8-db01b7542ac0
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# e25aed5f-5c03-4efe-9836-f29232e954f5
This model is a fine-tuned version of [unsloth/SmolLM2-1.7B-Instruct](https://huggingface.co/unsloth/SmolLM2-1.7B-Instruct) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6300
## Model description
More information needed
## Intended uses & limitations
More information needed
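Usage is not documented here, so this is only a minimal sketch of loading the LoRA adapter with PEFT and optionally merging it into the `unsloth/SmolLM2-1.7B-Instruct` base for inference; the prompt is a placeholder.

```python
# Hedged sketch: load the adapter, then merge the LoRA weights into the base model for plain inference.
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

adapter_id = "nblinh63/e25aed5f-5c03-4efe-9836-f29232e954f5"
model = AutoPeftModelForCausalLM.from_pretrained(adapter_id, device_map="auto")
model = model.merge_and_unload()  # folds the LoRA deltas into the base weights
tokenizer = AutoTokenizer.from_pretrained("unsloth/SmolLM2-1.7B-Instruct")

inputs = tokenizer("Describe the tool call you would make.", return_tensors="pt").to(model.device)  # placeholder
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```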
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.529 | 0.1644 | 200 | 0.6300 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
memevis/try56
|
memevis
| 2025-01-21T11:53:19Z | 50 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-01-21T11:48:04Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
vertings6/1cbf9b6c-5749-4703-8e27-b35cd2724a2c
|
vertings6
| 2025-01-21T11:53:18Z | 6 | 0 |
peft
|
[
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:Artples/L-MChat-7b",
"base_model:adapter:Artples/L-MChat-7b",
"license:apache-2.0",
"region:us"
] | null | 2025-01-21T11:48:51Z |
---
library_name: peft
license: apache-2.0
base_model: Artples/L-MChat-7b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 1cbf9b6c-5749-4703-8e27-b35cd2724a2c
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Artples/L-MChat-7b
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- df03514e65800f80_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/df03514e65800f80_train_data.json
type:
field_instruction: input
field_output: response
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: 1
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: vertings6/1cbf9b6c-5749-4703-8e27-b35cd2724a2c
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 79GiB
max_steps: 30
micro_batch_size: 4
mlflow_experiment_name: /tmp/df03514e65800f80_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 1024
special_tokens:
pad_token: <|end_of_turn|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: e3508d62-5471-4cdf-8dba-5844f441931a
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: e3508d62-5471-4cdf-8dba-5844f441931a
warmup_steps: 5
weight_decay: 0.001
xformers_attention: true
```
</details><br>
# 1cbf9b6c-5749-4703-8e27-b35cd2724a2c
This model is a fine-tuned version of [Artples/L-MChat-7b](https://huggingface.co/Artples/L-MChat-7b) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: AdamW (PyTorch) with betas=(0.9, 0.999) and epsilon=1e-08, plus optimizer_args: adam_beta1=0.9, adam_beta2=0.95, adam_epsilon=1e-5
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0023 | 1 | nan |
| 0.0 | 0.0113 | 5 | nan |
| 0.0 | 0.0225 | 10 | nan |
| 0.0 | 0.0338 | 15 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
simshelper/task-1-google-gemma-2b
|
simshelper
| 2025-01-21T11:53:15Z | 1,231 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:google/gemma-2b",
"base_model:adapter:google/gemma-2b",
"region:us"
] | null | 2025-01-12T21:08:24Z |
---
base_model: google/gemma-2b
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.13.2
|
ynuwara/SmolVLM-Base-vqav2
|
ynuwara
| 2025-01-21T11:53:15Z | 16 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:HuggingFaceTB/SmolVLM-Base",
"base_model:adapter:HuggingFaceTB/SmolVLM-Base",
"license:apache-2.0",
"region:us"
] | null | 2025-01-21T11:53:13Z |
---
library_name: peft
license: apache-2.0
base_model: HuggingFaceTB/SmolVLM-Base
tags:
- generated_from_trainer
model-index:
- name: SmolVLM-Base-vqav2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SmolVLM-Base-vqav2
This model is a fine-tuned version of [HuggingFaceTB/SmolVLM-Base](https://huggingface.co/HuggingFaceTB/SmolVLM-Base) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
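The card does not document inference, so the snippet below is only a sketch based on how the `HuggingFaceTB/SmolVLM-Base` base model is normally used with `transformers` and `peft`; the image path, prompt format, and generation settings are assumptions.

```python
# Hedged sketch: attach this PEFT adapter to the SmolVLM base model for visual question answering.
import torch
from PIL import Image
from peft import PeftModel
from transformers import AutoProcessor, AutoModelForVision2Seq

base_id = "HuggingFaceTB/SmolVLM-Base"
adapter_id = "ynuwara/SmolVLM-Base-vqav2"

processor = AutoProcessor.from_pretrained(base_id)
model = AutoModelForVision2Seq.from_pretrained(base_id, torch_dtype=torch.bfloat16, device_map="auto")
model = PeftModel.from_pretrained(model, adapter_id)

image = Image.open("example.jpg")  # placeholder image path
prompt = "<image>Question: What is shown in the picture? Answer:"  # assumed prompt format
inputs = processor(text=prompt, images=[image], return_tensors="pt").to(model.device)
print(processor.batch_decode(model.generate(**inputs, max_new_tokens=20), skip_special_tokens=True)[0])
```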
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: paged AdamW (8-bit) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- num_epochs: 1
### Training results
### Framework versions
- PEFT 0.14.0
- Transformers 4.47.1
- Pytorch 2.5.1+cu121
- Datasets 3.2.0
- Tokenizers 0.21.0
|
memevis/try55
|
memevis
| 2025-01-21T11:53:11Z | 47 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-01-21T11:47:15Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
memevis/try57
|
memevis
| 2025-01-21T11:52:18Z | 17 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-01-21T11:47:14Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
lesso10/0a4c2570-7b1e-4bbc-b25e-573f2750a97a
|
lesso10
| 2025-01-21T11:51:48Z | 6 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/tinyllama",
"base_model:adapter:unsloth/tinyllama",
"license:apache-2.0",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-21T11:32:46Z |
---
library_name: peft
license: apache-2.0
base_model: unsloth/tinyllama
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 0a4c2570-7b1e-4bbc-b25e-573f2750a97a
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/tinyllama
bf16: auto
chat_template: llama3
datasets:
- data_files:
- 543802598c2bc8a9_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/543802598c2bc8a9_train_data.json
type:
field_input: cot
field_instruction: query
field_output: response
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: 1
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: true
gradient_checkpointing: true
group_by_length: false
hub_model_id: lesso10/0a4c2570-7b1e-4bbc-b25e-573f2750a97a
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_steps: 25
micro_batch_size: 2
mlflow_experiment_name: /tmp/543802598c2bc8a9_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 5ade5524-ba32-4677-ac63-2a87ad5e9260
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 5ade5524-ba32-4677-ac63-2a87ad5e9260
warmup_steps: 10
weight_decay: 0.01
xformers_attention: null
```
</details><br>
# 0a4c2570-7b1e-4bbc-b25e-573f2750a97a
This model is a fine-tuned version of [unsloth/tinyllama](https://huggingface.co/unsloth/tinyllama) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0000 | 1 | nan |
| 0.0 | 0.0002 | 5 | nan |
| 0.0 | 0.0004 | 10 | nan |
| 0.0 | 0.0005 | 15 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
| mini1013/master_cate_sl19 | mini1013 | 2025-01-21T11:51:22Z | 1,256 | 0 | setfit | ["setfit", "safetensors", "roberta", "sentence-transformers", "text-classification", "generated_from_setfit_trainer", "arxiv:2209.11055", "base_model:mini1013/master_domain", "base_model:finetune:mini1013/master_domain", "model-index", "region:us"] | text-classification | 2025-01-21T11:51:00Z |
---
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
widget:
- text: ๋ฐ์ด์๋ง ๋ฐฉํ ๋ณด์จ์๋ง ๋ฑ์ฐ ๋์ ์คํค ์ค๋ธ์ฐ๋ณด๋ ์ค์ผ์ดํธ ์ผ์ธ์์ ์คํฌ์ธ /๋ ์ >์คํค/๋ณด๋>์คํค/๋ณด๋๋ฐฉํ์ฉํ>์๋ง
- text: ๋ฌดํฌ ์ ๋ฌดํฌ ํ ๋ก ๋ฐํฌ ๋คํฌ๋ค์ด๋น 517413203ZB ์คํฌ์ธ /๋ ์ >์คํค/๋ณด๋>์ค๋ธ๋ณด๋์ฅ๋น>๋ฐํฌ
- text: ์คํค๋ณต ์ฑ์ธ ์์ผ ์์ ์ฌ์ฑ์ฉ JACKET ์คํค์์ผ ๋จ์ฑ ์คํฌ์ธ /๋ ์ >์คํค/๋ณด๋>์คํค๋ณต>์์
- text: Toko Edge Tuner Pro ์ค๋ธ์ฐ๋ณด๋ ์ฃ์ง ํ๋ ์ปทํ ์คํฌ์ธ /๋ ์ >์คํค/๋ณด๋>์คํค/๋ณด๋์ฉํ>๋ณด์์ฅ๋น
- text: ํฌ๋ฆฌ์ ์ฃผ๋์ด ๊ณ ๊ธ ์นด์ด๋ก์ค ๋ฌด๊ดํผํ๋ธ๋ ๋ณด๋๊ณ ๊ธ ์คํฌ์ธ ๊ณ ๊ธ ์คํฌ์ธ /๋ ์ >์คํค/๋ณด๋>์คํค/๋ณด๋์ฉํ>๊ณ ๊ธ
metrics:
- accuracy
pipeline_tag: text-classification
library_name: setfit
inference: true
base_model: mini1013/master_domain
model-index:
- name: SetFit with mini1013/master_domain
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: Unknown
type: unknown
split: test
metrics:
- type: accuracy
value: 1.0
name: Accuracy
---
# SetFit with mini1013/master_domain
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [mini1013/master_domain](https://huggingface.co/mini1013/master_domain) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer (a minimal training sketch follows below).
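A hypothetical sketch of this two-step procedure with the `setfit` API is shown below; the tiny inline dataset is a placeholder, not the data this model was trained on.

```python
# Hypothetical two-step SetFit training sketch (placeholder data, not the real training set).
from datasets import Dataset
from setfit import SetFitModel, Trainer, TrainingArguments

train_dataset = Dataset.from_dict({
    "text": ["product title about ski goggles", "product title about snowboard boots"],
    "label": [0, 1],
})

model = SetFitModel.from_pretrained("mini1013/master_domain")  # ST body + LogisticRegression head
args = TrainingArguments(batch_size=16, num_epochs=1)

trainer = Trainer(model=model, args=args, train_dataset=train_dataset)
trainer.train()  # 1) contrastive fine-tuning of the body, 2) fitting the classification head
model.save_pretrained("my-setfit-model")
```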
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [mini1013/master_domain](https://huggingface.co/mini1013/master_domain)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 512 tokens
- **Number of Classes:** 6 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 2.0 | <ul><li>'์ถฉ์ ์ ์ด์ ์๋ง ๋ฐ์ด ์คํค์ฅ ๋ณด๋ ์คํค ๋ฑ์ฐ ์คํฌ์ธ /๋ ์ >์คํค/๋ณด๋>์คํค/๋ณด๋๋ฐฉํ์ฉํ>์๋ง'</li><li>'์ฐ์ฃผ๋ง์ผ ๊ฒจ์ธ ๋ฐฉํ ๋ง์คํฌ ๋ณด์จ ๋ฑ์ฐ ๊ณจํ ๋ฐ๋ปํ ์์ ๊ฑฐ ๊ท๋ฎ๊ฐ ๊ท๋ง๊ฐ ๋ง์คํฌ ์คํฌ์ธ /๋ ์ >์คํค/๋ณด๋>์คํค/๋ณด๋๋ฐฉํ์ฉํ>๊ท๋ง๊ฐ'</li><li>'๋ค์ด๋ํ ํด๋๋ ์ค๋ชฐ๋ก๊ณ ๋น๋ Dark ์คํฌ์ธ /๋ ์ >์คํค/๋ณด๋>์คํค/๋ณด๋๋ฐฉํ์ฉํ>๋น๋'</li></ul> |
| 0.0   | <ul><li>'๋ฐฉํ ๋ฐฉ์ ์ฌ์ฑ ์ค๋ธ์ฐ ๋ณด๋ ํ๋ ์ด ์ฌ์ ๋ณต ์ด์คํฌ ์ ํผ ์ ํ ์ํธ ์ํธ ์คํค ๊ฐํ ์คํฌ์ธ /๋ ์ >์คํค/๋ณด๋>๋ณด๋๋ณต>์ฌํท'</li><li>'2023 ์ฌ์ฑ์ฉ ์ํผ์ค ์คํค ์ํธ ๊ฒจ์ธ ์ผ์ธ ์คํฌ์ธ ๋ฐฉํ ๋ฐฉ์ ๋ณด์จ ์ค๋ธ๋ณด๋ ์ ํ์ํธ ์คํฌ์ธ /๋ ์ >์คํค/๋ณด๋>๋ณด๋๋ณต>์ํ์ธํธ'</li><li>'์ฌ์ฑ์ฉ ์ค๋ธ์ฐ๋ณด๋ ์ ํ์ํธ ์ฌ์ฑ ์ผ์ฒดํ ์คํค๋ณต ๋ฐฉ -๋จ์ฑ์ฉ ๋ฏผํธ ๊ทธ๋ฆฐ ์ํธ ์คํฌ์ธ /๋ ์ >์คํค/๋ณด๋>๋ณด๋๋ณต>์ํ์ธํธ'</li></ul> |
| 5.0   | <ul><li>'2223 ํค๋ ์คํค PURE JOY ์ฌ์ฑ์ฉ ์คํฌ์ธ /๋ ์ >์คํค/๋ณด๋>์คํค์ฅ๋น>ํ๋ ์ดํธ'</li><li>'๋ฏธ๋ ์คํค ๋ถ์ธ ์ค์ผ์ดํธ ์ฐ๋งค ์ค๋ธ์ฐ ์๋ถ์ธ ์คํฌ์ธ /๋ ์ >์คํค/๋ณด๋>์คํค์ฅ๋น>๋ถ์ธ '</li><li>'PHOENIX ํผ๋์ค ์ฃผ๋์ด ์คํค ํ๋ณต 2223 PHENIX KOREA JR TEAM RD ์คํฌ์ธ /๋ ์ >์คํค/๋ณด๋>์คํค์ฅ๋น>ํ๋ ์ดํธ'</li></ul> |
| 4.0 | <ul><li>'์คํค๋ณต ์ธํธ ์ฌ์ฑ ๋จ์ฑ ๋ฐฉํ ๋ฐฉํ ์คํฌ์ธ /๋ ์ >์คํค/๋ณด๋>์คํค๋ณต>์ํ์ธํธ'</li><li>'์คํ์ด๋ ๋จ์ฑ ๋ณด๋ฅด๋ฏธ์ค GTX ์คํค ํฌ์ธ SPFWCISP401MBLK LE1216929158 ์คํฌ์ธ /๋ ์ >์คํค/๋ณด๋>์คํค๋ณต>ํ์'</li><li>'์นด๋ฅดํฌ์ค ์คํค๋ฐ์ง ๋จ์ ๊ฒจ์ธ 2521013 ์คํฌ์ธ /๋ ์ >์คํค/๋ณด๋>์คํค๋ณต>ํ์'</li></ul> |
| 3.0   | <ul><li>'XCMAN 4๊ฒน์ฝ ์คํฐ๋ ๋์ 7 87์ธ์น ์๋ฃจ๋ฏธ๋ ์ค๋ธ์ฐ๋ณด๋ ์คํฐํ ํจ๋ 9pcs ์คํฌ์ธ /๋ ์ >์คํค/๋ณด๋>์คํค/๋ณด๋์ฉํ>์คํฐ์ปค์ฉํ'</li><li>'Thule RoundTrip ์คํค ์ค๋ธ๋ณด๋ ๋ํ ๋ฐฑ 90L ๋คํฌ ์ฌ๋ ์ดํธ 142322 ์คํฌ์ธ /๋ ์ >์คํค/๋ณด๋>์คํค/๋ณด๋์ฉํ>๋ณด๋๊ฐ๋ฐฉ'</li><li>'ToeJamR ์ค๋ธ์ฐ๋ณด๋ ์คํฐํ ํจ๋ ๋๋น ์คํฌ์ธ /๋ ์ >์คํค/๋ณด๋>์คํค/๋ณด๋์ฉํ>์คํฐ์ปค์ฉํ'</li></ul> |
| 1.0   | <ul><li>'์ค๋ธ์ฐ ์คํค ์ฌ์ฑ ๋ถ์ธ ๋ณด๋ ๋กฑ ํธ ๋ฐ๋ฏํ ์ค๋ธ๋ณด๋ ์คํฌ์ธ /๋ ์ >์คํค/๋ณด๋>์ค๋ธ๋ณด๋์ฅ๋น>๋ถ์ธ '</li><li>'๋์ดํธ๋ก ํ ๋ฐ์ธ๋ฉ 2223 NITRO Team ์คํฌ์ธ /๋ ์ >์คํค/๋ณด๋>์ค๋ธ๋ณด๋์ฅ๋น>๋ฐ์ธ๋ฉ'</li><li>'ํํฐ WOMEN ์ธํธ๋ ํผ๋ ๋ฆฌํ๋ ํฐ๋ธ ์นด๋ชจ ์ ์ค๋ธ์ฐ๋ถ์ธ - ํจํด๊ทธ๋ ์ด WFS1004PCTPTG ์คํฌ์ธ /๋ ์ >์คํค/๋ณด๋>์ค๋ธ๋ณด๋์ฅ๋น>๋ถ์ธ '</li></ul> |
## Evaluation
### Metrics
| Label | Accuracy |
|:--------|:---------|
| **all** | 1.0 |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the ๐ค Hub
model = SetFitModel.from_pretrained("mini1013/master_cate_sl19")
# Run inference
preds = model("์คํค๋ณต ์ฑ์ธ ์์ผ ์์ ์ฌ์ฑ์ฉ JACKET ์คํค์์ผ ๋จ์ฑ ์คํฌ์ธ /๋ ์ >์คํค/๋ณด๋>์คํค๋ณต>์์")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:-------|:----|
| Word count | 4 | 9.4619 | 18 |
| Label | Training Sample Count |
|:------|:----------------------|
| 0.0 | 70 |
| 1.0 | 70 |
| 2.0 | 70 |
| 3.0 | 70 |
| 4.0 | 70 |
| 5.0 | 70 |
### Training Hyperparameters
- batch_size: (256, 256)
- num_epochs: (30, 30)
- max_steps: -1
- sampling_strategy: oversampling
- num_iterations: 50
- body_learning_rate: (2e-05, 1e-05)
- head_learning_rate: 0.01
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- l2_weight: 0.01
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
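For reference, a hedged sketch of how the values listed above map onto `setfit.TrainingArguments` (argument names taken directly from the list; untested):

```python
# Hedged reconstruction of the training arguments listed above (SetFit 1.1.0-style API).
from setfit import TrainingArguments

args = TrainingArguments(
    batch_size=(256, 256),            # (embedding phase, classifier phase)
    num_epochs=(30, 30),
    max_steps=-1,
    sampling_strategy="oversampling",
    num_iterations=50,
    body_learning_rate=(2e-05, 1e-05),
    head_learning_rate=0.01,
    margin=0.25,
    end_to_end=False,
    use_amp=False,
    warmup_proportion=0.1,
    l2_weight=0.01,
    seed=42,
    load_best_model_at_end=False,
)
```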
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:-------:|:----:|:-------------:|:---------------:|
| 0.0120 | 1 | 0.4926 | - |
| 0.6024 | 50 | 0.497 | - |
| 1.2048 | 100 | 0.5003 | - |
| 1.8072 | 150 | 0.1918 | - |
| 2.4096 | 200 | 0.0218 | - |
| 3.0120 | 250 | 0.0004 | - |
| 3.6145 | 300 | 0.0003 | - |
| 4.2169 | 350 | 0.0001 | - |
| 4.8193 | 400 | 0.0001 | - |
| 5.4217 | 450 | 0.0 | - |
| 6.0241 | 500 | 0.0 | - |
| 6.6265 | 550 | 0.0 | - |
| 7.2289 | 600 | 0.0 | - |
| 7.8313 | 650 | 0.0 | - |
| 8.4337 | 700 | 0.0 | - |
| 9.0361 | 750 | 0.0 | - |
| 9.6386 | 800 | 0.0 | - |
| 10.2410 | 850 | 0.0 | - |
| 10.8434 | 900 | 0.0 | - |
| 11.4458 | 950 | 0.0 | - |
| 12.0482 | 1000 | 0.0 | - |
| 12.6506 | 1050 | 0.0001 | - |
| 13.2530 | 1100 | 0.0 | - |
| 13.8554 | 1150 | 0.0 | - |
| 14.4578 | 1200 | 0.0 | - |
| 15.0602 | 1250 | 0.0 | - |
| 15.6627 | 1300 | 0.0 | - |
| 16.2651 | 1350 | 0.0 | - |
| 16.8675 | 1400 | 0.0 | - |
| 17.4699 | 1450 | 0.0 | - |
| 18.0723 | 1500 | 0.0 | - |
| 18.6747 | 1550 | 0.0 | - |
| 19.2771 | 1600 | 0.0 | - |
| 19.8795 | 1650 | 0.0 | - |
| 20.4819 | 1700 | 0.0 | - |
| 21.0843 | 1750 | 0.0 | - |
| 21.6867 | 1800 | 0.0 | - |
| 22.2892 | 1850 | 0.0 | - |
| 22.8916 | 1900 | 0.0 | - |
| 23.4940 | 1950 | 0.0 | - |
| 24.0964 | 2000 | 0.0 | - |
| 24.6988 | 2050 | 0.0 | - |
| 25.3012 | 2100 | 0.0 | - |
| 25.9036 | 2150 | 0.0 | - |
| 26.5060 | 2200 | 0.0 | - |
| 27.1084 | 2250 | 0.0 | - |
| 27.7108 | 2300 | 0.0 | - |
| 28.3133 | 2350 | 0.0 | - |
| 28.9157 | 2400 | 0.0 | - |
| 29.5181 | 2450 | 0.0 | - |
### Framework Versions
- Python: 3.10.12
- SetFit: 1.1.0
- Sentence Transformers: 3.3.1
- Transformers: 4.44.2
- PyTorch: 2.2.0a0+81ea7a4
- Datasets: 3.2.0
- Tokenizers: 0.19.1
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
| lesso09/0f79296c-b754-4989-b3bc-489ace006ef1 | lesso09 | 2025-01-21T11:50:36Z | 8 | 0 | peft | ["peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:unsloth/llama-2-7b-chat", "base_model:adapter:unsloth/llama-2-7b-chat", "license:apache-2.0", "8-bit", "bitsandbytes", "region:us"] | null | 2025-01-21T08:34:01Z |
---
library_name: peft
license: apache-2.0
base_model: unsloth/llama-2-7b-chat
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 0f79296c-b754-4989-b3bc-489ace006ef1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/llama-2-7b-chat
bf16: true
chat_template: llama3
datasets:
- data_files:
- 124bc05ddbf5ee81_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/124bc05ddbf5ee81_train_data.json
type:
field_instruction: docstring
field_output: summary
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: 2
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
flash_attention: false
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: lesso09/0f79296c-b754-4989-b3bc-489ace006ef1
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 25
micro_batch_size: 2
mlflow_experiment_name: /tmp/124bc05ddbf5ee81_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 5fe9995d-0a95-46fa-b89c-25f97cbb6eb6
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 5fe9995d-0a95-46fa-b89c-25f97cbb6eb6
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 0f79296c-b754-4989-b3bc-489ace006ef1
This model is a fine-tuned version of [unsloth/llama-2-7b-chat](https://huggingface.co/unsloth/llama-2-7b-chat) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0000 | 1 | nan |
| 0.0 | 0.0001 | 5 | nan |
| 0.0 | 0.0002 | 10 | nan |
| 0.0 | 0.0003 | 15 | nan |
| 0.0 | 0.0003 | 20 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
| CharlesLi/mistral_cot_simplest_code_math_4_3_epoch_full | CharlesLi | 2025-01-21T11:50:33Z | 5 | 0 | transformers | ["transformers", "tensorboard", "safetensors", "mistral", "text-generation", "alignment-handbook", "trl", "sft", "generated_from_trainer", "conversational", "dataset:generator", "base_model:mistralai/Mistral-7B-Instruct-v0.1", "base_model:finetune:mistralai/Mistral-7B-Instruct-v0.1", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-01-21T11:28:42Z |
---
library_name: transformers
license: apache-2.0
base_model: mistralai/Mistral-7B-Instruct-v0.1
tags:
- alignment-handbook
- trl
- sft
- generated_from_trainer
datasets:
- generator
model-index:
- name: mistral_cot_simplest_code_math_4_3_epoch_full
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistral_cot_simplest_code_math_4_3_epoch_full
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6232
## Model description
More information needed
## Intended uses & limitations
More information needed
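In the absence of author guidance, a hedged chat-style generation sketch (assuming the tokenizer carries the Mistral-Instruct chat template) might look like this; the user message is a placeholder.

```python
# Hedged usage sketch: chat-style generation with the fine-tuned model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "CharlesLi/mistral_cot_simplest_code_math_4_3_epoch_full"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [{"role": "user", "content": "Compute 12 * 17 and explain each step."}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
output = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```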
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.4526 | 1.9802 | 100 | 0.5809 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.0
- Tokenizers 0.19.1
| nhung03/5048b92e-4369-4158-938b-4347f8451cde | nhung03 | 2025-01-21T11:50:31Z | 6 | 0 | peft | ["peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:defog/sqlcoder-7b-2", "base_model:adapter:defog/sqlcoder-7b-2", "license:cc-by-sa-4.0", "8-bit", "bitsandbytes", "region:us"] | null | 2025-01-21T11:35:38Z |
---
library_name: peft
license: cc-by-sa-4.0
base_model: defog/sqlcoder-7b-2
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 5048b92e-4369-4158-938b-4347f8451cde
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: defog/sqlcoder-7b-2
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- fdd56d09ce656747_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/fdd56d09ce656747_train_data.json
type:
field_instruction: INSTRUCTION
field_output: RESPONSE
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: nhung03/5048b92e-4369-4158-938b-4347f8451cde
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/fdd56d09ce656747_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: fecef9ac-e0fb-4174-87a6-ec0f3fcd1777
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: fecef9ac-e0fb-4174-87a6-ec0f3fcd1777
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 5048b92e-4369-4158-938b-4347f8451cde
This model is a fine-tuned version of [defog/sqlcoder-7b-2](https://huggingface.co/defog/sqlcoder-7b-2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5537
## Model description
More information needed
## Intended uses & limitations
More information needed
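Until the author documents intended use, one plausible workflow is to attach the adapter to `defog/sqlcoder-7b-2` and, if desired, merge it for standalone deployment. The sketch below is an assumption, not an official recipe, and the output path is a placeholder.

```python
# Hedged sketch: attach the LoRA adapter to the base SQL model and merge it.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("defog/sqlcoder-7b-2")
tokenizer = AutoTokenizer.from_pretrained("defog/sqlcoder-7b-2")
model = PeftModel.from_pretrained(base, "nhung03/5048b92e-4369-4158-938b-4347f8451cde")

merged = model.merge_and_unload()               # fold the LoRA weights into the base model
merged.save_pretrained("sqlcoder-7b-2-merged")  # placeholder local output path
tokenizer.save_pretrained("sqlcoder-7b-2-merged")
```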
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.5927 | 0.1960 | 200 | 0.5537 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
| gavrilstep/02378d6a-b7cf-4b37-9aa8-c35ba4ba1172 | gavrilstep | 2025-01-21T11:49:53Z | 10 | 0 | peft | ["peft", "safetensors", "gemma", "axolotl", "generated_from_trainer", "base_model:unsloth/gemma-2b-it", "base_model:adapter:unsloth/gemma-2b-it", "license:apache-2.0", "4-bit", "bitsandbytes", "region:us"] | null | 2025-01-21T11:44:12Z |
---
library_name: peft
license: apache-2.0
base_model: unsloth/gemma-2b-it
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 02378d6a-b7cf-4b37-9aa8-c35ba4ba1172
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/gemma-2b-it
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 54704e0639ee0f16_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/54704e0639ee0f16_train_data.json
type:
field_input: statement
field_instruction: queries
field_output: paraphrased_statement
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: null
eval_max_new_tokens: 256
eval_steps: 5
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: false
hub_model_id: gavrilstep/02378d6a-b7cf-4b37-9aa8-c35ba4ba1172
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 75GiB
max_steps: 40
micro_batch_size: 2
mlflow_experiment_name: /tmp/54704e0639ee0f16_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 273f8157-2d8b-40fb-adae-f4a501d93f8b
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 273f8157-2d8b-40fb-adae-f4a501d93f8b
warmup_steps: 10
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 02378d6a-b7cf-4b37-9aa8-c35ba4ba1172
This model is a fine-tuned version of [unsloth/gemma-2b-it](https://huggingface.co/unsloth/gemma-2b-it) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5929
## Model description
More information needed
## Intended uses & limitations
More information needed
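Mirroring the `load_in_4bit: true` setting in the config above, a hedged loading sketch with 4-bit quantization might be:

```python
# Hedged sketch: load the gemma-2b-it base in 4-bit and attach this adapter.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

bnb = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)
base = AutoModelForCausalLM.from_pretrained("unsloth/gemma-2b-it", quantization_config=bnb, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("unsloth/gemma-2b-it")
model = PeftModel.from_pretrained(base, "gavrilstep/02378d6a-b7cf-4b37-9aa8-c35ba4ba1172")
```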
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 40
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0014 | 1 | 3.4282 |
| 3.3332 | 0.0068 | 5 | 2.8551 |
| 2.3572 | 0.0136 | 10 | 2.0829 |
| 1.8508 | 0.0205 | 15 | 1.8183 |
| 1.8325 | 0.0273 | 20 | 1.6728 |
| 1.773 | 0.0341 | 25 | 1.6280 |
| 1.6409 | 0.0409 | 30 | 1.6051 |
| 1.7932 | 0.0478 | 35 | 1.5948 |
| 1.5856 | 0.0546 | 40 | 1.5929 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
| ClarenceDan/b6422ae3-6e75-4f8a-9f23-67f0c343008d | ClarenceDan | 2025-01-21T11:49:03Z | 8 | 0 | peft | ["peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:oopsung/llama2-7b-n-ox-test-v1", "base_model:adapter:oopsung/llama2-7b-n-ox-test-v1", "region:us"] | null | 2025-01-21T11:43:00Z |
---
library_name: peft
base_model: oopsung/llama2-7b-n-ox-test-v1
tags:
- axolotl
- generated_from_trainer
model-index:
- name: b6422ae3-6e75-4f8a-9f23-67f0c343008d
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: oopsung/llama2-7b-n-ox-test-v1
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- dddb0489dc663e1a_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/dddb0489dc663e1a_train_data.json
type:
field_input: Context
field_instruction: Question
field_output: Answers
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: ClarenceDan/b6422ae3-6e75-4f8a-9f23-67f0c343008d
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/dddb0489dc663e1a_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: bbd202cf-ffeb-42f5-82b2-0c60d893aeab
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: bbd202cf-ffeb-42f5-82b2-0c60d893aeab
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# b6422ae3-6e75-4f8a-9f23-67f0c343008d
This model is a fine-tuned version of [oopsung/llama2-7b-n-ox-test-v1](https://huggingface.co/oopsung/llama2-7b-n-ox-test-v1) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0003 | 1 | nan |
| 0.0 | 0.0008 | 3 | nan |
| 0.0 | 0.0017 | 6 | nan |
| 0.0 | 0.0025 | 9 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|