| modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (list) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string) |
|---|---|---|---|---|---|---|---|---|---|
lesso/aea67b18-5ac1-4d8c-b41b-6914dd35cd0e | lesso | 2025-02-04T03:23:33Z | 8 | 0 | peft | ["peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "custom_code", "base_model:NousResearch/Yarn-Solar-10b-64k", "base_model:adapter:NousResearch/Yarn-Solar-10b-64k", "license:apache-2.0", "region:us"] | null | 2025-02-04T03:18:24Z |
---
library_name: peft
license: apache-2.0
base_model: NousResearch/Yarn-Solar-10b-64k
tags:
- axolotl
- generated_from_trainer
model-index:
- name: aea67b18-5ac1-4d8c-b41b-6914dd35cd0e
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/Yarn-Solar-10b-64k
bf16: true
chat_template: llama3
data_processes: 16
dataset_prepared_path: null
datasets:
- data_files:
  - 9bd7b6044d104eec_train_data.json
  ds_type: json
  format: custom
  path: /workspace/input_data/9bd7b6044d104eec_train_data.json
  type:
    field_input: ''
    field_instruction: input_text
    field_output: output
    format: '{instruction}'
    no_input_format: '{instruction}'
    system_format: '{system}'
    system_prompt: ''
debug: null
deepspeed: null
do_eval: true
early_stopping_patience: 5
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: lesso/aea67b18-5ac1-4d8c-b41b-6914dd35cd0e
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.000101
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: linear
max_grad_norm: 1.0
max_steps: 200
micro_batch_size: 8
mlflow_experiment_name: /tmp/god06/9bd7b6044d104eec_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
  adam_beta1: 0.9
  adam_beta2: 0.95
  adam_epsilon: 1e-5
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
sequence_len: 1024
special_tokens:
  pad_token: </s>
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: e5a6e46b-b77f-4d50-a625-e1eb21e1df7c
wandb_project: ab-god06
wandb_run: your_name
wandb_runid: e5a6e46b-b77f-4d50-a625-e1eb21e1df7c
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# aea67b18-5ac1-4d8c-b41b-6914dd35cd0e
This model is a fine-tuned version of [NousResearch/Yarn-Solar-10b-64k](https://huggingface.co/NousResearch/Yarn-Solar-10b-64k) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1915
## Model description
More information needed
## Intended uses & limitations
More information needed
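No usage snippet is provided; below is a minimal sketch (assumed usage, not from the author) of attaching this LoRA adapter to its base model with PEFT. The base is loaded with `trust_remote_code=True` because the config above enables custom code.
```python
# Minimal sketch (assumed usage): attach this LoRA adapter to its base model with PEFT.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "NousResearch/Yarn-Solar-10b-64k"
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, trust_remote_code=True  # config above trains in bf16 with custom code
)
tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
model = PeftModel.from_pretrained(base, "lesso/aea67b18-5ac1-4d8c-b41b-6914dd35cd0e")
```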
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000101
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- total_eval_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10
- training_steps: 31
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 7.1135 | 0.0976 | 1 | 2.1915 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
nblinh/5c964eb4-436b-4018-b81a-1cce46ed0d6a | nblinh | 2025-02-04T03:21:11Z | 10 | 0 | peft | ["peft", "safetensors", "qwen2", "axolotl", "generated_from_trainer", "base_model:unsloth/Qwen2.5-0.5B-Instruct", "base_model:adapter:unsloth/Qwen2.5-0.5B-Instruct", "license:apache-2.0", "8-bit", "bitsandbytes", "region:us"] | null | 2025-02-04T02:47:01Z |
---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2.5-0.5B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 5c964eb4-436b-4018-b81a-1cce46ed0d6a
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2.5-0.5B-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
  - 015a4bbf3a6316ca_train_data.json
  ds_type: json
  format: custom
  path: /workspace/input_data/015a4bbf3a6316ca_train_data.json
  type:
    field_instruction: user
    field_output: chip2
    format: '{instruction}'
    no_input_format: '{instruction}'
    system_format: '{system}'
    system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: nblinh/5c964eb4-436b-4018-b81a-1cce46ed0d6a
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/015a4bbf3a6316ca_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: f749570f-dd98-4c70-b97b-c49b1248c0d4
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: f749570f-dd98-4c70-b97b-c49b1248c0d4
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 5c964eb4-436b-4018-b81a-1cce46ed0d6a
This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5983
## Model description
More information needed
## Intended uses & limitations
More information needed
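No usage snippet is provided; a minimal sketch (assumed usage) is shown below. Because the config above sets `load_in_8bit: true`, the base is loaded in 8-bit via bitsandbytes here as well.
```python
# Minimal sketch (assumed usage): load the Qwen2.5 base in 8-bit and apply the LoRA adapter.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

base_id = "unsloth/Qwen2.5-0.5B-Instruct"
base = AutoModelForCausalLM.from_pretrained(
    base_id,
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),  # mirrors load_in_8bit in the config above
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = PeftModel.from_pretrained(base, "nblinh/5c964eb4-436b-4018-b81a-1cce46ed0d6a")
```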
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.5019 | 0.0080 | 200 | 1.5983 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
arcwarden46/e0a572e9-7ab6-49d0-969b-9d8320a49c38 | arcwarden46 | 2025-02-04T03:20:32Z | 6 | 0 | peft | ["peft", "safetensors", "mistral", "axolotl", "generated_from_trainer", "base_model:unsloth/OpenHermes-2.5-Mistral-7B", "base_model:adapter:unsloth/OpenHermes-2.5-Mistral-7B", "license:apache-2.0", "region:us"] | null | 2025-02-04T01:53:59Z |
---
library_name: peft
license: apache-2.0
base_model: unsloth/OpenHermes-2.5-Mistral-7B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: e0a572e9-7ab6-49d0-969b-9d8320a49c38
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/OpenHermes-2.5-Mistral-7B
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
  - 9c4378b501f71de8_train_data.json
  ds_type: json
  format: custom
  path: /workspace/input_data/9c4378b501f71de8_train_data.json
  type:
    field_input: prompt
    field_instruction: reason1
    field_output: reason2
    format: '{instruction} {input}'
    no_input_format: '{instruction}'
    system_format: '{system}'
    system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: 5
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: arcwarden46/e0a572e9-7ab6-49d0-969b-9d8320a49c38
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
  0: 75GB
max_steps: 200
micro_batch_size: 8
mlflow_experiment_name: /tmp/9c4378b501f71de8_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
  adam_beta1: 0.9
  adam_beta2: 0.95
  adam_epsilon: 1e-5
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 432ed5ae-dbea-46a8-8795-45618fe0369a
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 432ed5ae-dbea-46a8-8795-45618fe0369a
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# e0a572e9-7ab6-49d0-969b-9d8320a49c38
This model is a fine-tuned version of [unsloth/OpenHermes-2.5-Mistral-7B](https://huggingface.co/unsloth/OpenHermes-2.5-Mistral-7B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6418
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 5.7904 | 0.0002 | 1 | 1.5057 |
| 2.8028 | 0.0088 | 50 | 0.8330 |
| 2.416 | 0.0177 | 100 | 0.7194 |
| 2.454 | 0.0265 | 150 | 0.6717 |
| 2.6065 | 0.0354 | 200 | 0.6418 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
lesso/777027f2-b2cd-47ad-ae63-9199147afdc9 | lesso | 2025-02-04T03:15:32Z | 8 | 0 | peft | ["peft", "safetensors", "qwen2", "axolotl", "generated_from_trainer", "base_model:unsloth/Qwen2.5-0.5B-Instruct", "base_model:adapter:unsloth/Qwen2.5-0.5B-Instruct", "license:apache-2.0", "region:us"] | null | 2025-02-04T02:47:07Z |
---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2.5-0.5B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 777027f2-b2cd-47ad-ae63-9199147afdc9
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2.5-0.5B-Instruct
bf16: true
chat_template: llama3
data_processes: 16
dataset_prepared_path: null
datasets:
- data_files:
  - 015a4bbf3a6316ca_train_data.json
  ds_type: json
  format: custom
  path: /workspace/input_data/015a4bbf3a6316ca_train_data.json
  type:
    field_instruction: user
    field_output: chip2
    format: '{instruction}'
    no_input_format: '{instruction}'
    system_format: '{system}'
    system_prompt: ''
debug: null
deepspeed: null
do_eval: true
early_stopping_patience: 5
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 2
gradient_checkpointing: true
group_by_length: true
hub_model_id: lesso/777027f2-b2cd-47ad-ae63-9199147afdc9
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: linear
max_grad_norm: 1.0
max_steps: 200
micro_batch_size: 4
mlflow_experiment_name: /tmp/god01/015a4bbf3a6316ca_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
  adam_beta1: 0.9
  adam_beta2: 0.95
  adam_epsilon: 1e-5
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: f749570f-dd98-4c70-b97b-c49b1248c0d4
wandb_project: ab-god01
wandb_run: your_name
wandb_runid: f749570f-dd98-4c70-b97b-c49b1248c0d4
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 777027f2-b2cd-47ad-ae63-9199147afdc9
This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4753
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.1778 | 0.0000 | 1 | 2.0653 |
| 1.1488 | 0.0020 | 50 | 1.7117 |
| 1.0782 | 0.0040 | 100 | 1.6105 |
| 1.2952 | 0.0060 | 150 | 1.5184 |
| 1.3547 | 0.0080 | 200 | 1.4753 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
gchen019/textual_inversion_dog_weights | gchen019 | 2025-02-04T03:08:38Z | 34 | 0 | diffusers | ["diffusers", "tensorboard", "safetensors", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "textual_inversion", "diffusers-training", "base_model:stable-diffusion-v1-5/stable-diffusion-v1-5", "base_model:adapter:stable-diffusion-v1-5/stable-diffusion-v1-5", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us"] | text-to-image | 2025-02-04T02:25:35Z |
---
base_model: sd-legacy/stable-diffusion-v1-5
library_name: diffusers
license: creativeml-openrail-m
inference: true
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- textual_inversion
- diffusers-training
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# Textual inversion text2image fine-tuning - gchen019/textual_inversion_dog_weights
These are textual inversion adaptation weights for sd-legacy/stable-diffusion-v1-5. You can find some example images below.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
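Pending the TODO above, here is a minimal sketch of the usual diffusers textual-inversion workflow; the placeholder token in the prompt is an assumption, so check the repository for the token this run actually learned.
```python
# Minimal sketch (assumed usage): load the base pipeline and the learned textual-inversion embedding.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "sd-legacy/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_textual_inversion("gchen019/textual_inversion_dog_weights")
# "<dog>" is a guessed placeholder token; replace it with the token this repo was trained with.
image = pipe("A photo of a <dog> in a bucket", num_inference_steps=50).images[0]
image.save("textual_inversion_dog.png")
```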
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model]
|
nhung03/d44a8c95-524a-4a44-b4d7-7642b7e36835 | nhung03 | 2025-02-04T03:05:41Z | 9 | 0 | peft | ["peft", "safetensors", "qwen2", "axolotl", "generated_from_trainer", "base_model:unsloth/Qwen2.5-Math-1.5B", "base_model:adapter:unsloth/Qwen2.5-Math-1.5B", "license:apache-2.0", "8-bit", "bitsandbytes", "region:us"] | null | 2025-02-04T02:39:51Z |
---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2.5-Math-1.5B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: d44a8c95-524a-4a44-b4d7-7642b7e36835
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2.5-Math-1.5B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
  - cf971d07e3ff665f_train_data.json
  ds_type: json
  format: custom
  path: /workspace/input_data/cf971d07e3ff665f_train_data.json
  type:
    field_input: labels
    field_instruction: name
    field_output: text
    format: '{instruction} {input}'
    no_input_format: '{instruction}'
    system_format: '{system}'
    system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: nhung03/d44a8c95-524a-4a44-b4d7-7642b7e36835
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/cf971d07e3ff665f_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: fef0eb04-6ba6-4379-a2e7-a7fdc70a6b88
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: fef0eb04-6ba6-4379-a2e7-a7fdc70a6b88
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# d44a8c95-524a-4a44-b4d7-7642b7e36835
This model is a fine-tuned version of [unsloth/Qwen2.5-Math-1.5B](https://huggingface.co/unsloth/Qwen2.5-Math-1.5B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.8241
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 3.8419 | 0.0113 | 200 | 3.8241 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
romainnn/91992658-b53a-4ec5-93c4-6d37fd5f0dc3 | romainnn | 2025-02-04T03:04:18Z | 9 | 0 | peft | ["peft", "safetensors", "starcoder2", "axolotl", "generated_from_trainer", "base_model:bigcode/starcoder2-3b", "base_model:adapter:bigcode/starcoder2-3b", "license:bigcode-openrail-m", "region:us"] | null | 2025-02-04T01:51:24Z |
---
library_name: peft
license: bigcode-openrail-m
base_model: bigcode/starcoder2-3b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 91992658-b53a-4ec5-93c4-6d37fd5f0dc3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: bigcode/starcoder2-3b
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
  - b177e99f9afc8918_train_data.json
  ds_type: json
  format: custom
  path: /workspace/input_data/b177e99f9afc8918_train_data.json
  type:
    field_input: ''
    field_instruction: title
    field_output: cleaned_text
    format: '{instruction}'
    no_input_format: '{instruction}'
    system_format: '{system}'
    system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: 2
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 16
gradient_checkpointing: true
group_by_length: false
hub_model_id: romainnn/91992658-b53a-4ec5-93c4-6d37fd5f0dc3
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_best_model_at_end: true
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lora_target_modules:
- q_proj
- k_proj
- v_proj
lr_scheduler: cosine
max_steps: 1083
micro_batch_size: 4
mlflow_experiment_name: /tmp/b177e99f9afc8918_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 2
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 100
sequence_len: 1024
special_tokens:
  pad_token: <|endoftext|>
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 6224a0bd-20f5-44b3-8193-1192471d4f6a
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 6224a0bd-20f5-44b3-8193-1192471d4f6a
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 91992658-b53a-4ec5-93c4-6d37fd5f0dc3
This model is a fine-tuned version of [bigcode/starcoder2-3b](https://huggingface.co/bigcode/starcoder2-3b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9988
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 202
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 74.2102 | 0.0099 | 1 | 2.0957 |
| 38.0148 | 0.4960 | 50 | 2.0554 |
| 35.4166 | 0.9919 | 100 | 2.0161 |
| 35.328 | 1.4941 | 150 | 2.0022 |
| 33.5952 | 1.9901 | 200 | 1.9988 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
Triangle104/Qwen2.5-32b-Erudite-Writer-Q5_K_M-GGUF | Triangle104 | 2025-02-04T03:03:08Z | 24 | 0 | transformers | ["transformers", "gguf", "mergekit", "merge", "llama-cpp", "gguf-my-repo", "base_model:SubtleOne/Qwen2.5-32b-Erudite-Writer", "base_model:quantized:SubtleOne/Qwen2.5-32b-Erudite-Writer", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational"] | null | 2025-02-04T02:54:56Z |
---
base_model: SubtleOne/Qwen2.5-32b-Erudite-Writer
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
license: apache-2.0
---
# Triangle104/Qwen2.5-32b-Erudite-Writer-Q5_K_M-GGUF
This model was converted to GGUF format from [`SubtleOne/Qwen2.5-32b-Erudite-Writer`](https://huggingface.co/SubtleOne/Qwen2.5-32b-Erudite-Writer) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/SubtleOne/Qwen2.5-32b-Erudite-Writer) for more details on the model.
---
This model is a merge built on Rombos's top-ranked 32b model (based on Qwen 2.5), combining three creative writing finetunes. The creative content is a serious upgrade over the base it started with and has a much more literary style than the previous Writer model. I won't call it better or worse, merely a very distinct flavor and style. I quite like it, and enjoin you to try it as well. Enjoy!
## Merge Method
This model was merged using the DELLA merge method, with rombodawg/Rombos-LLM-V2.5-Qwen-32b as the base.
## Models Merged
The following models were included in the merge:
* nbeerbower/Qwen2.5-Gutenberg-Doppel-32B
* ArliAI/Qwen2.5-32B-ArliAI-RPMax-v1.3
* EVA-UNIT-01/EVA-Qwen2.5-32B-v0.2
## Configuration
The following YAML configuration was used to produce this model:
base_model: rombodawg/Rombos-LLM-V2.5-Qwen-32b
parameters:
  int8_mask: true
  rescale: false
  normalize: true
  lambda: 1.04
  epsilon: 0.05
dtype: bfloat16
tokenizer_source: union
merge_method: della
models:
  - model: EVA-UNIT-01/EVA-Qwen2.5-32B-v0.2
    parameters:
      weight: [0.40]
      density: [0.53]
  - model: nbeerbower/Qwen2.5-Gutenberg-Doppel-32B
    parameters:
      weight: [0.30]
      density: [0.53]
  - model: ArliAI/Qwen2.5-32B-ArliAI-RPMax-v1.3
    parameters:
      weight: [0.40]
      density: [0.53]
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Qwen2.5-32b-Erudite-Writer-Q5_K_M-GGUF --hf-file qwen2.5-32b-erudite-writer-q5_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Qwen2.5-32b-Erudite-Writer-Q5_K_M-GGUF --hf-file qwen2.5-32b-erudite-writer-q5_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g. `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Qwen2.5-32b-Erudite-Writer-Q5_K_M-GGUF --hf-file qwen2.5-32b-erudite-writer-q5_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Qwen2.5-32b-Erudite-Writer-Q5_K_M-GGUF --hf-file qwen2.5-32b-erudite-writer-q5_k_m.gguf -c 2048
```
|
lesso/c07d0aa3-8113-4ae9-ae58-f4b336b5da81 | lesso | 2025-02-04T03:00:43Z | 8 | 0 | peft | ["peft", "safetensors", "gemma2", "axolotl", "generated_from_trainer", "base_model:zake7749/gemma-2-2b-it-chinese-kyara-dpo", "base_model:adapter:zake7749/gemma-2-2b-it-chinese-kyara-dpo", "license:gemma", "region:us"] | null | 2025-02-04T00:31:05Z |
---
library_name: peft
license: gemma
base_model: zake7749/gemma-2-2b-it-chinese-kyara-dpo
tags:
- axolotl
- generated_from_trainer
model-index:
- name: c07d0aa3-8113-4ae9-ae58-f4b336b5da81
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: zake7749/gemma-2-2b-it-chinese-kyara-dpo
bf16: true
chat_template: llama3
data_processes: 16
dataset_prepared_path: null
datasets:
- data_files:
  - 711eb262493f89e0_train_data.json
  ds_type: json
  format: custom
  path: /workspace/input_data/711eb262493f89e0_train_data.json
  type:
    field_instruction: prompt
    field_output: chosen
    format: '{instruction}'
    no_input_format: '{instruction}'
    system_format: '{system}'
    system_prompt: ''
debug: null
deepspeed: null
do_eval: true
early_stopping_patience: 5
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 2
gradient_checkpointing: true
group_by_length: true
hub_model_id: lesso/c07d0aa3-8113-4ae9-ae58-f4b336b5da81
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001017
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: linear
max_grad_norm: 1.0
max_steps: 200
micro_batch_size: 4
mlflow_experiment_name: /tmp/god17/711eb262493f89e0_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
  adam_beta1: 0.9
  adam_beta2: 0.95
  adam_epsilon: 1e-5
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 24cc9eb4-7f5e-4d72-a2ff-2c216f2efd51
wandb_project: ab-god17
wandb_run: your_name
wandb_runid: 24cc9eb4-7f5e-4d72-a2ff-2c216f2efd51
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# c07d0aa3-8113-4ae9-ae58-f4b336b5da81
This model is a fine-tuned version of [zake7749/gemma-2-2b-it-chinese-kyara-dpo](https://huggingface.co/zake7749/gemma-2-2b-it-chinese-kyara-dpo) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2644
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001017
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.4695 | 0.0000 | 1 | 0.5331 |
| 0.351 | 0.0011 | 50 | 0.2976 |
| 0.309 | 0.0021 | 100 | 0.2833 |
| 0.1739 | 0.0032 | 150 | 0.2705 |
| 0.2364 | 0.0043 | 200 | 0.2644 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
CultriX/Enhanced-TIES-Base-v1 | CultriX | 2025-02-04T03:00:05Z | 68 | 2 | transformers | ["transformers", "safetensors", "qwen2", "text-generation", "mergekit", "merge", "conversational", "arxiv:2311.03099", "base_model:CultriX/Qwen2.5-14B-Hyperionv4", "base_model:merge:CultriX/Qwen2.5-14B-Hyperionv4", "base_model:arcee-ai/Virtuoso-Small-v2", "base_model:merge:arcee-ai/Virtuoso-Small-v2", "base_model:sometimesanotion/Base-Chocolatine-2-14B-Instruct-v2.0b3", "base_model:merge:sometimesanotion/Base-Chocolatine-2-14B-Instruct-v2.0b3", "base_model:sometimesanotion/Qwenvergence-14B-v12-Prose-DS", "base_model:merge:sometimesanotion/Qwenvergence-14B-v12-Prose-DS", "base_model:sthenno-com/miscii-14b-1225", "base_model:merge:sthenno-com/miscii-14b-1225", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-02-04T02:51:51Z |
---
base_model:
- arcee-ai/Virtuoso-Small-v2
- sometimesanotion/Base-Chocolatine-2-14B-Instruct-v2.0b3
- CultriX/Qwen2.5-14B-Hyperionv4
- sometimesanotion/Qwenvergence-14B-v12-Prose-DS
- sthenno-com/miscii-14b-1225
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [DARE TIES](https://arxiv.org/abs/2311.03099) merge method using [sometimesanotion/Base-Chocolatine-2-14B-Instruct-v2.0b3](https://huggingface.co/sometimesanotion/Base-Chocolatine-2-14B-Instruct-v2.0b3) as a base.
### Models Merged
The following models were included in the merge:
* [arcee-ai/Virtuoso-Small-v2](https://huggingface.co/arcee-ai/Virtuoso-Small-v2)
* [CultriX/Qwen2.5-14B-Hyperionv4](https://huggingface.co/CultriX/Qwen2.5-14B-Hyperionv4)
* [sometimesanotion/Qwenvergence-14B-v12-Prose-DS](https://huggingface.co/sometimesanotion/Qwenvergence-14B-v12-Prose-DS)
* [sthenno-com/miscii-14b-1225](https://huggingface.co/sthenno-com/miscii-14b-1225)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
name: Enhanced-TIES-Base-v1
# Defining the TIES-merged base model used in the SLERP merge above.
merge_method: dare_ties
base_model: sometimesanotion/Base-Chocolatine-2-14B-Instruct-v2.0b3 # Solid base model
tokenizer_source: base # Base tokenizer
dtype: bfloat16 # Efficient dtype
out_dtype: bfloat16 # Output in bfloat16
parameters:
  normalize: true # Normalize weights for TIES
  int8_mask: true # Int8 mask for TIES
  rescale: false # No rescaling for TIES
  density: 0.75 # Density for TIES merge
models: # Models for the TIES base merge (same models and densities as Enhanced-LayeredSlerp-v1)
  - model: arcee-ai/Virtuoso-Small-v2 # IFEval specialist - high density
    parameters:
      weight: 1.0
      density: 0.9
  - model: sthenno-com/miscii-14b-1225 # BBH and Reasoning - medium density
    parameters:
      weight: 1.0
      density: 0.8
  - model: sometimesanotion/Qwenvergence-14B-v12-Prose-DS # MATH and general Qwen - medium density
    parameters:
      weight: 1.0
      density: 0.8
  - model: CultriX/Qwen2.5-14B-Hyperionv4 # General improvement - lower density
    parameters:
      weight: 1.0
      density: 0.6
# Commentary:
# =============================================================================
# SuperMerge-LayeredTIES-v1 Commentary:
#
# This configuration combines the strengths of both Enhanced-LayeredSlerp-v1 and SuperMerge-Enhanced-v1.
# It leverages the robust foundation of a TIES-merged base model (Enhanced-TIES-Base-v1) and applies
# the layer-wise module approach and fine-grained weight control from SuperMerge-Enhanced-v1 in a SLERP merge.
#
# Key Features:
# - TIES-Merged Base Foundation: Uses 'Enhanced-TIES-Base-v1' as the base model for the SLERP merge.
# This TIES base provides a selectively merged and potentially more efficient starting point, incorporating
# strengths from multiple models (Virtuoso, Phi-4, Qwenvergence, DeepSeek) with density control.
#
# - Layer-wise Module Integration in SLERP: Maintains the module-based slice structure from SuperMerge-Enhanced-v1.
# The SLERP merge now combines the TIES-merged base with specialized modules for Reasoning, IFEval, and MATH/Knowledge
# at different layer ranges, using explicit weights for fine-grained control.
#
# - Benchmark-Driven Iterative Weight Tuning: The configuration is designed to be optimized through a
# benchmark-driven iterative weight tuning process (as described in the refined SuperMerge-Enhanced-v1 approach).
# The initial weights provided are starting points and need to be systematically tuned based on benchmark results.
#
# Tuning Process (Same as Refined SuperMerge-Enhanced-v1):
# 1. Initial Benchmarking: Run a full benchmark suite.
# 2. Performance Analysis: Examine per-benchmark scores and compare to source models.
# 3. Targeted Weight Adjustments: Adjust layer weights based on performance analysis (e.g., increase IFEval module weight
# in early layers if IFEval is weak).
# 4. Iterate: Repeat steps 1-3. Make small, incremental adjustments in each iteration.
#
# Rationale:
# - By using a TIES-merged base, we aim to create a more robust and potentially efficient foundation for the SLERP merge.
# - The layer-wise module approach and fine-grained weights in SLERP still allow for precise control over the blending
# of specialized capabilities at different network depths, building upon the solid TIES base.
# - The emphasis on a benchmark-driven iterative weight tuning process remains crucial for achieving optimal performance.
#
# Next Steps:
# - Implement this configuration using MergeKit.
# - Run initial benchmarks to establish a baseline.
# - Begin the iterative benchmark-driven weight tuning process to optimize performance.
# =============================================================================
```
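The merged checkpoint is a standard Qwen2-architecture causal LM, so it loads like any other `transformers` model; a minimal sketch (assumed usage) follows.
```python
# Minimal sketch (assumed usage): load the merged model as an ordinary causal LM.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "CultriX/Enhanced-TIES-Base-v1"
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.bfloat16, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(repo)
```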
|
MayBashendy/ArabicNewSplits7_B_usingWellWrittenEssays_FineTuningAraBERT_run999_AugV5_k20_task7_organization | MayBashendy | 2025-02-04T02:57:41Z | 5 | 0 | transformers | ["transformers", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:aubmindlab/bert-base-arabertv02", "base_model:finetune:aubmindlab/bert-base-arabertv02", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2025-02-04T02:51:48Z |
---
library_name: transformers
base_model: aubmindlab/bert-base-arabertv02
tags:
- generated_from_trainer
model-index:
- name: ArabicNewSplits7_B_usingWellWrittenEssays_FineTuningAraBERT_run999_AugV5_k20_task7_organization
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ArabicNewSplits7_B_usingWellWrittenEssays_FineTuningAraBERT_run999_AugV5_k20_task7_organization
This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6785
- Qwk: 0.2883
- Mse: 0.6785
- Rmse: 0.8237
## Model description
More information needed
## Intended uses & limitations
More information needed
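The reported QWK/MSE/RMSE metrics suggest a single-score (regression-style) classification head; a minimal loading sketch under that assumption is shown below, with a hypothetical input sentence.
```python
# Minimal sketch (assumed usage): score a text with the fine-tuned AraBERT checkpoint.
from transformers import AutoModelForSequenceClassification, AutoTokenizer

repo = "MayBashendy/ArabicNewSplits7_B_usingWellWrittenEssays_FineTuningAraBERT_run999_AugV5_k20_task7_organization"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)

inputs = tokenizer("مثال لنص مقال", return_tensors="pt")  # hypothetical input text
score = model(**inputs).logits  # a single organization score, given the regression-style metrics above
```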
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:------:|:----:|:---------------:|:-------:|:------:|:------:|
| No log | 0.0185 | 2 | 2.6272 | -0.0729 | 2.6272 | 1.6209 |
| No log | 0.0370 | 4 | 1.2463 | 0.0983 | 1.2463 | 1.1164 |
| No log | 0.0556 | 6 | 0.7915 | 0.0441 | 0.7915 | 0.8897 |
| No log | 0.0741 | 8 | 0.7705 | 0.1368 | 0.7705 | 0.8778 |
| No log | 0.0926 | 10 | 0.6891 | 0.2955 | 0.6891 | 0.8301 |
| No log | 0.1111 | 12 | 0.6805 | 0.3141 | 0.6805 | 0.8249 |
| No log | 0.1296 | 14 | 0.7554 | 0.2223 | 0.7554 | 0.8691 |
| No log | 0.1481 | 16 | 0.8799 | 0.2259 | 0.8799 | 0.9380 |
| No log | 0.1667 | 18 | 0.7391 | 0.3590 | 0.7391 | 0.8597 |
| No log | 0.1852 | 20 | 0.6912 | 0.3348 | 0.6912 | 0.8314 |
| No log | 0.2037 | 22 | 0.8114 | 0.2772 | 0.8114 | 0.9008 |
| No log | 0.2222 | 24 | 0.7259 | 0.2813 | 0.7259 | 0.8520 |
| No log | 0.2407 | 26 | 0.6871 | 0.3050 | 0.6871 | 0.8289 |
| No log | 0.2593 | 28 | 1.3581 | 0.2590 | 1.3581 | 1.1654 |
| No log | 0.2778 | 30 | 1.7250 | 0.1895 | 1.7250 | 1.3134 |
| No log | 0.2963 | 32 | 1.2685 | 0.1895 | 1.2685 | 1.1263 |
| No log | 0.3148 | 34 | 0.7739 | 0.3606 | 0.7739 | 0.8797 |
| No log | 0.3333 | 36 | 0.6666 | 0.1983 | 0.6666 | 0.8165 |
| No log | 0.3519 | 38 | 0.6695 | 0.2046 | 0.6695 | 0.8182 |
| No log | 0.3704 | 40 | 0.7431 | 0.3564 | 0.7431 | 0.8620 |
| No log | 0.3889 | 42 | 0.9422 | 0.3579 | 0.9422 | 0.9707 |
| No log | 0.4074 | 44 | 1.0279 | 0.3516 | 1.0279 | 1.0138 |
| No log | 0.4259 | 46 | 0.9828 | 0.3516 | 0.9828 | 0.9914 |
| No log | 0.4444 | 48 | 0.8631 | 0.3777 | 0.8631 | 0.9290 |
| No log | 0.4630 | 50 | 0.7154 | 0.3746 | 0.7154 | 0.8458 |
| No log | 0.4815 | 52 | 0.6521 | 0.4219 | 0.6521 | 0.8075 |
| No log | 0.5 | 54 | 0.6224 | 0.3092 | 0.6224 | 0.7889 |
| No log | 0.5185 | 56 | 0.6890 | 0.3819 | 0.6890 | 0.8301 |
| No log | 0.5370 | 58 | 1.0277 | 0.3166 | 1.0277 | 1.0138 |
| No log | 0.5556 | 60 | 1.2795 | 0.2772 | 1.2795 | 1.1312 |
| No log | 0.5741 | 62 | 1.2126 | 0.2909 | 1.2126 | 1.1012 |
| No log | 0.5926 | 64 | 0.8438 | 0.4255 | 0.8438 | 0.9186 |
| No log | 0.6111 | 66 | 0.5983 | 0.4463 | 0.5983 | 0.7735 |
| No log | 0.6296 | 68 | 0.6445 | 0.4674 | 0.6445 | 0.8028 |
| No log | 0.6481 | 70 | 0.6404 | 0.4737 | 0.6404 | 0.8003 |
| No log | 0.6667 | 72 | 0.6235 | 0.4419 | 0.6235 | 0.7897 |
| No log | 0.6852 | 74 | 0.9033 | 0.4096 | 0.9033 | 0.9504 |
| No log | 0.7037 | 76 | 1.0313 | 0.2910 | 1.0313 | 1.0155 |
| No log | 0.7222 | 78 | 0.8396 | 0.4568 | 0.8396 | 0.9163 |
| No log | 0.7407 | 80 | 0.6278 | 0.3945 | 0.6278 | 0.7923 |
| No log | 0.7593 | 82 | 0.6544 | 0.4345 | 0.6544 | 0.8090 |
| No log | 0.7778 | 84 | 0.6348 | 0.4322 | 0.6348 | 0.7968 |
| No log | 0.7963 | 86 | 0.6784 | 0.2995 | 0.6784 | 0.8236 |
| No log | 0.8148 | 88 | 0.9486 | 0.4092 | 0.9486 | 0.9740 |
| No log | 0.8333 | 90 | 1.1878 | 0.2206 | 1.1878 | 1.0899 |
| No log | 0.8519 | 92 | 1.1619 | 0.2191 | 1.1619 | 1.0779 |
| No log | 0.8704 | 94 | 0.9051 | 0.4347 | 0.9051 | 0.9514 |
| No log | 0.8889 | 96 | 0.7585 | 0.3494 | 0.7585 | 0.8709 |
| No log | 0.9074 | 98 | 0.6845 | 0.3196 | 0.6845 | 0.8273 |
| No log | 0.9259 | 100 | 0.7034 | 0.2467 | 0.7034 | 0.8387 |
| No log | 0.9444 | 102 | 0.7146 | 0.3302 | 0.7146 | 0.8453 |
| No log | 0.9630 | 104 | 0.8031 | 0.3918 | 0.8031 | 0.8962 |
| No log | 0.9815 | 106 | 0.9954 | 0.3849 | 0.9954 | 0.9977 |
| No log | 1.0 | 108 | 1.0793 | 0.3269 | 1.0793 | 1.0389 |
| No log | 1.0185 | 110 | 1.0460 | 0.3697 | 1.0460 | 1.0227 |
| No log | 1.0370 | 112 | 0.8320 | 0.3560 | 0.8320 | 0.9121 |
| No log | 1.0556 | 114 | 0.7203 | 0.3069 | 0.7203 | 0.8487 |
| No log | 1.0741 | 116 | 0.6927 | 0.3060 | 0.6927 | 0.8323 |
| No log | 1.0926 | 118 | 0.7416 | 0.2518 | 0.7416 | 0.8612 |
| No log | 1.1111 | 120 | 0.8737 | 0.3892 | 0.8737 | 0.9347 |
| No log | 1.1296 | 122 | 1.1036 | 0.3088 | 1.1036 | 1.0505 |
| No log | 1.1481 | 124 | 1.0979 | 0.3404 | 1.0979 | 1.0478 |
| No log | 1.1667 | 126 | 0.9128 | 0.3709 | 0.9128 | 0.9554 |
| No log | 1.1852 | 128 | 0.8296 | 0.2843 | 0.8296 | 0.9108 |
| No log | 1.2037 | 130 | 0.7985 | 0.2904 | 0.7985 | 0.8936 |
| No log | 1.2222 | 132 | 0.8440 | 0.4080 | 0.8440 | 0.9187 |
| No log | 1.2407 | 134 | 0.9444 | 0.3676 | 0.9444 | 0.9718 |
| No log | 1.2593 | 136 | 1.0034 | 0.3337 | 1.0034 | 1.0017 |
| No log | 1.2778 | 138 | 0.8877 | 0.4092 | 0.8877 | 0.9422 |
| No log | 1.2963 | 140 | 0.7385 | 0.3637 | 0.7385 | 0.8593 |
| No log | 1.3148 | 142 | 0.6943 | 0.2498 | 0.6943 | 0.8333 |
| No log | 1.3333 | 144 | 0.6994 | 0.2471 | 0.6994 | 0.8363 |
| No log | 1.3519 | 146 | 0.7091 | 0.2784 | 0.7091 | 0.8421 |
| No log | 1.3704 | 148 | 0.7587 | 0.3234 | 0.7587 | 0.8710 |
| No log | 1.3889 | 150 | 0.8943 | 0.3538 | 0.8943 | 0.9457 |
| No log | 1.4074 | 152 | 0.9652 | 0.3029 | 0.9652 | 0.9824 |
| No log | 1.4259 | 154 | 0.8352 | 0.4404 | 0.8352 | 0.9139 |
| No log | 1.4444 | 156 | 0.6769 | 0.2558 | 0.6769 | 0.8228 |
| No log | 1.4630 | 158 | 0.6655 | 0.3141 | 0.6655 | 0.8158 |
| No log | 1.4815 | 160 | 0.6565 | 0.3426 | 0.6565 | 0.8102 |
| No log | 1.5 | 162 | 0.7265 | 0.3817 | 0.7265 | 0.8523 |
| No log | 1.5185 | 164 | 0.8765 | 0.3499 | 0.8765 | 0.9362 |
| No log | 1.5370 | 166 | 1.0127 | 0.2898 | 1.0127 | 1.0064 |
| No log | 1.5556 | 168 | 0.9417 | 0.3052 | 0.9417 | 0.9704 |
| No log | 1.5741 | 170 | 0.7469 | 0.3562 | 0.7469 | 0.8642 |
| No log | 1.5926 | 172 | 0.6349 | 0.3763 | 0.6349 | 0.7968 |
| No log | 1.6111 | 174 | 0.6206 | 0.2877 | 0.6206 | 0.7878 |
| No log | 1.6296 | 176 | 0.6285 | 0.3399 | 0.6285 | 0.7928 |
| No log | 1.6481 | 178 | 0.6664 | 0.3099 | 0.6664 | 0.8163 |
| No log | 1.6667 | 180 | 0.7391 | 0.3746 | 0.7391 | 0.8597 |
| No log | 1.6852 | 182 | 0.7609 | 0.3746 | 0.7609 | 0.8723 |
| No log | 1.7037 | 184 | 0.7273 | 0.3372 | 0.7273 | 0.8528 |
| No log | 1.7222 | 186 | 0.6796 | 0.2227 | 0.6796 | 0.8244 |
| No log | 1.7407 | 188 | 0.7217 | 0.2383 | 0.7217 | 0.8495 |
| No log | 1.7593 | 190 | 0.8368 | 0.3456 | 0.8368 | 0.9148 |
| No log | 1.7778 | 192 | 0.8586 | 0.3688 | 0.8586 | 0.9266 |
| No log | 1.7963 | 194 | 0.7750 | 0.2871 | 0.7750 | 0.8803 |
| No log | 1.8148 | 196 | 0.7648 | 0.2871 | 0.7648 | 0.8746 |
| No log | 1.8333 | 198 | 0.7913 | 0.3095 | 0.7913 | 0.8896 |
| No log | 1.8519 | 200 | 0.7951 | 0.2926 | 0.7951 | 0.8917 |
| No log | 1.8704 | 202 | 0.8246 | 0.2471 | 0.8246 | 0.9081 |
| No log | 1.8889 | 204 | 0.8560 | 0.2364 | 0.8560 | 0.9252 |
| No log | 1.9074 | 206 | 0.9938 | 0.3052 | 0.9938 | 0.9969 |
| No log | 1.9259 | 208 | 1.1704 | 0.2643 | 1.1704 | 1.0818 |
| No log | 1.9444 | 210 | 1.1412 | 0.2501 | 1.1412 | 1.0683 |
| No log | 1.9630 | 212 | 0.9513 | 0.3601 | 0.9513 | 0.9754 |
| No log | 1.9815 | 214 | 0.8096 | 0.2904 | 0.8096 | 0.8998 |
| No log | 2.0 | 216 | 0.8180 | 0.2904 | 0.8180 | 0.9044 |
| No log | 2.0185 | 218 | 0.9502 | 0.3439 | 0.9502 | 0.9748 |
| No log | 2.0370 | 220 | 0.9671 | 0.3381 | 0.9671 | 0.9834 |
| No log | 2.0556 | 222 | 0.9231 | 0.3439 | 0.9231 | 0.9608 |
| No log | 2.0741 | 224 | 0.8631 | 0.3499 | 0.8631 | 0.9290 |
| No log | 2.0926 | 226 | 0.7739 | 0.4239 | 0.7739 | 0.8797 |
| No log | 2.1111 | 228 | 0.7480 | 0.2749 | 0.7480 | 0.8648 |
| No log | 2.1296 | 230 | 0.7852 | 0.4114 | 0.7852 | 0.8861 |
| No log | 2.1481 | 232 | 0.8783 | 0.3560 | 0.8783 | 0.9372 |
| No log | 2.1667 | 234 | 0.8716 | 0.3678 | 0.8716 | 0.9336 |
| No log | 2.1852 | 236 | 0.8379 | 0.4366 | 0.8379 | 0.9154 |
| No log | 2.2037 | 238 | 0.7586 | 0.3700 | 0.7586 | 0.8710 |
| No log | 2.2222 | 240 | 0.7216 | 0.3340 | 0.7216 | 0.8495 |
| No log | 2.2407 | 242 | 0.7426 | 0.3569 | 0.7426 | 0.8617 |
| No log | 2.2593 | 244 | 0.8270 | 0.4153 | 0.8270 | 0.9094 |
| No log | 2.2778 | 246 | 0.9176 | 0.3381 | 0.9176 | 0.9579 |
| No log | 2.2963 | 248 | 0.8500 | 0.3799 | 0.8500 | 0.9219 |
| No log | 2.3148 | 250 | 0.6978 | 0.3544 | 0.6978 | 0.8354 |
| No log | 2.3333 | 252 | 0.6435 | 0.3144 | 0.6435 | 0.8022 |
| No log | 2.3519 | 254 | 0.6297 | 0.3625 | 0.6297 | 0.7935 |
| No log | 2.3704 | 256 | 0.6371 | 0.3840 | 0.6371 | 0.7982 |
| No log | 2.3889 | 258 | 0.6757 | 0.3942 | 0.6757 | 0.8220 |
| No log | 2.4074 | 260 | 0.6659 | 0.3942 | 0.6659 | 0.8160 |
| No log | 2.4259 | 262 | 0.6379 | 0.3976 | 0.6379 | 0.7987 |
| No log | 2.4444 | 264 | 0.6425 | 0.3197 | 0.6425 | 0.8016 |
| No log | 2.4630 | 266 | 0.6550 | 0.2537 | 0.6550 | 0.8093 |
| No log | 2.4815 | 268 | 0.6578 | 0.2787 | 0.6578 | 0.8110 |
| No log | 2.5 | 270 | 0.7050 | 0.3195 | 0.7050 | 0.8396 |
| No log | 2.5185 | 272 | 0.7764 | 0.4272 | 0.7764 | 0.8811 |
| No log | 2.5370 | 274 | 0.7354 | 0.4745 | 0.7354 | 0.8576 |
| No log | 2.5556 | 276 | 0.6619 | 0.3656 | 0.6619 | 0.8136 |
| No log | 2.5741 | 278 | 0.6357 | 0.4207 | 0.6357 | 0.7973 |
| No log | 2.5926 | 280 | 0.6774 | 0.4404 | 0.6774 | 0.8231 |
| No log | 2.6111 | 282 | 0.7805 | 0.4721 | 0.7805 | 0.8835 |
| No log | 2.6296 | 284 | 0.8090 | 0.4705 | 0.8090 | 0.8995 |
| No log | 2.6481 | 286 | 0.6898 | 0.4144 | 0.6898 | 0.8305 |
| No log | 2.6667 | 288 | 0.5588 | 0.4243 | 0.5588 | 0.7475 |
| No log | 2.6852 | 290 | 0.5194 | 0.4147 | 0.5194 | 0.7207 |
| No log | 2.7037 | 292 | 0.5186 | 0.4722 | 0.5186 | 0.7201 |
| No log | 2.7222 | 294 | 0.5234 | 0.4722 | 0.5234 | 0.7235 |
| No log | 2.7407 | 296 | 0.5440 | 0.4819 | 0.5440 | 0.7376 |
| No log | 2.7593 | 298 | 0.5435 | 0.4642 | 0.5435 | 0.7373 |
| No log | 2.7778 | 300 | 0.5318 | 0.3702 | 0.5318 | 0.7293 |
| No log | 2.7963 | 302 | 0.5482 | 0.4384 | 0.5482 | 0.7404 |
| No log | 2.8148 | 304 | 0.5548 | 0.3947 | 0.5548 | 0.7448 |
| No log | 2.8333 | 306 | 0.5691 | 0.3494 | 0.5691 | 0.7544 |
| No log | 2.8519 | 308 | 0.6289 | 0.4035 | 0.6289 | 0.7931 |
| No log | 2.8704 | 310 | 0.6465 | 0.4035 | 0.6465 | 0.8041 |
| No log | 2.8889 | 312 | 0.6420 | 0.3755 | 0.6420 | 0.8013 |
| No log | 2.9074 | 314 | 0.6189 | 0.3092 | 0.6189 | 0.7867 |
| No log | 2.9259 | 316 | 0.6213 | 0.3092 | 0.6213 | 0.7883 |
| No log | 2.9444 | 318 | 0.6413 | 0.3092 | 0.6413 | 0.8008 |
| No log | 2.9630 | 320 | 0.6483 | 0.3092 | 0.6483 | 0.8052 |
| No log | 2.9815 | 322 | 0.6706 | 0.3387 | 0.6706 | 0.8189 |
| No log | 3.0 | 324 | 0.7129 | 0.2883 | 0.7129 | 0.8444 |
| No log | 3.0185 | 326 | 0.7934 | 0.4224 | 0.7934 | 0.8907 |
| No log | 3.0370 | 328 | 0.8775 | 0.3473 | 0.8775 | 0.9368 |
| No log | 3.0556 | 330 | 0.8439 | 0.4624 | 0.8439 | 0.9187 |
| No log | 3.0741 | 332 | 0.7766 | 0.3099 | 0.7766 | 0.8813 |
| No log | 3.0926 | 334 | 0.6686 | 0.2981 | 0.6686 | 0.8177 |
| No log | 3.1111 | 336 | 0.6458 | 0.3123 | 0.6458 | 0.8036 |
| No log | 3.1296 | 338 | 0.6396 | 0.3166 | 0.6396 | 0.7998 |
| No log | 3.1481 | 340 | 0.6458 | 0.3092 | 0.6458 | 0.8036 |
| No log | 3.1667 | 342 | 0.6672 | 0.3312 | 0.6672 | 0.8168 |
| No log | 3.1852 | 344 | 0.7297 | 0.3099 | 0.7297 | 0.8542 |
| No log | 3.2037 | 346 | 0.7574 | 0.4197 | 0.7574 | 0.8703 |
| No log | 3.2222 | 348 | 0.6859 | 0.3261 | 0.6859 | 0.8282 |
| No log | 3.2407 | 350 | 0.6214 | 0.3312 | 0.6214 | 0.7883 |
| No log | 3.2593 | 352 | 0.5847 | 0.3166 | 0.5847 | 0.7646 |
| No log | 3.2778 | 354 | 0.5664 | 0.3354 | 0.5664 | 0.7526 |
| No log | 3.2963 | 356 | 0.5628 | 0.3354 | 0.5628 | 0.7502 |
| No log | 3.3148 | 358 | 0.5628 | 0.3354 | 0.5628 | 0.7502 |
| No log | 3.3333 | 360 | 0.5712 | 0.3006 | 0.5712 | 0.7558 |
| No log | 3.3519 | 362 | 0.5911 | 0.3323 | 0.5911 | 0.7689 |
| No log | 3.3704 | 364 | 0.5943 | 0.3243 | 0.5943 | 0.7709 |
| No log | 3.3889 | 366 | 0.5777 | 0.3039 | 0.5777 | 0.7600 |
| No log | 3.4074 | 368 | 0.5638 | 0.3354 | 0.5638 | 0.7509 |
| No log | 3.4259 | 370 | 0.5524 | 0.3889 | 0.5524 | 0.7432 |
| No log | 3.4444 | 372 | 0.5468 | 0.3369 | 0.5468 | 0.7395 |
| No log | 3.4630 | 374 | 0.5588 | 0.4845 | 0.5588 | 0.7476 |
| No log | 3.4815 | 376 | 0.5484 | 0.4060 | 0.5484 | 0.7406 |
| No log | 3.5 | 378 | 0.5375 | 0.3274 | 0.5375 | 0.7332 |
| No log | 3.5185 | 380 | 0.5438 | 0.2987 | 0.5438 | 0.7375 |
| No log | 3.5370 | 382 | 0.5484 | 0.3273 | 0.5484 | 0.7405 |
| No log | 3.5556 | 384 | 0.5405 | 0.2987 | 0.5405 | 0.7352 |
| No log | 3.5741 | 386 | 0.5429 | 0.2996 | 0.5429 | 0.7368 |
| No log | 3.5926 | 388 | 0.5399 | 0.2641 | 0.5399 | 0.7348 |
| No log | 3.6111 | 390 | 0.5373 | 0.2641 | 0.5373 | 0.7330 |
| No log | 3.6296 | 392 | 0.5325 | 0.2996 | 0.5325 | 0.7297 |
| No log | 3.6481 | 394 | 0.5277 | 0.3953 | 0.5277 | 0.7264 |
| No log | 3.6667 | 396 | 0.5433 | 0.3416 | 0.5433 | 0.7371 |
| No log | 3.6852 | 398 | 0.5704 | 0.3341 | 0.5704 | 0.7553 |
| No log | 3.7037 | 400 | 0.5767 | 0.3341 | 0.5767 | 0.7594 |
| No log | 3.7222 | 402 | 0.5726 | 0.3341 | 0.5726 | 0.7567 |
| No log | 3.7407 | 404 | 0.5866 | 0.3341 | 0.5866 | 0.7659 |
| No log | 3.7593 | 406 | 0.5951 | 0.3312 | 0.5951 | 0.7714 |
| No log | 3.7778 | 408 | 0.6172 | 0.3312 | 0.6172 | 0.7856 |
| No log | 3.7963 | 410 | 0.6595 | 0.3843 | 0.6595 | 0.8121 |
| No log | 3.8148 | 412 | 0.6781 | 0.3843 | 0.6781 | 0.8235 |
| No log | 3.8333 | 414 | 0.6525 | 0.4190 | 0.6525 | 0.8078 |
| No log | 3.8519 | 416 | 0.6357 | 0.4020 | 0.6357 | 0.7973 |
| No log | 3.8704 | 418 | 0.6030 | 0.3622 | 0.6030 | 0.7765 |
| No log | 3.8889 | 420 | 0.5870 | 0.3341 | 0.5870 | 0.7662 |
| No log | 3.9074 | 422 | 0.5679 | 0.3675 | 0.5679 | 0.7536 |
| No log | 3.9259 | 424 | 0.5573 | 0.3995 | 0.5573 | 0.7465 |
| No log | 3.9444 | 426 | 0.5627 | 0.4194 | 0.5627 | 0.7501 |
| No log | 3.9630 | 428 | 0.5972 | 0.4292 | 0.5972 | 0.7728 |
| No log | 3.9815 | 430 | 0.6792 | 0.4815 | 0.6792 | 0.8241 |
| No log | 4.0 | 432 | 0.7062 | 0.4644 | 0.7062 | 0.8404 |
| No log | 4.0185 | 434 | 0.6888 | 0.4644 | 0.6888 | 0.8299 |
| No log | 4.0370 | 436 | 0.6759 | 0.4409 | 0.6759 | 0.8221 |
| No log | 4.0556 | 438 | 0.6074 | 0.4044 | 0.6074 | 0.7793 |
| No log | 4.0741 | 440 | 0.5911 | 0.4027 | 0.5911 | 0.7689 |
| No log | 4.0926 | 442 | 0.5959 | 0.3782 | 0.5959 | 0.7719 |
| No log | 4.1111 | 444 | 0.5990 | 0.3494 | 0.5990 | 0.7740 |
| No log | 4.1296 | 446 | 0.6249 | 0.3465 | 0.6249 | 0.7905 |
| No log | 4.1481 | 448 | 0.6833 | 0.3789 | 0.6833 | 0.8266 |
| No log | 4.1667 | 450 | 0.6998 | 0.3789 | 0.6998 | 0.8365 |
| No log | 4.1852 | 452 | 0.6573 | 0.3465 | 0.6573 | 0.8108 |
| No log | 4.2037 | 454 | 0.6596 | 0.3465 | 0.6596 | 0.8122 |
| No log | 4.2222 | 456 | 0.6712 | 0.3387 | 0.6712 | 0.8193 |
| No log | 4.2407 | 458 | 0.6840 | 0.4052 | 0.6840 | 0.8270 |
| No log | 4.2593 | 460 | 0.6763 | 0.3444 | 0.6763 | 0.8224 |
| No log | 4.2778 | 462 | 0.6450 | 0.3387 | 0.6450 | 0.8031 |
| No log | 4.2963 | 464 | 0.6399 | 0.3387 | 0.6399 | 0.8000 |
| No log | 4.3148 | 466 | 0.6431 | 0.3387 | 0.6431 | 0.8019 |
| No log | 4.3333 | 468 | 0.6471 | 0.3167 | 0.6471 | 0.8044 |
| No log | 4.3519 | 470 | 0.6554 | 0.3789 | 0.6554 | 0.8096 |
| No log | 4.3704 | 472 | 0.6469 | 0.3471 | 0.6469 | 0.8043 |
| No log | 4.3889 | 474 | 0.6061 | 0.3976 | 0.6061 | 0.7785 |
| No log | 4.4074 | 476 | 0.5654 | 0.3754 | 0.5654 | 0.7519 |
| No log | 4.4259 | 478 | 0.5624 | 0.3258 | 0.5624 | 0.7499 |
| No log | 4.4444 | 480 | 0.5691 | 0.2923 | 0.5691 | 0.7544 |
| No log | 4.4630 | 482 | 0.5774 | 0.2963 | 0.5774 | 0.7599 |
| No log | 4.4815 | 484 | 0.5919 | 0.3575 | 0.5919 | 0.7693 |
| No log | 4.5 | 486 | 0.6617 | 0.3673 | 0.6617 | 0.8134 |
| No log | 4.5185 | 488 | 0.7257 | 0.3444 | 0.7257 | 0.8519 |
| No log | 4.5370 | 490 | 0.7068 | 0.3444 | 0.7068 | 0.8407 |
| No log | 4.5556 | 492 | 0.6779 | 0.3167 | 0.6779 | 0.8233 |
| No log | 4.5741 | 494 | 0.6681 | 0.3594 | 0.6681 | 0.8174 |
| No log | 4.5926 | 496 | 0.6884 | 0.3444 | 0.6884 | 0.8297 |
| No log | 4.6111 | 498 | 0.7111 | 0.3444 | 0.7111 | 0.8432 |
| 0.2421 | 4.6296 | 500 | 0.7278 | 0.3444 | 0.7278 | 0.8531 |
| 0.2421 | 4.6481 | 502 | 0.6692 | 0.3312 | 0.6692 | 0.8181 |
| 0.2421 | 4.6667 | 504 | 0.6313 | 0.3166 | 0.6313 | 0.7946 |
| 0.2421 | 4.6852 | 506 | 0.6121 | 0.3445 | 0.6121 | 0.7824 |
| 0.2421 | 4.7037 | 508 | 0.6076 | 0.3445 | 0.6076 | 0.7795 |
| 0.2421 | 4.7222 | 510 | 0.6405 | 0.3572 | 0.6405 | 0.8003 |
| 0.2421 | 4.7407 | 512 | 0.7083 | 0.4052 | 0.7083 | 0.8416 |
| 0.2421 | 4.7593 | 514 | 0.7393 | 0.4554 | 0.7393 | 0.8598 |
| 0.2421 | 4.7778 | 516 | 0.7042 | 0.4642 | 0.7042 | 0.8392 |
| 0.2421 | 4.7963 | 518 | 0.6464 | 0.3594 | 0.6464 | 0.8040 |
| 0.2421 | 4.8148 | 520 | 0.6130 | 0.3649 | 0.6130 | 0.7830 |
| 0.2421 | 4.8333 | 522 | 0.6089 | 0.3599 | 0.6089 | 0.7803 |
| 0.2421 | 4.8519 | 524 | 0.6280 | 0.3183 | 0.6280 | 0.7924 |
| 0.2421 | 4.8704 | 526 | 0.6584 | 0.3425 | 0.6584 | 0.8114 |
| 0.2421 | 4.8889 | 528 | 0.6523 | 0.3155 | 0.6523 | 0.8077 |
| 0.2421 | 4.9074 | 530 | 0.6549 | 0.2950 | 0.6549 | 0.8092 |
| 0.2421 | 4.9259 | 532 | 0.6785 | 0.2883 | 0.6785 | 0.8237 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu118
- Datasets 2.21.0
- Tokenizers 0.19.1
|
featherless-ai-quants/FogTeams-experiment-45-intelligent-layer-2-plus-exp-39-data-GGUF
|
featherless-ai-quants
| 2025-02-04T02:56:33Z | 297 | 0 | null |
[
"gguf",
"text-generation",
"base_model:FogTeams/experiment-45-intelligent-layer-2-plus-exp-39-data",
"base_model:quantized:FogTeams/experiment-45-intelligent-layer-2-plus-exp-39-data",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2025-02-04T02:47:44Z |
---
base_model: FogTeams/experiment-45-intelligent-layer-2-plus-exp-39-data
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# FogTeams/experiment-45-intelligent-layer-2-plus-exp-39-data GGUF Quantizations 🚀

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple small fee.
---
## Available Quantizations 📊
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [FogTeams-experiment-45-intelligent-layer-2-plus-exp-39-data-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/FogTeams-experiment-45-intelligent-layer-2-plus-exp-39-data-GGUF/blob/main/FogTeams-experiment-45-intelligent-layer-2-plus-exp-39-data-IQ4_XS.gguf) | 4276.62 MB |
| Q2_K | [FogTeams-experiment-45-intelligent-layer-2-plus-exp-39-data-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/FogTeams-experiment-45-intelligent-layer-2-plus-exp-39-data-GGUF/blob/main/FogTeams-experiment-45-intelligent-layer-2-plus-exp-39-data-Q2_K.gguf) | 3031.86 MB |
| Q3_K_L | [FogTeams-experiment-45-intelligent-layer-2-plus-exp-39-data-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/FogTeams-experiment-45-intelligent-layer-2-plus-exp-39-data-GGUF/blob/main/FogTeams-experiment-45-intelligent-layer-2-plus-exp-39-data-Q3_K_L.gguf) | 4121.74 MB |
| Q3_K_M | [FogTeams-experiment-45-intelligent-layer-2-plus-exp-39-data-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/FogTeams-experiment-45-intelligent-layer-2-plus-exp-39-data-GGUF/blob/main/FogTeams-experiment-45-intelligent-layer-2-plus-exp-39-data-Q3_K_M.gguf) | 3832.74 MB |
| Q3_K_S | [FogTeams-experiment-45-intelligent-layer-2-plus-exp-39-data-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/FogTeams-experiment-45-intelligent-layer-2-plus-exp-39-data-GGUF/blob/main/FogTeams-experiment-45-intelligent-layer-2-plus-exp-39-data-Q3_K_S.gguf) | 3494.74 MB |
| Q4_K_M | [FogTeams-experiment-45-intelligent-layer-2-plus-exp-39-data-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/FogTeams-experiment-45-intelligent-layer-2-plus-exp-39-data-GGUF/blob/main/FogTeams-experiment-45-intelligent-layer-2-plus-exp-39-data-Q4_K_M.gguf) | 4692.78 MB |
| Q4_K_S | [FogTeams-experiment-45-intelligent-layer-2-plus-exp-39-data-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/FogTeams-experiment-45-intelligent-layer-2-plus-exp-39-data-GGUF/blob/main/FogTeams-experiment-45-intelligent-layer-2-plus-exp-39-data-Q4_K_S.gguf) | 4475.28 MB |
| Q5_K_M | [FogTeams-experiment-45-intelligent-layer-2-plus-exp-39-data-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/FogTeams-experiment-45-intelligent-layer-2-plus-exp-39-data-GGUF/blob/main/FogTeams-experiment-45-intelligent-layer-2-plus-exp-39-data-Q5_K_M.gguf) | 5467.40 MB |
| Q5_K_S | [FogTeams-experiment-45-intelligent-layer-2-plus-exp-39-data-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/FogTeams-experiment-45-intelligent-layer-2-plus-exp-39-data-GGUF/blob/main/FogTeams-experiment-45-intelligent-layer-2-plus-exp-39-data-Q5_K_S.gguf) | 5339.90 MB |
| Q6_K | [FogTeams-experiment-45-intelligent-layer-2-plus-exp-39-data-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/FogTeams-experiment-45-intelligent-layer-2-plus-exp-39-data-GGUF/blob/main/FogTeams-experiment-45-intelligent-layer-2-plus-exp-39-data-Q6_K.gguf) | 6290.44 MB |
| Q8_0 | [FogTeams-experiment-45-intelligent-layer-2-plus-exp-39-data-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/FogTeams-experiment-45-intelligent-layer-2-plus-exp-39-data-GGUF/blob/main/FogTeams-experiment-45-intelligent-layer-2-plus-exp-39-data-Q8_0.gguf) | 8145.11 MB |
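These are plain GGUF files, so any GGUF-compatible runtime can load them. As a hedged sketch that is not part of the original card (it assumes `huggingface_hub` and `llama-cpp-python` are installed and picks the Q4_K_M file arbitrarily), downloading and running one quantization could look like this:
```python
# Hedged sketch, not from the card: fetch one of the quantized files listed above
# and run it locally with llama-cpp-python (an assumed runtime, not named by the card).
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

repo_id = "featherless-ai-quants/FogTeams-experiment-45-intelligent-layer-2-plus-exp-39-data-GGUF"
filename = "FogTeams-experiment-45-intelligent-layer-2-plus-exp-39-data-Q4_K_M.gguf"

# Download the GGUF file into the local Hugging Face cache and get its path.
model_path = hf_hub_download(repo_id=repo_id, filename=filename)

# Load the quantized model and generate a short completion.
llm = Llama(model_path=model_path, n_ctx=2048)
output = llm("Write one sentence about quantization:", max_tokens=64)
print(output["choices"][0]["text"])
```
As a rule of thumb, the smaller quantizations (Q2_K, Q3_K_*) trade accuracy for memory, while the larger ones (Q6_K, Q8_0) stay closer to the original weights.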
---
## ⚡ Powered by [Featherless AI](https://featherless.ai)
### Key Features
- 🔥 **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- 🛠️ **Zero Infrastructure** - No server setup or maintenance required
- 📚 **Vast Compatibility** - Support for 2400+ models and counting
- 💎 **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models)
|
kumo24/bert-sentiment
|
kumo24
| 2025-02-04T02:55:28Z | 45 | 0 | null |
[
"safetensors",
"bert",
"license:apache-2.0",
"region:us"
] | null | 2025-02-03T20:20:37Z |
---
license: apache-2.0
---
This BERT model was fine-tuned on more than 672k tweets from Twitter/X. It reaches a classification accuracy of 98%. \
The number of labels is 3: {0: Negative, 1: Neutral, 2: Positive}.
Here is an example of how to use it:
```python
import torch
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    pipeline,
)

checkpoint = 'kumo24/bert-sentiment'
id2label = {0: "negative", 1: "neutral", 2: "positive"}
label2id = {"negative": 0, "neutral": 1, "positive": 2}

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
if tokenizer.pad_token is None:
    tokenizer.add_special_tokens({'pad_token': '[PAD]'})

model = AutoModelForSequenceClassification.from_pretrained(
    checkpoint,
    num_labels=3,
    id2label=id2label,
    label2id=label2id,
)

# Run on GPU when available, otherwise fall back to CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)

sentiment_task = pipeline(
    "sentiment-analysis",
    model=model,
    tokenizer=tokenizer,
    device=device,
)

print(sentiment_task("Michigan Wolverines are Champions, Go Blue!"))
```
|
havinash-ai/f0a2e4f3-1036-40de-9274-3ca0adb47323
|
havinash-ai
| 2025-02-04T02:54:53Z | 9 | 0 |
peft
|
[
"peft",
"safetensors",
"gemma2",
"axolotl",
"generated_from_trainer",
"base_model:princeton-nlp/gemma-2-9b-it-SimPO",
"base_model:adapter:princeton-nlp/gemma-2-9b-it-SimPO",
"license:mit",
"region:us"
] | null | 2025-02-04T02:23:45Z |
---
library_name: peft
license: mit
base_model: princeton-nlp/gemma-2-9b-it-SimPO
tags:
- axolotl
- generated_from_trainer
model-index:
- name: f0a2e4f3-1036-40de-9274-3ca0adb47323
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: princeton-nlp/gemma-2-9b-it-SimPO
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 27445fcde1646c52_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/27445fcde1646c52_train_data.json
type:
field_instruction: article
field_output: summary
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 2
gradient_checkpointing: false
group_by_length: false
hub_model_id: havinash-ai/f0a2e4f3-1036-40de-9274-3ca0adb47323
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_steps: 250
micro_batch_size: 2
mlflow_experiment_name: /tmp/27445fcde1646c52_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 1fdacdcc-5748-4bbf-b058-02b8f37bd7ab
wandb_project: Mine-SN56-2-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 1fdacdcc-5748-4bbf-b058-02b8f37bd7ab
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# f0a2e4f3-1036-40de-9274-3ca0adb47323
This model is a fine-tuned version of [princeton-nlp/gemma-2-9b-it-SimPO](https://huggingface.co/princeton-nlp/gemma-2-9b-it-SimPO) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6824
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 250
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0000 | 1 | 2.0342 |
| 0.8211 | 0.0030 | 63 | 0.7868 |
| 0.4865 | 0.0060 | 126 | 0.7196 |
| 0.5538 | 0.0090 | 189 | 0.6824 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
featherless-ai-quants/ChaoticNeutrals-Eris_PrimeV4.20-Vision-32k-7B-GGUF
|
featherless-ai-quants
| 2025-02-04T02:54:21Z | 138 | 0 | null |
[
"gguf",
"text-generation",
"base_model:ChaoticNeutrals/Eris_PrimeV4.20-Vision-32k-7B",
"base_model:quantized:ChaoticNeutrals/Eris_PrimeV4.20-Vision-32k-7B",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2025-02-04T02:45:44Z |
---
base_model: ChaoticNeutrals/Eris_PrimeV4.20-Vision-32k-7B
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# ChaoticNeutrals/Eris_PrimeV4.20-Vision-32k-7B GGUF Quantizations 🚀

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple small fee.
---
## Available Quantizations 📊
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [ChaoticNeutrals-Eris_PrimeV4.20-Vision-32k-7B-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/ChaoticNeutrals-Eris_PrimeV4.20-Vision-32k-7B-GGUF/blob/main/ChaoticNeutrals-Eris_PrimeV4.20-Vision-32k-7B-IQ4_XS.gguf) | 3761.66 MB |
| Q2_K | [ChaoticNeutrals-Eris_PrimeV4.20-Vision-32k-7B-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/ChaoticNeutrals-Eris_PrimeV4.20-Vision-32k-7B-GGUF/blob/main/ChaoticNeutrals-Eris_PrimeV4.20-Vision-32k-7B-Q2_K.gguf) | 2593.27 MB |
| Q3_K_L | [ChaoticNeutrals-Eris_PrimeV4.20-Vision-32k-7B-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/ChaoticNeutrals-Eris_PrimeV4.20-Vision-32k-7B-GGUF/blob/main/ChaoticNeutrals-Eris_PrimeV4.20-Vision-32k-7B-Q3_K_L.gguf) | 3644.97 MB |
| Q3_K_M | [ChaoticNeutrals-Eris_PrimeV4.20-Vision-32k-7B-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/ChaoticNeutrals-Eris_PrimeV4.20-Vision-32k-7B-GGUF/blob/main/ChaoticNeutrals-Eris_PrimeV4.20-Vision-32k-7B-Q3_K_M.gguf) | 3355.97 MB |
| Q3_K_S | [ChaoticNeutrals-Eris_PrimeV4.20-Vision-32k-7B-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/ChaoticNeutrals-Eris_PrimeV4.20-Vision-32k-7B-GGUF/blob/main/ChaoticNeutrals-Eris_PrimeV4.20-Vision-32k-7B-Q3_K_S.gguf) | 3017.97 MB |
| Q4_K_M | [ChaoticNeutrals-Eris_PrimeV4.20-Vision-32k-7B-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/ChaoticNeutrals-Eris_PrimeV4.20-Vision-32k-7B-GGUF/blob/main/ChaoticNeutrals-Eris_PrimeV4.20-Vision-32k-7B-Q4_K_M.gguf) | 4166.07 MB |
| Q4_K_S | [ChaoticNeutrals-Eris_PrimeV4.20-Vision-32k-7B-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/ChaoticNeutrals-Eris_PrimeV4.20-Vision-32k-7B-GGUF/blob/main/ChaoticNeutrals-Eris_PrimeV4.20-Vision-32k-7B-Q4_K_S.gguf) | 3948.57 MB |
| Q5_K_M | [ChaoticNeutrals-Eris_PrimeV4.20-Vision-32k-7B-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/ChaoticNeutrals-Eris_PrimeV4.20-Vision-32k-7B-GGUF/blob/main/ChaoticNeutrals-Eris_PrimeV4.20-Vision-32k-7B-Q5_K_M.gguf) | 4893.69 MB |
| Q5_K_S | [ChaoticNeutrals-Eris_PrimeV4.20-Vision-32k-7B-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/ChaoticNeutrals-Eris_PrimeV4.20-Vision-32k-7B-GGUF/blob/main/ChaoticNeutrals-Eris_PrimeV4.20-Vision-32k-7B-Q5_K_S.gguf) | 4766.19 MB |
| Q6_K | [ChaoticNeutrals-Eris_PrimeV4.20-Vision-32k-7B-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/ChaoticNeutrals-Eris_PrimeV4.20-Vision-32k-7B-GGUF/blob/main/ChaoticNeutrals-Eris_PrimeV4.20-Vision-32k-7B-Q6_K.gguf) | 5666.80 MB |
| Q8_0 | [ChaoticNeutrals-Eris_PrimeV4.20-Vision-32k-7B-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/ChaoticNeutrals-Eris_PrimeV4.20-Vision-32k-7B-GGUF/blob/main/ChaoticNeutrals-Eris_PrimeV4.20-Vision-32k-7B-Q8_0.gguf) | 7339.34 MB |
---
## ⚡ Powered by [Featherless AI](https://featherless.ai)
### Key Features
- 🔥 **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- 🛠️ **Zero Infrastructure** - No server setup or maintenance required
- 📚 **Vast Compatibility** - Support for 2400+ models and counting
- 💎 **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models)
|
abenius/559863ef-fa70-4bc3-8021-4b1eb30929f9
|
abenius
| 2025-02-04T02:51:21Z | 12 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:NousResearch/Yarn-Llama-2-7b-128k",
"base_model:adapter:NousResearch/Yarn-Llama-2-7b-128k",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-02-04T02:34:49Z |
---
library_name: peft
base_model: NousResearch/Yarn-Llama-2-7b-128k
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 559863ef-fa70-4bc3-8021-4b1eb30929f9
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/Yarn-Llama-2-7b-128k
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- cff7ac798e6d5dcd_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/cff7ac798e6d5dcd_train_data.json
type:
field_input: input
field_instruction: instruction
field_output: response
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: null
eval_batch_size: 2
eval_max_new_tokens: 128
eval_steps: null
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: true
hub_model_id: abenius/559863ef-fa70-4bc3-8021-4b1eb30929f9
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 0.0001
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/cff7ac798e6d5dcd_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: null
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: techspear-hub
wandb_mode: online
wandb_name: cf8d9384-56f2-40e9-8877-64c5c8e6e996
wandb_project: Gradients-On-12
wandb_run: your_name
wandb_runid: cf8d9384-56f2-40e9-8877-64c5c8e6e996
warmup_steps: 5
weight_decay: 0.01
xformers_attention: null
```
</details><br>
# 559863ef-fa70-4bc3-8021-4b1eb30929f9
This model is a fine-tuned version of [NousResearch/Yarn-Llama-2-7b-128k](https://huggingface.co/NousResearch/Yarn-Llama-2-7b-128k) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8207
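The card stops at the evaluation loss; as a non-authoritative sketch (the prompt, dtype, and device placement below are assumptions), the LoRA adapter can be applied to the base model with `peft` roughly like this:
```python
# Hedged sketch, not part of the original card: load the Yarn-Llama-2-7b-128k base
# model and apply this LoRA adapter for inference.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "NousResearch/Yarn-Llama-2-7b-128k"
adapter_id = "abenius/559863ef-fa70-4bc3-8021-4b1eb30929f9"

tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
base = AutoModelForCausalLM.from_pretrained(
    base_id,
    torch_dtype=torch.bfloat16,  # dtype is an assumption, not stated by the card
    device_map="auto",
    trust_remote_code=True,
)
model = PeftModel.from_pretrained(base, adapter_id)

prompt = "Summarize what a LoRA adapter is in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(base.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```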
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 12.82 | 0.2947 | 200 | 1.8207 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
earnxus/12630b07-2524-46a5-b87b-17746a4405b6
|
earnxus
| 2025-02-04T02:51:20Z | 12 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:NousResearch/Yarn-Llama-2-7b-128k",
"base_model:adapter:NousResearch/Yarn-Llama-2-7b-128k",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-02-04T02:34:48Z |
---
library_name: peft
base_model: NousResearch/Yarn-Llama-2-7b-128k
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 12630b07-2524-46a5-b87b-17746a4405b6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/Yarn-Llama-2-7b-128k
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- cff7ac798e6d5dcd_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/cff7ac798e6d5dcd_train_data.json
type:
field_input: input
field_instruction: instruction
field_output: response
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: null
eval_batch_size: 2
eval_max_new_tokens: 128
eval_steps: null
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: true
hub_model_id: earnxus/12630b07-2524-46a5-b87b-17746a4405b6
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 0.0001
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/cff7ac798e6d5dcd_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: null
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: techspear-hub
wandb_mode: online
wandb_name: cf8d9384-56f2-40e9-8877-64c5c8e6e996
wandb_project: Gradients-On-Nine
wandb_run: your_name
wandb_runid: cf8d9384-56f2-40e9-8877-64c5c8e6e996
warmup_steps: 5
weight_decay: 0.01
xformers_attention: null
```
</details><br>
# 12630b07-2524-46a5-b87b-17746a4405b6
This model is a fine-tuned version of [NousResearch/Yarn-Llama-2-7b-128k](https://huggingface.co/NousResearch/Yarn-Llama-2-7b-128k) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8095
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 12.8089 | 0.2947 | 200 | 1.8095 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
InsultedByMathematics/alpha_1e-4_beta_3e-3
|
InsultedByMathematics
| 2025-02-04T02:51:11Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-02-02T18:20:44Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
newnexum/Carlos
|
newnexum
| 2025-02-04T02:49:57Z | 18 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-02-04T02:26:47Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: Carlos
---
# Carlos
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `Carlos` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('newnexum/Carlos', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]  # include the trigger word "Carlos" in your prompt
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
alchemist69/11d38ba5-ecbd-400b-a4ae-49926680a2ab
|
alchemist69
| 2025-02-04T02:47:09Z | 9 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:fxmarty/tiny-llama-fast-tokenizer",
"base_model:adapter:fxmarty/tiny-llama-fast-tokenizer",
"region:us"
] | null | 2025-02-04T02:45:48Z |
---
library_name: peft
base_model: fxmarty/tiny-llama-fast-tokenizer
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 11d38ba5-ecbd-400b-a4ae-49926680a2ab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: fxmarty/tiny-llama-fast-tokenizer
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 8277d95e38f8c211_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/8277d95e38f8c211_train_data.json
type:
field_input: spans
field_instruction: document
field_output: query
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: 5
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: alchemist69/11d38ba5-ecbd-400b-a4ae-49926680a2ab
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 8
mlflow_experiment_name: /tmp/8277d95e38f8c211_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
sequence_len: 1024
special_tokens:
pad_token: </s>
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: bbd31077-243a-452b-a84a-48bd4f630777
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: bbd31077-243a-452b-a84a-48bd4f630777
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 11d38ba5-ecbd-400b-a4ae-49926680a2ab
This model is a fine-tuned version of [fxmarty/tiny-llama-fast-tokenizer](https://huggingface.co/fxmarty/tiny-llama-fast-tokenizer) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 10.3208
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 10.3703 | 0.0007 | 1 | 10.3644 |
| 10.3313 | 0.0366 | 50 | 10.3349 |
| 10.3337 | 0.0731 | 100 | 10.3220 |
| 10.3167 | 0.1097 | 150 | 10.3208 |
| 10.3138 | 0.1463 | 200 | 10.3208 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
featherless-ai-quants/ChaoticNeutrals-Stanta-Lelemon-Maid-7B-GGUF
|
featherless-ai-quants
| 2025-02-04T02:44:56Z | 151 | 0 | null |
[
"gguf",
"text-generation",
"base_model:ChaoticNeutrals/Stanta-Lelemon-Maid-7B",
"base_model:quantized:ChaoticNeutrals/Stanta-Lelemon-Maid-7B",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-02-04T02:36:07Z |
---
base_model: ChaoticNeutrals/Stanta-Lelemon-Maid-7B
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# ChaoticNeutrals/Stanta-Lelemon-Maid-7B GGUF Quantizations 🚀

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple small fee.
---
## Available Quantizations 📊
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [ChaoticNeutrals-Stanta-Lelemon-Maid-7B-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/ChaoticNeutrals-Stanta-Lelemon-Maid-7B-GGUF/blob/main/ChaoticNeutrals-Stanta-Lelemon-Maid-7B-IQ4_XS.gguf) | 3761.66 MB |
| Q2_K | [ChaoticNeutrals-Stanta-Lelemon-Maid-7B-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/ChaoticNeutrals-Stanta-Lelemon-Maid-7B-GGUF/blob/main/ChaoticNeutrals-Stanta-Lelemon-Maid-7B-Q2_K.gguf) | 2593.27 MB |
| Q3_K_L | [ChaoticNeutrals-Stanta-Lelemon-Maid-7B-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/ChaoticNeutrals-Stanta-Lelemon-Maid-7B-GGUF/blob/main/ChaoticNeutrals-Stanta-Lelemon-Maid-7B-Q3_K_L.gguf) | 3644.97 MB |
| Q3_K_M | [ChaoticNeutrals-Stanta-Lelemon-Maid-7B-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/ChaoticNeutrals-Stanta-Lelemon-Maid-7B-GGUF/blob/main/ChaoticNeutrals-Stanta-Lelemon-Maid-7B-Q3_K_M.gguf) | 3355.97 MB |
| Q3_K_S | [ChaoticNeutrals-Stanta-Lelemon-Maid-7B-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/ChaoticNeutrals-Stanta-Lelemon-Maid-7B-GGUF/blob/main/ChaoticNeutrals-Stanta-Lelemon-Maid-7B-Q3_K_S.gguf) | 3017.97 MB |
| Q4_K_M | [ChaoticNeutrals-Stanta-Lelemon-Maid-7B-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/ChaoticNeutrals-Stanta-Lelemon-Maid-7B-GGUF/blob/main/ChaoticNeutrals-Stanta-Lelemon-Maid-7B-Q4_K_M.gguf) | 4166.07 MB |
| Q4_K_S | [ChaoticNeutrals-Stanta-Lelemon-Maid-7B-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/ChaoticNeutrals-Stanta-Lelemon-Maid-7B-GGUF/blob/main/ChaoticNeutrals-Stanta-Lelemon-Maid-7B-Q4_K_S.gguf) | 3948.57 MB |
| Q5_K_M | [ChaoticNeutrals-Stanta-Lelemon-Maid-7B-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/ChaoticNeutrals-Stanta-Lelemon-Maid-7B-GGUF/blob/main/ChaoticNeutrals-Stanta-Lelemon-Maid-7B-Q5_K_M.gguf) | 4893.69 MB |
| Q5_K_S | [ChaoticNeutrals-Stanta-Lelemon-Maid-7B-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/ChaoticNeutrals-Stanta-Lelemon-Maid-7B-GGUF/blob/main/ChaoticNeutrals-Stanta-Lelemon-Maid-7B-Q5_K_S.gguf) | 4766.19 MB |
| Q6_K | [ChaoticNeutrals-Stanta-Lelemon-Maid-7B-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/ChaoticNeutrals-Stanta-Lelemon-Maid-7B-GGUF/blob/main/ChaoticNeutrals-Stanta-Lelemon-Maid-7B-Q6_K.gguf) | 5666.80 MB |
| Q8_0 | [ChaoticNeutrals-Stanta-Lelemon-Maid-7B-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/ChaoticNeutrals-Stanta-Lelemon-Maid-7B-GGUF/blob/main/ChaoticNeutrals-Stanta-Lelemon-Maid-7B-Q8_0.gguf) | 7339.34 MB |
---
## ⚡ Powered by [Featherless AI](https://featherless.ai)
### Key Features
- 🔥 **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- 🛠️ **Zero Infrastructure** - No server setup or maintenance required
- 📚 **Vast Compatibility** - Support for 2400+ models and counting
- 💎 **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models)
|
antimage88/b0565cf5-342a-4520-9361-476dac07d7d0
|
antimage88
| 2025-02-04T02:42:19Z | 10 | 0 |
peft
|
[
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2.5-1.5B",
"base_model:adapter:unsloth/Qwen2.5-1.5B",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-02-04T02:19:15Z |
---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2.5-1.5B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: b0565cf5-342a-4520-9361-476dac07d7d0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2.5-1.5B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 3d9f1df0279b4eb5_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/3d9f1df0279b4eb5_train_data.json
type:
field_input: context
field_instruction: background
field_output: question
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: null
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: null
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: antimage88/b0565cf5-342a-4520-9361-476dac07d7d0
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 0.0001
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 4
mlflow_experiment_name: /tmp/3d9f1df0279b4eb5_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: null
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: ef12fd27-895e-4a23-bdde-4567c829c2e3
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: ef12fd27-895e-4a23-bdde-4567c829c2e3
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# b0565cf5-342a-4520-9361-476dac07d7d0
This model is a fine-tuned version of [unsloth/Qwen2.5-1.5B](https://huggingface.co/unsloth/Qwen2.5-1.5B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6806
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.2016 | 0.0371 | 200 | 1.6806 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
blood34/36655a94-8b2a-4c53-b692-9a64f4cf2ee3
|
blood34
| 2025-02-04T02:41:04Z | 10 | 0 |
peft
|
[
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2.5-0.5B",
"base_model:adapter:unsloth/Qwen2.5-0.5B",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-02-04T02:33:28Z |
---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2.5-0.5B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 36655a94-8b2a-4c53-b692-9a64f4cf2ee3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2.5-0.5B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 9ee4c7d4f914610d_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/9ee4c7d4f914610d_train_data.json
type:
field_input: input
field_instruction: instruction
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: null
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: null
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: blood34/36655a94-8b2a-4c53-b692-9a64f4cf2ee3
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 0.0001
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 4
mlflow_experiment_name: /tmp/9ee4c7d4f914610d_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: null
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 8c04dad1-b647-409f-8c82-04b3516dd360
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 8c04dad1-b647-409f-8c82-04b3516dd360
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 36655a94-8b2a-4c53-b692-9a64f4cf2ee3
This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B](https://huggingface.co/unsloth/Qwen2.5-0.5B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5384
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 139
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.5491 | 0.9982 | 138 | 0.5401 |
| 0.71 | 1.0054 | 139 | 0.5384 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
kk-aivio/98de4832-57f4-4ded-b26d-cbc90fad2011
|
kk-aivio
| 2025-02-04T02:38:26Z | 12 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:NousResearch/Yarn-Llama-2-7b-128k",
"base_model:adapter:NousResearch/Yarn-Llama-2-7b-128k",
"region:us"
] | null | 2025-02-04T02:34:42Z |
---
library_name: peft
base_model: NousResearch/Yarn-Llama-2-7b-128k
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 98de4832-57f4-4ded-b26d-cbc90fad2011
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/Yarn-Llama-2-7b-128k
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- cff7ac798e6d5dcd_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/cff7ac798e6d5dcd_train_data.json
type:
field_input: input
field_instruction: instruction
field_output: response
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: kk-aivio/98de4832-57f4-4ded-b26d-cbc90fad2011
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/cff7ac798e6d5dcd_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: cf8d9384-56f2-40e9-8877-64c5c8e6e996
wandb_project: Birthday-SN56-17-Gradients-On-Demand
wandb_run: your_name
wandb_runid: cf8d9384-56f2-40e9-8877-64c5c8e6e996
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 98de4832-57f4-4ded-b26d-cbc90fad2011
This model is a fine-tuned version of [NousResearch/Yarn-Llama-2-7b-128k](https://huggingface.co/NousResearch/Yarn-Llama-2-7b-128k) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5955
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0015 | 1 | 3.0766 |
| 7.4635 | 0.0737 | 50 | 2.0064 |
| 7.4142 | 0.1473 | 100 | 1.7402 |
| 6.0921 | 0.2210 | 150 | 1.6190 |
| 6.1163 | 0.2947 | 200 | 1.5955 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
pipidepulus/hojas
|
pipidepulus
| 2025-02-04T02:35:44Z | 6 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2025-02-04T02:25:02Z |
---
library_name: transformers
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: hojas
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hojas
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0200
- Accuracy: 0.9925
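The sections below are still placeholders, so as a minimal, non-authoritative sketch (the image path `leaf.jpg` is a placeholder), inference with the `transformers` pipeline could look like:
```python
# Hedged sketch, not from the card: classify an image with the fine-tuned ViT model.
from transformers import pipeline

classifier = pipeline("image-classification", model="pipidepulus/hojas")
predictions = classifier("leaf.jpg")  # local file path or URL
for pred in predictions:
    print(f"{pred['label']}: {pred['score']:.3f}")
```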
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.1305 | 3.8462 | 500 | 0.0200 | 0.9925 |
### Framework versions
- Transformers 4.47.1
- Pytorch 2.5.1+cu124
- Tokenizers 0.21.0
|
batrider32/3fd1cba6-e769-49f1-bd50-1d4545cb45b6
|
batrider32
| 2025-02-04T02:34:49Z | 9 | 0 |
peft
|
[
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2.5-1.5B",
"base_model:adapter:unsloth/Qwen2.5-1.5B",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-02-04T02:11:50Z |
---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2.5-1.5B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 3fd1cba6-e769-49f1-bd50-1d4545cb45b6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2.5-1.5B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 3d9f1df0279b4eb5_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/3d9f1df0279b4eb5_train_data.json
type:
field_input: context
field_instruction: background
field_output: question
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: null
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: null
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: batrider32/3fd1cba6-e769-49f1-bd50-1d4545cb45b6
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 0.0001
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 4
mlflow_experiment_name: /tmp/3d9f1df0279b4eb5_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: null
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: ef12fd27-895e-4a23-bdde-4567c829c2e3
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: ef12fd27-895e-4a23-bdde-4567c829c2e3
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 3fd1cba6-e769-49f1-bd50-1d4545cb45b6
This model is a fine-tuned version of [unsloth/Qwen2.5-1.5B](https://huggingface.co/unsloth/Qwen2.5-1.5B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6823
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.2217 | 0.0371 | 200 | 1.6823 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
archit11/smollm350m-grpo
|
archit11
| 2025-02-04T02:34:40Z | 14 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-02-02T12:41:54Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
SmolLM 350M trained for 500 steps on GSM8K with GRPO; it gains a 2% accuracy boost over the base model.
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
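Until the card is completed, the following minimal sketch shows one way to query the model, assuming a standard transformers text-generation setup (the GSM8K-style prompt is illustrative):
```python
# Minimal sketch (assumption: plain text-generation usage; adapt if a chat
# template is required).
from transformers import pipeline

generator = pipeline("text-generation", model="archit11/smollm350m-grpo")

# Illustrative GSM8K-style word problem.
prompt = ("Natalia sold clips to 48 of her friends in April, and then she sold "
          "half as many clips in May. How many clips did she sell altogether?")
result = generator(prompt, max_new_tokens=256, do_sample=False)
print(result[0]["generated_text"])
```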
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
lesso/7a7e5161-5875-42cc-b67d-ede2e161c29e
|
lesso
| 2025-02-04T02:34:40Z | 8 | 0 |
peft
|
[
"peft",
"safetensors",
"gemma2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/gemma-2-2b",
"base_model:adapter:unsloth/gemma-2-2b",
"license:gemma",
"region:us"
] | null | 2025-02-04T02:22:33Z |
---
library_name: peft
license: gemma
base_model: unsloth/gemma-2-2b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 7a7e5161-5875-42cc-b67d-ede2e161c29e
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/gemma-2-2b
bf16: true
chat_template: llama3
data_processes: 16
dataset_prepared_path: null
datasets:
- data_files:
- 7f4ffc4da3710d39_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/7f4ffc4da3710d39_train_data.json
type:
field_input: text
field_instruction: task_name
field_output: hypothesis
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
do_eval: true
early_stopping_patience: 5
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 2
gradient_checkpointing: true
group_by_length: true
hub_model_id: lesso/7a7e5161-5875-42cc-b67d-ede2e161c29e
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: linear
max_grad_norm: 1.0
max_steps: 200
micro_batch_size: 4
mlflow_experiment_name: /tmp/god01/7f4ffc4da3710d39_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: a4f7ae30-2ca5-42fa-a4c8-6320e54b4228
wandb_project: ab-god01
wandb_run: your_name
wandb_runid: a4f7ae30-2ca5-42fa-a4c8-6320e54b4228
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 7a7e5161-5875-42cc-b67d-ede2e161c29e
This model is a fine-tuned version of [unsloth/gemma-2-2b](https://huggingface.co/unsloth/gemma-2-2b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1695
## Model description
More information needed
## Intended uses & limitations
More information needed
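For standalone deployment, the adapter can be merged into the base weights. A minimal sketch with peft (the output directory name is illustrative):
```python
# Minimal sketch: merge the LoRA adapter into unsloth/gemma-2-2b and save a
# standalone checkpoint (output path is illustrative).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("unsloth/gemma-2-2b", torch_dtype=torch.bfloat16)
model = PeftModel.from_pretrained(base, "lesso/7a7e5161-5875-42cc-b67d-ede2e161c29e")

merged = model.merge_and_unload()  # folds the LoRA deltas into the base weights
merged.save_pretrained("gemma-2-2b-merged")

tokenizer = AutoTokenizer.from_pretrained("unsloth/gemma-2-2b")
tokenizer.save_pretrained("gemma-2-2b-merged")
```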
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: ADAMW_BNB with betas=(0.9,0.999), epsilon=1e-08, and optimizer args adam_beta1=0.9, adam_beta2=0.95, adam_epsilon=1e-5
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 3.7729 | 0.0004 | 1 | 3.0150 |
| 0.5912 | 0.0199 | 50 | 0.5331 |
| 0.2983 | 0.0398 | 100 | 0.2335 |
| 0.1296 | 0.0598 | 150 | 0.1907 |
| 0.0004 | 0.0797 | 200 | 0.1695 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
MayBashendy/ArabicNewSplits7_B_usingWellWrittenEssays_FineTuningAraBERT_run999_AugV5_k1_task7_organization
|
MayBashendy
| 2025-02-04T02:33:41Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:aubmindlab/bert-base-arabertv02",
"base_model:finetune:aubmindlab/bert-base-arabertv02",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-02-04T02:27:51Z |
---
library_name: transformers
base_model: aubmindlab/bert-base-arabertv02
tags:
- generated_from_trainer
model-index:
- name: ArabicNewSplits7_B_usingWellWrittenEssays_FineTuningAraBERT_run999_AugV5_k1_task7_organization
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ArabicNewSplits7_B_usingWellWrittenEssays_FineTuningAraBERT_run999_AugV5_k1_task7_organization
This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4163
- Qwk: 0.5267
- Mse: 0.4163
- Rmse: 0.6452
## Model description
More information needed
## Intended uses & limitations
More information needed
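Pending a fuller card, a minimal inference sketch is shown below. The single-score, regression-style head is an assumption based on the Qwk/MSE/RMSE metrics reported above:
```python
# Minimal sketch: score an Arabic essay for organization.
# Assumption: the fine-tuned head emits a single regression-style score,
# as suggested by the Qwk/MSE/RMSE metrics reported above.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

repo = "MayBashendy/ArabicNewSplits7_B_usingWellWrittenEssays_FineTuningAraBERT_run999_AugV5_k1_task7_organization"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)

text = "..."  # placeholder: Arabic essay text to score
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    score = model(**inputs).logits.squeeze()
print(score)
```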
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:-------:|:----:|:---------------:|:-------:|:------:|:------:|
| No log | 0.2857 | 2 | 2.5847 | -0.0545 | 2.5847 | 1.6077 |
| No log | 0.5714 | 4 | 1.1679 | 0.0993 | 1.1679 | 1.0807 |
| No log | 0.8571 | 6 | 0.7084 | 0.0893 | 0.7084 | 0.8417 |
| No log | 1.1429 | 8 | 0.8818 | 0.2651 | 0.8818 | 0.9391 |
| No log | 1.4286 | 10 | 0.8770 | 0.2552 | 0.8770 | 0.9365 |
| No log | 1.7143 | 12 | 0.7326 | 0.2871 | 0.7326 | 0.8559 |
| No log | 2.0 | 14 | 0.6137 | 0.3197 | 0.6137 | 0.7834 |
| No log | 2.2857 | 16 | 0.5657 | 0.4161 | 0.5657 | 0.7521 |
| No log | 2.5714 | 18 | 0.5944 | 0.3416 | 0.5944 | 0.7710 |
| No log | 2.8571 | 20 | 0.5174 | 0.4561 | 0.5174 | 0.7193 |
| No log | 3.1429 | 22 | 0.5026 | 0.4354 | 0.5026 | 0.7090 |
| No log | 3.4286 | 24 | 0.4933 | 0.4444 | 0.4933 | 0.7023 |
| No log | 3.7143 | 26 | 0.5254 | 0.4370 | 0.5254 | 0.7249 |
| No log | 4.0 | 28 | 0.5601 | 0.4330 | 0.5601 | 0.7484 |
| No log | 4.2857 | 30 | 0.4951 | 0.5466 | 0.4951 | 0.7037 |
| No log | 4.5714 | 32 | 0.4624 | 0.5373 | 0.4624 | 0.6800 |
| No log | 4.8571 | 34 | 0.4230 | 0.6295 | 0.4230 | 0.6504 |
| No log | 5.1429 | 36 | 0.4533 | 0.5868 | 0.4533 | 0.6733 |
| No log | 5.4286 | 38 | 0.3891 | 0.6458 | 0.3891 | 0.6238 |
| No log | 5.7143 | 40 | 0.4106 | 0.6184 | 0.4106 | 0.6408 |
| No log | 6.0 | 42 | 0.5541 | 0.6587 | 0.5541 | 0.7444 |
| No log | 6.2857 | 44 | 0.5450 | 0.6263 | 0.5450 | 0.7383 |
| No log | 6.5714 | 46 | 0.4511 | 0.5798 | 0.4511 | 0.6717 |
| No log | 6.8571 | 48 | 0.5396 | 0.6765 | 0.5396 | 0.7346 |
| No log | 7.1429 | 50 | 0.3981 | 0.7123 | 0.3981 | 0.6309 |
| No log | 7.4286 | 52 | 0.5540 | 0.5657 | 0.5540 | 0.7443 |
| No log | 7.7143 | 54 | 1.0452 | 0.2990 | 1.0452 | 1.0223 |
| No log | 8.0 | 56 | 1.0185 | 0.3290 | 1.0185 | 1.0092 |
| No log | 8.2857 | 58 | 0.5688 | 0.5017 | 0.5688 | 0.7542 |
| No log | 8.5714 | 60 | 0.4385 | 0.6313 | 0.4385 | 0.6622 |
| No log | 8.8571 | 62 | 0.5364 | 0.5722 | 0.5364 | 0.7324 |
| No log | 9.1429 | 64 | 0.4873 | 0.6670 | 0.4873 | 0.6981 |
| No log | 9.4286 | 66 | 0.4862 | 0.5339 | 0.4862 | 0.6972 |
| No log | 9.7143 | 68 | 0.6790 | 0.4921 | 0.6790 | 0.8240 |
| No log | 10.0 | 70 | 0.6722 | 0.5160 | 0.6722 | 0.8199 |
| No log | 10.2857 | 72 | 0.5552 | 0.5498 | 0.5552 | 0.7451 |
| No log | 10.5714 | 74 | 0.4779 | 0.6010 | 0.4779 | 0.6913 |
| No log | 10.8571 | 76 | 0.4970 | 0.5817 | 0.4970 | 0.7050 |
| No log | 11.1429 | 78 | 0.4601 | 0.6076 | 0.4601 | 0.6783 |
| No log | 11.4286 | 80 | 0.5461 | 0.5672 | 0.5461 | 0.7390 |
| No log | 11.7143 | 82 | 0.6890 | 0.4667 | 0.6890 | 0.8301 |
| No log | 12.0 | 84 | 0.5512 | 0.5315 | 0.5512 | 0.7424 |
| No log | 12.2857 | 86 | 0.4524 | 0.5633 | 0.4524 | 0.6726 |
| No log | 12.5714 | 88 | 0.5360 | 0.5481 | 0.5360 | 0.7321 |
| No log | 12.8571 | 90 | 0.6668 | 0.5243 | 0.6668 | 0.8166 |
| No log | 13.1429 | 92 | 0.4830 | 0.5570 | 0.4830 | 0.6950 |
| No log | 13.4286 | 94 | 0.4624 | 0.5339 | 0.4624 | 0.6800 |
| No log | 13.7143 | 96 | 0.5056 | 0.5470 | 0.5056 | 0.7110 |
| No log | 14.0 | 98 | 0.4221 | 0.5475 | 0.4221 | 0.6497 |
| No log | 14.2857 | 100 | 0.4074 | 0.6596 | 0.4074 | 0.6383 |
| No log | 14.5714 | 102 | 0.4089 | 0.6596 | 0.4089 | 0.6395 |
| No log | 14.8571 | 104 | 0.4013 | 0.6060 | 0.4013 | 0.6335 |
| No log | 15.1429 | 106 | 0.4072 | 0.5722 | 0.4072 | 0.6381 |
| No log | 15.4286 | 108 | 0.4069 | 0.5479 | 0.4069 | 0.6379 |
| No log | 15.7143 | 110 | 0.3896 | 0.6201 | 0.3896 | 0.6242 |
| No log | 16.0 | 112 | 0.4330 | 0.6388 | 0.4330 | 0.6580 |
| No log | 16.2857 | 114 | 0.4085 | 0.6490 | 0.4085 | 0.6392 |
| No log | 16.5714 | 116 | 0.4058 | 0.6278 | 0.4058 | 0.6370 |
| No log | 16.8571 | 118 | 0.4000 | 0.6490 | 0.4000 | 0.6325 |
| No log | 17.1429 | 120 | 0.3925 | 0.5539 | 0.3925 | 0.6265 |
| No log | 17.4286 | 122 | 0.3918 | 0.5782 | 0.3918 | 0.6259 |
| No log | 17.7143 | 124 | 0.3939 | 0.6503 | 0.3939 | 0.6276 |
| No log | 18.0 | 126 | 0.4008 | 0.5853 | 0.4008 | 0.6331 |
| No log | 18.2857 | 128 | 0.4045 | 0.6701 | 0.4045 | 0.6360 |
| No log | 18.5714 | 130 | 0.4060 | 0.6142 | 0.4060 | 0.6371 |
| No log | 18.8571 | 132 | 0.4484 | 0.6169 | 0.4484 | 0.6696 |
| No log | 19.1429 | 134 | 0.4426 | 0.6169 | 0.4426 | 0.6653 |
| No log | 19.4286 | 136 | 0.4139 | 0.6678 | 0.4139 | 0.6433 |
| No log | 19.7143 | 138 | 0.3923 | 0.6643 | 0.3923 | 0.6263 |
| No log | 20.0 | 140 | 0.3947 | 0.6747 | 0.3947 | 0.6283 |
| No log | 20.2857 | 142 | 0.4032 | 0.6854 | 0.4032 | 0.6349 |
| No log | 20.5714 | 144 | 0.3916 | 0.6542 | 0.3916 | 0.6258 |
| No log | 20.8571 | 146 | 0.3908 | 0.6627 | 0.3908 | 0.6252 |
| No log | 21.1429 | 148 | 0.4166 | 0.7052 | 0.4166 | 0.6455 |
| No log | 21.4286 | 150 | 0.4734 | 0.5567 | 0.4734 | 0.6880 |
| No log | 21.7143 | 152 | 0.5077 | 0.6088 | 0.5077 | 0.7126 |
| No log | 22.0 | 154 | 0.4498 | 0.6287 | 0.4498 | 0.6707 |
| No log | 22.2857 | 156 | 0.4239 | 0.6968 | 0.4239 | 0.6511 |
| No log | 22.5714 | 158 | 0.4240 | 0.6975 | 0.4240 | 0.6511 |
| No log | 22.8571 | 160 | 0.4161 | 0.6643 | 0.4161 | 0.6451 |
| No log | 23.1429 | 162 | 0.4205 | 0.5698 | 0.4205 | 0.6485 |
| No log | 23.4286 | 164 | 0.4184 | 0.6229 | 0.4184 | 0.6468 |
| No log | 23.7143 | 166 | 0.4235 | 0.5698 | 0.4235 | 0.6508 |
| No log | 24.0 | 168 | 0.4586 | 0.5124 | 0.4586 | 0.6772 |
| No log | 24.2857 | 170 | 0.4622 | 0.4881 | 0.4622 | 0.6799 |
| No log | 24.5714 | 172 | 0.4639 | 0.5527 | 0.4639 | 0.6811 |
| No log | 24.8571 | 174 | 0.4502 | 0.5649 | 0.4502 | 0.6709 |
| No log | 25.1429 | 176 | 0.4411 | 0.5974 | 0.4411 | 0.6641 |
| No log | 25.4286 | 178 | 0.4654 | 0.6305 | 0.4654 | 0.6822 |
| No log | 25.7143 | 180 | 0.4581 | 0.6296 | 0.4581 | 0.6769 |
| No log | 26.0 | 182 | 0.4324 | 0.5926 | 0.4324 | 0.6576 |
| No log | 26.2857 | 184 | 0.4312 | 0.5656 | 0.4312 | 0.6567 |
| No log | 26.5714 | 186 | 0.4436 | 0.5831 | 0.4436 | 0.6660 |
| No log | 26.8571 | 188 | 0.4371 | 0.5731 | 0.4371 | 0.6611 |
| No log | 27.1429 | 190 | 0.4254 | 0.5860 | 0.4254 | 0.6522 |
| No log | 27.4286 | 192 | 0.4413 | 0.6201 | 0.4413 | 0.6643 |
| No log | 27.7143 | 194 | 0.4523 | 0.6495 | 0.4523 | 0.6725 |
| No log | 28.0 | 196 | 0.4151 | 0.6983 | 0.4151 | 0.6443 |
| No log | 28.2857 | 198 | 0.3907 | 0.6828 | 0.3907 | 0.6251 |
| No log | 28.5714 | 200 | 0.4017 | 0.6183 | 0.4017 | 0.6338 |
| No log | 28.8571 | 202 | 0.3992 | 0.6183 | 0.3992 | 0.6319 |
| No log | 29.1429 | 204 | 0.3900 | 0.7095 | 0.3900 | 0.6245 |
| No log | 29.4286 | 206 | 0.3955 | 0.7073 | 0.3955 | 0.6289 |
| No log | 29.7143 | 208 | 0.3990 | 0.6479 | 0.3990 | 0.6317 |
| No log | 30.0 | 210 | 0.4296 | 0.6127 | 0.4296 | 0.6555 |
| No log | 30.2857 | 212 | 0.4053 | 0.6292 | 0.4053 | 0.6366 |
| No log | 30.5714 | 214 | 0.3996 | 0.7073 | 0.3996 | 0.6322 |
| No log | 30.8571 | 216 | 0.4009 | 0.7073 | 0.4009 | 0.6331 |
| No log | 31.1429 | 218 | 0.3906 | 0.7003 | 0.3906 | 0.6250 |
| No log | 31.4286 | 220 | 0.4075 | 0.6402 | 0.4075 | 0.6384 |
| No log | 31.7143 | 222 | 0.4055 | 0.6407 | 0.4055 | 0.6368 |
| No log | 32.0 | 224 | 0.3925 | 0.6750 | 0.3925 | 0.6265 |
| No log | 32.2857 | 226 | 0.4021 | 0.6720 | 0.4021 | 0.6341 |
| No log | 32.5714 | 228 | 0.4088 | 0.6890 | 0.4088 | 0.6394 |
| No log | 32.8571 | 230 | 0.4200 | 0.6371 | 0.4200 | 0.6481 |
| No log | 33.1429 | 232 | 0.4313 | 0.6046 | 0.4313 | 0.6568 |
| No log | 33.4286 | 234 | 0.4369 | 0.6145 | 0.4369 | 0.6610 |
| No log | 33.7143 | 236 | 0.4467 | 0.6687 | 0.4467 | 0.6684 |
| No log | 34.0 | 238 | 0.4332 | 0.6973 | 0.4332 | 0.6582 |
| No log | 34.2857 | 240 | 0.4293 | 0.5649 | 0.4293 | 0.6552 |
| No log | 34.5714 | 242 | 0.4686 | 0.5528 | 0.4686 | 0.6845 |
| No log | 34.8571 | 244 | 0.4966 | 0.5808 | 0.4966 | 0.7047 |
| No log | 35.1429 | 246 | 0.4907 | 0.5883 | 0.4907 | 0.7005 |
| No log | 35.4286 | 248 | 0.4640 | 0.5672 | 0.4640 | 0.6812 |
| No log | 35.7143 | 250 | 0.4102 | 0.6395 | 0.4102 | 0.6405 |
| No log | 36.0 | 252 | 0.3968 | 0.6645 | 0.3968 | 0.6299 |
| No log | 36.2857 | 254 | 0.3963 | 0.6464 | 0.3963 | 0.6296 |
| No log | 36.5714 | 256 | 0.4017 | 0.6282 | 0.4017 | 0.6338 |
| No log | 36.8571 | 258 | 0.3942 | 0.6154 | 0.3942 | 0.6279 |
| No log | 37.1429 | 260 | 0.3802 | 0.7227 | 0.3802 | 0.6166 |
| No log | 37.4286 | 262 | 0.3829 | 0.7085 | 0.3829 | 0.6188 |
| No log | 37.7143 | 264 | 0.3833 | 0.7238 | 0.3833 | 0.6191 |
| No log | 38.0 | 266 | 0.3820 | 0.7588 | 0.3820 | 0.6180 |
| No log | 38.2857 | 268 | 0.4355 | 0.5908 | 0.4355 | 0.6600 |
| No log | 38.5714 | 270 | 0.4503 | 0.5908 | 0.4503 | 0.6710 |
| No log | 38.8571 | 272 | 0.4037 | 0.6771 | 0.4037 | 0.6354 |
| No log | 39.1429 | 274 | 0.3847 | 0.6542 | 0.3847 | 0.6202 |
| No log | 39.4286 | 276 | 0.4026 | 0.6264 | 0.4026 | 0.6345 |
| No log | 39.7143 | 278 | 0.4247 | 0.6156 | 0.4247 | 0.6517 |
| No log | 40.0 | 280 | 0.4182 | 0.6264 | 0.4182 | 0.6467 |
| No log | 40.2857 | 282 | 0.4135 | 0.6374 | 0.4135 | 0.6430 |
| No log | 40.5714 | 284 | 0.4195 | 0.5305 | 0.4195 | 0.6477 |
| No log | 40.8571 | 286 | 0.4320 | 0.5065 | 0.4320 | 0.6573 |
| No log | 41.1429 | 288 | 0.4285 | 0.5065 | 0.4285 | 0.6546 |
| No log | 41.4286 | 290 | 0.4202 | 0.5539 | 0.4202 | 0.6482 |
| No log | 41.7143 | 292 | 0.4184 | 0.5846 | 0.4184 | 0.6469 |
| No log | 42.0 | 294 | 0.4239 | 0.5580 | 0.4239 | 0.6511 |
| No log | 42.2857 | 296 | 0.4373 | 0.5266 | 0.4373 | 0.6613 |
| No log | 42.5714 | 298 | 0.4366 | 0.5195 | 0.4366 | 0.6608 |
| No log | 42.8571 | 300 | 0.4208 | 0.6184 | 0.4208 | 0.6487 |
| No log | 43.1429 | 302 | 0.4088 | 0.6634 | 0.4088 | 0.6394 |
| No log | 43.4286 | 304 | 0.4047 | 0.6344 | 0.4047 | 0.6362 |
| No log | 43.7143 | 306 | 0.4030 | 0.6555 | 0.4030 | 0.6348 |
| No log | 44.0 | 308 | 0.4029 | 0.7266 | 0.4029 | 0.6348 |
| No log | 44.2857 | 310 | 0.4053 | 0.7154 | 0.4053 | 0.6366 |
| No log | 44.5714 | 312 | 0.3944 | 0.6724 | 0.3944 | 0.6280 |
| No log | 44.8571 | 314 | 0.3886 | 0.6555 | 0.3886 | 0.6234 |
| No log | 45.1429 | 316 | 0.4266 | 0.5569 | 0.4266 | 0.6531 |
| No log | 45.4286 | 318 | 0.4584 | 0.5983 | 0.4584 | 0.6771 |
| No log | 45.7143 | 320 | 0.4464 | 0.5779 | 0.4464 | 0.6681 |
| No log | 46.0 | 322 | 0.4085 | 0.6282 | 0.4085 | 0.6392 |
| No log | 46.2857 | 324 | 0.3959 | 0.6648 | 0.3959 | 0.6292 |
| No log | 46.5714 | 326 | 0.3991 | 0.6648 | 0.3991 | 0.6317 |
| No log | 46.8571 | 328 | 0.4040 | 0.5930 | 0.4040 | 0.6356 |
| No log | 47.1429 | 330 | 0.4067 | 0.5915 | 0.4067 | 0.6377 |
| No log | 47.4286 | 332 | 0.4121 | 0.6046 | 0.4121 | 0.6420 |
| No log | 47.7143 | 334 | 0.4187 | 0.6530 | 0.4187 | 0.6471 |
| No log | 48.0 | 336 | 0.4140 | 0.6530 | 0.4140 | 0.6434 |
| No log | 48.2857 | 338 | 0.4044 | 0.6460 | 0.4044 | 0.6359 |
| No log | 48.5714 | 340 | 0.4065 | 0.5904 | 0.4065 | 0.6376 |
| No log | 48.8571 | 342 | 0.4200 | 0.5495 | 0.4200 | 0.6481 |
| No log | 49.1429 | 344 | 0.4343 | 0.5811 | 0.4343 | 0.6590 |
| No log | 49.4286 | 346 | 0.4375 | 0.5811 | 0.4375 | 0.6614 |
| No log | 49.7143 | 348 | 0.4254 | 0.5495 | 0.4254 | 0.6522 |
| No log | 50.0 | 350 | 0.4047 | 0.5714 | 0.4047 | 0.6361 |
| No log | 50.2857 | 352 | 0.4078 | 0.6820 | 0.4078 | 0.6386 |
| No log | 50.5714 | 354 | 0.4147 | 0.6506 | 0.4147 | 0.6440 |
| No log | 50.8571 | 356 | 0.4058 | 0.6712 | 0.4058 | 0.6370 |
| No log | 51.1429 | 358 | 0.3941 | 0.6942 | 0.3941 | 0.6278 |
| No log | 51.4286 | 360 | 0.3999 | 0.5985 | 0.3999 | 0.6324 |
| No log | 51.7143 | 362 | 0.4155 | 0.5841 | 0.4155 | 0.6446 |
| No log | 52.0 | 364 | 0.4258 | 0.5970 | 0.4258 | 0.6526 |
| No log | 52.2857 | 366 | 0.4243 | 0.6434 | 0.4243 | 0.6514 |
| No log | 52.5714 | 368 | 0.4150 | 0.6333 | 0.4150 | 0.6442 |
| No log | 52.8571 | 370 | 0.4219 | 0.6257 | 0.4219 | 0.6495 |
| No log | 53.1429 | 372 | 0.4235 | 0.6257 | 0.4235 | 0.6508 |
| No log | 53.4286 | 374 | 0.4163 | 0.6405 | 0.4163 | 0.6452 |
| No log | 53.7143 | 376 | 0.4130 | 0.6298 | 0.4130 | 0.6426 |
| No log | 54.0 | 378 | 0.4086 | 0.6452 | 0.4086 | 0.6392 |
| No log | 54.2857 | 380 | 0.4088 | 0.5714 | 0.4088 | 0.6394 |
| No log | 54.5714 | 382 | 0.4092 | 0.5227 | 0.4092 | 0.6397 |
| No log | 54.8571 | 384 | 0.4098 | 0.5440 | 0.4098 | 0.6401 |
| No log | 55.1429 | 386 | 0.4150 | 0.6096 | 0.4150 | 0.6442 |
| No log | 55.4286 | 388 | 0.4142 | 0.6096 | 0.4142 | 0.6436 |
| No log | 55.7143 | 390 | 0.4094 | 0.6326 | 0.4094 | 0.6398 |
| No log | 56.0 | 392 | 0.4043 | 0.6919 | 0.4043 | 0.6359 |
| No log | 56.2857 | 394 | 0.4043 | 0.6395 | 0.4043 | 0.6358 |
| No log | 56.5714 | 396 | 0.4169 | 0.6143 | 0.4169 | 0.6457 |
| No log | 56.8571 | 398 | 0.4338 | 0.5498 | 0.4338 | 0.6586 |
| No log | 57.1429 | 400 | 0.4236 | 0.5692 | 0.4236 | 0.6508 |
| No log | 57.4286 | 402 | 0.4065 | 0.6034 | 0.4065 | 0.6375 |
| No log | 57.7143 | 404 | 0.4081 | 0.5956 | 0.4081 | 0.6388 |
| No log | 58.0 | 406 | 0.4104 | 0.5956 | 0.4104 | 0.6406 |
| No log | 58.2857 | 408 | 0.3976 | 0.6860 | 0.3976 | 0.6305 |
| No log | 58.5714 | 410 | 0.3950 | 0.6672 | 0.3950 | 0.6285 |
| No log | 58.8571 | 412 | 0.3986 | 0.6389 | 0.3986 | 0.6314 |
| No log | 59.1429 | 414 | 0.4163 | 0.5841 | 0.4163 | 0.6452 |
| No log | 59.4286 | 416 | 0.4267 | 0.5569 | 0.4267 | 0.6532 |
| No log | 59.7143 | 418 | 0.4411 | 0.5569 | 0.4411 | 0.6641 |
| No log | 60.0 | 420 | 0.4347 | 0.5718 | 0.4347 | 0.6593 |
| No log | 60.2857 | 422 | 0.4149 | 0.5702 | 0.4149 | 0.6442 |
| No log | 60.5714 | 424 | 0.4089 | 0.5152 | 0.4089 | 0.6395 |
| No log | 60.8571 | 426 | 0.4099 | 0.5584 | 0.4099 | 0.6403 |
| No log | 61.1429 | 428 | 0.4133 | 0.5800 | 0.4133 | 0.6429 |
| No log | 61.4286 | 430 | 0.4166 | 0.5361 | 0.4166 | 0.6454 |
| No log | 61.7143 | 432 | 0.4188 | 0.5600 | 0.4188 | 0.6472 |
| No log | 62.0 | 434 | 0.4237 | 0.5152 | 0.4237 | 0.6509 |
| No log | 62.2857 | 436 | 0.4338 | 0.5098 | 0.4338 | 0.6586 |
| No log | 62.5714 | 438 | 0.4377 | 0.5495 | 0.4377 | 0.6616 |
| No log | 62.8571 | 440 | 0.4284 | 0.5028 | 0.4284 | 0.6545 |
| No log | 63.1429 | 442 | 0.4143 | 0.5152 | 0.4143 | 0.6437 |
| No log | 63.4286 | 444 | 0.4088 | 0.5600 | 0.4088 | 0.6393 |
| No log | 63.7143 | 446 | 0.4074 | 0.6076 | 0.4074 | 0.6383 |
| No log | 64.0 | 448 | 0.4074 | 0.6076 | 0.4074 | 0.6383 |
| No log | 64.2857 | 450 | 0.4071 | 0.6389 | 0.4071 | 0.6381 |
| No log | 64.5714 | 452 | 0.4040 | 0.6076 | 0.4040 | 0.6356 |
| No log | 64.8571 | 454 | 0.4029 | 0.5379 | 0.4029 | 0.6347 |
| No log | 65.1429 | 456 | 0.4061 | 0.5397 | 0.4061 | 0.6372 |
| No log | 65.4286 | 458 | 0.4078 | 0.6156 | 0.4078 | 0.6386 |
| No log | 65.7143 | 460 | 0.4095 | 0.6156 | 0.4095 | 0.6399 |
| No log | 66.0 | 462 | 0.4103 | 0.6156 | 0.4103 | 0.6405 |
| No log | 66.2857 | 464 | 0.4119 | 0.5941 | 0.4119 | 0.6418 |
| No log | 66.5714 | 466 | 0.4150 | 0.5522 | 0.4150 | 0.6442 |
| No log | 66.8571 | 468 | 0.4204 | 0.4703 | 0.4204 | 0.6484 |
| No log | 67.1429 | 470 | 0.4262 | 0.4703 | 0.4262 | 0.6529 |
| No log | 67.4286 | 472 | 0.4298 | 0.4774 | 0.4298 | 0.6556 |
| No log | 67.7143 | 474 | 0.4327 | 0.4774 | 0.4327 | 0.6578 |
| No log | 68.0 | 476 | 0.4353 | 0.5267 | 0.4353 | 0.6598 |
| No log | 68.2857 | 478 | 0.4361 | 0.5267 | 0.4361 | 0.6603 |
| No log | 68.5714 | 480 | 0.4356 | 0.5267 | 0.4356 | 0.6600 |
| No log | 68.8571 | 482 | 0.4320 | 0.5267 | 0.4320 | 0.6573 |
| No log | 69.1429 | 484 | 0.4276 | 0.5267 | 0.4276 | 0.6539 |
| No log | 69.4286 | 486 | 0.4218 | 0.5267 | 0.4218 | 0.6495 |
| No log | 69.7143 | 488 | 0.4213 | 0.5044 | 0.4213 | 0.6491 |
| No log | 70.0 | 490 | 0.4227 | 0.4970 | 0.4227 | 0.6502 |
| No log | 70.2857 | 492 | 0.4278 | 0.5227 | 0.4278 | 0.6541 |
| No log | 70.5714 | 494 | 0.4262 | 0.5475 | 0.4262 | 0.6528 |
| No log | 70.8571 | 496 | 0.4192 | 0.5227 | 0.4192 | 0.6475 |
| No log | 71.1429 | 498 | 0.4127 | 0.4970 | 0.4127 | 0.6424 |
| 0.1864 | 71.4286 | 500 | 0.4169 | 0.5397 | 0.4169 | 0.6457 |
| 0.1864 | 71.7143 | 502 | 0.4216 | 0.5397 | 0.4216 | 0.6493 |
| 0.1864 | 72.0 | 504 | 0.4212 | 0.5397 | 0.4212 | 0.6490 |
| 0.1864 | 72.2857 | 506 | 0.4146 | 0.5208 | 0.4146 | 0.6439 |
| 0.1864 | 72.5714 | 508 | 0.4128 | 0.5024 | 0.4128 | 0.6425 |
| 0.1864 | 72.8571 | 510 | 0.4163 | 0.5267 | 0.4163 | 0.6452 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu118
- Datasets 2.21.0
- Tokenizers 0.19.1
|
Kwakrhkr/flyai_dataset
|
Kwakrhkr
| 2025-02-04T02:29:43Z | 23 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-02-04T02:26:35Z |
---
base_model: unsloth/llama-3.2-3b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Kwakrhkr
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3.2-3b-unsloth-bnb-4bit
This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
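The Unsloth + TRL workflow mentioned above typically looks like the following sketch; the dataset, LoRA settings, and trainer arguments are illustrative assumptions, not the recipe used for this upload:
```python
# Illustrative sketch of the Unsloth + TRL workflow mentioned above; dataset,
# LoRA settings, and trainer arguments are assumptions, not this upload's recipe.
# Argument names follow the TRL releases used in the Unsloth notebooks and may
# differ in newer TRL versions (which move them into SFTConfig).
from unsloth import FastLanguageModel
from datasets import load_dataset
from trl import SFTTrainer
from transformers import TrainingArguments

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/llama-3.2-3b-unsloth-bnb-4bit",
    max_seq_length=2048,
    load_in_4bit=True,
)
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

dataset = load_dataset("json", data_files="train.json", split="train")  # placeholder data

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(output_dir="outputs", per_device_train_batch_size=2,
                           max_steps=60, learning_rate=2e-4, logging_steps=10),
)
trainer.train()
```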
|
Nexspear/a1c387b0-d0a7-4dca-86c3-d562ff5448df
|
Nexspear
| 2025-02-04T02:27:35Z | 8 | 0 |
peft
|
[
"peft",
"safetensors",
"falcon",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:tiiuae/falcon-7b",
"base_model:adapter:tiiuae/falcon-7b",
"license:apache-2.0",
"region:us"
] | null | 2025-02-04T01:59:20Z |
---
library_name: peft
license: apache-2.0
base_model: tiiuae/falcon-7b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: a1c387b0-d0a7-4dca-86c3-d562ff5448df
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: tiiuae/falcon-7b
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- bd2a081ce1ece142_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/bd2a081ce1ece142_train_data.json
type:
field_instruction: instructions
field_output: outputs
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: 5
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: Nexspear/a1c387b0-d0a7-4dca-86c3-d562ff5448df
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 100
micro_batch_size: 8
mlflow_experiment_name: /tmp/bd2a081ce1ece142_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
sequence_len: 1024
special_tokens:
pad_token: <|endoftext|>
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: techspear-hub
wandb_mode: online
wandb_name: 84541128-e99e-4412-b56a-7eb22c1c1e64
wandb_project: Gradients-On-Four
wandb_run: your_name
wandb_runid: 84541128-e99e-4412-b56a-7eb22c1c1e64
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# a1c387b0-d0a7-4dca-86c3-d562ff5448df
This model is a fine-tuned version of [tiiuae/falcon-7b](https://huggingface.co/tiiuae/falcon-7b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0548
## Model description
More information needed
## Intended uses & limitations
More information needed
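A minimal inference sketch that attaches the adapter to a 4-bit quantized copy of tiiuae/falcon-7b; 4-bit loading here is an inference-side convenience and not part of the bf16 training setup described in the config above:
```python
# Minimal sketch: attach the LoRA adapter to a 4-bit quantized falcon-7b.
# 4-bit loading is an inference convenience, not the bf16 training setup above.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

bnb = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)
base = AutoModelForCausalLM.from_pretrained(
    "tiiuae/falcon-7b", quantization_config=bnb, device_map="auto", trust_remote_code=True
)
model = PeftModel.from_pretrained(base, "Nexspear/a1c387b0-d0a7-4dca-86c3-d562ff5448df")
tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-7b", trust_remote_code=True)

prompt = "..."  # placeholder: an instruction, per the '{instruction}' format above
inputs = tokenizer(prompt, return_tensors="pt").to(base.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```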
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: ADAMW_BNB with betas=(0.9,0.999), epsilon=1e-08, and optimizer args adam_beta1=0.9, adam_beta2=0.95, adam_epsilon=1e-5
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 9.5192 | 0.0003 | 1 | 2.4969 |
| 11.1203 | 0.0171 | 50 | 2.2104 |
| 10.5202 | 0.0342 | 100 | 2.0548 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
na0-0/flyai_DATA
|
na0-0
| 2025-02-04T02:27:24Z | 21 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-02-04T02:25:21Z |
---
base_model: unsloth/llama-3.2-3b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** na0-0
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3.2-3b-unsloth-bnb-4bit
This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Cold-brew/sktqa
|
Cold-brew
| 2025-02-04T02:27:23Z | 23 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-02-04T02:25:20Z |
---
base_model: unsloth/llama-3.2-3b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Cold-brew
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3.2-3b-unsloth-bnb-4bit
This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
YongMinPark/chatbot_prac
|
YongMinPark
| 2025-02-04T02:22:35Z | 23 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-02-04T02:20:33Z |
---
base_model: unsloth/llama-3.2-3b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** YongMinPark
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3.2-3b-unsloth-bnb-4bit
This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
mlfoundations-dev/s1K_32b
|
mlfoundations-dev
| 2025-02-04T02:21:23Z | 3,403 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen2.5-32B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-32B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-02-04T00:35:15Z |
---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen2.5-32B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: s1K_32b
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# s1K_32b
This model is a fine-tuned version of [Qwen/Qwen2.5-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct) on the mlfoundations-dev/s1K_reformat dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
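A minimal inference sketch, assuming the checkpoint keeps the Qwen2.5-Instruct chat template (the question is illustrative):
```python
# Minimal sketch: chat-style generation with the fine-tuned checkpoint.
# Assumes the Qwen2.5-Instruct chat template is preserved; prompt is illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "mlfoundations-dev/s1K_32b"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.bfloat16, device_map="auto")

messages = [{"role": "user", "content": "How many positive divisors does 360 have?"}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True,
                                          return_tensors="pt").to(model.device)
outputs = model.generate(input_ids, max_new_tokens=512)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```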
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 16
- total_train_batch_size: 16
- total_eval_batch_size: 128
- optimizer: ADAMW_TORCH with betas=(0.9,0.95) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.46.1
- Pytorch 2.3.0
- Datasets 3.1.0
- Tokenizers 0.20.3
|
oiehhun/sktqa
|
oiehhun
| 2025-02-04T02:21:12Z | 23 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-02-04T02:19:09Z |
---
base_model: unsloth/llama-3.2-3b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** oiehhun
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3.2-3b-unsloth-bnb-4bit
This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
MayBashendy/ArabicNewSplits7_B_usingWellWrittenEssays_FineTuningAraBERT_run999_AugV5_k10_task5_organization
|
MayBashendy
| 2025-02-04T02:21:01Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:aubmindlab/bert-base-arabertv02",
"base_model:finetune:aubmindlab/bert-base-arabertv02",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-02-04T02:15:01Z |
---
library_name: transformers
base_model: aubmindlab/bert-base-arabertv02
tags:
- generated_from_trainer
model-index:
- name: ArabicNewSplits7_B_usingWellWrittenEssays_FineTuningAraBERT_run999_AugV5_k10_task5_organization
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ArabicNewSplits7_B_usingWellWrittenEssays_FineTuningAraBERT_run999_AugV5_k10_task5_organization
This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9416
- Qwk: 0.4333
- Mse: 0.9416
- Rmse: 0.9703
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:------:|:----:|:---------------:|:-------:|:------:|:------:|
| No log | 0.0377 | 2 | 4.3431 | -0.0008 | 4.3431 | 2.0840 |
| No log | 0.0755 | 4 | 2.5060 | 0.1117 | 2.5060 | 1.5830 |
| No log | 0.1132 | 6 | 1.3539 | 0.0627 | 1.3539 | 1.1636 |
| No log | 0.1509 | 8 | 1.1834 | 0.1680 | 1.1834 | 1.0878 |
| No log | 0.1887 | 10 | 1.0077 | 0.2897 | 1.0077 | 1.0038 |
| No log | 0.2264 | 12 | 0.9656 | 0.2865 | 0.9656 | 0.9826 |
| No log | 0.2642 | 14 | 0.9498 | 0.2492 | 0.9498 | 0.9746 |
| No log | 0.3019 | 16 | 0.9449 | 0.2746 | 0.9449 | 0.9721 |
| No log | 0.3396 | 18 | 0.9280 | 0.3293 | 0.9280 | 0.9633 |
| No log | 0.3774 | 20 | 0.9419 | 0.3713 | 0.9419 | 0.9705 |
| No log | 0.4151 | 22 | 0.9147 | 0.4065 | 0.9147 | 0.9564 |
| No log | 0.4528 | 24 | 0.8826 | 0.3721 | 0.8826 | 0.9395 |
| No log | 0.4906 | 26 | 0.8752 | 0.4223 | 0.8752 | 0.9355 |
| No log | 0.5283 | 28 | 0.8666 | 0.4867 | 0.8666 | 0.9309 |
| No log | 0.5660 | 30 | 0.9803 | 0.3333 | 0.9803 | 0.9901 |
| No log | 0.6038 | 32 | 1.0629 | 0.3396 | 1.0629 | 1.0310 |
| No log | 0.6415 | 34 | 0.9256 | 0.5135 | 0.9256 | 0.9621 |
| No log | 0.6792 | 36 | 0.8616 | 0.4065 | 0.8616 | 0.9282 |
| No log | 0.7170 | 38 | 0.8219 | 0.4568 | 0.8219 | 0.9066 |
| No log | 0.7547 | 40 | 0.8354 | 0.5668 | 0.8354 | 0.9140 |
| No log | 0.7925 | 42 | 0.9386 | 0.4707 | 0.9386 | 0.9688 |
| No log | 0.8302 | 44 | 0.8572 | 0.5658 | 0.8572 | 0.9258 |
| No log | 0.8679 | 46 | 0.7516 | 0.4439 | 0.7516 | 0.8669 |
| No log | 0.9057 | 48 | 0.8695 | 0.4034 | 0.8695 | 0.9325 |
| No log | 0.9434 | 50 | 0.8482 | 0.4613 | 0.8482 | 0.9210 |
| No log | 0.9811 | 52 | 0.7741 | 0.4411 | 0.7741 | 0.8798 |
| No log | 1.0189 | 54 | 0.8846 | 0.4878 | 0.8846 | 0.9405 |
| No log | 1.0566 | 56 | 0.9381 | 0.4815 | 0.9381 | 0.9686 |
| No log | 1.0943 | 58 | 0.8910 | 0.5254 | 0.8910 | 0.9439 |
| No log | 1.1321 | 60 | 0.8403 | 0.4603 | 0.8403 | 0.9167 |
| No log | 1.1698 | 62 | 0.8066 | 0.3996 | 0.8066 | 0.8981 |
| No log | 1.2075 | 64 | 0.8110 | 0.4385 | 0.8110 | 0.9005 |
| No log | 1.2453 | 66 | 0.9008 | 0.4575 | 0.9008 | 0.9491 |
| No log | 1.2830 | 68 | 0.8895 | 0.4575 | 0.8895 | 0.9431 |
| No log | 1.3208 | 70 | 0.7635 | 0.5113 | 0.7635 | 0.8738 |
| No log | 1.3585 | 72 | 0.7317 | 0.5405 | 0.7317 | 0.8554 |
| No log | 1.3962 | 74 | 0.7203 | 0.4889 | 0.7203 | 0.8487 |
| No log | 1.4340 | 76 | 0.7085 | 0.5098 | 0.7085 | 0.8417 |
| No log | 1.4717 | 78 | 0.7240 | 0.5303 | 0.7240 | 0.8509 |
| No log | 1.5094 | 80 | 0.7187 | 0.5510 | 0.7187 | 0.8477 |
| No log | 1.5472 | 82 | 0.6939 | 0.5510 | 0.6939 | 0.8330 |
| No log | 1.5849 | 84 | 0.6631 | 0.5405 | 0.6631 | 0.8143 |
| No log | 1.6226 | 86 | 0.7455 | 0.4467 | 0.7455 | 0.8634 |
| No log | 1.6604 | 88 | 0.7487 | 0.4330 | 0.7487 | 0.8653 |
| No log | 1.6981 | 90 | 0.7574 | 0.4724 | 0.7574 | 0.8703 |
| No log | 1.7358 | 92 | 0.8651 | 0.4931 | 0.8651 | 0.9301 |
| No log | 1.7736 | 94 | 0.8031 | 0.5366 | 0.8031 | 0.8962 |
| No log | 1.8113 | 96 | 0.7102 | 0.5316 | 0.7102 | 0.8427 |
| No log | 1.8491 | 98 | 0.6671 | 0.5771 | 0.6671 | 0.8167 |
| No log | 1.8868 | 100 | 0.6880 | 0.5811 | 0.6880 | 0.8295 |
| No log | 1.9245 | 102 | 0.6635 | 0.5874 | 0.6635 | 0.8145 |
| No log | 1.9623 | 104 | 0.6632 | 0.5084 | 0.6632 | 0.8144 |
| No log | 2.0 | 106 | 0.7978 | 0.5563 | 0.7978 | 0.8932 |
| No log | 2.0377 | 108 | 0.7813 | 0.5563 | 0.7813 | 0.8839 |
| No log | 2.0755 | 110 | 0.6346 | 0.5326 | 0.6346 | 0.7966 |
| No log | 2.1132 | 112 | 0.5927 | 0.6229 | 0.5927 | 0.7698 |
| No log | 2.1509 | 114 | 0.6034 | 0.6319 | 0.6034 | 0.7768 |
| No log | 2.1887 | 116 | 0.6091 | 0.6067 | 0.6091 | 0.7804 |
| No log | 2.2264 | 118 | 0.7940 | 0.5270 | 0.7940 | 0.8911 |
| No log | 2.2642 | 120 | 0.9979 | 0.375 | 0.9979 | 0.9989 |
| No log | 2.3019 | 122 | 0.8626 | 0.4728 | 0.8626 | 0.9288 |
| No log | 2.3396 | 124 | 0.6249 | 0.5747 | 0.6249 | 0.7905 |
| No log | 2.3774 | 126 | 0.6406 | 0.6189 | 0.6406 | 0.8004 |
| No log | 2.4151 | 128 | 0.6850 | 0.6151 | 0.6850 | 0.8276 |
| No log | 2.4528 | 130 | 0.6373 | 0.6160 | 0.6373 | 0.7983 |
| No log | 2.4906 | 132 | 0.6188 | 0.6041 | 0.6188 | 0.7866 |
| No log | 2.5283 | 134 | 0.6275 | 0.6185 | 0.6275 | 0.7921 |
| No log | 2.5660 | 136 | 0.6313 | 0.5988 | 0.6313 | 0.7945 |
| No log | 2.6038 | 138 | 0.6185 | 0.6128 | 0.6185 | 0.7865 |
| No log | 2.6415 | 140 | 0.6225 | 0.6165 | 0.6225 | 0.7890 |
| No log | 2.6792 | 142 | 0.6313 | 0.5746 | 0.6313 | 0.7946 |
| No log | 2.7170 | 144 | 0.6625 | 0.5692 | 0.6625 | 0.8140 |
| No log | 2.7547 | 146 | 0.7226 | 0.5229 | 0.7226 | 0.8500 |
| No log | 2.7925 | 148 | 0.7231 | 0.5230 | 0.7231 | 0.8504 |
| No log | 2.8302 | 150 | 0.6117 | 0.6364 | 0.6117 | 0.7821 |
| No log | 2.8679 | 152 | 0.5949 | 0.6320 | 0.5949 | 0.7713 |
| No log | 2.9057 | 154 | 0.6084 | 0.6164 | 0.6084 | 0.7800 |
| No log | 2.9434 | 156 | 0.6165 | 0.5735 | 0.6165 | 0.7852 |
| No log | 2.9811 | 158 | 0.6685 | 0.5103 | 0.6685 | 0.8176 |
| No log | 3.0189 | 160 | 0.6240 | 0.5523 | 0.6240 | 0.7899 |
| No log | 3.0566 | 162 | 0.5889 | 0.6256 | 0.5889 | 0.7674 |
| No log | 3.0943 | 164 | 0.5841 | 0.6356 | 0.5841 | 0.7643 |
| No log | 3.1321 | 166 | 0.5787 | 0.6659 | 0.5787 | 0.7607 |
| No log | 3.1698 | 168 | 0.6180 | 0.6479 | 0.6180 | 0.7861 |
| No log | 3.2075 | 170 | 0.6266 | 0.5561 | 0.6266 | 0.7916 |
| No log | 3.2453 | 172 | 0.6266 | 0.5672 | 0.6266 | 0.7916 |
| No log | 3.2830 | 174 | 0.6028 | 0.5549 | 0.6028 | 0.7764 |
| No log | 3.3208 | 176 | 0.5924 | 0.6117 | 0.5924 | 0.7697 |
| No log | 3.3585 | 178 | 0.5936 | 0.6400 | 0.5936 | 0.7705 |
| No log | 3.3962 | 180 | 0.6284 | 0.5837 | 0.6284 | 0.7927 |
| No log | 3.4340 | 182 | 0.6824 | 0.5870 | 0.6824 | 0.8261 |
| No log | 3.4717 | 184 | 0.6308 | 0.5993 | 0.6308 | 0.7942 |
| No log | 3.5094 | 186 | 0.5972 | 0.5666 | 0.5972 | 0.7728 |
| No log | 3.5472 | 188 | 0.6394 | 0.5123 | 0.6394 | 0.7996 |
| No log | 3.5849 | 190 | 0.6671 | 0.5123 | 0.6671 | 0.8168 |
| No log | 3.6226 | 192 | 0.6548 | 0.5011 | 0.6548 | 0.8092 |
| No log | 3.6604 | 194 | 0.6623 | 0.5051 | 0.6623 | 0.8138 |
| No log | 3.6981 | 196 | 0.7259 | 0.5255 | 0.7259 | 0.8520 |
| No log | 3.7358 | 198 | 0.7050 | 0.5708 | 0.7050 | 0.8396 |
| No log | 3.7736 | 200 | 0.5990 | 0.6175 | 0.5990 | 0.7740 |
| No log | 3.8113 | 202 | 0.5654 | 0.6438 | 0.5654 | 0.7520 |
| No log | 3.8491 | 204 | 0.5619 | 0.6589 | 0.5619 | 0.7496 |
| No log | 3.8868 | 206 | 0.5497 | 0.6087 | 0.5497 | 0.7414 |
| No log | 3.9245 | 208 | 0.5993 | 0.6828 | 0.5993 | 0.7742 |
| No log | 3.9623 | 210 | 0.6485 | 0.6699 | 0.6485 | 0.8053 |
| No log | 4.0 | 212 | 0.6479 | 0.5782 | 0.6479 | 0.8049 |
| No log | 4.0377 | 214 | 0.5935 | 0.6087 | 0.5935 | 0.7704 |
| No log | 4.0755 | 216 | 0.5878 | 0.6084 | 0.5878 | 0.7667 |
| No log | 4.1132 | 218 | 0.5822 | 0.5563 | 0.5822 | 0.7630 |
| No log | 4.1509 | 220 | 0.6073 | 0.5656 | 0.6073 | 0.7793 |
| No log | 4.1887 | 222 | 0.7414 | 0.5488 | 0.7414 | 0.8610 |
| No log | 4.2264 | 224 | 0.9306 | 0.4987 | 0.9306 | 0.9647 |
| No log | 4.2642 | 226 | 0.8794 | 0.5295 | 0.8794 | 0.9378 |
| No log | 4.3019 | 228 | 0.8252 | 0.5683 | 0.8252 | 0.9084 |
| No log | 4.3396 | 230 | 0.6202 | 0.6173 | 0.6202 | 0.7875 |
| No log | 4.3774 | 232 | 0.5888 | 0.6302 | 0.5888 | 0.7673 |
| No log | 4.4151 | 234 | 0.7016 | 0.5257 | 0.7016 | 0.8376 |
| No log | 4.4528 | 236 | 0.8066 | 0.5283 | 0.8066 | 0.8981 |
| No log | 4.4906 | 238 | 0.7012 | 0.5019 | 0.7012 | 0.8373 |
| No log | 4.5283 | 240 | 0.6219 | 0.5561 | 0.6219 | 0.7886 |
| No log | 4.5660 | 242 | 0.6108 | 0.6325 | 0.6108 | 0.7815 |
| No log | 4.6038 | 244 | 0.6470 | 0.6244 | 0.6470 | 0.8043 |
| No log | 4.6415 | 246 | 0.6703 | 0.5884 | 0.6703 | 0.8187 |
| No log | 4.6792 | 248 | 0.6666 | 0.5759 | 0.6666 | 0.8165 |
| No log | 4.7170 | 250 | 0.6847 | 0.5565 | 0.6847 | 0.8275 |
| No log | 4.7547 | 252 | 0.7120 | 0.5804 | 0.7120 | 0.8438 |
| No log | 4.7925 | 254 | 0.6530 | 0.5117 | 0.6530 | 0.8081 |
| No log | 4.8302 | 256 | 0.6253 | 0.4951 | 0.6253 | 0.7907 |
| No log | 4.8679 | 258 | 0.6055 | 0.5876 | 0.6055 | 0.7782 |
| No log | 4.9057 | 260 | 0.5579 | 0.6407 | 0.5579 | 0.7469 |
| No log | 4.9434 | 262 | 0.5554 | 0.6407 | 0.5554 | 0.7452 |
| No log | 4.9811 | 264 | 0.5862 | 0.5331 | 0.5862 | 0.7657 |
| No log | 5.0189 | 266 | 0.6901 | 0.5504 | 0.6901 | 0.8307 |
| No log | 5.0566 | 268 | 0.7822 | 0.4654 | 0.7822 | 0.8844 |
| No log | 5.0943 | 270 | 0.7385 | 0.4497 | 0.7385 | 0.8594 |
| No log | 5.1321 | 272 | 0.6656 | 0.4941 | 0.6656 | 0.8159 |
| No log | 5.1698 | 274 | 0.6282 | 0.5618 | 0.6282 | 0.7926 |
| No log | 5.2075 | 276 | 0.6254 | 0.5798 | 0.6254 | 0.7908 |
| No log | 5.2453 | 278 | 0.5878 | 0.5631 | 0.5878 | 0.7667 |
| No log | 5.2830 | 280 | 0.7482 | 0.5881 | 0.7482 | 0.8650 |
| No log | 5.3208 | 282 | 1.0094 | 0.4469 | 1.0094 | 1.0047 |
| No log | 5.3585 | 284 | 1.0070 | 0.4740 | 1.0070 | 1.0035 |
| No log | 5.3962 | 286 | 0.8044 | 0.5681 | 0.8044 | 0.8969 |
| No log | 5.4340 | 288 | 0.6256 | 0.5783 | 0.6256 | 0.7909 |
| No log | 5.4717 | 290 | 0.6371 | 0.5231 | 0.6371 | 0.7982 |
| No log | 5.5094 | 292 | 0.6817 | 0.5349 | 0.6817 | 0.8257 |
| No log | 5.5472 | 294 | 0.6774 | 0.4606 | 0.6774 | 0.8230 |
| No log | 5.5849 | 296 | 0.6622 | 0.4892 | 0.6622 | 0.8138 |
| No log | 5.6226 | 298 | 0.6474 | 0.5210 | 0.6474 | 0.8046 |
| No log | 5.6604 | 300 | 0.6703 | 0.6120 | 0.6703 | 0.8187 |
| No log | 5.6981 | 302 | 0.7262 | 0.5547 | 0.7262 | 0.8522 |
| No log | 5.7358 | 304 | 0.7493 | 0.5729 | 0.7493 | 0.8656 |
| No log | 5.7736 | 306 | 0.6762 | 0.5918 | 0.6762 | 0.8223 |
| No log | 5.8113 | 308 | 0.6068 | 0.6125 | 0.6068 | 0.7790 |
| No log | 5.8491 | 310 | 0.6025 | 0.6584 | 0.6025 | 0.7762 |
| No log | 5.8868 | 312 | 0.6113 | 0.5549 | 0.6113 | 0.7819 |
| No log | 5.9245 | 314 | 0.6352 | 0.5165 | 0.6352 | 0.7970 |
| No log | 5.9623 | 316 | 0.6500 | 0.5165 | 0.6500 | 0.8062 |
| No log | 6.0 | 318 | 0.6490 | 0.5408 | 0.6490 | 0.8056 |
| No log | 6.0377 | 320 | 0.6319 | 0.5889 | 0.6319 | 0.7949 |
| No log | 6.0755 | 322 | 0.6134 | 0.5405 | 0.6134 | 0.7832 |
| No log | 6.1132 | 324 | 0.5987 | 0.6241 | 0.5987 | 0.7738 |
| No log | 6.1509 | 326 | 0.5828 | 0.5797 | 0.5828 | 0.7634 |
| No log | 6.1887 | 328 | 0.5799 | 0.5386 | 0.5799 | 0.7615 |
| No log | 6.2264 | 330 | 0.5754 | 0.5822 | 0.5754 | 0.7586 |
| No log | 6.2642 | 332 | 0.5834 | 0.5405 | 0.5834 | 0.7638 |
| No log | 6.3019 | 334 | 0.6353 | 0.5928 | 0.6353 | 0.7970 |
| No log | 6.3396 | 336 | 0.7385 | 0.6072 | 0.7385 | 0.8593 |
| No log | 6.3774 | 338 | 0.7217 | 0.5951 | 0.7217 | 0.8495 |
| No log | 6.4151 | 340 | 0.6307 | 0.5329 | 0.6307 | 0.7942 |
| No log | 6.4528 | 342 | 0.5989 | 0.6001 | 0.5989 | 0.7739 |
| No log | 6.4906 | 344 | 0.6077 | 0.5874 | 0.6077 | 0.7796 |
| No log | 6.5283 | 346 | 0.6065 | 0.6327 | 0.6065 | 0.7788 |
| No log | 6.5660 | 348 | 0.6610 | 0.5873 | 0.6610 | 0.8130 |
| No log | 6.6038 | 350 | 0.6781 | 0.5560 | 0.6781 | 0.8235 |
| No log | 6.6415 | 352 | 0.6973 | 0.5793 | 0.6973 | 0.8350 |
| No log | 6.6792 | 354 | 0.6195 | 0.6456 | 0.6195 | 0.7871 |
| No log | 6.7170 | 356 | 0.5756 | 0.6087 | 0.5756 | 0.7587 |
| No log | 6.7547 | 358 | 0.5692 | 0.6327 | 0.5692 | 0.7544 |
| No log | 6.7925 | 360 | 0.5743 | 0.5759 | 0.5743 | 0.7578 |
| No log | 6.8302 | 362 | 0.5942 | 0.5666 | 0.5942 | 0.7708 |
| No log | 6.8679 | 364 | 0.6069 | 0.4951 | 0.6069 | 0.7791 |
| No log | 6.9057 | 366 | 0.6043 | 0.5902 | 0.6043 | 0.7774 |
| No log | 6.9434 | 368 | 0.5905 | 0.5989 | 0.5905 | 0.7684 |
| No log | 6.9811 | 370 | 0.5857 | 0.5989 | 0.5857 | 0.7653 |
| No log | 7.0189 | 372 | 0.5815 | 0.5989 | 0.5815 | 0.7625 |
| No log | 7.0566 | 374 | 0.5808 | 0.5989 | 0.5808 | 0.7621 |
| No log | 7.0943 | 376 | 0.5801 | 0.5989 | 0.5801 | 0.7616 |
| No log | 7.1321 | 378 | 0.5731 | 0.5989 | 0.5731 | 0.7570 |
| No log | 7.1698 | 380 | 0.5799 | 0.5562 | 0.5799 | 0.7615 |
| No log | 7.2075 | 382 | 0.6543 | 0.6081 | 0.6543 | 0.8089 |
| No log | 7.2453 | 384 | 0.6970 | 0.6269 | 0.6970 | 0.8348 |
| No log | 7.2830 | 386 | 0.6492 | 0.6269 | 0.6492 | 0.8057 |
| No log | 7.3208 | 388 | 0.5663 | 0.6217 | 0.5663 | 0.7525 |
| No log | 7.3585 | 390 | 0.6007 | 0.6177 | 0.6007 | 0.7750 |
| No log | 7.3962 | 392 | 0.6617 | 0.6240 | 0.6617 | 0.8135 |
| No log | 7.4340 | 394 | 0.6379 | 0.5666 | 0.6379 | 0.7987 |
| No log | 7.4717 | 396 | 0.5891 | 0.5522 | 0.5891 | 0.7675 |
| No log | 7.5094 | 398 | 0.6081 | 0.5359 | 0.6081 | 0.7798 |
| No log | 7.5472 | 400 | 0.6128 | 0.5486 | 0.6128 | 0.7828 |
| No log | 7.5849 | 402 | 0.5782 | 0.6067 | 0.5782 | 0.7604 |
| No log | 7.6226 | 404 | 0.5742 | 0.5886 | 0.5742 | 0.7578 |
| No log | 7.6604 | 406 | 0.6045 | 0.5472 | 0.6045 | 0.7775 |
| No log | 7.6981 | 408 | 0.5824 | 0.5522 | 0.5824 | 0.7631 |
| No log | 7.7358 | 410 | 0.5554 | 0.5835 | 0.5554 | 0.7452 |
| No log | 7.7736 | 412 | 0.5513 | 0.5831 | 0.5513 | 0.7425 |
| No log | 7.8113 | 414 | 0.5462 | 0.6507 | 0.5462 | 0.7390 |
| No log | 7.8491 | 416 | 0.5385 | 0.6788 | 0.5385 | 0.7338 |
| No log | 7.8868 | 418 | 0.5515 | 0.6673 | 0.5515 | 0.7426 |
| No log | 7.9245 | 420 | 0.5565 | 0.6456 | 0.5565 | 0.7460 |
| No log | 7.9623 | 422 | 0.5794 | 0.5210 | 0.5794 | 0.7612 |
| No log | 8.0 | 424 | 0.5959 | 0.5578 | 0.5959 | 0.7719 |
| No log | 8.0377 | 426 | 0.5982 | 0.5440 | 0.5982 | 0.7734 |
| No log | 8.0755 | 428 | 0.5963 | 0.5440 | 0.5963 | 0.7722 |
| No log | 8.1132 | 430 | 0.5886 | 0.5703 | 0.5886 | 0.7672 |
| No log | 8.1509 | 432 | 0.6235 | 0.5359 | 0.6235 | 0.7896 |
| No log | 8.1887 | 434 | 0.6700 | 0.6081 | 0.6700 | 0.8185 |
| No log | 8.2264 | 436 | 0.7178 | 0.6293 | 0.7178 | 0.8472 |
| No log | 8.2642 | 438 | 0.6443 | 0.5706 | 0.6443 | 0.8027 |
| No log | 8.3019 | 440 | 0.5743 | 0.6456 | 0.5743 | 0.7578 |
| No log | 8.3396 | 442 | 0.5928 | 0.5472 | 0.5928 | 0.7699 |
| No log | 8.3774 | 444 | 0.5922 | 0.5700 | 0.5922 | 0.7695 |
| No log | 8.4151 | 446 | 0.6016 | 0.5809 | 0.6016 | 0.7756 |
| No log | 8.4528 | 448 | 0.6520 | 0.5472 | 0.6520 | 0.8074 |
| No log | 8.4906 | 450 | 0.6638 | 0.5242 | 0.6638 | 0.8147 |
| No log | 8.5283 | 452 | 0.6163 | 0.5343 | 0.6163 | 0.7850 |
| No log | 8.5660 | 454 | 0.6094 | 0.5357 | 0.6094 | 0.7806 |
| No log | 8.6038 | 456 | 0.5751 | 0.5568 | 0.5751 | 0.7584 |
| No log | 8.6415 | 458 | 0.5668 | 0.5568 | 0.5668 | 0.7528 |
| No log | 8.6792 | 460 | 0.5656 | 0.5568 | 0.5656 | 0.7521 |
| No log | 8.7170 | 462 | 0.6112 | 0.5581 | 0.6112 | 0.7818 |
| No log | 8.7547 | 464 | 0.5987 | 0.5078 | 0.5987 | 0.7738 |
| No log | 8.7925 | 466 | 0.5927 | 0.5568 | 0.5927 | 0.7699 |
| No log | 8.8302 | 468 | 0.5829 | 0.5782 | 0.5829 | 0.7635 |
| No log | 8.8679 | 470 | 0.5753 | 0.6636 | 0.5753 | 0.7585 |
| No log | 8.9057 | 472 | 0.5633 | 0.6598 | 0.5633 | 0.7505 |
| No log | 8.9434 | 474 | 0.5818 | 0.6516 | 0.5818 | 0.7628 |
| No log | 8.9811 | 476 | 0.6217 | 0.6317 | 0.6217 | 0.7885 |
| No log | 9.0189 | 478 | 0.6764 | 0.6130 | 0.6764 | 0.8225 |
| No log | 9.0566 | 480 | 0.6117 | 0.6120 | 0.6117 | 0.7821 |
| No log | 9.0943 | 482 | 0.5940 | 0.5684 | 0.5940 | 0.7707 |
| No log | 9.1321 | 484 | 0.6215 | 0.5573 | 0.6215 | 0.7883 |
| No log | 9.1698 | 486 | 0.6838 | 0.5816 | 0.6838 | 0.8269 |
| No log | 9.2075 | 488 | 0.7662 | 0.5780 | 0.7662 | 0.8753 |
| No log | 9.2453 | 490 | 0.7799 | 0.5780 | 0.7799 | 0.8831 |
| No log | 9.2830 | 492 | 0.7212 | 0.5815 | 0.7212 | 0.8492 |
| No log | 9.3208 | 494 | 0.6152 | 0.5945 | 0.6152 | 0.7844 |
| No log | 9.3585 | 496 | 0.5838 | 0.5197 | 0.5838 | 0.7641 |
| No log | 9.3962 | 498 | 0.6060 | 0.5197 | 0.6060 | 0.7785 |
| 0.2158 | 9.4340 | 500 | 0.5976 | 0.5679 | 0.5976 | 0.7730 |
| 0.2158 | 9.4717 | 502 | 0.5928 | 0.6128 | 0.5928 | 0.7699 |
| 0.2158 | 9.5094 | 504 | 0.5963 | 0.5197 | 0.5963 | 0.7722 |
| 0.2158 | 9.5472 | 506 | 0.6588 | 0.5118 | 0.6588 | 0.8117 |
| 0.2158 | 9.5849 | 508 | 0.8613 | 0.4077 | 0.8613 | 0.9281 |
| 0.2158 | 9.6226 | 510 | 0.9599 | 0.4534 | 0.9599 | 0.9797 |
| 0.2158 | 9.6604 | 512 | 0.9416 | 0.4333 | 0.9416 | 0.9703 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu118
- Datasets 2.21.0
- Tokenizers 0.19.1
|
lesso/a00ad184-b206-4fc2-bdff-39878b790d1d
|
lesso
| 2025-02-04T02:19:35Z | 8 | 0 |
peft
|
[
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Mistral-Nemo-Instruct-2407",
"base_model:adapter:unsloth/Mistral-Nemo-Instruct-2407",
"license:apache-2.0",
"region:us"
] | null | 2025-02-04T01:33:07Z |
---
library_name: peft
license: apache-2.0
base_model: unsloth/Mistral-Nemo-Instruct-2407
tags:
- axolotl
- generated_from_trainer
model-index:
- name: a00ad184-b206-4fc2-bdff-39878b790d1d
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Mistral-Nemo-Instruct-2407
bf16: true
chat_template: llama3
data_processes: 16
dataset_prepared_path: null
datasets:
- data_files:
- 3e5eab4715297236_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/3e5eab4715297236_train_data.json
type:
field_input: ''
field_instruction: prompt
field_output: chosen
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
do_eval: true
early_stopping_patience: 5
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 2
gradient_checkpointing: true
group_by_length: true
hub_model_id: lesso/a00ad184-b206-4fc2-bdff-39878b790d1d
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: linear
max_grad_norm: 1.0
max_steps: 200
micro_batch_size: 4
mlflow_experiment_name: /tmp/god01/3e5eab4715297236_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 06993ad5-9e1b-472b-9fb0-ffdcec07b62e
wandb_project: ab-god01
wandb_run: your_name
wandb_runid: 06993ad5-9e1b-472b-9fb0-ffdcec07b62e
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# a00ad184-b206-4fc2-bdff-39878b790d1d
This model is a fine-tuned version of [unsloth/Mistral-Nemo-Instruct-2407](https://huggingface.co/unsloth/Mistral-Nemo-Instruct-2407) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2190
## Model description
More information needed
## Intended uses & limitations
More information needed
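This repository ships a LoRA adapter rather than full model weights. A minimal usage sketch, assuming the adapter pairs with the base model named above (not the author's published code):
```python
# Sketch: load the base model and attach this LoRA adapter with PEFT.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "unsloth/Mistral-Nemo-Instruct-2407"
adapter_id = "lesso/a00ad184-b206-4fc2-bdff-39878b790d1d"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype="auto", device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)

inputs = tokenizer("Hello", return_tensors="pt").to(base.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```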
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.5118 | 0.0009 | 1 | 0.3299 |
| 0.6458 | 0.0462 | 50 | 0.2378 |
| 0.4444 | 0.0925 | 100 | 0.2297 |
| 0.4177 | 0.1387 | 150 | 0.2221 |
| 0.5608 | 0.1849 | 200 | 0.2190 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
lesso/395fb899-af8e-4d01-ae1e-33bea7c0c4c7
|
lesso
| 2025-02-04T02:18:37Z | 8 | 0 |
peft
|
[
"peft",
"safetensors",
"phi3",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:migtissera/Tess-v2.5-Phi-3-medium-128k-14B",
"base_model:adapter:migtissera/Tess-v2.5-Phi-3-medium-128k-14B",
"license:mit",
"region:us"
] | null | 2025-02-04T02:05:06Z |
---
library_name: peft
license: mit
base_model: migtissera/Tess-v2.5-Phi-3-medium-128k-14B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 395fb899-af8e-4d01-ae1e-33bea7c0c4c7
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: migtissera/Tess-v2.5-Phi-3-medium-128k-14B
bf16: true
chat_template: llama3
data_processes: 16
dataset_prepared_path: null
datasets:
- data_files:
- fe297105e697bbbb_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/fe297105e697bbbb_train_data.json
type:
field_instruction: task
field_output: solution
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
do_eval: true
early_stopping_patience: 5
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 2
gradient_checkpointing: true
group_by_length: true
hub_model_id: lesso/395fb899-af8e-4d01-ae1e-33bea7c0c4c7
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001018
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: linear
max_grad_norm: 1.0
max_steps: 200
micro_batch_size: 4
mlflow_experiment_name: /tmp/god18/fe297105e697bbbb_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 3fa43a59-7bfe-43c9-93ae-74585476d2fa
wandb_project: ab-god18
wandb_run: your_name
wandb_runid: 3fa43a59-7bfe-43c9-93ae-74585476d2fa
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 395fb899-af8e-4d01-ae1e-33bea7c0c4c7
This model is a fine-tuned version of [migtissera/Tess-v2.5-Phi-3-medium-128k-14B](https://huggingface.co/migtissera/Tess-v2.5-Phi-3-medium-128k-14B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5961
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001018
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.205 | 0.0021 | 1 | 0.7869 |
| 1.573 | 0.1036 | 50 | 0.6236 |
| 1.6846 | 0.2073 | 100 | 0.6049 |
| 1.8407 | 0.3109 | 150 | 0.6158 |
| 1.7818 | 0.4145 | 200 | 0.5961 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
clarxus/68a6245c-7479-4d3f-96c1-d3d8ddae92a0
|
clarxus
| 2025-02-04T02:18:30Z | 13 | 0 |
peft
|
[
"peft",
"safetensors",
"bloom",
"axolotl",
"generated_from_trainer",
"base_model:bigscience/bloom-560m",
"base_model:adapter:bigscience/bloom-560m",
"license:bigscience-bloom-rail-1.0",
"region:us"
] | null | 2025-02-04T01:58:24Z |
---
library_name: peft
license: bigscience-bloom-rail-1.0
base_model: bigscience/bloom-560m
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 68a6245c-7479-4d3f-96c1-d3d8ddae92a0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: bigscience/bloom-560m
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- b6e5ed8190ccb774_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/b6e5ed8190ccb774_train_data.json
type:
field_instruction: soru
field_output: cevap
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: clarxus/68a6245c-7479-4d3f-96c1-d3d8ddae92a0
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: 0
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_steps: 100
micro_batch_size: 8
mlflow_experiment_name: /tmp/b6e5ed8190ccb774_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: techspear-hub
wandb_mode: online
wandb_name: 72e7b874-15da-42e2-ab22-791b74a29685
wandb_project: Gradients-On-Seven
wandb_run: your_name
wandb_runid: 72e7b874-15da-42e2-ab22-791b74a29685
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 68a6245c-7479-4d3f-96c1-d3d8ddae92a0
This model is a fine-tuned version of [bigscience/bloom-560m](https://huggingface.co/bigscience/bloom-560m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0179
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0005 | 1 | 3.9722 |
| 15.6041 | 0.0047 | 9 | 3.7843 |
| 14.0132 | 0.0094 | 18 | 3.4767 |
| 13.6117 | 0.0141 | 27 | 3.3608 |
| 12.7378 | 0.0188 | 36 | 3.2427 |
| 12.8445 | 0.0235 | 45 | 3.1598 |
| 12.1381 | 0.0283 | 54 | 3.1043 |
| 12.2995 | 0.0330 | 63 | 3.0745 |
| 11.7993 | 0.0377 | 72 | 3.0427 |
| 12.1708 | 0.0424 | 81 | 3.0253 |
| 11.961 | 0.0471 | 90 | 3.0197 |
| 12.3234 | 0.0518 | 99 | 3.0179 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
mrferr3t/aa7df70b-10d7-4194-946f-f6b02d72bea0
|
mrferr3t
| 2025-02-04T02:17:38Z | 8 | 0 |
peft
|
[
"peft",
"safetensors",
"starcoder2",
"axolotl",
"generated_from_trainer",
"base_model:bigcode/starcoder2-3b",
"base_model:adapter:bigcode/starcoder2-3b",
"license:bigcode-openrail-m",
"region:us"
] | null | 2025-02-04T01:56:15Z |
---
library_name: peft
license: bigcode-openrail-m
base_model: bigcode/starcoder2-3b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: aa7df70b-10d7-4194-946f-f6b02d72bea0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
auto_find_batch_size: true
base_model: bigcode/starcoder2-3b
bf16: auto
chat_template: llama3
dataloader_num_workers: 12
dataset_prepared_path: null
datasets:
- data_files:
- b177e99f9afc8918_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/b177e99f9afc8918_train_data.json
type:
field_input: ''
field_instruction: title
field_output: cleaned_text
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: 3
early_stopping_threshold: 0.001
eval_max_new_tokens: 128
eval_steps: 40
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 2
gradient_checkpointing: false
group_by_length: false
hub_model_id: mrferr3t/aa7df70b-10d7-4194-946f-f6b02d72bea0
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0003
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 100
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
micro_batch_size: 32
mlflow_experiment_name: /tmp/b177e99f9afc8918_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 50
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
s2_attention: null
sample_packing: false
save_steps: 40
saves_per_epoch: 0
sequence_len: 512
special_tokens:
pad_token: <|endoftext|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 6224a0bd-20f5-44b3-8193-1192471d4f6a
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 6224a0bd-20f5-44b3-8193-1192471d4f6a
warmup_ratio: 0.05
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# aa7df70b-10d7-4194-946f-f6b02d72bea0
This model is a fine-tuned version of [bigcode/starcoder2-3b](https://huggingface.co/bigcode/starcoder2-3b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9058
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Use adamw_bnb_8bit with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 252
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0025 | 1 | 2.1047 |
| No log | 0.0991 | 40 | 2.1406 |
| No log | 0.1983 | 80 | 2.0908 |
| 7.0275 | 0.2974 | 120 | 2.0375 |
| 7.0275 | 0.3965 | 160 | 2.0149 |
| 4.9378 | 0.4957 | 200 | 1.9939 |
| 4.9378 | 0.5948 | 240 | 1.9753 |
| 4.9378 | 0.6939 | 280 | 1.9579 |
| 4.5144 | 0.7931 | 320 | 1.9552 |
| 4.5144 | 0.8922 | 360 | 1.9438 |
| 4.3418 | 0.9913 | 400 | 1.9471 |
| 4.3418 | 1.0905 | 440 | 1.9424 |
| 4.3418 | 1.1896 | 480 | 1.9289 |
| 4.1955 | 1.2887 | 520 | 1.9255 |
| 4.1955 | 1.3879 | 560 | 1.9198 |
| 4.159 | 1.4870 | 600 | 1.9194 |
| 4.159 | 1.5861 | 640 | 1.9114 |
| 4.159 | 1.6853 | 680 | 1.9083 |
| 4.1195 | 1.7844 | 720 | 1.9021 |
| 4.1195 | 1.8835 | 760 | 1.9041 |
| 4.1376 | 1.9827 | 800 | 1.9057 |
| 4.1376 | 2.0818 | 840 | 1.9058 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.3.1+cu121
- Datasets 3.0.1
- Tokenizers 0.20.1
|
ThatEvan/Qwen2-VL-7B-Instruct-Q8_0-GGUF
|
ThatEvan
| 2025-02-04T02:17:33Z | 14 | 0 |
transformers
|
[
"transformers",
"gguf",
"multimodal",
"llama-cpp",
"gguf-my-repo",
"image-text-to-text",
"en",
"base_model:Qwen/Qwen2-VL-7B-Instruct",
"base_model:quantized:Qwen/Qwen2-VL-7B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] |
image-text-to-text
| 2025-02-04T02:16:57Z |
---
license: apache-2.0
language:
- en
pipeline_tag: image-text-to-text
tags:
- multimodal
- llama-cpp
- gguf-my-repo
library_name: transformers
base_model: Qwen/Qwen2-VL-7B-Instruct
---
# ThatEvan/Qwen2-VL-7B-Instruct-Q8_0-GGUF
This model was converted to GGUF format from [`Qwen/Qwen2-VL-7B-Instruct`](https://huggingface.co/Qwen/Qwen2-VL-7B-Instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Qwen/Qwen2-VL-7B-Instruct) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo ThatEvan/Qwen2-VL-7B-Instruct-Q8_0-GGUF --hf-file qwen2-vl-7b-instruct-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo ThatEvan/Qwen2-VL-7B-Instruct-Q8_0-GGUF --hf-file qwen2-vl-7b-instruct-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g. `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo ThatEvan/Qwen2-VL-7B-Instruct-Q8_0-GGUF --hf-file qwen2-vl-7b-instruct-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo ThatEvan/Qwen2-VL-7B-Instruct-Q8_0-GGUF --hf-file qwen2-vl-7b-instruct-q8_0.gguf -c 2048
```
|
adammandic87/53524ff7-36ed-4cce-89cb-2c69d9d55d03
|
adammandic87
| 2025-02-04T02:17:22Z | 8 | 0 |
peft
|
[
"peft",
"safetensors",
"falcon",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:tiiuae/falcon-7b",
"base_model:adapter:tiiuae/falcon-7b",
"license:apache-2.0",
"region:us"
] | null | 2025-02-04T02:02:59Z |
---
library_name: peft
license: apache-2.0
base_model: tiiuae/falcon-7b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 53524ff7-36ed-4cce-89cb-2c69d9d55d03
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: tiiuae/falcon-7b
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- bd2a081ce1ece142_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/bd2a081ce1ece142_train_data.json
type:
field_instruction: instructions
field_output: outputs
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: adammandic87/53524ff7-36ed-4cce-89cb-2c69d9d55d03
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: constant
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/bd2a081ce1ece142_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: <|endoftext|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 84541128-e99e-4412-b56a-7eb22c1c1e64
wandb_project: Birthday-SN56-34-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 84541128-e99e-4412-b56a-7eb22c1c1e64
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 53524ff7-36ed-4cce-89cb-2c69d9d55d03
This model is a fine-tuned version of [tiiuae/falcon-7b](https://huggingface.co/tiiuae/falcon-7b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9553
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: constant
- lr_scheduler_warmup_steps: 10
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0001 | 1 | 2.3803 |
| 8.7849 | 0.0043 | 50 | 2.1149 |
| 8.2085 | 0.0086 | 100 | 2.0359 |
| 7.558 | 0.0128 | 150 | 1.9935 |
| 8.0881 | 0.0171 | 200 | 1.9553 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
shibajustfor/803d9cc7-96ff-42c0-9d03-d1a52304d1cf
|
shibajustfor
| 2025-02-04T02:17:08Z | 7 | 0 |
peft
|
[
"peft",
"safetensors",
"falcon",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:tiiuae/falcon-7b",
"base_model:adapter:tiiuae/falcon-7b",
"license:apache-2.0",
"region:us"
] | null | 2025-02-04T02:02:58Z |
---
library_name: peft
license: apache-2.0
base_model: tiiuae/falcon-7b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 803d9cc7-96ff-42c0-9d03-d1a52304d1cf
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: tiiuae/falcon-7b
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- bd2a081ce1ece142_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/bd2a081ce1ece142_train_data.json
type:
field_instruction: instructions
field_output: outputs
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: shibajustfor/803d9cc7-96ff-42c0-9d03-d1a52304d1cf
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/bd2a081ce1ece142_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: <|endoftext|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 84541128-e99e-4412-b56a-7eb22c1c1e64
wandb_project: Birthday-SN56-11-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 84541128-e99e-4412-b56a-7eb22c1c1e64
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 803d9cc7-96ff-42c0-9d03-d1a52304d1cf
This model is a fine-tuned version of [tiiuae/falcon-7b](https://huggingface.co/tiiuae/falcon-7b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9950
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0001 | 1 | 2.3811 |
| 8.7842 | 0.0043 | 50 | 2.1147 |
| 8.2246 | 0.0086 | 100 | 2.0408 |
| 7.6217 | 0.0128 | 150 | 2.0032 |
| 8.2222 | 0.0171 | 200 | 1.9950 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
brixeus/c082654f-d952-4d0a-b187-0ff59eeaca53
|
brixeus
| 2025-02-04T02:13:31Z | 10 | 0 |
peft
|
[
"peft",
"safetensors",
"gemma2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/gemma-2-2b",
"base_model:adapter:unsloth/gemma-2-2b",
"license:gemma",
"region:us"
] | null | 2025-02-04T01:54:58Z |
---
library_name: peft
license: gemma
base_model: unsloth/gemma-2-2b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: c082654f-d952-4d0a-b187-0ff59eeaca53
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/gemma-2-2b
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 7f4ffc4da3710d39_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/7f4ffc4da3710d39_train_data.json
type:
field_input: text
field_instruction: task_name
field_output: hypothesis
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: brixeus/c082654f-d952-4d0a-b187-0ff59eeaca53
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: 0
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_steps: 100
micro_batch_size: 8
mlflow_experiment_name: /tmp/7f4ffc4da3710d39_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: techspear-hub
wandb_mode: online
wandb_name: a4f7ae30-2ca5-42fa-a4c8-6320e54b4228
wandb_project: Gradients-On-Three
wandb_run: your_name
wandb_runid: a4f7ae30-2ca5-42fa-a4c8-6320e54b4228
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# c082654f-d952-4d0a-b187-0ff59eeaca53
This model is a fine-tuned version of [unsloth/gemma-2-2b](https://huggingface.co/unsloth/gemma-2-2b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1649
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0016 | 1 | 2.9893 |
| 1.707 | 0.0143 | 9 | 1.1742 |
| 0.3968 | 0.0287 | 18 | 0.4188 |
| 0.2675 | 0.0430 | 27 | 0.2731 |
| 0.331 | 0.0574 | 36 | 0.2375 |
| 0.2956 | 0.0717 | 45 | 0.2089 |
| 0.2404 | 0.0861 | 54 | 0.1974 |
| 0.2711 | 0.1004 | 63 | 0.1865 |
| 0.1731 | 0.1147 | 72 | 0.1725 |
| 0.1535 | 0.1291 | 81 | 0.1671 |
| 0.199 | 0.1434 | 90 | 0.1653 |
| 0.1485 | 0.1578 | 99 | 0.1649 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
CultriX/Qwen2.5-14B-Ultima
|
CultriX
| 2025-02-04T02:12:11Z | 17 | 2 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:sometimesanotion/Lamarck-14B-v0.7",
"base_model:merge:sometimesanotion/Lamarck-14B-v0.7",
"base_model:sthenno/tempesthenno-ppo-ckpt40",
"base_model:merge:sthenno/tempesthenno-ppo-ckpt40",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-02-04T02:01:40Z |
---
base_model:
- sometimesanotion/Lamarck-14B-v0.7-rc4
- sthenno/tempesthenno-ppo-ckpt40
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [SLERP](https://en.wikipedia.org/wiki/Slerp) merge method.
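For reference, SLERP interpolates each pair of weight tensors along the arc between them; with interpolation factor $t$ (the per-slice `t` / `weight` values in the configuration below), the standard formula is as follows (mergekit's per-parameter handling may differ in detail):
```latex
\mathrm{slerp}(p_0, p_1; t) = \frac{\sin((1-t)\,\theta)}{\sin\theta}\, p_0 + \frac{\sin(t\,\theta)}{\sin\theta}\, p_1,
\qquad \theta = \arccos\!\left(\frac{p_0 \cdot p_1}{\lVert p_0 \rVert\, \lVert p_1 \rVert}\right)
```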
### Models Merged
The following models were included in the merge:
* [sometimesanotion/Lamarck-14B-v0.7-rc4](https://huggingface.co/sometimesanotion/Lamarck-14B-v0.7-rc4)
* [sthenno/tempesthenno-ppo-ckpt40](https://huggingface.co/sthenno/tempesthenno-ppo-ckpt40)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
# =============================================================================
# SuperMerge-14B-Simple
#
# This configuration merges only two components:
# - Base Model: Provides stable foundational features.
# Model: sometimesanotion/Lamarck-14B-v0.7-rc4
#
# - Reasoning Module: Drives enhanced mid-layer reasoning.
# Model: sthenno/tempesthenno-ppo-ckpt40
#
# The merge is performed using slerp with a V-shaped interpolation curve.
# Weighting across each 8-layer slice is tuned to balance core feature
# preservation with advanced reasoning.
# =============================================================================
name: SuperMerge-14B-Simple
merge_method: slerp
base_model: sometimesanotion/Lamarck-14B-v0.7-rc4
tokenizer_source: base
dtype: float32
out_dtype: bfloat16
parameters:
int8_mask: true # Optimize memory usage.
normalize: true # Ensure weights are on a comparable scale.
rescale: false # No additional rescaling necessary.
# Interpolation curve for 6 slices (48 layers total):
# Maintains a V-shaped emphasis for mid-layer processing.
t: [0.1, 0.35, 0.85, 0.85, 0.35, 0.1]
slices:
# ---------------------------------------------------------------------------
# Slice 1 (Layers 0-8):
# - Early layers: nearly pure base model with minimal PPO influence.
# ---------------------------------------------------------------------------
- sources:
- model: sometimesanotion/Lamarck-14B-v0.7-rc4
layer_range: [0, 8]
parameters:
weight: 0.95
- model: sthenno/tempesthenno-ppo-ckpt40
layer_range: [0, 8]
parameters:
weight: 0.05
# ---------------------------------------------------------------------------
# Slice 2 (Layers 8-16):
# - Blend base with stronger PPO contributions to boost reasoning.
# ---------------------------------------------------------------------------
- sources:
- model: sometimesanotion/Lamarck-14B-v0.7-rc4
layer_range: [8, 16]
parameters:
weight: 0.4
- model: sthenno/tempesthenno-ppo-ckpt40
layer_range: [8, 16]
parameters:
weight: 0.6
# ---------------------------------------------------------------------------
# Slice 3 (Layers 16-24):
# - Mid-layer: Prioritize advanced reasoning by increasing the PPO share.
# ---------------------------------------------------------------------------
- sources:
- model: sometimesanotion/Lamarck-14B-v0.7-rc4
layer_range: [16, 24]
parameters:
weight: 0.3
- model: sthenno/tempesthenno-ppo-ckpt40
layer_range: [16, 24]
parameters:
weight: 0.7
# ---------------------------------------------------------------------------
# Slice 4 (Layers 24-32):
# - Continue the focus on reasoning with PPO while still retaining base traits.
# ---------------------------------------------------------------------------
- sources:
- model: sometimesanotion/Lamarck-14B-v0.7-rc4
layer_range: [24, 32]
parameters:
weight: 0.35
- model: sthenno/tempesthenno-ppo-ckpt40
layer_range: [24, 32]
parameters:
weight: 0.65
# ---------------------------------------------------------------------------
# Slice 5 (Layers 32-40):
# - Re-stabilize the network with a stronger base model contribution.
# ---------------------------------------------------------------------------
- sources:
- model: sometimesanotion/Lamarck-14B-v0.7-rc4
layer_range: [32, 40]
parameters:
weight: 0.6
- model: sthenno/tempesthenno-ppo-ckpt40
layer_range: [32, 40]
parameters:
weight: 0.4
# ---------------------------------------------------------------------------
# Slice 6 (Layers 40-48):
# - Final output layers: Maintain fluency with the base model augmented by PPO.
# ---------------------------------------------------------------------------
- sources:
- model: sometimesanotion/Lamarck-14B-v0.7-rc4
layer_range: [40, 48]
parameters:
weight: 0.6
- model: sthenno/tempesthenno-ppo-ckpt40
layer_range: [40, 48]
parameters:
weight: 0.4
```
|
mradermacher/Qwen-sce-14B-GGUF
|
mradermacher
| 2025-02-04T02:10:50Z | 234 | 0 |
transformers
|
[
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:hotmailuser/Qwen-sce-14B",
"base_model:quantized:hotmailuser/Qwen-sce-14B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-02-04T01:03:36Z |
---
base_model: hotmailuser/Qwen-sce-14B
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/hotmailuser/Qwen-sce-14B
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
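As a quick start (a sketch added here, not part of the upstream card), a single-file quant from the table below can be fetched with `huggingface_hub` and handed to any GGUF-capable runtime:
```python
# Sketch: download the Q4_K_M quant listed below and print its local path.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="mradermacher/Qwen-sce-14B-GGUF",
    filename="Qwen-sce-14B.Q4_K_M.gguf",
)
print(path)  # pass this path to your GGUF runtime, e.g. llama-cli -m <path>
```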
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen-sce-14B-GGUF/resolve/main/Qwen-sce-14B.Q2_K.gguf) | Q2_K | 5.9 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen-sce-14B-GGUF/resolve/main/Qwen-sce-14B.Q3_K_S.gguf) | Q3_K_S | 6.8 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen-sce-14B-GGUF/resolve/main/Qwen-sce-14B.Q3_K_M.gguf) | Q3_K_M | 7.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen-sce-14B-GGUF/resolve/main/Qwen-sce-14B.Q3_K_L.gguf) | Q3_K_L | 8.0 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen-sce-14B-GGUF/resolve/main/Qwen-sce-14B.IQ4_XS.gguf) | IQ4_XS | 8.3 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen-sce-14B-GGUF/resolve/main/Qwen-sce-14B.Q4_K_S.gguf) | Q4_K_S | 8.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen-sce-14B-GGUF/resolve/main/Qwen-sce-14B.Q4_K_M.gguf) | Q4_K_M | 9.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen-sce-14B-GGUF/resolve/main/Qwen-sce-14B.Q5_K_S.gguf) | Q5_K_S | 10.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen-sce-14B-GGUF/resolve/main/Qwen-sce-14B.Q5_K_M.gguf) | Q5_K_M | 10.6 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen-sce-14B-GGUF/resolve/main/Qwen-sce-14B.Q6_K.gguf) | Q6_K | 12.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen-sce-14B-GGUF/resolve/main/Qwen-sce-14B.Q8_0.gguf) | Q8_0 | 15.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
demohong/4042886f-6813-4a43-91e4-f688de321ef7
|
demohong
| 2025-02-04T02:10:24Z | 10 | 0 |
peft
|
[
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Mistral-Nemo-Instruct-2407",
"base_model:adapter:unsloth/Mistral-Nemo-Instruct-2407",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-02-04T00:57:11Z |
---
library_name: peft
license: apache-2.0
base_model: unsloth/Mistral-Nemo-Instruct-2407
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 4042886f-6813-4a43-91e4-f688de321ef7
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Mistral-Nemo-Instruct-2407
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 3e5eab4715297236_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/3e5eab4715297236_train_data.json
type:
field_input: ''
field_instruction: prompt
field_output: chosen
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: demohong/4042886f-6813-4a43-91e4-f688de321ef7
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/3e5eab4715297236_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 06993ad5-9e1b-472b-9fb0-ffdcec07b62e
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 06993ad5-9e1b-472b-9fb0-ffdcec07b62e
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 4042886f-6813-4a43-91e4-f688de321ef7
This model is a fine-tuned version of [unsloth/Mistral-Nemo-Instruct-2407](https://huggingface.co/unsloth/Mistral-Nemo-Instruct-2407) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2257
## Model description
More information needed
## Intended uses & limitations
More information needed
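Since this adapter was trained with the base model loaded in 8-bit (bitsandbytes), a minimal loading sketch under that assumption (not the author's published code):
```python
# Sketch: attach this LoRA adapter to the base model loaded in 8-bit via bitsandbytes.
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base_id = "unsloth/Mistral-Nemo-Instruct-2407"
adapter_id = "demohong/4042886f-6813-4a43-91e4-f688de321ef7"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id,
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    device_map="auto",
)
model = PeftModel.from_pretrained(base, adapter_id)
```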
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.6901 | 0.1850 | 200 | 0.2257 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
mradermacher/EXAONE-3.5-7.8B-SFT-Translation-Style-Tag-DPO-GGUF
|
mradermacher
| 2025-02-04T02:08:58Z | 269 | 0 |
transformers
|
[
"transformers",
"gguf",
"exaone",
"ko",
"en",
"base_model:werty1248/EXAONE-3.5-7.8B-SFT-Translation-Style-Tag-DPO",
"base_model:quantized:werty1248/EXAONE-3.5-7.8B-SFT-Translation-Style-Tag-DPO",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-02-04T01:39:57Z |
---
base_model: werty1248/EXAONE-3.5-7.8B-SFT-Translation-Style-Tag-DPO
language:
- ko
- en
library_name: transformers
quantized_by: mradermacher
tags:
- exaone
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/werty1248/EXAONE-3.5-7.8B-SFT-Translation-Style-Tag-DPO
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/EXAONE-3.5-7.8B-SFT-Translation-Style-Tag-DPO-GGUF/resolve/main/EXAONE-3.5-7.8B-SFT-Translation-Style-Tag-DPO.Q2_K.gguf) | Q2_K | 3.2 | |
| [GGUF](https://huggingface.co/mradermacher/EXAONE-3.5-7.8B-SFT-Translation-Style-Tag-DPO-GGUF/resolve/main/EXAONE-3.5-7.8B-SFT-Translation-Style-Tag-DPO.Q3_K_S.gguf) | Q3_K_S | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/EXAONE-3.5-7.8B-SFT-Translation-Style-Tag-DPO-GGUF/resolve/main/EXAONE-3.5-7.8B-SFT-Translation-Style-Tag-DPO.Q3_K_M.gguf) | Q3_K_M | 4.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/EXAONE-3.5-7.8B-SFT-Translation-Style-Tag-DPO-GGUF/resolve/main/EXAONE-3.5-7.8B-SFT-Translation-Style-Tag-DPO.Q3_K_L.gguf) | Q3_K_L | 4.3 | |
| [GGUF](https://huggingface.co/mradermacher/EXAONE-3.5-7.8B-SFT-Translation-Style-Tag-DPO-GGUF/resolve/main/EXAONE-3.5-7.8B-SFT-Translation-Style-Tag-DPO.IQ4_XS.gguf) | IQ4_XS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/EXAONE-3.5-7.8B-SFT-Translation-Style-Tag-DPO-GGUF/resolve/main/EXAONE-3.5-7.8B-SFT-Translation-Style-Tag-DPO.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/EXAONE-3.5-7.8B-SFT-Translation-Style-Tag-DPO-GGUF/resolve/main/EXAONE-3.5-7.8B-SFT-Translation-Style-Tag-DPO.Q4_K_M.gguf) | Q4_K_M | 4.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/EXAONE-3.5-7.8B-SFT-Translation-Style-Tag-DPO-GGUF/resolve/main/EXAONE-3.5-7.8B-SFT-Translation-Style-Tag-DPO.Q5_K_S.gguf) | Q5_K_S | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/EXAONE-3.5-7.8B-SFT-Translation-Style-Tag-DPO-GGUF/resolve/main/EXAONE-3.5-7.8B-SFT-Translation-Style-Tag-DPO.Q5_K_M.gguf) | Q5_K_M | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/EXAONE-3.5-7.8B-SFT-Translation-Style-Tag-DPO-GGUF/resolve/main/EXAONE-3.5-7.8B-SFT-Translation-Style-Tag-DPO.Q6_K.gguf) | Q6_K | 6.5 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/EXAONE-3.5-7.8B-SFT-Translation-Style-Tag-DPO-GGUF/resolve/main/EXAONE-3.5-7.8B-SFT-Translation-Style-Tag-DPO.Q8_0.gguf) | Q8_0 | 8.4 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/EXAONE-3.5-7.8B-SFT-Translation-Style-Tag-DPO-GGUF/resolve/main/EXAONE-3.5-7.8B-SFT-Translation-Style-Tag-DPO.f16.gguf) | f16 | 15.7 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
MayBashendy/ArabicNewSplits7_B_usingWellWrittenEssays_FineTuningAraBERT_run999_AugV5_k1_task5_organization
|
MayBashendy
| 2025-02-04T02:08:23Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:aubmindlab/bert-base-arabertv02",
"base_model:finetune:aubmindlab/bert-base-arabertv02",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-02-04T02:01:33Z |
---
library_name: transformers
base_model: aubmindlab/bert-base-arabertv02
tags:
- generated_from_trainer
model-index:
- name: ArabicNewSplits7_B_usingWellWrittenEssays_FineTuningAraBERT_run999_AugV5_k1_task5_organization
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ArabicNewSplits7_B_usingWellWrittenEssays_FineTuningAraBERT_run999_AugV5_k1_task5_organization
This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6173
- Qwk: 0.5940
- Mse: 0.6173
- Rmse: 0.7857
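Qwk is presumably quadratic weighted kappa (Cohen's kappa with quadratic weights) and Rmse the square root of Mse; a minimal sketch of computing these metrics for integer ordinal predictions (an illustration, not the training script):
```python
# Sketch: Qwk / Mse / Rmse for ordinal (integer-labelled) predictions.
import numpy as np
from sklearn.metrics import cohen_kappa_score, mean_squared_error

def eval_metrics(y_true, y_pred):
    mse = mean_squared_error(y_true, y_pred)
    return {
        "qwk": cohen_kappa_score(y_true, y_pred, weights="quadratic"),
        "mse": mse,
        "rmse": float(np.sqrt(mse)),
    }

print(eval_metrics([0, 1, 2, 3], [0, 2, 2, 3]))
```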
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:-------:|:----:|:---------------:|:-------:|:------:|:------:|
| No log | 0.2857 | 2 | 4.0411 | -0.0177 | 4.0411 | 2.0102 |
| No log | 0.5714 | 4 | 2.5074 | 0.1240 | 2.5074 | 1.5835 |
| No log | 0.8571 | 6 | 1.3389 | 0.0380 | 1.3389 | 1.1571 |
| No log | 1.1429 | 8 | 1.0995 | 0.2023 | 1.0995 | 1.0486 |
| No log | 1.4286 | 10 | 1.0774 | 0.0855 | 1.0774 | 1.0380 |
| No log | 1.7143 | 12 | 1.0648 | 0.2035 | 1.0648 | 1.0319 |
| No log | 2.0 | 14 | 1.0250 | 0.2591 | 1.0250 | 1.0124 |
| No log | 2.2857 | 16 | 1.0037 | 0.2935 | 1.0037 | 1.0018 |
| No log | 2.5714 | 18 | 0.8978 | 0.3876 | 0.8978 | 0.9475 |
| No log | 2.8571 | 20 | 0.7712 | 0.5220 | 0.7712 | 0.8782 |
| No log | 3.1429 | 22 | 0.7311 | 0.5663 | 0.7311 | 0.8551 |
| No log | 3.4286 | 24 | 0.7272 | 0.5704 | 0.7272 | 0.8528 |
| No log | 3.7143 | 26 | 1.0824 | 0.4618 | 1.0824 | 1.0404 |
| No log | 4.0 | 28 | 1.1801 | 0.3748 | 1.1801 | 1.0863 |
| No log | 4.2857 | 30 | 1.1940 | 0.4026 | 1.1940 | 1.0927 |
| No log | 4.5714 | 32 | 0.7990 | 0.5299 | 0.7990 | 0.8939 |
| No log | 4.8571 | 34 | 0.6993 | 0.5646 | 0.6993 | 0.8362 |
| No log | 5.1429 | 36 | 0.7540 | 0.4872 | 0.7540 | 0.8683 |
| No log | 5.4286 | 38 | 0.8317 | 0.5324 | 0.8317 | 0.9120 |
| No log | 5.7143 | 40 | 1.0255 | 0.4906 | 1.0255 | 1.0127 |
| No log | 6.0 | 42 | 0.7867 | 0.5912 | 0.7867 | 0.8869 |
| No log | 6.2857 | 44 | 0.7651 | 0.6089 | 0.7651 | 0.8747 |
| No log | 6.5714 | 46 | 0.7851 | 0.6354 | 0.7851 | 0.8860 |
| No log | 6.8571 | 48 | 0.7000 | 0.6369 | 0.7000 | 0.8367 |
| No log | 7.1429 | 50 | 0.6834 | 0.6398 | 0.6834 | 0.8267 |
| No log | 7.4286 | 52 | 0.6507 | 0.6269 | 0.6507 | 0.8067 |
| No log | 7.7143 | 54 | 0.6895 | 0.5946 | 0.6895 | 0.8304 |
| No log | 8.0 | 56 | 0.8274 | 0.6263 | 0.8274 | 0.9096 |
| No log | 8.2857 | 58 | 0.7030 | 0.5873 | 0.7030 | 0.8384 |
| No log | 8.5714 | 60 | 0.7523 | 0.6004 | 0.7523 | 0.8674 |
| No log | 8.8571 | 62 | 0.9421 | 0.5283 | 0.9421 | 0.9706 |
| No log | 9.1429 | 64 | 0.7768 | 0.5805 | 0.7768 | 0.8814 |
| No log | 9.4286 | 66 | 0.6776 | 0.5548 | 0.6776 | 0.8231 |
| No log | 9.7143 | 68 | 0.6717 | 0.5653 | 0.6717 | 0.8196 |
| No log | 10.0 | 70 | 0.7936 | 0.5997 | 0.7936 | 0.8908 |
| No log | 10.2857 | 72 | 0.7547 | 0.5459 | 0.7547 | 0.8688 |
| No log | 10.5714 | 74 | 0.7270 | 0.5459 | 0.7270 | 0.8527 |
| No log | 10.8571 | 76 | 0.6373 | 0.5591 | 0.6373 | 0.7983 |
| No log | 11.1429 | 78 | 0.6423 | 0.5506 | 0.6423 | 0.8015 |
| No log | 11.4286 | 80 | 0.6805 | 0.5325 | 0.6805 | 0.8249 |
| No log | 11.7143 | 82 | 0.8641 | 0.5066 | 0.8641 | 0.9295 |
| No log | 12.0 | 84 | 0.7504 | 0.5770 | 0.7504 | 0.8662 |
| No log | 12.2857 | 86 | 0.6421 | 0.6070 | 0.6421 | 0.8013 |
| No log | 12.5714 | 88 | 0.6293 | 0.6187 | 0.6293 | 0.7933 |
| No log | 12.8571 | 90 | 0.6317 | 0.6215 | 0.6317 | 0.7948 |
| No log | 13.1429 | 92 | 0.6525 | 0.5817 | 0.6525 | 0.8078 |
| No log | 13.4286 | 94 | 0.8344 | 0.5583 | 0.8344 | 0.9134 |
| No log | 13.7143 | 96 | 0.7955 | 0.6014 | 0.7955 | 0.8919 |
| No log | 14.0 | 98 | 0.6467 | 0.5964 | 0.6467 | 0.8042 |
| No log | 14.2857 | 100 | 0.6482 | 0.6094 | 0.6482 | 0.8051 |
| No log | 14.5714 | 102 | 0.6633 | 0.5869 | 0.6633 | 0.8144 |
| No log | 14.8571 | 104 | 0.8384 | 0.6014 | 0.8384 | 0.9156 |
| No log | 15.1429 | 106 | 1.0215 | 0.5094 | 1.0215 | 1.0107 |
| No log | 15.4286 | 108 | 0.8825 | 0.5781 | 0.8825 | 0.9394 |
| No log | 15.7143 | 110 | 0.6695 | 0.5905 | 0.6695 | 0.8182 |
| No log | 16.0 | 112 | 0.6705 | 0.5239 | 0.6705 | 0.8188 |
| No log | 16.2857 | 114 | 0.6724 | 0.5166 | 0.6724 | 0.8200 |
| No log | 16.5714 | 116 | 0.7416 | 0.5357 | 0.7416 | 0.8612 |
| No log | 16.8571 | 118 | 0.8245 | 0.6043 | 0.8245 | 0.9080 |
| No log | 17.1429 | 120 | 0.7397 | 0.5676 | 0.7397 | 0.8601 |
| No log | 17.4286 | 122 | 0.6358 | 0.5349 | 0.6358 | 0.7974 |
| No log | 17.7143 | 124 | 0.6519 | 0.6560 | 0.6519 | 0.8074 |
| No log | 18.0 | 126 | 0.6405 | 0.6777 | 0.6405 | 0.8003 |
| No log | 18.2857 | 128 | 0.6314 | 0.6104 | 0.6314 | 0.7946 |
| No log | 18.5714 | 130 | 0.7573 | 0.6325 | 0.7573 | 0.8702 |
| No log | 18.8571 | 132 | 0.7665 | 0.6173 | 0.7665 | 0.8755 |
| No log | 19.1429 | 134 | 0.7407 | 0.5553 | 0.7407 | 0.8607 |
| No log | 19.4286 | 136 | 0.6578 | 0.5770 | 0.6578 | 0.8110 |
| No log | 19.7143 | 138 | 0.6158 | 0.5713 | 0.6158 | 0.7848 |
| No log | 20.0 | 140 | 0.6543 | 0.6209 | 0.6543 | 0.8089 |
| No log | 20.2857 | 142 | 0.6614 | 0.6209 | 0.6614 | 0.8132 |
| No log | 20.5714 | 144 | 0.6205 | 0.6028 | 0.6205 | 0.7877 |
| No log | 20.8571 | 146 | 0.6641 | 0.5665 | 0.6641 | 0.8149 |
| No log | 21.1429 | 148 | 0.7248 | 0.5676 | 0.7248 | 0.8513 |
| No log | 21.4286 | 150 | 0.6624 | 0.5862 | 0.6624 | 0.8139 |
| No log | 21.7143 | 152 | 0.6215 | 0.6186 | 0.6215 | 0.7884 |
| No log | 22.0 | 154 | 0.6231 | 0.6196 | 0.6231 | 0.7893 |
| No log | 22.2857 | 156 | 0.6269 | 0.5911 | 0.6269 | 0.7918 |
| No log | 22.5714 | 158 | 0.6159 | 0.6154 | 0.6159 | 0.7848 |
| No log | 22.8571 | 160 | 0.6527 | 0.5446 | 0.6527 | 0.8079 |
| No log | 23.1429 | 162 | 0.6590 | 0.5566 | 0.6590 | 0.8118 |
| No log | 23.4286 | 164 | 0.6148 | 0.6246 | 0.6148 | 0.7841 |
| No log | 23.7143 | 166 | 0.6160 | 0.6606 | 0.6160 | 0.7849 |
| No log | 24.0 | 168 | 0.6191 | 0.6606 | 0.6191 | 0.7868 |
| No log | 24.2857 | 170 | 0.6525 | 0.6334 | 0.6525 | 0.8078 |
| No log | 24.5714 | 172 | 0.7196 | 0.5860 | 0.7196 | 0.8483 |
| No log | 24.8571 | 174 | 0.6793 | 0.6490 | 0.6793 | 0.8242 |
| No log | 25.1429 | 176 | 0.6187 | 0.6606 | 0.6187 | 0.7866 |
| No log | 25.4286 | 178 | 0.6025 | 0.6426 | 0.6025 | 0.7762 |
| No log | 25.7143 | 180 | 0.6130 | 0.6426 | 0.6130 | 0.7830 |
| No log | 26.0 | 182 | 0.6714 | 0.5459 | 0.6714 | 0.8194 |
| No log | 26.2857 | 184 | 0.7240 | 0.6043 | 0.7240 | 0.8509 |
| No log | 26.5714 | 186 | 0.7571 | 0.5709 | 0.7571 | 0.8701 |
| No log | 26.8571 | 188 | 0.6859 | 0.5645 | 0.6859 | 0.8282 |
| No log | 27.1429 | 190 | 0.6325 | 0.5955 | 0.6325 | 0.7953 |
| No log | 27.4286 | 192 | 0.6464 | 0.6398 | 0.6464 | 0.8040 |
| No log | 27.7143 | 194 | 0.6351 | 0.6429 | 0.6351 | 0.7969 |
| No log | 28.0 | 196 | 0.6261 | 0.6035 | 0.6261 | 0.7912 |
| No log | 28.2857 | 198 | 0.6621 | 0.5759 | 0.6621 | 0.8137 |
| No log | 28.5714 | 200 | 0.6848 | 0.5459 | 0.6848 | 0.8275 |
| No log | 28.8571 | 202 | 0.6610 | 0.5446 | 0.6610 | 0.8130 |
| No log | 29.1429 | 204 | 0.6232 | 0.5971 | 0.6232 | 0.7895 |
| No log | 29.4286 | 206 | 0.6101 | 0.6167 | 0.6101 | 0.7811 |
| No log | 29.7143 | 208 | 0.6101 | 0.6215 | 0.6101 | 0.7811 |
| No log | 30.0 | 210 | 0.6112 | 0.6164 | 0.6112 | 0.7818 |
| No log | 30.2857 | 212 | 0.6341 | 0.5653 | 0.6341 | 0.7963 |
| No log | 30.5714 | 214 | 0.6348 | 0.5536 | 0.6348 | 0.7968 |
| No log | 30.8571 | 216 | 0.6114 | 0.6144 | 0.6114 | 0.7819 |
| No log | 31.1429 | 218 | 0.6003 | 0.6452 | 0.6003 | 0.7748 |
| No log | 31.4286 | 220 | 0.6014 | 0.6555 | 0.6014 | 0.7755 |
| No log | 31.7143 | 222 | 0.6052 | 0.6452 | 0.6052 | 0.7779 |
| No log | 32.0 | 224 | 0.6108 | 0.5737 | 0.6108 | 0.7815 |
| No log | 32.2857 | 226 | 0.6891 | 0.5686 | 0.6891 | 0.8302 |
| No log | 32.5714 | 228 | 0.7466 | 0.6025 | 0.7466 | 0.8640 |
| No log | 32.8571 | 230 | 0.6946 | 0.5553 | 0.6946 | 0.8334 |
| No log | 33.1429 | 232 | 0.6156 | 0.6144 | 0.6156 | 0.7846 |
| No log | 33.4286 | 234 | 0.6046 | 0.6275 | 0.6046 | 0.7776 |
| No log | 33.7143 | 236 | 0.6030 | 0.6275 | 0.6030 | 0.7765 |
| No log | 34.0 | 238 | 0.6072 | 0.6452 | 0.6072 | 0.7792 |
| No log | 34.2857 | 240 | 0.6364 | 0.6052 | 0.6364 | 0.7977 |
| No log | 34.5714 | 242 | 0.6869 | 0.6014 | 0.6869 | 0.8288 |
| No log | 34.8571 | 244 | 0.6785 | 0.6377 | 0.6785 | 0.8237 |
| No log | 35.1429 | 246 | 0.6224 | 0.5737 | 0.6224 | 0.7889 |
| No log | 35.4286 | 248 | 0.6174 | 0.5945 | 0.6174 | 0.7857 |
| No log | 35.7143 | 250 | 0.6454 | 0.5869 | 0.6454 | 0.8034 |
| No log | 36.0 | 252 | 0.6353 | 0.5614 | 0.6353 | 0.7970 |
| No log | 36.2857 | 254 | 0.6112 | 0.6364 | 0.6112 | 0.7818 |
| No log | 36.5714 | 256 | 0.6165 | 0.5748 | 0.6165 | 0.7852 |
| No log | 36.8571 | 258 | 0.6368 | 0.5865 | 0.6368 | 0.7980 |
| No log | 37.1429 | 260 | 0.6268 | 0.5865 | 0.6268 | 0.7917 |
| No log | 37.4286 | 262 | 0.6091 | 0.6144 | 0.6091 | 0.7804 |
| No log | 37.7143 | 264 | 0.6062 | 0.6426 | 0.6062 | 0.7786 |
| No log | 38.0 | 266 | 0.6107 | 0.6068 | 0.6107 | 0.7815 |
| No log | 38.2857 | 268 | 0.6050 | 0.6452 | 0.6050 | 0.7778 |
| No log | 38.5714 | 270 | 0.6054 | 0.6325 | 0.6054 | 0.7781 |
| No log | 38.8571 | 272 | 0.6179 | 0.5852 | 0.6179 | 0.7861 |
| No log | 39.1429 | 274 | 0.6324 | 0.6184 | 0.6324 | 0.7952 |
| No log | 39.4286 | 276 | 0.6154 | 0.5650 | 0.6154 | 0.7845 |
| No log | 39.7143 | 278 | 0.6099 | 0.6078 | 0.6099 | 0.7809 |
| No log | 40.0 | 280 | 0.6126 | 0.6057 | 0.6126 | 0.7827 |
| No log | 40.2857 | 282 | 0.6225 | 0.5943 | 0.6225 | 0.7890 |
| No log | 40.5714 | 284 | 0.6265 | 0.5610 | 0.6265 | 0.7915 |
| No log | 40.8571 | 286 | 0.6268 | 0.5402 | 0.6268 | 0.7917 |
| No log | 41.1429 | 288 | 0.6307 | 0.5301 | 0.6307 | 0.7942 |
| No log | 41.4286 | 290 | 0.6306 | 0.5650 | 0.6306 | 0.7941 |
| No log | 41.7143 | 292 | 0.6274 | 0.5650 | 0.6274 | 0.7921 |
| No log | 42.0 | 294 | 0.6513 | 0.5999 | 0.6513 | 0.8070 |
| No log | 42.2857 | 296 | 0.6633 | 0.5917 | 0.6633 | 0.8144 |
| No log | 42.5714 | 298 | 0.6596 | 0.5917 | 0.6596 | 0.8122 |
| No log | 42.8571 | 300 | 0.6227 | 0.5961 | 0.6227 | 0.7891 |
| No log | 43.1429 | 302 | 0.6064 | 0.6259 | 0.6064 | 0.7787 |
| No log | 43.4286 | 304 | 0.6075 | 0.6426 | 0.6075 | 0.7794 |
| No log | 43.7143 | 306 | 0.6260 | 0.5737 | 0.6260 | 0.7912 |
| No log | 44.0 | 308 | 0.6470 | 0.5862 | 0.6470 | 0.8043 |
| No log | 44.2857 | 310 | 0.6992 | 0.6004 | 0.6992 | 0.8362 |
| No log | 44.5714 | 312 | 0.7056 | 0.5946 | 0.7056 | 0.8400 |
| No log | 44.8571 | 314 | 0.6508 | 0.5748 | 0.6508 | 0.8067 |
| No log | 45.1429 | 316 | 0.6180 | 0.6606 | 0.6180 | 0.7861 |
| No log | 45.4286 | 318 | 0.6216 | 0.6606 | 0.6216 | 0.7884 |
| No log | 45.7143 | 320 | 0.6345 | 0.6482 | 0.6345 | 0.7966 |
| No log | 46.0 | 322 | 0.6901 | 0.6003 | 0.6901 | 0.8307 |
| No log | 46.2857 | 324 | 0.7971 | 0.6382 | 0.7971 | 0.8928 |
| No log | 46.5714 | 326 | 0.8008 | 0.6382 | 0.8008 | 0.8949 |
| No log | 46.8571 | 328 | 0.7238 | 0.6100 | 0.7238 | 0.8507 |
| No log | 47.1429 | 330 | 0.6473 | 0.6444 | 0.6473 | 0.8046 |
| No log | 47.4286 | 332 | 0.6077 | 0.6526 | 0.6077 | 0.7795 |
| No log | 47.7143 | 334 | 0.6130 | 0.6001 | 0.6130 | 0.7829 |
| No log | 48.0 | 336 | 0.6109 | 0.6018 | 0.6109 | 0.7816 |
| No log | 48.2857 | 338 | 0.6014 | 0.6364 | 0.6014 | 0.7755 |
| No log | 48.5714 | 340 | 0.6152 | 0.5843 | 0.6152 | 0.7843 |
| No log | 48.8571 | 342 | 0.6673 | 0.5686 | 0.6673 | 0.8169 |
| No log | 49.1429 | 344 | 0.7147 | 0.6032 | 0.7147 | 0.8454 |
| No log | 49.4286 | 346 | 0.7329 | 0.5881 | 0.7329 | 0.8561 |
| No log | 49.7143 | 348 | 0.6940 | 0.5770 | 0.6940 | 0.8331 |
| No log | 50.0 | 350 | 0.6400 | 0.5966 | 0.6400 | 0.8000 |
| No log | 50.2857 | 352 | 0.6171 | 0.6426 | 0.6171 | 0.7856 |
| No log | 50.5714 | 354 | 0.6120 | 0.6526 | 0.6120 | 0.7823 |
| No log | 50.8571 | 356 | 0.6075 | 0.6526 | 0.6075 | 0.7794 |
| No log | 51.1429 | 358 | 0.6039 | 0.6526 | 0.6039 | 0.7771 |
| No log | 51.4286 | 360 | 0.6010 | 0.6526 | 0.6010 | 0.7753 |
| No log | 51.7143 | 362 | 0.6096 | 0.6032 | 0.6096 | 0.7807 |
| No log | 52.0 | 364 | 0.6431 | 0.5887 | 0.6431 | 0.8019 |
| No log | 52.2857 | 366 | 0.6792 | 0.6341 | 0.6792 | 0.8241 |
| No log | 52.5714 | 368 | 0.6657 | 0.6488 | 0.6657 | 0.8159 |
| No log | 52.8571 | 370 | 0.6297 | 0.6052 | 0.6297 | 0.7936 |
| No log | 53.1429 | 372 | 0.6042 | 0.6615 | 0.6042 | 0.7773 |
| No log | 53.4286 | 374 | 0.5968 | 0.6325 | 0.5968 | 0.7725 |
| No log | 53.7143 | 376 | 0.5944 | 0.6325 | 0.5944 | 0.7710 |
| No log | 54.0 | 378 | 0.5931 | 0.6426 | 0.5931 | 0.7701 |
| No log | 54.2857 | 380 | 0.5922 | 0.6237 | 0.5922 | 0.7696 |
| No log | 54.5714 | 382 | 0.5946 | 0.6364 | 0.5946 | 0.7711 |
| No log | 54.8571 | 384 | 0.6008 | 0.6354 | 0.6008 | 0.7751 |
| No log | 55.1429 | 386 | 0.6037 | 0.6470 | 0.6037 | 0.7770 |
| No log | 55.4286 | 388 | 0.6032 | 0.5747 | 0.6032 | 0.7766 |
| No log | 55.7143 | 390 | 0.5994 | 0.6364 | 0.5994 | 0.7742 |
| No log | 56.0 | 392 | 0.5937 | 0.6364 | 0.5937 | 0.7705 |
| No log | 56.2857 | 394 | 0.5907 | 0.6526 | 0.5907 | 0.7686 |
| No log | 56.5714 | 396 | 0.5937 | 0.6508 | 0.5937 | 0.7705 |
| No log | 56.8571 | 398 | 0.6097 | 0.6508 | 0.6097 | 0.7808 |
| No log | 57.1429 | 400 | 0.6238 | 0.5940 | 0.6238 | 0.7898 |
| No log | 57.4286 | 402 | 0.6393 | 0.5940 | 0.6393 | 0.7996 |
| No log | 57.7143 | 404 | 0.6436 | 0.6377 | 0.6436 | 0.8022 |
| No log | 58.0 | 406 | 0.6572 | 0.6377 | 0.6572 | 0.8107 |
| No log | 58.2857 | 408 | 0.6916 | 0.6436 | 0.6916 | 0.8316 |
| No log | 58.5714 | 410 | 0.6700 | 0.6353 | 0.6700 | 0.8185 |
| No log | 58.8571 | 412 | 0.6326 | 0.6265 | 0.6326 | 0.7954 |
| No log | 59.1429 | 414 | 0.6042 | 0.6508 | 0.6042 | 0.7773 |
| No log | 59.4286 | 416 | 0.5994 | 0.6426 | 0.5994 | 0.7742 |
| No log | 59.7143 | 418 | 0.5984 | 0.6325 | 0.5984 | 0.7736 |
| No log | 60.0 | 420 | 0.5994 | 0.6325 | 0.5994 | 0.7742 |
| No log | 60.2857 | 422 | 0.6045 | 0.6334 | 0.6045 | 0.7775 |
| No log | 60.5714 | 424 | 0.6159 | 0.6265 | 0.6159 | 0.7848 |
| No log | 60.8571 | 426 | 0.6224 | 0.6265 | 0.6224 | 0.7889 |
| No log | 61.1429 | 428 | 0.6214 | 0.6070 | 0.6214 | 0.7883 |
| No log | 61.4286 | 430 | 0.6114 | 0.6144 | 0.6114 | 0.7819 |
| No log | 61.7143 | 432 | 0.6046 | 0.6144 | 0.6046 | 0.7776 |
| No log | 62.0 | 434 | 0.6050 | 0.6144 | 0.6050 | 0.7778 |
| No log | 62.2857 | 436 | 0.6153 | 0.6144 | 0.6153 | 0.7844 |
| No log | 62.5714 | 438 | 0.6107 | 0.6144 | 0.6107 | 0.7815 |
| No log | 62.8571 | 440 | 0.6071 | 0.6144 | 0.6071 | 0.7792 |
| No log | 63.1429 | 442 | 0.6025 | 0.6144 | 0.6025 | 0.7762 |
| No log | 63.4286 | 444 | 0.6019 | 0.6144 | 0.6019 | 0.7758 |
| No log | 63.7143 | 446 | 0.6092 | 0.6334 | 0.6092 | 0.7805 |
| No log | 64.0 | 448 | 0.6424 | 0.6184 | 0.6424 | 0.8015 |
| No log | 64.2857 | 450 | 0.6696 | 0.6275 | 0.6696 | 0.8183 |
| No log | 64.5714 | 452 | 0.6633 | 0.6275 | 0.6633 | 0.8144 |
| No log | 64.8571 | 454 | 0.6482 | 0.6164 | 0.6482 | 0.8051 |
| No log | 65.1429 | 456 | 0.6270 | 0.6311 | 0.6270 | 0.7918 |
| No log | 65.4286 | 458 | 0.6185 | 0.6334 | 0.6185 | 0.7865 |
| No log | 65.7143 | 460 | 0.6108 | 0.6334 | 0.6108 | 0.7816 |
| No log | 66.0 | 462 | 0.6094 | 0.6368 | 0.6094 | 0.7807 |
| No log | 66.2857 | 464 | 0.6159 | 0.6184 | 0.6159 | 0.7848 |
| No log | 66.5714 | 466 | 0.6185 | 0.5770 | 0.6185 | 0.7865 |
| No log | 66.8571 | 468 | 0.6113 | 0.6368 | 0.6113 | 0.7818 |
| No log | 67.1429 | 470 | 0.6045 | 0.6334 | 0.6045 | 0.7775 |
| No log | 67.4286 | 472 | 0.6005 | 0.6334 | 0.6005 | 0.7749 |
| No log | 67.7143 | 474 | 0.5951 | 0.6144 | 0.5951 | 0.7714 |
| No log | 68.0 | 476 | 0.5959 | 0.6144 | 0.5959 | 0.7720 |
| No log | 68.2857 | 478 | 0.5924 | 0.6144 | 0.5924 | 0.7697 |
| No log | 68.5714 | 480 | 0.5903 | 0.6246 | 0.5903 | 0.7683 |
| No log | 68.8571 | 482 | 0.5896 | 0.6246 | 0.5896 | 0.7678 |
| No log | 69.1429 | 484 | 0.5890 | 0.6246 | 0.5890 | 0.7675 |
| No log | 69.4286 | 486 | 0.5888 | 0.6246 | 0.5888 | 0.7673 |
| No log | 69.7143 | 488 | 0.5888 | 0.6215 | 0.5888 | 0.7673 |
| No log | 70.0 | 490 | 0.5895 | 0.6215 | 0.5895 | 0.7678 |
| No log | 70.2857 | 492 | 0.5910 | 0.6237 | 0.5910 | 0.7688 |
| No log | 70.5714 | 494 | 0.5911 | 0.5995 | 0.5911 | 0.7688 |
| No log | 70.8571 | 496 | 0.5863 | 0.6237 | 0.5863 | 0.7657 |
| No log | 71.1429 | 498 | 0.5870 | 0.6144 | 0.5870 | 0.7662 |
| 0.1606 | 71.4286 | 500 | 0.5935 | 0.6144 | 0.5935 | 0.7704 |
| 0.1606 | 71.7143 | 502 | 0.6017 | 0.6144 | 0.6017 | 0.7757 |
| 0.1606 | 72.0 | 504 | 0.6197 | 0.6761 | 0.6197 | 0.7872 |
| 0.1606 | 72.2857 | 506 | 0.6340 | 0.6377 | 0.6340 | 0.7962 |
| 0.1606 | 72.5714 | 508 | 0.6478 | 0.6377 | 0.6478 | 0.8049 |
| 0.1606 | 72.8571 | 510 | 0.6764 | 0.6204 | 0.6764 | 0.8224 |
| 0.1606 | 73.1429 | 512 | 0.6861 | 0.6173 | 0.6861 | 0.8283 |
| 0.1606 | 73.4286 | 514 | 0.6691 | 0.6353 | 0.6691 | 0.8180 |
| 0.1606 | 73.7143 | 516 | 0.6439 | 0.6653 | 0.6439 | 0.8024 |
| 0.1606 | 74.0 | 518 | 0.6308 | 0.6653 | 0.6308 | 0.7942 |
| 0.1606 | 74.2857 | 520 | 0.6349 | 0.6653 | 0.6349 | 0.7968 |
| 0.1606 | 74.5714 | 522 | 0.6511 | 0.6729 | 0.6511 | 0.8069 |
| 0.1606 | 74.8571 | 524 | 0.6635 | 0.6353 | 0.6635 | 0.8145 |
| 0.1606 | 75.1429 | 526 | 0.6741 | 0.6353 | 0.6741 | 0.8210 |
| 0.1606 | 75.4286 | 528 | 0.6751 | 0.6461 | 0.6751 | 0.8216 |
| 0.1606 | 75.7143 | 530 | 0.6515 | 0.6377 | 0.6515 | 0.8071 |
| 0.1606 | 76.0 | 532 | 0.6210 | 0.6653 | 0.6210 | 0.7881 |
| 0.1606 | 76.2857 | 534 | 0.6026 | 0.6144 | 0.6026 | 0.7763 |
| 0.1606 | 76.5714 | 536 | 0.5961 | 0.6164 | 0.5961 | 0.7720 |
| 0.1606 | 76.8571 | 538 | 0.5950 | 0.6164 | 0.5950 | 0.7713 |
| 0.1606 | 77.1429 | 540 | 0.5968 | 0.6164 | 0.5968 | 0.7726 |
| 0.1606 | 77.4286 | 542 | 0.6002 | 0.6164 | 0.6002 | 0.7747 |
| 0.1606 | 77.7143 | 544 | 0.6053 | 0.5748 | 0.6053 | 0.7780 |
| 0.1606 | 78.0 | 546 | 0.6043 | 0.5737 | 0.6043 | 0.7774 |
| 0.1606 | 78.2857 | 548 | 0.6059 | 0.5940 | 0.6059 | 0.7784 |
| 0.1606 | 78.5714 | 550 | 0.6045 | 0.6334 | 0.6045 | 0.7775 |
| 0.1606 | 78.8571 | 552 | 0.6047 | 0.6334 | 0.6047 | 0.7776 |
| 0.1606 | 79.1429 | 554 | 0.6077 | 0.6334 | 0.6077 | 0.7795 |
| 0.1606 | 79.4286 | 556 | 0.6104 | 0.6334 | 0.6104 | 0.7813 |
| 0.1606 | 79.7143 | 558 | 0.6158 | 0.6334 | 0.6158 | 0.7848 |
| 0.1606 | 80.0 | 560 | 0.6184 | 0.6334 | 0.6184 | 0.7864 |
| 0.1606 | 80.2857 | 562 | 0.6272 | 0.6334 | 0.6272 | 0.7920 |
| 0.1606 | 80.5714 | 564 | 0.6261 | 0.6334 | 0.6261 | 0.7912 |
| 0.1606 | 80.8571 | 566 | 0.6155 | 0.6334 | 0.6155 | 0.7845 |
| 0.1606 | 81.1429 | 568 | 0.6090 | 0.6334 | 0.6090 | 0.7804 |
| 0.1606 | 81.4286 | 570 | 0.6056 | 0.5940 | 0.6056 | 0.7782 |
| 0.1606 | 81.7143 | 572 | 0.6059 | 0.5737 | 0.6059 | 0.7784 |
| 0.1606 | 82.0 | 574 | 0.6031 | 0.5748 | 0.6031 | 0.7766 |
| 0.1606 | 82.2857 | 576 | 0.6006 | 0.5854 | 0.6006 | 0.7750 |
| 0.1606 | 82.5714 | 578 | 0.5996 | 0.6269 | 0.5996 | 0.7744 |
| 0.1606 | 82.8571 | 580 | 0.5991 | 0.6269 | 0.5991 | 0.7740 |
| 0.1606 | 83.1429 | 582 | 0.5989 | 0.6269 | 0.5989 | 0.7739 |
| 0.1606 | 83.4286 | 584 | 0.5983 | 0.6269 | 0.5983 | 0.7735 |
| 0.1606 | 83.7143 | 586 | 0.5975 | 0.6269 | 0.5975 | 0.7730 |
| 0.1606 | 84.0 | 588 | 0.5967 | 0.6269 | 0.5967 | 0.7724 |
| 0.1606 | 84.2857 | 590 | 0.5968 | 0.6269 | 0.5968 | 0.7725 |
| 0.1606 | 84.5714 | 592 | 0.5984 | 0.6246 | 0.5984 | 0.7736 |
| 0.1606 | 84.8571 | 594 | 0.6002 | 0.6435 | 0.6002 | 0.7747 |
| 0.1606 | 85.1429 | 596 | 0.6041 | 0.6334 | 0.6041 | 0.7772 |
| 0.1606 | 85.4286 | 598 | 0.6116 | 0.6334 | 0.6116 | 0.7820 |
| 0.1606 | 85.7143 | 600 | 0.6178 | 0.6334 | 0.6178 | 0.7860 |
| 0.1606 | 86.0 | 602 | 0.6189 | 0.6334 | 0.6189 | 0.7867 |
| 0.1606 | 86.2857 | 604 | 0.6163 | 0.6334 | 0.6163 | 0.7850 |
| 0.1606 | 86.5714 | 606 | 0.6123 | 0.6334 | 0.6123 | 0.7825 |
| 0.1606 | 86.8571 | 608 | 0.6119 | 0.6334 | 0.6119 | 0.7822 |
| 0.1606 | 87.1429 | 610 | 0.6092 | 0.6334 | 0.6092 | 0.7805 |
| 0.1606 | 87.4286 | 612 | 0.6074 | 0.6334 | 0.6074 | 0.7793 |
| 0.1606 | 87.7143 | 614 | 0.6059 | 0.6334 | 0.6059 | 0.7784 |
| 0.1606 | 88.0 | 616 | 0.6037 | 0.6334 | 0.6037 | 0.7770 |
| 0.1606 | 88.2857 | 618 | 0.6008 | 0.6334 | 0.6008 | 0.7751 |
| 0.1606 | 88.5714 | 620 | 0.5978 | 0.6334 | 0.5978 | 0.7732 |
| 0.1606 | 88.8571 | 622 | 0.5963 | 0.6246 | 0.5963 | 0.7722 |
| 0.1606 | 89.1429 | 624 | 0.5965 | 0.6246 | 0.5965 | 0.7723 |
| 0.1606 | 89.4286 | 626 | 0.5975 | 0.6144 | 0.5975 | 0.7730 |
| 0.1606 | 89.7143 | 628 | 0.6004 | 0.6144 | 0.6004 | 0.7748 |
| 0.1606 | 90.0 | 630 | 0.6030 | 0.6334 | 0.6030 | 0.7765 |
| 0.1606 | 90.2857 | 632 | 0.6047 | 0.6334 | 0.6047 | 0.7777 |
| 0.1606 | 90.5714 | 634 | 0.6045 | 0.6334 | 0.6045 | 0.7775 |
| 0.1606 | 90.8571 | 636 | 0.6037 | 0.6334 | 0.6037 | 0.7770 |
| 0.1606 | 91.1429 | 638 | 0.6039 | 0.6334 | 0.6039 | 0.7771 |
| 0.1606 | 91.4286 | 640 | 0.6062 | 0.6334 | 0.6062 | 0.7786 |
| 0.1606 | 91.7143 | 642 | 0.6113 | 0.5940 | 0.6113 | 0.7818 |
| 0.1606 | 92.0 | 644 | 0.6155 | 0.5940 | 0.6155 | 0.7845 |
| 0.1606 | 92.2857 | 646 | 0.6165 | 0.5940 | 0.6165 | 0.7851 |
| 0.1606 | 92.5714 | 648 | 0.6176 | 0.5940 | 0.6176 | 0.7859 |
| 0.1606 | 92.8571 | 650 | 0.6173 | 0.5940 | 0.6173 | 0.7857 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu118
- Datasets 2.21.0
- Tokenizers 0.19.1
|
mlfoundations-dev/llama3-1_8b_r1_annotated_aops
|
mlfoundations-dev
| 2025-02-04T02:08:08Z | 357 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:meta-llama/Llama-3.1-8B",
"base_model:finetune:meta-llama/Llama-3.1-8B",
"license:llama3.1",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-02-01T20:45:31Z |
---
library_name: transformers
license: llama3.1
base_model: meta-llama/Meta-Llama-3.1-8B
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: llama3-1_8b_r1_annotated_aops
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama3-1_8b_r1_annotated_aops
This model is a fine-tuned version of [meta-llama/Meta-Llama-3.1-8B](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B) on the mlfoundations-dev/r1_annotated_aops dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6034
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 32
- total_train_batch_size: 512
- total_eval_batch_size: 256
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: constant
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.6827 | 1.0 | 33 | 0.6528 |
| 0.5976 | 2.0 | 66 | 0.6136 |
| 0.5482 | 3.0 | 99 | 0.6034 |
### Framework versions
- Transformers 4.46.1
- Pytorch 2.5.1
- Datasets 3.0.2
- Tokenizers 0.20.3
|
kostiantynk-out/51571237-142e-4c33-bece-51111e57a344
|
kostiantynk-out
| 2025-02-04T02:05:14Z | 10 | 0 |
peft
|
[
"peft",
"safetensors",
"starcoder2",
"axolotl",
"generated_from_trainer",
"base_model:bigcode/starcoder2-3b",
"base_model:adapter:bigcode/starcoder2-3b",
"license:bigcode-openrail-m",
"region:us"
] | null | 2025-02-04T02:02:43Z |
---
library_name: peft
license: bigcode-openrail-m
base_model: bigcode/starcoder2-3b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 51571237-142e-4c33-bece-51111e57a344
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: bigcode/starcoder2-3b
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- b177e99f9afc8918_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/b177e99f9afc8918_train_data.json
type:
field_input: ''
field_instruction: title
field_output: cleaned_text
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 2
gradient_checkpointing: false
group_by_length: false
hub_model_id: kostiantynk-out/51571237-142e-4c33-bece-51111e57a344
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 250
micro_batch_size: 2
mlflow_experiment_name: /tmp/b177e99f9afc8918_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: <|endoftext|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 6224a0bd-20f5-44b3-8193-1192471d4f6a
wandb_project: Mine-SN56-1-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 6224a0bd-20f5-44b3-8193-1192471d4f6a
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 51571237-142e-4c33-bece-51111e57a344
This model is a fine-tuned version of [bigcode/starcoder2-3b](https://huggingface.co/bigcode/starcoder2-3b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0195
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 250
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0006 | 1 | 2.1054 |
| 5.5429 | 0.0391 | 63 | 2.0672 |
| 5.2538 | 0.0781 | 126 | 2.0365 |
| 4.9491 | 0.1172 | 189 | 2.0195 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
Chaitanya14/Financial_Agent
|
Chaitanya14
| 2025-02-04T02:01:44Z | 10 | 0 |
peft
|
[
"peft",
"safetensors",
"base_model:bigscience/bloom-7b1",
"base_model:adapter:bigscience/bloom-7b1",
"region:us"
] | null | 2025-01-09T17:53:45Z |
---
base_model: bigscience/bloom-7b1
library_name: peft
---
## How to Get Started with the Model
Use the code below to get started with the model.
```python
from peft import PeftModel, PeftConfig
from transformers import AutoModelForCausalLM, AutoTokenizer
peft_model_id = "Chaitanya14/Financial_Agent"
config = PeftConfig.from_pretrained(peft_model_id)
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)
model = AutoModelForCausalLM.from_pretrained(config.base_model_name_or_path)
model = PeftModel.from_pretrained(model, peft_model_id)
```
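Once the adapter is attached, the wrapped model can be used like any other causal LM. Below is a minimal, hedged inference sketch that builds on the snippet above; the prompt and generation settings are placeholders, since the card does not document a required prompt format.
```python
# Hedged example: the prompt and decoding settings are illustrative only.
prompt = "What factors should I weigh before investing in an index fund?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
For GPU inference you would typically load the base model with `device_map="auto"` (or move it with `.to("cuda")`) before attaching the adapter.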
- PEFT 0.10.1.dev0
|
MayBashendy/ArabicNewSplits7_B_usingWellWrittenEssays_FineTuningAraBERT_run999_AugV5_k20_task2_organization
|
MayBashendy
| 2025-02-04T02:01:09Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:aubmindlab/bert-base-arabertv02",
"base_model:finetune:aubmindlab/bert-base-arabertv02",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-02-04T01:55:17Z |
---
library_name: transformers
base_model: aubmindlab/bert-base-arabertv02
tags:
- generated_from_trainer
model-index:
- name: ArabicNewSplits7_B_usingWellWrittenEssays_FineTuningAraBERT_run999_AugV5_k20_task2_organization
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ArabicNewSplits7_B_usingWellWrittenEssays_FineTuningAraBERT_run999_AugV5_k20_task2_organization
This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8610
- Qwk: 0.3970
- Mse: 0.8610
- Rmse: 0.9279
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:------:|:----:|:---------------:|:-------:|:------:|:------:|
| No log | 0.0185 | 2 | 4.8061 | 0.0010 | 4.8061 | 2.1923 |
| No log | 0.0370 | 4 | 2.6276 | 0.0051 | 2.6276 | 1.6210 |
| No log | 0.0556 | 6 | 1.6356 | 0.0682 | 1.6356 | 1.2789 |
| No log | 0.0741 | 8 | 1.3581 | 0.0958 | 1.3581 | 1.1654 |
| No log | 0.0926 | 10 | 1.4611 | -0.0494 | 1.4611 | 1.2088 |
| No log | 0.1111 | 12 | 1.4770 | -0.1091 | 1.4770 | 1.2153 |
| No log | 0.1296 | 14 | 1.3101 | 0.0847 | 1.3101 | 1.1446 |
| No log | 0.1481 | 16 | 1.3902 | 0.0253 | 1.3902 | 1.1791 |
| No log | 0.1667 | 18 | 1.4876 | 0.1288 | 1.4876 | 1.2197 |
| No log | 0.1852 | 20 | 1.3910 | 0.1507 | 1.3910 | 1.1794 |
| No log | 0.2037 | 22 | 1.2287 | 0.0788 | 1.2287 | 1.1085 |
| No log | 0.2222 | 24 | 1.1899 | 0.1043 | 1.1899 | 1.0908 |
| No log | 0.2407 | 26 | 1.1645 | 0.1443 | 1.1645 | 1.0791 |
| No log | 0.2593 | 28 | 1.1629 | 0.1344 | 1.1629 | 1.0784 |
| No log | 0.2778 | 30 | 1.1827 | 0.1344 | 1.1827 | 1.0875 |
| No log | 0.2963 | 32 | 1.2156 | 0.0977 | 1.2156 | 1.1025 |
| No log | 0.3148 | 34 | 1.2314 | 0.1232 | 1.2314 | 1.1097 |
| No log | 0.3333 | 36 | 1.4842 | 0.0537 | 1.4842 | 1.2183 |
| No log | 0.3519 | 38 | 1.6229 | 0.1032 | 1.6229 | 1.2739 |
| No log | 0.3704 | 40 | 1.3818 | 0.1530 | 1.3818 | 1.1755 |
| No log | 0.3889 | 42 | 1.2727 | 0.2446 | 1.2727 | 1.1281 |
| No log | 0.4074 | 44 | 1.1019 | 0.2168 | 1.1019 | 1.0497 |
| No log | 0.4259 | 46 | 1.0527 | 0.3066 | 1.0527 | 1.0260 |
| No log | 0.4444 | 48 | 0.9995 | 0.3695 | 0.9995 | 0.9997 |
| No log | 0.4630 | 50 | 0.9803 | 0.3596 | 0.9803 | 0.9901 |
| No log | 0.4815 | 52 | 0.9745 | 0.3346 | 0.9745 | 0.9872 |
| No log | 0.5 | 54 | 0.9892 | 0.3154 | 0.9892 | 0.9946 |
| No log | 0.5185 | 56 | 0.9996 | 0.4318 | 0.9996 | 0.9998 |
| No log | 0.5370 | 58 | 1.0600 | 0.2883 | 1.0600 | 1.0296 |
| No log | 0.5556 | 60 | 1.0838 | 0.2877 | 1.0838 | 1.0411 |
| No log | 0.5741 | 62 | 1.0717 | 0.3430 | 1.0717 | 1.0352 |
| No log | 0.5926 | 64 | 1.0834 | 0.2431 | 1.0834 | 1.0409 |
| No log | 0.6111 | 66 | 1.0516 | 0.2709 | 1.0516 | 1.0255 |
| No log | 0.6296 | 68 | 1.0386 | 0.2871 | 1.0386 | 1.0191 |
| No log | 0.6481 | 70 | 1.0151 | 0.3294 | 1.0151 | 1.0075 |
| No log | 0.6667 | 72 | 1.0660 | 0.2938 | 1.0660 | 1.0325 |
| No log | 0.6852 | 74 | 1.2035 | 0.4045 | 1.2035 | 1.0970 |
| No log | 0.7037 | 76 | 1.2189 | 0.4033 | 1.2189 | 1.1040 |
| No log | 0.7222 | 78 | 1.0613 | 0.4005 | 1.0613 | 1.0302 |
| No log | 0.7407 | 80 | 0.9840 | 0.3457 | 0.9840 | 0.9920 |
| No log | 0.7593 | 82 | 0.9527 | 0.4260 | 0.9527 | 0.9761 |
| No log | 0.7778 | 84 | 0.9423 | 0.4260 | 0.9423 | 0.9707 |
| No log | 0.7963 | 86 | 0.9502 | 0.3814 | 0.9502 | 0.9748 |
| No log | 0.8148 | 88 | 0.9627 | 0.3798 | 0.9627 | 0.9812 |
| No log | 0.8333 | 90 | 0.9732 | 0.3798 | 0.9732 | 0.9865 |
| No log | 0.8519 | 92 | 0.9781 | 0.3699 | 0.9781 | 0.9890 |
| No log | 0.8704 | 94 | 0.9746 | 0.3559 | 0.9746 | 0.9872 |
| No log | 0.8889 | 96 | 0.9998 | 0.3338 | 0.9998 | 0.9999 |
| No log | 0.9074 | 98 | 1.0160 | 0.2891 | 1.0160 | 1.0080 |
| No log | 0.9259 | 100 | 1.0355 | 0.2672 | 1.0355 | 1.0176 |
| No log | 0.9444 | 102 | 1.0981 | 0.2482 | 1.0981 | 1.0479 |
| No log | 0.9630 | 104 | 1.0951 | 0.2750 | 1.0951 | 1.0465 |
| No log | 0.9815 | 106 | 1.0438 | 0.3173 | 1.0438 | 1.0217 |
| No log | 1.0 | 108 | 1.0207 | 0.2796 | 1.0207 | 1.0103 |
| No log | 1.0185 | 110 | 0.9698 | 0.3554 | 0.9698 | 0.9848 |
| No log | 1.0370 | 112 | 0.9688 | 0.3351 | 0.9688 | 0.9843 |
| No log | 1.0556 | 114 | 0.9859 | 0.3725 | 0.9859 | 0.9929 |
| No log | 1.0741 | 116 | 0.9732 | 0.3303 | 0.9732 | 0.9865 |
| No log | 1.0926 | 118 | 1.0109 | 0.3427 | 1.0109 | 1.0054 |
| No log | 1.1111 | 120 | 1.0989 | 0.2203 | 1.0989 | 1.0483 |
| No log | 1.1296 | 122 | 1.0715 | 0.2721 | 1.0715 | 1.0351 |
| No log | 1.1481 | 124 | 0.9905 | 0.3276 | 0.9905 | 0.9952 |
| No log | 1.1667 | 126 | 0.9455 | 0.3650 | 0.9455 | 0.9724 |
| No log | 1.1852 | 128 | 0.9577 | 0.4736 | 0.9577 | 0.9786 |
| No log | 1.2037 | 130 | 1.0176 | 0.3518 | 1.0176 | 1.0088 |
| No log | 1.2222 | 132 | 0.9782 | 0.3725 | 0.9782 | 0.9890 |
| No log | 1.2407 | 134 | 0.9128 | 0.4527 | 0.9128 | 0.9554 |
| No log | 1.2593 | 136 | 0.8783 | 0.4197 | 0.8783 | 0.9372 |
| No log | 1.2778 | 138 | 0.8656 | 0.4197 | 0.8656 | 0.9304 |
| No log | 1.2963 | 140 | 0.9447 | 0.4631 | 0.9447 | 0.9720 |
| No log | 1.3148 | 142 | 1.0511 | 0.3807 | 1.0511 | 1.0252 |
| No log | 1.3333 | 144 | 0.9450 | 0.4565 | 0.9450 | 0.9721 |
| No log | 1.3519 | 146 | 0.8753 | 0.4916 | 0.8753 | 0.9356 |
| No log | 1.3704 | 148 | 0.8913 | 0.3965 | 0.8913 | 0.9441 |
| No log | 1.3889 | 150 | 0.9184 | 0.4789 | 0.9184 | 0.9583 |
| No log | 1.4074 | 152 | 0.9299 | 0.4454 | 0.9299 | 0.9643 |
| No log | 1.4259 | 154 | 0.9219 | 0.4628 | 0.9219 | 0.9601 |
| No log | 1.4444 | 156 | 0.9130 | 0.3814 | 0.9130 | 0.9555 |
| No log | 1.4630 | 158 | 0.9167 | 0.4578 | 0.9167 | 0.9574 |
| No log | 1.4815 | 160 | 0.9134 | 0.3382 | 0.9134 | 0.9557 |
| No log | 1.5 | 162 | 0.9653 | 0.4074 | 0.9653 | 0.9825 |
| No log | 1.5185 | 164 | 0.9814 | 0.3908 | 0.9814 | 0.9907 |
| No log | 1.5370 | 166 | 0.9420 | 0.4074 | 0.9420 | 0.9706 |
| No log | 1.5556 | 168 | 0.8930 | 0.4294 | 0.8930 | 0.9450 |
| No log | 1.5741 | 170 | 0.8894 | 0.4661 | 0.8894 | 0.9431 |
| No log | 1.5926 | 172 | 0.8838 | 0.4661 | 0.8838 | 0.9401 |
| No log | 1.6111 | 174 | 0.8736 | 0.4004 | 0.8736 | 0.9347 |
| No log | 1.6296 | 176 | 0.8568 | 0.4429 | 0.8568 | 0.9256 |
| No log | 1.6481 | 178 | 0.8741 | 0.3991 | 0.8741 | 0.9349 |
| No log | 1.6667 | 180 | 0.8583 | 0.3920 | 0.8583 | 0.9264 |
| No log | 1.6852 | 182 | 0.8547 | 0.3920 | 0.8547 | 0.9245 |
| No log | 1.7037 | 184 | 0.8589 | 0.3780 | 0.8589 | 0.9268 |
| No log | 1.7222 | 186 | 0.8637 | 0.4197 | 0.8637 | 0.9293 |
| No log | 1.7407 | 188 | 0.8782 | 0.4334 | 0.8782 | 0.9371 |
| No log | 1.7593 | 190 | 0.8765 | 0.3627 | 0.8765 | 0.9362 |
| No log | 1.7778 | 192 | 0.8782 | 0.3648 | 0.8782 | 0.9371 |
| No log | 1.7963 | 194 | 0.8901 | 0.3648 | 0.8901 | 0.9434 |
| No log | 1.8148 | 196 | 0.9284 | 0.3988 | 0.9284 | 0.9635 |
| No log | 1.8333 | 198 | 0.8939 | 0.4093 | 0.8939 | 0.9455 |
| No log | 1.8519 | 200 | 0.9117 | 0.3951 | 0.9117 | 0.9548 |
| No log | 1.8704 | 202 | 0.9536 | 0.3988 | 0.9536 | 0.9765 |
| No log | 1.8889 | 204 | 0.9097 | 0.4337 | 0.9097 | 0.9538 |
| No log | 1.9074 | 206 | 0.9028 | 0.4337 | 0.9028 | 0.9502 |
| No log | 1.9259 | 208 | 0.9348 | 0.4550 | 0.9348 | 0.9668 |
| No log | 1.9444 | 210 | 0.9483 | 0.5163 | 0.9483 | 0.9738 |
| No log | 1.9630 | 212 | 0.8748 | 0.4730 | 0.8748 | 0.9353 |
| No log | 1.9815 | 214 | 0.8462 | 0.5024 | 0.8462 | 0.9199 |
| No log | 2.0 | 216 | 0.8723 | 0.4563 | 0.8723 | 0.9340 |
| No log | 2.0185 | 218 | 1.0110 | 0.4153 | 1.0110 | 1.0055 |
| No log | 2.0370 | 220 | 1.0326 | 0.4214 | 1.0326 | 1.0161 |
| No log | 2.0556 | 222 | 0.8998 | 0.4476 | 0.8998 | 0.9486 |
| No log | 2.0741 | 224 | 0.8997 | 0.4144 | 0.8997 | 0.9485 |
| No log | 2.0926 | 226 | 0.8919 | 0.3819 | 0.8919 | 0.9444 |
| No log | 2.1111 | 228 | 0.8765 | 0.4563 | 0.8765 | 0.9362 |
| No log | 2.1296 | 230 | 0.8990 | 0.4507 | 0.8990 | 0.9481 |
| No log | 2.1481 | 232 | 0.8755 | 0.4841 | 0.8755 | 0.9357 |
| No log | 2.1667 | 234 | 0.8642 | 0.4334 | 0.8642 | 0.9296 |
| No log | 2.1852 | 236 | 0.8493 | 0.5216 | 0.8493 | 0.9216 |
| No log | 2.2037 | 238 | 0.8464 | 0.4962 | 0.8464 | 0.9200 |
| No log | 2.2222 | 240 | 0.8951 | 0.4848 | 0.8951 | 0.9461 |
| No log | 2.2407 | 242 | 0.9781 | 0.4059 | 0.9781 | 0.9890 |
| No log | 2.2593 | 244 | 1.0199 | 0.4056 | 1.0199 | 1.0099 |
| No log | 2.2778 | 246 | 0.9495 | 0.3348 | 0.9495 | 0.9744 |
| No log | 2.2963 | 248 | 0.9076 | 0.3992 | 0.9076 | 0.9527 |
| No log | 2.3148 | 250 | 0.9068 | 0.4094 | 0.9068 | 0.9523 |
| No log | 2.3333 | 252 | 0.9247 | 0.3992 | 0.9247 | 0.9616 |
| No log | 2.3519 | 254 | 0.9014 | 0.3956 | 0.9014 | 0.9494 |
| No log | 2.3704 | 256 | 0.9229 | 0.4136 | 0.9229 | 0.9607 |
| No log | 2.3889 | 258 | 1.0185 | 0.4516 | 1.0185 | 1.0092 |
| No log | 2.4074 | 260 | 0.9443 | 0.4991 | 0.9443 | 0.9717 |
| No log | 2.4259 | 262 | 0.8616 | 0.3983 | 0.8616 | 0.9282 |
| No log | 2.4444 | 264 | 0.8613 | 0.4757 | 0.8613 | 0.9280 |
| No log | 2.4630 | 266 | 0.8595 | 0.4158 | 0.8595 | 0.9271 |
| No log | 2.4815 | 268 | 0.9163 | 0.4763 | 0.9163 | 0.9572 |
| No log | 2.5 | 270 | 0.9032 | 0.4861 | 0.9032 | 0.9504 |
| No log | 2.5185 | 272 | 0.8801 | 0.4337 | 0.8801 | 0.9381 |
| No log | 2.5370 | 274 | 0.8654 | 0.3596 | 0.8654 | 0.9302 |
| No log | 2.5556 | 276 | 0.8752 | 0.4548 | 0.8752 | 0.9355 |
| No log | 2.5741 | 278 | 0.8582 | 0.3483 | 0.8582 | 0.9264 |
| No log | 2.5926 | 280 | 0.8549 | 0.4548 | 0.8549 | 0.9246 |
| No log | 2.6111 | 282 | 0.8602 | 0.4646 | 0.8602 | 0.9275 |
| No log | 2.6296 | 284 | 0.8500 | 0.3914 | 0.8500 | 0.9219 |
| No log | 2.6481 | 286 | 0.8549 | 0.4056 | 0.8549 | 0.9246 |
| No log | 2.6667 | 288 | 0.8686 | 0.4337 | 0.8686 | 0.9320 |
| No log | 2.6852 | 290 | 0.8592 | 0.4297 | 0.8592 | 0.9269 |
| No log | 2.7037 | 292 | 0.8533 | 0.4450 | 0.8533 | 0.9238 |
| No log | 2.7222 | 294 | 0.8750 | 0.3943 | 0.8750 | 0.9354 |
| No log | 2.7407 | 296 | 0.8459 | 0.4219 | 0.8459 | 0.9197 |
| No log | 2.7593 | 298 | 0.8281 | 0.5042 | 0.8281 | 0.9100 |
| No log | 2.7778 | 300 | 0.8731 | 0.3590 | 0.8731 | 0.9344 |
| No log | 2.7963 | 302 | 0.8381 | 0.3946 | 0.8381 | 0.9155 |
| No log | 2.8148 | 304 | 0.8299 | 0.4157 | 0.8299 | 0.9110 |
| No log | 2.8333 | 306 | 0.8495 | 0.4470 | 0.8495 | 0.9217 |
| No log | 2.8519 | 308 | 0.8499 | 0.4898 | 0.8499 | 0.9219 |
| No log | 2.8704 | 310 | 0.8255 | 0.4012 | 0.8255 | 0.9086 |
| No log | 2.8889 | 312 | 0.8458 | 0.3946 | 0.8458 | 0.9197 |
| No log | 2.9074 | 314 | 0.8425 | 0.3951 | 0.8425 | 0.9179 |
| No log | 2.9259 | 316 | 0.8074 | 0.3728 | 0.8074 | 0.8985 |
| No log | 2.9444 | 318 | 0.8000 | 0.3583 | 0.8000 | 0.8944 |
| No log | 2.9630 | 320 | 0.8083 | 0.4916 | 0.8083 | 0.8990 |
| No log | 2.9815 | 322 | 0.8199 | 0.4998 | 0.8199 | 0.9055 |
| No log | 3.0 | 324 | 0.7871 | 0.3787 | 0.7871 | 0.8872 |
| No log | 3.0185 | 326 | 0.7799 | 0.4075 | 0.7799 | 0.8831 |
| No log | 3.0370 | 328 | 0.7763 | 0.4075 | 0.7763 | 0.8811 |
| No log | 3.0556 | 330 | 0.7751 | 0.4280 | 0.7751 | 0.8804 |
| No log | 3.0741 | 332 | 0.7860 | 0.4611 | 0.7860 | 0.8866 |
| No log | 3.0926 | 334 | 0.7832 | 0.4656 | 0.7832 | 0.8850 |
| No log | 3.1111 | 336 | 0.8004 | 0.4075 | 0.8004 | 0.8946 |
| No log | 3.1296 | 338 | 0.8624 | 0.3660 | 0.8624 | 0.9287 |
| No log | 3.1481 | 340 | 0.8872 | 0.3866 | 0.8872 | 0.9419 |
| No log | 3.1667 | 342 | 0.8758 | 0.3168 | 0.8758 | 0.9358 |
| No log | 3.1852 | 344 | 0.8449 | 0.3437 | 0.8449 | 0.9192 |
| No log | 3.2037 | 346 | 0.8313 | 0.3719 | 0.8313 | 0.9118 |
| No log | 3.2222 | 348 | 0.8613 | 0.3946 | 0.8613 | 0.9281 |
| No log | 3.2407 | 350 | 0.8908 | 0.3946 | 0.8908 | 0.9438 |
| No log | 3.2593 | 352 | 0.8884 | 0.3356 | 0.8884 | 0.9426 |
| No log | 3.2778 | 354 | 0.8856 | 0.3020 | 0.8856 | 0.9411 |
| No log | 3.2963 | 356 | 0.8813 | 0.3229 | 0.8813 | 0.9388 |
| No log | 3.3148 | 358 | 0.8314 | 0.3596 | 0.8314 | 0.9118 |
| No log | 3.3333 | 360 | 0.7783 | 0.4466 | 0.7783 | 0.8822 |
| No log | 3.3519 | 362 | 0.7897 | 0.4198 | 0.7897 | 0.8886 |
| No log | 3.3704 | 364 | 0.7770 | 0.4587 | 0.7770 | 0.8815 |
| No log | 3.3889 | 366 | 0.7246 | 0.4942 | 0.7246 | 0.8512 |
| No log | 3.4074 | 368 | 0.7843 | 0.5567 | 0.7843 | 0.8856 |
| No log | 3.4259 | 370 | 0.7833 | 0.5368 | 0.7833 | 0.8850 |
| No log | 3.4444 | 372 | 0.7477 | 0.3933 | 0.7477 | 0.8647 |
| No log | 3.4630 | 374 | 0.7421 | 0.4853 | 0.7421 | 0.8614 |
| No log | 3.4815 | 376 | 0.7470 | 0.4853 | 0.7470 | 0.8643 |
| No log | 3.5 | 378 | 0.7697 | 0.3933 | 0.7697 | 0.8773 |
| No log | 3.5185 | 380 | 0.8245 | 0.3045 | 0.8245 | 0.9080 |
| No log | 3.5370 | 382 | 0.8643 | 0.3519 | 0.8643 | 0.9297 |
| No log | 3.5556 | 384 | 0.8671 | 0.4503 | 0.8671 | 0.9312 |
| No log | 3.5741 | 386 | 0.8494 | 0.3147 | 0.8494 | 0.9216 |
| No log | 3.5926 | 388 | 0.8145 | 0.4075 | 0.8145 | 0.9025 |
| No log | 3.6111 | 390 | 0.8096 | 0.4054 | 0.8096 | 0.8998 |
| No log | 3.6296 | 392 | 0.7907 | 0.3627 | 0.7907 | 0.8892 |
| No log | 3.6481 | 394 | 0.8544 | 0.4949 | 0.8544 | 0.9243 |
| No log | 3.6667 | 396 | 0.9670 | 0.4186 | 0.9670 | 0.9834 |
| No log | 3.6852 | 398 | 0.9581 | 0.4186 | 0.9581 | 0.9788 |
| No log | 3.7037 | 400 | 0.8559 | 0.3298 | 0.8559 | 0.9252 |
| No log | 3.7222 | 402 | 0.8586 | 0.4483 | 0.8586 | 0.9266 |
| No log | 3.7407 | 404 | 0.8696 | 0.4489 | 0.8696 | 0.9325 |
| No log | 3.7593 | 406 | 0.8190 | 0.3951 | 0.8190 | 0.9050 |
| No log | 3.7778 | 408 | 0.7880 | 0.3938 | 0.7880 | 0.8877 |
| No log | 3.7963 | 410 | 0.8012 | 0.5467 | 0.8012 | 0.8951 |
| No log | 3.8148 | 412 | 0.7806 | 0.5476 | 0.7806 | 0.8835 |
| No log | 3.8333 | 414 | 0.7562 | 0.4019 | 0.7562 | 0.8696 |
| No log | 3.8519 | 416 | 0.7573 | 0.4471 | 0.7573 | 0.8703 |
| No log | 3.8704 | 418 | 0.7520 | 0.5057 | 0.7520 | 0.8672 |
| No log | 3.8889 | 420 | 0.7460 | 0.5770 | 0.7460 | 0.8637 |
| No log | 3.9074 | 422 | 0.7538 | 0.5450 | 0.7538 | 0.8682 |
| No log | 3.9259 | 424 | 0.7739 | 0.3909 | 0.7739 | 0.8797 |
| No log | 3.9444 | 426 | 0.8882 | 0.4594 | 0.8882 | 0.9424 |
| No log | 3.9630 | 428 | 0.9200 | 0.4594 | 0.9200 | 0.9592 |
| No log | 3.9815 | 430 | 0.8186 | 0.4315 | 0.8186 | 0.9048 |
| No log | 4.0 | 432 | 0.6914 | 0.6059 | 0.6914 | 0.8315 |
| No log | 4.0185 | 434 | 0.7329 | 0.6079 | 0.7329 | 0.8561 |
| No log | 4.0370 | 436 | 0.7654 | 0.6079 | 0.7654 | 0.8749 |
| No log | 4.0556 | 438 | 0.7051 | 0.5951 | 0.7051 | 0.8397 |
| No log | 4.0741 | 440 | 0.7309 | 0.5503 | 0.7309 | 0.8549 |
| No log | 4.0926 | 442 | 0.8199 | 0.5578 | 0.8199 | 0.9055 |
| No log | 4.1111 | 444 | 0.8140 | 0.5578 | 0.8140 | 0.9022 |
| No log | 4.1296 | 446 | 0.7557 | 0.5089 | 0.7557 | 0.8693 |
| No log | 4.1481 | 448 | 0.7437 | 0.5125 | 0.7437 | 0.8624 |
| No log | 4.1667 | 450 | 0.7631 | 0.5044 | 0.7631 | 0.8735 |
| No log | 4.1852 | 452 | 0.7899 | 0.4792 | 0.7899 | 0.8888 |
| No log | 4.2037 | 454 | 0.8066 | 0.4874 | 0.8066 | 0.8981 |
| No log | 4.2222 | 456 | 0.8319 | 0.4197 | 0.8319 | 0.9121 |
| No log | 4.2407 | 458 | 0.9779 | 0.3815 | 0.9779 | 0.9889 |
| No log | 4.2593 | 460 | 1.0743 | 0.4040 | 1.0743 | 1.0365 |
| No log | 4.2778 | 462 | 0.9684 | 0.4356 | 0.9684 | 0.9841 |
| No log | 4.2963 | 464 | 0.8000 | 0.4197 | 0.8000 | 0.8944 |
| No log | 4.3148 | 466 | 0.7748 | 0.4977 | 0.7748 | 0.8802 |
| No log | 4.3333 | 468 | 0.7874 | 0.4715 | 0.7874 | 0.8874 |
| No log | 4.3519 | 470 | 0.8109 | 0.3627 | 0.8109 | 0.9005 |
| No log | 4.3704 | 472 | 0.8439 | 0.3771 | 0.8439 | 0.9187 |
| No log | 4.3889 | 474 | 0.8567 | 0.3660 | 0.8567 | 0.9256 |
| No log | 4.4074 | 476 | 0.8428 | 0.3483 | 0.8428 | 0.9180 |
| No log | 4.4259 | 478 | 0.8335 | 0.3483 | 0.8335 | 0.9130 |
| No log | 4.4444 | 480 | 0.8268 | 0.3483 | 0.8268 | 0.9093 |
| No log | 4.4630 | 482 | 0.8385 | 0.3771 | 0.8385 | 0.9157 |
| No log | 4.4815 | 484 | 0.8638 | 0.3806 | 0.8638 | 0.9294 |
| No log | 4.5 | 486 | 0.8727 | 0.3513 | 0.8727 | 0.9342 |
| No log | 4.5185 | 488 | 0.8904 | 0.3196 | 0.8904 | 0.9436 |
| No log | 4.5370 | 490 | 0.9123 | 0.2470 | 0.9123 | 0.9551 |
| No log | 4.5556 | 492 | 0.9144 | 0.2821 | 0.9144 | 0.9562 |
| No log | 4.5741 | 494 | 0.8611 | 0.3744 | 0.8611 | 0.9279 |
| No log | 4.5926 | 496 | 0.8303 | 0.4197 | 0.8303 | 0.9112 |
| No log | 4.6111 | 498 | 0.8320 | 0.4337 | 0.8320 | 0.9122 |
| 0.2735 | 4.6296 | 500 | 0.8089 | 0.4197 | 0.8089 | 0.8994 |
| 0.2735 | 4.6481 | 502 | 0.8035 | 0.3879 | 0.8035 | 0.8964 |
| 0.2735 | 4.6667 | 504 | 0.8206 | 0.4912 | 0.8206 | 0.9059 |
| 0.2735 | 4.6852 | 506 | 0.8262 | 0.3583 | 0.8262 | 0.9090 |
| 0.2735 | 4.7037 | 508 | 0.8333 | 0.3974 | 0.8333 | 0.9129 |
| 0.2735 | 4.7222 | 510 | 0.8620 | 0.4012 | 0.8620 | 0.9284 |
| 0.2735 | 4.7407 | 512 | 0.8742 | 0.4012 | 0.8742 | 0.9350 |
| 0.2735 | 4.7593 | 514 | 0.8610 | 0.3970 | 0.8610 | 0.9279 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu118
- Datasets 2.21.0
- Tokenizers 0.19.1
|
Triangle104/Qwen2.5-32b-Erudite-Writer-Q4_K_M-GGUF
|
Triangle104
| 2025-02-04T02:01:05Z | 25 | 0 |
transformers
|
[
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"base_model:SubtleOne/Qwen2.5-32b-Erudite-Writer",
"base_model:quantized:SubtleOne/Qwen2.5-32b-Erudite-Writer",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-02-04T01:41:55Z |
---
base_model: SubtleOne/Qwen2.5-32b-Erudite-Writer
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
license: apache-2.0
---
# Triangle104/Qwen2.5-32b-Erudite-Writer-Q4_K_M-GGUF
This model was converted to GGUF format from [`SubtleOne/Qwen2.5-32b-Erudite-Writer`](https://huggingface.co/SubtleOne/Qwen2.5-32b-Erudite-Writer) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/SubtleOne/Qwen2.5-32b-Erudite-Writer) for more details on the model.
---
This model is a merge of Rombos's top-ranked 32b model (based on Qwen 2.5) with three creative-writing finetunes. The creative content is a serious upgrade over the base it started from, and it has a much more literary style than the previous Writer model. I won't call it better or worse, merely a very distinct flavor and style. I quite like it, and enjoin you to try it as well. Enjoy!
## Merge Method
This model was merged using the DELLA merge method, with rombodawg/Rombos-LLM-V2.5-Qwen-32b as the base.
## Models Merged
The following models were included in the merge:
- nbeerbower/Qwen2.5-Gutenberg-Doppel-32B
- ArliAI/Qwen2.5-32B-ArliAI-RPMax-v1.3
- EVA-UNIT-01/EVA-Qwen2.5-32B-v0.2
## Configuration
The following YAML configuration was used to produce this model:
```yaml
base_model: rombodawg/Rombos-LLM-V2.5-Qwen-32b
parameters:
  int8_mask: true
  rescale: false
  normalize: true
  lambda: 1.04
  epsilon: 0.05
dtype: bfloat16
tokenizer_source: union
merge_method: della
models:
  - model: EVA-UNIT-01/EVA-Qwen2.5-32B-v0.2
    parameters:
      weight: [0.40]
      density: [0.53]
  - model: nbeerbower/Qwen2.5-Gutenberg-Doppel-32B
    parameters:
      weight: [0.30]
      density: [0.53]
  - model: ArliAI/Qwen2.5-32B-ArliAI-RPMax-v1.3
    parameters:
      weight: [0.40]
      density: [0.53]
```
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Qwen2.5-32b-Erudite-Writer-Q4_K_M-GGUF --hf-file qwen2.5-32b-erudite-writer-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Qwen2.5-32b-Erudite-Writer-Q4_K_M-GGUF --hf-file qwen2.5-32b-erudite-writer-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Qwen2.5-32b-Erudite-Writer-Q4_K_M-GGUF --hf-file qwen2.5-32b-erudite-writer-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Qwen2.5-32b-Erudite-Writer-Q4_K_M-GGUF --hf-file qwen2.5-32b-erudite-writer-q4_k_m.gguf -c 2048
```
|
shibajustfor/c0df181a-0877-42be-9869-35d2b3797150
|
shibajustfor
| 2025-02-04T02:00:28Z | 9 | 0 |
peft
|
[
"peft",
"safetensors",
"gemma2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/gemma-2-2b",
"base_model:adapter:unsloth/gemma-2-2b",
"license:gemma",
"region:us"
] | null | 2025-02-04T01:55:58Z |
---
library_name: peft
license: gemma
base_model: unsloth/gemma-2-2b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: c0df181a-0877-42be-9869-35d2b3797150
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/gemma-2-2b
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 7f4ffc4da3710d39_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/7f4ffc4da3710d39_train_data.json
type:
field_input: text
field_instruction: task_name
field_output: hypothesis
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: shibajustfor/c0df181a-0877-42be-9869-35d2b3797150
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: constant
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/7f4ffc4da3710d39_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: a4f7ae30-2ca5-42fa-a4c8-6320e54b4228
wandb_project: Birthday-SN56-38-Gradients-On-Demand
wandb_run: your_name
wandb_runid: a4f7ae30-2ca5-42fa-a4c8-6320e54b4228
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# c0df181a-0877-42be-9869-35d2b3797150
This model is a fine-tuned version of [unsloth/gemma-2-2b](https://huggingface.co/unsloth/gemma-2-2b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1778
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: constant
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0004 | 1 | 2.5925 |
| 0.1522 | 0.0199 | 50 | 0.2752 |
| 0.2398 | 0.0398 | 100 | 0.1994 |
| 0.3881 | 0.0598 | 150 | 0.2192 |
| 0.1998 | 0.0797 | 200 | 0.1778 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
Luongdzung/hoa-1b4-sft-mat-rslora
|
Luongdzung
| 2025-02-04T01:58:48Z | 8 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:vlsp-2023-vllm/hoa-1b4",
"base_model:adapter:vlsp-2023-vllm/hoa-1b4",
"license:bigscience-bloom-rail-1.0",
"region:us"
] | null | 2025-02-04T01:58:45Z |
---
library_name: peft
license: bigscience-bloom-rail-1.0
base_model: vlsp-2023-vllm/hoa-1b4
tags:
- generated_from_trainer
model-index:
- name: hoa-1b4-sft-mat-rslora
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hoa-1b4-sft-mat-rslora
This model is a fine-tuned version of [vlsp-2023-vllm/hoa-1b4](https://huggingface.co/vlsp-2023-vllm/hoa-1b4) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
### Framework versions
- PEFT 0.14.0
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.2.0
- Tokenizers 0.19.1
|
daniel40/b2efff34-4244-4b14-9a61-23bfaca91b9a
|
daniel40
| 2025-02-04T01:58:02Z | 6 | 0 |
peft
|
[
"peft",
"safetensors",
"phi3",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:migtissera/Tess-v2.5-Phi-3-medium-128k-14B",
"base_model:adapter:migtissera/Tess-v2.5-Phi-3-medium-128k-14B",
"license:mit",
"region:us"
] | null | 2025-02-04T01:49:06Z |
---
library_name: peft
license: mit
base_model: migtissera/Tess-v2.5-Phi-3-medium-128k-14B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: b2efff34-4244-4b14-9a61-23bfaca91b9a
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: migtissera/Tess-v2.5-Phi-3-medium-128k-14B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- fe297105e697bbbb_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/fe297105e697bbbb_train_data.json
type:
field_instruction: task
field_output: solution
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: daniel40/b2efff34-4244-4b14-9a61-23bfaca91b9a
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: constant
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/fe297105e697bbbb_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 3fa43a59-7bfe-43c9-93ae-74585476d2fa
wandb_project: Birthday-SN56-27-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 3fa43a59-7bfe-43c9-93ae-74585476d2fa
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# b2efff34-4244-4b14-9a61-23bfaca91b9a
This model is a fine-tuned version of [migtissera/Tess-v2.5-Phi-3-medium-128k-14B](https://huggingface.co/migtissera/Tess-v2.5-Phi-3-medium-128k-14B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6114
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: constant
- lr_scheduler_warmup_steps: 10
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0021 | 1 | 0.7745 |
| 2.4426 | 0.1036 | 50 | 0.6402 |
| 2.3979 | 0.2073 | 100 | 0.6269 |
| 2.3278 | 0.3109 | 150 | 0.6197 |
| 2.3291 | 0.4145 | 200 | 0.6114 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
Dongwei/Qwen-2.5-7B_Math
|
Dongwei
| 2025-02-04T01:57:50Z | 20 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"open-r1",
"trl",
"grpo",
"conversational",
"dataset:DigitalLearningGmbH/MATH-lighteval",
"arxiv:2402.03300",
"base_model:Qwen/Qwen2.5-7B",
"base_model:finetune:Qwen/Qwen2.5-7B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-02-03T17:49:42Z |
---
base_model: Qwen/Qwen2.5-7B
datasets: DigitalLearningGmbH/MATH-lighteval
library_name: transformers
model_name: Qwen-2.5-7B_Math
tags:
- generated_from_trainer
- open-r1
- trl
- grpo
licence: license
---
# Model Card for Qwen-2.5-7B_Math
This model is a fine-tuned version of [Qwen/Qwen2.5-7B](https://huggingface.co/Qwen/Qwen2.5-7B) on the [DigitalLearningGmbH/MATH-lighteval](https://huggingface.co/datasets/DigitalLearningGmbH/MATH-lighteval) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Dongwei/Qwen-2.5-7B_Math", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/dongwei_jiang/huggingface/runs/ceahffo4)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
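For readers who want to set up something similar, the sketch below shows the general shape of GRPO training with TRL's `GRPOTrainer`. It is not the recipe used for this model: the reward function is a toy placeholder, and the dataset column handling (renaming `problem` to `prompt`) is an assumption about the MATH-lighteval layout.
```python
# Hedged sketch of GRPO training with TRL; not the exact script behind this model.
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

# Assumes a "problem" column; the config/split names may need adjusting.
dataset = load_dataset("DigitalLearningGmbH/MATH-lighteval", split="train")
dataset = dataset.rename_column("problem", "prompt")

def toy_reward(completions, **kwargs):
    # Placeholder reward that mildly prefers longer completions. A real run
    # would score mathematical correctness and answer formatting instead.
    return [min(len(c) / 1000.0, 1.0) for c in completions]

training_args = GRPOConfig(output_dir="Qwen-2.5-7B_Math-GRPO", logging_steps=10)
trainer = GRPOTrainer(
    model="Qwen/Qwen2.5-7B",
    reward_funcs=toy_reward,
    args=training_args,
    train_dataset=dataset,
)
trainer.train()
```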
### Framework versions
- TRL: 0.15.0.dev0
- Transformers: 4.49.0.dev0
- Pytorch: 2.5.1+cu121
- Datasets: 3.2.0
- Tokenizers: 0.21.0
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
J-AI/Qwen_R1-PTBR-Q4_K_M-GGUF
|
J-AI
| 2025-02-04T01:57:32Z | 21 | 0 |
transformers
|
[
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"sft",
"llama-cpp",
"gguf-my-repo",
"en",
"base_model:J-AI/Qwen_R1-PTBR",
"base_model:quantized:J-AI/Qwen_R1-PTBR",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-02-04T01:56:01Z |
---
base_model: J-AI/Qwen_R1-PTBR
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- sft
- llama-cpp
- gguf-my-repo
license: apache-2.0
language:
- en
---
# J-AI/Qwen_R1-PTBR-Q4_K_M-GGUF
This model was converted to GGUF format from [`J-AI/Qwen_R1-PTBR`](https://huggingface.co/J-AI/Qwen_R1-PTBR) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/J-AI/Qwen_R1-PTBR) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo J-AI/Qwen_R1-PTBR-Q4_K_M-GGUF --hf-file qwen_r1-ptbr-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo J-AI/Qwen_R1-PTBR-Q4_K_M-GGUF --hf-file qwen_r1-ptbr-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo J-AI/Qwen_R1-PTBR-Q4_K_M-GGUF --hf-file qwen_r1-ptbr-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo J-AI/Qwen_R1-PTBR-Q4_K_M-GGUF --hf-file qwen_r1-ptbr-q4_k_m.gguf -c 2048
```
|
clarxus/27267c94-1388-421b-a1c5-003efd21926e
|
clarxus
| 2025-02-04T01:57:21Z | 9 | 0 |
peft
|
[
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Mistral-Nemo-Instruct-2407",
"base_model:adapter:unsloth/Mistral-Nemo-Instruct-2407",
"license:apache-2.0",
"region:us"
] | null | 2025-02-04T00:57:07Z |
---
library_name: peft
license: apache-2.0
base_model: unsloth/Mistral-Nemo-Instruct-2407
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 27267c94-1388-421b-a1c5-003efd21926e
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Mistral-Nemo-Instruct-2407
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 3e5eab4715297236_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/3e5eab4715297236_train_data.json
type:
field_input: ''
field_instruction: prompt
field_output: chosen
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: clarxus/27267c94-1388-421b-a1c5-003efd21926e
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: 0
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_steps: 100
micro_batch_size: 8
mlflow_experiment_name: /tmp/3e5eab4715297236_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: techspear-hub
wandb_mode: online
wandb_name: 06993ad5-9e1b-472b-9fb0-ffdcec07b62e
wandb_project: Gradients-On-Seven
wandb_run: your_name
wandb_runid: 06993ad5-9e1b-472b-9fb0-ffdcec07b62e
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 27267c94-1388-421b-a1c5-003efd21926e
This model is a fine-tuned version of [unsloth/Mistral-Nemo-Instruct-2407](https://huggingface.co/unsloth/Mistral-Nemo-Instruct-2407) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2212
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: adamw_bnb_8bit with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 100
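The total train batch size reported above is simply the per-device batch size multiplied by the gradient accumulation steps (a single device is assumed here); a quick sanity check:
```python
# Quick sanity check of the effective batch size reported above (single device assumed).
train_batch_size = 8             # micro_batch_size in the axolotl config
gradient_accumulation_steps = 4
total_train_batch_size = train_batch_size * gradient_accumulation_steps
assert total_train_batch_size == 32
```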
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0037 | 1 | 0.3281 |
| 1.0191 | 0.0333 | 9 | 0.2866 |
| 0.9663 | 0.0665 | 18 | 0.2518 |
| 0.8827 | 0.0998 | 27 | 0.2377 |
| 1.0331 | 0.1331 | 36 | 0.2311 |
| 0.8651 | 0.1664 | 45 | 0.2261 |
| 0.8807 | 0.1996 | 54 | 0.2241 |
| 0.8233 | 0.2329 | 63 | 0.2238 |
| 0.8239 | 0.2662 | 72 | 0.2222 |
| 0.8276 | 0.2994 | 81 | 0.2215 |
| 0.7441 | 0.3327 | 90 | 0.2213 |
| 0.7715 | 0.3660 | 99 | 0.2212 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
jssky/f5f49405-0c67-4637-bed3-e72e471e7acd
|
jssky
| 2025-02-04T01:56:32Z | 7 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:trl-internal-testing/tiny-random-LlamaForCausalLM",
"base_model:adapter:trl-internal-testing/tiny-random-LlamaForCausalLM",
"region:us"
] | null | 2025-02-04T01:55:51Z |
---
library_name: peft
base_model: trl-internal-testing/tiny-random-LlamaForCausalLM
tags:
- axolotl
- generated_from_trainer
model-index:
- name: f5f49405-0c67-4637-bed3-e72e471e7acd
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.6.0`
```yaml
adapter: lora
base_model: trl-internal-testing/tiny-random-LlamaForCausalLM
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- b77b35ef124b1260_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/b77b35ef124b1260_train_data.json
type:
field_instruction: inputs
field_output: targets
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: 5
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: jssky/f5f49405-0c67-4637-bed3-e72e471e7acd
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lora_target_modules:
- model.layers.5.mlp.gate_proj
- model.layers.23.mlp.up_proj
- model.layers.17.self_attn.k_proj
- model.layers.21.mlp.down_proj
- model.layers.23.self_attn.o_proj
- model.layers.4.self_attn.q_proj
- model.layers.9.self_attn.v_proj
- model.layers.23.self_attn.k_proj
- model.layers.14.mlp.down_proj
- model.layers.23.self_attn.v_proj
- model.layers.15.self_attn.v_proj
- model.layers.7.self_attn.q_proj
- model.layers.18.mlp.up_proj
- model.layers.10.self_attn.v_proj
- model.layers.11.mlp.gate_proj
- model.layers.22.self_attn.k_proj
- model.layers.6.self_attn.v_proj
- model.layers.3.mlp.down_proj
- model.layers.0.mlp.up_proj
- model.layers.13.self_attn.v_proj
- model.layers.18.mlp.down_proj
- model.layers.2.mlp.down_proj
- model.layers.11.self_attn.v_proj
- model.layers.8.self_attn.v_proj
- model.layers.20.mlp.gate_proj
- model.layers.22.mlp.down_proj
- model.layers.13.mlp.down_proj
- model.layers.1.self_attn.k_proj
- model.layers.12.mlp.up_proj
- model.layers.0.mlp.down_proj
- model.layers.8.self_attn.k_proj
- model.layers.21.self_attn.v_proj
- model.layers.7.self_attn.k_proj
- model.layers.15.mlp.up_proj
- model.layers.9.mlp.gate_proj
- model.layers.12.mlp.gate_proj
- model.layers.0.self_attn.q_proj
- model.layers.5.self_attn.k_proj
- model.layers.2.mlp.up_proj
- model.layers.6.mlp.gate_proj
- model.layers.22.self_attn.o_proj
- model.layers.6.self_attn.k_proj
- model.layers.22.self_attn.v_proj
- model.layers.23.mlp.gate_proj
- model.layers.18.self_attn.k_proj
- model.layers.2.self_attn.q_proj
- model.layers.3.self_attn.o_proj
- model.layers.8.mlp.down_proj
- model.layers.5.self_attn.o_proj
- model.layers.20.mlp.down_proj
- model.layers.10.mlp.gate_proj
- model.layers.18.self_attn.v_proj
- model.layers.22.self_attn.q_proj
- model.layers.15.self_attn.q_proj
- model.layers.16.self_attn.o_proj
- model.layers.10.self_attn.q_proj
- model.layers.17.self_attn.o_proj
- model.layers.5.mlp.down_proj
- model.layers.12.self_attn.o_proj
- model.layers.9.mlp.down_proj
- model.layers.19.mlp.up_proj
- model.layers.1.mlp.down_proj
- model.layers.4.self_attn.k_proj
- model.layers.21.self_attn.o_proj
- model.layers.16.self_attn.q_proj
- model.layers.9.self_attn.q_proj
- model.layers.17.self_attn.v_proj
- model.layers.8.mlp.gate_proj
- model.layers.17.mlp.down_proj
- model.layers.7.mlp.down_proj
- model.layers.16.self_attn.k_proj
- model.layers.14.mlp.gate_proj
- model.layers.20.mlp.up_proj
- model.layers.19.self_attn.q_proj
- model.layers.15.self_attn.k_proj
- model.layers.1.self_attn.q_proj
- model.layers.1.mlp.up_proj
- model.layers.23.mlp.down_proj
- model.layers.11.self_attn.q_proj
- model.layers.3.self_attn.q_proj
- model.layers.0.self_attn.v_proj
- model.layers.14.self_attn.k_proj
- model.layers.7.self_attn.o_proj
- model.layers.23.self_attn.q_proj
- model.layers.13.mlp.up_proj
- model.layers.21.self_attn.k_proj
- model.layers.22.mlp.gate_proj
- model.layers.2.mlp.gate_proj
- model.layers.20.self_attn.k_proj
- model.layers.11.self_attn.o_proj
- model.layers.16.mlp.down_proj
- model.layers.19.self_attn.k_proj
- model.layers.4.mlp.up_proj
- model.embed_tokens
- model.layers.4.mlp.down_proj
- model.layers.14.self_attn.q_proj
- model.layers.13.mlp.gate_proj
- model.layers.3.mlp.gate_proj
- model.layers.22.mlp.up_proj
- model.layers.6.self_attn.o_proj
- model.layers.12.self_attn.q_proj
- model.layers.19.mlp.down_proj
- model.layers.10.self_attn.o_proj
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 8
mlflow_experiment_name: /tmp/b77b35ef124b1260_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 99997652-8a0b-462d-8035-54df350aea9e
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 99997652-8a0b-462d-8035-54df350aea9e
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# f5f49405-0c67-4637-bed3-e72e471e7acd
This model is a fine-tuned version of [trl-internal-testing/tiny-random-LlamaForCausalLM](https://huggingface.co/trl-internal-testing/tiny-random-LlamaForCausalLM) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 10.2921
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: adamw_bnb_8bit with betas=(0.9,0.999) and epsilon=1e-08, and optimizer_args: adam_beta1=0.9, adam_beta2=0.95, adam_epsilon=1e-5
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 10.2952 | 0.3571 | 50 | 10.2971 |
| 10.2559 | 0.7143 | 100 | 10.2941 |
| 10.2919 | 1.0714 | 150 | 10.2917 |
| 10.2771 | 1.4286 | 200 | 10.2921 |
### Framework versions
- PEFT 0.14.0
- Transformers 4.46.3
- Pytorch 2.5.1+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3
|
philip-hightech/deb4cfcf-07d3-4ab6-af53-c2bbb701c14b
|
philip-hightech
| 2025-02-04T01:56:23Z | 9 | 0 |
peft
|
[
"peft",
"safetensors",
"starcoder2",
"axolotl",
"generated_from_trainer",
"base_model:bigcode/starcoder2-3b",
"base_model:adapter:bigcode/starcoder2-3b",
"license:bigcode-openrail-m",
"region:us"
] | null | 2025-02-04T01:53:15Z |
---
library_name: peft
license: bigcode-openrail-m
base_model: bigcode/starcoder2-3b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: deb4cfcf-07d3-4ab6-af53-c2bbb701c14b
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: bigcode/starcoder2-3b
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- b177e99f9afc8918_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/b177e99f9afc8918_train_data.json
type:
field_input: ''
field_instruction: title
field_output: cleaned_text
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 2
gradient_checkpointing: false
group_by_length: false
hub_model_id: philip-hightech/deb4cfcf-07d3-4ab6-af53-c2bbb701c14b
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_steps: 250
micro_batch_size: 2
mlflow_experiment_name: /tmp/b177e99f9afc8918_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: <|endoftext|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 6224a0bd-20f5-44b3-8193-1192471d4f6a
wandb_project: Mine-SN56-21-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 6224a0bd-20f5-44b3-8193-1192471d4f6a
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# deb4cfcf-07d3-4ab6-af53-c2bbb701c14b
This model is a fine-tuned version of [bigcode/starcoder2-3b](https://huggingface.co/bigcode/starcoder2-3b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9812
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: adamw_bnb_8bit with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 250
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0006 | 1 | 2.1054 |
| 5.1726 | 0.0391 | 63 | 2.0449 |
| 4.8189 | 0.0781 | 126 | 2.0042 |
| 4.5805 | 0.1172 | 189 | 1.9812 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
MorningDusk/yugioh_recipe
|
MorningDusk
| 2025-02-04T01:53:30Z | 23 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-02-04T01:51:29Z |
---
base_model: unsloth/llama-3.2-3b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** MorningDusk
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3.2-3b-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
duyntnet/Mistral-Small-24B-Instruct-2501-imatrix-GGUF
|
duyntnet
| 2025-02-04T01:50:22Z | 708 | 0 |
transformers
|
[
"transformers",
"gguf",
"imatrix",
"Mistral-Small-24B-Instruct-2501",
"text-generation",
"en",
"license:other",
"region:us",
"conversational"
] |
text-generation
| 2025-02-03T17:43:07Z |
---
license: other
language:
- en
pipeline_tag: text-generation
inference: false
tags:
- transformers
- gguf
- imatrix
- Mistral-Small-24B-Instruct-2501
---
Quantizations of https://huggingface.co/mistralai/Mistral-Small-24B-Instruct-2501
### Inference Clients/UIs
* [llama.cpp](https://github.com/ggerganov/llama.cpp)
* [KoboldCPP](https://github.com/LostRuins/koboldcpp)
* [ollama](https://github.com/ollama/ollama)
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
* [jan](https://github.com/janhq/jan)
* [GPT4All](https://github.com/nomic-ai/gpt4all)
---
# From original readme
Mistral Small 3 (2501) sets a new benchmark in the "small" Large Language Models category below 70B, boasting 24B parameters and achieving state-of-the-art capabilities comparable to larger models!
This model is an instruction-fine-tuned version of the base model: [Mistral-Small-24B-Base-2501](https://huggingface.co/mistralai/Mistral-Small-24B-Base-2501).
Mistral Small can be deployed locally and is exceptionally "knowledge-dense", fitting in a single RTX 4090 or a 32GB RAM MacBook once quantized.
Perfect for:
- Fast response conversational agents.
- Low latency function calling.
- Subject matter experts via fine-tuning.
- Local inference for hobbyists and organizations handling sensitive data.
For enterprises that need specialized capabilities (increased context, particular modalities, domain specific knowledge, etc.), we will be releasing commercial models beyond what Mistral AI contributes to the community.
This release demonstrates our commitment to open source, serving as a strong base model.
Learn more about Mistral Small in our [blog post](https://mistral.ai/news/mistral-small-3/).
Model developer: Mistral AI Team
## Key Features
- **Multilingual:** Supports dozens of languages, including English, French, German, Spanish, Italian, Chinese, Japanese, Korean, Portuguese, Dutch, and Polish.
- **Agent-Centric:** Offers best-in-class agentic capabilities with native function calling and JSON outputting.
- **Advanced Reasoning:** State-of-the-art conversational and reasoning capabilities.
- **Apache 2.0 License:** Open license allowing usage and modification for both commercial and non-commercial purposes.
- **Context Window:** A 32k context window.
- **System Prompt:** Maintains strong adherence and support for system prompts.
- **Tokenizer:** Utilizes a Tekken tokenizer with a 131k vocabulary size.
### Basic Instruct Template (V7-Tekken)
```
<s>[SYSTEM_PROMPT]<system prompt>[/SYSTEM_PROMPT][INST]<user message>[/INST]<assistant response></s>[INST]<user message>[/INST]
```
*`<system_prompt>`, `<user message>` and `<assistant response>` are placeholders.*
***Please make sure to use [mistral-common](https://github.com/mistralai/mistral-common) as the source of truth***
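For illustration only, a naive Python sketch that assembles a single-turn prompt in the V7-Tekken layout shown above (for real use, rely on mistral-common rather than hand-built strings):
```python
# Naive single-turn prompt assembly following the V7-Tekken layout above.
# Illustration only; use mistral-common as the source of truth in practice.
def build_prompt(system_prompt: str, user_message: str) -> str:
    return (
        "<s>"
        f"[SYSTEM_PROMPT]{system_prompt}[/SYSTEM_PROMPT]"
        f"[INST]{user_message}[/INST]"
    )

print(build_prompt("You are a helpful assistant.", "Name three French cities."))
```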
## Usage
The model can be used with the following frameworks:
- [`vllm`](https://github.com/vllm-project/vllm): See [here](#vllm)
- [`transformers`](https://github.com/huggingface/transformers): See [here](#transformers)
### vLLM
We recommend using this model with the [vLLM library](https://github.com/vllm-project/vllm)
to implement production-ready inference pipelines.
**Note 1**: We recommend using a relatively low temperature, such as `temperature=0.15`.
**Note 2**: Make sure to add a system prompt to the model to best tailor it to your needs. If you want to use the model as a general assistant, we recommend the following
system prompt:
```
system_prompt = """You are Mistral Small 3, a Large Language Model (LLM) created by Mistral AI, a French startup headquartered in Paris.
Your knowledge base was last updated on 2023-10-01. The current date is 2025-01-30.
When you're not sure about some information, you say that you don't have the information and don't make up anything.
If the user's question is not clear, ambiguous, or does not provide enough context for you to accurately answer the question, you do not try to answer it right away and you rather ask the user to clarify their request (e.g. \"What are some good restaurants around me?\" => \"Where are you?\" or \"When is the next flight to Tokyo\" => \"Where do you travel from?\")"""
```
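The excerpt from the original readme stops here. As a hedged illustration (not taken from the original model card), a minimal offline-inference sketch for the unquantized model with vLLM, using the recommended temperature and a system prompt, might look like:
```python
# Minimal offline-inference sketch for the original (unquantized) model with vLLM.
# Assumes a recent vLLM release that provides LLM.chat; adjust to your environment.
from vllm import LLM, SamplingParams

# Shortened system prompt; substitute the full recommended prompt shown above.
system_prompt = "You are Mistral Small 3, a Large Language Model (LLM) created by Mistral AI."

llm = LLM(model="mistralai/Mistral-Small-24B-Instruct-2501", tokenizer_mode="mistral")
params = SamplingParams(temperature=0.15, max_tokens=256)

messages = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": "Give me a short introduction to Mistral Small 3."},
]
outputs = llm.chat(messages, sampling_params=params)
print(outputs[0].outputs[0].text)
```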
|
shibajustfor/34a94459-b2eb-4368-9de2-541c30a57a29
|
shibajustfor
| 2025-02-04T01:49:19Z | 13 | 0 |
peft
|
[
"peft",
"safetensors",
"bloom",
"axolotl",
"generated_from_trainer",
"base_model:bigscience/bloom-560m",
"base_model:adapter:bigscience/bloom-560m",
"license:bigscience-bloom-rail-1.0",
"region:us"
] | null | 2025-02-04T01:45:13Z |
---
library_name: peft
license: bigscience-bloom-rail-1.0
base_model: bigscience/bloom-560m
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 34a94459-b2eb-4368-9de2-541c30a57a29
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: bigscience/bloom-560m
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- b6e5ed8190ccb774_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/b6e5ed8190ccb774_train_data.json
type:
field_instruction: soru
field_output: cevap
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: shibajustfor/34a94459-b2eb-4368-9de2-541c30a57a29
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: constant
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/b6e5ed8190ccb774_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 72e7b874-15da-42e2-ab22-791b74a29685
wandb_project: Birthday-SN56-38-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 72e7b874-15da-42e2-ab22-791b74a29685
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 34a94459-b2eb-4368-9de2-541c30a57a29
This model is a fine-tuned version of [bigscience/bloom-560m](https://huggingface.co/bigscience/bloom-560m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7612
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: adamw_bnb_8bit with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: constant
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0001 | 1 | 3.9480 |
| 12.593 | 0.0065 | 50 | 3.2149 |
| 12.2702 | 0.0131 | 100 | 3.0013 |
| 11.3896 | 0.0196 | 150 | 2.8476 |
| 11.5377 | 0.0262 | 200 | 2.7612 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
MayBashendy/ArabicNewSplits7_B_usingWellWrittenEssays_FineTuningAraBERT_run999_AugV5_k5_task2_organization
|
MayBashendy
| 2025-02-04T01:48:34Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:aubmindlab/bert-base-arabertv02",
"base_model:finetune:aubmindlab/bert-base-arabertv02",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-02-04T01:41:29Z |
---
library_name: transformers
base_model: aubmindlab/bert-base-arabertv02
tags:
- generated_from_trainer
model-index:
- name: ArabicNewSplits7_B_usingWellWrittenEssays_FineTuningAraBERT_run999_AugV5_k5_task2_organization
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ArabicNewSplits7_B_usingWellWrittenEssays_FineTuningAraBERT_run999_AugV5_k5_task2_organization
This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7827
- Qwk: 0.5331
- Mse: 0.7827
- Rmse: 0.8847
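The card does not include the evaluation code; a plausible scikit-learn sketch of how Qwk, Mse and Rmse could be computed for ordinal predictions (the labels and predictions below are placeholders, not the actual evaluation data):
```python
# Hypothetical sketch, not the card's actual evaluation code: computing
# Qwk / Mse / Rmse for ordinal predictions with scikit-learn.
import numpy as np
from sklearn.metrics import cohen_kappa_score, mean_squared_error

y_true = np.array([2, 3, 1, 4, 2])   # placeholder gold labels
y_pred = np.array([2, 2, 1, 4, 3])   # placeholder model predictions

qwk = cohen_kappa_score(y_true, y_pred, weights="quadratic")  # Qwk
mse = mean_squared_error(y_true, y_pred)                      # Mse
rmse = float(np.sqrt(mse))                                    # Rmse
print(qwk, mse, rmse)
```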
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:-------:|:----:|:---------------:|:-------:|:------:|:------:|
| No log | 0.0690 | 2 | 4.5860 | 0.0042 | 4.5860 | 2.1415 |
| No log | 0.1379 | 4 | 2.6996 | -0.0420 | 2.6996 | 1.6430 |
| No log | 0.2069 | 6 | 1.6579 | 0.0372 | 1.6579 | 1.2876 |
| No log | 0.2759 | 8 | 1.2154 | 0.1590 | 1.2154 | 1.1025 |
| No log | 0.3448 | 10 | 1.1961 | 0.1370 | 1.1961 | 1.0937 |
| No log | 0.4138 | 12 | 1.3367 | 0.0366 | 1.3367 | 1.1561 |
| No log | 0.4828 | 14 | 1.2065 | 0.1944 | 1.2065 | 1.0984 |
| No log | 0.5517 | 16 | 1.2127 | 0.1246 | 1.2127 | 1.1012 |
| No log | 0.6207 | 18 | 1.2186 | 0.0806 | 1.2186 | 1.1039 |
| No log | 0.6897 | 20 | 1.2428 | 0.1422 | 1.2428 | 1.1148 |
| No log | 0.7586 | 22 | 1.3930 | 0.0610 | 1.3930 | 1.1803 |
| No log | 0.8276 | 24 | 1.4143 | 0.0524 | 1.4143 | 1.1892 |
| No log | 0.8966 | 26 | 1.3968 | 0.1512 | 1.3968 | 1.1819 |
| No log | 0.9655 | 28 | 1.6419 | 0.2037 | 1.6419 | 1.2813 |
| No log | 1.0345 | 30 | 1.5294 | 0.3149 | 1.5294 | 1.2367 |
| No log | 1.1034 | 32 | 1.2055 | 0.2444 | 1.2055 | 1.0979 |
| No log | 1.1724 | 34 | 1.4178 | 0.3275 | 1.4178 | 1.1907 |
| No log | 1.2414 | 36 | 1.3008 | 0.3548 | 1.3008 | 1.1405 |
| No log | 1.3103 | 38 | 1.0903 | 0.3311 | 1.0903 | 1.0442 |
| No log | 1.3793 | 40 | 1.3131 | 0.3139 | 1.3131 | 1.1459 |
| No log | 1.4483 | 42 | 1.6069 | 0.3000 | 1.6069 | 1.2676 |
| No log | 1.5172 | 44 | 1.4816 | 0.2962 | 1.4816 | 1.2172 |
| No log | 1.5862 | 46 | 1.1020 | 0.3723 | 1.1020 | 1.0498 |
| No log | 1.6552 | 48 | 0.9298 | 0.4181 | 0.9298 | 0.9643 |
| No log | 1.7241 | 50 | 0.9439 | 0.3160 | 0.9439 | 0.9716 |
| No log | 1.7931 | 52 | 0.9629 | 0.4321 | 0.9629 | 0.9813 |
| No log | 1.8621 | 54 | 0.9580 | 0.4148 | 0.9580 | 0.9788 |
| No log | 1.9310 | 56 | 1.0020 | 0.4894 | 1.0020 | 1.0010 |
| No log | 2.0 | 58 | 1.0422 | 0.5054 | 1.0422 | 1.0209 |
| No log | 2.0690 | 60 | 1.1156 | 0.5014 | 1.1156 | 1.0562 |
| No log | 2.1379 | 62 | 1.0398 | 0.5346 | 1.0398 | 1.0197 |
| No log | 2.2069 | 64 | 0.9351 | 0.5211 | 0.9351 | 0.9670 |
| No log | 2.2759 | 66 | 0.8907 | 0.5034 | 0.8907 | 0.9437 |
| No log | 2.3448 | 68 | 0.8700 | 0.5037 | 0.8700 | 0.9328 |
| No log | 2.4138 | 70 | 0.8873 | 0.4216 | 0.8873 | 0.9420 |
| No log | 2.4828 | 72 | 0.9484 | 0.3729 | 0.9484 | 0.9739 |
| No log | 2.5517 | 74 | 0.9031 | 0.3548 | 0.9031 | 0.9503 |
| No log | 2.6207 | 76 | 0.8846 | 0.3738 | 0.8846 | 0.9405 |
| No log | 2.6897 | 78 | 0.8391 | 0.4084 | 0.8391 | 0.9160 |
| No log | 2.7586 | 80 | 0.8028 | 0.5102 | 0.8028 | 0.8960 |
| No log | 2.8276 | 82 | 0.8226 | 0.5380 | 0.8226 | 0.9070 |
| No log | 2.8966 | 84 | 0.8731 | 0.5328 | 0.8731 | 0.9344 |
| No log | 2.9655 | 86 | 0.8231 | 0.5792 | 0.8231 | 0.9072 |
| No log | 3.0345 | 88 | 0.7597 | 0.5532 | 0.7597 | 0.8716 |
| No log | 3.1034 | 90 | 0.7814 | 0.5902 | 0.7814 | 0.8840 |
| No log | 3.1724 | 92 | 0.7441 | 0.5787 | 0.7441 | 0.8626 |
| No log | 3.2414 | 94 | 0.7280 | 0.5244 | 0.7280 | 0.8533 |
| No log | 3.3103 | 96 | 0.7301 | 0.5244 | 0.7301 | 0.8545 |
| No log | 3.3793 | 98 | 0.7402 | 0.5633 | 0.7402 | 0.8604 |
| No log | 3.4483 | 100 | 0.8598 | 0.5724 | 0.8598 | 0.9272 |
| No log | 3.5172 | 102 | 0.9052 | 0.5222 | 0.9052 | 0.9514 |
| No log | 3.5862 | 104 | 1.0297 | 0.4695 | 1.0297 | 1.0147 |
| No log | 3.6552 | 106 | 1.0569 | 0.5111 | 1.0569 | 1.0281 |
| No log | 3.7241 | 108 | 0.9216 | 0.5293 | 0.9216 | 0.9600 |
| No log | 3.7931 | 110 | 0.7665 | 0.5646 | 0.7665 | 0.8755 |
| No log | 3.8621 | 112 | 0.7128 | 0.5534 | 0.7128 | 0.8443 |
| No log | 3.9310 | 114 | 0.7355 | 0.5534 | 0.7355 | 0.8576 |
| No log | 4.0 | 116 | 0.8731 | 0.5007 | 0.8731 | 0.9344 |
| No log | 4.0690 | 118 | 0.9498 | 0.4596 | 0.9498 | 0.9746 |
| No log | 4.1379 | 120 | 0.8700 | 0.5497 | 0.8700 | 0.9327 |
| No log | 4.2069 | 122 | 0.8202 | 0.5964 | 0.8202 | 0.9057 |
| No log | 4.2759 | 124 | 0.7906 | 0.5661 | 0.7906 | 0.8891 |
| No log | 4.3448 | 126 | 0.8384 | 0.5649 | 0.8384 | 0.9156 |
| No log | 4.4138 | 128 | 0.8160 | 0.5473 | 0.8160 | 0.9033 |
| No log | 4.4828 | 130 | 0.7711 | 0.5155 | 0.7711 | 0.8781 |
| No log | 4.5517 | 132 | 0.7697 | 0.5155 | 0.7697 | 0.8773 |
| No log | 4.6207 | 134 | 0.7754 | 0.5380 | 0.7754 | 0.8806 |
| No log | 4.6897 | 136 | 0.8199 | 0.5625 | 0.8199 | 0.9055 |
| No log | 4.7586 | 138 | 0.8332 | 0.4811 | 0.8332 | 0.9128 |
| No log | 4.8276 | 140 | 0.7855 | 0.4700 | 0.7855 | 0.8863 |
| No log | 4.8966 | 142 | 0.7696 | 0.5517 | 0.7696 | 0.8773 |
| No log | 4.9655 | 144 | 0.7304 | 0.5930 | 0.7304 | 0.8546 |
| No log | 5.0345 | 146 | 0.7390 | 0.5467 | 0.7390 | 0.8596 |
| No log | 5.1034 | 148 | 0.7498 | 0.4944 | 0.7498 | 0.8659 |
| No log | 5.1724 | 150 | 0.7101 | 0.5647 | 0.7101 | 0.8427 |
| No log | 5.2414 | 152 | 0.7168 | 0.6281 | 0.7168 | 0.8466 |
| No log | 5.3103 | 154 | 0.7075 | 0.5648 | 0.7075 | 0.8411 |
| No log | 5.3793 | 156 | 0.7171 | 0.5505 | 0.7171 | 0.8468 |
| No log | 5.4483 | 158 | 0.7306 | 0.6074 | 0.7306 | 0.8547 |
| No log | 5.5172 | 160 | 0.7318 | 0.6054 | 0.7318 | 0.8555 |
| No log | 5.5862 | 162 | 0.7400 | 0.5838 | 0.7400 | 0.8602 |
| No log | 5.6552 | 164 | 0.7766 | 0.5631 | 0.7766 | 0.8812 |
| No log | 5.7241 | 166 | 0.7540 | 0.5495 | 0.7540 | 0.8683 |
| No log | 5.7931 | 168 | 0.7575 | 0.5997 | 0.7575 | 0.8703 |
| No log | 5.8621 | 170 | 0.7787 | 0.5621 | 0.7787 | 0.8824 |
| No log | 5.9310 | 172 | 0.8353 | 0.5825 | 0.8353 | 0.9139 |
| No log | 6.0 | 174 | 0.8169 | 0.6203 | 0.8169 | 0.9038 |
| No log | 6.0690 | 176 | 0.7620 | 0.5699 | 0.7620 | 0.8729 |
| No log | 6.1379 | 178 | 0.7530 | 0.5584 | 0.7530 | 0.8678 |
| No log | 6.2069 | 180 | 0.7610 | 0.5132 | 0.7610 | 0.8724 |
| No log | 6.2759 | 182 | 0.8526 | 0.4893 | 0.8526 | 0.9234 |
| No log | 6.3448 | 184 | 0.9633 | 0.4989 | 0.9633 | 0.9815 |
| No log | 6.4138 | 186 | 0.8603 | 0.4785 | 0.8603 | 0.9275 |
| No log | 6.4828 | 188 | 0.7646 | 0.5376 | 0.7646 | 0.8744 |
| No log | 6.5517 | 190 | 0.7210 | 0.6035 | 0.7210 | 0.8491 |
| No log | 6.6207 | 192 | 0.6899 | 0.6435 | 0.6899 | 0.8306 |
| No log | 6.6897 | 194 | 0.6940 | 0.6713 | 0.6940 | 0.8331 |
| No log | 6.7586 | 196 | 0.7134 | 0.6234 | 0.7134 | 0.8446 |
| No log | 6.8276 | 198 | 0.7676 | 0.5255 | 0.7676 | 0.8761 |
| No log | 6.8966 | 200 | 0.7394 | 0.4945 | 0.7394 | 0.8599 |
| No log | 6.9655 | 202 | 0.7510 | 0.5163 | 0.7510 | 0.8666 |
| No log | 7.0345 | 204 | 0.7595 | 0.5079 | 0.7595 | 0.8715 |
| No log | 7.1034 | 206 | 0.7511 | 0.6060 | 0.7511 | 0.8667 |
| No log | 7.1724 | 208 | 0.7605 | 0.6121 | 0.7605 | 0.8721 |
| No log | 7.2414 | 210 | 0.7441 | 0.5774 | 0.7441 | 0.8626 |
| No log | 7.3103 | 212 | 0.7427 | 0.5253 | 0.7427 | 0.8618 |
| No log | 7.3793 | 214 | 0.7376 | 0.5774 | 0.7376 | 0.8589 |
| No log | 7.4483 | 216 | 0.7578 | 0.6132 | 0.7578 | 0.8705 |
| No log | 7.5172 | 218 | 0.7791 | 0.6305 | 0.7791 | 0.8826 |
| No log | 7.5862 | 220 | 0.8327 | 0.6159 | 0.8327 | 0.9125 |
| No log | 7.6552 | 222 | 0.7940 | 0.5961 | 0.7940 | 0.8911 |
| No log | 7.7241 | 224 | 0.7380 | 0.6051 | 0.7380 | 0.8591 |
| No log | 7.7931 | 226 | 0.7456 | 0.5697 | 0.7456 | 0.8635 |
| No log | 7.8621 | 228 | 0.7428 | 0.5697 | 0.7428 | 0.8619 |
| No log | 7.9310 | 230 | 0.7183 | 0.6239 | 0.7183 | 0.8475 |
| No log | 8.0 | 232 | 0.7763 | 0.6384 | 0.7763 | 0.8811 |
| No log | 8.0690 | 234 | 0.8529 | 0.6092 | 0.8529 | 0.9235 |
| No log | 8.1379 | 236 | 0.7757 | 0.6400 | 0.7757 | 0.8807 |
| No log | 8.2069 | 238 | 0.7151 | 0.6239 | 0.7151 | 0.8456 |
| No log | 8.2759 | 240 | 0.7395 | 0.5301 | 0.7395 | 0.8599 |
| No log | 8.3448 | 242 | 0.7522 | 0.5333 | 0.7522 | 0.8673 |
| No log | 8.4138 | 244 | 0.7239 | 0.5866 | 0.7239 | 0.8508 |
| No log | 8.4828 | 246 | 0.7094 | 0.6625 | 0.7094 | 0.8422 |
| No log | 8.5517 | 248 | 0.7757 | 0.6499 | 0.7757 | 0.8807 |
| No log | 8.6207 | 250 | 0.8802 | 0.5899 | 0.8802 | 0.9382 |
| No log | 8.6897 | 252 | 0.8558 | 0.6200 | 0.8558 | 0.9251 |
| No log | 8.7586 | 254 | 0.7504 | 0.6066 | 0.7504 | 0.8662 |
| No log | 8.8276 | 256 | 0.7080 | 0.5483 | 0.7080 | 0.8414 |
| No log | 8.8966 | 258 | 0.8428 | 0.5549 | 0.8428 | 0.9181 |
| No log | 8.9655 | 260 | 0.8959 | 0.4563 | 0.8959 | 0.9465 |
| No log | 9.0345 | 262 | 0.8483 | 0.5145 | 0.8483 | 0.9210 |
| No log | 9.1034 | 264 | 0.7494 | 0.5561 | 0.7494 | 0.8657 |
| No log | 9.1724 | 266 | 0.7344 | 0.5505 | 0.7344 | 0.8570 |
| No log | 9.2414 | 268 | 0.7188 | 0.5705 | 0.7188 | 0.8478 |
| No log | 9.3103 | 270 | 0.7212 | 0.6107 | 0.7212 | 0.8492 |
| No log | 9.3793 | 272 | 0.7458 | 0.5992 | 0.7458 | 0.8636 |
| No log | 9.4483 | 274 | 0.7366 | 0.6278 | 0.7366 | 0.8583 |
| No log | 9.5172 | 276 | 0.7339 | 0.5633 | 0.7339 | 0.8567 |
| No log | 9.5862 | 278 | 0.7509 | 0.5245 | 0.7509 | 0.8665 |
| No log | 9.6552 | 280 | 0.7455 | 0.5420 | 0.7455 | 0.8634 |
| No log | 9.7241 | 282 | 0.7419 | 0.5420 | 0.7419 | 0.8613 |
| No log | 9.7931 | 284 | 0.7337 | 0.6415 | 0.7337 | 0.8566 |
| No log | 9.8621 | 286 | 0.7402 | 0.6395 | 0.7402 | 0.8604 |
| No log | 9.9310 | 288 | 0.7704 | 0.6175 | 0.7704 | 0.8777 |
| No log | 10.0 | 290 | 0.8105 | 0.6315 | 0.8105 | 0.9003 |
| No log | 10.0690 | 292 | 0.8148 | 0.6154 | 0.8148 | 0.9027 |
| No log | 10.1379 | 294 | 0.7538 | 0.6244 | 0.7538 | 0.8682 |
| No log | 10.2069 | 296 | 0.7352 | 0.5930 | 0.7352 | 0.8574 |
| No log | 10.2759 | 298 | 0.7286 | 0.5744 | 0.7286 | 0.8536 |
| No log | 10.3448 | 300 | 0.7381 | 0.5622 | 0.7381 | 0.8591 |
| No log | 10.4138 | 302 | 0.7885 | 0.5847 | 0.7885 | 0.8880 |
| No log | 10.4828 | 304 | 0.8325 | 0.5685 | 0.8325 | 0.9124 |
| No log | 10.5517 | 306 | 0.8751 | 0.5748 | 0.8751 | 0.9355 |
| No log | 10.6207 | 308 | 0.8765 | 0.5748 | 0.8765 | 0.9362 |
| No log | 10.6897 | 310 | 0.7974 | 0.6435 | 0.7974 | 0.8930 |
| No log | 10.7586 | 312 | 0.7459 | 0.5993 | 0.7459 | 0.8636 |
| No log | 10.8276 | 314 | 0.7526 | 0.5631 | 0.7526 | 0.8676 |
| No log | 10.8966 | 316 | 0.7533 | 0.5434 | 0.7533 | 0.8679 |
| No log | 10.9655 | 318 | 0.7508 | 0.5774 | 0.7508 | 0.8665 |
| No log | 11.0345 | 320 | 0.7991 | 0.4998 | 0.7991 | 0.8939 |
| No log | 11.1034 | 322 | 0.8500 | 0.5515 | 0.8500 | 0.9220 |
| No log | 11.1724 | 324 | 0.8932 | 0.5592 | 0.8932 | 0.9451 |
| No log | 11.2414 | 326 | 0.8462 | 0.5571 | 0.8462 | 0.9199 |
| No log | 11.3103 | 328 | 0.7751 | 0.6657 | 0.7751 | 0.8804 |
| No log | 11.3793 | 330 | 0.7703 | 0.6369 | 0.7703 | 0.8777 |
| No log | 11.4483 | 332 | 0.7693 | 0.6369 | 0.7693 | 0.8771 |
| No log | 11.5172 | 334 | 0.7576 | 0.5521 | 0.7576 | 0.8704 |
| No log | 11.5862 | 336 | 0.7904 | 0.5359 | 0.7904 | 0.8890 |
| No log | 11.6552 | 338 | 0.8023 | 0.4969 | 0.8023 | 0.8957 |
| No log | 11.7241 | 340 | 0.7846 | 0.4969 | 0.7846 | 0.8858 |
| No log | 11.7931 | 342 | 0.7395 | 0.5744 | 0.7395 | 0.8599 |
| No log | 11.8621 | 344 | 0.7217 | 0.6021 | 0.7217 | 0.8495 |
| No log | 11.9310 | 346 | 0.7100 | 0.6131 | 0.7100 | 0.8426 |
| No log | 12.0 | 348 | 0.6919 | 0.6011 | 0.6919 | 0.8318 |
| No log | 12.0690 | 350 | 0.6909 | 0.5676 | 0.6909 | 0.8312 |
| No log | 12.1379 | 352 | 0.6980 | 0.5649 | 0.6980 | 0.8355 |
| No log | 12.2069 | 354 | 0.7037 | 0.5648 | 0.7037 | 0.8389 |
| No log | 12.2759 | 356 | 0.7214 | 0.4948 | 0.7214 | 0.8494 |
| No log | 12.3448 | 358 | 0.7644 | 0.5013 | 0.7644 | 0.8743 |
| No log | 12.4138 | 360 | 0.8011 | 0.5175 | 0.8011 | 0.8950 |
| No log | 12.4828 | 362 | 0.8528 | 0.5405 | 0.8528 | 0.9235 |
| No log | 12.5517 | 364 | 0.8571 | 0.4663 | 0.8571 | 0.9258 |
| No log | 12.6207 | 366 | 0.8092 | 0.4820 | 0.8092 | 0.8996 |
| No log | 12.6897 | 368 | 0.7878 | 0.4963 | 0.7878 | 0.8876 |
| No log | 12.7586 | 370 | 0.7771 | 0.5136 | 0.7771 | 0.8815 |
| No log | 12.8276 | 372 | 0.7749 | 0.5473 | 0.7749 | 0.8803 |
| No log | 12.8966 | 374 | 0.7848 | 0.5876 | 0.7848 | 0.8859 |
| No log | 12.9655 | 376 | 0.8350 | 0.5797 | 0.8350 | 0.9138 |
| No log | 13.0345 | 378 | 0.8568 | 0.5797 | 0.8568 | 0.9256 |
| No log | 13.1034 | 380 | 0.7892 | 0.6038 | 0.7892 | 0.8884 |
| No log | 13.1724 | 382 | 0.7562 | 0.6013 | 0.7562 | 0.8696 |
| No log | 13.2414 | 384 | 0.7396 | 0.5815 | 0.7396 | 0.8600 |
| No log | 13.3103 | 386 | 0.7324 | 0.5815 | 0.7324 | 0.8558 |
| No log | 13.3793 | 388 | 0.7193 | 0.5450 | 0.7193 | 0.8481 |
| No log | 13.4483 | 390 | 0.7178 | 0.5570 | 0.7178 | 0.8472 |
| No log | 13.5172 | 392 | 0.7260 | 0.6154 | 0.7260 | 0.8521 |
| No log | 13.5862 | 394 | 0.7104 | 0.6263 | 0.7104 | 0.8428 |
| No log | 13.6552 | 396 | 0.7038 | 0.6263 | 0.7038 | 0.8390 |
| No log | 13.7241 | 398 | 0.6762 | 0.6567 | 0.6762 | 0.8223 |
| No log | 13.7931 | 400 | 0.6709 | 0.6212 | 0.6709 | 0.8191 |
| No log | 13.8621 | 402 | 0.6730 | 0.6220 | 0.6730 | 0.8203 |
| No log | 13.9310 | 404 | 0.6571 | 0.6682 | 0.6571 | 0.8106 |
| No log | 14.0 | 406 | 0.6548 | 0.7004 | 0.6548 | 0.8092 |
| No log | 14.0690 | 408 | 0.6445 | 0.6963 | 0.6445 | 0.8028 |
| No log | 14.1379 | 410 | 0.6535 | 0.6016 | 0.6535 | 0.8084 |
| No log | 14.2069 | 412 | 0.6702 | 0.6129 | 0.6702 | 0.8187 |
| No log | 14.2759 | 414 | 0.6819 | 0.6316 | 0.6819 | 0.8258 |
| No log | 14.3448 | 416 | 0.6742 | 0.6850 | 0.6742 | 0.8211 |
| No log | 14.4138 | 418 | 0.7031 | 0.6652 | 0.7031 | 0.8385 |
| No log | 14.4828 | 420 | 0.8189 | 0.6259 | 0.8189 | 0.9049 |
| No log | 14.5517 | 422 | 0.8710 | 0.6303 | 0.8710 | 0.9333 |
| No log | 14.6207 | 424 | 0.8088 | 0.6167 | 0.8088 | 0.8993 |
| No log | 14.6897 | 426 | 0.7069 | 0.6793 | 0.7069 | 0.8408 |
| No log | 14.7586 | 428 | 0.6750 | 0.6894 | 0.6750 | 0.8216 |
| No log | 14.8276 | 430 | 0.6986 | 0.6266 | 0.6986 | 0.8358 |
| No log | 14.8966 | 432 | 0.7090 | 0.5743 | 0.7090 | 0.8420 |
| No log | 14.9655 | 434 | 0.6881 | 0.6654 | 0.6881 | 0.8295 |
| No log | 15.0345 | 436 | 0.6834 | 0.6468 | 0.6834 | 0.8267 |
| No log | 15.1034 | 438 | 0.6910 | 0.6712 | 0.6910 | 0.8312 |
| No log | 15.1724 | 440 | 0.7237 | 0.6679 | 0.7237 | 0.8507 |
| No log | 15.2414 | 442 | 0.7332 | 0.6580 | 0.7332 | 0.8563 |
| No log | 15.3103 | 444 | 0.7368 | 0.6601 | 0.7368 | 0.8584 |
| No log | 15.3793 | 446 | 0.7136 | 0.6171 | 0.7136 | 0.8447 |
| No log | 15.4483 | 448 | 0.6934 | 0.6631 | 0.6934 | 0.8327 |
| No log | 15.5172 | 450 | 0.6874 | 0.6824 | 0.6874 | 0.8291 |
| No log | 15.5862 | 452 | 0.6965 | 0.6317 | 0.6965 | 0.8346 |
| No log | 15.6552 | 454 | 0.7033 | 0.7089 | 0.7033 | 0.8387 |
| No log | 15.7241 | 456 | 0.7267 | 0.6481 | 0.7267 | 0.8525 |
| No log | 15.7931 | 458 | 0.7437 | 0.6580 | 0.7437 | 0.8624 |
| No log | 15.8621 | 460 | 0.7309 | 0.5869 | 0.7309 | 0.8549 |
| No log | 15.9310 | 462 | 0.7151 | 0.6041 | 0.7151 | 0.8457 |
| No log | 16.0 | 464 | 0.6873 | 0.6237 | 0.6873 | 0.8291 |
| No log | 16.0690 | 466 | 0.6789 | 0.6272 | 0.6789 | 0.8240 |
| No log | 16.1379 | 468 | 0.6819 | 0.6934 | 0.6819 | 0.8257 |
| No log | 16.2069 | 470 | 0.6845 | 0.6809 | 0.6845 | 0.8274 |
| No log | 16.2759 | 472 | 0.6902 | 0.6809 | 0.6902 | 0.8308 |
| No log | 16.3448 | 474 | 0.6913 | 0.6578 | 0.6913 | 0.8314 |
| No log | 16.4138 | 476 | 0.6879 | 0.6578 | 0.6879 | 0.8294 |
| No log | 16.4828 | 478 | 0.6870 | 0.6578 | 0.6870 | 0.8288 |
| No log | 16.5517 | 480 | 0.6874 | 0.6436 | 0.6874 | 0.8291 |
| No log | 16.6207 | 482 | 0.6929 | 0.5781 | 0.6929 | 0.8324 |
| No log | 16.6897 | 484 | 0.6981 | 0.5403 | 0.6981 | 0.8355 |
| No log | 16.7586 | 486 | 0.6951 | 0.5403 | 0.6951 | 0.8337 |
| No log | 16.8276 | 488 | 0.6860 | 0.5921 | 0.6860 | 0.8282 |
| No log | 16.8966 | 490 | 0.6896 | 0.6404 | 0.6896 | 0.8304 |
| No log | 16.9655 | 492 | 0.7358 | 0.6315 | 0.7358 | 0.8578 |
| No log | 17.0345 | 494 | 0.7414 | 0.6464 | 0.7414 | 0.8610 |
| No log | 17.1034 | 496 | 0.6874 | 0.6441 | 0.6874 | 0.8291 |
| No log | 17.1724 | 498 | 0.6582 | 0.6528 | 0.6582 | 0.8113 |
| 0.2525 | 17.2414 | 500 | 0.6940 | 0.5684 | 0.6940 | 0.8330 |
| 0.2525 | 17.3103 | 502 | 0.7289 | 0.5737 | 0.7289 | 0.8537 |
| 0.2525 | 17.3793 | 504 | 0.6983 | 0.6105 | 0.6983 | 0.8356 |
| 0.2525 | 17.4483 | 506 | 0.6568 | 0.6212 | 0.6568 | 0.8104 |
| 0.2525 | 17.5172 | 508 | 0.6930 | 0.6266 | 0.6930 | 0.8325 |
| 0.2525 | 17.5862 | 510 | 0.7971 | 0.6194 | 0.7971 | 0.8928 |
| 0.2525 | 17.6552 | 512 | 0.8424 | 0.5883 | 0.8424 | 0.9178 |
| 0.2525 | 17.7241 | 514 | 0.8040 | 0.5962 | 0.8040 | 0.8967 |
| 0.2525 | 17.7931 | 516 | 0.7253 | 0.5352 | 0.7253 | 0.8517 |
| 0.2525 | 17.8621 | 518 | 0.6738 | 0.5835 | 0.6738 | 0.8208 |
| 0.2525 | 17.9310 | 520 | 0.6546 | 0.6362 | 0.6546 | 0.8091 |
| 0.2525 | 18.0 | 522 | 0.6491 | 0.6487 | 0.6491 | 0.8057 |
| 0.2525 | 18.0690 | 524 | 0.6587 | 0.6849 | 0.6587 | 0.8116 |
| 0.2525 | 18.1379 | 526 | 0.6859 | 0.6771 | 0.6859 | 0.8282 |
| 0.2525 | 18.2069 | 528 | 0.7243 | 0.6607 | 0.7243 | 0.8510 |
| 0.2525 | 18.2759 | 530 | 0.7094 | 0.6464 | 0.7094 | 0.8422 |
| 0.2525 | 18.3448 | 532 | 0.6948 | 0.6350 | 0.6948 | 0.8335 |
| 0.2525 | 18.4138 | 534 | 0.6912 | 0.5898 | 0.6912 | 0.8314 |
| 0.2525 | 18.4828 | 536 | 0.6953 | 0.5835 | 0.6953 | 0.8339 |
| 0.2525 | 18.5517 | 538 | 0.7008 | 0.5431 | 0.7008 | 0.8372 |
| 0.2525 | 18.6207 | 540 | 0.6999 | 0.5805 | 0.6999 | 0.8366 |
| 0.2525 | 18.6897 | 542 | 0.7019 | 0.6110 | 0.7019 | 0.8378 |
| 0.2525 | 18.7586 | 544 | 0.6946 | 0.6215 | 0.6946 | 0.8335 |
| 0.2525 | 18.8276 | 546 | 0.6709 | 0.6283 | 0.6709 | 0.8191 |
| 0.2525 | 18.8966 | 548 | 0.6557 | 0.6528 | 0.6557 | 0.8097 |
| 0.2525 | 18.9655 | 550 | 0.6592 | 0.6528 | 0.6592 | 0.8119 |
| 0.2525 | 19.0345 | 552 | 0.6643 | 0.6189 | 0.6643 | 0.8150 |
| 0.2525 | 19.1034 | 554 | 0.6693 | 0.6176 | 0.6693 | 0.8181 |
| 0.2525 | 19.1724 | 556 | 0.6684 | 0.6324 | 0.6684 | 0.8175 |
| 0.2525 | 19.2414 | 558 | 0.6892 | 0.6391 | 0.6892 | 0.8302 |
| 0.2525 | 19.3103 | 560 | 0.7126 | 0.6299 | 0.7126 | 0.8441 |
| 0.2525 | 19.3793 | 562 | 0.6958 | 0.6132 | 0.6958 | 0.8342 |
| 0.2525 | 19.4483 | 564 | 0.6891 | 0.6244 | 0.6891 | 0.8301 |
| 0.2525 | 19.5172 | 566 | 0.6755 | 0.6074 | 0.6755 | 0.8219 |
| 0.2525 | 19.5862 | 568 | 0.6952 | 0.6335 | 0.6952 | 0.8338 |
| 0.2525 | 19.6552 | 570 | 0.7054 | 0.5898 | 0.7054 | 0.8399 |
| 0.2525 | 19.7241 | 572 | 0.7122 | 0.5756 | 0.7122 | 0.8439 |
| 0.2525 | 19.7931 | 574 | 0.7098 | 0.5012 | 0.7098 | 0.8425 |
| 0.2525 | 19.8621 | 576 | 0.7195 | 0.4563 | 0.7195 | 0.8482 |
| 0.2525 | 19.9310 | 578 | 0.7092 | 0.4826 | 0.7092 | 0.8422 |
| 0.2525 | 20.0 | 580 | 0.6928 | 0.5676 | 0.6928 | 0.8323 |
| 0.2525 | 20.0690 | 582 | 0.6940 | 0.6108 | 0.6940 | 0.8331 |
| 0.2525 | 20.1379 | 584 | 0.6875 | 0.6641 | 0.6875 | 0.8292 |
| 0.2525 | 20.2069 | 586 | 0.6934 | 0.6641 | 0.6934 | 0.8327 |
| 0.2525 | 20.2759 | 588 | 0.7054 | 0.6163 | 0.7054 | 0.8399 |
| 0.2525 | 20.3448 | 590 | 0.7008 | 0.5797 | 0.7008 | 0.8371 |
| 0.2525 | 20.4138 | 592 | 0.6981 | 0.6641 | 0.6981 | 0.8355 |
| 0.2525 | 20.4828 | 594 | 0.6979 | 0.6611 | 0.6979 | 0.8354 |
| 0.2525 | 20.5517 | 596 | 0.6980 | 0.6557 | 0.6980 | 0.8355 |
| 0.2525 | 20.6207 | 598 | 0.7096 | 0.6795 | 0.7096 | 0.8423 |
| 0.2525 | 20.6897 | 600 | 0.7798 | 0.6483 | 0.7798 | 0.8831 |
| 0.2525 | 20.7586 | 602 | 0.8087 | 0.6425 | 0.8087 | 0.8993 |
| 0.2525 | 20.8276 | 604 | 0.7694 | 0.6457 | 0.7694 | 0.8771 |
| 0.2525 | 20.8966 | 606 | 0.7093 | 0.6353 | 0.7093 | 0.8422 |
| 0.2525 | 20.9655 | 608 | 0.6742 | 0.5801 | 0.6742 | 0.8211 |
| 0.2525 | 21.0345 | 610 | 0.6844 | 0.6190 | 0.6844 | 0.8273 |
| 0.2525 | 21.1034 | 612 | 0.6840 | 0.6154 | 0.6840 | 0.8270 |
| 0.2525 | 21.1724 | 614 | 0.6685 | 0.5746 | 0.6685 | 0.8176 |
| 0.2525 | 21.2414 | 616 | 0.6808 | 0.6338 | 0.6808 | 0.8251 |
| 0.2525 | 21.3103 | 618 | 0.6905 | 0.6338 | 0.6905 | 0.8309 |
| 0.2525 | 21.3793 | 620 | 0.6882 | 0.6809 | 0.6882 | 0.8296 |
| 0.2525 | 21.4483 | 622 | 0.6938 | 0.6809 | 0.6938 | 0.8330 |
| 0.2525 | 21.5172 | 624 | 0.7107 | 0.6239 | 0.7107 | 0.8431 |
| 0.2525 | 21.5862 | 626 | 0.7139 | 0.6252 | 0.7139 | 0.8449 |
| 0.2525 | 21.6552 | 628 | 0.7173 | 0.6239 | 0.7173 | 0.8469 |
| 0.2525 | 21.7241 | 630 | 0.7035 | 0.6534 | 0.7035 | 0.8388 |
| 0.2525 | 21.7931 | 632 | 0.7092 | 0.6611 | 0.7092 | 0.8422 |
| 0.2525 | 21.8621 | 634 | 0.7206 | 0.6573 | 0.7206 | 0.8489 |
| 0.2525 | 21.9310 | 636 | 0.7262 | 0.6573 | 0.7262 | 0.8522 |
| 0.2525 | 22.0 | 638 | 0.7256 | 0.6341 | 0.7256 | 0.8518 |
| 0.2525 | 22.0690 | 640 | 0.7145 | 0.5886 | 0.7145 | 0.8453 |
| 0.2525 | 22.1379 | 642 | 0.7298 | 0.5648 | 0.7298 | 0.8543 |
| 0.2525 | 22.2069 | 644 | 0.7534 | 0.5473 | 0.7534 | 0.8680 |
| 0.2525 | 22.2759 | 646 | 0.7642 | 0.5451 | 0.7642 | 0.8742 |
| 0.2525 | 22.3448 | 648 | 0.7507 | 0.5819 | 0.7507 | 0.8664 |
| 0.2525 | 22.4138 | 650 | 0.7374 | 0.5819 | 0.7374 | 0.8587 |
| 0.2525 | 22.4828 | 652 | 0.7355 | 0.6142 | 0.7355 | 0.8576 |
| 0.2525 | 22.5517 | 654 | 0.7177 | 0.5451 | 0.7177 | 0.8471 |
| 0.2525 | 22.6207 | 656 | 0.7046 | 0.5921 | 0.7046 | 0.8394 |
| 0.2525 | 22.6897 | 658 | 0.7132 | 0.5562 | 0.7132 | 0.8445 |
| 0.2525 | 22.7586 | 660 | 0.7659 | 0.5983 | 0.7659 | 0.8751 |
| 0.2525 | 22.8276 | 662 | 0.8272 | 0.6142 | 0.8272 | 0.9095 |
| 0.2525 | 22.8966 | 664 | 0.8678 | 0.6061 | 0.8678 | 0.9316 |
| 0.2525 | 22.9655 | 666 | 0.8336 | 0.6061 | 0.8336 | 0.9130 |
| 0.2525 | 23.0345 | 668 | 0.7549 | 0.5876 | 0.7549 | 0.8688 |
| 0.2525 | 23.1034 | 670 | 0.6739 | 0.6089 | 0.6739 | 0.8209 |
| 0.2525 | 23.1724 | 672 | 0.6379 | 0.6550 | 0.6379 | 0.7987 |
| 0.2525 | 23.2414 | 674 | 0.6604 | 0.6272 | 0.6604 | 0.8126 |
| 0.2525 | 23.3103 | 676 | 0.6891 | 0.5997 | 0.6891 | 0.8301 |
| 0.2525 | 23.3793 | 678 | 0.6736 | 0.6272 | 0.6736 | 0.8207 |
| 0.2525 | 23.4483 | 680 | 0.6531 | 0.6154 | 0.6531 | 0.8082 |
| 0.2525 | 23.5172 | 682 | 0.6576 | 0.6906 | 0.6576 | 0.8109 |
| 0.2525 | 23.5862 | 684 | 0.6791 | 0.5716 | 0.6791 | 0.8241 |
| 0.2525 | 23.6552 | 686 | 0.6949 | 0.5552 | 0.6949 | 0.8336 |
| 0.2525 | 23.7241 | 688 | 0.7094 | 0.5125 | 0.7094 | 0.8422 |
| 0.2525 | 23.7931 | 690 | 0.7309 | 0.5526 | 0.7309 | 0.8549 |
| 0.2525 | 23.8621 | 692 | 0.7574 | 0.5498 | 0.7574 | 0.8703 |
| 0.2525 | 23.9310 | 694 | 0.7827 | 0.5331 | 0.7827 | 0.8847 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu118
- Datasets 2.21.0
- Tokenizers 0.19.1
|
mradermacher/OpenMath-CodeLlama-13b-Python-hf-i1-GGUF
|
mradermacher
| 2025-02-04T01:48:28Z | 1,840 | 0 |
transformers
|
[
"transformers",
"gguf",
"nvidia",
"code",
"math",
"en",
"dataset:nvidia/OpenMathInstruct-1",
"base_model:nvidia/OpenMath-CodeLlama-13b-Python-hf",
"base_model:quantized:nvidia/OpenMath-CodeLlama-13b-Python-hf",
"license:llama2",
"endpoints_compatible",
"region:us",
"imatrix"
] | null | 2025-02-03T17:00:14Z |
---
base_model: nvidia/OpenMath-CodeLlama-13b-Python-hf
datasets:
- nvidia/OpenMathInstruct-1
language:
- en
library_name: transformers
license: llama2
quantized_by: mradermacher
tags:
- nvidia
- code
- math
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/nvidia/OpenMath-CodeLlama-13b-Python-hf
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/OpenMath-CodeLlama-13b-Python-hf-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
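For example, a rough Python sketch of the concatenation step (the part-file names below are assumptions; check the repo's file listing and the linked README for the real names):
```python
# Hypothetical sketch: joining a multi-part GGUF download back into one file.
# The part-file naming pattern is an assumption; verify against the repo listing.
from pathlib import Path

parts = sorted(Path(".").glob("model.i1-Q6_K.gguf.part*"))
with open("model.i1-Q6_K.gguf", "wb") as merged:
    for part in parts:
        merged.write(part.read_bytes())
```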
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/OpenMath-CodeLlama-13b-Python-hf-i1-GGUF/resolve/main/OpenMath-CodeLlama-13b-Python-hf.i1-IQ1_S.gguf) | i1-IQ1_S | 3.0 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/OpenMath-CodeLlama-13b-Python-hf-i1-GGUF/resolve/main/OpenMath-CodeLlama-13b-Python-hf.i1-IQ1_M.gguf) | i1-IQ1_M | 3.2 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/OpenMath-CodeLlama-13b-Python-hf-i1-GGUF/resolve/main/OpenMath-CodeLlama-13b-Python-hf.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/OpenMath-CodeLlama-13b-Python-hf-i1-GGUF/resolve/main/OpenMath-CodeLlama-13b-Python-hf.i1-IQ2_XS.gguf) | i1-IQ2_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/OpenMath-CodeLlama-13b-Python-hf-i1-GGUF/resolve/main/OpenMath-CodeLlama-13b-Python-hf.i1-IQ2_S.gguf) | i1-IQ2_S | 4.3 | |
| [GGUF](https://huggingface.co/mradermacher/OpenMath-CodeLlama-13b-Python-hf-i1-GGUF/resolve/main/OpenMath-CodeLlama-13b-Python-hf.i1-Q2_K_S.gguf) | i1-Q2_K_S | 4.5 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/OpenMath-CodeLlama-13b-Python-hf-i1-GGUF/resolve/main/OpenMath-CodeLlama-13b-Python-hf.i1-IQ2_M.gguf) | i1-IQ2_M | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/OpenMath-CodeLlama-13b-Python-hf-i1-GGUF/resolve/main/OpenMath-CodeLlama-13b-Python-hf.i1-Q2_K.gguf) | i1-Q2_K | 5.0 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/OpenMath-CodeLlama-13b-Python-hf-i1-GGUF/resolve/main/OpenMath-CodeLlama-13b-Python-hf.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 5.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/OpenMath-CodeLlama-13b-Python-hf-i1-GGUF/resolve/main/OpenMath-CodeLlama-13b-Python-hf.i1-IQ3_XS.gguf) | i1-IQ3_XS | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/OpenMath-CodeLlama-13b-Python-hf-i1-GGUF/resolve/main/OpenMath-CodeLlama-13b-Python-hf.i1-IQ3_S.gguf) | i1-IQ3_S | 5.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/OpenMath-CodeLlama-13b-Python-hf-i1-GGUF/resolve/main/OpenMath-CodeLlama-13b-Python-hf.i1-Q3_K_S.gguf) | i1-Q3_K_S | 5.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/OpenMath-CodeLlama-13b-Python-hf-i1-GGUF/resolve/main/OpenMath-CodeLlama-13b-Python-hf.i1-IQ3_M.gguf) | i1-IQ3_M | 6.1 | |
| [GGUF](https://huggingface.co/mradermacher/OpenMath-CodeLlama-13b-Python-hf-i1-GGUF/resolve/main/OpenMath-CodeLlama-13b-Python-hf.i1-Q3_K_M.gguf) | i1-Q3_K_M | 6.4 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/OpenMath-CodeLlama-13b-Python-hf-i1-GGUF/resolve/main/OpenMath-CodeLlama-13b-Python-hf.i1-Q3_K_L.gguf) | i1-Q3_K_L | 7.0 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/OpenMath-CodeLlama-13b-Python-hf-i1-GGUF/resolve/main/OpenMath-CodeLlama-13b-Python-hf.i1-IQ4_XS.gguf) | i1-IQ4_XS | 7.1 | |
| [GGUF](https://huggingface.co/mradermacher/OpenMath-CodeLlama-13b-Python-hf-i1-GGUF/resolve/main/OpenMath-CodeLlama-13b-Python-hf.i1-IQ4_NL.gguf) | i1-IQ4_NL | 7.5 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/OpenMath-CodeLlama-13b-Python-hf-i1-GGUF/resolve/main/OpenMath-CodeLlama-13b-Python-hf.i1-Q4_0.gguf) | i1-Q4_0 | 7.5 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/OpenMath-CodeLlama-13b-Python-hf-i1-GGUF/resolve/main/OpenMath-CodeLlama-13b-Python-hf.i1-Q4_K_S.gguf) | i1-Q4_K_S | 7.5 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/OpenMath-CodeLlama-13b-Python-hf-i1-GGUF/resolve/main/OpenMath-CodeLlama-13b-Python-hf.i1-Q4_K_M.gguf) | i1-Q4_K_M | 8.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/OpenMath-CodeLlama-13b-Python-hf-i1-GGUF/resolve/main/OpenMath-CodeLlama-13b-Python-hf.i1-Q4_1.gguf) | i1-Q4_1 | 8.3 | |
| [GGUF](https://huggingface.co/mradermacher/OpenMath-CodeLlama-13b-Python-hf-i1-GGUF/resolve/main/OpenMath-CodeLlama-13b-Python-hf.i1-Q5_K_S.gguf) | i1-Q5_K_S | 9.1 | |
| [GGUF](https://huggingface.co/mradermacher/OpenMath-CodeLlama-13b-Python-hf-i1-GGUF/resolve/main/OpenMath-CodeLlama-13b-Python-hf.i1-Q5_K_M.gguf) | i1-Q5_K_M | 9.3 | |
| [GGUF](https://huggingface.co/mradermacher/OpenMath-CodeLlama-13b-Python-hf-i1-GGUF/resolve/main/OpenMath-CodeLlama-13b-Python-hf.i1-Q6_K.gguf) | i1-Q6_K | 10.8 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
JalilH/fine_tuned_gemma
|
JalilH
| 2025-02-04T01:47:14Z | 12 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-02-04T01:31:38Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
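While the section above is still a placeholder, a minimal, hypothetical sketch for loading this gemma2 text-generation checkpoint with the standard 🤗 Transformers pipeline might look like this (the prompt and generation length are illustrative assumptions, not provided by the author):

```python
# Hypothetical usage sketch (not supplied with the model card).
from transformers import pipeline

generator = pipeline("text-generation", model="JalilH/fine_tuned_gemma")
result = generator("Explain what a fine-tuned language model is.", max_new_tokens=64)
print(result[0]["generated_text"])
```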
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
nhungphammmmm/4513ac72-6f2e-4072-b263-869876585fc1
|
nhungphammmmm
| 2025-02-04T01:45:36Z | 10 | 0 |
peft
|
[
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Mistral-Nemo-Instruct-2407",
"base_model:adapter:unsloth/Mistral-Nemo-Instruct-2407",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-02-04T00:57:02Z |
---
library_name: peft
license: apache-2.0
base_model: unsloth/Mistral-Nemo-Instruct-2407
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 4513ac72-6f2e-4072-b263-869876585fc1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Mistral-Nemo-Instruct-2407
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 3e5eab4715297236_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/3e5eab4715297236_train_data.json
type:
field_input: ''
field_instruction: prompt
field_output: chosen
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: nhungphammmmm/4513ac72-6f2e-4072-b263-869876585fc1
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/3e5eab4715297236_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 06993ad5-9e1b-472b-9fb0-ffdcec07b62e
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 06993ad5-9e1b-472b-9fb0-ffdcec07b62e
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 4513ac72-6f2e-4072-b263-869876585fc1
This model is a fine-tuned version of [unsloth/Mistral-Nemo-Instruct-2407](https://huggingface.co/unsloth/Mistral-Nemo-Instruct-2407) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2258
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: ADAMW_BNB with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.6921 | 0.1850 | 200 | 0.2258 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
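As a usage sketch (not part of the original card), the LoRA adapter trained above could be attached to its base model with `peft`; the dtype, device placement, and prompt below are illustrative assumptions:

```python
# Sketch: apply the adapter nhungphammmmm/4513ac72-6f2e-4072-b263-869876585fc1
# on top of its base model unsloth/Mistral-Nemo-Instruct-2407.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "unsloth/Mistral-Nemo-Instruct-2407"
adapter_id = "nhungphammmmm/4513ac72-6f2e-4072-b263-869876585fc1"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```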
|
shibajustfor/8095550f-8841-4095-9c70-b0a1c6843cd5
|
shibajustfor
| 2025-02-04T01:44:33Z | 13 | 0 |
peft
|
[
"peft",
"safetensors",
"bloom",
"axolotl",
"generated_from_trainer",
"base_model:bigscience/bloom-560m",
"base_model:adapter:bigscience/bloom-560m",
"license:bigscience-bloom-rail-1.0",
"region:us"
] | null | 2025-02-04T01:40:21Z |
---
library_name: peft
license: bigscience-bloom-rail-1.0
base_model: bigscience/bloom-560m
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 8095550f-8841-4095-9c70-b0a1c6843cd5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: bigscience/bloom-560m
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- b6e5ed8190ccb774_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/b6e5ed8190ccb774_train_data.json
type:
field_instruction: soru
field_output: cevap
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: shibajustfor/8095550f-8841-4095-9c70-b0a1c6843cd5
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/b6e5ed8190ccb774_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 72e7b874-15da-42e2-ab22-791b74a29685
wandb_project: Birthday-SN56-11-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 72e7b874-15da-42e2-ab22-791b74a29685
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 8095550f-8841-4095-9c70-b0a1c6843cd5
This model is a fine-tuned version of [bigscience/bloom-560m](https://huggingface.co/bigscience/bloom-560m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9188
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: ADAMW_BNB with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0001 | 1 | 3.9911 |
| 12.5629 | 0.0065 | 50 | 3.2159 |
| 12.3334 | 0.0131 | 100 | 3.0247 |
| 11.6427 | 0.0196 | 150 | 2.9407 |
| 12.0462 | 0.0262 | 200 | 2.9188 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
genki10/ASAP_FineTuningBERT_AugV8_k10_task1_organization_fold4
|
genki10
| 2025-02-04T01:42:01Z | 14 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-02-03T21:31:11Z |
---
library_name: transformers
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: ASAP_FineTuningBERT_AugV8_k10_task1_organization_fold4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ASAP_FineTuningBERT_AugV8_k10_task1_organization_fold4
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6089
- Qwk: 0.5812
- Mse: 0.6089
- Rmse: 0.7803
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: adamw_torch with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:------:|
| No log | 0.25 | 2 | 10.3423 | 0.0092 | 10.3423 | 3.2159 |
| No log | 0.5 | 4 | 9.0145 | 0.0037 | 9.0145 | 3.0024 |
| No log | 0.75 | 6 | 7.7197 | 0.0018 | 7.7197 | 2.7784 |
| No log | 1.0 | 8 | 6.8532 | 0.0018 | 6.8532 | 2.6179 |
| 6.4018 | 1.25 | 10 | 6.0474 | 0.0018 | 6.0474 | 2.4591 |
| 6.4018 | 1.5 | 12 | 5.1275 | 0.0128 | 5.1275 | 2.2644 |
| 6.4018 | 1.75 | 14 | 4.1454 | 0.0040 | 4.1454 | 2.0360 |
| 6.4018 | 2.0 | 16 | 2.5905 | 0.0052 | 2.5905 | 1.6095 |
| 6.4018 | 2.25 | 18 | 2.1296 | 0.1216 | 2.1296 | 1.4593 |
| 2.7798 | 2.5 | 20 | 1.6199 | 0.0544 | 1.6199 | 1.2728 |
| 2.7798 | 2.75 | 22 | 1.5879 | 0.0238 | 1.5879 | 1.2601 |
| 2.7798 | 3.0 | 24 | 1.2737 | 0.0238 | 1.2737 | 1.1286 |
| 2.7798 | 3.25 | 26 | 1.1221 | 0.0212 | 1.1221 | 1.0593 |
| 2.7798 | 3.5 | 28 | 1.2226 | 0.0316 | 1.2226 | 1.1057 |
| 1.7553 | 3.75 | 30 | 1.5063 | 0.0420 | 1.5063 | 1.2273 |
| 1.7553 | 4.0 | 32 | 2.0288 | 0.1601 | 2.0288 | 1.4244 |
| 1.7553 | 4.25 | 34 | 1.8433 | 0.1109 | 1.8433 | 1.3577 |
| 1.7553 | 4.5 | 36 | 1.5115 | 0.0757 | 1.5115 | 1.2294 |
| 1.7553 | 4.75 | 38 | 1.3201 | 0.0445 | 1.3201 | 1.1490 |
| 1.7333 | 5.0 | 40 | 1.2861 | 0.0558 | 1.2861 | 1.1341 |
| 1.7333 | 5.25 | 42 | 1.1777 | 0.0509 | 1.1777 | 1.0852 |
| 1.7333 | 5.5 | 44 | 1.1542 | 0.0445 | 1.1542 | 1.0743 |
| 1.7333 | 5.75 | 46 | 1.0644 | 0.0509 | 1.0644 | 1.0317 |
| 1.7333 | 6.0 | 48 | 1.8744 | 0.1643 | 1.8744 | 1.3691 |
| 1.7201 | 6.25 | 50 | 2.1763 | 0.1570 | 2.1763 | 1.4752 |
| 1.7201 | 6.5 | 52 | 1.7457 | 0.1783 | 1.7457 | 1.3213 |
| 1.7201 | 6.75 | 54 | 1.1182 | 0.0610 | 1.1182 | 1.0574 |
| 1.7201 | 7.0 | 56 | 1.2662 | 0.0445 | 1.2662 | 1.1253 |
| 1.7201 | 7.25 | 58 | 0.9480 | 0.0666 | 0.9480 | 0.9737 |
| 1.5405 | 7.5 | 60 | 1.0434 | 0.1008 | 1.0434 | 1.0215 |
| 1.5405 | 7.75 | 62 | 1.1104 | 0.0958 | 1.1104 | 1.0537 |
| 1.5405 | 8.0 | 64 | 0.9350 | 0.1352 | 0.9350 | 0.9669 |
| 1.5405 | 8.25 | 66 | 0.8286 | 0.3014 | 0.8286 | 0.9103 |
| 1.5405 | 8.5 | 68 | 0.8852 | 0.2636 | 0.8852 | 0.9409 |
| 1.4268 | 8.75 | 70 | 0.9632 | 0.2423 | 0.9632 | 0.9814 |
| 1.4268 | 9.0 | 72 | 0.8796 | 0.3042 | 0.8796 | 0.9379 |
| 1.4268 | 9.25 | 74 | 0.7551 | 0.4221 | 0.7551 | 0.8690 |
| 1.4268 | 9.5 | 76 | 0.8424 | 0.3961 | 0.8424 | 0.9178 |
| 1.4268 | 9.75 | 78 | 1.2577 | 0.2541 | 1.2577 | 1.1215 |
| 1.0054 | 10.0 | 80 | 0.8085 | 0.3728 | 0.8085 | 0.8992 |
| 1.0054 | 10.25 | 82 | 0.5684 | 0.4533 | 0.5684 | 0.7540 |
| 1.0054 | 10.5 | 84 | 0.5676 | 0.5215 | 0.5676 | 0.7534 |
| 1.0054 | 10.75 | 86 | 0.6962 | 0.4452 | 0.6962 | 0.8344 |
| 1.0054 | 11.0 | 88 | 0.5826 | 0.4438 | 0.5826 | 0.7633 |
| 0.6941 | 11.25 | 90 | 0.6648 | 0.3708 | 0.6648 | 0.8154 |
| 0.6941 | 11.5 | 92 | 0.5580 | 0.4754 | 0.5580 | 0.7470 |
| 0.6941 | 11.75 | 94 | 0.6176 | 0.5360 | 0.6176 | 0.7859 |
| 0.6941 | 12.0 | 96 | 0.5918 | 0.5448 | 0.5918 | 0.7693 |
| 0.6941 | 12.25 | 98 | 0.5433 | 0.5664 | 0.5433 | 0.7371 |
| 0.4638 | 12.5 | 100 | 0.5908 | 0.5526 | 0.5908 | 0.7686 |
| 0.4638 | 12.75 | 102 | 0.5956 | 0.5379 | 0.5956 | 0.7718 |
| 0.4638 | 13.0 | 104 | 0.6086 | 0.5680 | 0.6086 | 0.7801 |
| 0.4638 | 13.25 | 106 | 0.5973 | 0.5890 | 0.5973 | 0.7728 |
| 0.4638 | 13.5 | 108 | 0.5626 | 0.5852 | 0.5626 | 0.7501 |
| 0.2885 | 13.75 | 110 | 0.6225 | 0.5863 | 0.6225 | 0.7890 |
| 0.2885 | 14.0 | 112 | 0.6296 | 0.5823 | 0.6296 | 0.7934 |
| 0.2885 | 14.25 | 114 | 0.5746 | 0.6220 | 0.5746 | 0.7580 |
| 0.2885 | 14.5 | 116 | 0.5443 | 0.6016 | 0.5443 | 0.7378 |
| 0.2885 | 14.75 | 118 | 0.5433 | 0.6266 | 0.5433 | 0.7371 |
| 0.2075 | 15.0 | 120 | 0.5344 | 0.6296 | 0.5344 | 0.7310 |
| 0.2075 | 15.25 | 122 | 0.5613 | 0.6253 | 0.5613 | 0.7492 |
| 0.2075 | 15.5 | 124 | 0.5944 | 0.6347 | 0.5944 | 0.7710 |
| 0.2075 | 15.75 | 126 | 0.6619 | 0.5735 | 0.6619 | 0.8136 |
| 0.2075 | 16.0 | 128 | 0.6595 | 0.5727 | 0.6595 | 0.8121 |
| 0.176 | 16.25 | 130 | 0.6753 | 0.5760 | 0.6753 | 0.8218 |
| 0.176 | 16.5 | 132 | 0.6061 | 0.5965 | 0.6061 | 0.7785 |
| 0.176 | 16.75 | 134 | 0.6268 | 0.5990 | 0.6268 | 0.7917 |
| 0.176 | 17.0 | 136 | 0.5943 | 0.5971 | 0.5943 | 0.7709 |
| 0.176 | 17.25 | 138 | 0.5691 | 0.6009 | 0.5691 | 0.7544 |
| 0.17 | 17.5 | 140 | 0.6318 | 0.6054 | 0.6318 | 0.7949 |
| 0.17 | 17.75 | 142 | 0.7145 | 0.5573 | 0.7145 | 0.8453 |
| 0.17 | 18.0 | 144 | 0.6089 | 0.5812 | 0.6089 | 0.7803 |
### Framework versions
- Transformers 4.47.0
- Pytorch 2.5.1+cu121
- Datasets 3.2.0
- Tokenizers 0.21.0
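The Qwk column reported above is quadratic weighted kappa. As a brief sketch of how Qwk, Mse, and Rmse could be recomputed from score labels and predictions, assuming scikit-learn and numpy are available (the sample arrays are invented for illustration):

```python
# Sketch: recompute Qwk / Mse / Rmse from integer essay scores and model predictions.
import numpy as np
from sklearn.metrics import cohen_kappa_score, mean_squared_error

y_true = np.array([3, 2, 4, 3, 1])  # illustrative labels, not real evaluation data
y_pred = np.array([3, 3, 4, 2, 1])

qwk = cohen_kappa_score(y_true, y_pred, weights="quadratic")
mse = mean_squared_error(y_true, y_pred)
rmse = float(np.sqrt(mse))
print(f"Qwk={qwk:.4f}  Mse={mse:.4f}  Rmse={rmse:.4f}")
```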
|
MayBashendy/ArabicNewSplits7_B_usingWellWrittenEssays_FineTuningAraBERT_run999_AugV5_k1_task2_organization
|
MayBashendy
| 2025-02-04T01:41:04Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:aubmindlab/bert-base-arabertv02",
"base_model:finetune:aubmindlab/bert-base-arabertv02",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-02-04T01:35:25Z |
---
library_name: transformers
base_model: aubmindlab/bert-base-arabertv02
tags:
- generated_from_trainer
model-index:
- name: ArabicNewSplits7_B_usingWellWrittenEssays_FineTuningAraBERT_run999_AugV5_k1_task2_organization
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ArabicNewSplits7_B_usingWellWrittenEssays_FineTuningAraBERT_run999_AugV5_k1_task2_organization
This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7527
- Qwk: 0.6277
- Mse: 0.7527
- Rmse: 0.8676
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:------:|
| No log | 0.25 | 2 | 4.5672 | -0.0103 | 4.5672 | 2.1371 |
| No log | 0.5 | 4 | 3.8256 | 0.0165 | 3.8256 | 1.9559 |
| No log | 0.75 | 6 | 1.9924 | 0.0879 | 1.9924 | 1.4115 |
| No log | 1.0 | 8 | 1.3135 | 0.0992 | 1.3135 | 1.1461 |
| No log | 1.25 | 10 | 1.1989 | 0.2532 | 1.1989 | 1.0950 |
| No log | 1.5 | 12 | 1.1159 | 0.3663 | 1.1159 | 1.0564 |
| No log | 1.75 | 14 | 1.2979 | 0.1395 | 1.2979 | 1.1392 |
| No log | 2.0 | 16 | 1.7774 | 0.2007 | 1.7774 | 1.3332 |
| No log | 2.25 | 18 | 1.8266 | 0.2007 | 1.8266 | 1.3515 |
| No log | 2.5 | 20 | 1.2240 | 0.2094 | 1.2240 | 1.1064 |
| No log | 2.75 | 22 | 1.1697 | 0.2991 | 1.1697 | 1.0815 |
| No log | 3.0 | 24 | 1.3357 | 0.3487 | 1.3357 | 1.1557 |
| No log | 3.25 | 26 | 1.4775 | 0.3467 | 1.4775 | 1.2155 |
| No log | 3.5 | 28 | 1.3308 | 0.3337 | 1.3308 | 1.1536 |
| No log | 3.75 | 30 | 1.1887 | 0.3662 | 1.1887 | 1.0903 |
| No log | 4.0 | 32 | 1.1607 | 0.4181 | 1.1607 | 1.0774 |
| No log | 4.25 | 34 | 1.1590 | 0.4145 | 1.1590 | 1.0766 |
| No log | 4.5 | 36 | 1.3189 | 0.3624 | 1.3189 | 1.1484 |
| No log | 4.75 | 38 | 1.1902 | 0.3496 | 1.1902 | 1.0910 |
| No log | 5.0 | 40 | 0.9968 | 0.5283 | 0.9968 | 0.9984 |
| No log | 5.25 | 42 | 0.9532 | 0.4628 | 0.9532 | 0.9763 |
| No log | 5.5 | 44 | 0.8992 | 0.5336 | 0.8992 | 0.9482 |
| No log | 5.75 | 46 | 0.9013 | 0.5661 | 0.9013 | 0.9494 |
| No log | 6.0 | 48 | 1.0160 | 0.3812 | 1.0160 | 1.0080 |
| No log | 6.25 | 50 | 1.0665 | 0.2857 | 1.0665 | 1.0327 |
| No log | 6.5 | 52 | 0.9579 | 0.5161 | 0.9579 | 0.9787 |
| No log | 6.75 | 54 | 0.9063 | 0.5540 | 0.9063 | 0.9520 |
| No log | 7.0 | 56 | 0.8729 | 0.5469 | 0.8729 | 0.9343 |
| No log | 7.25 | 58 | 0.9195 | 0.5925 | 0.9195 | 0.9589 |
| No log | 7.5 | 60 | 0.9483 | 0.5981 | 0.9483 | 0.9738 |
| No log | 7.75 | 62 | 0.8631 | 0.5727 | 0.8631 | 0.9290 |
| No log | 8.0 | 64 | 0.8401 | 0.5316 | 0.8401 | 0.9166 |
| No log | 8.25 | 66 | 0.8532 | 0.5621 | 0.8532 | 0.9237 |
| No log | 8.5 | 68 | 0.9241 | 0.5763 | 0.9241 | 0.9613 |
| No log | 8.75 | 70 | 0.8741 | 0.6476 | 0.8741 | 0.9349 |
| No log | 9.0 | 72 | 0.9365 | 0.5653 | 0.9365 | 0.9677 |
| No log | 9.25 | 74 | 1.1255 | 0.3949 | 1.1255 | 1.0609 |
| No log | 9.5 | 76 | 1.1053 | 0.4032 | 1.1053 | 1.0514 |
| No log | 9.75 | 78 | 0.9055 | 0.5872 | 0.9055 | 0.9516 |
| No log | 10.0 | 80 | 0.8272 | 0.6038 | 0.8272 | 0.9095 |
| No log | 10.25 | 82 | 0.7976 | 0.5886 | 0.7976 | 0.8931 |
| No log | 10.5 | 84 | 0.8105 | 0.6010 | 0.8105 | 0.9003 |
| No log | 10.75 | 86 | 0.9892 | 0.5072 | 0.9892 | 0.9946 |
| No log | 11.0 | 88 | 1.1295 | 0.3995 | 1.1295 | 1.0628 |
| No log | 11.25 | 90 | 1.1286 | 0.3995 | 1.1286 | 1.0623 |
| No log | 11.5 | 92 | 0.9918 | 0.5387 | 0.9918 | 0.9959 |
| No log | 11.75 | 94 | 0.9009 | 0.5643 | 0.9009 | 0.9492 |
| No log | 12.0 | 96 | 0.8678 | 0.5634 | 0.8678 | 0.9316 |
| No log | 12.25 | 98 | 0.8491 | 0.5527 | 0.8491 | 0.9215 |
| No log | 12.5 | 100 | 0.9288 | 0.5848 | 0.9288 | 0.9638 |
| No log | 12.75 | 102 | 1.0346 | 0.3845 | 1.0346 | 1.0172 |
| No log | 13.0 | 104 | 1.0993 | 0.3757 | 1.0993 | 1.0485 |
| No log | 13.25 | 106 | 1.0666 | 0.3757 | 1.0666 | 1.0328 |
| No log | 13.5 | 108 | 0.9152 | 0.5458 | 0.9152 | 0.9567 |
| No log | 13.75 | 110 | 0.8455 | 0.5159 | 0.8455 | 0.9195 |
| No log | 14.0 | 112 | 0.8476 | 0.5012 | 0.8476 | 0.9206 |
| No log | 14.25 | 114 | 0.8169 | 0.5769 | 0.8169 | 0.9038 |
| No log | 14.5 | 116 | 0.8390 | 0.5889 | 0.8390 | 0.9160 |
| No log | 14.75 | 118 | 0.8586 | 0.5889 | 0.8586 | 0.9266 |
| No log | 15.0 | 120 | 0.9263 | 0.5855 | 0.9263 | 0.9624 |
| No log | 15.25 | 122 | 0.9487 | 0.5090 | 0.9487 | 0.9740 |
| No log | 15.5 | 124 | 0.9345 | 0.5202 | 0.9345 | 0.9667 |
| No log | 15.75 | 126 | 0.9169 | 0.6106 | 0.9169 | 0.9576 |
| No log | 16.0 | 128 | 0.8941 | 0.5750 | 0.8941 | 0.9456 |
| No log | 16.25 | 130 | 0.8700 | 0.5706 | 0.8700 | 0.9327 |
| No log | 16.5 | 132 | 0.9050 | 0.5911 | 0.9050 | 0.9513 |
| No log | 16.75 | 134 | 0.9342 | 0.5479 | 0.9342 | 0.9665 |
| No log | 17.0 | 136 | 0.9597 | 0.4526 | 0.9597 | 0.9796 |
| No log | 17.25 | 138 | 0.9473 | 0.4449 | 0.9473 | 0.9733 |
| No log | 17.5 | 140 | 0.9224 | 0.5398 | 0.9224 | 0.9604 |
| No log | 17.75 | 142 | 0.8434 | 0.5835 | 0.8434 | 0.9184 |
| No log | 18.0 | 144 | 0.7839 | 0.6044 | 0.7839 | 0.8854 |
| No log | 18.25 | 146 | 0.8347 | 0.5835 | 0.8347 | 0.9136 |
| No log | 18.5 | 148 | 0.8247 | 0.6060 | 0.8247 | 0.9081 |
| No log | 18.75 | 150 | 0.7776 | 0.6328 | 0.7776 | 0.8818 |
| No log | 19.0 | 152 | 0.8599 | 0.5963 | 0.8599 | 0.9273 |
| No log | 19.25 | 154 | 0.9787 | 0.4765 | 0.9787 | 0.9893 |
| No log | 19.5 | 156 | 0.9388 | 0.5530 | 0.9388 | 0.9689 |
| No log | 19.75 | 158 | 0.8001 | 0.5683 | 0.8001 | 0.8945 |
| No log | 20.0 | 160 | 0.7520 | 0.5611 | 0.7520 | 0.8672 |
| No log | 20.25 | 162 | 0.7939 | 0.5519 | 0.7939 | 0.8910 |
| No log | 20.5 | 164 | 0.7527 | 0.5988 | 0.7527 | 0.8676 |
| No log | 20.75 | 166 | 0.7427 | 0.5993 | 0.7427 | 0.8618 |
| No log | 21.0 | 168 | 0.8660 | 0.5778 | 0.8660 | 0.9306 |
| No log | 21.25 | 170 | 0.9971 | 0.4767 | 0.9971 | 0.9986 |
| No log | 21.5 | 172 | 0.9048 | 0.5892 | 0.9048 | 0.9512 |
| No log | 21.75 | 174 | 0.7944 | 0.5968 | 0.7944 | 0.8913 |
| No log | 22.0 | 176 | 0.7774 | 0.5573 | 0.7774 | 0.8817 |
| No log | 22.25 | 178 | 0.7796 | 0.5621 | 0.7796 | 0.8829 |
| No log | 22.5 | 180 | 0.8166 | 0.5752 | 0.8166 | 0.9037 |
| No log | 22.75 | 182 | 0.8159 | 0.5167 | 0.8159 | 0.9033 |
| No log | 23.0 | 184 | 0.8132 | 0.5413 | 0.8132 | 0.9018 |
| No log | 23.25 | 186 | 0.8279 | 0.5548 | 0.8279 | 0.9099 |
| No log | 23.5 | 188 | 0.8567 | 0.6014 | 0.8567 | 0.9256 |
| No log | 23.75 | 190 | 0.8828 | 0.6067 | 0.8828 | 0.9396 |
| No log | 24.0 | 192 | 0.9324 | 0.5687 | 0.9324 | 0.9656 |
| No log | 24.25 | 194 | 0.9485 | 0.5392 | 0.9485 | 0.9739 |
| No log | 24.5 | 196 | 0.8357 | 0.5911 | 0.8357 | 0.9142 |
| No log | 24.75 | 198 | 0.7682 | 0.5413 | 0.7682 | 0.8765 |
| No log | 25.0 | 200 | 0.7771 | 0.4889 | 0.7771 | 0.8815 |
| No log | 25.25 | 202 | 0.7889 | 0.4889 | 0.7889 | 0.8882 |
| No log | 25.5 | 204 | 0.7767 | 0.4889 | 0.7767 | 0.8813 |
| No log | 25.75 | 206 | 0.7653 | 0.5481 | 0.7653 | 0.8748 |
| No log | 26.0 | 208 | 0.7854 | 0.5969 | 0.7854 | 0.8862 |
| No log | 26.25 | 210 | 0.9035 | 0.5737 | 0.9035 | 0.9505 |
| No log | 26.5 | 212 | 0.9183 | 0.5920 | 0.9183 | 0.9583 |
| No log | 26.75 | 214 | 0.8388 | 0.6212 | 0.8388 | 0.9158 |
| No log | 27.0 | 216 | 0.7822 | 0.6139 | 0.7822 | 0.8844 |
| No log | 27.25 | 218 | 0.7722 | 0.6086 | 0.7722 | 0.8787 |
| No log | 27.5 | 220 | 0.8329 | 0.6095 | 0.8329 | 0.9127 |
| No log | 27.75 | 222 | 0.9132 | 0.5228 | 0.9132 | 0.9556 |
| No log | 28.0 | 224 | 0.9593 | 0.5228 | 0.9593 | 0.9795 |
| No log | 28.25 | 226 | 0.9844 | 0.5148 | 0.9844 | 0.9922 |
| No log | 28.5 | 228 | 0.9307 | 0.5383 | 0.9307 | 0.9647 |
| No log | 28.75 | 230 | 0.8670 | 0.6201 | 0.8670 | 0.9311 |
| No log | 29.0 | 232 | 0.7697 | 0.5854 | 0.7697 | 0.8773 |
| No log | 29.25 | 234 | 0.7627 | 0.5012 | 0.7627 | 0.8733 |
| No log | 29.5 | 236 | 0.7568 | 0.5239 | 0.7568 | 0.8699 |
| No log | 29.75 | 238 | 0.7426 | 0.5315 | 0.7426 | 0.8618 |
| No log | 30.0 | 240 | 0.7565 | 0.5581 | 0.7565 | 0.8698 |
| No log | 30.25 | 242 | 0.7949 | 0.5226 | 0.7949 | 0.8916 |
| No log | 30.5 | 244 | 0.7903 | 0.5279 | 0.7903 | 0.8890 |
| No log | 30.75 | 246 | 0.7935 | 0.5581 | 0.7935 | 0.8908 |
| No log | 31.0 | 248 | 0.7920 | 0.5581 | 0.7920 | 0.8899 |
| No log | 31.25 | 250 | 0.7911 | 0.5581 | 0.7911 | 0.8895 |
| No log | 31.5 | 252 | 0.8080 | 0.5226 | 0.8080 | 0.8989 |
| No log | 31.75 | 254 | 0.7824 | 0.5279 | 0.7824 | 0.8845 |
| No log | 32.0 | 256 | 0.7594 | 0.5830 | 0.7594 | 0.8714 |
| No log | 32.25 | 258 | 0.7512 | 0.5773 | 0.7512 | 0.8667 |
| No log | 32.5 | 260 | 0.7516 | 0.5458 | 0.7516 | 0.8670 |
| No log | 32.75 | 262 | 0.7596 | 0.5944 | 0.7596 | 0.8715 |
| No log | 33.0 | 264 | 0.7556 | 0.5773 | 0.7556 | 0.8693 |
| No log | 33.25 | 266 | 0.7606 | 0.5462 | 0.7606 | 0.8721 |
| No log | 33.5 | 268 | 0.7813 | 0.5443 | 0.7813 | 0.8839 |
| No log | 33.75 | 270 | 0.7974 | 0.5633 | 0.7974 | 0.8930 |
| No log | 34.0 | 272 | 0.8128 | 0.5787 | 0.8128 | 0.9016 |
| No log | 34.25 | 274 | 0.7962 | 0.5691 | 0.7962 | 0.8923 |
| No log | 34.5 | 276 | 0.8200 | 0.6026 | 0.8200 | 0.9055 |
| No log | 34.75 | 278 | 0.8420 | 0.6167 | 0.8420 | 0.9176 |
| No log | 35.0 | 280 | 0.8093 | 0.6167 | 0.8093 | 0.8996 |
| No log | 35.25 | 282 | 0.7566 | 0.6032 | 0.7566 | 0.8698 |
| No log | 35.5 | 284 | 0.7621 | 0.5681 | 0.7621 | 0.8730 |
| No log | 35.75 | 286 | 0.7825 | 0.5956 | 0.7825 | 0.8846 |
| No log | 36.0 | 288 | 0.7786 | 0.5816 | 0.7786 | 0.8824 |
| No log | 36.25 | 290 | 0.7601 | 0.5530 | 0.7601 | 0.8718 |
| No log | 36.5 | 292 | 0.7678 | 0.5194 | 0.7678 | 0.8762 |
| No log | 36.75 | 294 | 0.8233 | 0.5926 | 0.8233 | 0.9073 |
| No log | 37.0 | 296 | 0.8563 | 0.5571 | 0.8563 | 0.9254 |
| No log | 37.25 | 298 | 0.8415 | 0.5513 | 0.8415 | 0.9173 |
| No log | 37.5 | 300 | 0.7797 | 0.5563 | 0.7797 | 0.8830 |
| No log | 37.75 | 302 | 0.7475 | 0.5596 | 0.7475 | 0.8646 |
| No log | 38.0 | 304 | 0.7587 | 0.5915 | 0.7587 | 0.8710 |
| No log | 38.25 | 306 | 0.7819 | 0.6277 | 0.7819 | 0.8843 |
| No log | 38.5 | 308 | 0.8275 | 0.6097 | 0.8275 | 0.9097 |
| No log | 38.75 | 310 | 0.8783 | 0.6321 | 0.8783 | 0.9372 |
| No log | 39.0 | 312 | 0.8847 | 0.6340 | 0.8847 | 0.9406 |
| No log | 39.25 | 314 | 0.8524 | 0.6136 | 0.8524 | 0.9232 |
| No log | 39.5 | 316 | 0.8067 | 0.5971 | 0.8067 | 0.8981 |
| No log | 39.75 | 318 | 0.7862 | 0.5658 | 0.7862 | 0.8867 |
| No log | 40.0 | 320 | 0.7645 | 0.5397 | 0.7645 | 0.8744 |
| No log | 40.25 | 322 | 0.7652 | 0.5582 | 0.7652 | 0.8747 |
| No log | 40.5 | 324 | 0.7895 | 0.5693 | 0.7895 | 0.8886 |
| No log | 40.75 | 326 | 0.8312 | 0.6011 | 0.8312 | 0.9117 |
| No log | 41.0 | 328 | 0.8201 | 0.6074 | 0.8201 | 0.9056 |
| No log | 41.25 | 330 | 0.8093 | 0.6074 | 0.8093 | 0.8996 |
| No log | 41.5 | 332 | 0.7577 | 0.5759 | 0.7577 | 0.8705 |
| No log | 41.75 | 334 | 0.7257 | 0.6108 | 0.7257 | 0.8519 |
| No log | 42.0 | 336 | 0.7305 | 0.5148 | 0.7305 | 0.8547 |
| No log | 42.25 | 338 | 0.7625 | 0.5359 | 0.7625 | 0.8732 |
| No log | 42.5 | 340 | 0.7724 | 0.5515 | 0.7724 | 0.8789 |
| No log | 42.75 | 342 | 0.7583 | 0.5481 | 0.7583 | 0.8708 |
| No log | 43.0 | 344 | 0.7621 | 0.6218 | 0.7621 | 0.8730 |
| No log | 43.25 | 346 | 0.8241 | 0.5783 | 0.8241 | 0.9078 |
| No log | 43.5 | 348 | 0.8800 | 0.6305 | 0.8800 | 0.9381 |
| No log | 43.75 | 350 | 0.9163 | 0.6274 | 0.9163 | 0.9572 |
| No log | 44.0 | 352 | 0.9326 | 0.6241 | 0.9326 | 0.9657 |
| No log | 44.25 | 354 | 0.8969 | 0.6434 | 0.8969 | 0.9470 |
| No log | 44.5 | 356 | 0.8530 | 0.6098 | 0.8530 | 0.9236 |
| No log | 44.75 | 358 | 0.8156 | 0.6108 | 0.8156 | 0.9031 |
| No log | 45.0 | 360 | 0.7856 | 0.5573 | 0.7856 | 0.8864 |
| No log | 45.25 | 362 | 0.7766 | 0.5391 | 0.7766 | 0.8813 |
| No log | 45.5 | 364 | 0.7773 | 0.5396 | 0.7773 | 0.8816 |
| No log | 45.75 | 366 | 0.7936 | 0.6078 | 0.7936 | 0.8908 |
| No log | 46.0 | 368 | 0.8269 | 0.6151 | 0.8269 | 0.9094 |
| No log | 46.25 | 370 | 0.8661 | 0.6026 | 0.8661 | 0.9306 |
| No log | 46.5 | 372 | 0.8898 | 0.5739 | 0.8898 | 0.9433 |
| No log | 46.75 | 374 | 0.9149 | 0.5763 | 0.9149 | 0.9565 |
| No log | 47.0 | 376 | 0.8884 | 0.5763 | 0.8884 | 0.9426 |
| No log | 47.25 | 378 | 0.8597 | 0.5816 | 0.8597 | 0.9272 |
| No log | 47.5 | 380 | 0.8038 | 0.6048 | 0.8038 | 0.8966 |
| No log | 47.75 | 382 | 0.7453 | 0.5930 | 0.7453 | 0.8633 |
| No log | 48.0 | 384 | 0.7387 | 0.5958 | 0.7387 | 0.8595 |
| No log | 48.25 | 386 | 0.7387 | 0.5793 | 0.7387 | 0.8595 |
| No log | 48.5 | 388 | 0.7377 | 0.6075 | 0.7377 | 0.8589 |
| No log | 48.75 | 390 | 0.7344 | 0.5828 | 0.7344 | 0.8570 |
| No log | 49.0 | 392 | 0.7425 | 0.5793 | 0.7425 | 0.8617 |
| No log | 49.25 | 394 | 0.7764 | 0.5976 | 0.7764 | 0.8811 |
| No log | 49.5 | 396 | 0.8145 | 0.6157 | 0.8145 | 0.9025 |
| No log | 49.75 | 398 | 0.8599 | 0.5856 | 0.8599 | 0.9273 |
| No log | 50.0 | 400 | 0.9296 | 0.5533 | 0.9296 | 0.9642 |
| No log | 50.25 | 402 | 0.9645 | 0.5681 | 0.9645 | 0.9821 |
| No log | 50.5 | 404 | 0.9089 | 0.5681 | 0.9089 | 0.9534 |
| No log | 50.75 | 406 | 0.8218 | 0.6202 | 0.8218 | 0.9065 |
| No log | 51.0 | 408 | 0.7478 | 0.6172 | 0.7478 | 0.8648 |
| No log | 51.25 | 410 | 0.7114 | 0.6151 | 0.7114 | 0.8435 |
| No log | 51.5 | 412 | 0.6909 | 0.6487 | 0.6909 | 0.8312 |
| No log | 51.75 | 414 | 0.6946 | 0.6244 | 0.6946 | 0.8334 |
| No log | 52.0 | 416 | 0.7115 | 0.6044 | 0.7115 | 0.8435 |
| No log | 52.25 | 418 | 0.7130 | 0.6228 | 0.7130 | 0.8444 |
| No log | 52.5 | 420 | 0.7106 | 0.6404 | 0.7106 | 0.8430 |
| No log | 52.75 | 422 | 0.7094 | 0.6089 | 0.7094 | 0.8423 |
| No log | 53.0 | 424 | 0.7039 | 0.6629 | 0.7039 | 0.8390 |
| No log | 53.25 | 426 | 0.6985 | 0.6237 | 0.6985 | 0.8358 |
| No log | 53.5 | 428 | 0.7047 | 0.5988 | 0.7047 | 0.8395 |
| No log | 53.75 | 430 | 0.7138 | 0.6328 | 0.7138 | 0.8449 |
| No log | 54.0 | 432 | 0.7195 | 0.5815 | 0.7195 | 0.8482 |
| No log | 54.25 | 434 | 0.7271 | 0.6097 | 0.7271 | 0.8527 |
| No log | 54.5 | 436 | 0.7365 | 0.6647 | 0.7365 | 0.8582 |
| No log | 54.75 | 438 | 0.7400 | 0.6669 | 0.7400 | 0.8602 |
| No log | 55.0 | 440 | 0.7317 | 0.6343 | 0.7317 | 0.8554 |
| No log | 55.25 | 442 | 0.7222 | 0.6343 | 0.7222 | 0.8498 |
| No log | 55.5 | 444 | 0.7139 | 0.6218 | 0.7139 | 0.8450 |
| No log | 55.75 | 446 | 0.7090 | 0.6468 | 0.7090 | 0.8420 |
| No log | 56.0 | 448 | 0.7157 | 0.6393 | 0.7157 | 0.8460 |
| No log | 56.25 | 450 | 0.7200 | 0.6468 | 0.7200 | 0.8486 |
| No log | 56.5 | 452 | 0.7242 | 0.6119 | 0.7242 | 0.8510 |
| No log | 56.75 | 454 | 0.7298 | 0.6119 | 0.7298 | 0.8543 |
| No log | 57.0 | 456 | 0.7402 | 0.5974 | 0.7402 | 0.8604 |
| No log | 57.25 | 458 | 0.7547 | 0.6044 | 0.7547 | 0.8687 |
| No log | 57.5 | 460 | 0.7690 | 0.5931 | 0.7690 | 0.8769 |
| No log | 57.75 | 462 | 0.7846 | 0.6151 | 0.7846 | 0.8858 |
| No log | 58.0 | 464 | 0.8249 | 0.6160 | 0.8249 | 0.9082 |
| No log | 58.25 | 466 | 0.8499 | 0.6146 | 0.8499 | 0.9219 |
| No log | 58.5 | 468 | 0.8405 | 0.6404 | 0.8405 | 0.9168 |
| No log | 58.75 | 470 | 0.8178 | 0.6246 | 0.8178 | 0.9043 |
| No log | 59.0 | 472 | 0.7906 | 0.6225 | 0.7906 | 0.8892 |
| No log | 59.25 | 474 | 0.7889 | 0.6228 | 0.7889 | 0.8882 |
| No log | 59.5 | 476 | 0.7943 | 0.6024 | 0.7943 | 0.8912 |
| No log | 59.75 | 478 | 0.7907 | 0.6024 | 0.7907 | 0.8892 |
| No log | 60.0 | 480 | 0.7812 | 0.6024 | 0.7812 | 0.8838 |
| No log | 60.25 | 482 | 0.7588 | 0.5827 | 0.7588 | 0.8711 |
| No log | 60.5 | 484 | 0.7538 | 0.6343 | 0.7538 | 0.8682 |
| No log | 60.75 | 486 | 0.7566 | 0.6065 | 0.7566 | 0.8699 |
| No log | 61.0 | 488 | 0.7507 | 0.6065 | 0.7507 | 0.8664 |
| No log | 61.25 | 490 | 0.7477 | 0.6205 | 0.7477 | 0.8647 |
| No log | 61.5 | 492 | 0.7367 | 0.6205 | 0.7367 | 0.8583 |
| No log | 61.75 | 494 | 0.7180 | 0.6097 | 0.7180 | 0.8474 |
| No log | 62.0 | 496 | 0.7051 | 0.6257 | 0.7051 | 0.8397 |
| No log | 62.25 | 498 | 0.7055 | 0.6479 | 0.7055 | 0.8399 |
| 0.2158 | 62.5 | 500 | 0.7168 | 0.7044 | 0.7168 | 0.8466 |
| 0.2158 | 62.75 | 502 | 0.7305 | 0.6725 | 0.7305 | 0.8547 |
| 0.2158 | 63.0 | 504 | 0.7416 | 0.6414 | 0.7416 | 0.8611 |
| 0.2158 | 63.25 | 506 | 0.7493 | 0.6414 | 0.7493 | 0.8656 |
| 0.2158 | 63.5 | 508 | 0.7530 | 0.6366 | 0.7530 | 0.8678 |
| 0.2158 | 63.75 | 510 | 0.7527 | 0.6277 | 0.7527 | 0.8676 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu118
- Datasets 2.21.0
- Tokenizers 0.19.1
|
nghiatrannnnnn/64ec99b1-05f7-4ad5-b325-1c61c81b3a35
|
nghiatrannnnnn
| 2025-02-04T01:40:16Z | 10 | 0 |
peft
|
[
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Mistral-Nemo-Instruct-2407",
"base_model:adapter:unsloth/Mistral-Nemo-Instruct-2407",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-02-04T00:57:06Z |
---
library_name: peft
license: apache-2.0
base_model: unsloth/Mistral-Nemo-Instruct-2407
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 64ec99b1-05f7-4ad5-b325-1c61c81b3a35
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Mistral-Nemo-Instruct-2407
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 3e5eab4715297236_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/3e5eab4715297236_train_data.json
type:
field_input: ''
field_instruction: prompt
field_output: chosen
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: nghiatrannnnnn/64ec99b1-05f7-4ad5-b325-1c61c81b3a35
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/3e5eab4715297236_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 06993ad5-9e1b-472b-9fb0-ffdcec07b62e
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 06993ad5-9e1b-472b-9fb0-ffdcec07b62e
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 64ec99b1-05f7-4ad5-b325-1c61c81b3a35
This model is a fine-tuned version of [unsloth/Mistral-Nemo-Instruct-2407](https://huggingface.co/unsloth/Mistral-Nemo-Instruct-2407) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2259
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: ADAMW_BNB with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.695 | 0.1850 | 200 | 0.2259 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
earnxus/feeedd92-4513-4eec-8dfc-51f5f9029d6b
|
earnxus
| 2025-02-04T01:39:43Z | 7 | 0 |
peft
|
[
"peft",
"safetensors",
"gpt_neox",
"axolotl",
"generated_from_trainer",
"base_model:EleutherAI/pythia-160m",
"base_model:adapter:EleutherAI/pythia-160m",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-02-04T01:36:22Z |
---
library_name: peft
license: apache-2.0
base_model: EleutherAI/pythia-160m
tags:
- axolotl
- generated_from_trainer
model-index:
- name: feeedd92-4513-4eec-8dfc-51f5f9029d6b
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: EleutherAI/pythia-160m
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 63a8db7d9fe5f771_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/63a8db7d9fe5f771_train_data.json
type:
field_input: input
field_instruction: instruction
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: null
eval_batch_size: 2
eval_max_new_tokens: 128
eval_steps: null
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: true
hub_model_id: earnxus/feeedd92-4513-4eec-8dfc-51f5f9029d6b
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 0.0001
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/63a8db7d9fe5f771_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: null
saves_per_epoch: null
sequence_len: 1024
special_tokens:
pad_token: <|endoftext|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: techspear-hub
wandb_mode: online
wandb_name: 952c906e-f742-4409-8070-fb5697cfd498
wandb_project: Gradients-On-Nine
wandb_run: your_name
wandb_runid: 952c906e-f742-4409-8070-fb5697cfd498
warmup_steps: 5
weight_decay: 0.01
xformers_attention: null
```
</details><br>
# feeedd92-4513-4eec-8dfc-51f5f9029d6b
This model is a fine-tuned version of [EleutherAI/pythia-160m](https://huggingface.co/EleutherAI/pythia-160m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9170
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: ADAMW_BNB with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 10.2214 | 0.1719 | 200 | 1.9170 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
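Because this adapter was trained with `load_in_8bit: true`, a usage sketch (not from the original card) could load the pythia-160m base in 8-bit via bitsandbytes before attaching the adapter; device placement is an illustrative assumption:

```python
# Sketch: load EleutherAI/pythia-160m in 8-bit and attach this LoRA adapter.
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base_id = "EleutherAI/pythia-160m"
adapter_id = "earnxus/feeedd92-4513-4eec-8dfc-51f5f9029d6b"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id,
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    device_map="auto",
)
model = PeftModel.from_pretrained(base, adapter_id)
```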
|
abaddon182/99b70ade-88da-45b0-ad84-853556d185cf
|
abaddon182
| 2025-02-04T01:38:53Z | 9 | 0 |
peft
|
[
"peft",
"safetensors",
"gpt_neox",
"axolotl",
"generated_from_trainer",
"base_model:EleutherAI/pythia-160m",
"base_model:adapter:EleutherAI/pythia-160m",
"license:apache-2.0",
"region:us"
] | null | 2025-02-04T01:36:32Z |
---
library_name: peft
license: apache-2.0
base_model: EleutherAI/pythia-160m
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 99b70ade-88da-45b0-ad84-853556d185cf
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: EleutherAI/pythia-160m
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 63a8db7d9fe5f771_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/63a8db7d9fe5f771_train_data.json
type:
field_input: input
field_instruction: instruction
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: 5
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: abaddon182/99b70ade-88da-45b0-ad84-853556d185cf
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 8
mlflow_experiment_name: /tmp/63a8db7d9fe5f771_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
sequence_len: 1024
special_tokens:
pad_token: <|endoftext|>
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 952c906e-f742-4409-8070-fb5697cfd498
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 952c906e-f742-4409-8070-fb5697cfd498
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 99b70ade-88da-45b0-ad84-853556d185cf
This model is a fine-tuned version of [EleutherAI/pythia-160m](https://huggingface.co/EleutherAI/pythia-160m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6779
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: ADAMW_BNB with betas=(0.9,0.999), epsilon=1e-08, and optimizer args adam_beta1=0.9, adam_beta2=0.95, adam_epsilon=1e-5
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 200
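The total train batch size above follows directly from the per-device batch size and gradient accumulation; a quick sanity check, assuming a single training device:

```python
# Effective batch size = per-device batch size x gradient accumulation steps x number of devices.
train_batch_size = 8
gradient_accumulation_steps = 4
num_devices = 1  # assumed single-GPU run
total_train_batch_size = train_batch_size * gradient_accumulation_steps * num_devices
assert total_train_batch_size == 32
```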
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 7.2819 | 0.0034 | 1 | 2.0815 |
| 14.3357 | 0.1718 | 50 | 2.7596 |
| 22.6478 | 0.3436 | 100 | 3.1222 |
| 14.0829 | 0.5155 | 150 | 2.7104 |
| 16.3383 | 0.6873 | 200 | 2.6779 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
philip-hightech/2cef0780-8485-419f-a866-e836dfdfd61c
|
philip-hightech
| 2025-02-04T01:37:36Z | 9 | 0 |
peft
|
[
"peft",
"safetensors",
"gpt_neox",
"axolotl",
"generated_from_trainer",
"base_model:EleutherAI/pythia-160m",
"base_model:adapter:EleutherAI/pythia-160m",
"license:apache-2.0",
"region:us"
] | null | 2025-02-04T01:36:42Z |
---
library_name: peft
license: apache-2.0
base_model: EleutherAI/pythia-160m
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 2cef0780-8485-419f-a866-e836dfdfd61c
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: EleutherAI/pythia-160m
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 63a8db7d9fe5f771_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/63a8db7d9fe5f771_train_data.json
type:
field_input: input
field_instruction: instruction
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 2
gradient_checkpointing: false
group_by_length: false
hub_model_id: philip-hightech/2cef0780-8485-419f-a866-e836dfdfd61c
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_steps: 250
micro_batch_size: 2
mlflow_experiment_name: /tmp/63a8db7d9fe5f771_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: <|endoftext|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 952c906e-f742-4409-8070-fb5697cfd498
wandb_project: Mine-SN56-21-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 952c906e-f742-4409-8070-fb5697cfd498
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 2cef0780-8485-419f-a866-e836dfdfd61c
This model is a fine-tuned version of [EleutherAI/pythia-160m](https://huggingface.co/EleutherAI/pythia-160m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6602
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: ADAMW_BNB with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 250
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0004 | 1 | 1.9756 |
| 5.7022 | 0.0271 | 63 | 3.2679 |
| 16.4928 | 0.0541 | 126 | 5.1578 |
| 6.8067 | 0.0812 | 189 | 3.6602 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
nttx/d32b52f5-925c-48e5-86c4-6550b657119f
|
nttx
| 2025-02-04T01:37:25Z | 7 | 0 |
peft
|
[
"peft",
"safetensors",
"gpt_neox",
"axolotl",
"generated_from_trainer",
"base_model:EleutherAI/pythia-160m",
"base_model:adapter:EleutherAI/pythia-160m",
"license:apache-2.0",
"region:us"
] | null | 2025-02-04T01:36:05Z |
---
library_name: peft
license: apache-2.0
base_model: EleutherAI/pythia-160m
tags:
- axolotl
- generated_from_trainer
model-index:
- name: d32b52f5-925c-48e5-86c4-6550b657119f
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: EleutherAI/pythia-160m
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 63a8db7d9fe5f771_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/63a8db7d9fe5f771_train_data.json
type:
field_input: input
field_instruction: instruction
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: null
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: null
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: nttx/d32b52f5-925c-48e5-86c4-6550b657119f
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 4
mlflow_experiment_name: /tmp/63a8db7d9fe5f771_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: null
saves_per_epoch: null
sequence_len: 1024
special_tokens:
pad_token: <|endoftext|>
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 952c906e-f742-4409-8070-fb5697cfd498
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 952c906e-f742-4409-8070-fb5697cfd498
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# d32b52f5-925c-48e5-86c4-6550b657119f
This model is a fine-tuned version of [EleutherAI/pythia-160m](https://huggingface.co/EleutherAI/pythia-160m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9402
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: ADAMW_BNB with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 17.2544 | 0.3436 | 200 | 1.9402 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
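The dataset block in the config above flattens each JSON record into a plain prompt using the `format` / `no_input_format` templates. A small sketch of that mapping, with an invented record (axolotl's own implementation may differ in detail):

```python
# Sketch: how '{instruction} {input}' / '{instruction}' render one training record.
record = {
    "instruction": "Summarize the following text.",   # invented example record
    "input": "Large language models can be adapted with LoRA.",
    "output": "LoRA adapts large language models cheaply.",
}

template = "{instruction} {input}" if record.get("input") else "{instruction}"
prompt = template.format(**record)
completion = record["output"]
print(prompt, "->", completion)
```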
|
MayBashendy/ArabicNewSplits7_B_usingWellWrittenEssays_FineTuningAraBERT_run999_AugV5_k20_task1_organization
|
MayBashendy
| 2025-02-04T01:35:00Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:aubmindlab/bert-base-arabertv02",
"base_model:finetune:aubmindlab/bert-base-arabertv02",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-02-04T01:29:05Z |
---
library_name: transformers
base_model: aubmindlab/bert-base-arabertv02
tags:
- generated_from_trainer
model-index:
- name: ArabicNewSplits7_B_usingWellWrittenEssays_FineTuningAraBERT_run999_AugV5_k20_task1_organization
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ArabicNewSplits7_B_usingWellWrittenEssays_FineTuningAraBERT_run999_AugV5_k20_task1_organization
This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7784
- Qwk: 0.7
- Mse: 0.7784
- Rmse: 0.8822
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:------:|:----:|:---------------:|:------:|:------:|:------:|
| No log | 0.0196 | 2 | 6.7818 | 0.0176 | 6.7818 | 2.6042 |
| No log | 0.0392 | 4 | 4.6522 | 0.0543 | 4.6522 | 2.1569 |
| No log | 0.0588 | 6 | 3.0219 | 0.0848 | 3.0219 | 1.7384 |
| No log | 0.0784 | 8 | 2.3178 | 0.1096 | 2.3178 | 1.5224 |
| No log | 0.0980 | 10 | 1.9280 | 0.2857 | 1.9280 | 1.3885 |
| No log | 0.1176 | 12 | 1.7788 | 0.1495 | 1.7788 | 1.3337 |
| No log | 0.1373 | 14 | 1.9031 | 0.1356 | 1.9031 | 1.3795 |
| No log | 0.1569 | 16 | 2.0256 | 0.1626 | 2.0256 | 1.4233 |
| No log | 0.1765 | 18 | 1.7571 | 0.1391 | 1.7571 | 1.3256 |
| No log | 0.1961 | 20 | 1.6395 | 0.2124 | 1.6395 | 1.2804 |
| No log | 0.2157 | 22 | 1.6378 | 0.2479 | 1.6378 | 1.2797 |
| No log | 0.2353 | 24 | 1.6649 | 0.3125 | 1.6649 | 1.2903 |
| No log | 0.2549 | 26 | 1.4446 | 0.3902 | 1.4446 | 1.2019 |
| No log | 0.2745 | 28 | 1.3152 | 0.3967 | 1.3152 | 1.1468 |
| No log | 0.2941 | 30 | 1.3054 | 0.4202 | 1.3054 | 1.1425 |
| No log | 0.3137 | 32 | 1.3463 | 0.4677 | 1.3463 | 1.1603 |
| No log | 0.3333 | 34 | 1.5629 | 0.3040 | 1.5629 | 1.2501 |
| No log | 0.3529 | 36 | 1.6729 | 0.3881 | 1.6729 | 1.2934 |
| No log | 0.3725 | 38 | 1.4413 | 0.4030 | 1.4413 | 1.2006 |
| No log | 0.3922 | 40 | 1.3489 | 0.4429 | 1.3489 | 1.1614 |
| No log | 0.4118 | 42 | 1.1669 | 0.5344 | 1.1669 | 1.0802 |
| No log | 0.4314 | 44 | 1.1770 | 0.4961 | 1.1770 | 1.0849 |
| No log | 0.4510 | 46 | 1.3361 | 0.4219 | 1.3361 | 1.1559 |
| No log | 0.4706 | 48 | 1.1585 | 0.5191 | 1.1585 | 1.0764 |
| No log | 0.4902 | 50 | 1.0191 | 0.5556 | 1.0191 | 1.0095 |
| No log | 0.5098 | 52 | 1.0268 | 0.512 | 1.0268 | 1.0133 |
| No log | 0.5294 | 54 | 1.0604 | 0.5238 | 1.0604 | 1.0298 |
| No log | 0.5490 | 56 | 1.0327 | 0.5469 | 1.0327 | 1.0162 |
| No log | 0.5686 | 58 | 1.0514 | 0.5397 | 1.0514 | 1.0254 |
| No log | 0.5882 | 60 | 1.1474 | 0.5692 | 1.1474 | 1.0712 |
| No log | 0.6078 | 62 | 1.2184 | 0.5692 | 1.2184 | 1.1038 |
| No log | 0.6275 | 64 | 1.2375 | 0.5581 | 1.2375 | 1.1124 |
| No log | 0.6471 | 66 | 1.2029 | 0.5156 | 1.2029 | 1.0968 |
| No log | 0.6667 | 68 | 1.2033 | 0.5625 | 1.2033 | 1.0969 |
| No log | 0.6863 | 70 | 1.1511 | 0.5649 | 1.1511 | 1.0729 |
| No log | 0.7059 | 72 | 1.1070 | 0.5556 | 1.1070 | 1.0521 |
| No log | 0.7255 | 74 | 1.1231 | 0.5397 | 1.1231 | 1.0598 |
| No log | 0.7451 | 76 | 1.1984 | 0.5649 | 1.1984 | 1.0947 |
| No log | 0.7647 | 78 | 1.3196 | 0.5294 | 1.3196 | 1.1488 |
| No log | 0.7843 | 80 | 1.3995 | 0.4604 | 1.3995 | 1.1830 |
| No log | 0.8039 | 82 | 1.3966 | 0.4604 | 1.3966 | 1.1818 |
| No log | 0.8235 | 84 | 1.2886 | 0.5116 | 1.2886 | 1.1352 |
| No log | 0.8431 | 86 | 1.2660 | 0.4688 | 1.2660 | 1.1252 |
| No log | 0.8627 | 88 | 1.3945 | 0.3810 | 1.3945 | 1.1809 |
| No log | 0.8824 | 90 | 1.3077 | 0.4252 | 1.3077 | 1.1435 |
| No log | 0.9020 | 92 | 1.1490 | 0.4706 | 1.1490 | 1.0719 |
| No log | 0.9216 | 94 | 1.2295 | 0.5692 | 1.2295 | 1.1088 |
| No log | 0.9412 | 96 | 1.2993 | 0.5606 | 1.2993 | 1.1399 |
| No log | 0.9608 | 98 | 1.1591 | 0.5909 | 1.1591 | 1.0766 |
| No log | 0.9804 | 100 | 1.0642 | 0.4839 | 1.0642 | 1.0316 |
| No log | 1.0 | 102 | 1.1627 | 0.4516 | 1.1627 | 1.0783 |
| No log | 1.0196 | 104 | 1.1855 | 0.4762 | 1.1855 | 1.0888 |
| No log | 1.0392 | 106 | 1.0879 | 0.4677 | 1.0879 | 1.0430 |
| No log | 1.0588 | 108 | 1.0408 | 0.5538 | 1.0408 | 1.0202 |
| No log | 1.0784 | 110 | 1.1507 | 0.6197 | 1.1507 | 1.0727 |
| No log | 1.0980 | 112 | 1.1172 | 0.6286 | 1.1172 | 1.0570 |
| No log | 1.1176 | 114 | 1.0629 | 0.5985 | 1.0629 | 1.0310 |
| No log | 1.1373 | 116 | 1.1036 | 0.6029 | 1.1036 | 1.0505 |
| No log | 1.1569 | 118 | 1.1590 | 0.5414 | 1.1590 | 1.0766 |
| No log | 1.1765 | 120 | 1.1617 | 0.5156 | 1.1617 | 1.0778 |
| No log | 1.1961 | 122 | 1.1835 | 0.5484 | 1.1835 | 1.0879 |
| No log | 1.2157 | 124 | 1.3976 | 0.4265 | 1.3976 | 1.1822 |
| No log | 1.2353 | 126 | 1.4555 | 0.3942 | 1.4555 | 1.2064 |
| No log | 1.2549 | 128 | 1.2962 | 0.4962 | 1.2962 | 1.1385 |
| No log | 1.2745 | 130 | 1.1217 | 0.5366 | 1.1217 | 1.0591 |
| No log | 1.2941 | 132 | 1.0895 | 0.4715 | 1.0895 | 1.0438 |
| No log | 1.3137 | 134 | 1.0374 | 0.5238 | 1.0374 | 1.0185 |
| No log | 1.3333 | 136 | 1.0111 | 0.5781 | 1.0111 | 1.0055 |
| No log | 1.3529 | 138 | 1.0378 | 0.5714 | 1.0378 | 1.0187 |
| No log | 1.3725 | 140 | 1.0869 | 0.5760 | 1.0869 | 1.0425 |
| No log | 1.3922 | 142 | 1.1235 | 0.6 | 1.1235 | 1.0600 |
| No log | 1.4118 | 144 | 1.1780 | 0.5802 | 1.1780 | 1.0854 |
| No log | 1.4314 | 146 | 1.1252 | 0.6047 | 1.1252 | 1.0608 |
| No log | 1.4510 | 148 | 1.0457 | 0.5760 | 1.0457 | 1.0226 |
| No log | 1.4706 | 150 | 0.9769 | 0.5873 | 0.9769 | 0.9884 |
| No log | 1.4902 | 152 | 0.9491 | 0.6107 | 0.9491 | 0.9742 |
| No log | 1.5098 | 154 | 0.9790 | 0.5954 | 0.9790 | 0.9894 |
| No log | 1.5294 | 156 | 1.0869 | 0.5846 | 1.0869 | 1.0426 |
| No log | 1.5490 | 158 | 1.0766 | 0.5802 | 1.0766 | 1.0376 |
| No log | 1.5686 | 160 | 1.0261 | 0.6277 | 1.0261 | 1.0130 |
| No log | 1.5882 | 162 | 1.0265 | 0.6331 | 1.0265 | 1.0132 |
| No log | 1.6078 | 164 | 1.1473 | 0.5867 | 1.1473 | 1.0711 |
| No log | 1.6275 | 166 | 1.1671 | 0.6093 | 1.1671 | 1.0803 |
| No log | 1.6471 | 168 | 0.9925 | 0.6803 | 0.9925 | 0.9962 |
| No log | 1.6667 | 170 | 0.8294 | 0.6475 | 0.8294 | 0.9107 |
| No log | 1.6863 | 172 | 0.8660 | 0.6308 | 0.8660 | 0.9306 |
| No log | 1.7059 | 174 | 0.8770 | 0.6406 | 0.8770 | 0.9365 |
| No log | 1.7255 | 176 | 0.8696 | 0.6047 | 0.8696 | 0.9325 |
| No log | 1.7451 | 178 | 0.9208 | 0.5781 | 0.9208 | 0.9596 |
| No log | 1.7647 | 180 | 0.9094 | 0.5827 | 0.9094 | 0.9536 |
| No log | 1.7843 | 182 | 0.8658 | 0.6154 | 0.8658 | 0.9305 |
| No log | 1.8039 | 184 | 0.8416 | 0.6202 | 0.8416 | 0.9174 |
| No log | 1.8235 | 186 | 0.8190 | 0.6565 | 0.8190 | 0.9050 |
| No log | 1.8431 | 188 | 0.8062 | 0.7068 | 0.8062 | 0.8979 |
| No log | 1.8627 | 190 | 0.8040 | 0.6466 | 0.8040 | 0.8967 |
| No log | 1.8824 | 192 | 0.8161 | 0.6466 | 0.8161 | 0.9034 |
| No log | 1.9020 | 194 | 0.8314 | 0.6462 | 0.8314 | 0.9118 |
| No log | 1.9216 | 196 | 0.8418 | 0.6094 | 0.8418 | 0.9175 |
| No log | 1.9412 | 198 | 0.8488 | 0.6565 | 0.8488 | 0.9213 |
| No log | 1.9608 | 200 | 0.8410 | 0.6716 | 0.8410 | 0.9170 |
| No log | 1.9804 | 202 | 0.8487 | 0.6308 | 0.8487 | 0.9213 |
| No log | 2.0 | 204 | 0.8848 | 0.6047 | 0.8848 | 0.9407 |
| No log | 2.0196 | 206 | 0.9316 | 0.5760 | 0.9316 | 0.9652 |
| No log | 2.0392 | 208 | 0.9567 | 0.6190 | 0.9567 | 0.9781 |
| No log | 2.0588 | 210 | 0.9743 | 0.6190 | 0.9743 | 0.9870 |
| No log | 2.0784 | 212 | 0.9355 | 0.6406 | 0.9355 | 0.9672 |
| No log | 2.0980 | 214 | 0.9231 | 0.5426 | 0.9231 | 0.9608 |
| No log | 2.1176 | 216 | 0.9246 | 0.5649 | 0.9246 | 0.9615 |
| No log | 2.1373 | 218 | 0.8056 | 0.6418 | 0.8056 | 0.8975 |
| No log | 2.1569 | 220 | 0.7596 | 0.6567 | 0.7596 | 0.8715 |
| No log | 2.1765 | 222 | 0.7760 | 0.6763 | 0.7760 | 0.8809 |
| No log | 2.1961 | 224 | 0.7778 | 0.6316 | 0.7778 | 0.8819 |
| No log | 2.2157 | 226 | 0.8593 | 0.6620 | 0.8593 | 0.9270 |
| No log | 2.2353 | 228 | 0.9575 | 0.6187 | 0.9575 | 0.9785 |
| No log | 2.2549 | 230 | 0.9236 | 0.5839 | 0.9236 | 0.9610 |
| No log | 2.2745 | 232 | 0.9092 | 0.6423 | 0.9092 | 0.9535 |
| No log | 2.2941 | 234 | 0.9472 | 0.6176 | 0.9472 | 0.9733 |
| No log | 2.3137 | 236 | 1.0041 | 0.5926 | 1.0041 | 1.0020 |
| No log | 2.3333 | 238 | 1.1589 | 0.5714 | 1.1589 | 1.0765 |
| No log | 2.3529 | 240 | 1.1044 | 0.5344 | 1.1044 | 1.0509 |
| No log | 2.3725 | 242 | 1.0228 | 0.5354 | 1.0228 | 1.0113 |
| No log | 2.3922 | 244 | 0.9589 | 0.5781 | 0.9589 | 0.9793 |
| No log | 2.4118 | 246 | 0.9432 | 0.5714 | 0.9432 | 0.9712 |
| No log | 2.4314 | 248 | 0.9273 | 0.5827 | 0.9273 | 0.9629 |
| No log | 2.4510 | 250 | 0.8770 | 0.6107 | 0.8770 | 0.9365 |
| No log | 2.4706 | 252 | 0.8746 | 0.6107 | 0.8746 | 0.9352 |
| No log | 2.4902 | 254 | 0.9031 | 0.6212 | 0.9031 | 0.9503 |
| No log | 2.5098 | 256 | 1.0400 | 0.56 | 1.0400 | 1.0198 |
| No log | 2.5294 | 258 | 1.1501 | 0.4407 | 1.1501 | 1.0724 |
| No log | 2.5490 | 260 | 1.1596 | 0.3826 | 1.1596 | 1.0768 |
| No log | 2.5686 | 262 | 1.1035 | 0.5366 | 1.1035 | 1.0505 |
| No log | 2.5882 | 264 | 1.0509 | 0.5645 | 1.0509 | 1.0251 |
| No log | 2.6078 | 266 | 1.0043 | 0.5873 | 1.0043 | 1.0022 |
| No log | 2.6275 | 268 | 0.9603 | 0.56 | 0.9603 | 0.9800 |
| No log | 2.6471 | 270 | 0.9206 | 0.6047 | 0.9206 | 0.9595 |
| No log | 2.6667 | 272 | 0.9667 | 0.5758 | 0.9667 | 0.9832 |
| No log | 2.6863 | 274 | 0.9398 | 0.5926 | 0.9398 | 0.9694 |
| No log | 2.7059 | 276 | 0.9272 | 0.6232 | 0.9272 | 0.9629 |
| No log | 2.7255 | 278 | 0.9537 | 0.6131 | 0.9537 | 0.9766 |
| No log | 2.7451 | 280 | 0.9515 | 0.6357 | 0.9515 | 0.9754 |
| No log | 2.7647 | 282 | 0.9669 | 0.5736 | 0.9669 | 0.9833 |
| No log | 2.7843 | 284 | 1.0894 | 0.5581 | 1.0894 | 1.0437 |
| No log | 2.8039 | 286 | 1.1140 | 0.5312 | 1.1140 | 1.0555 |
| No log | 2.8235 | 288 | 1.0541 | 0.528 | 1.0541 | 1.0267 |
| No log | 2.8431 | 290 | 1.0112 | 0.5984 | 1.0112 | 1.0056 |
| No log | 2.8627 | 292 | 0.9725 | 0.5891 | 0.9725 | 0.9862 |
| No log | 2.8824 | 294 | 0.9451 | 0.5649 | 0.9451 | 0.9722 |
| No log | 2.9020 | 296 | 0.9430 | 0.5649 | 0.9430 | 0.9711 |
| No log | 2.9216 | 298 | 0.9765 | 0.5238 | 0.9765 | 0.9882 |
| No log | 2.9412 | 300 | 1.0507 | 0.5203 | 1.0507 | 1.0250 |
| No log | 2.9608 | 302 | 1.0440 | 0.5246 | 1.0440 | 1.0217 |
| No log | 2.9804 | 304 | 1.0122 | 0.5410 | 1.0122 | 1.0061 |
| No log | 3.0 | 306 | 0.9528 | 0.5873 | 0.9528 | 0.9761 |
| No log | 3.0196 | 308 | 0.8923 | 0.5806 | 0.8923 | 0.9446 |
| No log | 3.0392 | 310 | 0.8179 | 0.5984 | 0.8179 | 0.9044 |
| No log | 3.0588 | 312 | 0.7404 | 0.6618 | 0.7404 | 0.8604 |
| No log | 3.0784 | 314 | 0.7016 | 0.7101 | 0.7016 | 0.8376 |
| No log | 3.0980 | 316 | 0.6815 | 0.7007 | 0.6815 | 0.8255 |
| No log | 3.1176 | 318 | 0.6921 | 0.6957 | 0.6921 | 0.8319 |
| No log | 3.1373 | 320 | 0.6881 | 0.7007 | 0.6881 | 0.8295 |
| No log | 3.1569 | 322 | 0.7288 | 0.6866 | 0.7288 | 0.8537 |
| No log | 3.1765 | 324 | 0.7873 | 0.6269 | 0.7873 | 0.8873 |
| No log | 3.1961 | 326 | 0.7895 | 0.6370 | 0.7895 | 0.8885 |
| No log | 3.2157 | 328 | 0.7754 | 0.6866 | 0.7754 | 0.8806 |
| No log | 3.2353 | 330 | 0.8123 | 0.6617 | 0.8123 | 0.9013 |
| No log | 3.2549 | 332 | 0.8661 | 0.5984 | 0.8661 | 0.9306 |
| No log | 3.2745 | 334 | 0.8900 | 0.6202 | 0.8900 | 0.9434 |
| No log | 3.2941 | 336 | 0.8836 | 0.6308 | 0.8836 | 0.9400 |
| No log | 3.3137 | 338 | 0.8621 | 0.6406 | 0.8621 | 0.9285 |
| No log | 3.3333 | 340 | 0.8041 | 0.6512 | 0.8041 | 0.8967 |
| No log | 3.3529 | 342 | 0.7597 | 0.6212 | 0.7597 | 0.8716 |
| No log | 3.3725 | 344 | 0.7960 | 0.6765 | 0.7960 | 0.8922 |
| No log | 3.3922 | 346 | 0.7790 | 0.5802 | 0.7790 | 0.8826 |
| No log | 3.4118 | 348 | 0.8225 | 0.576 | 0.8225 | 0.9069 |
| No log | 3.4314 | 350 | 0.8779 | 0.5484 | 0.8779 | 0.9369 |
| No log | 3.4510 | 352 | 1.0074 | 0.4959 | 1.0074 | 1.0037 |
| No log | 3.4706 | 354 | 1.1250 | 0.4553 | 1.1250 | 1.0607 |
| No log | 3.4902 | 356 | 1.1112 | 0.4762 | 1.1112 | 1.0541 |
| No log | 3.5098 | 358 | 0.9957 | 0.5538 | 0.9957 | 0.9979 |
| No log | 3.5294 | 360 | 0.8745 | 0.5827 | 0.8745 | 0.9352 |
| No log | 3.5490 | 362 | 0.8605 | 0.6667 | 0.8605 | 0.9276 |
| No log | 3.5686 | 364 | 0.8609 | 0.6462 | 0.8609 | 0.9279 |
| No log | 3.5882 | 366 | 0.8289 | 0.6667 | 0.8289 | 0.9105 |
| No log | 3.6078 | 368 | 0.8040 | 0.625 | 0.8040 | 0.8967 |
| No log | 3.6275 | 370 | 0.7942 | 0.6202 | 0.7942 | 0.8912 |
| No log | 3.6471 | 372 | 0.7906 | 0.6462 | 0.7906 | 0.8892 |
| No log | 3.6667 | 374 | 0.8043 | 0.6462 | 0.8043 | 0.8968 |
| No log | 3.6863 | 376 | 0.8543 | 0.6466 | 0.8543 | 0.9243 |
| No log | 3.7059 | 378 | 0.8877 | 0.6418 | 0.8877 | 0.9422 |
| No log | 3.7255 | 380 | 0.9122 | 0.6515 | 0.9122 | 0.9551 |
| No log | 3.7451 | 382 | 0.8954 | 0.6466 | 0.8954 | 0.9463 |
| No log | 3.7647 | 384 | 0.8760 | 0.6466 | 0.8760 | 0.9359 |
| No log | 3.7843 | 386 | 0.8540 | 0.6154 | 0.8540 | 0.9241 |
| No log | 3.8039 | 388 | 0.8909 | 0.6032 | 0.8909 | 0.9439 |
| No log | 3.8235 | 390 | 0.9215 | 0.6032 | 0.9215 | 0.9599 |
| No log | 3.8431 | 392 | 0.9296 | 0.5806 | 0.9296 | 0.9641 |
| No log | 3.8627 | 394 | 0.9548 | 0.5645 | 0.9548 | 0.9772 |
| No log | 3.8824 | 396 | 0.9898 | 0.5691 | 0.9898 | 0.9949 |
| No log | 3.9020 | 398 | 0.9848 | 0.5645 | 0.9848 | 0.9924 |
| No log | 3.9216 | 400 | 0.9266 | 0.6299 | 0.9266 | 0.9626 |
| No log | 3.9412 | 402 | 0.8332 | 0.6667 | 0.8332 | 0.9128 |
| No log | 3.9608 | 404 | 0.7684 | 0.6667 | 0.7684 | 0.8766 |
| No log | 3.9804 | 406 | 0.7530 | 0.6667 | 0.7530 | 0.8677 |
| No log | 4.0 | 408 | 0.7637 | 0.6769 | 0.7637 | 0.8739 |
| No log | 4.0196 | 410 | 0.8047 | 0.6667 | 0.8047 | 0.8970 |
| No log | 4.0392 | 412 | 0.8344 | 0.6457 | 0.8344 | 0.9135 |
| No log | 4.0588 | 414 | 0.8651 | 0.6562 | 0.8651 | 0.9301 |
| No log | 4.0784 | 416 | 0.9159 | 0.5806 | 0.9159 | 0.9570 |
| No log | 4.0980 | 418 | 0.9491 | 0.5366 | 0.9491 | 0.9742 |
| No log | 4.1176 | 420 | 0.9376 | 0.5691 | 0.9376 | 0.9683 |
| No log | 4.1373 | 422 | 0.8972 | 0.6190 | 0.8972 | 0.9472 |
| No log | 4.1569 | 424 | 0.9206 | 0.6667 | 0.9206 | 0.9595 |
| No log | 4.1765 | 426 | 0.9081 | 0.6615 | 0.9081 | 0.9529 |
| No log | 4.1961 | 428 | 0.8408 | 0.6667 | 0.8408 | 0.9170 |
| No log | 4.2157 | 430 | 0.8603 | 0.6015 | 0.8603 | 0.9275 |
| No log | 4.2353 | 432 | 0.9558 | 0.5758 | 0.9558 | 0.9776 |
| No log | 4.2549 | 434 | 0.9901 | 0.5496 | 0.9901 | 0.9950 |
| No log | 4.2745 | 436 | 0.9626 | 0.5528 | 0.9626 | 0.9811 |
| No log | 4.2941 | 438 | 0.9366 | 0.6357 | 0.9366 | 0.9678 |
| No log | 4.3137 | 440 | 0.9171 | 0.6615 | 0.9171 | 0.9576 |
| No log | 4.3333 | 442 | 0.8374 | 0.6912 | 0.8374 | 0.9151 |
| No log | 4.3529 | 444 | 0.7750 | 0.6571 | 0.7750 | 0.8803 |
| No log | 4.3725 | 446 | 0.7853 | 0.6809 | 0.7853 | 0.8862 |
| No log | 4.3922 | 448 | 0.7972 | 0.6853 | 0.7972 | 0.8929 |
| No log | 4.4118 | 450 | 0.7953 | 0.6522 | 0.7953 | 0.8918 |
| No log | 4.4314 | 452 | 0.8065 | 0.6423 | 0.8065 | 0.8980 |
| No log | 4.4510 | 454 | 0.8453 | 0.6222 | 0.8453 | 0.9194 |
| No log | 4.4706 | 456 | 0.8459 | 0.6222 | 0.8459 | 0.9197 |
| No log | 4.4902 | 458 | 0.8731 | 0.5954 | 0.8731 | 0.9344 |
| No log | 4.5098 | 460 | 0.8983 | 0.5891 | 0.8983 | 0.9478 |
| No log | 4.5294 | 462 | 0.9076 | 0.5938 | 0.9076 | 0.9527 |
| No log | 4.5490 | 464 | 0.9134 | 0.6142 | 0.9134 | 0.9557 |
| No log | 4.5686 | 466 | 0.9136 | 0.6142 | 0.9136 | 0.9558 |
| No log | 4.5882 | 468 | 0.8786 | 0.6406 | 0.8786 | 0.9373 |
| No log | 4.6078 | 470 | 0.8642 | 0.6462 | 0.8642 | 0.9296 |
| No log | 4.6275 | 472 | 0.8582 | 0.6462 | 0.8582 | 0.9264 |
| No log | 4.6471 | 474 | 0.8347 | 0.6718 | 0.8347 | 0.9136 |
| No log | 4.6667 | 476 | 0.9075 | 0.625 | 0.9075 | 0.9526 |
| No log | 4.6863 | 478 | 0.9897 | 0.5366 | 0.9897 | 0.9948 |
| No log | 4.7059 | 480 | 1.0402 | 0.5289 | 1.0402 | 1.0199 |
| No log | 4.7255 | 482 | 1.0209 | 0.5806 | 1.0209 | 1.0104 |
| No log | 4.7451 | 484 | 0.9984 | 0.6349 | 0.9984 | 0.9992 |
| No log | 4.7647 | 486 | 0.9289 | 0.625 | 0.9289 | 0.9638 |
| No log | 4.7843 | 488 | 0.8248 | 0.6565 | 0.8248 | 0.9082 |
| No log | 4.8039 | 490 | 0.7036 | 0.7121 | 0.7036 | 0.8388 |
| No log | 4.8235 | 492 | 0.6861 | 0.7273 | 0.6861 | 0.8283 |
| No log | 4.8431 | 494 | 0.8577 | 0.6575 | 0.8577 | 0.9261 |
| No log | 4.8627 | 496 | 0.8961 | 0.6575 | 0.8961 | 0.9466 |
| No log | 4.8824 | 498 | 0.7991 | 0.6944 | 0.7991 | 0.8939 |
| 0.28 | 4.9020 | 500 | 0.7181 | 0.7448 | 0.7181 | 0.8474 |
| 0.28 | 4.9216 | 502 | 0.6866 | 0.7429 | 0.6866 | 0.8286 |
| 0.28 | 4.9412 | 504 | 0.7055 | 0.6963 | 0.7055 | 0.8399 |
| 0.28 | 4.9608 | 506 | 0.7292 | 0.6866 | 0.7292 | 0.8539 |
| 0.28 | 4.9804 | 508 | 0.7557 | 0.6957 | 0.7557 | 0.8693 |
| 0.28 | 5.0 | 510 | 0.7784 | 0.7 | 0.7784 | 0.8822 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu118
- Datasets 2.21.0
- Tokenizers 0.19.1
|
mlfoundations-dev/llama3-1_8b_r1_annotated_aime
|
mlfoundations-dev
| 2025-02-04T01:31:46Z | 3,866 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-02-01T20:45:01Z |
---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen2.5-7B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: llama3-1_8b_r1_annotated_aime
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama3-1_8b_r1_annotated_aime
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the mlfoundations-dev/r1_annotated_aime dataset.
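As a full fine-tune (not an adapter), the checkpoint loads directly with transformers; a minimal chat-style inference sketch, where the prompt, dtype, and device placement are illustrative choices:
```python
# Minimal sketch: chat-style inference with the full fine-tuned checkpoint.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "mlfoundations-dev/llama3-1_8b_r1_annotated_aime"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.bfloat16, device_map="auto")

messages = [{"role": "user", "content": "Solve: what is 17 * 24?"}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```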
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 32
- gradient_accumulation_steps: 3
- total_train_batch_size: 96
- total_eval_batch_size: 256
- optimizer: AdamW (torch) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.46.1
- Pytorch 2.5.1
- Datasets 3.0.2
- Tokenizers 0.20.3
|
great0001/cc10c3c4-531e-4202-9c9a-40d8f05a7183
|
great0001
| 2025-02-04T01:31:12Z | 15 | 0 |
peft
|
[
"peft",
"safetensors",
"bloom",
"axolotl",
"generated_from_trainer",
"base_model:bigscience/bloom-560m",
"base_model:adapter:bigscience/bloom-560m",
"license:bigscience-bloom-rail-1.0",
"region:us"
] | null | 2025-02-04T01:27:17Z |
---
library_name: peft
license: bigscience-bloom-rail-1.0
base_model: bigscience/bloom-560m
tags:
- axolotl
- generated_from_trainer
model-index:
- name: cc10c3c4-531e-4202-9c9a-40d8f05a7183
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: bigscience/bloom-560m
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- b6e5ed8190ccb774_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/b6e5ed8190ccb774_train_data.json
type:
field_instruction: soru
field_output: cevap
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 2
gradient_checkpointing: false
group_by_length: false
hub_model_id: great0001/cc10c3c4-531e-4202-9c9a-40d8f05a7183
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/b6e5ed8190ccb774_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 72e7b874-15da-42e2-ab22-791b74a29685
wandb_project: Mine-SN56-20-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 72e7b874-15da-42e2-ab22-791b74a29685
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# cc10c3c4-531e-4202-9c9a-40d8f05a7183
This model is a fine-tuned version of [bigscience/bloom-560m](https://huggingface.co/bigscience/bloom-560m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.8097
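Because this is a LoRA adapter over bigscience/bloom-560m, it can be attached at runtime or merged into the base weights for adapter-free deployment; a minimal sketch of the merge path, with a hypothetical output directory:
```python
# Sketch: attach the LoRA adapter to bloom-560m and merge it for adapter-free inference.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("bigscience/bloom-560m")
tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-560m")
model = PeftModel.from_pretrained(base, "great0001/cc10c3c4-531e-4202-9c9a-40d8f05a7183")

merged = model.merge_and_unload()            # folds the LoRA deltas into the base weights
merged.save_pretrained("bloom-560m-merged")  # hypothetical output directory
tokenizer.save_pretrained("bloom-560m-merged")
```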
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0001 | 1 | 3.9911 |
| 6.1996 | 0.0033 | 50 | 3.1534 |
| 5.7947 | 0.0065 | 100 | 2.9425 |
| 5.7563 | 0.0098 | 150 | 2.8391 |
| 5.7844 | 0.0131 | 200 | 2.8097 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
romainnn/5124e4a7-7800-44ab-9e16-18241b79982b
|
romainnn
| 2025-02-04T01:30:41Z | 9 | 0 |
peft
|
[
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:NousResearch/Nous-Hermes-2-Mistral-7B-DPO",
"base_model:adapter:NousResearch/Nous-Hermes-2-Mistral-7B-DPO",
"license:apache-2.0",
"region:us"
] | null | 2025-02-04T00:48:44Z |
---
library_name: peft
license: apache-2.0
base_model: NousResearch/Nous-Hermes-2-Mistral-7B-DPO
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 5124e4a7-7800-44ab-9e16-18241b79982b
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/Nous-Hermes-2-Mistral-7B-DPO
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 1029f694c22a0116_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/1029f694c22a0116_train_data.json
type:
field_instruction: instructions
field_output: content
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: 2
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 16
gradient_checkpointing: true
group_by_length: false
hub_model_id: romainnn/5124e4a7-7800-44ab-9e16-18241b79982b
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_best_model_at_end: true
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lora_target_modules:
- q_proj
- k_proj
- v_proj
lr_scheduler: cosine
max_steps: 38
micro_batch_size: 4
mlflow_experiment_name: /tmp/1029f694c22a0116_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 2
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 100
sequence_len: 2048
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 7aeff1cf-86b0-475f-8cec-ec31521214cb
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 7aeff1cf-86b0-475f-8cec-ec31521214cb
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 5124e4a7-7800-44ab-9e16-18241b79982b
This model is a fine-tuned version of [NousResearch/Nous-Hermes-2-Mistral-7B-DPO](https://huggingface.co/NousResearch/Nous-Hermes-2-Mistral-7B-DPO) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9270
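A minimal inference sketch for this adapter, assuming enough memory to hold the 7B base model in bfloat16 (the prompt is illustrative):
```python
# Sketch: load the q/k/v-proj LoRA adapter on top of the Nous-Hermes-2 Mistral base.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "NousResearch/Nous-Hermes-2-Mistral-7B-DPO"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16, device_map="auto")
model = PeftModel.from_pretrained(base, "romainnn/5124e4a7-7800-44ab-9e16-18241b79982b")

prompt = "Write a haiku about gradient descent."   # illustrative prompt
inputs = tokenizer(prompt, return_tensors="pt").to(base.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```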
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 38
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 15.0039 | 0.0098 | 1 | 0.9270 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
elmame/speecht5
|
elmame
| 2025-02-04T01:29:23Z | 16 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"speecht5",
"text-to-audio",
"generated_from_trainer",
"dataset:lj_speech",
"base_model:microsoft/speecht5_tts",
"base_model:finetune:microsoft/speecht5_tts",
"license:mit",
"endpoints_compatible",
"region:us"
] |
text-to-audio
| 2025-02-03T16:05:09Z |
---
library_name: transformers
license: mit
base_model: microsoft/speecht5_tts
tags:
- generated_from_trainer
datasets:
- lj_speech
model-index:
- name: speecht5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# speecht5
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the lj_speech dataset.
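A minimal text-to-speech sketch for this checkpoint; SpeechT5 expects a 512-dimensional speaker x-vector, and the random embedding below is only a placeholder (a real x-vector, e.g. from the CMU ARCTIC set, gives a natural voice):
```python
# Sketch: synthesize speech with the fine-tuned SpeechT5 model and the stock HiFi-GAN vocoder.
import torch
import soundfile as sf
from transformers import SpeechT5Processor, SpeechT5ForTextToSpeech, SpeechT5HifiGan

processor = SpeechT5Processor.from_pretrained("elmame/speecht5")
model = SpeechT5ForTextToSpeech.from_pretrained("elmame/speecht5")
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

inputs = processor(text="The quick brown fox jumps over the lazy dog.", return_tensors="pt")
speaker_embeddings = torch.randn(1, 512)  # placeholder x-vector; replace with a real speaker embedding
speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
sf.write("speech.wav", speech.numpy(), samplerate=16000)
```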
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: AdamW (torch) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 1000
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.49.0.dev0
- Pytorch 2.5.1+cu124
- Datasets 3.2.0
- Tokenizers 0.21.0
|
havinash-ai/65196f61-1c8d-4bf4-a086-9ea61bc7ad5b
|
havinash-ai
| 2025-02-04T01:27:05Z | 9 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:trl-internal-testing/tiny-random-LlamaForCausalLM",
"base_model:adapter:trl-internal-testing/tiny-random-LlamaForCausalLM",
"region:us"
] | null | 2025-02-04T01:26:39Z |
---
library_name: peft
base_model: trl-internal-testing/tiny-random-LlamaForCausalLM
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 65196f61-1c8d-4bf4-a086-9ea61bc7ad5b
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: trl-internal-testing/tiny-random-LlamaForCausalLM
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- b77b35ef124b1260_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/b77b35ef124b1260_train_data.json
type:
field_input: ''
field_instruction: inputs
field_output: targets
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: havinash-ai/65196f61-1c8d-4bf4-a086-9ea61bc7ad5b
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/b77b35ef124b1260_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: f87570f7-b1f0-48ca-b737-ebd938967009
wandb_project: Birthday-SN56-9-Gradients-On-Demand
wandb_run: your_name
wandb_runid: f87570f7-b1f0-48ca-b737-ebd938967009
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 65196f61-1c8d-4bf4-a086-9ea61bc7ad5b
This model is a fine-tuned version of [trl-internal-testing/tiny-random-LlamaForCausalLM](https://huggingface.co/trl-internal-testing/tiny-random-LlamaForCausalLM) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 10.3537
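The axolotl config above maps each JSON record's `inputs` field onto the prompt via `format: '{instruction}'` and uses `targets` as the completion; a small sketch of that mapping with a hypothetical record (this mirrors the template, it is not axolotl's own code):
```python
# Sketch of how the custom-format config turns a data record into prompt text.
# The record is hypothetical; field names follow the config (field_instruction: inputs, field_output: targets).
record = {"inputs": "Translate to French: good morning", "targets": "bonjour"}

prompt_template = "{instruction}"   # `format` from the config (no separate input field is used)
prompt = prompt_template.format(instruction=record["inputs"])
completion = record["targets"]

print(repr(prompt), "->", repr(completion))
```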
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0018 | 1 | 10.3793 |
| 10.3735 | 0.0894 | 50 | 10.3745 |
| 10.3606 | 0.1788 | 100 | 10.3588 |
| 10.3552 | 0.2682 | 150 | 10.3541 |
| 10.355 | 0.3576 | 200 | 10.3537 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
philip-hightech/a4ae87be-8cce-422b-96fd-939aaf1076f5
|
philip-hightech
| 2025-02-04T01:26:15Z | 9 | 0 |
peft
|
[
"peft",
"safetensors",
"gemma2",
"axolotl",
"generated_from_trainer",
"base_model:zake7749/gemma-2-2b-it-chinese-kyara-dpo",
"base_model:adapter:zake7749/gemma-2-2b-it-chinese-kyara-dpo",
"license:gemma",
"region:us"
] | null | 2025-02-04T00:33:16Z |
---
library_name: peft
license: gemma
base_model: zake7749/gemma-2-2b-it-chinese-kyara-dpo
tags:
- axolotl
- generated_from_trainer
model-index:
- name: a4ae87be-8cce-422b-96fd-939aaf1076f5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: zake7749/gemma-2-2b-it-chinese-kyara-dpo
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 711eb262493f89e0_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/711eb262493f89e0_train_data.json
type:
field_instruction: prompt
field_output: chosen
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 2
gradient_checkpointing: false
group_by_length: false
hub_model_id: philip-hightech/a4ae87be-8cce-422b-96fd-939aaf1076f5
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_steps: 250
micro_batch_size: 2
mlflow_experiment_name: /tmp/711eb262493f89e0_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 24cc9eb4-7f5e-4d72-a2ff-2c216f2efd51
wandb_project: Mine-SN56-21-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 24cc9eb4-7f5e-4d72-a2ff-2c216f2efd51
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# a4ae87be-8cce-422b-96fd-939aaf1076f5
This model is a fine-tuned version of [zake7749/gemma-2-2b-it-chinese-kyara-dpo](https://huggingface.co/zake7749/gemma-2-2b-it-chinese-kyara-dpo) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4999
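A quick way to check that the adapter matches the config above is to read its PEFT metadata without loading the 2B base model; a minimal sketch:
```python
# Sketch: inspect the adapter's LoRA hyperparameters from its config file.
from peft import PeftConfig

cfg = PeftConfig.from_pretrained("philip-hightech/a4ae87be-8cce-422b-96fd-939aaf1076f5")
print(cfg.base_model_name_or_path)              # expected: zake7749/gemma-2-2b-it-chinese-kyara-dpo
print(cfg.r, cfg.lora_alpha, cfg.lora_dropout)  # expected: 64, 128, 0.05 per the config above
```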
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 250
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0000 | 1 | 0.9216 |
| 0.5381 | 0.0007 | 63 | 0.5674 |
| 0.5471 | 0.0013 | 126 | 0.5365 |
| 0.5064 | 0.0020 | 189 | 0.4999 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
JacksonBrune/30ca64ea-f0ce-47af-9032-4b8115f4230e
|
JacksonBrune
| 2025-02-04T01:26:07Z | 9 | 0 |
peft
|
[
"peft",
"safetensors",
"gemma2",
"axolotl",
"generated_from_trainer",
"base_model:zake7749/gemma-2-2b-it-chinese-kyara-dpo",
"base_model:adapter:zake7749/gemma-2-2b-it-chinese-kyara-dpo",
"license:gemma",
"region:us"
] | null | 2025-02-04T00:32:31Z |
---
library_name: peft
license: gemma
base_model: zake7749/gemma-2-2b-it-chinese-kyara-dpo
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 30ca64ea-f0ce-47af-9032-4b8115f4230e
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: zake7749/gemma-2-2b-it-chinese-kyara-dpo
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 711eb262493f89e0_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/711eb262493f89e0_train_data.json
type:
field_instruction: prompt
field_output: chosen
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: JacksonBrune/30ca64ea-f0ce-47af-9032-4b8115f4230e
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 250
micro_batch_size: 2
mlflow_experiment_name: /tmp/711eb262493f89e0_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 24cc9eb4-7f5e-4d72-a2ff-2c216f2efd51
wandb_project: birthdya-sn56-18-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 24cc9eb4-7f5e-4d72-a2ff-2c216f2efd51
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 30ca64ea-f0ce-47af-9032-4b8115f4230e
This model is a fine-tuned version of [zake7749/gemma-2-2b-it-chinese-kyara-dpo](https://huggingface.co/zake7749/gemma-2-2b-it-chinese-kyara-dpo) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4874
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 250
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0000 | 1 | 0.9216 |
| 0.5361 | 0.0013 | 63 | 0.5189 |
| 0.5116 | 0.0027 | 126 | 0.5003 |
| 0.4793 | 0.0040 | 189 | 0.4874 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
genki10/ASAP_FineTuningBERT_AugV8_k10_task1_organization_fold3
|
genki10
| 2025-02-04T01:25:54Z | 14 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-02-03T21:27:43Z |
---
library_name: transformers
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: ASAP_FineTuningBERT_AugV8_k10_task1_organization_fold3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ASAP_FineTuningBERT_AugV8_k10_task1_organization_fold3
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9359
- Qwk: 0.1548
- Mse: 1.9372
- Rmse: 1.3918
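The card does not spell out the label setup, but the checkpoint loads as a standard sequence-classification model; a minimal sketch, where the sample essay and the reading of the output as an organization score are assumptions:
```python
# Sketch: run the fine-tuned BERT scorer on one essay; output interpretation depends on the unstated label config.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

repo = "genki10/ASAP_FineTuningBERT_AugV8_k10_task1_organization_fold3"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)

inputs = tokenizer("This is a sample essay about organization.", truncation=True, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits)  # a single regression value or per-class scores, depending on how the head was configured
```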
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:------:|
| No log | 0.25 | 2 | 11.3660 | 0.0117 | 11.3650 | 3.3712 |
| No log | 0.5 | 4 | 10.2282 | 0.0 | 10.2275 | 3.1980 |
| No log | 0.75 | 6 | 8.2858 | 0.0 | 8.2852 | 2.8784 |
| No log | 1.0 | 8 | 6.6621 | 0.0 | 6.6617 | 2.5810 |
| 6.5469 | 1.25 | 10 | 5.1710 | 0.0329 | 5.1706 | 2.2739 |
| 6.5469 | 1.5 | 12 | 4.0013 | 0.0076 | 4.0010 | 2.0003 |
| 6.5469 | 1.75 | 14 | 2.8683 | 0.0 | 2.8685 | 1.6937 |
| 6.5469 | 2.0 | 16 | 2.1810 | 0.0853 | 2.1812 | 1.4769 |
| 6.5469 | 2.25 | 18 | 1.5909 | 0.0329 | 1.5914 | 1.2615 |
| 2.3647 | 2.5 | 20 | 1.5947 | 0.0129 | 1.5955 | 1.2631 |
| 2.3647 | 2.75 | 22 | 1.4075 | 0.0157 | 1.4083 | 1.1867 |
| 2.3647 | 3.0 | 24 | 1.1561 | 0.0102 | 1.1570 | 1.0756 |
| 2.3647 | 3.25 | 26 | 1.3992 | 0.0329 | 1.3999 | 1.1832 |
| 2.3647 | 3.5 | 28 | 1.7034 | 0.0926 | 1.7040 | 1.3054 |
| 1.7022 | 3.75 | 30 | 1.7301 | 0.1104 | 1.7308 | 1.3156 |
| 1.7022 | 4.0 | 32 | 1.1643 | 0.0452 | 1.1651 | 1.0794 |
| 1.7022 | 4.25 | 34 | 1.8599 | 0.1272 | 1.8606 | 1.3641 |
| 1.7022 | 4.5 | 36 | 1.5065 | 0.1093 | 1.5073 | 1.2277 |
| 1.7022 | 4.75 | 38 | 0.7897 | 0.3557 | 0.7905 | 0.8891 |
| 1.5456 | 5.0 | 40 | 1.2770 | 0.1380 | 1.2779 | 1.1304 |
| 1.5456 | 5.25 | 42 | 2.0538 | 0.1825 | 2.0547 | 1.4334 |
| 1.5456 | 5.5 | 44 | 0.8848 | 0.3190 | 0.8856 | 0.9411 |
| 1.5456 | 5.75 | 46 | 0.8883 | 0.3191 | 0.8891 | 0.9429 |
| 1.5456 | 6.0 | 48 | 1.6953 | 0.1913 | 1.6963 | 1.3024 |
| 1.2794 | 6.25 | 50 | 0.8889 | 0.3603 | 0.8896 | 0.9432 |
| 1.2794 | 6.5 | 52 | 1.1502 | 0.2700 | 1.1512 | 1.0729 |
| 1.2794 | 6.75 | 54 | 2.3053 | 0.0932 | 2.3064 | 1.5187 |
| 1.2794 | 7.0 | 56 | 1.0398 | 0.3160 | 1.0407 | 1.0201 |
| 1.2794 | 7.25 | 58 | 0.7600 | 0.4084 | 0.7605 | 0.8720 |
| 1.0687 | 7.5 | 60 | 1.9526 | 0.1243 | 1.9537 | 1.3977 |
| 1.0687 | 7.75 | 62 | 1.9734 | 0.1187 | 1.9744 | 1.4051 |
| 1.0687 | 8.0 | 64 | 0.7830 | 0.3927 | 0.7835 | 0.8851 |
| 1.0687 | 8.25 | 66 | 0.9158 | 0.3456 | 0.9164 | 0.9573 |
| 1.0687 | 8.5 | 68 | 1.8712 | 0.1380 | 1.8724 | 1.3684 |
| 0.7516 | 8.75 | 70 | 0.9993 | 0.3261 | 0.9999 | 1.0000 |
| 0.7516 | 9.0 | 72 | 0.6742 | 0.4639 | 0.6743 | 0.8212 |
| 0.7516 | 9.25 | 74 | 0.8835 | 0.4050 | 0.8840 | 0.9402 |
| 0.7516 | 9.5 | 76 | 2.2347 | 0.1095 | 2.2360 | 1.4953 |
| 0.7516 | 9.75 | 78 | 1.5217 | 0.2001 | 1.5229 | 1.2341 |
| 0.7336 | 10.0 | 80 | 0.7839 | 0.4550 | 0.7844 | 0.8856 |
| 0.7336 | 10.25 | 82 | 1.1400 | 0.2530 | 1.1411 | 1.0682 |
| 0.7336 | 10.5 | 84 | 2.7707 | 0.0328 | 2.7723 | 1.6650 |
| 0.7336 | 10.75 | 86 | 1.8539 | 0.0965 | 1.8553 | 1.3621 |
| 0.7336 | 11.0 | 88 | 0.8147 | 0.4035 | 0.8151 | 0.9028 |
| 0.5475 | 11.25 | 90 | 0.8105 | 0.4264 | 0.8108 | 0.9005 |
| 0.5475 | 11.5 | 92 | 1.9359 | 0.1548 | 1.9372 | 1.3918 |
### Framework versions
- Transformers 4.47.0
- Pytorch 2.5.1+cu121
- Datasets 3.2.0
- Tokenizers 0.21.0
|
abaddon182/c26d6791-8563-4cb6-8c81-aa49701b2eb8
|
abaddon182
| 2025-02-04T01:25:34Z | 9 | 0 |
peft
|
[
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2.5-0.5B",
"base_model:adapter:unsloth/Qwen2.5-0.5B",
"license:apache-2.0",
"region:us"
] | null | 2025-02-04T01:18:48Z |
---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2.5-0.5B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: c26d6791-8563-4cb6-8c81-aa49701b2eb8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2.5-0.5B
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 9ee4c7d4f914610d_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/9ee4c7d4f914610d_train_data.json
type:
field_input: input
field_instruction: instruction
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: 5
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: abaddon182/c26d6791-8563-4cb6-8c81-aa49701b2eb8
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 8
mlflow_experiment_name: /tmp/9ee4c7d4f914610d_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 8c04dad1-b647-409f-8c82-04b3516dd360
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 8c04dad1-b647-409f-8c82-04b3516dd360
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# c26d6791-8563-4cb6-8c81-aa49701b2eb8
This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B](https://huggingface.co/unsloth/Qwen2.5-0.5B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4880
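At inference time, prompts should match the `'{instruction} {input}'` template used during training; a minimal sketch that applies the template and generates with the adapter (the instruction/input pair is made up):
```python
# Sketch: reproduce the training-time prompt format and generate with the adapter.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "unsloth/Qwen2.5-0.5B"
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = PeftModel.from_pretrained(AutoModelForCausalLM.from_pretrained(base_id),
                                  "abaddon182/c26d6791-8563-4cb6-8c81-aa49701b2eb8")

instruction = "Summarize the following text."                         # hypothetical instruction
text_input = "LoRA adds low-rank update matrices to frozen weights."  # hypothetical input
prompt = f"{instruction} {text_input}"                                # mirrors format: '{instruction} {input}'

inputs = tokenizer(prompt, return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```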
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; optimizer args: adam_beta1=0.9, adam_beta2=0.95, adam_epsilon=1e-5
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.351 | 0.0144 | 1 | 0.7267 |
| 0.6173 | 0.7220 | 50 | 0.5162 |
| 0.4829 | 1.4440 | 100 | 0.5058 |
| 0.4136 | 2.1661 | 150 | 0.4906 |
| 0.3405 | 2.8881 | 200 | 0.4880 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
dabrown/9d3cf128-6371-409d-9a94-42ad5d25b4e4
|
dabrown
| 2025-02-04T01:23:09Z | 7 | 0 |
peft
|
[
"peft",
"safetensors",
"bloom",
"axolotl",
"generated_from_trainer",
"base_model:bigscience/bloom-560m",
"base_model:adapter:bigscience/bloom-560m",
"license:bigscience-bloom-rail-1.0",
"region:us"
] | null | 2025-02-04T00:59:01Z |
---
library_name: peft
license: bigscience-bloom-rail-1.0
base_model: bigscience/bloom-560m
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 9d3cf128-6371-409d-9a94-42ad5d25b4e4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.6.0`
```yaml
adapter: lora
base_model: bigscience/bloom-560m
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- b6e5ed8190ccb774_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/b6e5ed8190ccb774_train_data.json
type:
field_instruction: soru
field_output: cevap
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: 5
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: dabrown/9d3cf128-6371-409d-9a94-42ad5d25b4e4
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 4
mlflow_experiment_name: /tmp/b6e5ed8190ccb774_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 72e7b874-15da-42e2-ab22-791b74a29685
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 72e7b874-15da-42e2-ab22-791b74a29685
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 9d3cf128-6371-409d-9a94-42ad5d25b4e4
This model is a fine-tuned version of [bigscience/bloom-560m](https://huggingface.co/bigscience/bloom-560m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9138
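Assuming the reported loss is the usual token-level cross-entropy in nats, it converts directly to perplexity; a one-line sketch:
```python
# Sketch: eval loss (cross-entropy, nats/token) -> perplexity.
import math
print(math.exp(2.9138))  # ≈ 18.4
```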
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; optimizer args: adam_beta1=0.9, adam_beta2=0.95, adam_epsilon=1e-5
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 3.354 | 0.0131 | 50 | 3.2901 |
| 2.9141 | 0.0262 | 100 | 3.0311 |
| 2.7874 | 0.0392 | 150 | 2.9442 |
| 2.8469 | 0.0523 | 200 | 2.9138 |
### Framework versions
- PEFT 0.14.0
- Transformers 4.46.3
- Pytorch 2.5.1+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3
|
PLM-Team/plm-instruct-dpo-gguf
|
PLM-Team
| 2025-02-04T01:22:57Z | 76 | 0 | null |
[
"gguf",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-02-03T16:45:08Z |
---
license: apache-2.0
---
|
lesso/802d179a-214a-465d-a232-d219245c37fb
|
lesso
| 2025-02-04T01:22:26Z | 14 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:fxmarty/tiny-llama-fast-tokenizer",
"base_model:adapter:fxmarty/tiny-llama-fast-tokenizer",
"region:us"
] | null | 2025-02-04T01:20:36Z |
---
library_name: peft
base_model: fxmarty/tiny-llama-fast-tokenizer
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 802d179a-214a-465d-a232-d219245c37fb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: fxmarty/tiny-llama-fast-tokenizer
bf16: true
chat_template: llama3
data_processes: 16
dataset_prepared_path: null
datasets:
- data_files:
- 8b74be9ab0373a6f_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/8b74be9ab0373a6f_train_data.json
type:
field_input: references
field_instruction: question
field_output: answer
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
do_eval: true
early_stopping_patience: 5
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 2
gradient_checkpointing: true
group_by_length: true
hub_model_id: lesso/802d179a-214a-465d-a232-d219245c37fb
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: linear
max_grad_norm: 1.0
max_steps: 200
micro_batch_size: 4
mlflow_experiment_name: /tmp/god01/8b74be9ab0373a6f_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
sequence_len: 1024
special_tokens:
pad_token: </s>
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: ab2cd1d3-a8f2-4277-a76f-00c40e9d7b71
wandb_project: ab-god01
wandb_run: your_name
wandb_runid: ab2cd1d3-a8f2-4277-a76f-00c40e9d7b71
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 802d179a-214a-465d-a232-d219245c37fb
This model is a fine-tuned version of [fxmarty/tiny-llama-fast-tokenizer](https://huggingface.co/fxmarty/tiny-llama-fast-tokenizer) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 10.3449
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 10.3833 | 0.0002 | 1 | 10.3801 |
| 10.3572 | 0.0094 | 50 | 10.3558 |
| 10.3457 | 0.0188 | 100 | 10.3469 |
| 10.3403 | 0.0281 | 150 | 10.3455 |
| 10.3457 | 0.0375 | 200 | 10.3449 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
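The card above only lists training details. As a hedged sketch (not part of the original card), the LoRA adapter it describes would typically be attached to its base model with the `peft` API along these lines; repo IDs come from the card, everything else is illustrative:
```python
# Hedged sketch: loading the LoRA adapter from this card onto its base model.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "fxmarty/tiny-llama-fast-tokenizer"
adapter_id = "lesso/802d179a-214a-465d-a232-d219245c37fb"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base_model, adapter_id)  # attaches the LoRA weights

inputs = tokenizer("Hello", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```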
|
lesso/0a04dd1a-7337-44bc-85f9-d780e7c92e21
|
lesso
| 2025-02-04T01:16:02Z | 8 | 0 |
peft
|
[
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:NousResearch/Nous-Hermes-2-Mistral-7B-DPO",
"base_model:adapter:NousResearch/Nous-Hermes-2-Mistral-7B-DPO",
"license:apache-2.0",
"region:us"
] | null | 2025-02-04T00:48:39Z |
---
library_name: peft
license: apache-2.0
base_model: NousResearch/Nous-Hermes-2-Mistral-7B-DPO
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 0a04dd1a-7337-44bc-85f9-d780e7c92e21
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/Nous-Hermes-2-Mistral-7B-DPO
bf16: true
chat_template: llama3
data_processes: 16
dataset_prepared_path: null
datasets:
- data_files:
- 1029f694c22a0116_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/1029f694c22a0116_train_data.json
type:
field_instruction: instructions
field_output: content
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
do_eval: true
early_stopping_patience: 5
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 2
gradient_checkpointing: true
group_by_length: true
hub_model_id: lesso/0a04dd1a-7337-44bc-85f9-d780e7c92e21
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: linear
max_grad_norm: 1.0
max_steps: 200
micro_batch_size: 4
mlflow_experiment_name: /tmp/god01/1029f694c22a0116_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 7aeff1cf-86b0-475f-8cec-ec31521214cb
wandb_project: ab-god01
wandb_run: your_name
wandb_runid: 7aeff1cf-86b0-475f-8cec-ec31521214cb
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 0a04dd1a-7337-44bc-85f9-d780e7c92e21
This model is a fine-tuned version of [NousResearch/Nous-Hermes-2-Mistral-7B-DPO](https://huggingface.co/NousResearch/Nous-Hermes-2-Mistral-7B-DPO) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3480
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.3012 | 0.0012 | 1 | 1.1002 |
| 1.0936 | 0.0613 | 50 | 0.4213 |
| 0.9928 | 0.1226 | 100 | 0.3831 |
| 0.945 | 0.1839 | 150 | 0.3597 |
| 0.8969 | 0.2452 | 200 | 0.3480 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
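As with the other adapters in this dump, this repo holds only LoRA weights. A hedged sketch (not from the card) of merging them into the Mistral base model for standalone inference; the output directory is hypothetical:
```python
# Hedged sketch: merging the LoRA adapter into the base model and saving the result.
from transformers import AutoModelForCausalLM
from peft import PeftModel

base_id = "NousResearch/Nous-Hermes-2-Mistral-7B-DPO"
adapter_id = "lesso/0a04dd1a-7337-44bc-85f9-d780e7c92e21"

base_model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype="auto")
peft_model = PeftModel.from_pretrained(base_model, adapter_id)
merged = peft_model.merge_and_unload()       # folds the LoRA deltas into the base weights
merged.save_pretrained("./merged-0a04dd1a")  # hypothetical output directory
```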
|
MayBashendy/ArabicNewSplits7_B_usingWellWrittenEssays_FineTuningAraBERT_run999_AugV5_k1_task1_organization
|
MayBashendy
| 2025-02-04T01:10:44Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:aubmindlab/bert-base-arabertv02",
"base_model:finetune:aubmindlab/bert-base-arabertv02",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-02-04T00:52:42Z |
---
library_name: transformers
base_model: aubmindlab/bert-base-arabertv02
tags:
- generated_from_trainer
model-index:
- name: ArabicNewSplits7_B_usingWellWrittenEssays_FineTuningAraBERT_run999_AugV5_k1_task1_organization
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ArabicNewSplits7_B_usingWellWrittenEssays_FineTuningAraBERT_run999_AugV5_k1_task1_organization
This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8221
- Qwk: 0.6479
- Mse: 0.8221
- Rmse: 0.9067
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:------:|
| No log | 0.25 | 2 | 8.4015 | -0.0227 | 8.4015 | 2.8985 |
| No log | 0.5 | 4 | 6.0779 | 0.0 | 6.0779 | 2.4653 |
| No log | 0.75 | 6 | 4.2145 | 0.0185 | 4.2145 | 2.0529 |
| No log | 1.0 | 8 | 3.2580 | 0.0585 | 3.2580 | 1.8050 |
| No log | 1.25 | 10 | 2.6967 | 0.0261 | 2.6967 | 1.6422 |
| No log | 1.5 | 12 | 2.2909 | 0.1135 | 2.2909 | 1.5136 |
| No log | 1.75 | 14 | 2.1738 | 0.2406 | 2.1738 | 1.4744 |
| No log | 2.0 | 16 | 1.8376 | 0.1930 | 1.8376 | 1.3556 |
| No log | 2.25 | 18 | 1.6283 | 0.1165 | 1.6283 | 1.2761 |
| No log | 2.5 | 20 | 1.5942 | 0.1538 | 1.5942 | 1.2626 |
| No log | 2.75 | 22 | 1.5268 | 0.1538 | 1.5268 | 1.2356 |
| No log | 3.0 | 24 | 1.6219 | 0.3761 | 1.6219 | 1.2735 |
| No log | 3.25 | 26 | 1.7328 | 0.375 | 1.7328 | 1.3163 |
| No log | 3.5 | 28 | 1.5396 | 0.4390 | 1.5396 | 1.2408 |
| No log | 3.75 | 30 | 1.3455 | 0.3063 | 1.3455 | 1.1600 |
| No log | 4.0 | 32 | 1.4300 | 0.3214 | 1.4300 | 1.1958 |
| No log | 4.25 | 34 | 1.5067 | 0.3333 | 1.5067 | 1.2275 |
| No log | 4.5 | 36 | 1.4349 | 0.3276 | 1.4349 | 1.1979 |
| No log | 4.75 | 38 | 1.4388 | 0.3770 | 1.4388 | 1.1995 |
| No log | 5.0 | 40 | 1.3805 | 0.3902 | 1.3805 | 1.1750 |
| No log | 5.25 | 42 | 1.2901 | 0.4444 | 1.2901 | 1.1358 |
| No log | 5.5 | 44 | 1.1932 | 0.3833 | 1.1932 | 1.0923 |
| No log | 5.75 | 46 | 1.1241 | 0.5312 | 1.1241 | 1.0602 |
| No log | 6.0 | 48 | 1.1396 | 0.5581 | 1.1396 | 1.0675 |
| No log | 6.25 | 50 | 1.1444 | 0.5736 | 1.1444 | 1.0698 |
| No log | 6.5 | 52 | 1.0692 | 0.5455 | 1.0692 | 1.0340 |
| No log | 6.75 | 54 | 1.0340 | 0.5714 | 1.0340 | 1.0168 |
| No log | 7.0 | 56 | 1.0228 | 0.5775 | 1.0228 | 1.0113 |
| No log | 7.25 | 58 | 1.1170 | 0.5857 | 1.1170 | 1.0569 |
| No log | 7.5 | 60 | 1.0616 | 0.6277 | 1.0616 | 1.0303 |
| No log | 7.75 | 62 | 1.0069 | 0.5414 | 1.0069 | 1.0034 |
| No log | 8.0 | 64 | 0.9424 | 0.6131 | 0.9424 | 0.9708 |
| No log | 8.25 | 66 | 0.8892 | 0.6286 | 0.8892 | 0.9430 |
| No log | 8.5 | 68 | 0.9638 | 0.6429 | 0.9638 | 0.9818 |
| No log | 8.75 | 70 | 0.9483 | 0.6 | 0.9483 | 0.9738 |
| No log | 9.0 | 72 | 0.9040 | 0.6412 | 0.9040 | 0.9508 |
| No log | 9.25 | 74 | 0.8926 | 0.6667 | 0.8926 | 0.9448 |
| No log | 9.5 | 76 | 0.8033 | 0.6619 | 0.8033 | 0.8963 |
| No log | 9.75 | 78 | 0.9307 | 0.5942 | 0.9307 | 0.9647 |
| No log | 10.0 | 80 | 0.9813 | 0.6015 | 0.9813 | 0.9906 |
| No log | 10.25 | 82 | 0.9614 | 0.6047 | 0.9614 | 0.9805 |
| No log | 10.5 | 84 | 0.8625 | 0.6667 | 0.8625 | 0.9287 |
| No log | 10.75 | 86 | 0.7352 | 0.6957 | 0.7352 | 0.8574 |
| No log | 11.0 | 88 | 0.7344 | 0.6957 | 0.7344 | 0.8569 |
| No log | 11.25 | 90 | 0.7717 | 0.7133 | 0.7717 | 0.8784 |
| No log | 11.5 | 92 | 0.7450 | 0.6815 | 0.7450 | 0.8631 |
| No log | 11.75 | 94 | 0.8799 | 0.7050 | 0.8799 | 0.9380 |
| No log | 12.0 | 96 | 0.9895 | 0.6207 | 0.9895 | 0.9948 |
| No log | 12.25 | 98 | 0.8847 | 0.6667 | 0.8847 | 0.9406 |
| No log | 12.5 | 100 | 0.8559 | 0.6619 | 0.8559 | 0.9251 |
| No log | 12.75 | 102 | 0.9125 | 0.6423 | 0.9125 | 0.9553 |
| No log | 13.0 | 104 | 0.8910 | 0.6277 | 0.8910 | 0.9439 |
| No log | 13.25 | 106 | 0.9034 | 0.6571 | 0.9034 | 0.9505 |
| No log | 13.5 | 108 | 1.0215 | 0.6131 | 1.0215 | 1.0107 |
| No log | 13.75 | 110 | 0.9906 | 0.6316 | 0.9906 | 0.9953 |
| No log | 14.0 | 112 | 0.9342 | 0.6316 | 0.9342 | 0.9665 |
| No log | 14.25 | 114 | 0.8580 | 0.6471 | 0.8580 | 0.9263 |
| No log | 14.5 | 116 | 0.8055 | 0.6714 | 0.8055 | 0.8975 |
| No log | 14.75 | 118 | 0.8014 | 0.6714 | 0.8014 | 0.8952 |
| No log | 15.0 | 120 | 0.8003 | 0.6571 | 0.8003 | 0.8946 |
| No log | 15.25 | 122 | 0.8637 | 0.6525 | 0.8637 | 0.9294 |
| No log | 15.5 | 124 | 0.8291 | 0.6525 | 0.8291 | 0.9105 |
| No log | 15.75 | 126 | 0.7882 | 0.7211 | 0.7882 | 0.8878 |
| No log | 16.0 | 128 | 0.8044 | 0.6993 | 0.8044 | 0.8969 |
| No log | 16.25 | 130 | 0.8094 | 0.7042 | 0.8094 | 0.8996 |
| No log | 16.5 | 132 | 0.8166 | 0.6475 | 0.8166 | 0.9037 |
| No log | 16.75 | 134 | 0.8957 | 0.6569 | 0.8957 | 0.9464 |
| No log | 17.0 | 136 | 0.9082 | 0.6471 | 0.9082 | 0.9530 |
| No log | 17.25 | 138 | 0.9031 | 0.6471 | 0.9031 | 0.9503 |
| No log | 17.5 | 140 | 0.8356 | 0.6569 | 0.8356 | 0.9141 |
| No log | 17.75 | 142 | 0.8495 | 0.6316 | 0.8495 | 0.9217 |
| No log | 18.0 | 144 | 0.8449 | 0.6759 | 0.8449 | 0.9192 |
| No log | 18.25 | 146 | 0.8068 | 0.6857 | 0.8068 | 0.8982 |
| No log | 18.5 | 148 | 0.8712 | 0.6620 | 0.8712 | 0.9334 |
| No log | 18.75 | 150 | 0.9683 | 0.6533 | 0.9683 | 0.9840 |
| No log | 19.0 | 152 | 0.9026 | 0.6525 | 0.9026 | 0.9500 |
| No log | 19.25 | 154 | 0.8443 | 0.6857 | 0.8443 | 0.9189 |
| No log | 19.5 | 156 | 0.9082 | 0.6377 | 0.9082 | 0.9530 |
| No log | 19.75 | 158 | 0.9250 | 0.6423 | 0.9250 | 0.9618 |
| No log | 20.0 | 160 | 0.9205 | 0.6015 | 0.9205 | 0.9594 |
| No log | 20.25 | 162 | 1.0278 | 0.5674 | 1.0278 | 1.0138 |
| No log | 20.5 | 164 | 1.1169 | 0.5634 | 1.1169 | 1.0568 |
| No log | 20.75 | 166 | 1.0358 | 0.5942 | 1.0358 | 1.0178 |
| No log | 21.0 | 168 | 0.9633 | 0.6074 | 0.9633 | 0.9815 |
| No log | 21.25 | 170 | 0.9961 | 0.6475 | 0.9961 | 0.9981 |
| No log | 21.5 | 172 | 0.9387 | 0.6571 | 0.9387 | 0.9689 |
| No log | 21.75 | 174 | 0.8516 | 0.6522 | 0.8516 | 0.9228 |
| No log | 22.0 | 176 | 0.9413 | 0.6486 | 0.9413 | 0.9702 |
| No log | 22.25 | 178 | 1.0079 | 0.64 | 1.0079 | 1.0039 |
| No log | 22.5 | 180 | 0.9115 | 0.6853 | 0.9115 | 0.9547 |
| No log | 22.75 | 182 | 0.8071 | 0.6866 | 0.8071 | 0.8984 |
| No log | 23.0 | 184 | 0.8037 | 0.6716 | 0.8037 | 0.8965 |
| No log | 23.25 | 186 | 0.8030 | 0.6565 | 0.8030 | 0.8961 |
| No log | 23.5 | 188 | 0.7986 | 0.6718 | 0.7986 | 0.8937 |
| No log | 23.75 | 190 | 0.8289 | 0.6815 | 0.8289 | 0.9104 |
| No log | 24.0 | 192 | 0.8615 | 0.6957 | 0.8615 | 0.9282 |
| No log | 24.25 | 194 | 0.8470 | 0.6765 | 0.8470 | 0.9203 |
| No log | 24.5 | 196 | 0.8552 | 0.6713 | 0.8552 | 0.9248 |
| No log | 24.75 | 198 | 0.8551 | 0.6713 | 0.8551 | 0.9247 |
| No log | 25.0 | 200 | 0.9062 | 0.6577 | 0.9062 | 0.9519 |
| No log | 25.25 | 202 | 0.9288 | 0.6040 | 0.9288 | 0.9637 |
| No log | 25.5 | 204 | 0.9146 | 0.6531 | 0.9146 | 0.9563 |
| No log | 25.75 | 206 | 0.8998 | 0.6757 | 0.8998 | 0.9486 |
| No log | 26.0 | 208 | 0.8990 | 0.6846 | 0.8990 | 0.9482 |
| No log | 26.25 | 210 | 0.8652 | 0.6800 | 0.8652 | 0.9302 |
| No log | 26.5 | 212 | 0.8451 | 0.6887 | 0.8451 | 0.9193 |
| No log | 26.75 | 214 | 0.8546 | 0.6842 | 0.8546 | 0.9244 |
| No log | 27.0 | 216 | 0.8321 | 0.6939 | 0.8321 | 0.9122 |
| No log | 27.25 | 218 | 0.8130 | 0.7007 | 0.8130 | 0.9017 |
| No log | 27.5 | 220 | 0.8149 | 0.6912 | 0.8149 | 0.9027 |
| No log | 27.75 | 222 | 0.8107 | 0.6912 | 0.8107 | 0.9004 |
| No log | 28.0 | 224 | 0.8081 | 0.6812 | 0.8081 | 0.8989 |
| No log | 28.25 | 226 | 0.8082 | 0.6763 | 0.8082 | 0.8990 |
| No log | 28.5 | 228 | 0.8091 | 0.6619 | 0.8091 | 0.8995 |
| No log | 28.75 | 230 | 0.8305 | 0.6906 | 0.8305 | 0.9113 |
| No log | 29.0 | 232 | 0.8839 | 0.6711 | 0.8839 | 0.9401 |
| No log | 29.25 | 234 | 0.8924 | 0.6667 | 0.8924 | 0.9447 |
| No log | 29.5 | 236 | 0.8792 | 0.72 | 0.8792 | 0.9377 |
| No log | 29.75 | 238 | 0.9161 | 0.6573 | 0.9161 | 0.9571 |
| No log | 30.0 | 240 | 0.9565 | 0.6338 | 0.9565 | 0.9780 |
| No log | 30.25 | 242 | 0.9384 | 0.6620 | 0.9384 | 0.9687 |
| No log | 30.5 | 244 | 1.0006 | 0.6309 | 1.0006 | 1.0003 |
| No log | 30.75 | 246 | 1.1285 | 0.5882 | 1.1285 | 1.0623 |
| No log | 31.0 | 248 | 1.1021 | 0.6483 | 1.1021 | 1.0498 |
| No log | 31.25 | 250 | 1.0230 | 0.6316 | 1.0230 | 1.0114 |
| No log | 31.5 | 252 | 0.9600 | 0.6515 | 0.9600 | 0.9798 |
| No log | 31.75 | 254 | 0.8991 | 0.6815 | 0.8991 | 0.9482 |
| No log | 32.0 | 256 | 0.8515 | 0.6618 | 0.8515 | 0.9228 |
| No log | 32.25 | 258 | 0.8376 | 0.6423 | 0.8376 | 0.9152 |
| No log | 32.5 | 260 | 0.8025 | 0.6906 | 0.8025 | 0.8958 |
| No log | 32.75 | 262 | 0.7954 | 0.6906 | 0.7954 | 0.8918 |
| No log | 33.0 | 264 | 0.8295 | 0.6667 | 0.8295 | 0.9108 |
| No log | 33.25 | 266 | 0.8124 | 0.6906 | 0.8124 | 0.9013 |
| No log | 33.5 | 268 | 0.7775 | 0.7101 | 0.7775 | 0.8817 |
| No log | 33.75 | 270 | 0.7782 | 0.6861 | 0.7782 | 0.8821 |
| No log | 34.0 | 272 | 0.7994 | 0.6763 | 0.7994 | 0.8941 |
| No log | 34.25 | 274 | 0.7895 | 0.6763 | 0.7895 | 0.8886 |
| No log | 34.5 | 276 | 0.7922 | 0.7 | 0.7922 | 0.8900 |
| No log | 34.75 | 278 | 0.8814 | 0.6667 | 0.8814 | 0.9388 |
| No log | 35.0 | 280 | 0.9255 | 0.6709 | 0.9255 | 0.9620 |
| No log | 35.25 | 282 | 0.8943 | 0.6579 | 0.8943 | 0.9457 |
| No log | 35.5 | 284 | 0.8204 | 0.6815 | 0.8204 | 0.9058 |
| No log | 35.75 | 286 | 0.7941 | 0.6963 | 0.7941 | 0.8911 |
| No log | 36.0 | 288 | 0.7871 | 0.6963 | 0.7871 | 0.8872 |
| No log | 36.25 | 290 | 0.8084 | 0.6815 | 0.8084 | 0.8991 |
| No log | 36.5 | 292 | 0.8878 | 0.6846 | 0.8878 | 0.9422 |
| No log | 36.75 | 294 | 0.8893 | 0.7067 | 0.8893 | 0.9430 |
| No log | 37.0 | 296 | 0.8098 | 0.7083 | 0.8098 | 0.8999 |
| No log | 37.25 | 298 | 0.7644 | 0.6861 | 0.7644 | 0.8743 |
| No log | 37.5 | 300 | 0.7743 | 0.6765 | 0.7743 | 0.8800 |
| No log | 37.75 | 302 | 0.7814 | 0.6861 | 0.7814 | 0.8840 |
| No log | 38.0 | 304 | 0.8011 | 0.6912 | 0.8011 | 0.8950 |
| No log | 38.25 | 306 | 0.8478 | 0.6812 | 0.8478 | 0.9207 |
| No log | 38.5 | 308 | 0.9024 | 0.6475 | 0.9024 | 0.9499 |
| No log | 38.75 | 310 | 0.9025 | 0.6667 | 0.9025 | 0.9500 |
| No log | 39.0 | 312 | 0.8917 | 0.6714 | 0.8917 | 0.9443 |
| No log | 39.25 | 314 | 0.9079 | 0.6531 | 0.9079 | 0.9528 |
| No log | 39.5 | 316 | 0.8944 | 0.6667 | 0.8944 | 0.9457 |
| No log | 39.75 | 318 | 0.8692 | 0.6901 | 0.8692 | 0.9323 |
| No log | 40.0 | 320 | 0.8531 | 0.6912 | 0.8531 | 0.9237 |
| No log | 40.25 | 322 | 0.8540 | 0.6912 | 0.8540 | 0.9241 |
| No log | 40.5 | 324 | 0.8601 | 0.6912 | 0.8601 | 0.9274 |
| No log | 40.75 | 326 | 0.8993 | 0.7042 | 0.8993 | 0.9483 |
| No log | 41.0 | 328 | 0.9632 | 0.6711 | 0.9632 | 0.9814 |
| No log | 41.25 | 330 | 0.9402 | 0.6800 | 0.9402 | 0.9696 |
| No log | 41.5 | 332 | 0.8785 | 0.6812 | 0.8785 | 0.9373 |
| No log | 41.75 | 334 | 0.8495 | 0.6715 | 0.8495 | 0.9217 |
| No log | 42.0 | 336 | 0.8450 | 0.6765 | 0.8450 | 0.9193 |
| No log | 42.25 | 338 | 0.8416 | 0.6569 | 0.8416 | 0.9174 |
| No log | 42.5 | 340 | 0.8395 | 0.6522 | 0.8395 | 0.9163 |
| No log | 42.75 | 342 | 0.8550 | 0.6812 | 0.8550 | 0.9247 |
| No log | 43.0 | 344 | 0.8482 | 0.6806 | 0.8482 | 0.9210 |
| No log | 43.25 | 346 | 0.8422 | 0.6812 | 0.8422 | 0.9177 |
| No log | 43.5 | 348 | 0.8610 | 0.6861 | 0.8610 | 0.9279 |
| No log | 43.75 | 350 | 0.9155 | 0.6316 | 0.9155 | 0.9568 |
| No log | 44.0 | 352 | 0.9693 | 0.6471 | 0.9693 | 0.9845 |
| No log | 44.25 | 354 | 0.9751 | 0.6475 | 0.9751 | 0.9875 |
| No log | 44.5 | 356 | 0.9037 | 0.6316 | 0.9037 | 0.9507 |
| No log | 44.75 | 358 | 0.8283 | 0.6667 | 0.8283 | 0.9101 |
| No log | 45.0 | 360 | 0.7955 | 0.6466 | 0.7955 | 0.8919 |
| No log | 45.25 | 362 | 0.8170 | 0.6418 | 0.8170 | 0.9039 |
| No log | 45.5 | 364 | 0.8169 | 0.6176 | 0.8169 | 0.9038 |
| No log | 45.75 | 366 | 0.7803 | 0.6765 | 0.7803 | 0.8833 |
| No log | 46.0 | 368 | 0.7711 | 0.6906 | 0.7711 | 0.8781 |
| No log | 46.25 | 370 | 0.8085 | 0.6573 | 0.8085 | 0.8992 |
| No log | 46.5 | 372 | 0.8154 | 0.6429 | 0.8154 | 0.9030 |
| No log | 46.75 | 374 | 0.7906 | 0.6667 | 0.7906 | 0.8892 |
| No log | 47.0 | 376 | 0.7908 | 0.6912 | 0.7908 | 0.8893 |
| No log | 47.25 | 378 | 0.7923 | 0.6912 | 0.7923 | 0.8901 |
| No log | 47.5 | 380 | 0.7983 | 0.6861 | 0.7983 | 0.8935 |
| No log | 47.75 | 382 | 0.8121 | 0.6853 | 0.8121 | 0.9012 |
| No log | 48.0 | 384 | 0.8306 | 0.6795 | 0.8306 | 0.9114 |
| No log | 48.25 | 386 | 0.8504 | 0.6790 | 0.8504 | 0.9222 |
| No log | 48.5 | 388 | 0.8054 | 0.7205 | 0.8054 | 0.8974 |
| No log | 48.75 | 390 | 0.7628 | 0.7248 | 0.7628 | 0.8734 |
| No log | 49.0 | 392 | 0.7536 | 0.6815 | 0.7536 | 0.8681 |
| No log | 49.25 | 394 | 0.7692 | 0.6815 | 0.7692 | 0.8770 |
| No log | 49.5 | 396 | 0.7919 | 0.6815 | 0.7919 | 0.8899 |
| No log | 49.75 | 398 | 0.8288 | 0.6617 | 0.8288 | 0.9104 |
| No log | 50.0 | 400 | 0.8675 | 0.6475 | 0.8675 | 0.9314 |
| No log | 50.25 | 402 | 0.8982 | 0.6759 | 0.8982 | 0.9478 |
| No log | 50.5 | 404 | 0.8867 | 0.6712 | 0.8867 | 0.9417 |
| No log | 50.75 | 406 | 0.8422 | 0.6573 | 0.8422 | 0.9177 |
| No log | 51.0 | 408 | 0.8027 | 0.6618 | 0.8027 | 0.8960 |
| No log | 51.25 | 410 | 0.7910 | 0.6849 | 0.7910 | 0.8894 |
| No log | 51.5 | 412 | 0.7784 | 0.7075 | 0.7784 | 0.8823 |
| No log | 51.75 | 414 | 0.7626 | 0.7162 | 0.7626 | 0.8732 |
| No log | 52.0 | 416 | 0.7661 | 0.7451 | 0.7661 | 0.8753 |
| No log | 52.25 | 418 | 0.7850 | 0.7273 | 0.7850 | 0.8860 |
| No log | 52.5 | 420 | 0.8173 | 0.6753 | 0.8173 | 0.9041 |
| No log | 52.75 | 422 | 0.8199 | 0.6620 | 0.8199 | 0.9055 |
| No log | 53.0 | 424 | 0.8112 | 0.6667 | 0.8112 | 0.9007 |
| No log | 53.25 | 426 | 0.8038 | 0.6765 | 0.8038 | 0.8966 |
| No log | 53.5 | 428 | 0.7847 | 0.6861 | 0.7847 | 0.8858 |
| No log | 53.75 | 430 | 0.7724 | 0.7042 | 0.7724 | 0.8789 |
| No log | 54.0 | 432 | 0.7696 | 0.7034 | 0.7696 | 0.8773 |
| No log | 54.25 | 434 | 0.7805 | 0.7152 | 0.7805 | 0.8835 |
| No log | 54.5 | 436 | 0.7928 | 0.7067 | 0.7928 | 0.8904 |
| No log | 54.75 | 438 | 0.8000 | 0.6993 | 0.8000 | 0.8944 |
| No log | 55.0 | 440 | 0.7988 | 0.6901 | 0.7988 | 0.8937 |
| No log | 55.25 | 442 | 0.8042 | 0.6901 | 0.8042 | 0.8968 |
| No log | 55.5 | 444 | 0.7971 | 0.6812 | 0.7971 | 0.8928 |
| No log | 55.75 | 446 | 0.7862 | 0.6812 | 0.7862 | 0.8867 |
| No log | 56.0 | 448 | 0.7890 | 0.7101 | 0.7890 | 0.8882 |
| No log | 56.25 | 450 | 0.7860 | 0.7042 | 0.7860 | 0.8865 |
| No log | 56.5 | 452 | 0.7644 | 0.6901 | 0.7644 | 0.8743 |
| No log | 56.75 | 454 | 0.7627 | 0.7172 | 0.7627 | 0.8733 |
| No log | 57.0 | 456 | 0.7815 | 0.7034 | 0.7815 | 0.8840 |
| No log | 57.25 | 458 | 0.8063 | 0.6944 | 0.8063 | 0.8980 |
| No log | 57.5 | 460 | 0.8137 | 0.6812 | 0.8137 | 0.9020 |
| No log | 57.75 | 462 | 0.8071 | 0.6812 | 0.8071 | 0.8984 |
| No log | 58.0 | 464 | 0.8032 | 0.6812 | 0.8032 | 0.8962 |
| No log | 58.25 | 466 | 0.7977 | 0.6812 | 0.7977 | 0.8931 |
| No log | 58.5 | 468 | 0.8032 | 0.6812 | 0.8032 | 0.8962 |
| No log | 58.75 | 470 | 0.8188 | 0.6957 | 0.8188 | 0.9049 |
| No log | 59.0 | 472 | 0.8434 | 0.6912 | 0.8434 | 0.9184 |
| No log | 59.25 | 474 | 0.8588 | 0.7050 | 0.8588 | 0.9267 |
| No log | 59.5 | 476 | 0.8394 | 0.6912 | 0.8394 | 0.9162 |
| No log | 59.75 | 478 | 0.8176 | 0.6912 | 0.8176 | 0.9042 |
| No log | 60.0 | 480 | 0.7989 | 0.6957 | 0.7989 | 0.8938 |
| No log | 60.25 | 482 | 0.7873 | 0.6861 | 0.7873 | 0.8873 |
| No log | 60.5 | 484 | 0.7818 | 0.6957 | 0.7818 | 0.8842 |
| No log | 60.75 | 486 | 0.7880 | 0.6957 | 0.7880 | 0.8877 |
| No log | 61.0 | 488 | 0.8108 | 0.7 | 0.8108 | 0.9005 |
| No log | 61.25 | 490 | 0.8220 | 0.6950 | 0.8220 | 0.9066 |
| No log | 61.5 | 492 | 0.8198 | 0.6957 | 0.8198 | 0.9055 |
| No log | 61.75 | 494 | 0.8154 | 0.6763 | 0.8154 | 0.9030 |
| No log | 62.0 | 496 | 0.8163 | 0.6763 | 0.8163 | 0.9035 |
| No log | 62.25 | 498 | 0.8176 | 0.6763 | 0.8176 | 0.9042 |
| 0.2952 | 62.5 | 500 | 0.8150 | 0.6763 | 0.8150 | 0.9028 |
| 0.2952 | 62.75 | 502 | 0.8122 | 0.6906 | 0.8122 | 0.9012 |
| 0.2952 | 63.0 | 504 | 0.8279 | 0.6763 | 0.8279 | 0.9099 |
| 0.2952 | 63.25 | 506 | 0.8400 | 0.6812 | 0.8400 | 0.9165 |
| 0.2952 | 63.5 | 508 | 0.8349 | 0.6765 | 0.8349 | 0.9137 |
| 0.2952 | 63.75 | 510 | 0.8267 | 0.6957 | 0.8267 | 0.9092 |
| 0.2952 | 64.0 | 512 | 0.8220 | 0.6957 | 0.8220 | 0.9067 |
| 0.2952 | 64.25 | 514 | 0.8166 | 0.6763 | 0.8166 | 0.9037 |
| 0.2952 | 64.5 | 516 | 0.8071 | 0.6857 | 0.8071 | 0.8984 |
| 0.2952 | 64.75 | 518 | 0.8025 | 0.6906 | 0.8025 | 0.8958 |
| 0.2952 | 65.0 | 520 | 0.7983 | 0.6906 | 0.7983 | 0.8935 |
| 0.2952 | 65.25 | 522 | 0.7980 | 0.6763 | 0.7980 | 0.8933 |
| 0.2952 | 65.5 | 524 | 0.8152 | 0.6713 | 0.8152 | 0.9029 |
| 0.2952 | 65.75 | 526 | 0.8301 | 0.6479 | 0.8301 | 0.9111 |
| 0.2952 | 66.0 | 528 | 0.8295 | 0.6479 | 0.8295 | 0.9108 |
| 0.2952 | 66.25 | 530 | 0.8221 | 0.6479 | 0.8221 | 0.9067 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu118
- Datasets 2.21.0
- Tokenizers 0.19.1
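The Qwk column in the table above is quadratic weighted kappa. A minimal sketch of how such a score can be computed; using scikit-learn is an assumption, since the card does not say how its metrics were implemented, and the labels below are toy values:
```python
# Hedged sketch: quadratic weighted kappa (Qwk), Mse, and Rmse as reported above.
from sklearn.metrics import cohen_kappa_score, mean_squared_error

y_true = [0, 1, 2, 3, 2, 1]  # toy gold labels
y_pred = [0, 1, 2, 2, 2, 0]  # toy predictions

qwk = cohen_kappa_score(y_true, y_pred, weights="quadratic")
mse = mean_squared_error(y_true, y_pred)
rmse = mse ** 0.5
print(f"Qwk={qwk:.4f}  Mse={mse:.4f}  Rmse={rmse:.4f}")
```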
|
Nohobby/L3.3-Prikol-70B-v0.4
|
Nohobby
| 2025-02-04T01:07:30Z | 14 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:ArliAI/Llama-3.3-70B-ArliAI-RPMax-v1.4",
"base_model:merge:ArliAI/Llama-3.3-70B-ArliAI-RPMax-v1.4",
"base_model:Nohobby/AbominationSnowPig",
"base_model:merge:Nohobby/AbominationSnowPig",
"base_model:SicariusSicariiStuff/Negative_LLAMA_70B",
"base_model:merge:SicariusSicariiStuff/Negative_LLAMA_70B",
"base_model:deepseek-ai/DeepSeek-R1-Distill-Llama-70B",
"base_model:merge:deepseek-ai/DeepSeek-R1-Distill-Llama-70B",
"base_model:nbeerbower/Llama-3.1-Nemotron-lorablated-70B",
"base_model:merge:nbeerbower/Llama-3.1-Nemotron-lorablated-70B",
"base_model:sophosympatheia/New-Dawn-Llama-3.1-70B-v1.1",
"base_model:merge:sophosympatheia/New-Dawn-Llama-3.1-70B-v1.1",
"base_model:sophosympatheia/Nova-Tempus-70B-v0.2",
"base_model:merge:sophosympatheia/Nova-Tempus-70B-v0.2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-02-03T17:54:11Z |
---
base_model:
- sophosympatheia/Nova-Tempus-70B-v0.2
- nbeerbower/Llama-3.1-Nemotron-lorablated-70B
- sophosympatheia/New-Dawn-Llama-3.1-70B-v1.1
- deepseek-ai/DeepSeek-R1-Distill-Llama-70B
- ArliAI/Llama-3.3-70B-ArliAI-RPMax-v1.4
- Nohobby/AbominationSnowPig
- SicariusSicariiStuff/Negative_LLAMA_70B
library_name: transformers
tags:
- mergekit
- merge
---
# Prikol
> I don't even know anymore

### Overview
I have yet to try it. UPD: it sucks, bleh.
Sometimes it mistakes {{user}} for {{char}} and can't think. Other than that, the behavior is similar to its predecessors.
It sometimes gives some funny replies though, yay!
If you still want to give it a try, here's the cursed text completion preset for cursed models, which makes them somewhat bearable:
https://files.catbox.moe/qr3s64.json
Or this one:
https://files.catbox.moe/97xryh.json
Prompt format: Llama3
### Quants
https://huggingface.co/bartowski/Nohobby_L3.3-Prikol-70B-v0.4-GGUF
## Merge Details
### Step1
```yaml
base_model: sophosympatheia/Nova-Tempus-70B-v0.2
merge_method: model_stock
dtype: bfloat16
models:
- model: nbeerbower/Llama-3.1-Nemotron-lorablated-70B
- model: sophosympatheia/New-Dawn-Llama-3.1-70B-v1.1
tokenizer:
source: sophosympatheia/Nova-Tempus-70B-v0.2
```
### Step2
```yaml
models:
- model: unsloth/DeepSeek-R1-Distill-Llama-70B
- model: ArliAI/Llama-3.3-70B-ArliAI-RPMax-v1.4
parameters:
select_topk:
- value: [0.18, 0.3, 0.32, 0.38, 0.32, 0.3]
- model: Nohobby/AbominationSnowPig
parameters:
select_topk:
- value: [0.1, 0.06, 0.05, 0.05, 0.08]
- model: SicariusSicariiStuff/Negative_LLAMA_70B
parameters:
select_topk: 0.17
- model: mergekit-community/L3.3-L3.1-NewTempusBlated-70B
parameters:
select_topk: 0.55
base_model: mergekit-community/L3.3-L3.1-NewTempusBlated-70B
merge_method: sce
parameters:
int8_mask: true
rescale: true
normalize: true
dtype: float32
out_dtype: bfloat16
tokenizer_source: base
```
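The card does not say how the two configs above were executed. A hedged sketch, assuming mergekit's standard `mergekit-yaml` entry point and hypothetical file and directory names:
```python
# Hedged sketch: driving the two-step merge above via mergekit's CLI.
# File and output names are hypothetical; only the `mergekit-yaml` command is assumed.
import subprocess

subprocess.run(["mergekit-yaml", "step1.yaml", "./step1-out"], check=True)
subprocess.run(["mergekit-yaml", "step2.yaml", "./prikol-v0.4"], check=True)
```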
|
genki10/ASAP_FineTuningBERT_AugV8_k10_task1_organization_fold0
|
genki10
| 2025-02-04T01:06:33Z | 14 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-02-03T21:19:36Z |
---
library_name: transformers
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: ASAP_FineTuningBERT_AugV8_k10_task1_organization_fold0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ASAP_FineTuningBERT_AugV8_k10_task1_organization_fold0
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5943
- Qwk: 0.5472
- Mse: 0.5943
- Rmse: 0.7709
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|
| No log | 0.25 | 2 | 9.0420 | 0.0 | 9.0420 | 3.0070 |
| No log | 0.5 | 4 | 7.6145 | 0.0 | 7.6145 | 2.7594 |
| No log | 0.75 | 6 | 6.8108 | 0.0 | 6.8108 | 2.6097 |
| No log | 1.0 | 8 | 5.9556 | 0.0209 | 5.9556 | 2.4404 |
| 5.2789 | 1.25 | 10 | 5.0773 | 0.0115 | 5.0773 | 2.2533 |
| 5.2789 | 1.5 | 12 | 4.2390 | 0.0039 | 4.2390 | 2.0589 |
| 5.2789 | 1.75 | 14 | 3.4491 | 0.0 | 3.4491 | 1.8572 |
| 5.2789 | 2.0 | 16 | 2.7368 | 0.0 | 2.7368 | 1.6543 |
| 5.2789 | 2.25 | 18 | 2.0942 | 0.0689 | 2.0942 | 1.4471 |
| 2.4977 | 2.5 | 20 | 1.6274 | 0.0316 | 1.6274 | 1.2757 |
| 2.4977 | 2.75 | 22 | 1.3581 | 0.0316 | 1.3581 | 1.1654 |
| 2.4977 | 3.0 | 24 | 1.1374 | 0.0316 | 1.1374 | 1.0665 |
| 2.4977 | 3.25 | 26 | 1.5673 | 0.0316 | 1.5673 | 1.2519 |
| 2.4977 | 3.5 | 28 | 2.0877 | 0.1765 | 2.0877 | 1.4449 |
| 1.9099 | 3.75 | 30 | 1.4721 | 0.0575 | 1.4721 | 1.2133 |
| 1.9099 | 4.0 | 32 | 0.8284 | 0.2557 | 0.8284 | 0.9101 |
| 1.9099 | 4.25 | 34 | 1.0157 | 0.0567 | 1.0157 | 1.0078 |
| 1.9099 | 4.5 | 36 | 1.7498 | 0.1415 | 1.7498 | 1.3228 |
| 1.9099 | 4.75 | 38 | 1.3940 | 0.0714 | 1.3940 | 1.1807 |
| 1.7092 | 5.0 | 40 | 1.1002 | 0.0779 | 1.1002 | 1.0489 |
| 1.7092 | 5.25 | 42 | 1.1317 | 0.1516 | 1.1318 | 1.0638 |
| 1.7092 | 5.5 | 44 | 1.0288 | 0.2613 | 1.0288 | 1.0143 |
| 1.7092 | 5.75 | 46 | 1.0641 | 0.2996 | 1.0641 | 1.0315 |
| 1.7092 | 6.0 | 48 | 1.0741 | 0.3357 | 1.0741 | 1.0364 |
| 1.319 | 6.25 | 50 | 0.6738 | 0.4565 | 0.6738 | 0.8209 |
| 1.319 | 6.5 | 52 | 0.7498 | 0.4629 | 0.7498 | 0.8659 |
| 1.319 | 6.75 | 54 | 0.6227 | 0.4495 | 0.6227 | 0.7891 |
| 1.319 | 7.0 | 56 | 0.6315 | 0.4608 | 0.6315 | 0.7947 |
| 1.319 | 7.25 | 58 | 0.5875 | 0.4590 | 0.5875 | 0.7665 |
| 0.7315 | 7.5 | 60 | 0.6049 | 0.4375 | 0.6049 | 0.7777 |
| 0.7315 | 7.75 | 62 | 0.6788 | 0.4699 | 0.6788 | 0.8239 |
| 0.7315 | 8.0 | 64 | 0.6401 | 0.4462 | 0.6401 | 0.8000 |
| 0.7315 | 8.25 | 66 | 0.7471 | 0.4749 | 0.7471 | 0.8643 |
| 0.7315 | 8.5 | 68 | 0.6558 | 0.4991 | 0.6558 | 0.8098 |
| 0.4584 | 8.75 | 70 | 0.6045 | 0.5298 | 0.6045 | 0.7775 |
| 0.4584 | 9.0 | 72 | 1.0280 | 0.4024 | 1.0280 | 1.0139 |
| 0.4584 | 9.25 | 74 | 0.5699 | 0.5009 | 0.5699 | 0.7549 |
| 0.4584 | 9.5 | 76 | 0.5599 | 0.5188 | 0.5599 | 0.7483 |
| 0.4584 | 9.75 | 78 | 0.8521 | 0.4420 | 0.8521 | 0.9231 |
| 0.3911 | 10.0 | 80 | 0.5990 | 0.5326 | 0.5990 | 0.7740 |
| 0.3911 | 10.25 | 82 | 0.6045 | 0.5448 | 0.6045 | 0.7775 |
| 0.3911 | 10.5 | 84 | 0.7424 | 0.5166 | 0.7424 | 0.8616 |
| 0.3911 | 10.75 | 86 | 0.6233 | 0.5375 | 0.6233 | 0.7895 |
| 0.3911 | 11.0 | 88 | 0.6030 | 0.5613 | 0.6030 | 0.7765 |
| 0.2934 | 11.25 | 90 | 0.7415 | 0.5094 | 0.7415 | 0.8611 |
| 0.2934 | 11.5 | 92 | 0.6086 | 0.5581 | 0.6086 | 0.7801 |
| 0.2934 | 11.75 | 94 | 0.5970 | 0.5577 | 0.5970 | 0.7727 |
| 0.2934 | 12.0 | 96 | 0.7225 | 0.5170 | 0.7225 | 0.8500 |
| 0.2934 | 12.25 | 98 | 0.6135 | 0.5511 | 0.6135 | 0.7833 |
| 0.2689 | 12.5 | 100 | 0.5936 | 0.5480 | 0.5936 | 0.7705 |
| 0.2689 | 12.75 | 102 | 0.7169 | 0.5183 | 0.7169 | 0.8467 |
| 0.2689 | 13.0 | 104 | 0.5848 | 0.5551 | 0.5848 | 0.7647 |
| 0.2689 | 13.25 | 106 | 0.6048 | 0.5350 | 0.6048 | 0.7777 |
| 0.2689 | 13.5 | 108 | 0.5943 | 0.5472 | 0.5943 | 0.7709 |
### Framework versions
- Transformers 4.47.0
- Pytorch 2.5.1+cu121
- Datasets 3.2.0
- Tokenizers 0.21.0
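A hedged sketch of how the hyperparameters listed above map onto `transformers.TrainingArguments`; the card does not include the training script, so the output directory and any unlisted options are assumptions:
```python
# Hedged sketch: the card's hyperparameters expressed as TrainingArguments.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./ASAP_FineTuningBERT_AugV8_k10_task1_organization_fold0",  # hypothetical
    learning_rate=2e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    optim="adamw_torch",
    lr_scheduler_type="linear",
    num_train_epochs=100,
)
```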
|
modelteam-ai/lop_jan2025
|
modelteam-ai
| 2025-02-04T01:05:10Z | 34 | 0 |
peft
|
[
"peft",
"safetensors",
"code",
"base_model:Salesforce/codet5p-770m",
"base_model:adapter:Salesforce/codet5p-770m",
"license:bigcode-openrail-m",
"region:us"
] | null | 2025-01-17T21:37:06Z |
---
base_model: Salesforce/codet5p-770m
library_name: peft
license: bigcode-openrail-m
tags:
- code
---
# Overview
This model is used to build the ModelTeam profile for engineers, allowing them to validate and showcase their skills. It is a PEFT-finetuned version of the Salesforce/codet5p-770m model, specifically trained to predict sections of the profile. The model is lightweight and efficient, making it suitable for running on a laptop.
Website: [modelteam.ai](https://www.modelteam.ai/)
Instructions to build your profile: [modelteam git](https://github.com/modelteam-ai/modelteam.ai)
### Framework versions
- PEFT 0.11.0
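A hedged sketch (not from the card) of attaching this adapter to its CodeT5+ base; whether the profile-prediction task is served through the seq2seq generation head in exactly this way is an assumption, and the example input is illustrative only:
```python
# Hedged sketch: loading the modelteam-ai adapter on top of Salesforce/codet5p-770m.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
from peft import PeftModel

base_id = "Salesforce/codet5p-770m"
adapter_id = "modelteam-ai/lop_jan2025"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForSeq2SeqLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base_model, adapter_id)

inputs = tokenizer("def add(a, b): return a + b", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```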
|
mrferr3t/74aa452d-d1d0-4df0-a0ac-5f6ce0a1c439
|
mrferr3t
| 2025-02-04T01:03:36Z | 8 | 0 |
peft
|
[
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2.5-0.5B",
"base_model:adapter:unsloth/Qwen2.5-0.5B",
"license:apache-2.0",
"region:us"
] | null | 2025-02-04T00:58:25Z |
---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2.5-0.5B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 74aa452d-d1d0-4df0-a0ac-5f6ce0a1c439
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
auto_find_batch_size: true
base_model: unsloth/Qwen2.5-0.5B
bf16: auto
chat_template: llama3
dataloader_num_workers: 12
dataset_prepared_path: null
datasets:
- data_files:
- 9ee4c7d4f914610d_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/9ee4c7d4f914610d_train_data.json
type:
field_input: input
field_instruction: instruction
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: 3
early_stopping_threshold: 0.001
eval_max_new_tokens: 128
eval_steps: 40
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 2
gradient_checkpointing: false
group_by_length: false
hub_model_id: mrferr3t/74aa452d-d1d0-4df0-a0ac-5f6ce0a1c439
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0003
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 100
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
micro_batch_size: 32
mlflow_experiment_name: /tmp/9ee4c7d4f914610d_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 50
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
s2_attention: null
sample_packing: false
save_steps: 40
saves_per_epoch: 0
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 8c04dad1-b647-409f-8c82-04b3516dd360
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 8c04dad1-b647-409f-8c82-04b3516dd360
warmup_ratio: 0.05
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 74aa452d-d1d0-4df0-a0ac-5f6ce0a1c439
This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B](https://huggingface.co/unsloth/Qwen2.5-0.5B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5382
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Use adamw_bnb_8bit with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 86
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0144 | 1 | 0.6688 |
| No log | 0.5755 | 40 | 0.5475 |
| No log | 1.1511 | 80 | 0.5128 |
| 0.4897 | 1.7266 | 120 | 0.4924 |
| 0.4897 | 2.3022 | 160 | 0.4927 |
| 0.382 | 2.8777 | 200 | 0.4873 |
| 0.382 | 3.4532 | 240 | 0.5103 |
| 0.382 | 4.0288 | 280 | 0.5063 |
| 0.3035 | 4.6043 | 320 | 0.5382 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.3.1+cu121
- Datasets 3.0.1
- Tokenizers 0.20.1
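A hedged sketch (not from the card) using peft's `AutoPeftModelForCausalLM`, which resolves the base model recorded in the adapter config (here `unsloth/Qwen2.5-0.5B`) automatically; the prompt is illustrative:
```python
# Hedged sketch: one-step loading of this adapter via AutoPeftModelForCausalLM.
from transformers import AutoTokenizer
from peft import AutoPeftModelForCausalLM

adapter_id = "mrferr3t/74aa452d-d1d0-4df0-a0ac-5f6ce0a1c439"

model = AutoPeftModelForCausalLM.from_pretrained(adapter_id)
tokenizer = AutoTokenizer.from_pretrained("unsloth/Qwen2.5-0.5B")

inputs = tokenizer("Explain LoRA in one sentence.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```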
|
lesso/f476907e-5bb3-461e-ad9c-06f3257a91ea
|
lesso
| 2025-02-04T01:03:11Z | 8 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:trl-internal-testing/tiny-random-LlamaForCausalLM",
"base_model:adapter:trl-internal-testing/tiny-random-LlamaForCausalLM",
"region:us"
] | null | 2025-02-04T01:02:21Z |
---
library_name: peft
base_model: trl-internal-testing/tiny-random-LlamaForCausalLM
tags:
- axolotl
- generated_from_trainer
model-index:
- name: f476907e-5bb3-461e-ad9c-06f3257a91ea
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: trl-internal-testing/tiny-random-LlamaForCausalLM
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- b77b35ef124b1260_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/b77b35ef124b1260_train_data.json
type:
field_input: ''
field_instruction: inputs
field_output: targets
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
do_eval: true
early_stopping_patience: 5
eval_batch_size: 2
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 2
gradient_checkpointing: true
group_by_length: true
hub_model_id: lesso/f476907e-5bb3-461e-ad9c-06f3257a91ea
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001015
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: linear
max_grad_norm: 1.0
max_steps: 150
micro_batch_size: 2
mlflow_experiment_name: /tmp/G.O.D/b77b35ef124b1260_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 2
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
sequence_len: 512
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: f87570f7-b1f0-48ca-b737-ebd938967009
wandb_project: ab-god15
wandb_run: your_name
wandb_runid: f87570f7-b1f0-48ca-b737-ebd938967009
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# f476907e-5bb3-461e-ad9c-06f3257a91ea
This model is a fine-tuned version of [trl-internal-testing/tiny-random-LlamaForCausalLM](https://huggingface.co/trl-internal-testing/tiny-random-LlamaForCausalLM) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 10.3643
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001015
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10
- training_steps: 150
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 10.3758 | 0.0009 | 1 | 10.3794 |
| 10.4098 | 0.0447 | 50 | 10.3757 |
| 10.3686 | 0.0894 | 100 | 10.3681 |
| 10.3609 | 0.1341 | 150 | 10.3643 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
jssky/c9161c2a-8930-433a-a6d7-78263d62f53e
|
jssky
| 2025-02-04T01:02:48Z | 11 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:trl-internal-testing/tiny-random-LlamaForCausalLM",
"base_model:adapter:trl-internal-testing/tiny-random-LlamaForCausalLM",
"region:us"
] | null | 2025-02-04T01:02:05Z |
---
library_name: peft
base_model: trl-internal-testing/tiny-random-LlamaForCausalLM
tags:
- axolotl
- generated_from_trainer
model-index:
- name: c9161c2a-8930-433a-a6d7-78263d62f53e
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.6.0`
```yaml
adapter: lora
base_model: trl-internal-testing/tiny-random-LlamaForCausalLM
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- b77b35ef124b1260_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/b77b35ef124b1260_train_data.json
type:
field_input: ''
field_instruction: inputs
field_output: targets
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: 5
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: jssky/c9161c2a-8930-433a-a6d7-78263d62f53e
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 8
mlflow_experiment_name: /tmp/b77b35ef124b1260_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: f87570f7-b1f0-48ca-b737-ebd938967009
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: f87570f7-b1f0-48ca-b737-ebd938967009
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# c9161c2a-8930-433a-a6d7-78263d62f53e
This model is a fine-tuned version of [trl-internal-testing/tiny-random-LlamaForCausalLM](https://huggingface.co/trl-internal-testing/tiny-random-LlamaForCausalLM) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 10.3442
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use adamw_bnb_8bit with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 10.3533 | 0.3571 | 50 | 10.3521 |
| 10.3272 | 0.7143 | 100 | 10.3465 |
| 10.3443 | 1.0714 | 150 | 10.3447 |
| 10.3403 | 1.4286 | 200 | 10.3442 |
### Framework versions
- PEFT 0.14.0
- Transformers 4.46.3
- Pytorch 2.5.1+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3
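As a hedged worked example, the `total_train_batch_size: 32` reported above follows from the axolotl config values; a single-GPU run (world_size = 1) is an assumption the card does not state explicitly:
```python
# Hedged sketch: deriving the effective batch size from the config above.
micro_batch_size = 8             # per-device batch size from the axolotl config
gradient_accumulation_steps = 4  # from the axolotl config
world_size = 1                   # assumption: single GPU

total_train_batch_size = micro_batch_size * gradient_accumulation_steps * world_size
assert total_train_batch_size == 32
print(total_train_batch_size)
```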
|
robiual-awal/caabc5d1-34d8-4728-b31c-b67f48c84ed2
|
robiual-awal
| 2025-02-04T01:02:47Z | 10 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:trl-internal-testing/tiny-random-LlamaForCausalLM",
"base_model:adapter:trl-internal-testing/tiny-random-LlamaForCausalLM",
"region:us"
] | null | 2025-02-04T01:02:22Z |
---
library_name: peft
base_model: trl-internal-testing/tiny-random-LlamaForCausalLM
tags:
- axolotl
- generated_from_trainer
model-index:
- name: caabc5d1-34d8-4728-b31c-b67f48c84ed2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: trl-internal-testing/tiny-random-LlamaForCausalLM
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- b77b35ef124b1260_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/b77b35ef124b1260_train_data.json
type:
field_input: ''
field_instruction: inputs
field_output: targets
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: robiual-awal/caabc5d1-34d8-4728-b31c-b67f48c84ed2
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: constant
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/b77b35ef124b1260_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: f87570f7-b1f0-48ca-b737-ebd938967009
wandb_project: Birthday-SN56-29-Gradients-On-Demand
wandb_run: your_name
wandb_runid: f87570f7-b1f0-48ca-b737-ebd938967009
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# caabc5d1-34d8-4728-b31c-b67f48c84ed2
This model is a fine-tuned version of [trl-internal-testing/tiny-random-LlamaForCausalLM](https://huggingface.co/trl-internal-testing/tiny-random-LlamaForCausalLM) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 10.3500
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: constant
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0018 | 1 | 10.3793 |
| 10.3733 | 0.0894 | 50 | 10.3743 |
| 10.358 | 0.1788 | 100 | 10.3557 |
| 10.3514 | 0.2682 | 150 | 10.3509 |
| 10.3501 | 0.3576 | 200 | 10.3500 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|