---
library_name: transformers
license: mit
base_model: deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B
tags:
  - generated_from_trainer
model-index:
  - name: MyModel
    results: []
---

# MyModel

This model is a fine-tuned version of [deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B) on an unspecified dataset. It achieves the following results on the evaluation set:

- Loss: 0.2093
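
A minimal loading-and-generation sketch with 🤗 Transformers is below. The repo id `Wade5/MyModel` is an assumption based on this card's location; substitute the actual model path.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed Hub repo id; replace with the actual path to this model.
model_id = "Wade5/MyModel"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Explain mixed-precision training in one sentence."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```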

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (an equivalent `TrainingArguments` sketch follows the list):

- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch implementation) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
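
A hedged reconstruction of these settings as a `TrainingArguments` object. The `output_dir`, the step-based evaluation cadence (every 500 steps, matching the results table below), and the use of `fp16` for Native AMP are assumptions not stated explicitly on this card.

```python
from transformers import TrainingArguments

# Sketch of the training configuration implied by the list above.
training_args = TrainingArguments(
    output_dir="MyModel",            # assumed output directory
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adamw_torch",             # AdamW, torch implementation
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=3,
    fp16=True,                       # Native AMP (assumed fp16 rather than bf16)
    eval_strategy="steps",           # assumed: eval every 500 steps matches the table
    eval_steps=500,
)
```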

### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.9491        | 0.2693 | 500  | 0.6303          |
| 0.6241        | 0.5385 | 1000 | 0.5958          |
| 0.5923        | 0.8078 | 1500 | 0.5590          |
| 0.5584        | 1.0770 | 2000 | 0.5180          |
| 0.5264        | 1.3463 | 2500 | 0.4764          |
| 0.5164        | 1.6155 | 3000 | 0.4459          |
| 0.5046        | 1.8848 | 3500 | 0.4069          |
| 0.3944        | 2.1540 | 4000 | 0.3134          |
| 0.3362        | 2.4233 | 4500 | 0.2675          |
| 0.3200        | 2.6925 | 5000 | 0.2293          |
| 0.3115        | 2.9618 | 5500 | 0.2093          |
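
For a quick visual check of the training trajectory, the table's numbers can be plotted directly; a minimal matplotlib sketch using the values transcribed above:

```python
import matplotlib.pyplot as plt

# Values transcribed from the training-results table above.
steps = [500, 1000, 1500, 2000, 2500, 3000, 3500, 4000, 4500, 5000, 5500]
train_loss = [0.9491, 0.6241, 0.5923, 0.5584, 0.5264, 0.5164,
              0.5046, 0.3944, 0.3362, 0.3200, 0.3115]
val_loss = [0.6303, 0.5958, 0.5590, 0.5180, 0.4764, 0.4459,
            0.4069, 0.3134, 0.2675, 0.2293, 0.2093]

plt.plot(steps, train_loss, marker="o", label="training loss")
plt.plot(steps, val_loss, marker="o", label="validation loss")
plt.xlabel("step")
plt.ylabel("loss")
plt.title("MyModel fine-tuning loss")
plt.legend()
plt.show()
```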

### Framework versions

- Transformers 4.48.2
- PyTorch 2.5.1+cu124
- Datasets 3.2.0
- Tokenizers 0.21.0