rotating-head-lr-gpt2-medium-wikitext

This model appears to be a fine-tuned version of gpt2-medium on WikiText, per the model name and its 355M-parameter size. It achieves the following results on the evaluation set:

  • Loss: 3.2045
  • Accuracy: 0.4189
  • Perplexity: 24.6437
  • BLEU: 0.1335
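
For reference, the reported perplexity is the exponential of the evaluation loss, which the numbers above confirm:

```python
import math

# Perplexity = exp(cross-entropy loss): exp(3.2045) ≈ 24.64,
# matching the reported 24.6437.
print(math.exp(3.2045))
```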

Model description

More information needed

Intended uses & limitations

More information needed
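
In the absence of documented usage, here is a minimal text-generation sketch using the standard transformers API. The repo id is assumed from the card title (prepend the owning namespace), and if the rotating-head architecture is custom, loading may additionally require trust_remote_code=True:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical repo id taken from the card title; prepend the owning
# namespace, e.g. "<user>/rotating-head-lr-gpt2-medium-wikitext".
repo_id = "rotating-head-lr-gpt2-medium-wikitext"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)

inputs = tokenizer("The history of Wikipedia begins with", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```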

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a TrainingArguments sketch follows the list):

  • learning_rate: 0.0001
  • train_batch_size: 64
  • eval_batch_size: 64
  • seed: 42
  • optimizer: AdamW (adamw_torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 5
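
The OptimizerNames.ADAMW_TORCH value above indicates the Hugging Face Trainer was used. A hedged reconstruction of the equivalent TrainingArguments (the output directory and any unlisted settings are assumptions):

```python
from transformers import TrainingArguments

# Sketch of TrainingArguments matching the hyperparameters listed above.
# output_dir is hypothetical; unlisted settings keep Trainer defaults.
training_args = TrainingArguments(
    output_dir="rotating-head-lr-gpt2-medium-wikitext",  # assumed
    learning_rate=1e-4,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=5,
)
```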

Training results

| Training Loss | Epoch  | Step | Validation Loss | Accuracy | Perplexity | BLEU   |
|:-------------:|:------:|:----:|:---------------:|:--------:|:----------:|:------:|
| 5.9064        | 0.2806 | 500  | 5.7488          | 0.2222   | 313.8089   | 0.0490 |
| 4.8623        | 0.5612 | 1000 | 4.7435          | 0.2805   | 114.8386   | 0.0709 |
| 4.3005        | 0.8418 | 1500 | 4.2307          | 0.3177   | 68.7669    | 0.0831 |
| 3.9583        | 1.1223 | 2000 | 3.9262          | 0.3464   | 50.7124    | 0.0916 |
| 3.7778        | 1.4029 | 2500 | 3.7522          | 0.3627   | 42.6128    | 0.1011 |
| 3.6805        | 1.6835 | 3000 | 3.6366          | 0.3738   | 37.9607    | 0.1052 |
| 3.5792        | 1.9641 | 3500 | 3.5434          | 0.3833   | 34.5829    | 0.1103 |
| 3.4655        | 2.2447 | 4000 | 3.4749          | 0.3900   | 32.2934    | 0.1168 |
| 3.4063        | 2.5253 | 4500 | 3.4235          | 0.3950   | 30.6766    | 0.1217 |
| 3.376         | 2.8058 | 5000 | 3.3767          | 0.4000   | 29.2745    | 0.1242 |
| 3.2591        | 3.0864 | 5500 | 3.3395          | 0.4039   | 28.2042    | 0.1258 |
| 3.2488        | 3.3670 | 6000 | 3.3072          | 0.4070   | 27.3098    | 0.1278 |
| 3.2244        | 3.6476 | 6500 | 3.2740          | 0.4109   | 26.4161    | 0.1309 |
| 3.1981        | 3.9282 | 7000 | 3.2526          | 0.4128   | 25.8571    | 0.1290 |
| 3.1294        | 4.2088 | 7500 | 3.2318          | 0.4156   | 25.3256    | 0.1293 |
| 3.0899        | 4.4893 | 8000 | 3.2180          | 0.4169   | 24.9787    | 0.1296 |
| 3.0993        | 4.7699 | 8500 | 3.2045          | 0.4189   | 24.6437    | 0.1335 |

Framework versions

  • Transformers 4.49.0
  • PyTorch 2.6.0+cu124
  • Datasets 3.3.2
  • Tokenizers 0.21.0
Model size: 355M params (Safetensors, F32 tensors)