Fer_vit_jaffe_crop_GOOGLE_1

This model is a fine-tuned version of WinKawaks/vit-tiny-patch16-224 on the imagefolder dataset. It achieves the following results on the evaluation set:

  • Loss: 0.2564
  • Accuracy: 0.9
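For reference, a minimal inference sketch using the `transformers` image-classification pipeline. The model id is the repo this card describes; the input path `face.jpg` is a hypothetical example, and the class labels come from the undocumented imagefolder dataset, so inspect them before relying on the output.

```python
from PIL import Image
from transformers import pipeline

# Load the fine-tuned ViT-tiny checkpoint from the Hub.
classifier = pipeline(
    "image-classification",
    model="ricardoSLabs/Fer_vit_jaffe_crop_GOOGLE_1",
)

# "face.jpg" is a placeholder input; any RGB image works.
image = Image.open("face.jpg")
predictions = classifier(image)
for p in predictions:
    print(p["label"], round(p["score"], 4))
```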

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 5e-05
  • train_batch_size: 32
  • eval_batch_size: 32
  • seed: 1
  • gradient_accumulation_steps: 4
  • total_train_batch_size: 128
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 100
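The derived values above follow directly from the listed hyperparameters; a short sanity check (assuming, per the results table below, one optimizer step per epoch):

```python
# Effective batch size = per-device batch size x gradient accumulation steps.
train_batch_size = 32
gradient_accumulation_steps = 4
total_train_batch_size = train_batch_size * gradient_accumulation_steps

# Linear warmup: warmup_ratio of the total optimizer steps (100, one per epoch).
total_optimizer_steps = 100
warmup_ratio = 0.1
warmup_steps = int(warmup_ratio * total_optimizer_steps)

print(total_train_batch_size)  # 128, matching the reported total_train_batch_size
print(warmup_steps)            # 10 optimizer steps of linear warmup
```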

Training results

In the first column, "No log" marks epochs evaluated before the training loss was next logged (the Trainer logs it every 10 steps, so each logged value is repeated for the following epochs).

Training Loss Epoch Step Validation Loss Accuracy
No log 1.0 1 2.8373 0.1667
No log 2.0 2 2.6726 0.1667
No log 3.0 3 2.4131 0.1
No log 4.0 4 2.1618 0.1
No log 5.0 5 1.9925 0.2333
No log 6.0 6 2.0082 0.1
No log 7.0 7 2.0631 0.1667
No log 8.0 8 1.9582 0.1667
No log 9.0 9 1.9078 0.1
2.2546 10.0 10 1.8412 0.2333
2.2546 11.0 11 1.7763 0.3
2.2546 12.0 12 1.7447 0.4
2.2546 13.0 13 1.6744 0.2667
2.2546 14.0 14 1.6643 0.3333
2.2546 15.0 15 1.5688 0.4333
2.2546 16.0 16 1.5458 0.4333
2.2546 17.0 17 1.5068 0.5
2.2546 18.0 18 1.4505 0.4333
2.2546 19.0 19 1.3640 0.4
1.352 20.0 20 1.3531 0.3667
1.352 21.0 21 1.3028 0.4667
1.352 22.0 22 1.2587 0.5
1.352 23.0 23 1.3161 0.5
1.352 24.0 24 1.2402 0.4333
1.352 25.0 25 1.1801 0.4333
1.352 26.0 26 1.1508 0.5333
1.352 27.0 27 1.0463 0.6667
1.352 28.0 28 1.0176 0.5
1.352 29.0 29 1.0326 0.5333
0.6369 30.0 30 0.9021 0.6
0.6369 31.0 31 0.9485 0.5
0.6369 32.0 32 0.8393 0.6
0.6369 33.0 33 0.9536 0.5667
0.6369 34.0 34 0.8815 0.6333
0.6369 35.0 35 0.8329 0.6333
0.6369 36.0 36 0.7946 0.7333
0.6369 37.0 37 0.8582 0.6333
0.6369 38.0 38 0.7418 0.7667
0.6369 39.0 39 0.7232 0.7667
0.2532 40.0 40 0.7750 0.7333
0.2532 41.0 41 0.7209 0.7333
0.2532 42.0 42 0.6851 0.7
0.2532 43.0 43 0.6823 0.7667
0.2532 44.0 44 0.5122 0.7667
0.2532 45.0 45 0.5930 0.7667
0.2532 46.0 46 0.6531 0.7333
0.2532 47.0 47 0.5651 0.8
0.2532 48.0 48 0.5014 0.8667
0.2532 49.0 49 0.4853 0.8333
0.098 50.0 50 0.4904 0.8667
0.098 51.0 51 0.6781 0.7
0.098 52.0 52 0.6540 0.8
0.098 53.0 53 0.7150 0.7
0.098 54.0 54 0.5828 0.8
0.098 55.0 55 0.5115 0.8
0.098 56.0 56 0.4744 0.8
0.098 57.0 57 0.4548 0.8667
0.098 58.0 58 0.4936 0.8667
0.098 59.0 59 0.3534 0.8667
0.0473 60.0 60 0.6354 0.7333
0.0473 61.0 61 0.4243 0.9
0.0473 62.0 62 0.2744 0.9333
0.0473 63.0 63 0.4937 0.8333
0.0473 64.0 64 0.3869 0.9
0.0473 65.0 65 0.5379 0.8667
0.0473 66.0 66 0.4878 0.8
0.0473 67.0 67 0.6310 0.7667
0.0473 68.0 68 0.5021 0.8
0.0473 69.0 69 0.5109 0.8667
0.0218 70.0 70 0.4052 0.8667
0.0218 71.0 71 0.3340 0.9
0.0218 72.0 72 0.4823 0.8333
0.0218 73.0 73 0.2980 0.9
0.0218 74.0 74 0.3515 0.8667
0.0218 75.0 75 0.4199 0.8
0.0218 76.0 76 0.4145 0.9
0.0218 77.0 77 0.4639 0.7667
0.0218 78.0 78 0.3376 0.8667
0.0218 79.0 79 0.3546 0.8667
0.0121 80.0 80 0.3863 0.8667
0.0121 81.0 81 0.3637 0.8667
0.0121 82.0 82 0.3622 0.8667
0.0121 83.0 83 0.4142 0.8667
0.0121 84.0 84 0.4829 0.7667
0.0121 85.0 85 0.4039 0.8667
0.0121 86.0 86 0.3893 0.9
0.0121 87.0 87 0.5483 0.8333
0.0121 88.0 88 0.3928 0.8333
0.0121 89.0 89 0.3336 0.8667
0.0077 90.0 90 0.2689 0.9333
0.0077 91.0 91 0.3586 0.9333
0.0077 92.0 92 0.4284 0.9
0.0077 93.0 93 0.4150 0.8333
0.0077 94.0 94 0.2941 0.9
0.0077 95.0 95 0.2634 0.8667
0.0077 96.0 96 0.2631 0.9333
0.0077 97.0 97 0.3490 0.9333
0.0077 98.0 98 0.3602 0.9
0.0077 99.0 99 0.2326 0.9333
0.0065 100.0 100 0.2564 0.9

Framework versions

  • Transformers 4.45.1
  • Pytorch 2.4.0
  • Datasets 3.0.1
  • Tokenizers 0.20.0
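A hedged way to reproduce this environment, pinning the versions listed above (note the PyTorch package is published on PyPI as `torch`):

```shell
pip install transformers==4.45.1 torch==2.4.0 datasets==3.0.1 tokenizers==0.20.0
```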
Model size

5.53M parameters (F32, Safetensors)