segformer-b0-finetuned-lipid-droplets-v2

This model is a fine-tuned version of nvidia/mit-b0 on the jhaberbe/lipid-droplets-v4 dataset. It achieves the following results on the evaluation set:

  • Loss: 0.1330
  • Mean Iou: 0.004
  • Mean Accuracy: 0.008
  • Overall Accuracy: 0.008
  • Accuracy Unlabeled: nan
  • Accuracy Lipid: 0.008
  • Iou Unlabeled: 0.0
  • Iou Lipid: 0.008
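
For reference, a minimal inference sketch using the transformers API; the Hub checkpoint id below is taken from this card's title, and the image path is a placeholder to be replaced with your own input.

```python
# Minimal inference sketch (checkpoint id assumed from the card title;
# the image path is a placeholder).
import torch
from PIL import Image
from transformers import AutoImageProcessor, SegformerForSemanticSegmentation

checkpoint = "jhaberbe/segformer-b0-finetuned-lipid-droplets-v2"  # assumed Hub id
processor = AutoImageProcessor.from_pretrained(checkpoint)
model = SegformerForSemanticSegmentation.from_pretrained(checkpoint)
model.eval()

image = Image.open("example_tile.png").convert("RGB")  # placeholder input image
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, num_labels, H/4, W/4)

# Upsample logits to the input resolution and take the per-pixel argmax.
upsampled = torch.nn.functional.interpolate(
    logits, size=image.size[::-1], mode="bilinear", align_corners=False
)
pred_mask = upsampled.argmax(dim=1)[0]  # (H, W) tensor of class indices
```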

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 6e-05
  • train_batch_size: 2
  • eval_batch_size: 2
  • seed: 42
  • optimizer: AdamW (adamw_torch) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
  • lr_scheduler_type: linear
  • num_epochs: 100
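
A sketch of how these settings might be expressed as transformers TrainingArguments; the output directory is an assumption, and evaluation/logging options from the original run are not reproduced here.

```python
# Sketch of a TrainingArguments configuration matching the listed hyperparameters.
# output_dir is an illustrative assumption, not taken from the original run.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="segformer-b0-finetuned-lipid-droplets-v2",  # assumed
    learning_rate=6e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    seed=42,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=100,
)
```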

Training results

| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Accuracy Unlabeled | Accuracy Lipid | Iou Unlabeled | Iou Lipid |
|---------------|-------|------|-----------------|----------|---------------|------------------|--------------------|----------------|---------------|-----------|
| 0.5568        | 5.0   | 20   | 0.6918          | 0.0435   | 0.0870        | 0.0870           | nan                | 0.0870         | 0.0           | 0.0870    |
| 0.4064        | 10.0  | 40   | 0.6370          | 0.0311   | 0.0622        | 0.0622           | nan                | 0.0622         | 0.0           | 0.0622    |
| 0.298         | 15.0  | 60   | 0.5943          | 0.0174   | 0.0348        | 0.0348           | nan                | 0.0348         | 0.0           | 0.0348    |
| 0.2508        | 20.0  | 80   | 0.4472          | 0.0069   | 0.0138        | 0.0138           | nan                | 0.0138         | 0.0           | 0.0138    |
| 0.2184        | 25.0  | 100  | 0.4683          | 0.0079   | 0.0159        | 0.0159           | nan                | 0.0159         | 0.0           | 0.0159    |
| 0.2213        | 30.0  | 120  | 0.4272          | 0.004    | 0.008         | 0.008            | nan                | 0.008          | 0.0           | 0.008     |
| 0.1669        | 35.0  | 140  | 0.2754          | 0.004    | 0.008         | 0.008            | nan                | 0.008          | 0.0           | 0.008     |
| 0.1871        | 40.0  | 160  | 0.2788          | 0.004    | 0.008         | 0.008            | nan                | 0.008          | 0.0           | 0.008     |
| 0.1202        | 45.0  | 180  | 0.2453          | 0.004    | 0.008         | 0.008            | nan                | 0.008          | 0.0           | 0.008     |
| 0.1081        | 50.0  | 200  | 0.2138          | 0.004    | 0.008         | 0.008            | nan                | 0.008          | 0.0           | 0.008     |
| 0.1062        | 55.0  | 220  | 0.2132          | 0.004    | 0.008         | 0.008            | nan                | 0.008          | 0.0           | 0.008     |
| 0.1026        | 60.0  | 240  | 0.1336          | 0.004    | 0.008         | 0.008            | nan                | 0.008          | 0.0           | 0.008     |
| 0.1009        | 65.0  | 260  | 0.1952          | 0.004    | 0.008         | 0.008            | nan                | 0.008          | 0.0           | 0.008     |
| 0.07          | 70.0  | 280  | 0.1577          | 0.004    | 0.008         | 0.008            | nan                | 0.008          | 0.0           | 0.008     |
| 0.0626        | 75.0  | 300  | 0.1556          | 0.004    | 0.008         | 0.008            | nan                | 0.008          | 0.0           | 0.008     |
| 0.0723        | 80.0  | 320  | 0.1446          | 0.004    | 0.008         | 0.008            | nan                | 0.008          | 0.0           | 0.008     |
| 0.0669        | 85.0  | 340  | 0.1151          | 0.004    | 0.008         | 0.008            | nan                | 0.008          | 0.0           | 0.008     |
| 0.1126        | 90.0  | 360  | 0.1668          | 0.004    | 0.008         | 0.008            | nan                | 0.008          | 0.0           | 0.008     |
| 0.077         | 95.0  | 380  | 0.1536          | 0.004    | 0.008         | 0.008            | nan                | 0.008          | 0.0           | 0.008     |
| 0.0758        | 100.0 | 400  | 0.1330          | 0.004    | 0.008         | 0.008            | nan                | 0.008          | 0.0           | 0.008     |
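
The IoU and accuracy columns above are the kind of values reported by the evaluate library's mean_iou metric. A minimal sketch, assuming two classes (unlabeled, lipid) and illustrative toy masks:

```python
# Sketch of computing mean IoU / accuracy for a binary (unlabeled vs. lipid) mask
# with the `evaluate` library; the example arrays are illustrative only.
import numpy as np
import evaluate

mean_iou = evaluate.load("mean_iou")

predicted = np.array([[0, 1], [1, 1]])   # hypothetical predicted mask
reference = np.array([[0, 1], [0, 1]])   # hypothetical ground-truth mask

results = mean_iou.compute(
    predictions=[predicted],
    references=[reference],
    num_labels=2,        # unlabeled + lipid
    ignore_index=255,    # common convention; assumed, not stated on this card
)
print(results["mean_iou"], results["mean_accuracy"], results["overall_accuracy"])
```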

Framework versions

  • Transformers 4.49.0
  • Pytorch 2.6.0+cu124
  • Datasets 3.4.1
  • Tokenizers 0.21.1