# barec-base-finetune-sent
This model is a fine-tuned version of [bensapir/pixel-barec-pretrain](https://huggingface.co/bensapir/pixel-barec-pretrain) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 6.5546
- Accuracy: 0.3538
- Qwk: 0.6191
- Mae: 1.9391
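The Qwk and Mae figures are quadratic weighted kappa and mean absolute error, the usual metrics for ordinal classification tasks such as readability-level prediction. A minimal pure-Python sketch of both, assuming integer class labels `0..n_classes-1` (the card does not state the label encoding or the number of classes):

```python
def quadratic_weighted_kappa(y_true, y_pred, n_classes):
    """Cohen's kappa with quadratic weights over integer labels 0..n_classes-1."""
    n = len(y_true)
    # Observed confusion matrix
    observed = [[0] * n_classes for _ in range(n_classes)]
    for t, p in zip(y_true, y_pred):
        observed[t][p] += 1
    hist_true = [sum(row) for row in observed]
    hist_pred = [sum(observed[i][j] for i in range(n_classes)) for j in range(n_classes)]
    num = den = 0.0
    for i in range(n_classes):
        for j in range(n_classes):
            weight = (i - j) ** 2 / (n_classes - 1) ** 2
            expected = hist_true[i] * hist_pred[j] / n  # chance agreement
            num += weight * observed[i][j]
            den += weight * expected
    return 1.0 - num / den

def mean_absolute_error(y_true, y_pred):
    """Average absolute distance between predicted and true class indices."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)
```

Perfect predictions give a kappa of 1.0, chance-level predictions about 0, and systematic disagreement goes negative; unlike plain accuracy, QWK penalizes predictions more the further they land from the true level.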
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2.5e-05
- train_batch_size: 64
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- training_steps: 50000
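The linear scheduler with warmup ramps the learning rate from 0 to 2.5e-5 over the first 200 steps, then decays it linearly to 0 at step 50,000. A minimal sketch of that schedule, mirroring the semantics of Transformers' `get_linear_schedule_with_warmup`:

```python
def linear_schedule_lr(step, base_lr=2.5e-5, warmup_steps=200, total_steps=50_000):
    """Learning rate at a given optimizer step: linear warmup, then linear decay to 0."""
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    return base_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))
```

For example, the rate is half of `base_lr` at step 100, peaks at step 200, and reaches exactly 0 at step 50,000.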
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Qwk | Mae |
|---|---|---|---|---|---|---|
| 1.9165 | 1.17 | 1000 | 1.9485 | 0.3074 | 0.6338 | 1.9832 |
| 1.7575 | 2.33 | 2000 | 1.9038 | 0.3438 | 0.6283 | 1.8642 |
| 1.5789 | 3.5 | 3000 | 1.8731 | 0.3595 | 0.6409 | 1.9171 |
| 1.4313 | 4.67 | 4000 | 1.9007 | 0.3622 | 0.6230 | 1.8940 |
| 1.2578 | 5.83 | 5000 | 2.0439 | 0.3674 | 0.6296 | 1.9421 |
| 1.086 | 7.0 | 6000 | 2.2058 | 0.3584 | 0.6337 | 1.9475 |
| 0.7325 | 8.17 | 7000 | 2.6200 | 0.3566 | 0.6300 | 1.9146 |
| 0.5493 | 9.33 | 8000 | 2.8226 | 0.3495 | 0.6096 | 1.9665 |
| 0.4671 | 10.5 | 9000 | 3.1466 | 0.3368 | 0.6115 | 1.9643 |
| 0.3892 | 11.67 | 10000 | 3.3661 | 0.3446 | 0.6094 | 1.9643 |
| 0.3393 | 12.84 | 11000 | 3.4706 | 0.3183 | 0.5977 | 2.0435 |
| 0.297 | 14.0 | 12000 | 3.5575 | 0.3453 | 0.6022 | 1.9891 |
| 0.1983 | 15.17 | 13000 | 3.9345 | 0.3371 | 0.6095 | 2.0380 |
| 0.1615 | 16.34 | 14000 | 4.0767 | 0.3509 | 0.6155 | 1.9029 |
| 0.1622 | 17.5 | 15000 | 4.0375 | 0.3446 | 0.6190 | 1.9390 |
| 0.1366 | 18.67 | 16000 | 4.1847 | 0.3427 | 0.6183 | 1.9622 |
| 0.126 | 19.84 | 17000 | 4.2534 | 0.3487 | 0.6212 | 1.9285 |
| 0.1198 | 21.0 | 18000 | 4.4174 | 0.3413 | 0.6080 | 1.9911 |
| 0.0885 | 22.17 | 19000 | 4.6410 | 0.3339 | 0.6149 | 2.0514 |
| 0.0818 | 23.34 | 20000 | 4.6826 | 0.3409 | 0.6079 | 1.9544 |
| 0.0792 | 24.5 | 21000 | 4.8105 | 0.3477 | 0.6197 | 1.9428 |
| 0.0811 | 25.67 | 22000 | 4.8232 | 0.3438 | 0.6069 | 1.9865 |
| 0.0761 | 26.84 | 23000 | 4.8745 | 0.3389 | 0.6183 | 1.9248 |
| 0.0704 | 28.0 | 24000 | 4.8785 | 0.3518 | 0.5999 | 1.9681 |
| 0.0603 | 29.17 | 25000 | 5.2340 | 0.3394 | 0.6196 | 1.9793 |
| 0.0513 | 30.34 | 26000 | 5.3277 | 0.3383 | 0.6078 | 1.9665 |
| 0.0521 | 31.51 | 27000 | 5.3942 | 0.3360 | 0.6033 | 2.0167 |
| 0.0518 | 32.67 | 28000 | 5.4359 | 0.3356 | 0.6072 | 2.0219 |
| 0.0491 | 33.84 | 29000 | 5.5302 | 0.3398 | 0.6109 | 1.9944 |
| 0.0407 | 35.01 | 30000 | 5.6421 | 0.3408 | 0.6118 | 1.9773 |
| 0.033 | 36.17 | 31000 | 5.6559 | 0.3568 | 0.6133 | 1.9461 |
| 0.0355 | 37.34 | 32000 | 5.8496 | 0.3458 | 0.6159 | 1.9594 |
| 0.0381 | 38.51 | 33000 | 5.9810 | 0.3457 | 0.5981 | 2.0120 |
| 0.0335 | 39.67 | 34000 | 6.0368 | 0.3465 | 0.5942 | 2.0148 |
| 0.0282 | 40.84 | 35000 | 6.1605 | 0.3423 | 0.5993 | 2.0044 |
| 0.0262 | 42.01 | 36000 | 6.2392 | 0.3468 | 0.5993 | 1.9595 |
| 0.0177 | 43.17 | 37000 | 6.3454 | 0.3456 | 0.6096 | 1.9534 |
| 0.022 | 44.34 | 38000 | 6.2977 | 0.3492 | 0.6176 | 1.9588 |
| 0.02 | 45.51 | 39000 | 6.4121 | 0.3431 | 0.5953 | 1.9843 |
| 0.0178 | 46.67 | 40000 | 6.4294 | 0.3528 | 0.6129 | 1.9550 |
| 0.0219 | 47.84 | 41000 | 6.5038 | 0.3506 | 0.6039 | 1.9653 |
| 0.0172 | 49.01 | 42000 | 6.5427 | 0.3454 | 0.6152 | 1.9949 |
| 0.0169 | 50.18 | 43000 | 6.5222 | 0.3505 | 0.6076 | 1.9714 |
| 0.0132 | 51.34 | 44000 | 6.4575 | 0.3568 | 0.6194 | 1.9421 |
| 0.0127 | 52.51 | 45000 | 6.5038 | 0.3613 | 0.5985 | 1.9590 |
| 0.0131 | 53.68 | 46000 | 6.5518 | 0.3466 | 0.6074 | 1.9647 |
| 0.0113 | 54.84 | 47000 | 6.5016 | 0.3535 | 0.6187 | 1.9436 |
| 0.0131 | 56.01 | 48000 | 6.5359 | 0.3505 | 0.6172 | 1.9465 |
| 0.0112 | 57.18 | 49000 | 6.5665 | 0.3542 | 0.6195 | 1.9379 |
| 0.0116 | 58.34 | 50000 | 6.5546 | 0.3538 | 0.6191 | 1.9391 |
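Validation loss rises steadily after roughly epoch 3 while training loss keeps falling, so the final checkpoint is not necessarily the strongest one; by Qwk, the step-3000 row (0.6409) is the best in the table. A minimal sketch of best-checkpoint selection from logged `(step, qwk)` pairs, with values transcribed from the first rows above:

```python
# (step, validation Qwk) pairs transcribed from the first rows of the results table
logged = [(1000, 0.6338), (2000, 0.6283), (3000, 0.6409), (4000, 0.6230), (5000, 0.6296)]

# Keep the checkpoint whose validation Qwk is highest
best_step, best_qwk = max(logged, key=lambda row: row[1])
print(best_step, best_qwk)
```

The same pattern applies with validation loss (using `min`) or MAE, depending on which metric the checkpoints should be ranked by.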
### Framework versions
- Transformers 4.17.0
- Pytorch 2.5.1
- Datasets 3.6.0
- Tokenizers 0.21.1