# Lyra4-Gutenberg2-12B
A finetune of Sao10K/MN-12B-Lyra-v4 on jondurbin/gutenberg-dpo-v0.1 and nbeerbower/gutenberg2-dpo.

Features an increased sequence length compared to Lyra4-Gutenberg-12B.
## Method
ORPO finetuned for 3 epochs on 2x RTX 3090 GPUs.
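For illustration, here is a minimal ORPO training sketch using the TRL library. This is not the author's actual training script: the hyperparameters are hypothetical, and the dataset is assumed to use the usual `prompt`/`chosen`/`rejected` preference columns.

```python
# Minimal ORPO finetuning sketch with TRL (illustrative, not the author's script).
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import ORPOConfig, ORPOTrainer

model_name = "Sao10K/MN-12B-Lyra-v4"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Assumes the dataset exposes prompt/chosen/rejected columns, as ORPO expects.
dataset = load_dataset("jondurbin/gutenberg-dpo-v0.1", split="train")

config = ORPOConfig(
    output_dir="lyra4-gutenberg2-orpo",
    num_train_epochs=3,              # matches the 3 epochs reported above
    per_device_train_batch_size=1,   # hypothetical; actual batch size unknown
    learning_rate=5e-6,              # hypothetical
    beta=0.1,                        # ORPO odds-ratio weight; hypothetical
)

trainer = ORPOTrainer(
    model=model,
    args=config,
    train_dataset=dataset,
    processing_class=tokenizer,      # older TRL versions use tokenizer= instead
)
trainer.train()
```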
Training data was formatted with ChatML.
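For reference, ChatML wraps each conversation turn in `<|im_start|>`/`<|im_end|>` markers. A minimal formatting sketch (the example messages are illustrative):

```python
# Minimal ChatML formatting sketch; the messages below are illustrative.
def to_chatml(messages):
    """Render a list of {role, content} dicts as a ChatML prompt string."""
    parts = [f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>" for m in messages]
    # Trailing assistant header prompts the model to generate the next turn.
    return "\n".join(parts) + "\n<|im_start|>assistant\n"

print(to_chatml([
    {"role": "system", "content": "You are a helpful writing assistant."},
    {"role": "user", "content": "Continue this scene in the style of Austen."},
]))
```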
## Open LLM Leaderboard Evaluation Results
Detailed results can be found on the Open LLM Leaderboard.
| Metric | Value (%) |
|---|---|
| Avg. | 19.74 |
| IFEval (0-shot, strict accuracy) | 25.85 |
| BBH (3-shot, normalized accuracy) | 33.73 |
| MATH Lvl 5 (4-shot, exact match) | 10.50 |
| GPQA (0-shot, acc_norm) | 8.39 |
| MuSR (0-shot, acc_norm) | 11.49 |
| MMLU-PRO (5-shot, accuracy) | 28.51 |
## Quantizations
This GGUF repository provides 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, and 8-bit quantizations.
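A GGUF quant can be loaded with llama-cpp-python, as sketched below. The filename glob is an assumption; check the repository's file list for the exact quant names.

```python
# Loading a GGUF quant with llama-cpp-python (filename pattern is assumed).
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="mav23/Lyra4-Gutenberg2-12B-GGUF",
    filename="*Q4_K_M.gguf",  # glob for a 4-bit quant; assumed to exist
    n_ctx=8192,               # context length; adjust to your hardware
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write an opening paragraph."}]
)
print(out["choices"][0]["message"]["content"])
```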
## Model tree for mav23/Lyra4-Gutenberg2-12B-GGUF
- Base model: Sao10K/MN-12B-Lyra-v4
- Datasets: jondurbin/gutenberg-dpo-v0.1, nbeerbower/gutenberg2-dpo