Update: The model image itself is now available as an importable character card for SillyTavern. This serves as an example of how to prepare your own card for use with this model.

Training Notes: This model was developed through multi-stage supervised fine-tuning, pre-trained QLoRA adapters, and multi-stage RLHF using GRPO. The final model was created by merging the most promising candidates identified during this process.
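As a rough illustration of the QLoRA stage, the sketch below shows how an adapter can be attached to a 4-bit quantized base model with Hugging Face `transformers`, `bitsandbytes`, and `peft`. The model name, rank, alpha, and target modules here are placeholders, not the actual training recipe used for this release.

```python
# Hypothetical QLoRA setup sketch; hyperparameters and model name are illustrative only.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

base_model = "Nitral-AI/Captain-Eris_Violet-0.420-Rebased"  # assumed stand-in for the SFT base

# Load the base model quantized to 4-bit (NF4) so only the LoRA adapter weights are trained.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    base_model, quantization_config=bnb_config, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(base_model)

# Attach a LoRA adapter; rank, alpha, and target modules are placeholders.
lora_config = LoraConfig(
    r=32,
    lora_alpha=64,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```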
SillyTavern Reasoning Block Parsing Example:


Series Comparison:

The following mergekit YAML configuration was used to produce this final version of the model:
```yaml
slices:
  - sources:
      - model: Nitral-AI/Captain-Eris_Violet-0.420-Rebased
        layer_range: [0, 40]
      - model: Nitral-AI/Captain-Eris_Violet-GRPO-Rebased
        layer_range: [0, 40]
merge_method: slerp
base_model: Nitral-AI/Captain-Eris_Violet-0.420-Rebased
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.420
dtype: bfloat16
```
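A config like this is typically saved to a file and passed to mergekit's command-line tool (for example `mergekit-yaml config.yml ./merged-model`); the exact invocation used for this release isn't stated. The snippet below is a minimal sketch of what the slerp merge method does for a single pair of weight tensors. The tensors and interpolation factor are illustrative; mergekit's actual implementation additionally applies the per-layer `t` schedules from the `self_attn`/`mlp` filters above and handles more edge cases.

```python
# Minimal sketch of spherical linear interpolation (slerp) between two weight tensors.
# Illustrative only; not mergekit's implementation.
import torch

def slerp(t: float, a: torch.Tensor, b: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    a_flat, b_flat = a.flatten().float(), b.flatten().float()
    a_norm = a_flat / (a_flat.norm() + eps)
    b_norm = b_flat / (b_flat.norm() + eps)
    # Angle between the two weight directions.
    omega = torch.arccos(torch.clamp(a_norm @ b_norm, -1.0, 1.0))
    so = torch.sin(omega)
    if so.abs() < eps:  # nearly colinear: fall back to plain linear interpolation
        return (1.0 - t) * a + t * b
    # Weighted combination along the great circle between the two tensors.
    out = (torch.sin((1.0 - t) * omega) / so) * a_flat + (torch.sin(t * omega) / so) * b_flat
    return out.reshape(a.shape).to(a.dtype)

# Example: blend two dummy weight matrices with t = 0.420,
# the fallback value used for unfiltered parameters in the config above.
w_base = torch.randn(8, 8)
w_other = torch.randn(8, 8)
merged = slerp(0.420, w_base, w_other)
print(merged.shape)
```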