axolotl version: `0.13.0.dev0`
```yaml
base_model: axolotl-ai-co/gpt-oss-120b-dequantized
use_kernels: false
dp_shard_size: 4  # Number of GPUs

plugins:
  - axolotl.integrations.cut_cross_entropy.CutCrossEntropyPlugin

experimental_skip_move_to_device: true

adapter: lora
lora_r: 16
lora_alpha: 32
lora_target_modules: "all-linear"
# lora_target_parameters:
#   - "mlp.experts.gate_up_proj"
#   - "mlp.experts.down_proj"
lora_bias: "none"
lora_task_type: "CAUSAL_LM"

# Your combined training dataset
datasets:
  - path: ./data/train_combined_with_stem.jsonl
    type: chat_template
    field_thinking: thinking
    template_thinking_key: thinking

dataset_prepared_path: last_run_prepared
val_set_size: 0
output_dir: ./outputs/gpt-oss-domain-llm-lora/

# Checkpoint settings
save_strategy: steps
save_total_limit: 2  # Keep last 2 checkpoints for safety (LoRA adapters are ~38MB each)
save_steps: 160
save_safetensors: false  # Disable safetensors to fix FSDP checkpoint saving issue

sequence_len: 8192
sample_packing: true
pad_to_sequence_len: true

wandb_project:
wandb_entity:
wandb_watch:
wandb_name:
wandb_log_model:

gradient_accumulation_steps: 2  # Adjust to 1 if OOM occurs
micro_batch_size: 2
num_epochs: 1
optimizer: adamw_torch_fused  # 8bit optimizer not compatible with FSDP2 offload
lr_scheduler: constant_with_warmup
learning_rate: 2e-5

bf16: true
tf32: true
flash_attention: true
attn_implementation: kernels-community/vllm-flash-attn3  # Not needed if flash_attn >= 2.8.3

gradient_checkpointing: true
activation_offloading: true

logging_steps: 1
# saves_per_epoch: 1  # Commented out: conflicts with step-based saving
warmup_ratio: 0.03

special_tokens:
eot_tokens:
  - "<|end|>"

fsdp_version: 2
fsdp_config:
  offload_params: true
  state_dict_type: SHARDED_STATE_DICT
  auto_wrap_policy: TRANSFORMER_BASED_WRAP
  transformer_layer_cls_to_wrap: GptOssDecoderLayer
  reshard_after_forward: true
  cpu_ram_efficient_loading: true

save_optimizer_state: false
```
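The `datasets` entry above uses Axolotl's `chat_template` loader with a per-message `thinking` field (`field_thinking` / `template_thinking_key`). As a rough illustration only, one line of `./data/train_combined_with_stem.jsonl` might look like the sketch below; the `messages`/`role`/`content` layout is assumed from the loader's default schema and is not taken from this repository.

```python
import json

# Hypothetical training record for Axolotl's chat_template loader.
# Only the "thinking" key is set explicitly in the config above
# (field_thinking / template_thinking_key); everything else follows
# the assumed default "messages" schema and is illustrative.
record = {
    "messages": [
        {"role": "user", "content": "State Newton's second law."},
        {
            "role": "assistant",
            "thinking": "The question asks for the relation between force, mass, and acceleration.",
            "content": "Newton's second law: the net force on a body equals its mass times its acceleration, F = ma.",
        },
    ]
}

# JSONL: one JSON object per line.
with open("example.jsonl", "w", encoding="utf-8") as f:
    f.write(json.dumps(record, ensure_ascii=False) + "\n")
```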
This model is a fine-tuned version of [axolotl-ai-co/gpt-oss-120b-dequantized](https://huggingface.co/axolotl-ai-co/gpt-oss-120b-dequantized) on the `./data/train_combined_with_stem.jsonl` dataset.
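Because training produced a LoRA adapter rather than full weights, inference needs the base model plus the adapter. A minimal sketch, assuming the adapter is available locally at the config's `output_dir` (swap in the adapter's Hub repo id if it is published) and that enough GPU memory is available for the 120B base model:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

BASE = "axolotl-ai-co/gpt-oss-120b-dequantized"
ADAPTER = "./outputs/gpt-oss-domain-llm-lora/"  # output_dir from the config; replace with a Hub repo id if published

tokenizer = AutoTokenizer.from_pretrained(BASE)
model = AutoModelForCausalLM.from_pretrained(BASE, torch_dtype=torch.bfloat16, device_map="auto")
model = PeftModel.from_pretrained(model, ADAPTER)
model.eval()

messages = [{"role": "user", "content": "Explain LoRA fine-tuning in two sentences."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

with torch.no_grad():
    output = model.generate(inputs, max_new_tokens=256)

# Decode only the newly generated tokens.
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```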
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (taken from the axolotl config above):
- learning_rate: 2e-05
- train_batch_size: 2 (micro_batch_size)
- gradient_accumulation_steps: 2
- total_train_batch_size: 16 (2 × 2 × 4 GPUs)
- optimizer: adamw_torch_fused
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 1
- precision: bf16
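For deployment without PEFT at load time, the adapter can be folded back into the base weights. A sketch under the same assumptions as above (adapter path taken from the config's `output_dir`; the merged output directory name is illustrative). Note that a merged bf16 copy of a 120B-parameter model is roughly 240 GB on disk:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

BASE = "axolotl-ai-co/gpt-oss-120b-dequantized"
ADAPTER = "./outputs/gpt-oss-domain-llm-lora/"   # output_dir from the config
MERGED = "./outputs/gpt-oss-domain-llm-merged/"  # illustrative output directory

base = AutoModelForCausalLM.from_pretrained(BASE, torch_dtype=torch.bfloat16, device_map="auto")
model = PeftModel.from_pretrained(base, ADAPTER)

# Fold the LoRA deltas into the base weights and drop the PEFT wrappers.
merged = model.merge_and_unload()

# Save a standalone checkpoint, plus the tokenizer, so the merged model loads on its own.
merged.save_pretrained(MERGED)
AutoTokenizer.from_pretrained(BASE).save_pretrained(MERGED)
```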