# merge

This is a merge of pre-trained language models created using mergekit.

## Merge Details

### Merge Method

This model was merged using the passthrough merge method, which stacks the specified layer slices from each source model as-is, without averaging or otherwise mixing weights.

### Models Merged

The following models were included in the merge:

* rootxhacker/mini-Llama-70M-SFT-v2
* rootxhacker/mini-Llama-70M-SFT-COT
* rootxhacker/mini-Llama-70M-SFT-medical
* rootxhacker/mini-Llama-70M-SFT-code
* rootxhacker/mini-Llama-70M-SFT-math
* rootxhacker/mini-Llama-70M-SFT-ifeval
* rootxhacker/mini-Llama-70M-SFT

### Configuration

The following YAML configuration was used to produce this model:

```yaml
slices:
  - sources:
      - model: rootxhacker/mini-Llama-70M-SFT-v2
        layer_range: [0, 5]  # layers 0-4 (the end of a layer_range is exclusive)
  - sources:
      - model: rootxhacker/mini-Llama-70M-SFT-COT
        layer_range: [0, 4]  # layers 0-3
  - sources:
      - model: rootxhacker/mini-Llama-70M-SFT-medical
        layer_range: [0, 4]
  - sources:
      - model: rootxhacker/mini-Llama-70M-SFT-code
        layer_range: [0, 4]
  - sources:
      - model: rootxhacker/mini-Llama-70M-SFT-math
        layer_range: [0, 4]
  - sources:
      - model: rootxhacker/mini-Llama-70M-SFT-ifeval
        layer_range: [0, 4]
  - sources:
      - model: rootxhacker/mini-Llama-70M-SFT-v2
        layer_range: [0, 4]
  - sources:
      - model: rootxhacker/mini-Llama-70M-SFT
        layer_range: [0, 3]  # layers 0-2
merge_method: passthrough
dtype: bfloat16
```
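
To reproduce the merge, save the config above to a file and run it through mergekit, e.g. with the `mergekit-yaml` CLI (`mergekit-yaml config.yaml ./output-model-directory`). Below is a minimal sketch using mergekit's Python API instead; the filename `config.yaml` and output path are hypothetical, and the exact `MergeOptions` fields may vary between mergekit versions:

```python
# Minimal sketch: run the YAML config above through mergekit's Python API.
# Assumes mergekit is installed (pip install mergekit); "config.yaml" and
# "./mini-llama-200M" are placeholder paths.
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("config.yaml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path="./mini-llama-200M",
    options=MergeOptions(
        copy_tokenizer=True,   # carry the tokenizer over to the output dir
        lazy_unpickle=False,
        low_cpu_memory=False,
    ),
)
```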
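
Since passthrough only copies weights, the result loads like any other Llama-architecture checkpoint. A minimal usage sketch with Hugging Face `transformers`, loading this repo from the Hub:

```python
# Minimal sketch: load the merged model and generate a few tokens.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "rootxhacker/mini-llama-200M"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.bfloat16)

inputs = tokenizer("Hello, world!", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```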