# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/arcee-ai/mergekit).

## Merge Details

### Merge Method

This model was merged using the linear merge method.
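
A linear merge takes a weighted average of the corresponding weight tensors of its inputs. Below is a minimal sketch of the idea in PyTorch, assuming every input shares the same architecture; `linear_merge` is an illustrative helper, not mergekit's internal API:

```python
# A minimal sketch of what a linear merge computes. Illustrative only,
# assuming all input state dicts share identical tensor names and shapes.
import torch

def linear_merge(state_dicts: list[dict[str, torch.Tensor]],
                 weights: list[float]) -> dict[str, torch.Tensor]:
    """Return the weighted average of corresponding tensors."""
    merged = {}
    for name in state_dicts[0]:
        merged[name] = sum(w * sd[name] for w, sd in zip(weights, state_dicts))
    return merged

# With the configuration below, the four inputs are the four 8-layer
# quarters of Mistral-7B-Instruct-v0.2, each weighted 0.25.
```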

### Models Merged

The following models were included in the merge:

* [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
dtype: float16
merge_method: linear
slices:
  - sources:
      - layer_range: [0, 8] # First quarter of the 32 layers
        model: mistralai/Mistral-7B-Instruct-v0.2
        parameters:
          weight: 0.25 # Equal weight for each quarter
      - layer_range: [8, 16] # Second quarter
        model: mistralai/Mistral-7B-Instruct-v0.2
        parameters:
          weight: 0.25
      - layer_range: [16, 24] # Third quarter
        model: mistralai/Mistral-7B-Instruct-v0.2
        parameters:
          weight: 0.25
      - layer_range: [24, 32] # Fourth quarter
        model: mistralai/Mistral-7B-Instruct-v0.2
        parameters:
          weight: 0.25
```
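
Because all four sources are equal-length layer ranges of the same model, the linear method averages the four 8-layer quarters of Mistral-7B-Instruct-v0.2 into a single 8-layer stack of roughly 2B parameters (the checkpoint is reported at 2.01B params in float16). Saving the YAML above as `config.yaml`, the merge should be reproducible with mergekit's standard entry point: `mergekit-yaml config.yaml ./output`.

The result loads with the standard transformers API. A hypothetical usage sketch follows; the repo id is this card's, and the chat template is assumed to be inherited from Mistral-7B-Instruct-v0.2:

```python
# Hypothetical usage sketch with the standard transformers API.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "Tech-Meld/HX-Mistral-1.5B_v0.1"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype="auto")

# Format the prompt with the (assumed) Mistral instruct chat template.
messages = [{"role": "user", "content": "Explain what a linear model merge does."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```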
