Llama.cpp hybrid layer quantization of Ling-mini-2.0 by inclusionAI

Original model: https://huggingface.co/inclusionAI/Ling-mini-2.0

The hybrid quant employs different quantization levels on a per-layer basis to increase the flexibility of trading off performance vs. file size. Fewer parameter bits are used at deep layers and more bits at cortex layers to simultaneously optimize quantized size and model performance. For this file the layer quants are as follows:

Q5_K_L : attn_v = q8_0 attn_o = q6_k ffn_d = q6_k
Q6_K_S : Q6_K
Q6_K_M : attn_v = q8_0 ffn_d = q8_0
Q6_K_L : attn_v = q8_0 attn_o = q8_0 ffn_d = q8_0

   LAYER_TYPES='[
   [0 ,"Q6_K_S"], [1 ,"Q5_K_L"], [2 ,"Q5_K_M"], [3 ,"Q5_K_M"], [4 ,"Q5_K_M"],
   [5 ,"Q6_K_S"], [6 ,"Q5_K_M"], [7, "Q6_K_S"], [8, "Q5_K_M"], [9, "Q6_K_S"],
   [10,"Q6_K_S"], [11,"Q6_K_S"], [12,"Q6_K_S"], [13,"Q6_K_S"], [14,"Q6_K_M"],
   [15,"Q6_K_M"], [16,"Q6_K_M"], [17,"Q6_K_L"], [18,"Q6_K_L"], [19,"Q6_K_L"]
   ]'
   FLAGS="--token-embedding-type Q6_K --output-tensor-type Q6_K --layer-types-high"
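For reference, a quantize invocation using the settings above might look like the sketch below. It assumes the hybrid layer quant patch from the llama.cpp discussion linked at the end of this card (which adds the --layer-types-high flag and reads LAYER_TYPES from the environment); the source GGUF filename is illustrative.

   # Sketch only: assumes the hybrid layer quant patch (see discussion link below),
   # which reads LAYER_TYPES from the environment and adds --layer-types-high.
   # --token-embedding-type and --output-tensor-type are standard llama-quantize flags.
   LAYER_TYPES="$LAYER_TYPES" llama-quantize $FLAGS \
      Ling-mini-2.0.BF16.gguf Ling-mini-2.0.Q6_K_H.gguf Q6_K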

The layer quants were optimized for solid performance across a set of curated test prompts on the RL variant of the model (Ring-mini-2.0) and reused with no further adjustments on this non-RL version.

Comparison:

Quant    Size     PPL    Comment
Q6_K     13.4e9   19.0   default embed and output
Q6_K_H   13.2e9   18.9   Q6_K embed, Q6_K output

Usage:

This is a compact MoE model which shows quite strong performance across a range of curated test prompts. The unique feature of this model is that its overall size is only 16G, so with MoE it will run very efficiently even with the experts offloaded to CPU. Example offload configs:

12G VRAM (4070), 32k context. Offload the experts of layers 16-19 to CPU and keep the rest on GPU; VRAM will be nearly full and token generation will be about 120 tps.

OT="-ot blk\.[1][6-9].*exps=CPU -ngl 99"
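A complete launch line for this config might look like the following sketch (llama-server assumed; model path and context length are illustrative):

# sketch: 32k context launch with the 16-19 expert offload above
llama-server -m Ling-mini-2.0.Q6_K_H.gguf -c 32768 $OT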

12G VRAM (4070), 128k context. Offload the experts of layers 10-19 or 9-19 to CPU and keep the rest on GPU; token generation will be around 80 tps.

# Offload layers 10-19 to CPU. This will max out VRAM and leave no room for a browser to run
OT="-ot blk\.[1][0-9].*exps=CPU -ngl 99"

# Offload layers 9-19 to CPU. This will leave some VRAM free for a browser to run
OT="-ot blk\.(9|1[0-9])\..*exps=CPU -ngl 99"

# Config YARN for 128k context
--rope-scaling yarn --yarn-orig-ctx 32768 --rope-scale 4.0
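
Putting the pieces together, a 128k context launch might look like the following sketch (llama-server assumed; model path is illustrative):

# sketch: 128k context with YARN scaling and the 9-19 expert offload above
OT="-ot blk\.(9|1[0-9])\..*exps=CPU -ngl 99"
llama-server -m Ling-mini-2.0.Q6_K_H.gguf -c 131072 \
   --rope-scaling yarn --yarn-orig-ctx 32768 --rope-scale 4.0 \
   $OT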

Grouped experts:

The model can be run with or without grouped expert logic; it is on by default. To turn it off, launch with:

--override-kv bailingmoe2.expert_group_count=int:1
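
For example, as a sketch (llama-server assumed; --override-kv is a standard llama.cpp flag):

llama-server -m Ling-mini-2.0.Q6_K_H.gguf --override-kv bailingmoe2.expert_group_count=int:1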

Example gen:

lm say something profound that will amaze me

"Time is the canvas upon which the masterpiece of our existence is painted, and every moment, every choice, every experience is a stroke of color that defines the picture."

This statement reflects the profound interconnectedness of time, choice, and experience, suggesting that our lives are shaped by the cumulative impact of the moments we live and the decisions we make. It invites reflection on the beauty and complexity of the journey we call life.

Benchmarks:

Evals for the model will eventually be given here: https://huggingface.co/spaces/steampunque/benchlm.

Download the file from below:

Link                        Type     Size/e9 B   Notes
Ling-mini-2.0.Q6_K_H.gguf   Q6_K_H   13.2e9 B    ~Q6_K size

A discussion thread about the hybrid layer quant approach can be found here on the llama.cpp git repository:

https://github.com/ggml-org/llama.cpp/discussions/13040
