Llama.cpp hybrid layer quantization of Qwen3-VL-8B-Instruct by Qwen
Original model: https://huggingface.co/Qwen/Qwen3-VL-8B-Instruct
The hybrid quant employs different quantization levels on a per-layer basis to enable both high performance and small file size at the same time. This quant was optimized for high performance across a set of test prompts at ~Q6_K size. The model sometimes exhibits repetition failures across a set of curated test prompts, falling into infinite repeat loops on certain prompts when using greedy sampling. Extensive testing showed there is no way to correct this problem by adjusting the layer quants: the problem is baked into the model by the training process. The VL 32B Instruct model does not exhibit this failure mode.
The quants employed are all K quants to avoid slow processing of IQ quants on CPUs or older GPUs. For this file the layer quants are as follows:
```
Q5_K_L : attn_v = q8_0  attn_o = q6_k  ffn_d = q6_k
Q6_K_S : Q6_K
Q6_K_M : attn_v = q8_0  ffn_d = q8_0
Q6_K_L : attn_v = q8_0  attn_o = q8_0  ffn_d = q8_0

LAYER_TYPES='[
[0 ,"Q6_K_S"],[1 ,"Q5_K_L"],[2 ,"Q5_K_M"],[3 ,"Q5_K_M"],[4 ,"Q5_K_M"],[5 ,"Q5_K_M"],
[6 ,"Q5_K_M"],[7 ,"Q5_K_M"],[8 ,"Q5_K_M"],[9 ,"Q5_K_M"],[10,"Q5_K_M"],[11,"Q5_K_M"],
[12,"Q5_K_M"],[13,"Q5_K_M"],[14,"Q5_K_M"],[15,"Q5_K_M"],[16,"Q5_K_M"],[17,"Q5_K_M"],
[18,"Q6_K_S"],[19,"Q5_K_L"],[20,"Q6_K_S"],[21,"Q5_K_L"],[22,"Q6_K_S"],[23,"Q5_K_L"],
[24,"Q6_K_S"],[25,"Q6_K_M"],[26,"Q6_K_S"],[27,"Q6_K_M"],[28,"Q6_K_S"],[29,"Q6_K_M"],
[30,"Q6_K_M"],[31,"Q6_K_M"],[32,"Q6_K_M"],[33,"Q6_K_L"],[34,"Q6_K_L"],[35,"Q6_K_L"]
]'
FLAGS="--token-embedding-type Q6_K --output-tensor-type Q6_K --layer-types-high"
```
A Q4_K_H quant is also available:
```
Q4_K_L : Q4_K_M + attn_o = q6_k
Q5_K_L : attn_v = q8_0  attn_o = q6_k  ffn_d = q6_k
Q6_K_S : Q6_K
Q6_K_M : attn_v = q8_0  ffn_d = q8_0
Q6_K_L : attn_v = q8_0  attn_o = q8_0  ffn_d = q8_0

LAYER_TYPES='[
[0 ,"Q6_K_L"],[1 ,"Q6_K_M"],[2 ,"Q6_K_S"],[3 ,"Q5_K_L"],[4 ,"Q5_K_M"],[5 ,"Q5_K_S"],
[6 ,"Q4_K_S"],[7 ,"Q4_K_S"],[8 ,"Q4_K_S"],[9 ,"Q4_K_S"],[10,"Q4_K_S"],[11,"Q4_K_S"],
[12,"Q4_K_M"],[13,"Q4_K_S"],[14,"Q4_K_M"],[15,"Q4_K_S"],[16,"Q4_K_M"],[17,"Q4_K_S"],
[18,"Q4_K_M"],[19,"Q4_K_S"],[20,"Q4_K_M"],[21,"Q4_K_S"],[22,"Q4_K_M"],[23,"Q4_K_S"],
[24,"Q4_K_M"],[25,"Q4_K_M"],[26,"Q4_K_M"],[27,"Q4_K_M"],[28,"Q4_K_M"],[29,"Q4_K_M"],
[30,"Q4_K_M"],[31,"Q4_K_L"],[32,"Q5_K_S"],[33,"Q5_K_M"],[34,"Q5_K_L"],[35,"Q6_K_S"]
]'
FLAGS="--token-embedding-type Q4_K --output-tensor-type Q6_K --layer-types-high"
```
Comparison:
| Quant | Size (bytes) | PPL | Comment |
|---|---|---|---|
| IQ4_XS | 4.6e9 | 8.7 | - |
| Q4_K_H | 5.4e9 | 8.7 | Hybrid quant with Q4_K embedding, Q6_K output |
| Q6_K | 6.7e9 | 8.6 | Q6_K with default embedding and output |
| Q6_K_H | 6.5e9 | 8.6 | Hybrid quant with Q6_K embedding, Q6_K output |
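PPL figures like these are typically measured with the stock llama-perplexity tool. A minimal sketch follows; the test corpus file name is an assumption, and any fixed corpus works as long as every quant is measured against the same one:

```
# Measure perplexity of a quant against a reference text corpus.
# wiki.test.raw is a placeholder; use the same corpus for all quants
# so the numbers are comparable.
./llama-perplexity -m Qwen3-VL-8B-Instruct.Q6_K_H.gguf \
    -f wiki.test.raw -ngl 99
```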
Usage:
Qwen3-VL-8B-Instruct is a vision-capable model. Used together with its multimedia projector layers, it can process image and text inputs and generate text outputs. The mmproj file is made available in this repository. To test vision mode, follow the docs in the mtmd README in the tools directory of the llama.cpp source tree: https://github.com/ggml-org/llama.cpp/blob/master/tools/mtmd/README.md
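For example, a quick vision test with the stock llama-mtmd-cli tool might look like the following; the image path and prompt are placeholders:

```
# Describe an image using the quant plus its multimedia projector.
./llama-mtmd-cli -m Qwen3-VL-8B-Instruct.Q6_K_H.gguf \
    --mmproj Qwen3-VL-8B-Instruct.mmproj.gguf \
    --image test.jpg -p "Describe this image." -ngl 99
```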
The model can be speculatively decoded with Qwen3 0.6B as the draft model if the inference platform supports dynamic vocab translation between draft and target. On a 4070, non-code generation runs at about 70-75 tok/s speculated and 64 tok/s non-speculated.
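A minimal sketch of such a setup using stock llama-server speculative decoding flags is shown below. The draft model file name is an assumption, and stock llama.cpp only accepts a draft/target pairing when it judges the two vocabs compatible; the dynamic vocab translation mentioned above is not a stock feature.

```
# Serve the model with Qwen3-0.6B as a speculative draft (sketch only).
# Stock llama.cpp requires draft and target vocabs to be compatible;
# dynamic vocab translation between draft and target is not a stock feature.
./llama-server -m Qwen3-VL-8B-Instruct.Q6_K_H.gguf \
    -md Qwen3-0.6B.Q8_0.gguf \
    --draft-max 16 --draft-min 1 -ngl 99 -ngld 99
```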
The minimum llama.cpp version to run the Qwen3-VL series is b6915, with b6936 and above recommended. Another fix for Qwen3-VL landed in llama.cpp version b7209.
Benchmarks:
A full set of vision benchmarks for the model is given here: https://huggingface.co/spaces/steampunque/benchlm
Download the files from the links below:
| Link | Type | Size | Notes |
|---|---|---|---|
| Qwen3-VL-8B-Instruct.Q4_K_H.gguf | Q4_K_H | 5.4e9 B | ~1e9 B smaller than Q6_K_H |
| Qwen3-VL-8B-Instruct.Q6_K_H.gguf | Q6_K_H | 6.5e9 B | ~Q6_K size |
| Qwen3-VL-8B-Instruct.mmproj.gguf | F16 | 1.2e9 B | multimedia projector |
A discussion thread about the hybrid layer quant approach can be found on the llama.cpp git repository.