Llama.cpp hybrid layer quantization of MiniCPM-V-4_5 by openbmb
Original model: https://huggingface.co/openbmb/MiniCPM-V-4_5
The hybrid quant employs different quantization levels on a per-layer basis to enable both high performance and small file size at the same time. This particular quant achieves a ~6.3G GGUF with approximately the same perplexity as a ~6.7G Q6_K GGUF. The quants employed are all K quants to avoid slow CPU or older-GPU processing of IQ quants. For this file the Q6_K_H layer quants are as follows:
```bash
LAYER_TYPES='[
[0 ,"Q6_K" ],[1 ,"Q5_K_M"],[2 ,"Q4_K_M"],[3 ,"Q4_K_M"],[4 ,"Q4_K_M"],[5 ,"Q4_K_M"],
[6 ,"Q5_K_S"],[7 ,"Q5_K_S"],[8 ,"Q5_K_S"],[9 ,"Q5_K_S"],[10,"Q5_K_M"],[11,"Q5_K_S"],
[12,"Q5_K_M"],[13,"Q5_K_S"],[14,"Q5_K_M"],[15,"Q5_K_M"],[16,"Q5_K_M"],[17,"Q5_K_M"],
[18,"Q5_K_M"],[19,"Q5_K_M"],[20,"Q5_K_M"],[21,"Q5_K_M"],[22,"Q5_K_M"],[23,"Q5_K_M"],
[24,"Q5_K_M"],[25,"Q5_K_M"],[26,"Q5_K_M"],[27,"Q5_K_M"],[28,"Q6_K" ],[29,"Q6_K" ],
[30,"Q6_K" ],[31,"Q6_K" ],[32,"Q8_0" ],[33,"Q8_0" ],[34,"Q8_0" ],[35,"Q8_0" ]
]'
FLAGS="--token-embedding-type Q6_K --output-tensor-type Q6_K --layer-types-high"
```
A Q4_K_H quant is also available. In addition to the standard types, its layer table uses the following custom per-layer mixes:
- Q4_K_L : Q4_K_M + attn_o = q6_k
- Q5_K_L : attn_v = q8_0, attn_o = q6_k, ffn_d = q6_k
- Q6_K_S : Q6_K
```bash
LAYER_TYPES='[
[0 ,"Q6_K_S"],[1 ,"Q5_K_L"],[2 ,"Q4_K_M"],[3 ,"Q4_K_S"],[4 ,"Q4_K_S"],[5 ,"Q4_K_S"],
[6 ,"Q4_K_S"],[7 ,"Q4_K_S"],[8 ,"Q4_K_S"],[9 ,"Q4_K_S"],[10,"Q4_K_S"],[11,"Q4_K_S"],
[12,"Q4_K_M"],[13,"Q4_K_S"],[14,"Q4_K_M"],[15,"Q4_K_S"],[16,"Q4_K_M"],[17,"Q4_K_S"],
[18,"Q4_K_M"],[19,"Q4_K_S"],[20,"Q4_K_M"],[21,"Q4_K_S"],[22,"Q4_K_M"],[23,"Q4_K_S"],
[24,"Q4_K_M"],[25,"Q4_K_M"],[26,"Q4_K_M"],[27,"Q4_K_M"],[28,"Q4_K_M"],[29,"Q4_K_M"],
[30,"Q4_K_M"],[31,"Q4_K_L"],[32,"Q5_K_S"],[33,"Q5_K_M"],[34,"Q5_K_L"],[35,"Q6_K_S"]
]'
FLAGS="--token-embedding-type Q4_K --output-tensor-type Q6_K --layer-types-high"
```
Comparison:
| Quant | Size/e9 B | PPL | Comment |
|---|---|---|---|
| IQ4_XS | 4.6 | 7.4 | - |
| Q4_K_H | 5.2 | 7.3 | Hybrid quant with Q4_K embedding, Q6_K output |
| Q6_K | 6.7 | 7.24 | Q6_K with default embedding and output |
| Q6_K_H | 6.3 | 7.26 | Hybrid quant with Q6_K embedding, Q6_K output |
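For reference, PPL numbers of this kind can be reproduced with the stock llama-perplexity tool. The corpus and context length behind the table above are not specified here, so the test file and -c value in this sketch are placeholders; absolute values will differ with different settings, but the relative ordering of the quants is what matters.

```bash
# Sketch of a perplexity run with the stock llama-perplexity tool.
# wiki.test.raw and the context size are placeholder choices; use the
# same corpus and settings across quants for a fair comparison.
./llama-perplexity -m MiniCPM-V-4_5.Q6_K_H.gguf -f wiki.test.raw -c 512
```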
Usage:
MiniCPM-V-4_5 is a vision-capable reinforcement learning (RL) reasoning model. It can be used together with its multimedia projector layers to process image and text inputs and generate text outputs. The model appears to have been trained to optionally use think block prefixes (depending on the image/prompt, it may or may not create a think block prefix). The mmproj file is made available in this repository. To test vision mode, follow the docs in the mtmd README in the tools directory of the source tree: https://github.com/ggml-org/llama.cpp/blob/master/tools/mtmd/README.md .
A llama.cpp bug fix for MiniCPM-V inference landed in b7167, which is the minimum version that should be used to run this model, as it results in noticeably improved vision evals.
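As a quick smoke test of vision mode, something like the following should work with the llama-mtmd-cli tool documented in the mtmd README linked above; the image file and prompt are placeholders.

```bash
# Sketch of a vision-mode smoke test with llama-mtmd-cli (llama.cpp
# b7167 or later); test.jpg and the prompt are placeholders.
./llama-mtmd-cli -m MiniCPM-V-4_5.Q6_K_H.gguf \
  --mmproj MiniCPM-V-4_5.mmproj.gguf \
  --image test.jpg -p "Describe this image."
```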
Benchmarks:
A full set of benchmarks for the model is available here: https://huggingface.co/spaces/steampunque/benchlm . Benchmarks were run with the b7167 MiniCPM fix in place.
Download the files from below:
| Link | Type | Size/e9 B | Notes |
|---|---|---|---|
| MiniCPM-V-4_5.Q4_K_H.gguf | Q4_K_H | 5.2 | ~1e9 B smaller than Q6_K_H with the same eval performance |
| MiniCPM-V-4_5.Q6_K_H.gguf | Q6_K_H | 6.3 | 0.4e9 B smaller than Q6_K |
| MiniCPM-V-4_5.mmproj.gguf | mmproj | 1.1 | multimedia projector |
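One way to fetch the model plus projector pair, assuming the huggingface-cli tool is installed; the repo id is this model card's repository.

```bash
# Sketch of downloading the Q6_K_H quant and its mmproj file with
# huggingface-cli; swap in the Q4_K_H file name for the smaller quant.
huggingface-cli download steampunque/MiniCPM-V-4_5-Hybrid-GGUF \
  MiniCPM-V-4_5.Q6_K_H.gguf MiniCPM-V-4_5.mmproj.gguf --local-dir .
```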
A discussion thread about the hybrid layer quant approach can be found on the llama.cpp git repository: