Llama.cpp hybrid layer quantization of LFM2-VL-1.6B by LiquidAI

Original model: https://huggingface.co/LiquidAI/LFM2-VL-1.6B

License: lfm1.0

The hybrid quant employs different quantization levels on a per-layer basis to enable both high performance and small file size. This particular quant achieves a ~1.07G GGUF with the same perplexity as a ~1.24G Q8_0 GGUF. The quants employed are all K quants, to avoid the slow processing of IQ quants on CPUs and older GPUs. For this file the layer quants are as follows:

   Q6_K_S : Q6_K
   Q6_K_M : Q6_K_S + attn_v = Q8_0, ffn_d = Q8_0
   Q6_K_L : Q6_K_M + attn_o = Q8_0

   LAYER_TYPES='[
   [0 ,"Q8_0"  ],[1 ,"Q6_K_L"],[2 ,"Q6_K_M"],[3 ,"Q6_K_S"],
   [4 ,"Q6_K_S"],[5 ,"Q6_K_S"],[6 ,"Q6_K_S"],[7 ,"Q6_K_S"],
   [8 ,"Q6_K_M"],[9 ,"Q6_K_M"],[10,"Q6_K_M"],[11,"Q6_K_M"],
   [12,"Q6_K_L"],[13,"Q6_K_L"],[14,"Q6_K_L"],[15,"Q8_0"  ]
   ]'
   FLAGS="--token-embedding-type Q8_0 --output-tensor-type Q8_0 --layer-types-high"

Comparison:

   Quant   Size (bytes)  PPL   Comment
   Q8_0    1.24e9        12.9  Q8_0 with default embedding and output
   Q8_0_H  1.07e9        12.9  Hybrid layer quant with Q8_0 embedding and Q8_0 output

Usage:

LFM2-VL-1.6B is a vision-capable edge model. Used together with its multimedia projector (mmproj) layers, it can process image and text inputs and generate text outputs. The mmproj for this model is available in this repository in default (F16), Q8_0, and Q4_0 quants for use under constrained memory/compute on edge devices. To create the Q8_0 and Q4_0 mmproj quants, the CLIP FFN tensor length is zero padded to be divisible by 32 (from 4304 to 4320).

To test vision mode, follow the docs in the mtmd README in the tools directory of the llama.cpp source tree: https://github.com/ggml-org/llama.cpp/blob/master/tools/mtmd/README.md .
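
For example, a single-image run with the mtmd CLI looks roughly like the following sketch, using this repo's file names (see the mtmd README above for the authoritative options; the image file name and prompt are illustrative):

   # Describe an image using the hybrid quant plus the Q8_0 multimedia projector.
   ./llama-mtmd-cli -m LFM2-VL-1.6B.Q8_0_H.gguf \
       --mmproj LFM2-VL-1.6B.mmproj.Q8_0.gguf \
       --image photo.jpg \
       -p "Describe this image."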

A llama.cpp bug fix for LFM2-VL-1.6B inference was completed at build b7210, which is the minimum version that should be used to run this model since it results in noticeably improved vision evals.
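
To confirm a local build is at or past b7210, the version flag of the llama.cpp binaries can be checked (a minimal sketch; it prints the build number and commit):

   # The build number printed here should be >= 7210 (i.e. tag b7210 or later).
   ./llama-cli --version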

Benchmarks:

A full set of benchmarks for the model is given here: https://huggingface.co/spaces/steampunque/benchlm . Benchmarks were run after the b7210 fix for LFM2-VL.

Download the files from the table below:

   Link                           Type     Size (bytes)  Notes
   LFM2-VL-1.6B.Q8_0_H.gguf       Q8_0_H   1.07e9        ~0.17e9 B smaller than Q8_0
   LFM2-VL-1.6B.mmproj.gguf       mmproj   0.83e9        multimedia projector, F16
   LFM2-VL-1.6B.mmproj.Q8_0.gguf  mmproj   0.44e9        multimedia projector, Q8_0
   LFM2-VL-1.6B.mmproj.Q4_0.gguf  mmproj   0.24e9        multimedia projector, Q4_0
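
The files can also be fetched with the Hugging Face CLI, for example (a sketch assuming the huggingface_hub CLI is installed and the repo id steampunque/LFM2-VL-1.6B-Hybrid-GGUF):

   # Download the hybrid quant and the Q8_0 projector into the current directory.
   huggingface-cli download steampunque/LFM2-VL-1.6B-Hybrid-GGUF \
       LFM2-VL-1.6B.Q8_0_H.gguf LFM2-VL-1.6B.mmproj.Q8_0.gguf \
       --local-dir .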

A discussion thread about the hybrid layer quant approach can be found in the llama.cpp GitHub repository:

https://github.com/ggml-org/llama.cpp/discussions/13040
