Llama.cpp quantizations of gme-Qwen2-VL-2B-Instruct
Quantized using llama.cpp release b4604.
Original model: https://huggingface.co/Alibaba-NLP/gme-Qwen2-VL-2B-Instruct
Download a single file from the table below (a scripted download sketch follows the table):
| Filename | Quant type | File Size | Split | Description |
| --- | --- | --- | --- | --- |
| gme-Qwen2-VL-2B-Instruct-f16.gguf | f16 | 3.09GB | false | Full F16 weights. |
| gme-Qwen2-VL-2B-Instruct-Q8_0.gguf | Q8_0 | 1.65GB | false | Extremely high quality, generally unneeded but max available quant. |
| gme-Qwen2-VL-2B-Instruct-Q6_K.gguf | Q6_K | 1.27GB | false | Very high quality, near perfect, recommended. |
| gme-Qwen2-VL-2B-Instruct-Q5_K_M.gguf | Q5_K_M | 1.13GB | false | High quality, recommended. |
| gme-Qwen2-VL-2B-Instruct-Q5_K_S.gguf | Q5_K_S | 1.10GB | false | High quality, recommended. |
| gme-Qwen2-VL-2B-Instruct-Q4_K_M.gguf | Q4_K_M | 0.99GB | false | Good quality, default size for most use cases, recommended. |
| gme-Qwen2-VL-2B-Instruct-Q4_K_S.gguf | Q4_K_S | 0.94GB | false | Slightly lower quality with more space savings, recommended. |
| gme-Qwen2-VL-2B-Instruct-Q4_0.gguf | Q4_0 | 0.94GB | false | Legacy format, offers online repacking for ARM and AVX CPU inference. |
| gme-Qwen2-VL-2B-Instruct-Q3_K_L.gguf | Q3_K_L | 0.88GB | false | Lower quality but usable, good for low RAM availability. |
| gme-Qwen2-VL-2B-Instruct-Q3_K_M.gguf | Q3_K_M | 0.82GB | false | Low quality. |
| gme-Qwen2-VL-2B-Instruct-Q3_K_S.gguf | Q3_K_S | 0.76GB | false | Low quality, not recommended. |
| gme-Qwen2-VL-2B-Instruct-Q2_K.gguf | Q2_K | 0.68GB | false | Very low quality but surprisingly usable. |
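For a scripted download, a minimal Python sketch using the `huggingface_hub` client is below. The repo id `sinequa/gme-Qwen2-VL-2B-Instruct-GGUF` comes from this card; picking the Q4_K_M file is just an example. The embedding call via the `llama-cpp-python` bindings is an assumption about your runtime (gme-Qwen2-VL is an embedding model, but this card does not prescribe a loader), so treat that half as illustrative.

```python
# Minimal sketch: download one quant from this repo and compute a text embedding.
# Assumes `pip install huggingface_hub llama-cpp-python`; the Q4_K_M choice and
# the llama-cpp-python usage are illustrative, not prescribed by this card.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Fetch a single GGUF file (repo id and filename taken from the table above).
model_path = hf_hub_download(
    repo_id="sinequa/gme-Qwen2-VL-2B-Instruct-GGUF",
    filename="gme-Qwen2-VL-2B-Instruct-Q4_K_M.gguf",
    local_dir=".",
)

# Open the model in embedding mode.
# (Assumption: this GGUF quant supports text-only embedding under llama.cpp.)
llm = Llama(model_path=model_path, embedding=True, verbose=False)
result = llm.create_embedding("What is the capital of China?")
vector = result["data"][0]["embedding"]
print(len(vector))  # embedding dimensionality
```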
Model tree for sinequa/gme-Qwen2-VL-2B-Instruct-GGUF:
- Base model: Qwen/Qwen2-VL-2B
- Finetuned: Qwen/Qwen2-VL-2B-Instruct
- Finetuned: Alibaba-NLP/gme-Qwen2-VL-2B-Instruct (the model quantized in this repo)