https://github.com/zixi01chen/llama.cpp_internvl2_bpu

Introduction: This vision-language model (VLM) runs in GGUF format (via llama.cpp) with BPU acceleration.

Downloads last month: 245
GGUF
Model size: 630M params
Architecture: qwen2

Quantizations: 4-bit, 16-bit
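Since the weights ship as GGUF files, a quick sanity check on a download is to read the fixed GGUF header (magic bytes, format version, tensor count, metadata key-value count) before handing the file to llama.cpp. A minimal sketch, assuming only a local file path; the filename `model.gguf` is a hypothetical placeholder, not a file published by this repo:

```python
import struct

GGUF_MAGIC = b"GGUF"  # every GGUF file starts with these four bytes

def read_gguf_header(path):
    """Read the fixed-size GGUF header: magic, version, tensor count, KV count."""
    with open(path, "rb") as f:
        magic = f.read(4)
        if magic != GGUF_MAGIC:
            raise ValueError(f"not a GGUF file: magic={magic!r}")
        # little-endian: uint32 version, uint64 tensor count, uint64 metadata KV count
        version, = struct.unpack("<I", f.read(4))
        n_tensors, n_kv = struct.unpack("<QQ", f.read(16))
    return {"version": version, "n_tensors": n_tensors, "n_kv": n_kv}

# Usage (hypothetical path):
# info = read_gguf_header("model.gguf")
# print(info["version"], info["n_tensors"], info["n_kv"])
```

A file that passes this check is at least structurally a GGUF container; the quantization type (4-bit vs 16-bit) is recorded in the metadata key-value section that follows the header.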


Model tree for D-Robotics/InternVL2_5-1B-GGUF-BPU
Quantized: 5 models (including this one)