# SmolVLM-256M-Instruct

- **Model creator:** HuggingFaceTB
- **Original model:** HuggingFaceTB/SmolVLM-256M-Instruct
- **GGUF quantization:** provided by HarshKalburgi using llama.cpp
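For a rough sense of the download size: Q4_K_M averages somewhere around 4.5–5 bits per weight (an approximation — the true file size depends on which tensors llama.cpp keeps in higher precision). A back-of-envelope estimate:

```python
# Rough size estimate for a Q4_K_M quantization of a 256M-parameter model.
# 4.85 bits/weight is an approximate average for Q4_K_M, not an exact figure.
params = 256_000_000
bits_per_weight = 4.85
size_mb = params * bits_per_weight / 8 / 1e6
print(f"~{size_mb:.0f} MB")  # rough estimate; check the repo's Files tab for the real size
```

This is why even modest hardware can run the model comfortably.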
## Special thanks

🙏 Special thanks to Georgi Gerganov and the whole team working on llama.cpp for making all of this possible.
## Use with Ollama

```shell
ollama run "hf.co/HarshKalburgi/SmolVLM-256M-Instruct-GGUF:Q4_K_M"
```
## Use with LM Studio

```shell
lms load "HarshKalburgi/SmolVLM-256M-Instruct-GGUF"
```
## Use with llama.cpp CLI

```shell
llama-cli --hf "HarshKalburgi/SmolVLM-256M-Instruct-GGUF:Q4_K_M" -p "The meaning to life and the universe is"
```
## Use with llama.cpp Server

```shell
llama-server --hf "HarshKalburgi/SmolVLM-256M-Instruct-GGUF:Q4_K_M" -c 4096
```
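Once `llama-server` is running, it exposes an OpenAI-compatible HTTP API (port 8080 by default). A minimal request might look like the following; the prompt text is just an example, and the command assumes the server from the step above is already up:

```shell
# Query the OpenAI-compatible chat endpoint of a locally running llama-server.
curl -s http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [
      {"role": "user", "content": "Describe this model in one sentence."}
    ],
    "max_tokens": 64
  }'
```

The response comes back as JSON in the same shape the OpenAI chat API uses, so existing OpenAI client libraries can usually be pointed at the local server by changing their base URL.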
## Model tree for HarshKalburgi/SmolVLM-256M-Instruct-GGUF

- Base model: HuggingFaceTB/SmolLM2-135M
- Instruct model: HuggingFaceTB/SmolLM2-135M-Instruct
- Original model (quantized in this repo): HuggingFaceTB/SmolVLM-256M-Instruct