Original model: https://huggingface.co/ggml-org/SmolLM3-3B-GGUF

Split with `gguf-split --split-max-size 100M` to create a small model for testing split GGUFs.
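A minimal sketch of how such a split can be produced and consumed, assuming the `gguf-split` and `llama-cli` binaries from llama.cpp; the file names below are illustrative, not the actual shard names in this repository:

```shell
# Split a GGUF into shards of at most 100 MB each.
./gguf-split --split-max-size 100M SmolLM3-3B-Q4_0.gguf SmolLM3-3B-split

# Shards follow the <prefix>-00001-of-0000N.gguf naming convention;
# llama.cpp discovers the remaining shards when given the first one.
./llama-cli -m SmolLM3-3B-split-00001-of-00003.gguf -p "Hello"
```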

Format: GGUF
Model size: 3B params
Architecture: smollm3

Quantization: 4-bit
