This repo includes .gguf files built for HuggingFace/Candle. They will not work with llama.cpp.

Refer to the original repo for more details.
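
As a rough illustration of using these files with Candle, the sketch below reads a GGUF file's metadata and tensor descriptors with the `candle-core` GGUF reader. The file name is a placeholder, and the exact API surface may vary slightly between candle-core versions; treat this as a starting point under those assumptions, not a verified loader for this specific model.

```rust
// Minimal sketch: inspect a GGUF file with candle-core's gguf reader.
// Assumptions: `candle-core` is a dependency, and the file name below is a
// placeholder for one of the .gguf files in this repo.
use candle_core::quantized::gguf_file;
use std::fs::File;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Open a quantized file from this repo (placeholder name).
    let mut file = File::open("model-q4_0.gguf")?;

    // Parse the GGUF container: key/value metadata plus tensor descriptors.
    let content = gguf_file::Content::read(&mut file)?;

    println!("metadata entries: {}", content.metadata.len());
    println!("tensors: {}", content.tensor_infos.len());

    // Print a few tensor names with their shapes and quantized dtypes.
    for (name, info) in content.tensor_infos.iter().take(5) {
        println!("{name}: shape {:?}, dtype {:?}", info.shape, info.ggml_dtype);
    }
    Ok(())
}
```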

Model size: 1.41B params

Available quantizations: 4-bit, 5-bit, 8-bit, 16-bit