Baguettotron-GGUF
This repo contains GGUF variants of Baguettotron, a 321-million-parameter generalist Small Reasoning Model trained on 200 billion tokens from SYNTH, a fully open generalist dataset.
Despite being trained on considerably less data, Baguettotron outperforms most SLMs in the same size range on non-code industry benchmarks, providing an unprecedented balance between memory, general reasoning, math, and retrieval performance.
Please refer to the original model card for more details.
GGUF conversion
The GGUF conversion was originally done by typeof.
This version adds some additional metadata (preferred temperature and sampling settings) to ease setup in LM Studio.
Given the small size of the original model, we recommend using at least the 8-bit version.
We don't provide quants below 4-bit, but they are available in typeof's repo.
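To inspect the metadata embedded in the file (including the preferred sampling settings mentioned above), one option is the gguf-dump script that ships with the gguf Python package (pip install gguf); this is a suggestion, not part of the original instructions:
gguf-dump Baguettotron-Q8_0.gguf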
Inference
The easiest way to get started is through LM Studio, using its default import system.
Otherwise, you can install llama.cpp directly and invoke the llama.cpp server:
llama-server --hf-repo typeof/Baguettotron-gguf --hf-file Baguettotron-Q8_0.gguf -c 2048
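Once the server is up, you can query its OpenAI-compatible HTTP API (served on port 8080 by default), for example:
curl http://localhost:8080/v1/chat/completions -H "Content-Type: application/json" -d '{"messages": [{"role": "user", "content": "How to make a good baguette?"}]}'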
Alternatively, you can run a prompt directly from the command line:
llama-cli --hf-repo typeof/Baguettotron-gguf --hf-file Baguettotron-Q8_0.gguf -p "How to make a good baguette?"