This is allenai/Olmo-3-7B-Think quantized with LLM Compressor using GPTQ (W4A16, group size 128). The model is compatible with vLLM (tested with v0.11.2) and was tested on an RTX 5090.
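
For reference, here is a minimal sketch of running the model with vLLM's offline Python API. The prompt and sampling settings are illustrative assumptions, not part of the original card; vLLM picks up the quantization config from the checkpoint itself.

```python
from vllm import LLM, SamplingParams

# Load the GPTQ-quantized checkpoint; vLLM reads the quantization
# config embedded in the model repository automatically.
llm = LLM(model="kaitchup/Olmo-3-7B-Think-gptq-w4a16-g128")

# Example sampling settings (assumed values for illustration).
params = SamplingParams(temperature=0.6, max_tokens=512)

outputs = llm.generate(
    ["Explain GPTQ quantization in one paragraph."],  # example prompt
    params,
)
print(outputs[0].outputs[0].text)
```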

How to Support My Work

Subscribe to The Kaitchup. It helps me continue quantizing and evaluating models for free. You can also buy me a coffee on Ko-fi.
