OpenReasoning-Nemotron-1.5B-F32-GGUF

OpenReasoning-Nemotron-1.5B is a large language model (LLM) derived from Qwen2.5-1.5B-Instruct (the reference model). It is a reasoning model post-trained for generating solutions to math, code, and science problems. We evaluated this model with up to 64K output tokens.

OpenReasoning-Nemotron models can be used in a "heavy" mode: multiple generations are started in parallel and combined via generative solution selection (GenSelect). To add this "skill," we follow the original GenSelect training pipeline, except that instead of training on the selection summary we use the full reasoning trace of DeepSeek R1 0528 671B. We train the models to select the best solution only for math problems, yet surprisingly find that this capability generalizes directly to code and science questions. With this "heavy" GenSelect inference mode, the OpenReasoning-Nemotron-32B model surpasses O3 (High) on math and coding benchmarks.
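
In heavy mode, the model first produces several independent candidate solutions and is then prompted to judge them and pick the best one. Below is a minimal sketch of that flow using llama-cpp-python; the candidate count, sampling settings, and selection prompt are illustrative assumptions, not the official GenSelect pipeline.

```python
# Sketch of "heavy" GenSelect-style inference with llama-cpp-python.
# Candidate count, sampling settings, and the selection prompt are
# illustrative assumptions, not NVIDIA's official GenSelect pipeline.
from llama_cpp import Llama

llm = Llama(model_path="OpenReasoning-Nemotron-1.5B.Q4_K_M.gguf", n_ctx=32768)

problem = "Find all real x such that x^2 - 5x + 6 = 0."

# 1) Sample several independent candidate solutions.
#    (Run sequentially here for simplicity; heavy mode runs them in parallel.)
candidates = []
for _ in range(4):
    out = llm.create_chat_completion(
        messages=[{"role": "user", "content": problem}],
        temperature=0.6,
        max_tokens=2048,
    )
    candidates.append(out["choices"][0]["message"]["content"])

# 2) Ask the model to judge the candidates and select the best one.
numbered = "\n\n".join(f"Solution {i}:\n{c}" for i, c in enumerate(candidates))
selection = llm.create_chat_completion(
    messages=[{
        "role": "user",
        "content": f"Problem:\n{problem}\n\n{numbered}\n\n"
                   "Judge the candidate solutions above and state which one is best.",
    }],
    temperature=0.0,
    max_tokens=1024,
)
print(selection["choices"][0]["message"]["content"])
```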

Model Files

| Quant Type | File Size | Filename |
|------------|-----------|----------|
| F32    | 6.18 GB | OpenReasoning-Nemotron-1.5B.F32.gguf |
| F16    | 3.09 GB | OpenReasoning-Nemotron-1.5B.F16.gguf |
| BF16   | 3.09 GB | OpenReasoning-Nemotron-1.5B.BF16.gguf |
| Q8_0   | 1.65 GB | OpenReasoning-Nemotron-1.5B.Q8_0.gguf |
| Q6_K   | 1.27 GB | OpenReasoning-Nemotron-1.5B.Q6_K.gguf |
| Q5_K_M | 1.13 GB | OpenReasoning-Nemotron-1.5B.Q5_K_M.gguf |
| Q5_K_S | 1.1 GB  | OpenReasoning-Nemotron-1.5B.Q5_K_S.gguf |
| Q4_K_M | 986 MB  | OpenReasoning-Nemotron-1.5B.Q4_K_M.gguf |
| Q4_K_S | 940 MB  | OpenReasoning-Nemotron-1.5B.Q4_K_S.gguf |
| Q3_K_L | 880 MB  | OpenReasoning-Nemotron-1.5B.Q3_K_L.gguf |
| Q3_K_M | 824 MB  | OpenReasoning-Nemotron-1.5B.Q3_K_M.gguf |
| Q3_K_S | 761 MB  | OpenReasoning-Nemotron-1.5B.Q3_K_S.gguf |
| Q2_K   | 676 MB  | OpenReasoning-Nemotron-1.5B.Q2_K.gguf |
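
Any of the files above can be fetched from the Hub and run locally. A minimal sketch using huggingface_hub and llama-cpp-python, with the Q4_K_M file chosen purely as an example:

```python
# Sketch: download one quant from this repo and run a quick chat completion.
# The Q4_K_M file is an arbitrary example; any filename from the table works.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

path = hf_hub_download(
    repo_id="prithivMLmods/OpenReasoning-Nemotron-1.5B-F32-GGUF",
    filename="OpenReasoning-Nemotron-1.5B.Q4_K_M.gguf",
)

llm = Llama(model_path=path, n_ctx=8192)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "What is 12 * 17?"}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```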

Quants Usage

(Sorted by size, not necessarily quality. IQ-quants are often preferable to similarly sized non-IQ quants.)

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

![quant comparison graph by ikawrakow](image.png)

Model Details

- Format: GGUF
- Model size: 1.54B params
- Architecture: qwen2
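
The format and architecture fields above are stored directly in the GGUF header, so they can be verified offline. A small sketch with the gguf Python package (assuming the Q4_K_M file from the table is in the working directory):

```python
# Sketch: list the metadata keys and tensor count stored in a GGUF header.
# Requires `pip install gguf`; the filename is an assumption from the table above.
from gguf import GGUFReader

reader = GGUFReader("OpenReasoning-Nemotron-1.5B.Q4_K_M.gguf")

# Header fields include keys such as "general.architecture".
for name in reader.fields:
    print(name)

print(f"{len(reader.tensors)} tensors")
```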

Model Tree

- Base model: Qwen/Qwen2.5-1.5B
- This model (quantized): prithivMLmods/OpenReasoning-Nemotron-1.5B-F32-GGUF