QED-Nano


Table of Contents

  1. Model Summary
  2. How to use
  3. Evaluation
  4. Limitations
  5. License
  6. Acknowledgements

Model Summary

QED-Nano is a 4B-parameter model explicitly post-trained to strengthen its proof-writing capabilities. Despite its small size, QED-Nano achieves an impressive 40% score on the challenging IMO-ProofBench benchmark (20 percentage points above the Qwen3 base model), matching the performance of OpenAI's GPT-OSS-120B. With an agent scaffold that scales inference-time compute to over 1M tokens per problem, QED-Nano approaches the performance of Gemini-3-Pro. Crucially, the same agentic scaffold barely improves the performance of the base model (Qwen3-4B-Thinking-2507).

[Figure: IMO-ProofBench results]

QED-Nano is based on Qwen/Qwen3-4B-Thinking-2507 and was post-trained with a combination of supervised fine-tuning and reinforcement learning with a reasoning cache (which lets the model continue to improve under our agentic scaffold at test time) on a mixture of Olympiad proof problems from various public sources.

We are working to release the full training recipe, including data, code, and agent scaffolds -- stay tuned!

How to use

from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "lm-provers/QED-Nano"
device = "cuda"  # for GPU usage or "cpu" for CPU usage

# load the tokenizer and the model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",  # load in the checkpoint's native precision (BF16)
).to(device)

# prepare the model input
prompt = "Generate a rigorous proof to the following question: is \sqrt{2} rational or irrational?"
messages_think = [
    {"role": "user", "content": prompt}
]

text = tokenizer.apply_chat_template(
    messages_think,
    tokenize=False,
    add_generation_prompt=True,
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

# Generate the output
generated_ids = model.generate(**model_inputs, max_new_tokens=32768)

# Get and decode the output
output_ids = generated_ids[0][len(model_inputs.input_ids[0]) :]
print(tokenizer.decode(output_ids, skip_special_tokens=True))
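
Like the Qwen3-Thinking base model, QED-Nano emits a reasoning trace before its final answer. If you only want the final proof, here is a minimal sketch for splitting the two, assuming the output keeps the base model's </think> delimiter (depending on the tokenizer configuration the delimiter may be stripped as a special token, in which case the fallback prints everything):

full_output = tokenizer.decode(output_ids, skip_special_tokens=True)
# split the reasoning trace from the final proof on the </think>
# delimiter inherited from Qwen3-4B-Thinking-2507, if it is present
if "</think>" in full_output:
    _reasoning, proof = full_output.split("</think>", 1)
else:
    proof = full_output
print(proof.strip())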

We recommend setting temperature=0.6 and top_p=0.95 in the sampling parameters.
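
These can be passed directly to generate. A minimal sketch (note that do_sample=True is required for temperature and top_p to take effect):

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=32768,
    do_sample=True,
    temperature=0.6,
    top_p=0.95,
)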

vLLM and SGLang

You can use vLLM or SGLang to deploy the model behind an OpenAI-compatible API.

SGLang

python -m sglang.launch_server --model-path lm-provers/QED-Nano

vLLM

vllm serve lm-provers/QED-Nano
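
Once a server is up, any OpenAI-compatible client can query it. A minimal sketch using the openai Python client, assuming vLLM's default port 8000 (SGLang defaults to 30000):

from openai import OpenAI

# point the client at the local server; the API key is required but unused
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="lm-provers/QED-Nano",
    messages=[
        {"role": "user", "content": "Prove that there are infinitely many primes."}
    ],
    temperature=0.6,
    top_p=0.95,
)
print(response.choices[0].message.content)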

Evaluation

In this section, we report the evaluation results of QED-Nano on IMO-ProofBench and ProofBench. All evaluations are reported as avg@3 (scores averaged over 3 independent generations per problem) unless stated otherwise.
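
For reference, a minimal sketch of how avg@3 is computed (generate and grade_proof are hypothetical stand-ins for a sampler and a grader, not part of this release):

import statistics

def avg_at_k(problem, generate, grade_proof, k=3):
    # average the grader's score over k independent generations
    return statistics.mean(
        grade_proof(problem, generate(problem)) for _ in range(k)
    )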

[Figure: evaluation results on IMO-ProofBench and ProofBench]

Limitations

QED-Nano is a domain-specific model designed for one thing and one thing only: proving theorems. Using it as a general assistant will likely produce nonsense outside this domain. The model should be used as an assistive tool rather than a definitive source of information; users should always verify important results and critically evaluate any generated content.

License

Apache 2.0

Acknowledgements

QED-Nano is a collaboration between research teams at CMU, ETH Zurich, Numina, and Hugging Face. The individual contributors and their affiliations are listed below:

CMU

Amrith Setlur, Yuxiao Qu, Ian Wu, and Aviral Kumar

ETH Zurich

Jasper Dekoninck

Numina

Jia Li

Hugging Face

Edward Beeching and Lewis Tunstall
