QUASAR: Quantum Assembly Code Generation with Tool-Augmented RL
Model Summary
QUASAR is a 4B-parameter model fine-tuned from Qwen3-4B-Instruct-2507 using a two-stage process: supervised fine-tuning (SFT) followed by agentic reinforcement learning (RL) with tool-augmented feedback.
The model is designed to generate OpenQASM 3.0 quantum circuits for optimization problems such as QAOA and VQE, achieving high syntactic validity and semantic fidelity.
- Framework: Agentic RL with external quantum simulator verification
- Reward: Hierarchical 4-level reward (syntax, distribution alignment, expectation value, optimization progress)
- Primary Domain: Quantum circuit generation and quantum optimization algorithm design
Model Details
- Model type: LLM fine-tuned with reinforcement learning
- Languages: English
- License: Apache-2.0
- Base model: Qwen/Qwen3-4B-Instruct-2507
Uses
Direct Use
- Generate OpenQASM 3.0 code from natural language descriptions
- Design ansatz circuits for quantum optimization tasks (QAOA, VQE)
Downstream Use
- Integration into quantum compilers
- Research on LLM-guided quantum algorithm design
Bias, Risks, and Limitations
- May produce syntactically valid QASM that is semantically weak when prompts are ambiguous
- Tailored primarily to graph-based quantum optimization problems
- Evaluated mainly in simulation; hardware generalization remains untested
Recommendation: Always verify generated circuits with independent quantum simulators or compilers before deployment.
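For example, a generated circuit can be parsed and simulated with Qiskit before it is used downstream. The snippet below is a minimal sketch: the circuit string is illustrative (substitute the model's output), and qasm3.loads requires the optional qiskit-qasm3-import package.
from qiskit import qasm3
from qiskit.quantum_info import Statevector

# Illustrative OpenQASM 3.0 program; in practice, use the model's generated code.
qasm_str = """OPENQASM 3.0;
include "stdgates.inc";
qubit[2] q;
h q[0];
cx q[0], q[1];
"""

# Parsing raises an exception if the program is not valid OpenQASM 3.0.
circuit = qasm3.loads(qasm_str)

# Simulate the measurement-free circuit and inspect its output distribution.
probs = Statevector.from_instruction(circuit).probabilities_dict()
print(probs)  # e.g. {'00': 0.5, '11': 0.5} for this Bell-state circuit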
How to Get Started
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the fine-tuned model and its tokenizer from the Hugging Face Hub.
model = AutoModelForCausalLM.from_pretrained("Benyucong/rl_quantum_4b")
tokenizer = AutoTokenizer.from_pretrained("Benyucong/rl_quantum_4b")

# Describe the optimization problem and its graph instance in the prompt.
prompt = """Design a QASM 3.0 quantum circuit with 3 qubits and 3 layers to solve the vertex_cover \
given the graph: {"directed": false, "multigraph": false, "graph": {}, "nodes": [{"id": 0}, {"id": 1}, {"id": 2}], \
"edges": [{"source": 0, "target": 1}, {"source": 0, "target": 2}, {"source": 1, "target": 2}]}. \
Provide valid QASM 3.0 code with optimal parameters."""

# Tokenize, generate, and decode the OpenQASM 3.0 circuit.
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=1024)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
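The graph in the prompt is a node-link style JSON object (nodes with ids, edges with source/target pairs). A minimal sketch of assembling such a prompt programmatically; the wording simply mirrors the example above, and other problem names follow the dataset's conventions:
import json

# Triangle graph on 3 vertices in the node-link layout used by the prompt above.
graph = {
    "directed": False,
    "multigraph": False,
    "graph": {},
    "nodes": [{"id": i} for i in range(3)],
    "edges": [
        {"source": 0, "target": 1},
        {"source": 0, "target": 2},
        {"source": 1, "target": 2},
    ],
}

prompt = (
    "Design a QASM 3.0 quantum circuit with 3 qubits and 3 layers to solve the "
    f"vertex_cover given the graph: {json.dumps(graph)}. "
    "Provide valid QASM 3.0 code with optimal parameters."
)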
Training Details
Training Data
- Dataset: Benyucong/graph-data-quantum-rl
- Contains QASM 3.0 circuits, Hamiltonians, eigenvalues, and parameterized circuits for 12 quantum optimization problems
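A minimal sketch of inspecting the dataset with the Hugging Face datasets library (the "train" split name is an assumption; check the dataset card for the actual splits and fields):
from datasets import load_dataset

# Split name is assumed; the dataset card lists the available splits and columns.
ds = load_dataset("Benyucong/graph-data-quantum-rl", split="train")
print(ds.column_names)
print(ds[0])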
Training Setup
- Stage 1: Supervised fine-tuning (SFT); the SFT checkpoint is released separately
- Stage 2: Reinforcement learning with GRPO and hierarchical reward
Hyperparameters
- Batch size: 128
- Rollouts: 16 per prompt (temperature = 0.7, top-p = 0.8); see the sampling sketch after this list
- Precision: bf16 mixed precision
- GPUs: 16 × H100-64GB (FSDP enabled)
- Training time: ~48 hours
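For reference, the rollout sampling configuration above corresponds to settings like the following in vLLM. This is a sketch, not the actual training harness, and max_tokens is an assumed value:
from vllm import SamplingParams

# 16 rollouts per prompt at temperature 0.7 / top-p 0.8, as listed above.
rollout_params = SamplingParams(n=16, temperature=0.7, top_p=0.8, max_tokens=1024)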
Evaluation
Metrics (Please check our paper for details)
- SCR: Syntactic Correctness Ratio
- SREV: Success Rate of Expectation Value
- RE: Relative Entropy (distributional alignment; see the sketch after this list)
- HQCR: High-Quality Circuit Ratio
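For instance, RE compares the output distribution of a generated circuit against a reference distribution. A minimal sketch of a KL-style relative entropy over bitstring probabilities (the paper's exact normalization and smoothing may differ):
import numpy as np

def relative_entropy(p: dict, q: dict, eps: float = 1e-12) -> float:
    # KL divergence D(p || q) over bitstrings; eps avoids log(0) on missing keys.
    keys = sorted(set(p) | set(q))
    p_vec = np.array([p.get(k, 0.0) for k in keys]) + eps
    q_vec = np.array([q.get(k, 0.0) for k in keys]) + eps
    p_vec /= p_vec.sum()
    q_vec /= q_vec.sum()
    return float(np.sum(p_vec * np.log(p_vec / q_vec)))

# A Bell-state distribution versus a uniform 2-qubit distribution.
print(relative_entropy({"00": 0.5, "11": 0.5},
                       {"00": 0.25, "01": 0.25, "10": 0.25, "11": 0.25}))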
Results (QUASAR vs Baselines)
Method | Pass@1 SCR ↑ | Pass@1 SREV ↑ | Pass@1 RE ↓ | Pass@1 HQCR ↑ | Pass@10 SCR ↑ | Pass@10 SREV ↑ | Pass@10 RE ↓ | Pass@10 HQCR ↑
--- | --- | --- | --- | --- | --- | --- | --- | ---
DeepSeek-V3 | 94.83% | 12.24% | 19.20 | 10.00% | 98.97% | 26.38% | 16.39 | 16.38%
GPT-5 | 87.07% | 10.00% | 19.94 | 6.90% | 90.52% | 27.07% | 11.57 | 16.55%
GPT-4o | 87.93% | 9.83% | 19.42 | 6.38% | 88.79% | 18.62% | 14.08 | 12.07%
Qwen3-4B SFT | 97.41% | 18.97% | 12.74 | 15.17% | 99.65% | 31.55% | 10.81 | 23.62%
Cold Start GRPO | 84.48% | 19.84% | 14.32 | 12.41% | 95.17% | 27.59% | 11.38 | 18.96%
QUASAR (ours) | 99.31% | 22.41% | 11.61 | 17.24% | 100.00% | 33.10% | 8.48 | 27.24%
Environmental Impact
- Hardware Type: NVIDIA H100 (16×, 64GB)
- Training Hours: ~48
Technical Specifications
- Architecture: Qwen3-4B-Instruct-2507
- Fine-tuning: SFT + RL (GRPO)
- Reward Design: Hierarchical reward combining syntax validity, distributional alignment (JS distance), expectation-value matching, and optimization-progress efficiency (see the sketch after this list)
- Frameworks: PyTorch, vLLM, Qiskit, OpenQASM
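A minimal sketch of how a hierarchical reward of this shape could be composed from per-level scores; the gating and level weights are illustrative assumptions, not the paper's exact formulation:
def hierarchical_reward(syntax_ok: bool, js_distance: float,
                        expval_error: float, opt_progress: float) -> float:
    # Level 1: syntactically invalid QASM earns no further reward (assumed gating).
    if not syntax_ok:
        return 0.0
    reward = 1.0
    # Level 2: distribution alignment, larger when the JS distance (in [0, 1]) is small.
    reward += 1.0 - js_distance
    # Level 3: expectation-value matching, larger when the relative error is small.
    reward += max(0.0, 1.0 - expval_error)
    # Level 4: optimization progress toward the target energy, clipped to [0, 1].
    reward += max(0.0, min(1.0, opt_progress))
    return reward

print(hierarchical_reward(True, 0.1, 0.05, 0.8))  # 3.65 under these illustrative weights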
Citation
@misc{yu2025quasarquantumassemblycode,
  title={QUASAR: Quantum Assembly Code Generation Using Tool-Augmented LLMs via Agentic RL},
  author={Cong Yu and Valter Uotila and Shilong Deng and Qingyuan Wu and Tuo Shi and Songlin Jiang and Lei You and Bo Zhao},
  year={2025},
  eprint={2510.00967},
  archivePrefix={arXiv},
  primaryClass={cs.AI},
  url={https://arxiv.org/abs/2510.00967},
}