---
license: mit
---
# Introduction to TraDo
We introduce TraDo, a state-of-the-art diffusion language model trained with TraceRL.
- TraDo-4B-Instruct and TraDo-8B-Instruct outperform strong autoregressive (AR) models of similar size across math reasoning tasks.
- TraDo-8B-Thinking is the first long chain-of-thought (Long-CoT) diffusion language model.
## Citation
```bibtex
@article{wang2025trado,
  title={Revolutionizing Reinforcement Learning Framework for Diffusion Large Language Models},
  author={Wang, Yinjie and Yang, Ling and Li, Bowen and Tian, Ye and Shen, Ke and Wang, Mengdi},
  journal={arXiv preprint arXiv:2509.06949},
  year={2025}
}
```