LLaDA-Instruct-JustGRPO-Code

This model is LLaDA-8B-Instruct fine-tuned with JustGRPO on coding tasks.

It was introduced in the paper "The Flexibility Trap: Why Arbitrary Order Limits Reasoning Potential in Diffusion Language Models".

Method

JustGRPO is a minimalist RL approach for diffusion language models (dLLMs). Instead of complex diffusion-specific RL adaptations, it simply treats dLLMs as autoregressive models during training and applies standard GRPO. See our paper for details.
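At the heart of standard GRPO is a group-relative advantage: sample several completions per prompt, score each one, and normalize every reward by its group's mean and standard deviation. The sketch below illustrates only that idea; the function names and the binary unit-test reward are illustrative assumptions, not code from our repository.

```python
import math


def group_relative_advantages(rewards, eps=1e-6):
    """GRPO-style advantages: normalize each completion's reward by
    the mean and standard deviation of its sampled group."""
    mean = sum(rewards) / len(rewards)
    var = sum((r - mean) ** 2 for r in rewards) / len(rewards)
    std = math.sqrt(var)
    return [(r - mean) / (std + eps) for r in rewards]


def grpo_surrogate(token_logprobs, advantage):
    """REINFORCE-style surrogate for one completion (the clipped
    importance-ratio term of full GRPO is omitted for brevity):
    maximize the advantage-weighted mean token log-likelihood."""
    return -advantage * sum(token_logprobs) / len(token_logprobs)


# Example: four completions sampled for one coding prompt,
# rewarded 1.0 if the unit tests pass and 0.0 otherwise.
rewards = [1.0, 0.0, 0.0, 1.0]
advantages = group_relative_advantages(rewards)
```

Because the dLLM is treated as autoregressive during training, `token_logprobs` here would simply be the per-token log-probabilities of the sampled completion under a left-to-right factorization.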

Performance

HumanEval

| Sequence Length | 128 | 256 | 512 |
|---|---|---|---|
| Pass@1 (%) | 37.8 | 49.4 | 48.7 |

MBPP

| Sequence Length | 128 | 256 | 512 |
|---|---|---|---|
| Pass@1 (%) | 50.6 | 52.4 | 49.0 |

Usage

For generation and evaluation, please refer to our GitHub repository.

Citation

```bibtex
@article{ni2026flexibility,
  title={The Flexibility Trap: Why Arbitrary Order Limits Reasoning Potential in Diffusion Language Models},
  author={Ni, Zanlin and Wang, Shenzhi and Yue, Yang and Yu, Tianyu and Zhao, Weilin and Hua, Yeguo and Chen, Tianyi and Song, Jun and Yu, Cheng and Zheng, Bo and Huang, Gao},
  journal={arXiv preprint arXiv:2601.15165},
  year={2026}
}
```
