Boundary-Guided Policy Optimization for Memory-efficient RL of Diffusion Large Language Models
Abstract
Boundary-Guided Policy Optimization (BGPO) improves reinforcement learning for diffusion large language models by approximating likelihoods with a memory-efficient lower bound, enhancing performance on math problem solving, code generation, and planning tasks.
A key challenge in applying reinforcement learning (RL) to diffusion large language models (dLLMs) lies in the intractability of their likelihood functions, which are essential to the RL objective and must therefore be approximated at each training step. Existing methods approximate the log-likelihoods by their evidence lower bounds (ELBOs) via customized Monte Carlo (MC) sampling, but the forward computation graphs of all MC samples must be retained to compute gradients of the non-linear terms in the RL objective, incurring significant memory overhead. This constraint restricts feasible sample sizes, leading to imprecise likelihood approximations and ultimately distorting the RL objective. To overcome this limitation, we propose Boundary-Guided Policy Optimization (BGPO), a memory-efficient RL algorithm that maximizes a specially constructed lower bound of the ELBO-based objective. This lower bound is carefully designed to satisfy two key properties: (1) Linearity: it is formulated as a linear sum in which each term depends on only a single MC sample, enabling gradient accumulation across samples and ensuring constant memory usage; (2) Equivalence: both the value and the gradient of this lower bound equal those of the ELBO-based objective in on-policy training, making it an effective approximation of the original RL objective as well. These properties allow BGPO to use a large MC sample size, yielding more accurate likelihood approximations and better estimation of the RL objective, which in turn leads to improved performance. Experiments show that BGPO significantly outperforms previous RL algorithms for dLLMs on math problem solving, code generation, and planning tasks.
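To make the memory argument concrete, below is a minimal, hypothetical PyTorch sketch, not the paper's implementation: the toy policy, the synthetic per-sample ELBO terms, and the particular linear term are all illustrative assumptions. It only contrasts (a) a non-linear function of the MC-averaged ELBO, which keeps every sample's computation graph alive until a single backward pass, with (b) a linear-sum objective of the structural form BGPO describes, where each term depends on one MC sample and can be backpropagated and freed immediately, so peak memory stays constant as the sample count grows.

```python
# Hypothetical sketch only: a toy linear "policy" and synthetic per-sample
# ELBO terms, used to contrast memory behaviour; it does NOT implement BGPO's
# actual lower bound.
import torch
import torch.nn as nn

policy = nn.Linear(16, 1)                 # toy stand-in for a dLLM policy
samples = torch.randn(64, 16)             # 64 MC samples (e.g. masked views / timesteps)
logp_old = torch.tensor(-1.0)             # old-policy log-likelihood (fixed constant here)
advantage = torch.tensor(0.5)

def per_sample_term(i):
    """Toy per-sample ELBO term; in the real setting, one masked-denoising log-likelihood."""
    return policy(samples[i]).squeeze()

# (a) Non-linear ELBO-based objective: exp(mean_i term_i - logp_old) couples all
# samples, so all 64 forward graphs must stay in memory until this one backward().
elbo = torch.stack([per_sample_term(i) for i in range(64)]).mean()
loss = -torch.exp(elbo - logp_old) * advantage
loss.backward()                           # peak memory grows with the MC sample size

# (b) Linear-sum objective (the structural property BGPO relies on): each term
# depends on a single sample, so its graph is freed right after its own backward(),
# gradients accumulate in .grad, and peak memory is independent of the sample size.
policy.zero_grad()
for i in range(64):
    term = -(per_sample_term(i) - logp_old) * advantage / 64   # illustrative linear term, not BGPO's bound
    term.backward()
```

In on-policy training, the equivalence property stated in the abstract is what makes the cheap accumulation in (b) come at no cost in gradient fidelity; the sketch above illustrates only the memory/linearity structure and does not verify that property.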
Community
The following papers were recommended by the Semantic Scholar API
- Inpainting-Guided Policy Optimization for Diffusion Large Language Models (2025)
- Improving Reasoning for Diffusion Language Models via Group Diffusion Policy Optimization (2025)
- Enhancing Reasoning for Diffusion LLMs via Distribution Matching Policy Optimization (2025)
- RFG: Test-Time Scaling for Diffusion Large Language Model Reasoning with Reward-Free Guidance (2025)
- Principled and Tractable RL for Reasoning with Diffusion Language Models (2025)
- DiFFPO: Training Diffusion LLMs to Reason Fast and Furious via Reinforcement Learning (2025)
- MDPO: Overcoming the Training-Inference Divide of Masked Diffusion Language Models (2025)