arxiv:2510.11052

Latent Refinement Decoding: Enhancing Diffusion-Based Language Models by Refining Belief States

Published on Oct 13
· Submitted by Lin Gui on Oct 14
#3 Paper of the day

Abstract

Latent Refinement Decoding (LRD) improves parallel sequence generation by maintaining global consistency and iterative refinement, enhancing accuracy and reducing latency.

AI-generated summary

Autoregressive (AR) models remain the standard for natural language generation but still suffer from high latency due to strictly sequential decoding. Recent diffusion-inspired approaches, such as LlaDA and Dream, mitigate this by generating in parallel, yet they suffer from two core limitations: information loss, as predictive distributions for non-finalized tokens are discarded at each step, and premature commitment, where local decisions are made without sufficient global coordination. We introduce Latent Refinement Decoding (LRD), a two-stage framework with Latent Refinement and a Predictive Feedback Loop. The first stage maintains masked positions as distributional mixtures of predicted tokens and the mask embedding, allowing the model to establish more globally consistent beliefs. The second stage progressively finalizes confident tokens while retaining uncertain ones for iterative feedback. KL-divergence dynamics provide a principled and reliable criterion for convergence and early stopping. Experiments across coding (HumanEval +6.3, MBPP +2.6) and reasoning (GSM8K +2.9, MATH500 +3.8) show that LRD improves accuracy while delivering speedups of up to 10.6x, making it a strong and versatile alternative for parallel sequence generation.
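The two stages described above can be illustrated with a small, hypothetical PyTorch sketch. Everything here is an assumption for illustration: the model is taken to map input embeddings to hidden states, `embed` is a tied embedding table, `lm_head` a linear projection, `mask_emb` a learned mask vector, and the confidence and KL thresholds are placeholders. This is a sketch of the two stages and the KL-based stopping rule, not the authors' implementation.

```python
# Hypothetical sketch of Latent Refinement Decoding (LRD) as described in the abstract.
# All names and thresholds are placeholders, not the paper's code.
import torch
import torch.nn.functional as F

@torch.no_grad()
def latent_refinement_decode(model, embed, lm_head, mask_emb, prompt_emb,
                             gen_len=64, max_iters=32, conf_thresh=0.9, kl_tol=1e-3):
    """Stage 1 (latent refinement): masked positions are soft mixtures of predicted
    token embeddings and the mask embedding, so beliefs about non-finalized tokens
    are carried across iterations instead of being discarded.
    Stage 2 (predictive feedback): confident positions are finalized; uncertain ones
    keep receiving feedback until the KL between successive predictive
    distributions signals convergence."""
    device = prompt_emb.device
    vocab = lm_head.out_features  # assumes lm_head is nn.Linear(d, V)
    # Start every generated position as the pure mask embedding.
    gen_emb = mask_emb.expand(gen_len, -1).clone()
    finalized = torch.zeros(gen_len, dtype=torch.bool, device=device)
    tokens = torch.full((gen_len,), -1, dtype=torch.long, device=device)
    prev_probs = torch.full((gen_len, vocab), 1.0 / vocab, device=device)

    for _ in range(max_iters):
        # Assumes the model maps input embeddings (1, L, d) to hidden states (1, L, d).
        hidden = model(torch.cat([prompt_emb, gen_emb], dim=0).unsqueeze(0))
        logits = lm_head(hidden[0, prompt_emb.size(0):])   # (gen_len, V)
        probs = logits.softmax(dim=-1)

        # Early-stopping signal: KL divergence between successive belief states.
        kl = F.kl_div(prev_probs.clamp_min(1e-9).log(), probs, reduction="batchmean")
        prev_probs = probs

        # Stage 2: finalize positions whose top prediction is confident enough.
        conf, pred = probs.max(dim=-1)
        newly_final = (conf > conf_thresh) & ~finalized
        tokens[newly_final] = pred[newly_final]
        finalized |= newly_final

        # Stage 1: unfinalized positions become a mixture of the expected token
        # embedding under the current belief and the mask embedding.
        soft_tok_emb = probs @ embed.weight                 # (gen_len, d)
        mix = conf.unsqueeze(-1)                            # confidence-weighted interpolation
        refined = mix * soft_tok_emb + (1 - mix) * mask_emb
        gen_emb = torch.where(finalized.unsqueeze(-1), embed(tokens.clamp_min(0)), refined)

        if kl.item() < kl_tol and bool(finalized.all()):
            break

    # Fill any remaining positions with their current argmax prediction.
    tokens[~finalized] = probs.argmax(dim=-1)[~finalized]
    return tokens
```

The confidence-weighted interpolation used here is one plausible way to mix predicted-token embeddings with the mask embedding; the paper's exact mixture rule and finalization schedule may differ.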

Community

Paper author / Paper submitter

Latent Refinement Decoding (LRD) is a two-stage parallel generation framework that mitigates information loss and premature commitment in diffusion-inspired models by maintaining distributional mixtures and iterative feedback, achieving higher accuracy and up to 10.6× faster decoding across coding and reasoning tasks.

