Attention Is All You Need for KV Cache in Diffusion LLMs
Abstract
Elastic-Cache optimizes key-value cache management in diffusion large language models to reduce decoding latency without sacrificing prediction accuracy.
This work studies how to adaptively recompute key-value (KV) caches for diffusion large language models (DLMs) to maximize prediction accuracy while minimizing decoding latency. Prior methods' decoders recompute QKV for all tokens at every denoising step and layer, despite KV states changing little across most steps, especially in shallow layers, leading to substantial redundancy. We make three observations: (1) distant MASK tokens primarily act as a length bias and can be cached block-wise beyond the active prediction window; (2) KV dynamics increase with depth, suggesting that selective refresh starting from deeper layers is sufficient; and (3) the most-attended token exhibits the smallest KV drift, providing a conservative lower bound on cache change for other tokens. Building on these, we propose Elastic-Cache, a training-free, architecture-agnostic strategy that jointly decides when to refresh (via an attention-aware drift test on the most-attended token) and where to refresh (via a depth-aware schedule that recomputes from a chosen layer onward while reusing shallow-layer caches and off-window MASK caches). Unlike fixed-period schemes, Elastic-Cache performs adaptive, layer-aware cache updates for diffusion LLMs, reducing redundant computation and accelerating decoding with negligible loss in generation quality. Experiments on LLaDA-Instruct, LLaDA-1.5, and LLaDA-V across mathematical reasoning and code generation tasks demonstrate consistent speedups: 8.7× on GSM8K (256 tokens), 45.1× on longer sequences, and 4.8× on HumanEval, while consistently maintaining higher accuracy than the baseline. Our method achieves significantly higher throughput (6.8× on GSM8K) than existing confidence-based approaches while preserving generation quality, enabling practical deployment of diffusion LLMs.
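To make the two decisions in the abstract concrete, here is a minimal Python/PyTorch sketch of an attention-aware drift test on the most-attended token ("when" to refresh) and a depth-selective refresh plan ("where" to refresh). The function names, the threshold `tau`, and the fixed `start_layer` are illustrative assumptions for this sketch, not the authors' released implementation, which selects the refresh point adaptively.

```python
import torch

def most_attended_drift(attn_weights: torch.Tensor,
                        cached_keys: torch.Tensor,
                        fresh_keys: torch.Tensor) -> torch.Tensor:
    """Drift of the most-attended token's key between its cached and freshly
    recomputed state; per the paper's observation (3), this is a conservative
    lower bound on how much the rest of the cache has changed."""
    # attn_weights: (num_queries, seq_len); cached_keys / fresh_keys: (seq_len, head_dim)
    star = attn_weights.mean(dim=0).argmax()
    return 1.0 - torch.cosine_similarity(cached_keys[star], fresh_keys[star], dim=-1)

def plan_refresh(drift: torch.Tensor, num_layers: int,
                 tau: float = 0.02, start_layer: int = 16) -> list:
    """Decide which layers to recompute at this denoising step.
    tau and start_layer are placeholder constants for illustration only."""
    if drift.item() <= tau:
        return []                                   # reuse all cached KV states
    return list(range(start_layer, num_layers))     # recompute deep layers, keep shallow caches

# Toy usage with random tensors standing in for real attention and key states.
attn = torch.rand(4, 128)                           # queries in the active prediction window
k_cached, k_fresh = torch.randn(128, 64), torch.randn(128, 64)
layers_to_refresh = plan_refresh(most_attended_drift(attn, k_cached, k_fresh), num_layers=32)
```

The drift test is cheap because only one token's key needs to be recomputed for the comparison; a refresh is triggered only when even the most stable (most-attended) token has drifted past the threshold.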
Community
Attention Is All You Need for KV Cache in Diffusion LLMs
Making Diffusion LLMs Practical! We introduce Elastic-Cache, the first adaptive, layer-aware KV caching strategy for diffusion language models, achieving massive speedups without sacrificing generation quality.
Intelligent Cache Updates: Adaptively decides when to refresh (attention-aware drift detection) and where to refresh (depth-selective updates), eliminating redundant computation across denoising steps.
Exceptional Speedups: Achieves 8.7× faster inference on GSM8K, 45.1× on longer sequences, and 4.8× on HumanEval, while maintaining or even improving accuracy compared to baselines.
Training-Free & Universal: Works out-of-the-box with any diffusion LLM architecture. No retraining needed, just plug and play!
Paper: https://arxiv.org/abs/2510.14973
Project page: https://vila-lab.github.io/elastic-cache-webpage/
This is an automated message from the Librarian Bot. The following similar papers were recommended by the Semantic Scholar API:
- d$^2$Cache: Accelerating Diffusion-Based LLMs via Dual Adaptive Caching (2025)
- AdaBlock-dLLM: Semantic-Aware Diffusion LLM Inference via Adaptive Block Size (2025)
- Fast-dLLM v2: Efficient Block-Diffusion LLM (2025)
- DELTA: Dynamic Layer-Aware Token Attention for Efficient Long-Context Reasoning (2025)
- PagedEviction: Structured Block-wise KV Cache Pruning for Efficient Large Language Model Inference (2025)
- Sequential Diffusion Language Models (2025)
- Expected Attention: KV Cache Compression by Estimating Attention from Future Queries Distribution (2025)