KVzap: Fast, Adaptive, and Faithful KV Cache Pruning
Abstract
KVzap is a fast, input-adaptive method for compressing key-value caches in transformer models, achieving 2--4× memory reduction with negligible accuracy loss across multiple large language models.
Growing context lengths in transformer-based language models have made the key-value (KV) cache a critical inference bottleneck. While many KV cache pruning methods have been proposed, they have not yet been adopted in major inference engines due to speed--accuracy trade-offs. We introduce KVzap, a fast, input-adaptive approximation of KVzip that works in both prefilling and decoding. On Qwen3-8B, Llama-3.1-8B-Instruct, and Qwen3-32B across long-context and reasoning tasks, KVzap achieves 2--4× KV cache compression with negligible accuracy loss and attains state-of-the-art performance on the KVpress leaderboard. Code and models are available at https://github.com/NVIDIA/kvpress.
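For readers unfamiliar with the technique, the snippet below is a minimal, generic sketch of score-based KV cache pruning: cached key/value tensors are filtered so that only the highest-scoring tokens are retained. The importance scores here are placeholders; this is not KVzap's actual scoring rule (the KVzip approximation), which the abstract does not detail.

```python
import torch

def prune_kv_cache(keys, values, scores, keep_ratio=0.5):
    """Keep only the highest-scoring tokens in a KV cache.

    keys, values: [batch, heads, seq_len, head_dim]
    scores:       [batch, heads, seq_len] importance per cached token
                  (placeholder; KVzap's real importance estimate is not shown here)
    """
    seq_len = keys.shape[2]
    k = max(1, int(seq_len * keep_ratio))
    # Indices of the k most important tokens per head, restored to original order.
    idx = scores.topk(k, dim=-1).indices.sort(dim=-1).values
    idx = idx.unsqueeze(-1).expand(-1, -1, -1, keys.shape[-1])
    pruned_keys = torch.gather(keys, 2, idx)
    pruned_values = torch.gather(values, 2, idx)
    return pruned_keys, pruned_values

# Toy usage: 2x compression of a 1024-token cache with random scores.
B, H, S, D = 1, 8, 1024, 128
keys, values = torch.randn(B, H, S, D), torch.randn(B, H, S, D)
scores = torch.rand(B, H, S)
pk, pv = prune_kv_cache(keys, values, scores, keep_ratio=0.5)
print(pk.shape)  # torch.Size([1, 8, 512, 128])
```

In practice the memory saving comes from the pruned sequence dimension (here 1024 → 512 tokens per head); the quality of the importance scores determines how much accuracy is preserved at a given compression ratio.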
Community
The following similar papers were recommended by the Semantic Scholar API:
- Cache What Lasts: Token Retention for Memory-Bounded KV Cache in LLMs (2025)
- Learning What to Write: Write-Gated KV for Efficient Long-Context Inference (2025)
- Hold Onto That Thought: Assessing KV Cache Compression On Reasoning (2025)
- SWAN: Sparse Winnowed Attention for Reduced Inference Memory via Decompression-Free KV-Cache Compression (2025)
- KV Pareto: Systems-Level Optimization of KV Cache and Model Compression for Long Context Inference (2025)
- Adaptive Layer Selection for Layer-Wise Token Pruning in LLM Inference (2026)
- KVReviver: Reversible KV Cache Compression with Sketch-Based Token Reconstruction (2025)
Models citing this paper: 6
Datasets citing this paper: 0
Spaces citing this paper: 0