Accelerating Masked Image Generation by Learning Latent Controlled Dynamics • arXiv:2602.23996 • Published 5 days ago
Set Block Decoding is a Language Model Inference Accelerator • arXiv:2509.04185 • Published Sep 4, 2025
VOCABTRIM: Vocabulary Pruning for Efficient Speculative Decoding in LLMs • arXiv:2506.22694 • Published Jun 28, 2025
Direct Alignment of Draft Model for Speculative Decoding with Chat-Fine-Tuned LLMs • arXiv:2403.00858 • Published Feb 29, 2024
CAOTE: KV Caching through Attention Output Error based Token Eviction • arXiv:2504.14051 • Published Apr 18, 2025
KeDiff: Key Similarity-Based KV Cache Eviction for Long-Context LLM Inference in Resource-Constrained Environments • arXiv:2504.15364 • Published Apr 21, 2025
On Speculative Decoding for Multimodal Large Language Models • arXiv:2404.08856 • Published Apr 13, 2024
Recursive Speculative Decoding: Accelerating LLM Inference via Sampling Without Replacement • arXiv:2402.14160 • Published Feb 21, 2024