RotateKV: Accurate and Robust 2-Bit KV Cache Quantization for LLMs via Outlier-Aware Adaptive Rotations Paper • 2501.16383 • Published Jan 25, 2025
AKVQ-VL: Attention-Aware KV Cache Adaptive 2-Bit Quantization for Vision-Language Models Paper • 2501.15021 • Published Jan 25, 2025
Unveiling Super Experts in Mixture-of-Experts Large Language Models Paper • 2507.23279 • Published Jul 31, 2025