Are We Using the Right Benchmark: An Evaluation Framework for Visual Token Compression Methods
Abstract
VTC-Bench is introduced to provide a fair evaluation framework for visual token compression by incorporating a data filtering mechanism to denoise existing benchmarks.
Recent endeavors to accelerate inference in Multimodal Large Language Models (MLLMs) have primarily focused on visual token compression. The effectiveness of these methods is typically assessed by measuring the accuracy drop on established benchmarks, comparing model performance before and after compression. However, these benchmarks were originally designed to assess the perception and reasoning capabilities of MLLMs, not to evaluate compression techniques, so applying them directly to visual token compression introduces a task mismatch. Strikingly, our investigation reveals that simple image downsampling consistently outperforms many advanced compression methods across multiple widely used benchmarks. Through extensive experiments, we make the following observations: (i) current benchmarks are noisy for the visual token compression task; (ii) downsampling can serve as a data filter that gauges the difficulty of samples for the visual token compression task. Motivated by these findings, we introduce VTC-Bench, an evaluation framework that incorporates a data filtering mechanism to denoise existing benchmarks, thereby enabling fairer and more accurate assessment of visual token compression methods. All data and code are available at https://github.com/Chenfei-Liao/VTC-Bench.
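The sketch below illustrates the downsampling-as-data-filter idea described in the abstract: a benchmark sample is treated as noise for the visual token compression task if the model already answers it correctly from a heavily downsampled image, since such a sample does not test whether a compression method preserves fine visual detail. This is only a minimal illustration; the model interface (`mllm.answer`), the downsampling factor, and the filtering rule are assumptions, not the official VTC-Bench implementation.

```python
# Hedged sketch of a downsampling-based data filter for visual token
# compression benchmarks. The `mllm.answer` interface and the factor of 4
# are illustrative assumptions.
from PIL import Image


def downsample(image: Image.Image, factor: int = 4) -> Image.Image:
    """Reduce the image resolution by `factor` along each dimension."""
    w, h = image.size
    return image.resize((max(1, w // factor), max(1, h // factor)), Image.BICUBIC)


def filter_benchmark(samples, mllm, factor: int = 4):
    """Split samples by whether full-resolution visual detail is actually needed.

    Samples the model solves from a heavily downsampled image are flagged as
    noisy for the compression task; only the remaining samples meaningfully
    stress a visual token compression method.
    """
    informative, noisy = [], []
    for sample in samples:
        low_res = downsample(sample["image"], factor)
        pred = mllm.answer(low_res, sample["question"])  # hypothetical API
        if pred == sample["answer"]:
            noisy.append(sample)        # solvable without fine visual detail
        else:
            informative.append(sample)  # requires the full-resolution image
    return informative, noisy
```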
Community
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- LLMC+: Benchmarking Vision-Language Model Compression with a Plug-and-play Toolkit (2025)
- Variation-aware Vision Token Dropping for Faster Large Vision-Language Models (2025)
- Training-Free Token Pruning via Zeroth-Order Gradient Estimation in Vision-Language Models (2025)
- CoViPAL: Layer-wise Contextualized Visual Token Pruning for Large Vision-Language Models (2025)
- Prune2Drive: A Plug-and-Play Framework for Accelerating Vision-Language Models in Autonomous Driving (2025)
- TrimTokenator: Towards Adaptive Visual Token Pruning for Large Multimodal Models (2025)
- Seeing More, Saying More: Lightweight Language Experts are Dynamic Video Token Compressors (2025)