---
task_categories:
  - image-text-to-text
language:
  - en
tags:
  - chart-understanding
  - vlm
  - multimodal
  - clip
---

# Data for CLIP Training on Chart Task

This repository contains the CLIP training data from our paper "On the Perception Bottleneck of VLMs for Chart Understanding".

Code: https://github.com/hkust-nlp/Vision4Chart

## Data Details

- **Data Source:** Primarily chart-understanding datasets such as ChartQA, FigureQA, and DVQA.
- **Data Overview:** Each example contains an image, one correct caption, and one wrong caption.
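As a minimal sketch, triplets of this shape could be flattened into labeled (image, caption) pairs for CLIP-style contrastive training. The field names `image`, `correct_caption`, and `wrong_caption` below are illustrative assumptions, not the dataset's verified schema:

```python
# Sketch: flatten one (image, correct caption, wrong caption) triplet into
# two labeled records for contrastive training. Field names are assumed,
# not taken from the actual dataset schema.

def to_pairs(example):
    """Turn one triplet into a positive and a negative (image, caption) pair."""
    return [
        {"image": example["image"], "caption": example["correct_caption"], "label": 1},
        {"image": example["image"], "caption": example["wrong_caption"], "label": 0},
    ]

# Hypothetical example record.
sample = {
    "image": "chart_0001.png",
    "correct_caption": "Sales peak in Q3.",
    "wrong_caption": "Sales peak in Q1.",
}

pairs = to_pairs(sample)
```

Each triplet thus yields one positive and one hard-negative pair sharing the same image, which is the signal a contrastive objective needs.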

## Citation

If you find this data useful in your research, please consider citing our paper:

```bibtex
@misc{liu2025perceptionbottleneckvlmschart,
      title={On the Perception Bottleneck of VLMs for Chart Understanding},
      author={Junteng Liu and Weihao Zeng and Xiwen Zhang and Yijun Wang and Zifei Shan and Junxian He},
      year={2025},
      eprint={2503.18435},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2503.18435},
}
```