---
license: apache-2.0
configs:
- config_name: object_category
  data_files:
  - split: test
    path:
    - annotations/object_category.jsonl
- config_name: object_number
  data_files:
  - split: test
    path:
    - annotations/object_number.jsonl
- config_name: object_color
  data_files:
  - split: test
    path:
    - annotations/object_color.jsonl
- config_name: spatial_relation
  data_files:
  - split: test
    path:
    - annotations/spatial_relation.jsonl
- config_name: scene
  data_files:
  - split: test
    path:
    - annotations/scene.jsonl
- config_name: camera_angle
  data_files:
  - split: test
    path:
    - annotations/camera_angle.jsonl
- config_name: OCR
  data_files:
  - split: test
    path:
    - annotations/OCR.jsonl
- config_name: style
  data_files:
  - split: test
    path:
    - annotations/style.jsonl
- config_name: character_identification
  data_files:
  - split: test
    path:
    - annotations/character_identification.jsonl
- config_name: dynamic_object_number
  data_files:
  - split: test
    path:
    - annotations/dynamic_object_number.jsonl
- config_name: action
  data_files:
  - split: test
    path:
    - annotations/action.jsonl
- config_name: camera_movement
  data_files:
  - split: test
    path:
    - annotations/camera_movement.jsonl
- config_name: event
  data_files:
  - split: test
    path:
    - annotations/event.jsonl
---

# Dataset Card for CAPability

![VideoQA](https://img.shields.io/badge/Task-VideoQA-red)  ![Multi-Modal](https://img.shields.io/badge/Task-Multi--Modal-red)  ![CAPability](https://img.shields.io/badge/Dataset-CAPability-blue)   ![License](https://img.shields.io/badge/License-Apache%202.0-yellow) 

CAPability: a visual caption benchmark.

[[🍎 Project Page](https://capability-bench.github.io/)] [[📖 ArXiv Paper](https://arxiv.org/pdf/2502.14914)] [[🧑‍💻 Github Repo](https://github.com/ali-vilab/CAPability)] [[🏆 Leaderboard](https://capability-bench.github.io/#leaderboard)]



## Dataset Details


Visual captioning benchmarks have become outdated with the emergence of modern MLLMs, as the brief ground-truth sentences and traditional metrics fail to assess detailed captions effectively. While recent benchmarks attempt to address this by focusing on keyword extraction or object-centric evaluation, they remain limited to vague-view or object-view analyses and incomplete visual element coverage. We introduce CAPability, a comprehensive multi-view benchmark for evaluating visual captioning across 12 dimensions spanning six critical views. We curate nearly 11K human-annotated images and videos with visual element annotations to evaluate the generated captions. CAPability stably assesses both the correctness and thoroughness of captions using the F1-score. By converting annotations to QA pairs, we further introduce a heuristic metric, *know but cannot tell* ($K\bar{T}$), which reveals a significant gap between QA and captioning performance. Our work provides the first holistic analysis of MLLMs' captioning abilities, identifying their strengths and weaknesses across dimensions and guiding future research toward enhancing specific captioning capabilities.
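The scoring idea above can be made concrete with a short sketch: a caption is judged per dimension for correctness (how many of its statements agree with the annotations) and thoroughness (how many annotated elements it covers), the two are combined with F1, and $K\bar{T}$ measures how often a model answers an element correctly as a QA pair yet omits it from its caption. The function signatures, counting scheme, and $K\bar{T}$ normalization below are illustrative assumptions, not the official evaluation code (see the GitHub repo for that).

```python
# Illustrative sketch only: field names and the KT-bar normalization are assumptions,
# not the official CAPability evaluation code.

def f1_score(n_stated: int, n_correct: int, n_annotated: int, n_recalled: int) -> float:
    """Combine correctness (precision over caption statements) and
    thoroughness (recall over annotated elements) into an F1 score."""
    precision = n_correct / n_stated if n_stated else 0.0
    recall = n_recalled / n_annotated if n_annotated else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0


def know_but_cannot_tell(qa_correct: list[bool], caption_covered: list[bool]) -> float:
    """Fraction of annotated elements the model answers correctly as QA
    but fails to mention in its caption (the 'know but cannot tell' gap)."""
    known = sum(qa_correct)
    if known == 0:
        return 0.0
    omitted = sum(q and not c for q, c in zip(qa_correct, caption_covered))
    return omitted / known


# Example: 10 annotated objects; the caption covers 7 of them and makes
# 9 object statements, of which 8 are correct.
print(f1_score(n_stated=9, n_correct=8, n_annotated=10, n_recalled=7))   # ~0.78
# The model answers 9/10 QA pairs correctly but mentions only 7 in the caption.
print(know_but_cannot_tell([True] * 9 + [False], [True] * 7 + [False] * 3))  # ~0.22
```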


## Uses


### Direct Use

You can download the `data` folder, unzip all `zip` files, and place the resulting `data` directory at the root of the [GitHub Repo](https://github.com/ali-vilab/CAPability). Then follow the instructions there to run inference and evaluation.
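If you only need the annotations (for example, to inspect a single dimension), each config listed in this card's YAML header can also be loaded with the `datasets` library. The repo id below is a placeholder, not a confirmed path; substitute the actual id of this dataset on the Hugging Face Hub.

```python
from datasets import load_dataset

# Placeholder: replace with this dataset's actual Hub repo id.
REPO_ID = "<hub-namespace>/CAPability"

# Each config name matches one annotation dimension (see the YAML header above).
ocr = load_dataset(REPO_ID, name="OCR", split="test")

print(ocr)     # dataset schema and number of rows
print(ocr[0])  # one annotation record from annotations/OCR.jsonl
```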


### Use with lmms-eval

CAPability is also supported in [lmms-eval](https://github.com/EvolvingLMMs-Lab/lmms-eval/pull/656), so you can run inference and evaluation through that framework for convenience.

## Copyright

CAPability may be used for academic research only; commercial use in any form is prohibited.
The copyright of all images and videos belongs to the media owners.
If there is any infringement in CAPability, please email liuzhihang@mail.ustc.edu.cn and we will remove it immediately.
Without prior approval, you may not distribute, publish, copy, disseminate, or modify CAPability in whole or in part.
You must strictly comply with the above restrictions.

## Citation

**BibTeX:**

```bibtex
@article{liu2025good,
  title={What Is a Good Caption? A Comprehensive Visual Caption Benchmark for Evaluating Both Correctness and Thoroughness},
  author={Liu, Zhihang and Xie, Chen-Wei and Wen, Bin and Yu, Feiwu and Chen, Jixuan and Zhang, Boqiang and Yang, Nianzu and Li, Pandeng and Li, Yinglu and Gao, Zuan and Zheng, Yun and Xie, Hongtao},
  journal={arXiv preprint arXiv:2502.14914},
  year={2025}
}
```