---
dataset_info:
  features:
  - name: video
    dtype: string
  - name: videoType
    dtype: string
  - name: question
    dtype: string
  - name: options
    sequence: string
  - name: correctAnswer
    dtype: string
  - name: abilityType_L2
    dtype: string
  - name: abilityType_L3
    dtype: string
  - name: question_idx
    dtype: int64
  splits:
  - name: test
    num_bytes: 1135911
    num_examples: 1257
  download_size: 586803
  dataset_size: 1135911
configs:
- config_name: default
  data_files:
  - split: test
    path: data/test-*
task_categories:
- video-text-to-text
---
<p align="center">
<img src="./figs/LOGO_v3.png" width="30%" height="30%">
</p>
# MMR-V: *Can MLLMs Think with Video?* A Benchmark for Multimodal Deep Reasoning in Videos
<p align="center">
<a href="https://arxiv.org/abs/2506.04141"> 📝 Paper</a></a> |
<a href="https://github.com/GaryStack/MMR-V"> 💻 Code</a> |
<a href="https://mmr-v.github.io/"> 🏠 Homepage</a>
</p>
## 👀 MMR-V Data Card ("Think with Video")
The sequential structure of videos challenges the ability of multimodal large language models (MLLMs) to locate multi-frame evidence 🕵️ and conduct multimodal reasoning. However, existing video benchmarks mainly focus on understanding tasks, which only require models to match the frames mentioned in the question (referred to as "question frames") and perceive a few adjacent frames. To address this gap, we propose **MMR-V: A Benchmark for Multimodal Deep Reasoning in Videos**. MMR-V consists of **317** videos and **1,257** tasks. Models like o3 and o4-mini have achieved impressive results on image reasoning tasks by leveraging tool use to enable 🕵️ evidence mining on images. Similarly, tasks in MMR-V require models to perform in-depth reasoning and analysis over visual information from different frames of a video, challenging their ability to 🕵️ **think with video and mine evidence across long-range, multi-frame contexts**.
## 🎬 MMR-V Task Examples
<p align="center">
<img src="./figs/data_example_intro_v4_5_16.png" width="80%" height="80%">
</p>
## 📚 Evaluation
1. Load the MMR-V Videos
```shell
huggingface-cli download JokerJan/MMR-VBench --repo-type dataset --local-dir MMR-V --local-dir-use-symlinks False
```
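If you prefer to stay in Python, the same download can be done with `huggingface_hub.snapshot_download`. A minimal sketch; the repo id and local directory simply mirror the CLI command above:

```python
from huggingface_hub import snapshot_download

# Download the dataset repository (parquet files + video .tar parts) into ./MMR-V.
snapshot_download(
    repo_id="JokerJan/MMR-VBench",
    repo_type="dataset",
    local_dir="MMR-V",
)
```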
2. Extract videos from the `.tar` files:
```shell
cat videos.tar.part.* > videos.tar
tar -xvf videos.tar
```
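Equivalently, the split archive can be reassembled and extracted from Python. This is a sketch that assumes the `.tar` parts sit in the `MMR-V` download directory used above:

```python
import glob
import shutil
import tarfile

# Concatenate the split archive parts in order, then extract the resulting tar.
parts = sorted(glob.glob("MMR-V/videos.tar.part.*"))
with open("MMR-V/videos.tar", "wb") as out:
    for part in parts:
        with open(part, "rb") as f:
            shutil.copyfileobj(f, out)

with tarfile.open("MMR-V/videos.tar") as tar:
    tar.extractall("MMR-V")
```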
3. Load MMR-V Benchmark:
```python
from datasets import load_dataset

samples = load_dataset("JokerJan/MMR-VBench", split='test')
```
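Each example follows the schema declared in the card header (`video`, `videoType`, `question`, `options`, `correctAnswer`, `abilityType_L2`, `abilityType_L3`, `question_idx`). The sketch below shows one way a sample might be turned into a multiple-choice prompt; the option-letter convention and how you pass the video file to your MLLM are assumptions, not part of any official evaluation code:

```python
from datasets import load_dataset

samples = load_dataset("JokerJan/MMR-VBench", split="test")
sample = samples[0]

# Fields defined in the dataset card: video, videoType, question, options,
# correctAnswer, abilityType_L2, abilityType_L3, question_idx.
letters = "ABCDEFGH"
options_block = "\n".join(
    f"{letters[i]}. {opt}" for i, opt in enumerate(sample["options"])
)
prompt = f"{sample['question']}\n{options_block}\nAnswer with the option letter."

print(sample["video"])          # video file to feed to the MLLM (path inside the extracted archive)
print(prompt)                   # text prompt for the model
print(sample["correctAnswer"])  # gold answer (string) used for scoring
```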
## 🎯 Experiment Results
<p align="center">
<img src="./figs/main.png" width="70%" height="70%">
</p>
## Dataset Details
- **Curated by:** MMR-V Team
- **Language(s) (NLP):** English
- **License:** CC-BY 4.0