---
dataset_info:
  features:
  - name: video
    dtype: string
  - name: videoType
    dtype: string
  - name: question
    dtype: string
  - name: options
    sequence: string
  - name: correctAnswer
    dtype: string
  - name: abilityType_L2
    dtype: string
  - name: abilityType_L3
    dtype: string
  - name: question_idx
    dtype: int64
  splits:
  - name: test
    num_bytes: 1135911
    num_examples: 1257
  download_size: 586803
  dataset_size: 1135911
configs:
- config_name: default
  data_files:
  - split: test
    path: data/test-*
task_categories:
- video-text-to-text
---

<p align="center">
  <img src="./figs/LOGO_v3.png" width="30%" height="30%">
</p>

# MMR-V: *Can MLLMs Think with Video?* A Benchmark for Multimodal Deep Reasoning in Videos

<p align="center">
  <a href="https://arxiv.org/abs/2506.04141"> 📝 Paper</a> |
  <a href="https://github.com/GaryStack/MMR-V"> 💻 Code</a> |
  <a href="https://mmr-v.github.io/"> 🏠 Homepage</a>
</p>

## 👀 MMR-V Data Card ("Think with Video")

The sequential structure of videos challenges the ability of multimodal large language models (MLLMs) to locate multi-frame evidence 🕵️ and conduct multimodal reasoning. However, existing video benchmarks mainly focus on understanding tasks, which only require models to match the frames mentioned in the question (the "question frames") and perceive a few adjacent frames. To address this gap, we propose **MMR-V: A Benchmark for Multimodal Deep Reasoning in Videos**. MMR-V consists of **317** videos and **1,257** tasks. Models like o3 and o4-mini have achieved impressive results on image reasoning tasks by leveraging tool use to mine 🕵️ evidence from images. Similarly, tasks in MMR-V require models to perform in-depth reasoning and analysis over visual information drawn from different frames of a video, challenging their ability to 🕵️ **think with video and mine evidence across long-range, multiple frames**.

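Each task record follows the schema listed in the metadata above. As a quick orientation, a single example has roughly the shape sketched below; the field names and types come from the dataset schema, while every value is an invented placeholder, not actual MMR-V content.

```python
# Hypothetical MMR-V record: field names/types follow the dataset schema above,
# but all values below are made-up placeholders for illustration only.
sample = {
    "video": "some_video.mp4",                 # filename of the source video
    "videoType": "short film",                 # coarse category of the video
    "question": "Why does the character return to the house at the end?",
    "options": ["To retrieve the letter", "To meet the neighbor",
                "To hide the key", "To close the window"],  # candidate answers
    "correctAnswer": "To retrieve the letter",  # placeholder; inspect real data for the exact format
    "abilityType_L2": "Implicit Reasoning",     # level-2 ability label
    "abilityType_L3": "Motivation Inference",   # level-3 ability label
    "question_idx": 0,                          # integer task index
}
```
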
## 🎬 MMR-V Task Examples

<p align="center">
  <img src="./figs/data_example_intro_v4_5_16.png" width="80%" height="80%">
</p>

## 📚 Evaluation

1. Download the MMR-V videos:

```shell
huggingface-cli download JokerJan/MMR-VBench --repo-type dataset --local-dir MMR-V --local-dir-use-symlinks False
```

2. Extract videos from the `.tar` files:

```shell
cat videos.tar.part.* > videos.tar
tar -xvf videos.tar
```

3. Load the MMR-V benchmark:

```python
from datasets import load_dataset

samples = load_dataset("JokerJan/MMR-VBench", split="test")
```

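Each task is a multiple-choice question over one video. The snippet below is a minimal evaluation sketch, not the official protocol: `query_model` is a stub standing in for whatever MLLM you are testing, the prompt format and video path are assumptions, and the answer-matching check should be adapted to how `correctAnswer` is actually stored.

```python
from datasets import load_dataset

samples = load_dataset("JokerJan/MMR-VBench", split="test")

def query_model(video_path: str, prompt: str) -> str:
    """Placeholder: replace with an actual MLLM inference call."""
    raise NotImplementedError

def build_prompt(sample: dict) -> str:
    # Assumed prompt format: question followed by lettered options.
    letters = "ABCDEFGHIJ"
    option_lines = "\n".join(
        f"{letters[i]}. {opt}" for i, opt in enumerate(sample["options"])
    )
    return (f"{sample['question']}\n{option_lines}\n"
            "Answer with the letter of the correct option.")

correct = 0
for sample in samples:
    video_path = f"MMR-V/videos/{sample['video']}"  # assumed location after extracting videos.tar
    prediction = query_model(video_path, build_prompt(sample))
    # Adapt this comparison to the actual format of `correctAnswer` (letter vs. option text).
    if prediction.strip() == sample["correctAnswer"].strip():
        correct += 1

print(f"Accuracy: {correct / len(samples):.3f}")
```
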
## 🎯 Experiment Results

<p align="center">
  <img src="./figs/main.png" width="70%" height="70%">
</p>

## Dataset Details

- **Curated by:** MMR-V Team
- **Language(s) (NLP):** English
- **License:** CC-BY 4.0