---
license: apache-2.0
task_categories:
- text-retrieval
- text-classification
- token-classification
language:
- en
tags:
- multimodal
pretty_name: MMEB-V2
size_categories:
- 1M<n<10M
viewer: false
---

# MMEB-V2 (Massive Multimodal Embedding Benchmark)

[**Website**](https://tiger-ai-lab.github.io/VLM2Vec/) | [**GitHub**](https://github.com/TIGER-AI-Lab/VLM2Vec) | [**🏆 Leaderboard**](https://huggingface.co/spaces/TIGER-Lab/MMEB) | [**📖 MMEB-V2/VLM2Vec-V2 Paper**](https://arxiv.org/abs/2507.04590) | [**📖 MMEB-V1/VLM2Vec-V1 Paper**](https://arxiv.org/abs/2410.05160)


## Introduction

Building on our original [**MMEB**](https://arxiv.org/abs/2410.05160), **MMEB-V2** expands the evaluation scope with five new tasks: four video-based tasks (Video Retrieval, Moment Retrieval, Video Classification, and Video Question Answering) and one visual-document task, Visual Document Retrieval. This comprehensive suite enables robust evaluation of multimodal embedding models across static, temporal, and structured visual data settings.

**This Hugging Face repository contains the image and video frames used in MMEB-V2, which need to be downloaded in advance.**
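As a minimal download sketch (the repo id ``TIGER-Lab/MMEB-V2`` below is an assumption; substitute this repository's actual id if it differs), the archives can be fetched with ``huggingface_hub``:

```python
# Sketch: download the MMEB-V2 frame/image archives with huggingface_hub.
# The repo_id below is an assumption; replace it with this repository's actual id.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="TIGER-Lab/MMEB-V2",       # assumed dataset repo id
    repo_type="dataset",
    local_dir="MMEB_archives",         # where the .tar.gz files will land
    allow_patterns=["*.tar.gz"],       # fetch only the archives
)
print(f"Archives downloaded to: {local_dir}")
```

Cloning with ``Git LFS`` or fetching individual archives with ``wget`` works just as well.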


## Guide to All MMEB-V2 Data
**Please review this section carefully for all MMEB-V2-related data.**

- **Image/Video Frames** – Available in this repository.
- **Test Files** – Loaded automatically from Hugging Face during evaluation; a comprehensive list of HF paths can be found [here](https://github.com/TIGER-AI-Lab/VLM2Vec/blob/main/src/data/dataset_hf_path.py). A minimal loading sketch follows this list.
- **Raw Video Files** – In most cases, the video frames are all you need for MMEB evaluation. However, we also provide the raw video files [here](https://huggingface.co/datasets/TIGER-Lab/MMEB_Raw_Video) in case they are needed for specific use cases. Since these files are very large, please download and use them only if necessary.
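For example, a single task's test annotations can be pulled directly from the Hub. The sketch below uses a placeholder dataset path; take the real per-task paths from the ``dataset_hf_path.py`` file linked above.

```python
# Sketch: load the evaluation annotations for one MMEB-V2 task from the Hub.
# "TIGER-Lab/MMEB-V2-example-task" is a placeholder, not a real dataset id;
# use a path from dataset_hf_path.py (linked above).
from datasets import load_dataset

test_data = load_dataset("TIGER-Lab/MMEB-V2-example-task", split="test")
print(test_data)      # schema of this task's test split
print(test_data[0])   # one evaluation record, which references the locally downloaded frames/images
```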


## 🚀 What's New
- **\[2025.07\]** Released the [tech report](https://arxiv.org/abs/2507.04590).
- **\[2025.05\]** Initial release of MMEB-V2/VLM2Vec-V2.


## Dataset Overview

We present an overview of the MMEB-V2 dataset below:
<img width="900" alt="abs" src="overview.png">


## Dataset Structure

The directory structure of this Hugging Face repository is shown below.
For video tasks, we provide sampled frames; for image tasks, we provide the raw images.
The files for each meta-task are zipped together, resulting in six archives. For example, ``video_cls.tar.gz`` contains the sampled frames for the video classification task.

```
→ video-tasks/
├── frames/
│   ├── video_cls.tar.gz
│   ├── video_qa.tar.gz
│   ├── video_ret.tar.gz
│   └── video_mret.tar.gz

→ image-tasks/
├── mmeb_v1.tar.gz
└── visdoc.tar.gz
```

After downloading and unzipping these files locally, you can organize them as shown below. (You may choose to use ``Git LFS`` or ``wget`` for downloading; a minimal extraction sketch follows the tree.)
Then, simply specify the correct local path in the configuration file used by your code.

```
→ MMEB
├── video-tasks/
│   └── frames/
│       ├── video_cls/
│       │   ├── UCF101/
│       │   │   └── video_1/              # video ID
│       │   │       ├── frame1.png        # frame from video_1
│       │   │       ├── frame2.png
│       │   │       └── ...
│       │   ├── HMDB51/
│       │   ├── Breakfast/
│       │   └── ...                       # other datasets from the video classification category
│       ├── video_qa/
│       │   └── ...                       # video QA datasets
│       ├── video_ret/
│       │   └── ...                       # video retrieval datasets
│       └── video_mret/
│           └── ...                       # moment retrieval datasets
├── image-tasks/
│   ├── mmeb_v1/
│   │   ├── OK-VQA/
│   │   │   ├── image1.png
│   │   │   ├── image2.png
│   │   │   └── ...
│   │   ├── ImageNet-1K/
│   │   └── ...                           # other datasets from the MMEB-V1 category
│   └── visdoc/
│       └── ...                           # visual document retrieval datasets
```
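As a minimal extraction sketch (assuming the archives were downloaded into a local ``MMEB_archives`` folder, as in the download example above, and that each archive contains its task folder at the top level), the layout above can be produced with Python's standard library:

```python
# Sketch: unpack the six MMEB-V2 archives into the layout shown above.
# Paths are assumptions carried over from the download example; adjust to your setup.
import tarfile
from pathlib import Path

archive_dir = Path("MMEB_archives")   # where the .tar.gz files were downloaded
target_root = Path("MMEB")            # root of the unpacked layout

# Map each destination sub-directory to the archives extracted into it.
destinations = {
    "video-tasks/frames": ["video_cls.tar.gz", "video_qa.tar.gz",
                           "video_ret.tar.gz", "video_mret.tar.gz"],
    "image-tasks": ["mmeb_v1.tar.gz", "visdoc.tar.gz"],
}

for subdir, archives in destinations.items():
    out_dir = target_root / subdir
    out_dir.mkdir(parents=True, exist_ok=True)
    for name in archives:
        archive_path = next(archive_dir.rglob(name))  # locate the archive wherever it was saved
        with tarfile.open(archive_path, "r:gz") as tar:
            tar.extractall(out_dir)                   # assumes the archive contains e.g. video_cls/ at its top level
        print(f"Extracted {name} -> {out_dir}")
```

Afterwards, point your evaluation configuration at ``MMEB/`` (or wherever ``target_root`` lives).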