---
dataset_info:
features:
- name: id
dtype: int32
- name: category
dtype: string
- name: images
sequence: image
- name: question
dtype: string
- name: question_text
dtype: string
- name: answer
dtype: string
- name: difficulty
dtype: int32
- name: metric_info
dtype: string
- name: initial_state
dtype: string
splits:
- name: test
num_bytes: 92043781.1
num_examples: 1290
download_size: 60812104
dataset_size: 92043781.1
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
task_categories:
- image-text-to-text
language:
- en
tags:
- multimodal
- reasoning
- benchmark
- long-chain-reasoning
- math
- logic
---
# MM-HELIX: Boosting Multimodal Long-Chain Reflective Reasoning with Holistic Platform and Adaptive Hybrid Policy Optimization
This repository contains the **MM-HELIX benchmark dataset**, which was introduced in the paper [MM-HELIX: Boosting Multimodal Long-Chain Reflective Reasoning with Holistic Platform and Adaptive Hybrid Policy Optimization](https://huggingface.co/papers/2510.08540).
[**🏠 Homepage**](https://mm-helix.github.io) | [**🏆 Leaderboard**](https://github.com/PhoenixZ810/MM-HELIX/tree/main) | [**🤗 MM-HELIX-7B-Thinking**](https://huggingface.co/PhoenixZ/MM-HELIX-7B-Thinking) | [**🤗 MM-HELIX Benchmark**](https://huggingface.co/datasets/tianhao2k/MM-HELIX) | [**🤗 MM-HELIX-100K**](https://huggingface.co/datasets/mjuicem/MM-HELIX-100K) | [**📄 Paper**](https://arxiv.org/pdf/2510.08540) | [**📖 arXiv**](https://arxiv.org/abs/2510.08540) | [**GitHub**](https://github.com/PhoenixZ810/MM-HELIX/tree/main)
## 🔥 News
- [2025-10-10]: Paper released on arXiv.
- [2025-10-09]: Initial release of the evaluation code in [VLMEvalKit](https://github.com/open-compass/VLMEvalKit/tree/main).
## 🛠️ QuickStart for Evaluating the MM-HELIX Benchmark
We provide our evaluation code in [VLMEvalKit](https://github.com/open-compass/VLMEvalKit/tree/main).
Here is a quick start example:
* Step 1. Install VLMEvalKit
```bash
conda create -n vlmevalkit python=3.10
conda activate vlmevalkit
git clone https://github.com/open-compass/VLMEvalKit.git
cd VLMEvalKit
pip install -e .
```
* Step 2. Evaluate Qwen2.5-VL-7B-Instruct on the MM-HELIX benchmark
```bash
export LMUData=path/to/your/LMUData
python run.py --data MM-HELIX --model Qwen2.5-VL-7B-Instruct --work-dir path/to/your/output --verbose
# For the text-only variant of the benchmark:
python run.py --data MM-HELIX_lang --model Qwen2.5-VL-7B-Instruct --work-dir path/to/your/output --verbose
```
## Introduction
While Multimodal Large Language Models (MLLMs) have shown proficiency in tasks like mathematics and logic, their capacity for **long-chain reflective reasoning**, a key ingredient for solving complex real-world problems, remains underdeveloped. This type of reasoning requires iterative thinking and backtracking, capabilities that current models often lack.
**MM-HELIX** is a comprehensive platform designed to **evaluate** and **enhance** this crucial capability in MLLMs. It consists of:
* **A Challenging Benchmark:** A new benchmark, MM-HELIX, featuring 1,260 instances across 42 difficult tasks that demand reflective reasoning. Our findings show that existing MLLMs struggle significantly on this benchmark.
* **A High-Quality Dataset (MM-HELIX-100K):** To address the performance gap, we created MM-HELIX-100K, a dataset with 100,000 high-quality, reflective reasoning instruction-tuning samples, generated through our innovative **Step-Elicited Response Generation (SERG)** pipeline.
* **An Advanced Training Method:** We introduce **Adaptive Hybrid Policy Optimization (AHPO)**, a novel training strategy that combines offline supervision with online optimization. This method effectively teaches the model to learn from expert data and explore solutions independently, overcoming issues like sparse rewards and catastrophic forgetting that are common in standard Reinforcement Learning. A minimal conceptual sketch of such a hybrid objective follows this list.
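The paper's exact objective is not reproduced here; the sketch below is only a hypothetical illustration of the hybrid idea, assuming an offline SFT term on expert trajectories, a REINFORCE-style online term on the model's own rollouts, and an adaptive weight `alpha` (all names are illustrative, not the paper's API):
```python
import torch

def ahpo_style_loss(policy_logprobs: torch.Tensor,
                    expert_logprobs: torch.Tensor,
                    rewards: torch.Tensor,
                    alpha: float) -> torch.Tensor:
    """Hypothetical blend of offline supervision and online policy gradient.

    `alpha` would be set adaptively: near 1 when online rewards are sparse
    (lean on expert data), decaying toward 0 as the policy succeeds on its own.
    """
    # Offline term: negative log-likelihood of expert trajectories (SFT).
    sft_loss = -expert_logprobs.mean()
    # Online term: REINFORCE with a mean-reward baseline.
    advantages = rewards - rewards.mean()
    pg_loss = -(policy_logprobs * advantages.detach()).mean()
    # Adaptive mix of the two signals.
    return alpha * sft_loss + (1.0 - alpha) * pg_loss
```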
Our model, based on Qwen2.5-VL-7B, shows a **+18.6%** improvement in accuracy on the MM-HELIX benchmark and a **+5.7%** average gain on general math and logic tasks, demonstrating that reflective reasoning can be effectively learned and generalized.
## MM-HELIX Benchmark
*Figure: The 42 tasks in the MM-HELIX benchmark.*
The **MM-HELIX benchmark** is designed to test the limits of multimodal long-chain reflective reasoning in MLLMs.
* **Diverse and Challenging Tasks:** The benchmark includes 1,260 high-quality samples from 42 unique tasks divided into four categories: **algorithms, graphs, puzzles, and games**.
* **Controlled Difficulty:** Tasks are generated procedurally with five levels of difficulty, from Level 1 (very easy) to Level 5 (very hard), allowing for a detailed analysis of model performance at different complexities.
* **Automated and Objective Evaluation:** Our framework includes an **Instance Generator**, a deterministic **Solver**, and an automated **Verifier**. The Verifier validates the correctness of model-generated solutions, enabling objective and scalable evaluation, and also serves as a reward oracle in a reinforcement learning setup (a sketch of this reward-oracle pattern follows this list).
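Because each task ships with a deterministic checker, the reward-oracle role reduces to wrapping that checker as a sparse 0/1 reward. A minimal sketch, where `verify` is a hypothetical task-specific callable rather than the benchmark's actual API:
```python
from typing import Callable

def make_reward_fn(verify: Callable[[str, str], bool]) -> Callable[[str, str], float]:
    """Wrap a deterministic verifier as a sparse RL reward oracle."""
    def reward(initial_state: str, candidate_solution: str) -> float:
        # 1.0 only for a verified-correct solution; 0.0 otherwise.
        return 1.0 if verify(initial_state, candidate_solution) else 0.0
    return reward

# Toy usage with a stand-in verifier that checks an exact answer match.
reward_fn = make_reward_fn(lambda state, cand: cand.strip() == "42")
print(reward_fn("some initial state", "42"))  # 1.0
print(reward_fn("some initial state", "41"))  # 0.0
```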
## MM-HELIX-100K Dataset: High-Quality Multimodal Reflective CoT
To train models for complex reasoning, a large-scale, high-quality dataset is essential. We introduce **MM-HELIX-100K**, a dataset of 100,000 instruction-tuning instances with detailed, reflective reasoning paths.
This dataset was created using our **Step-Elicited Response Generation (SERG)** pipeline, which efficiently generates high-quality Chain-of-Thought (CoT) trajectories.
The SERG pipeline works as follows:
1. A rule-based CoT constructor first generates a skeletal reasoning path.
2. This path is then refined by a powerful language model (Qwen3-235B) to create a more natural, human-like reasoning process that includes reflective steps.
3. Finally, each generated trajectory is validated by our automated verifier to ensure its correctness and quality (a code sketch follows the figure caption below).
*Figure: The Step-Elicited Response Generation (SERG) pipeline.*
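In code, the three stages above amount to a generate-refine-filter loop. A minimal sketch, where `build_skeleton`, `refine_with_llm`, and `verify` are hypothetical stand-ins for the rule-based constructor, the Qwen3-235B refiner, and the automated verifier:
```python
from typing import Callable, Optional

def serg_generate(task_instance: dict,
                  build_skeleton: Callable[[dict], str],
                  refine_with_llm: Callable[[str], str],
                  verify: Callable[[dict, str], bool]) -> Optional[str]:
    skeleton = build_skeleton(task_instance)   # 1. rule-based CoT skeleton
    trajectory = refine_with_llm(skeleton)     # 2. LLM rewrites it with natural, reflective steps
    if verify(task_instance, trajectory):      # 3. keep only verified-correct trajectories
        return trajectory
    return None                                # discard trajectories that fail verification
```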
## Dataset Structure
The MM-HELIX benchmark dataset (`tianhao2k/MM-HELIX`) contains a single `test` split with 1,290 examples.
Each sample in the dataset contains the following features (a loading example follows the list):
* `id`: Unique identifier for each sample.
* `category`: Category of the task (e.g., "algorithms", "graphs", "puzzles", "games").
* `images`: Sequence of image inputs relevant to the task.
* `question`: The main question or prompt.
* `question_text`: Textual representation of the question.
* `answer`: The correct answer to the question.
* `difficulty`: Integer representing the difficulty level (1-5).
* `metric_info`: Information about evaluation metrics.
* `initial_state`: Initial state for the task.
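For a quick look at the data, the `test` split can be loaded with the `datasets` library; the repository id and field names are exactly as listed above:
```python
from datasets import load_dataset

# Load the MM-HELIX benchmark's test split from the Hugging Face Hub.
ds = load_dataset("tianhao2k/MM-HELIX", split="test")
print(ds.num_rows)  # 1290

sample = ds[0]
print(sample["category"], sample["difficulty"])
print(sample["question_text"][:200])
# `images` is a sequence of PIL images accompanying the question.
for img in sample["images"]:
    print(img.size)
```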
For the related instruction-tuning dataset, please refer to the [MM-HELIX-100K](https://huggingface.co/datasets/mjuicem/MM-HELIX-100K) dataset on the Hugging Face Hub.
## Citation
If you find our work useful, please consider citing our paper:
```bibtex
@article{zhao2025mmhelix,
title={MM-HELIX: Boosting Multimodal Long-Chain Reflective Reasoning with Holistic Platform and Adaptive Hybrid Policy Optimization},
author={Zhao, Xiangyu and Lin, Junming and Liang, Tianhao and Zhou, Yifan and Chai, Wenhao and Gu, Yuzhe and Wang, Weiyun and Chen, Kai and Luo, Gen and Zhang, Wenwei and Yan, Junchi and Yang, Hua and Duan, Haodong and Yang, Xue},
journal={arXiv preprint arXiv:2510.08540},
year={2025}
}
```