Papers
arxiv:2603.28407

MiroEval: Benchmarking Multimodal Deep Research Agents in Process and Outcome

Published on Mar 30 · Submitted by Fangda Ye on Apr 2
#2 Paper of the day

Abstract

MiroEval addresses limitations of existing deep research system benchmarks by introducing a comprehensive evaluation framework that assesses adaptive synthesis, agentic factuality verification, and process-centric auditing across real-user tasks.

AI-generated summary

Recent progress in deep research systems has been impressive, but evaluation still lags behind real user needs. Existing benchmarks predominantly assess final reports using fixed rubrics, failing to evaluate the underlying research process. Most also offer limited multimodal coverage, rely on synthetic tasks that do not reflect real-world query complexity, and cannot be refreshed as knowledge evolves. To address these gaps, we introduce MiroEval, a benchmark and evaluation framework for deep research systems. The benchmark comprises 100 tasks (70 text-only, 30 multimodal), all grounded in real user needs and constructed via a dual-path pipeline that supports periodic updates, enabling a live and evolving setting. The proposed evaluation suite assesses deep research systems along three complementary dimensions: adaptive synthesis quality evaluation with task-specific rubrics, agentic factuality verification via active retrieval and reasoning over both web sources and multimodal attachments, and process-centric evaluation that audits how the system searches, reasons, and refines throughout its investigation. Evaluation across 13 systems yields three principal findings: the three evaluation dimensions capture complementary aspects of system capability, with each revealing distinct strengths and weaknesses across systems; process quality serves as a reliable predictor of overall outcome while revealing weaknesses invisible to output-level metrics; and multimodal tasks pose substantially greater challenges, with most systems declining by 3 to 10 points. The MiroThinker series achieves the most balanced performance, with MiroThinker-H1 ranking the highest overall in both settings. Human verification and robustness results confirm the reliability of the benchmark and evaluation framework. MiroEval provides a holistic diagnostic tool for the next generation of deep research agents.
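
For concreteness, here is a minimal sketch of how per-task scores along the three dimensions could be carried and aggregated. The `DimensionScores` type, the 0-100 scale, and the equal weighting are illustrative assumptions, not the paper's actual scoring formula:

```python
from dataclasses import dataclass

@dataclass
class DimensionScores:
    """Hypothetical per-task scores along MiroEval's three dimensions (0-100 scale assumed)."""
    synthesis: float   # adaptive synthesis quality against task-specific rubrics
    factuality: float  # agentic factuality verification over web sources and attachments
    process: float     # process-centric audit of search, reasoning, and refinement

def aggregate(tasks: list[DimensionScores],
              weights: tuple[float, float, float] = (1/3, 1/3, 1/3)) -> float:
    """Average each dimension over tasks, then take a weighted mean.

    Equal weighting is an assumption for illustration; the paper reports the
    dimensions separately rather than prescribing a single combining formula.
    """
    n = len(tasks)
    means = (
        sum(t.synthesis for t in tasks) / n,
        sum(t.factuality for t in tasks) / n,
        sum(t.process for t in tasks) / n,
    )
    return sum(w * m for w, m in zip(weights, means))
```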

Community

Paper submitter

We introduce MiroEval, a benchmark and evaluation framework for deep research systems with 100 tasks (70 text-only, 30 multimodal). Unlike existing benchmarks that only assess final reports, MiroEval evaluates systems along three dimensions: adaptive synthesis quality, agentic factuality verification, and process-centric evaluation.
We benchmark 13 leading systems including OpenAI Deep Research, Gemini, Claude, Grok, Manus, Kimi, and others. Key findings: process quality reliably predicts overall outcome (r=0.88; a toy version of this correlation check is sketched after the links below); multimodal tasks cause 3–10 point drops; and synthesis quality vs. factuality rankings diverge substantially across systems.
๐Ÿ“ Blog: https://miroeval-ai.github.io/blog/
๐ŸŒ Project: https://miroeval-ai.github.io/website/
๐Ÿ™ GitHub: https://github.com/MiroMindAI/MiroEval

The core of MiroEval that grabs me is the attachment-aware, rubric-driven evaluation plus a process audit, which tries to assess how researchers actually work, not just what they output. The four-way labeling of factual anchors as RIGHT, WRONG, CONFLICT, or UNKNOWN is clever, but I'm curious how they calibrate it across judges and how sensitive it is to tricky chart interpretations (one possible agreement check is sketched below). An ablation that removes attachments, or that uses only textual anchors, would reveal how much the multimodal verification actually contributes to the final score. I also worry about knowledge drift in the live setting, and about whether the framework could be gamed by querying or backfilling sources to look good. Btw, the arxivlens breakdown helped me parse the method details; it's a solid walkthrough of how the live, multi-layer eval hangs together: https://arxivlens.com/PaperView/Details/miroeval-benchmarking-multimodal-deep-research-agents-in-process-and-outcome-7188-42258562
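
As a concrete version of the calibration question raised above, here is a minimal sketch of an inter-judge agreement check over the four anchor labels. The `Verdict` enum and the choice of Cohen's kappa are assumptions for illustration, not part of MiroEval itself:

```python
from enum import Enum
from sklearn.metrics import cohen_kappa_score

class Verdict(Enum):
    RIGHT = "right"
    WRONG = "wrong"
    CONFLICT = "conflict"
    UNKNOWN = "unknown"

# Hypothetical per-anchor verdicts from two judges on the same six anchors.
judge_a = [Verdict.RIGHT, Verdict.WRONG, Verdict.UNKNOWN,
           Verdict.RIGHT, Verdict.CONFLICT, Verdict.RIGHT]
judge_b = [Verdict.RIGHT, Verdict.WRONG, Verdict.RIGHT,
           Verdict.RIGHT, Verdict.CONFLICT, Verdict.UNKNOWN]

# Cohen's kappa corrects raw agreement for chance; values near 1 indicate
# well-calibrated judges, values near 0 suggest the label definitions
# (e.g., CONFLICT vs. UNKNOWN) need tightening.
kappa = cohen_kappa_score([v.value for v in judge_a],
                          [v.value for v in judge_b])
print(f"Cohen's kappa = {kappa:.2f}")
```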


Get this paper in your agent:

hf papers read 2603.28407
Don't have the latest CLI?
curl -LsSf https://hf.co/cli/install.sh | bash
