Wiki Live Challenge: Challenging Deep Research Agents with Expert-Level Wikipedia Articles
Abstract
Deep Research Agents show strong capabilities in autonomous information retrieval but fall significantly short of expert-level Wikipedia articles when evaluated with a new live benchmark and comprehensive evaluation framework.
Deep Research Agents (DRAs) have demonstrated remarkable capabilities in autonomous information retrieval and report generation, showing great potential to assist humans in complex research tasks. Current evaluation frameworks primarily rely on LLM-generated references or LLM-derived evaluation dimensions. While these approaches offer scalability, they often lack the reliability of expert-verified content and struggle to provide objective, fine-grained assessments of critical dimensions. To bridge this gap, we introduce Wiki Live Challenge (WLC), a live benchmark that leverages the newest Wikipedia Good Articles (GAs) as expert-level references. Wikipedia's strict standards for neutrality, comprehensiveness, and verifiability pose a significant challenge for DRAs, and GAs represent the pinnacle of those standards. We curate a dataset of 100 recent Good Articles and propose Wiki Eval, a comprehensive evaluation framework comprising a fine-grained evaluation method with 39 criteria for writing quality and rigorous metrics for factual verifiability. Extensive experiments on various DRA systems demonstrate a significant gap between current DRAs and expert-level human-written Wikipedia articles, validating the effectiveness of WLC in advancing agent research. We release our benchmark at https://github.com/WangShao2000/Wiki_Live_Challenge.
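As a rough illustration of the rubric-style scoring the abstract describes, the sketch below shows how per-criterion judgments over a candidate article could be aggregated into a single writing-quality score. The criterion names, weights, and judge function are illustrative assumptions, not the actual Wiki Eval interface or its 39 criteria, which are defined in the linked repository.

```python
# Minimal sketch of rubric-style article scoring, assuming a hypothetical
# interface: criterion names, weights, and judge_fn are illustrative and do
# not reflect the actual Wiki Eval code or its 39 criteria.
from dataclasses import dataclass
from typing import Callable


@dataclass
class Criterion:
    name: str          # short identifier, e.g. "neutral_tone"
    description: str   # what the judge checks in the candidate article
    weight: float = 1.0


# A tiny stand-in subset; the real framework defines 39 writing-quality criteria.
CRITERIA = [
    Criterion("lead_summarizes_body", "The lead section summarizes the article's key points."),
    Criterion("neutral_tone", "Claims are stated without promotional or biased language."),
    Criterion("verifiable_claims", "Factual statements are attributable to cited sources."),
]


def rubric_score(candidate: str, reference: str,
                 criteria: list[Criterion],
                 judge_fn: Callable[[str, str, Criterion], float]) -> float:
    """Weighted average of per-criterion scores in [0, 1].

    judge_fn compares the candidate article against the Good Article
    reference for one criterion; in practice it would wrap an LLM judge.
    """
    total_weight = sum(c.weight for c in criteria)
    weighted_sum = sum(c.weight * judge_fn(candidate, reference, c) for c in criteria)
    return weighted_sum / total_weight


if __name__ == "__main__":
    # Dummy judge so the sketch runs end to end: score 1.0 if the candidate
    # is at least half the reference's length, else 0.0 (not a real metric).
    dummy_judge = lambda cand, ref, crit: 1.0 if len(cand) >= 0.5 * len(ref) else 0.0
    print(rubric_score("candidate article text", "reference Good Article text",
                       CRITERIA, dummy_judge))
```

Factual-verifiability metrics would be computed separately; this sketch covers only the weighted aggregation of writing-quality judgments.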
Community
Hi everyone, we have released the Wiki Live Challenge, a benchmark that uses Wikipedia Good Articles as a high-level human baseline. It is designed to evaluate the writing quality and information-gathering capabilities of Deep Research Agents in authoring Wikipedia content. Our results indicate that there is still a gap between current DRAs and real-world human experts in this domain.
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API:
- DeepResearch Bench II: Diagnosing Deep Research Agents via Rubrics from Expert Report (2026)
- DeepResearchEval: An Automated Framework for Deep Research Task Construction and Agentic Evaluation (2026)
- DEER: A Benchmark for Evaluating Deep Research Agents on Expert Report Generation (2025)
- Mind2Report: A Cognitive Deep Research Agent for Expert-Level Commercial Report Synthesis (2026)
- DR-Arena: an Automated Evaluation Framework for Deep Research Agents (2026)
- RubricHub: A Comprehensive and Highly Discriminative Rubric Dataset via Automated Coarse-to-Fine Generation (2026)
- DeepSynth-Eval: Objectively Evaluating Information Consolidation in Deep Survey Writing (2026)