AACR-Bench: Evaluating Automatic Code Review with Holistic Repository-Level Context
Abstract
AACR-Bench addresses limitations in automated code review benchmarks by providing multi-language, cross-file context with expert-verified defect annotations, revealing significant gaps in prior LLM evaluation and demonstrating that context granularity and retrieval methods critically impact ACR performance across different models and paradigms.
High-quality evaluation benchmarks are pivotal for deploying Large Language Models (LLMs) in Automated Code Review (ACR). However, existing benchmarks suffer from two critical limitations: first, the lack of multi-language support in repository-level contexts, which restricts the generalizability of evaluation results; second, the reliance on noisy, incomplete ground truth derived from raw Pull Request (PR) comments, which constrains the scope of issue detection. To address these challenges, we introduce AACR-Bench, a comprehensive benchmark that provides full cross-file context across multiple programming languages. Unlike traditional datasets, AACR-Bench employs an "AI-assisted, Expert-verified" annotation pipeline to uncover latent defects often overlooked in original PRs, resulting in a 285% increase in defect coverage. Extensive evaluations of mainstream LLMs on AACR-Bench reveal that previous assessments may have either misjudged or only partially captured model capabilities due to data limitations. Our work establishes a more rigorous standard for ACR evaluation and offers new insights into LLM-based ACR: the granularity/level of context and the choice of retrieval method significantly impact ACR performance, and this influence varies with the LLM, the programming language, and the LLM usage paradigm (e.g., whether an agent architecture is employed). The code, data, and other artifacts of our evaluation set are available at https://github.com/alibaba/aacr-bench.
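To make the notion of context granularity concrete, the sketch below assembles review prompts at three hypothetical levels (diff-only, enclosing file, and retrieved cross-file snippets). The field names (`diff`, `file_content`, `related_snippets`) and the `build_review_prompt` helper are illustrative assumptions, not the benchmark's actual schema or API.

```python
# Minimal sketch of prompting an LLM for code review at different context
# granularities. Field names and structure are illustrative assumptions,
# not the actual AACR-Bench schema.
from dataclasses import dataclass, field
from typing import List


@dataclass
class ReviewSample:
    diff: str                              # the code change under review
    file_content: str = ""                 # full text of the changed file
    related_snippets: List[str] = field(default_factory=list)  # cross-file context


def build_review_prompt(sample: ReviewSample, granularity: str) -> str:
    """Assemble a review prompt at 'diff', 'file', or 'cross_file' granularity."""
    parts = ["You are a code reviewer. Identify defects in the change below.\n"]
    if granularity in ("file", "cross_file") and sample.file_content:
        parts.append("### Changed file\n" + sample.file_content)
    if granularity == "cross_file" and sample.related_snippets:
        parts.append("### Related code retrieved from the repository")
        parts.extend(sample.related_snippets)
    parts.append("### Diff\n" + sample.diff)
    return "\n\n".join(parts)


if __name__ == "__main__":
    sample = ReviewSample(
        diff="- return cache[key]\n+ return cache.get(key)",
        file_content="def lookup(cache, key):\n    return cache.get(key)\n",
        related_snippets=["# caller assumes KeyError is raised on a miss"],
    )
    for level in ("diff", "file", "cross_file"):
        prompt = build_review_prompt(sample, level)
        print(f"--- {level}: {len(prompt)} chars of context ---")
```

In a setup like this, the same model can be evaluated under each granularity to measure how much cross-file context changes defect-detection performance, which is the kind of comparison the paper's findings describe.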
Community
This paper introduces AACR-Bench, a multi-language benchmark utilizing an AI-assisted, expert-verified pipeline to rigorously evaluate LLMs for repository-level Automated Code Review.
This is an automated message from the Librarian Bot. The following similar papers were recommended by the Semantic Scholar API:
- M2G-Eval: Enhancing and Evaluating Multi-granularity Multilingual Code Generation (2025)
- Sphinx: Benchmarking and Modeling for LLM-Driven Pull Request Review (2026)
- SWE-Bench++: A Framework for the Scalable Generation of Software Engineering Benchmarks from Open-Source Repositories (2025)
- Evaluating and Achieving Controllable Code Completion in Code LLM (2026)
- MHRC-Bench: A Multilingual Hardware Repository-Level Code Completion benchmark (2026)
- SeRe: A Security-Related Code Review Dataset Aligned with Real-World Review Activities (2026)
- SweRank+: Multilingual, Multi-Turn Code Ranking for Software Issue Localization (2025)