arxiv:2603.20576

Can AI Agents Answer Your Data Questions? A Benchmark for Data Agents

Published on Mar 21
· Submitted by Shreya Shankar on Mar 25

Abstract

A comprehensive benchmark evaluates enterprise data agents' ability to integrate and analyze multi-database data through natural language, revealing significant challenges in real-world applications.

AI-generated summary

Users across enterprises increasingly rely on AI agents to query their data through natural language. However, building reliable data agents remains difficult because real-world data is often fragmented across multiple heterogeneous database systems, with inconsistent references and information buried in unstructured text. Existing benchmarks tackle only individual pieces of this problem -- e.g., translating natural-language questions into SQL queries, or answering questions over small tables provided in context -- but do not evaluate the full pipeline of integrating, transforming, and analyzing data across multiple database systems. To fill this gap, we present the Data Agent Benchmark (DAB), grounded in a formative study of enterprise data agent workloads across six industries. DAB comprises 54 queries across 12 datasets, 9 domains, and 4 database management systems. On DAB, the best frontier model (Gemini-3-Pro) achieves only 38% pass@1 accuracy. We benchmark five frontier LLMs, analyze their failure modes, and distill takeaways for future data agent development. Our benchmark and experiment code are published at github.com/ucbepic/DataAgentBench.
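The headline metric, pass@1, is the fraction of benchmark queries on which a single attempt by the agent produces the correct answer. A minimal sketch of that computation (the function name and the example outcomes below are illustrative, not taken from the paper's code):

```python
def pass_at_1(results: list[bool]) -> float:
    """Fraction of queries answered correctly, given one boolean per
    query (True if the agent's single attempt passed the checker)."""
    return sum(results) / len(results)

# Hypothetical per-query outcomes for a 54-query benchmark run
outcomes = [True] * 21 + [False] * 33
print(f"pass@1 = {pass_at_1(outcomes):.1%}")
```

Note that pass@1 here is computed over the 54 queries, so a single additional correct answer moves the score by roughly 1.9 percentage points.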
