Abstract
AiScientist enables autonomous long-horizon ML research engineering by combining hierarchical orchestration with durable state management, achieving superior performance on benchmark tasks through structured coordination and persistent project artifacts.
Autonomous AI research has advanced rapidly, but long-horizon ML research engineering remains difficult: agents must sustain coherent progress across task comprehension, environment setup, implementation, experimentation, and debugging over hours or days. We introduce AiScientist, a system for autonomous long-horizon ML research engineering built on a simple principle: strong long-horizon performance requires both structured orchestration and durable state continuity. To this end, AiScientist combines hierarchical orchestration with a permission-scoped File-as-Bus workspace: a top-level Orchestrator maintains stage-level control through concise summaries and a workspace map, while specialized agents repeatedly re-ground on durable artifacts such as analyses, plans, code, and experimental evidence rather than relying primarily on conversational handoffs, yielding thin control over thick state. Across two complementary benchmarks, AiScientist improves the PaperBench score by 10.54 points on average over the best matched baseline and achieves 81.82 Any Medal% on MLE-Bench Lite. Ablation studies further show that the File-as-Bus protocol is a key driver of performance: removing it reduces the PaperBench score by 6.41 points and the MLE-Bench Lite score by 31.82 points. These results suggest that long-horizon ML research engineering is a systems problem of coordinating specialized work over durable project state, rather than a purely local reasoning problem.
Community
AiScientist is an autonomous system for long-horizon ML research engineering. It shows that long-horizon ML research engineering is not just a local reasoning problem, but a systems problem of state continuity. By combining hierarchical orchestration with a File-as-Bus workspace that preserves durable project state, it can carry work across paper understanding, environment setup, implementation, experimentation, and debugging, improving PaperBench by 10.54 points over the best matched baseline and reaching 81.82 Any Medal% on MLE-Bench Lite.
Get this paper in your agent:
hf papers read 2604.13018