arxiv:2603.22386

From Static Templates to Dynamic Runtime Graphs: A Survey of Workflow Optimization for LLM Agents

Published on Mar 23 · Submitted by Leo Y on Mar 25
Abstract

LLM-based systems solve tasks by constructing executable workflows that interleave various computational components; this survey organizes recent approaches by when workflow structure is determined and along which dimensions it is optimized.

AI-generated summary

Large language model (LLM)-based systems are becoming increasingly popular for solving tasks by constructing executable workflows that interleave LLM calls, information retrieval, tool use, code execution, memory updates, and verification. This survey reviews recent methods for designing and optimizing such workflows, which we treat as agentic computation graphs (ACGs). We organize the literature based on when workflow structure is determined, where structure refers to which components or agents are present, how they depend on each other, and how information flows between them. This lens distinguishes static methods, which fix a reusable workflow scaffold before deployment, from dynamic methods, which select, generate, or revise the workflow for a particular run before or during execution. We further organize prior work along three dimensions: when structure is determined, what part of the workflow is optimized, and which evaluation signals guide optimization (e.g., task metrics, verifier signals, preferences, or trace-derived feedback). We also distinguish reusable workflow templates, run-specific realized graphs, and execution traces, separating reusable design choices from the structures actually deployed in a given run and from realized runtime behavior. Finally, we outline a structure-aware evaluation perspective that complements downstream task metrics with graph-level properties, execution cost, robustness, and structural variation across inputs. Our goal is to provide a clear vocabulary, a unified framework for positioning new methods, a more comparable view of the existing body of literature, and a more reproducible evaluation standard for future work on workflow optimization for LLM agents.
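The distinction the abstract draws between a reusable workflow template, a run-specific realized graph, and an execution trace can be sketched in code. The following is a minimal illustrative sketch, not the paper's formalism: the class names (`ACGTemplate`, `RealizedGraph`), the stub components, and the trivial realization step are all our own assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class ACGTemplate:
    """Reusable workflow scaffold: named components plus dependency edges."""
    nodes: dict   # name -> callable(task, inputs dict) -> output
    edges: list   # (upstream, downstream) dependency pairs

    def realize(self, task):
        """Produce a run-specific graph (here trivially: the template bound to one task).
        Dynamic methods would instead select, generate, or revise the structure."""
        return RealizedGraph(self, task)

@dataclass
class RealizedGraph:
    template: ACGTemplate
    task: str
    trace: list = field(default_factory=list)  # execution trace, recorded at run time

    def execute(self):
        # Topological execution: a node runs once all its upstream outputs exist.
        done, outputs = set(), {}
        while len(done) < len(self.template.nodes):
            for name, fn in self.template.nodes.items():
                deps = [u for (u, v) in self.template.edges if v == name]
                if name not in done and all(d in done for d in deps):
                    result = fn(self.task, {d: outputs[d] for d in deps})
                    outputs[name] = result
                    self.trace.append((name, result))
                    done.add(name)
        return outputs

# Stub components standing in for retrieval, an LLM call, and verification.
template = ACGTemplate(
    nodes={
        "retrieve": lambda task, inp: f"docs for {task!r}",
        "draft":    lambda task, inp: f"answer using {inp['retrieve']}",
        "verify":   lambda task, inp: f"checked: {inp['draft']}",
    },
    edges=[("retrieve", "draft"), ("draft", "verify")],
)

run = template.realize("what is an ACG?")
outputs = run.execute()
print([step for step, _ in run.trace])  # -> ['retrieve', 'draft', 'verify']
```

The point of separating the three objects is evaluative: the template is a design choice, the realized graph is what was deployed for this input, and the trace is what actually happened at runtime, so structural variation across inputs can be measured by comparing realized graphs and traces rather than templates.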

Community

Paper submitter

This survey provides a clear framework for understanding workflow optimization in LLM-based agents. The authors formalize agentic systems as agentic computation graphs (ACGs), distinguish reusable workflow templates, run-specific realized graphs, and execution traces, and organize the literature by when workflow structure is determined: static optimization, pre-execution generation or selection, and in-execution editing. I found the evaluation perspective especially valuable, since the paper argues that workflow structure should be treated as a first-class output alongside downstream task performance, execution cost, robustness, and structural variation across inputs. Overall, this is a timely and well-structured survey for researchers and practitioners working on LLM agents, multi-agent systems, and orchestration frameworks.

