arXiv:2510.12635

Memory as Action: Autonomous Context Curation for Long-Horizon Agentic Tasks

Published on Oct 14 · Submitted by Yuxiang Zhang on Oct 15
Abstract

Large Language Models face challenges in long-horizon agentic tasks as their constrained memory is easily overwhelmed by distracting or irrelevant context. Existing working memory methods typically rely on external, heuristic mechanisms that are decoupled from the agent's core policy. In this work, we reframe working memory management as a learnable, intrinsic capability. We propose a novel framework, Memory-as-Action, where an agent actively manages its working memory by executing explicit editing operations as part of a unified policy. This formulation allows an agent, trained via reinforcement learning, to balance memory curation against long-term task objectives under given resource constraints. However, such memory editing actions break the standard assumption of a continuously growing prefix in LLM interactions, leading to what we call trajectory fractures. These non-prefix changes disrupt the causal continuity required by standard policy gradient methods, making those methods inapplicable. To address this, we propose a new algorithm, Dynamic Context Policy Optimization, which enables stable end-to-end reinforcement learning by segmenting trajectories at memory action points and applying trajectory-level advantages to the resulting action segments. Our results demonstrate that jointly optimizing for task reasoning and memory management in an end-to-end fashion not only reduces overall computational consumption but also improves task performance, driven by adaptive context curation strategies tailored to the model's intrinsic capabilities.
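To make the segmentation idea concrete, below is a minimal Python sketch of how a trajectory might be cut at memory-action points and trained with a shared trajectory-level advantage, as the abstract describes for Dynamic Context Policy Optimization. All names here (`Step`, `segment_at_memory_actions`, `dcpo_loss`, `log_prob_fn`) are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of the Dynamic Context Policy Optimization idea:
# a memory edit rewrites the agent's context (a non-prefix change), so the
# trajectory is split at each memory action, and the standard policy-gradient
# surrogate is applied within each prefix-consistent segment, with every
# segment weighted by the same trajectory-level advantage.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Step:
    """One agent step: the context the policy conditioned on, the tokens it
    emitted, and whether the action edited working memory."""
    context: str
    action_tokens: List[int]
    is_memory_edit: bool


def segment_at_memory_actions(trajectory: List[Step]) -> List[List[Step]]:
    """Split a trajectory into segments whose contexts grow by pure prefix
    extension; a memory edit 'fractures' the trajectory, so a fresh segment
    starts immediately after it."""
    segments: List[List[Step]] = []
    current: List[Step] = []
    for step in trajectory:
        current.append(step)
        if step.is_memory_edit:
            segments.append(current)
            current = []
    if current:
        segments.append(current)
    return segments


def dcpo_loss(
    trajectory: List[Step],
    trajectory_advantage: float,
    log_prob_fn: Callable[[str, List[int]], float],
) -> float:
    """Accumulate the policy-gradient surrogate over each segment, assigning
    the shared trajectory-level advantage to all segments."""
    loss = 0.0
    for segment in segment_at_memory_actions(trajectory):
        for step in segment:
            # log_prob_fn scores the emitted tokens under the current policy,
            # conditioned on the (possibly edited) context the agent saw.
            loss -= trajectory_advantage * log_prob_fn(step.context, step.action_tokens)
    return loss


if __name__ == "__main__":
    # Toy trajectory: the second action edits memory, so the third step's
    # context is a rewritten summary rather than a grown prefix.
    traj = [
        Step("obs1", [1, 2], False),
        Step("obs1 obs2", [3], True),
        Step("summary obs3", [4, 5], False),
    ]
    fake_log_prob = lambda ctx, toks: -0.1 * len(toks)  # stand-in scorer
    print(dcpo_loss(traj, trajectory_advantage=1.0, log_prob_fn=fake_log_prob))
```

The point of the split is that within each segment the context is a valid growing prefix, so standard token-level policy-gradient terms remain well defined, while credit still flows across segments through the shared trajectory-level advantage.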

