Multi-Agent Deep Research: Training Multi-Agent Systems with M-GRPO
Abstract
M-GRPO, an extension of Group Relative Policy Optimization for hierarchical multi-agent systems, improves stability and efficiency in tool-augmented reasoning tasks by aligning heterogeneous trajectories and decoupling agent training.
Multi-agent systems perform well on general reasoning tasks, but a lack of training in specialized domains limits their accuracy. Current training methods optimize a single unified large language model (LLM) shared by all agents in the system, which can limit performance because different agents face different underlying data distributions. Training multi-agent systems with distinct LLMs is therefore a natural next step, but it introduces optimization challenges: agents operate at different frequencies, rollouts involve varying numbers of sub-agent invocations, and agents are often deployed across separate servers, which disrupts end-to-end gradient flow. To address these issues, we propose M-GRPO, a hierarchical extension of Group Relative Policy Optimization (GRPO) designed for vertical multi-agent systems with a main agent (planner) and multiple sub-agents (multi-turn tool executors). M-GRPO computes group-relative advantages for both the main agent and the sub-agents, maintaining hierarchical credit assignment, and introduces a trajectory-alignment scheme that produces fixed-size batches despite variable numbers of sub-agent invocations. We also deploy a decoupled training pipeline in which agents run on separate servers and exchange only minimal statistics through a shared store, enabling scalable training without cross-server backpropagation. In experiments on real-world benchmarks (e.g., GAIA, XBench-DeepSearch, and WebWalkerQA), M-GRPO consistently outperforms both single-agent GRPO and multi-agent GRPO with frozen sub-agents, demonstrating improved stability and sample efficiency. These results show that aligning heterogeneous trajectories and decoupling optimization across specialized agents strengthens tool-augmented reasoning.
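To make the two core mechanics concrete, the sketch below is a minimal, self-contained illustration, not the paper's implementation: `group_relative_advantage` is the standard GRPO advantage (each reward standardized within its rollout group), and `align_subagent_batch` is a hypothetical alignment rule that subsamples or resamples each rollout's variable number of sub-agent trajectories down to a fixed count `k`, one plausible way to realize the fixed-size batches described above. The function names, the resampling rule, and the choice to normalize all sub-agent rewards in a single group are assumptions; the paper's exact scheme may differ.

```python
import numpy as np

def group_relative_advantage(rewards):
    """GRPO-style advantage: standardize each reward against its rollout group."""
    r = np.asarray(rewards, dtype=np.float64)
    return (r - r.mean()) / (r.std() + 1e-8)

def align_subagent_batch(sub_rewards_per_rollout, k, seed=0):
    """Align variable-length sub-agent reward lists to a fixed size k.

    Hypothetical rule (an assumption, not the paper's): rollouts with more
    than k sub-agent calls are randomly subsampled; rollouts with fewer are
    padded by resampling their own calls with replacement, so every rollout
    contributes exactly k sub-agent trajectories.
    """
    rng = np.random.default_rng(seed)
    aligned = []
    for rewards in sub_rewards_per_rollout:
        rewards = list(rewards)
        replace = len(rewards) < k  # resample with replacement only when short
        idx = rng.choice(len(rewards), size=k, replace=replace)
        aligned.append([rewards[i] for i in idx])
    return np.asarray(aligned)  # fixed shape: (num_rollouts, k)

# Example: 4 main-agent rollouts, each invoking the sub-agent a different number of times.
main_rewards = [1.0, 0.0, 0.5, 1.0]
sub_rewards = [[0.2, 0.9], [0.1], [0.7, 0.3, 0.4], [0.8, 0.6]]

main_adv = group_relative_advantage(main_rewards)       # one advantage per rollout
sub_batch = align_subagent_batch(sub_rewards, k=2)      # fixed (4, 2) batch
sub_adv = group_relative_advantage(sub_batch.ravel()).reshape(sub_batch.shape)

print(main_adv)
print(sub_adv)
```

Note the design consequence: once every rollout contributes exactly `k` sub-agent trajectories, batch shapes are static, so the main-agent and sub-agent updates can run on separate servers and only need to exchange the small statistics (e.g., rewards or group means and standard deviations) required for normalization, consistent with the shared-store pipeline described in the abstract.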
Community
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- Multi-Agent Tool-Integrated Policy Optimization (2025)
- Stronger Together: On-Policy Reinforcement Learning for Collaborative LLMs (2025)
- In-the-Flow Agentic System Optimization for Effective Planning and Tool Use (2025)
- MARS: Reinforcing Multi-Agent Reasoning of LLMs through Self-Play in Strategic Games (2025)
- AgentRL: Scaling Agentic Reinforcement Learning with a Multi-Turn, Multi-Task Framework (2025)
- Unlocking the Power of Multi-Agent LLM for Reasoning: From Lazy Agents to Deliberation (2025)
- DRAFT-RL: Multi-Agent Chain-of-Draft Reasoning for Reinforcement Learning-Enhanced LLMs (2025)