Abstract
A multi-agent framework refines conversational responses by addressing factuality, personalization, and coherence, outperforming single-agent methods on challenging datasets.
Large Language Models (LLMs) have demonstrated remarkable success in conversational systems by generating human-like responses. However, they can fall short, especially when required to account for personalization or specific knowledge. In real-life settings, it is impractical to rely on users to detect these errors and request a new response. One way to address this problem is to refine the response before returning it to the user. While existing approaches focus on refining responses within a single LLM, such methods struggle to account for the diverse aspects needed for effective conversations. In this work, we propose refining responses through a multi-agent framework, where each agent is assigned a specific role for one aspect. We focus on three aspects crucial to conversational quality: factuality, personalization, and coherence. Each agent is responsible for reviewing and refining one of these aspects, and their feedback is then merged to improve the overall response. To enhance collaboration among them, we introduce a dynamic communication strategy: instead of following a fixed sequence of agents, our approach adaptively selects and coordinates the most relevant agents based on the specific requirements of each query. We validate our framework on challenging conversational datasets, demonstrating that it significantly outperforms relevant baselines, particularly in tasks involving knowledge, user personas, or both.
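The abstract describes a planner that adaptively routes each query to the relevant specialist agents rather than a fixed pipeline. The following is a minimal sketch of that control flow, not the paper's implementation: the agent and planner bodies are rule-based stand-ins for LLM calls, and agent feedback is applied sequentially here rather than merged as in the paper.

```python
# Hypothetical sketch of dynamic agent selection for response refinement.
# All agent/planner internals below are placeholder stand-ins for LLM calls.
from typing import Callable, Dict, List

# Each specialist agent maps (query, draft) -> refined draft.
Agent = Callable[[str, str], str]

def factuality_agent(query: str, draft: str) -> str:
    # Placeholder: a real agent would check claims against retrieved knowledge.
    return draft + " [facts checked]"

def personalization_agent(query: str, draft: str) -> str:
    # Placeholder: a real agent would align the response with the user's persona.
    return draft + " [persona aligned]"

def coherence_agent(query: str, draft: str) -> str:
    # Placeholder: a real agent would smooth the response w.r.t. dialogue history.
    return draft + " [coherence improved]"

AGENTS: Dict[str, Agent] = {
    "factuality": factuality_agent,
    "personalization": personalization_agent,
    "coherence": coherence_agent,
}

def planner(query: str) -> List[str]:
    # Placeholder routing: in the framework the planner is itself an agent that
    # picks the aspects relevant to this query instead of a fixed sequence.
    selected = []
    if "fact" in query or "know" in query:
        selected.append("factuality")
    if "my" in query.split() or "I" in query.split():
        selected.append("personalization")
    selected.append("coherence")  # assumed always relevant in this sketch
    return selected

def refine(query: str, draft: str) -> str:
    """Dynamically select agents and apply their feedback to the draft."""
    for name in planner(query):
        draft = AGENTS[name](query, draft)
    return draft
```

For example, `refine("Tell me a fact about my favorite team", "Draft response.")` routes through all three agents, while a purely chit-chat query would skip the factuality agent.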
Community
We propose a cooperative multi-agent framework where specialized LLM agents (for factuality, personalization, and coherence) dynamically collaborate alongside a planner agent to refine conversational responses.
The following papers were recommended by the Semantic Scholar API
- PRINCIPLES: Synthetic Strategy Memory for Proactive Dialogue Agents (2025)
- From Simulation to Strategy: Automating Personalized Interaction Planning for Conversational Agents (2025)
- ProSEA: Problem Solving via Exploration Agents (2025)
- ChatR1: Reinforcement Learning for Conversational Reasoning and Retrieval Augmented Question Answering (2025)
- Enhancing Multi-Agent Debate System Performance via Confidence Expression (2025)
- When Thoughts Meet Facts: Reusable Reasoning for Long-Context LMs (2025)
- MARS: Toward More Efficient Multi-Agent Collaboration for LLM Reasoning (2025)