Multi-Agent Retrieval-Augmented Framework for Evidence-Based Counterspeech Against Health Misinformation
Abstract
A multi-agent retrieval-augmented framework improves counterspeech generation against health misinformation by integrating multiple LLMs for better evidence retrieval and response quality.
Large language models (LLMs) augmented with Retrieval-Augmented Generation (RAG) have demonstrated powerful capabilities in generating counterspeech against misinformation. However, current studies rely on limited evidence and offer limited control over final outputs. To address these challenges, we propose a Multi-agent Retrieval-Augmented Framework that generates counterspeech against health misinformation, incorporating multiple LLMs to optimize knowledge retrieval, evidence enhancement, and response refinement. Our approach integrates both static and dynamic evidence, ensuring that the generated counterspeech is relevant, well-grounded, and up-to-date. Our method outperforms baseline approaches in politeness, relevance, informativeness, and factual accuracy, demonstrating its effectiveness in generating high-quality counterspeech. To further validate our approach, we conduct ablation studies verifying the necessity of each component in our framework. Cross evaluations show that our system generalizes well across diverse health misinformation topics and datasets, and human evaluations reveal that the refinement step significantly enhances counterspeech quality and is preferred by human evaluators.
Community
How can we make counterspeech against health misinformation more effective?
Our recent paper, Multi-Agent Retrieval-Augmented Framework for Evidence-Based Counterspeech Against Health Misinformation (https://arxiv.org/abs/2507.07307), explores this question.
We propose a multi-agent framework that integrates multiple LLMs for evidence retrieval, enhancement, and refinement. Unlike single-model approaches, this system:
- Combines static + dynamic evidence for up-to-date, reliable responses.
- Produces counterspeech that is more polite, relevant, and factually accurate.
- Demonstrates strong generalization across diverse health misinformation topics.
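The retrieval-enhancement-refinement pipeline above can be sketched in code. This is a minimal illustration under stated assumptions: every agent function below is a hypothetical stand-in (the paper's actual prompts, retrievers, and models are not reproduced here), with simple string stubs in place of LLM calls.

```python
from dataclasses import dataclass

@dataclass
class Evidence:
    text: str
    source: str  # "static" (curated corpus) or "dynamic" (live retrieval)

def retrieve_static(claim: str) -> list[Evidence]:
    # Hypothetical agent: would query a curated, vetted evidence corpus.
    return [Evidence("Large reviews have found no vaccine-autism link.", "static")]

def retrieve_dynamic(claim: str) -> list[Evidence]:
    # Hypothetical agent: would query a live search API for recent evidence.
    return [Evidence("Recent cohort studies confirm this finding.", "dynamic")]

def enhance(claim: str, evidence: list[Evidence]) -> list[Evidence]:
    # Hypothetical enhancement agent: filters and ranks evidence for relevance.
    return [e for e in evidence if e.text]

def draft(claim: str, evidence: list[Evidence]) -> str:
    # Hypothetical generation agent: composes an evidence-grounded response.
    cited = " ".join(e.text for e in evidence)
    return f"This claim is not supported by the evidence. {cited}"

def refine(response: str) -> str:
    # Hypothetical refinement agent: would polish tone, politeness, grounding.
    return response.strip()

def counterspeech(claim: str) -> str:
    # Full pipeline: retrieve (static + dynamic) -> enhance -> draft -> refine.
    evidence = enhance(claim, retrieve_static(claim) + retrieve_dynamic(claim))
    return refine(draft(claim, evidence))

print(counterspeech("Vaccines cause autism."))
```

The design point is the separation of roles: each stage is an independent agent, so evidence sources can be swapped and the refinement step can be ablated, mirroring the ablation studies described above.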
We also conducted ablation studies and human evaluations, showing that refinement steps significantly improve user preference and counterspeech quality.
I'd love to hear your thoughts:
How do you see multi-agent systems shaping the future of fact-checking and counterspeech?
What challenges do you think remain in making LLM-based health communication both effective and trustworthy?