MA-EgoQA: Question Answering over Egocentric Videos from Multiple Embodied Agents
Abstract
Multi-agent systems require understanding multiple long-horizon egocentric videos simultaneously, necessitating new benchmarks and models for system-level comprehension.
As embodied models grow more capable, humans will increasingly collaborate with multiple embodied AI agents in their workplaces and homes. For effective communication between human users and such a multi-agent system, it is crucial to interpret incoming information from the agents in parallel and to refer to the appropriate context for each query. Key challenges include compressing and communicating the high volume of individual sensory inputs in the form of video, and correctly aggregating multiple egocentric videos into a system-level memory. In this work, we first formally define the novel problem of simultaneously understanding multiple long-horizon egocentric videos collected from embodied agents. To facilitate research in this direction, we introduce MultiAgent-EgoQA (MA-EgoQA), a benchmark designed to systematically evaluate existing models in this scenario. MA-EgoQA provides 1.7k questions unique to multiple egocentric streams, spanning five categories: social interaction, task coordination, theory-of-mind, temporal reasoning, and environmental interaction. We further propose a simple baseline model for MA-EgoQA, named EgoMAS, which leverages shared memory across embodied agents and agent-wise dynamic retrieval. Through comprehensive evaluation of diverse baselines and EgoMAS on MA-EgoQA, we find that current approaches cannot effectively handle multiple egocentric streams, highlighting the need for future advances in system-level understanding across agents. The code and benchmark are available at https://ma-egoqa.github.io.
Community
We introduce MA-EgoQA, the first benchmark for question answering over multiple long-horizon egocentric videos from embodied agents (1,741 questions, 5 categories, 6 agents, 7 days).
Moreover, we propose EgoMAS, a training-free baseline using shared memory and dynamic retrieval that outperforms state-of-the-art frontier models like Gemini-2.5-Flash and GPT-5.
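To make the shared-memory-plus-retrieval design concrete, here is a minimal, hypothetical Python sketch of the idea; the paper does not publish this pseudocode, all names (`SharedMemory`, `MemoryEntry`, `retrieve`) are illustrative assumptions, and simple keyword overlap stands in for whatever retriever EgoMAS actually uses. It pools compressed observations from all agents into one memory and retrieves per-agent evidence for a query so no single stream dominates the context.

```python
# Conceptual sketch (not the authors' code): shared memory across agents
# with agent-wise dynamic retrieval, in the spirit of EgoMAS.
from dataclasses import dataclass, field

@dataclass
class MemoryEntry:
    agent_id: str    # which embodied agent produced this observation
    timestamp: float # seconds since the start of recording
    caption: str     # compressed textual summary of a video segment

@dataclass
class SharedMemory:
    """Pooled memory over all agents' egocentric streams."""
    entries: list[MemoryEntry] = field(default_factory=list)

    def add(self, agent_id: str, timestamp: float, caption: str) -> None:
        self.entries.append(MemoryEntry(agent_id, timestamp, caption))

    def retrieve(self, query: str, top_k: int = 5) -> list[MemoryEntry]:
        """Score entries by keyword overlap with the query (a stand-in
        for a learned retriever), keep the best top_k per agent, then
        merge in temporal order for the downstream QA model."""
        q_tokens = set(query.lower().split())

        def score(e: MemoryEntry) -> int:
            return len(q_tokens & set(e.caption.lower().split()))

        per_agent: dict[str, list[MemoryEntry]] = {}
        for e in sorted(self.entries, key=score, reverse=True):
            bucket = per_agent.setdefault(e.agent_id, [])
            if len(bucket) < top_k:
                bucket.append(e)
        merged = [e for es in per_agent.values() for e in es]
        return sorted(merged, key=lambda e: e.timestamp)

if __name__ == "__main__":
    mem = SharedMemory()
    mem.add("agent_1", 12.0, "picked up the red mug from the kitchen counter")
    mem.add("agent_2", 15.5, "watched agent_1 carry a mug into the living room")
    for e in mem.retrieve("who moved the mug and where did it go"):
        print(f"[{e.agent_id} @ {e.timestamp:>5.1f}s] {e.caption}")
```

The per-agent cap is the key design choice this sketch illustrates: retrieving globally by score alone could return evidence from only the most verbose agent, whereas agent-wise retrieval preserves cross-agent context for questions that span multiple streams.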
The following papers, recommended by the Semantic Scholar API, are similar to this paper:
- LifeEval: A Multimodal Benchmark for Assistive AI in Egocentric Daily Life Tasks (2026)
- EgoGraph: Temporal Knowledge Graph for Egocentric Video Understanding (2026)
- EgoSound: Benchmarking Sound Understanding in Egocentric Videos (2026)
- FocusGraph: Graph-Structured Frame Selection for Embodied Long Video Question Answering (2026)
- VideoThinker: Building Agentic VideoLLMs with LLM-Guided Tool Reasoning (2026)
- EgoAVU: Egocentric Audio-Visual Understanding (2026)
- Thinker: A vision-language foundation model for embodied intelligence (2026)