Memory-Augmented Vision-Language Agents for Persistent and Semantically Consistent Object Captioning
Abstract
A memory-augmented vision-language agent simultaneously handles data association, object captioning, and exploration policy within a single autoregressive framework, ensuring consistent object representation across viewpoints.
Vision-Language Models (VLMs) often yield inconsistent descriptions of the same object across viewpoints, hindering the ability of embodied agents to construct consistent semantic representations over time. Previous methods resolve inconsistencies using offline multi-view aggregation or multi-stage pipelines that decouple exploration, data association, and caption learning, with limited capacity to reason over previously observed objects. In this paper, we introduce a unified, memory-augmented Vision-Language agent that simultaneously handles data association, object captioning, and exploration policy within a single autoregressive framework. The model processes the current RGB observation, a top-down explored map, and an object-level episodic memory serialized into object-level tokens, ensuring persistent object identity and semantic consistency across extended sequences. To train the model in a self-supervised manner, we collect a dataset in photorealistic 3D environments using a disagreement-based policy and a pseudo-captioning model that enforces consistency across multi-view caption histories. Extensive evaluation on a manually annotated object-level test set demonstrates improvements of up to +11.86% in standard captioning scores and +7.39% in caption self-similarity over baseline models, while enabling scalable performance through a compact scene representation. Code, model weights, and data are available at https://hsp-iit.github.io/epos-vlm/.
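The abstract's object-level episodic memory can be pictured as a store of per-object entries that new observations are associated against, with each object's caption history then serialized into object-level tokens. The sketch below is purely illustrative, not the paper's implementation: the embedding-based cosine matching, the `match_threshold` parameter, and the `<obj_i>` token format are all assumptions made for the example.

```python
import math


class EpisodicObjectMemory:
    """Minimal sketch of an object-level episodic memory (illustrative only).

    Each entry holds an object embedding and its caption history. Data
    association is approximated here by cosine similarity against stored
    embeddings; the threshold and serialization format are hypothetical.
    """

    def __init__(self, match_threshold=0.8):
        self.entries = []  # each: {"embedding": [...], "captions": [...]}
        self.match_threshold = match_threshold

    @staticmethod
    def _cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(y * y for y in b))
        return dot / (na * nb) if na and nb else 0.0

    def associate(self, embedding):
        """Return the index of the best-matching stored object, or None."""
        best_idx, best_sim = None, self.match_threshold
        for i, entry in enumerate(self.entries):
            sim = self._cosine(embedding, entry["embedding"])
            if sim >= best_sim:
                best_idx, best_sim = i, sim
        return best_idx

    def update(self, embedding, caption):
        """Associate a new observation; extend a matched entry or create one."""
        idx = self.associate(embedding)
        if idx is None:
            self.entries.append({"embedding": embedding, "captions": [caption]})
            return len(self.entries) - 1
        self.entries[idx]["captions"].append(caption)
        return idx

    def serialize(self):
        """Flatten memory into per-object token strings (hypothetical format)."""
        return [
            f"<obj_{i}> " + " | ".join(entry["captions"])
            for i, entry in enumerate(self.entries)
        ]
```

Under this toy scheme, two views of the same sofa land in one entry while a table opens a new one, so the serialized memory preserves a persistent identity per object across viewpoints.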
Community
Why do VLMs call the same object a "sofa," a "bed," and an "armchair" during a single navigation task? 🛋️ We present EPOS-VLM, which uses a structured episodic object memory to ensure persistent semantic consistency in 3D environments. Our agent doesn't just describe; it actively explores to resolve perceptual ambiguities. It outperforms state-of-the-art VLMs like InternVL and BLIP-2 in embodied settings while maintaining a near-constant inference time. Check out our 3D-grounded pseudo-captioning dataset and benchmark!
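The "sofa / bed / armchair" failure mode above is exactly what a disagreement signal can detect: when captions for one object diverge across viewpoints, the agent should keep exploring that object. The function below is a hypothetical stand-in for such a signal (it is not the paper's disagreement-based policy): it scores disagreement as the normalized entropy of the caption distribution across views.

```python
import math
from collections import Counter


def disagreement_score(captions):
    """Hypothetical disagreement signal over an object's multi-view captions.

    Returns the entropy of the caption distribution, normalized to [0, 1]:
    0.0 when all viewpoints agree, 1.0 when every caption differs equally.
    A high score suggests the exploration policy should gather more views.
    """
    counts = Counter(captions)
    n = len(captions)
    if n <= 1 or len(counts) == 1:
        return 0.0  # a single view, or full agreement
    entropy = -sum((c / n) * math.log(c / n) for c in counts.values())
    return entropy / math.log(len(counts))
```

For example, `["sofa", "sofa", "sofa"]` scores 0.0, while `["sofa", "bed"]` scores 1.0, so an exploration policy thresholding this value would revisit only the ambiguous object.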
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- GSMem: 3D Gaussian Splatting as Persistent Spatial Memory for Zero-Shot Embodied Exploration and Reasoning (2026)
- RenderMem: Rendering as Spatial Memory Retrieval (2026)
- Context-Nav: Context-Driven Exploration and Viewpoint-Aware 3D Spatial Reasoning for Instance Navigation (2026)
- HIMM: Human-Inspired Long-Term Memory Modeling for Embodied Exploration and Question Answering (2026)
- Recursive Belief Vision Language Action Models (2026)
- RoboStream: Weaving Spatio-Temporal Reasoning with Memory in Vision-Language Models for Robotics (2026)
- VISOR: VIsual Spatial Object Reasoning for Language-driven Object Navigation (2026)