Abstract
A unified Vision-Language-Action (VLA) and world model, RynnVLA-002, jointly learns environmental dynamics and action planning, outperforming individual models in both simulation and real-world tasks.
We introduce RynnVLA-002, a unified Vision-Language-Action (VLA) and world model. The world model leverages action and visual inputs to predict future image states, learning the underlying physics of the environment to refine action generation. Conversely, the VLA model produces subsequent actions from image observations, enhancing visual understanding and supporting the world model's image generation. The unified framework of RynnVLA-002 enables joint learning of environmental dynamics and action planning. Our experiments show that RynnVLA-002 surpasses standalone VLA and world models, demonstrating that the two components mutually enhance each other. We evaluate RynnVLA-002 in both simulation and real-world robot tasks. RynnVLA-002 achieves a 97.4% success rate on the LIBERO simulation benchmark without pretraining, while in real-world LeRobot experiments, its integrated world model boosts the overall success rate by 50%.
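The core idea of the abstract, a single model that jointly predicts the next action (VLA) and the next observation (world model) from a shared representation, can be sketched in miniature. This is a toy illustration, not the paper's architecture: all dimensions, weight names, and the two-head layout are hypothetical, and real VLA/world models use large vision-language backbones rather than a single dense layer.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy dimensions (not from the paper).
OBS_DIM, ACT_DIM, HID_DIM = 16, 4, 32

# One shared backbone feeds two task-specific heads, so gradients from
# both objectives shape the same representation (the "joint learning" idea).
W_shared = rng.normal(scale=0.1, size=(OBS_DIM + ACT_DIM, HID_DIM))
W_action = rng.normal(scale=0.1, size=(HID_DIM, ACT_DIM))  # VLA head: next action
W_world = rng.normal(scale=0.1, size=(HID_DIM, OBS_DIM))   # world-model head: next observation

def forward(obs, prev_action):
    """Encode (observation, previous action) once, then decode both
    the planned next action and the predicted next observation."""
    x = np.concatenate([obs, prev_action])
    h = np.tanh(x @ W_shared)       # shared representation
    next_action = h @ W_action      # action planning (VLA role)
    next_obs_pred = h @ W_world     # environment dynamics (world-model role)
    return next_action, next_obs_pred

obs = rng.normal(size=OBS_DIM)
prev_action = np.zeros(ACT_DIM)
next_action, next_obs_pred = forward(obs, prev_action)
print(next_action.shape, next_obs_pred.shape)  # (4,) (16,)
```

Training such a sketch would sum an action loss and a next-frame reconstruction loss, so each objective regularizes the other, which is the mutual-enhancement effect the abstract reports.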
Community
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- dVLA: Diffusion Vision-Language-Action Model with Multimodal Chain-of-Thought (2025)
- Unified Diffusion VLA: Vision-Language-Action Model via Joint Discrete Denoising Diffusion Process (2025)
- UniCoD: Enhancing Robot Policy via Unified Continuous and Discrete Representation Learning (2025)
- Embodiment Transfer Learning for Vision-Language-Action Models (2025)
- VITA-VLA: Efficiently Teaching Vision-Language Models to Act via Action Expert Distillation (2025)
- XR-1: Towards Versatile Vision-Language-Action Models via Learning Unified Vision-Motion Representations (2025)
- Spatial Forcing: Implicit Spatial Representation Alignment for Vision-language-action Model (2025)