High-Fidelity Simulated Data Generation for Real-World Zero-Shot Robotic Manipulation Learning with Gaussian Splatting
Abstract
RoboSimGS, a Real2Sim2Real framework, uses 3D Gaussian Splatting and mesh primitives to create scalable, high-fidelity, and physically interactive simulation environments, enabling successful zero-shot sim-to-real transfer for robotic manipulation tasks.
The scalability of robotic learning is fundamentally bottlenecked by the cost and labor of real-world data collection. While simulated data offers a scalable alternative, it often fails to generalize to the real world due to significant gaps in visual appearance, physical properties, and object interactions. To address this, we propose RoboSimGS, a novel Real2Sim2Real framework that converts multi-view real-world images into scalable, high-fidelity, and physically interactive simulation environments for robotic manipulation. Our approach reconstructs scenes using a hybrid representation: 3D Gaussian Splatting (3DGS) captures the photorealistic appearance of the environment, while interactive objects are represented with mesh primitives to ensure accurate physics simulation. Crucially, we pioneer the use of a Multi-modal Large Language Model (MLLM) to automate the creation of physically plausible, articulated assets. The MLLM analyzes visual data to infer not only physical properties (e.g., density, stiffness) but also complex kinematic structures (e.g., hinges, sliding rails) of objects. We demonstrate that policies trained entirely on data generated by RoboSimGS achieve successful zero-shot sim-to-real transfer across a diverse set of real-world manipulation tasks. Furthermore, data from RoboSimGS significantly enhances the performance and generalization capabilities of state-of-the-art (SOTA) methods. Our results validate RoboSimGS as a powerful and scalable solution for bridging the sim-to-real gap.
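To make the MLLM-driven asset creation step concrete, below is a minimal sketch of how a multi-modal LLM could be prompted with multi-view images to return a structured asset specification (physical properties plus kinematic joints), which is then instantiated as a mesh-based interactive object alongside the 3DGS background. The function names (`mllm_client.generate`, `sim.add_mesh_body`, `sim.add_joint`), the prompt, and the JSON schema are illustrative assumptions, not the paper's actual API.

```python
# Illustrative sketch (not the paper's released code): query an MLLM for an
# asset spec, then build a physics-simulation asset from a reconstructed mesh.
import json

ASSET_PROMPT = """You are given multi-view images of a household object.
Return a JSON object with:
  "name": short object name,
  "physical_properties": {"density_kg_m3": float, "friction": float,
                          "stiffness": "low" | "medium" | "high"},
  "articulation": [{"part": str, "joint_type": "hinge" | "prismatic" | "fixed",
                    "axis": [x, y, z], "limits": [lo, hi]}]
"""

def infer_asset_spec(mllm_client, image_paths):
    """Ask a multi-modal LLM for physical and kinematic attributes.

    `mllm_client.generate` is a hypothetical interface; substitute whatever
    MLLM API you use that accepts images plus a text prompt and returns text.
    """
    response = mllm_client.generate(images=image_paths, prompt=ASSET_PROMPT)
    return json.loads(response)

def build_sim_asset(sim, mesh_path, spec):
    """Instantiate an interactive object from a reconstructed mesh and the
    MLLM-inferred spec. `sim` stands in for a physics-engine wrapper
    (e.g. a MuJoCo/Isaac-style API); the method names are placeholders."""
    body = sim.add_mesh_body(
        mesh_path,
        density=spec["physical_properties"]["density_kg_m3"],
        friction=spec["physical_properties"]["friction"],
    )
    for joint in spec.get("articulation", []):
        if joint["joint_type"] != "fixed":
            sim.add_joint(
                body,
                part=joint["part"],
                joint_type=joint["joint_type"],  # "hinge" or "prismatic"
                axis=joint["axis"],
                limits=joint["limits"],
            )
    return body
```

In the full pipeline described by the abstract, the photorealistic 3DGS reconstruction would render the static scene while mesh assets like the one above handle contact dynamics; the sketch only illustrates the MLLM-to-asset step under those assumed interfaces.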
Community
TL;DR: RoboSimGS is a novel Real2Sim2Real framework that converts multi-view real-world images into scalable, high-fidelity, and physically interactive simulation environments for robotic manipulation.