---
license: cc-by-nc-4.0
pipeline_tag: depth-estimation
---
# Manipulation as in Simulation: Enabling Accurate Geometry Perception in Robots
This repository contains the Camera Depth Models (CDMs) from the paper *Manipulation as in Simulation: Enabling Accurate Geometry Perception in Robots*.

CDMs are proposed as a simple plugin for everyday depth cameras: given an RGB image and the camera's raw depth signal, they output denoised, accurate metric depth. This allows policies trained purely in simulation to transfer directly to real robots, effectively bridging the sim-to-real gap for manipulation tasks.
- Project page: https://manipulation-as-in-simulation.github.io/
- Code repository: https://github.com/ByteDance-Seed/manip-as-in-sim-suite
## Usage
To run depth inference on RGB-D camera data, follow the example from the GitHub repository's CDM section:
```bash
cd cdm
python infer.py \
    --encoder vitl \
    --model-path /path/to/model.pth \
    --rgb-image /path/to/rgb.jpg \
    --depth-image /path/to/depth.png \
    --output result.png
```
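The paths above are placeholders for your own checkpoint and RGB-D capture. Once inference has produced a refined metric depth map, it can be consumed much like simulator depth. Below is a minimal sketch (not part of the CDM codebase) of turning such a depth map into a 3D point cloud via standard pinhole back-projection. It assumes the depth is stored as a 16-bit PNG in millimeters and uses placeholder camera intrinsics; adjust both to match your camera and the actual output format of `infer.py`.

```python
# Minimal sketch (not part of the CDM codebase): back-project a metric depth
# map into a 3D point cloud using standard pinhole-camera geometry.
# Assumptions: the depth file is a 16-bit PNG in millimeters, and the
# intrinsics below are placeholders -- replace both with your camera's values.
import cv2
import numpy as np

# Hypothetical intrinsics (fx, fy, cx, cy) for a 640x480 RGB-D camera.
FX, FY, CX, CY = 600.0, 600.0, 320.0, 240.0

def depth_to_point_cloud(depth_png_path: str) -> np.ndarray:
    """Return an (N, 3) array of XYZ points in meters, one per valid pixel."""
    depth_mm = cv2.imread(depth_png_path, cv2.IMREAD_UNCHANGED)
    depth_m = depth_mm.astype(np.float32) / 1000.0  # millimeters -> meters

    h, w = depth_m.shape[:2]
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # per-pixel coordinates

    # Pinhole back-projection: (u, v, z) -> (x, y, z) in the camera frame.
    z = depth_m
    x = (u - CX) * z / FX
    y = (v - CY) * z / FY

    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]  # drop pixels with no valid depth

if __name__ == "__main__":
    cloud = depth_to_point_cloud("result.png")
    print(cloud.shape)
```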