Improve model card: add pipeline tag, update license, links, and usage

#1
by nielsr (HF Staff) - opened
Files changed (1)
  1. README.md +20 -4
README.md CHANGED
@@ -1,9 +1,25 @@
  ---
- license: cc-by-nc-4.0
  ---

- This repository contains the camera depth model of the paper Manipulation as in Simulation: Enabling Accurate Geometry Perception in Robots.

- Model inference guide: https://github.com/ByteDance-Seed/manip-as-in-sim-suite/tree/main/cdm

- Project page: https://manipulation-as-in-simulation.github.io
  ---
+ license: apache-2.0
+ pipeline_tag: depth-estimation
  ---

+ This repository contains the Camera Depth Model (CDM) of the paper [Manipulation as in Simulation: Enabling Accurate Geometry Perception in Robots](https://huggingface.co/papers/2509.02530).

+ Camera Depth Models (CDMs) are proposed as a simple plug-in for everyday depth cameras: they take RGB images and raw depth signals as input and output denoised, accurate metric depth. This enables accurate geometry perception in robots by effectively bridging the sim-to-real gap for manipulation tasks.

+ Project page: https://manipulation-as-in-simulation.github.io/
+ Code: https://github.com/ByteDance-Seed/manip-as-in-sim-suite
+
+ ## Sample Usage
+
+ To run depth inference on RGB-D camera data, use the `infer.py` script provided in the `cdm` directory of the main repository.
+
+ ```bash
+ cd cdm
+ python infer.py \
+     --encoder vitl \
+     --model-path /path/to/model.pth \
+     --rgb-image /path/to/rgb.jpg \
+     --depth-image /path/to/depth.png \
+     --output result.png
+ ```
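
After the script finishes, the saved depth map can be sanity-checked with standard Python tooling. The snippet below is a minimal sketch rather than part of the repository: it assumes `result.png` is a single-channel depth image, and the exact encoding (bit depth, scale, units) is defined by the `cdm` code.

```python
# Minimal sketch (assumption): inspect the depth map written by infer.py.
# The actual bit depth and depth units of result.png are defined by the
# cdm repository; this only prints basic statistics as a sanity check.
import numpy as np
from PIL import Image

depth = np.array(Image.open("result.png"))
print("shape:", depth.shape, "dtype:", depth.dtype)
print("value range:", depth.min(), "-", depth.max())
```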