# Dexterous Manipulation Benchmark — Cross-Method Video Grid
3 methods × 4 hands × 7 OakInk-v2 bimanual trajectories, all rendered under an identical 720×480 @ 30fps BENCH_CAMERA so the paper tile grid lines up frame-for-frame.
Metrics in `ckwolfe/benchmarks-viz-tiles` · trained checkpoints in `ckwolfe/benchmarks-trained-ckpts`.
## Paper comparison table
| method | class | simulator | n eval cells | ADD ↓ (m) | tracking_err ↓ (m) | success ↑ | cost / cell | standardized videos |
|---|---|---|---|---|---|---|---|---|
| ManipTrans | closed-loop RL (residual) | IsaacGym | 27 | — | 0.040 | — | ~2 s | 21/28 |
| DexMachina | closed-loop RL (PPO) | Genesis | 225 | 0.244 | — | 0.191 | ~300 s | 0/28 |
| Spider | sampling (MJWP) | MuJoCo-Warp | 28 | — | 0.083 | 0.357 | ~600 s | 0/28 |
Reference trajectory videos (kinematic replay of the demonstrator under the same camera) are published separately as a visual quality reference — not a peer method. See the reference-only appendix section below if you need them.
## Video grid — ManipTrans (OakInk-v2)
The paper's closed-loop-RL column. Videos are ManipTrans's captured rollout qpos replayed through the shared `BENCH_CAMERA` MuJoCo scene. Xhand joint names auto-match the MJ scene (24/38 joints); allegro and inspire currently fall back to a linear DOF mapping, which leaves the hand frozen (follow-up: per-hand joint-name translation table).
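The name-match-with-linear-fallback behaviour described above can be sketched as follows. This is a minimal illustration, not the benchmark's actual code: `build_joint_map` and its exact return shape are hypothetical.

```python
def build_joint_map(rollout_joints, scene_joints):
    """Map MuJoCo scene joint indices to rollout qpos indices by name.

    Returns {scene_index: rollout_index} for joints whose names match
    exactly; if no names match at all, fall back to a naive linear DOF
    mapping (the current allegro/inspire behaviour, which is why those
    hands render frozen when the ordering differs).
    """
    rollout_idx = {name: i for i, name in enumerate(rollout_joints)}
    mapping = {j: rollout_idx[name]
               for j, name in enumerate(scene_joints)
               if name in rollout_idx}
    if not mapping:  # zero name overlap -> linear fallback
        n = min(len(rollout_joints), len(scene_joints))
        mapping = {i: i for i in range(n)}
    return mapping
```

A per-hand translation table (the planned follow-up) would simply rename `rollout_joints` before the exact-match pass.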
### allegro
| lift_board | pick_spoon | pour_tube | stir_beaker | uncap | unplug | wipe_board |
|---|---|---|---|---|---|---|
### inspire
| lift_board | pick_spoon | pour_tube | stir_beaker | uncap | unplug | wipe_board |
|---|---|---|---|---|---|---|
### schunk
| lift_board | pick_spoon | pour_tube | stir_beaker | uncap | unplug | wipe_board |
|---|---|---|---|---|---|---|
### xhand
| lift_board | pick_spoon | pour_tube | stir_beaker | uncap | unplug | wipe_board |
|---|---|---|---|---|---|---|
## Coverage summary
| method | real eval cells | standardized videos |
|---|---|---|
| ManipTrans | oakink 27/28, arctic 0/28 | 21/28 |
| DexMachina | arctic 225/28 (~8 reps per cell), oakink 0/28 | 0/28 (native renders only) |
| Spider | oakink 28/28 sampled, 12/28 with real MetricsRow | 0/28 (native renders only) |
OakInk bimanual tasks: lift_board · pick_spoon_bowl · pour_tube · stir_beaker · uncap_alcohol_burner · unplug · wipe_board.
Arctic tasks: ketchup30 · box30 · mixer30 · ketchup40 · mixer40 · notebook40 · waffleiron40 (s01 subject).
## Methods
- **ManipTrans** — closed-loop residual RL atop a per-hand PPO imitator, in IsaacGym. Paper · Upstream. This benchmark ships imitators for {allegro, inspire, xhand}; a schunk retrain is in progress.
- **DexMachina** — rl_games PPO in Genesis, 140 arctic checkpoints from upstream. Paper · Upstream. OakInk checkpoints are not publicly released; training from scratch costs ~35 GPU-days/cell.
- **Spider** — sampling-based MPC on MuJoCo-Warp. Upstream. MJWP sampling is complete for 28/28 oakink cells; real-row MetricsRow emission is partial.
## Metric schema
Defined in `shared/bench/schema.py::MetricsRow`. Key fields:

| field | units | notes |
|---|---|---|
| `success_rate` | 0–1 | per-episode outcome mean |
| `tracking_err_mean` | meters | position error vs. reference; blanked if divergent (>1 m) |
| `add_mean` | meters | average distance of model points (ADD) |
| `sim_steps` | — | hard cap via `BENCH_MAX_STEPS` |
| `upstream_commit` | git SHA | captured on host |
| `sim_backend` | str | isaacgym / genesis / mjwp |
The row-append validator rejects `tracking_err_mean > tracking_err_max + 0.01` (the divergent-rollout failure mode).
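A minimal sketch of that append-time check. Field names follow the schema table above; the rest of `MetricsRow` is elided and `validate_row` is a hypothetical stand-in for the actual validator.

```python
from dataclasses import dataclass


@dataclass
class MetricsRow:
    # subset of the real schema, just enough for the check
    success_rate: float
    tracking_err_mean: float
    tracking_err_max: float


def validate_row(row: MetricsRow, tol: float = 0.01) -> None:
    """Reject rows that exhibit the divergent-rollout failure mode."""
    # a mean above the max (plus tolerance) is arithmetically impossible
    # for a healthy rollout, so it flags divergence upstream
    if row.tracking_err_mean > row.tracking_err_max + tol:
        raise ValueError(
            f"divergent rollout: mean {row.tracking_err_mean} > "
            f"max {row.tracking_err_max} + {tol}")
    if not 0.0 <= row.success_rate <= 1.0:
        raise ValueError("success_rate must lie in [0, 1]")
```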
## Reviewer notes
- **"5 seeds" means 5 training repetitions, not 5 seeds.** Every DM checkpoint's config.json records `seed: 42`, so reported variance reflects rl_games's env-step and mini-epoch stochasticity rather than a seed sweep.
- **Open-loop vs. closed-loop.** Spider (sampling) runs in a fundamentally different regime from MT/DM (learned closed-loop policies). Paper Table 1 therefore splits into two sub-tables keyed on `METHOD_AXIS_CLASS` in `shared/bench/plot_style.py`.
- **Simulator confound.** MT=IsaacGym, DM=Genesis, Spider=MuJoCo-Warp. Contact model, integrator, and timestep all differ, so tracking error is comparable within a class and illustrative only across classes.
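The class-keyed split behind the two sub-tables can be sketched like this. The `METHOD_AXIS_CLASS` name comes from `shared/bench/plot_style.py`, but its contents and the `split_by_class` helper here are illustrative assumptions, not the file's actual code.

```python
# Assumed mapping: which regime each method's rows belong to.
METHOD_AXIS_CLASS = {
    "ManipTrans": "closed-loop",
    "DexMachina": "closed-loop",
    "Spider": "sampling",
}


def split_by_class(rows):
    """Group metric rows into one sub-table per method class."""
    tables = {}
    for row in rows:
        cls = METHOD_AXIS_CLASS[row["method"]]
        tables.setdefault(cls, []).append(row)
    return tables
```

Rendering each sub-table separately keeps within-class tracking-error comparisons honest while still showing both regimes in one figure.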
## Known gaps
- DM × oakink training — public ckpts arctic-only; from-scratch is ~35 GPU-days/cell.
- MT × schunk imitator — retraining overnight.
- Arctic standardized videos — Spider arctic preprocess stops at stage-1.
- MT allegro/inspire video joint-name mapping — xhand 24/38 auto-matched; others need per-hand translation table.
## Reference kinematic trajectories (not part of the method comparison)
For qualitative inspection of the reference demonstrations each method is trying to track, `videos/std_*.mp4` contain the per-hand kinematic replay of the Spider-preprocessed trajectory under the same `BENCH_CAMERA`. These are not a fourth method; they are the targets the policies are compared against.
## Citation

```bibtex
@misc{dex-manip-benchmark-2026,
  title  = {Dexterous Manipulation Benchmark — Cross-Method Video Grid},
  author = {C.K. Wolfe and T. Sadjadpour},
  year   = {2026},
  url    = {https://huggingface.co/datasets/ckwolfe/dex-manip-benchmark},
}
```