# WM4VLA Benchmark v4
Paper-definitive evaluation benchmark for video world models — 240 samples across 6 robotic manipulation datasets (agibot / airoa_moma / droid / interndata / rh20t_cfg5 / rh20t_cfg7), filtered for quality and subsampled via sqrt-N–weighted k-means medoid selection.
This supersedes masterwu/WM4VLA_benchmark (v1, 1000 samples, retired 2026-04-17).
## At a glance
| Property | Value |
|---|---|
| Samples | 240 |
| Datasets | 6 (agibot 77, airoa_moma 35, droid 54, interndata 30, rh20t_cfg5 23, rh20t_cfg7 21) |
| Frames per sample | 81 (aligned so frame 0 passes arm-visibility) |
| Resolution | 480 × 640 |
| FPS | per-sample native (agibot 15, airoa_moma 30, rh20t_cfg5 12, rh20t_cfg7 10, droid 14, interndata 30) |
| Size on disk | ~6.7 GB |
| License | MIT (see upstream dataset licenses for source terms) |
## Provenance (v4 = filter-backport of v1)
- **v1 pool (1000 samples):** the original k-means-selected evaluation benchmark.
- **Filters applied** (from the v3 protocol):
  - `num_frames ≥ 81` before any trim.
  - Arm-visibility three-tuple on `gripper_scenario.mp4`:
    - Frame-0 non-background pixel count > 3000 (`T0`)
    - Min-window pixel count > 500 (`T1`)
    - Max/min ratio < 3.0 within the 81-frame window (`R`)
    - Scan up to 200 frames to find the earliest valid window.
- **Re-encode:** ffmpeg with `libx264 crf=18` for samples whose valid window starts past frame 0 (trim offset stored in `arm_visibility_v4.json`). A post-trim sanity pass re-checks each re-encoded mp4; failures are replaced by next-best survivors to restore the per-dataset allocation.
- **k-means subsample** to 240 total, allocated by sqrt-N per dataset with largest-remainder rounding:

| dataset | v1 pool | survived filters | v4 |
|---|---|---|---|
| agibot | 490 | 335 | 77 |
| airoa_moma | 103 | 91 | 35 |
| droid | 247 | 238 | 54 |
| interndata | 77 | 77 | 30 |
| rh20t_cfg5 | 45 | 45 | 23 |
| rh20t_cfg7 | 38 | 37 | 21 |
| **total** | **1000** | **823** | **240** |
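For reference, the sqrt-N + largest-remainder allocation can be reproduced from the v1 pool counts with a short sketch (the function name is illustrative, not part of the release tooling):

```python
import math

def sqrt_n_allocation(pool_counts, total):
    """Largest-remainder apportionment with sqrt-N weights."""
    weights = {k: math.sqrt(n) for k, n in pool_counts.items()}
    wsum = sum(weights.values())
    # ideal fractional shares, then floor to get a provisional allocation
    shares = {k: total * w / wsum for k, w in weights.items()}
    alloc = {k: int(s) for k, s in shares.items()}
    # hand leftover slots to the datasets with the largest remainders
    deficit = total - sum(alloc.values())
    for k in sorted(shares, key=lambda k: shares[k] - alloc[k], reverse=True)[:deficit]:
        alloc[k] += 1
    return alloc

v1_pool = {"agibot": 490, "airoa_moma": 103, "droid": 247,
           "interndata": 77, "rh20t_cfg5": 45, "rh20t_cfg7": 38}
print(sqrt_n_allocation(v1_pool, 240))
# → {'agibot': 77, 'airoa_moma': 35, 'droid': 54, 'interndata': 30, 'rh20t_cfg5': 23, 'rh20t_cfg7': 21}
```

Running this over the v1 pool column reproduces the v4 column of the table exactly.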
> **Note:** `train_manifest.json` from v1 is unchanged by this filtering — existing training runs that use the v1 train split are unaffected.
## Per-sample layout
```
{dataset}/{episode}/{camera}/
├─ rgb.mp4               # H.264, 480×640, 81 frames, per-dataset FPS
├─ gripper_scenario.mp4  # skeleton-rendered gripper/arm projection (for arm-visibility & ROI)
├─ caption.pickle        # {"caption": str, ...}
├─ episode_meta.npz      # ee_pose (4×4 per frame), joint_angles, gripper_openness, camera intrinsics/extrinsics
└─ overlay.mp4           # optional debug overlay (rgb + gripper_scenario); present for ~163/240 samples
```
agibot uses dual-arm keys `ee_pose_left` / `ee_pose_right` (plus `gripper_openness_left/right` and `joint_angles_left/right`); all other datasets are single-arm and use `ee_pose` / `gripper_openness` / `joint_angles`.
## Top-level files
- `benchmark_eval_split_v4.json`: flat list of 240 samples with `{embodiment, dataset, episode, camera, data_root}` per entry. Drop-in for baseline adapters (see the baseline integration guide).
- `benchmark_manifest.json`: per-dataset export summary (counts, backfills, trim stats).
## Evaluation protocol (recommended)
- **Input window:** first 81 frames of `rgb.mp4`, starting at frame 0. v4 guarantees ≥ 81 frames; no freeze-pad needed.
- **Metric window:** first 49 frames of both GT and generated output (post-generation slice). Matches the Kinema4D convention.
- **Skip frame 0** in per-frame PSNR/SSIM/LPIPS: frame 0 is the I2V input image, so its metric is trivially perfect.
- **Metrics:** PSNR ↑, SSIM ↑, LPIPS ↓ (AlexNet backbone), FVD ↓ (I3D Kinetics-400), FID ↓ (InceptionV3), Latent L2 ↓ (Wan2.1 VAE), tLPIPS ↓ (temporal LPIPS between consecutive generated frames, GT-free).
- **Reporting levels:**
  - PSNR / SSIM / LPIPS / tLPIPS / Latent L2: overall + per-embodiment + per-dataset
  - FVD / FID: overall + per-embodiment only (per-dataset N ≈ 30–80 is below the empirical FID/FVD stability threshold)
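As a concrete reading of the windowing rules, a per-frame PSNR sketch (assuming uint8 clips of shape `(T, H, W, 3)`; `psnr` here is a plain NumPy implementation, not any specific library's):

```python
import numpy as np

def psnr(gt_frame, gen_frame, max_val=255.0):
    """Peak signal-to-noise ratio between two frames."""
    mse = np.mean((gt_frame.astype(np.float64) - gen_frame.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(max_val ** 2 / mse)

def per_frame_psnr(gt, gen, metric_window=49):
    # slice both clips to the 49-frame metric window...
    gt, gen = gt[:metric_window], gen[:metric_window]
    # ...then skip frame 0, the I2V conditioning image (trivially perfect)
    return [psnr(g, p) for g, p in zip(gt[1:], gen[1:])]
```

With the 49-frame window and frame 0 skipped, each sample contributes 48 per-frame scores; the same slicing applies to SSIM and LPIPS.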
## Baseline usage
```python
import json

with open("benchmark_eval_split_v4.json") as f:
    split = json.load(f)

for s in split["samples"]:
    rgb = f"{s['data_root']}/{s['episode']}/{s['camera']}/rgb.mp4"
    meta = f"{s['data_root']}/{s['episode']}/{s['camera']}/episode_meta.npz"
    # ... run your baseline, then save gen.mp4 to
    # outputs/<baseline>/{s['embodiment']}/{s['dataset']}/{episode_basename}/gen.mp4
```
## Citation
```bibtex
@dataset{masterwu_wm4vla_benchmark_v4_2026,
  author = {wuzy2115},
  title  = {WM4VLA Benchmark v4},
  year   = {2026},
  url    = {https://huggingface.co/datasets/masterwu/WM4VLA_benchmark_v4},
}
```
## Changelog
- **v4 (2026-04-17), current:** v1 pool filtered by arm-visibility + `min_frames ≥ 81`, subsampled to 240, re-encoded for trimmed samples, post-trim sanity + backfill to restore per-dataset allocation. Adds `episode_meta.npz` per sample (absent in the v1 export).
- **v1 (2026-04-12), retired:** 1000 samples, unfiltered pool. Archived at `masterwu/WM4VLA_benchmark`.