Improve dataset card for CausalVerse_Image: Add paper, code, project page, tasks, license, and detailed usage (#2)
Co-authored-by: Niels Rogge <nielsr@users.noreply.huggingface.co>
README.md
CHANGED
@@ -53,6 +53,20 @@ dataset_info:
    num_bytes: 15431950241.0
  download_size: 136135745843.0
  dataset_size: 136135745843.0
+license: apache-2.0
+task_categories:
+- image-feature-extraction
+- object-detection
+- video-classification
+language:
+- en
+tags:
+- causal-representation-learning
+- simulation
+- robotics
+- traffic
+- physics
+- synthetic
---

# CausalVerse Image Dataset
@@ -67,6 +81,36 @@ All splits share the same columns:
- `render_path` (string; original image filename/path)
- `metavalue` (string; per-sample metadata; schema varies by split)

+**Paper:** [CausalVerse: Benchmarking Causal Representation Learning with Configurable High-Fidelity Simulations](https://huggingface.co/papers/2510.14049)
+**Project page:** [https://causal-verse.github.io/](https://causal-verse.github.io/)
+**Code:** [https://github.com/CausalVerse/CausalVerseBenchmark](https://github.com/CausalVerse/CausalVerseBenchmark)
+
+## Overview
+
+<p align="center"> <img src="https://github.com/CausalVerse/CausalVerseBenchmark/blob/main/assets/causalverse_intro.png?raw=true" alt="CausalVerse Overview Figure" width="85%"> </p>
+
+**CausalVerse** is a comprehensive benchmark for **Causal Representation Learning (CRL)** focused on *recovering the data-generating process*. It couples **high-fidelity, controllable simulations** with **accessible and configurable ground-truth causal mechanisms** (structure, variables, interventions, temporal dependencies), bridging the gap between **realism** and **evaluation rigor**.
+
+The benchmark spans **24 sub-scenes** across **four domains**:
+- 🖼️ Static image generation
+- 🧪 Dynamic physical simulation
+- 🤖 Robotic manipulation
+- 🚦 Traffic scene analysis
+
+Scenarios range from **static to temporal**, **single to multi-agent**, and **simple to complex** structures, enabling principled stress tests of CRL assumptions. We also include reproducible baselines to help practitioners align **assumptions ↔ data ↔ methods** and deploy CRL effectively.
+
+## Dataset at a Glance
+
+<p align="center">
+  <img src="https://github.com/CausalVerse/CausalVerseBenchmark/blob/main/assets/causalverse_overall.png?raw=true" alt="CausalVerse Overview Figure" width="45%">
+  <img src="https://github.com/CausalVerse/CausalVerseBenchmark/blob/main/assets/causalverse_pie.png?raw=true" alt="CausalVerse data info Figure" width="49.4%">
+</p>
+
+- **Scale & Coverage**: ≈ **200k** high-res images, ≈ **140k** videos, **>300M** frames across **24 scenes** in **4 domains**
+  - Image generation (4), Physical simulation (10; aggregated & dynamic), Robotic manipulation (5), Traffic (5)
+- **Resolution & Duration**: typical **1024×1024** / **1920×1080**; clips **3–32 s**; diverse frame rates
+- **Causal Variables**: **3–100+** per scene, including **categorical** (e.g., object/material types) and **continuous** (e.g., velocity, mass, positions). Temporal scenes combine **global invariants** (e.g., mass) with **time-evolving variables** (e.g., pose, momentum).
+
## Sizes (from repository files)
- `scene1`: 11,736 examples — ~19.94 GB
- `scene2`: 11,736 examples — ~17.01 GB
@@ -81,7 +125,9 @@ All splits share the same columns:
> - `metavalue` is **split-specific** (e.g., `fall` uses keys like `id,h1,r,u,h2,view`, while `scene*` have attributes like `domain,age,gender,...`).
> - If you only need a portion, consider slicing (e.g., `split="fall[:1000]"`) or streaming to reduce local footprint (see the streaming sketch below).

-##
+## Sample Usage
+
+### Loading with the `datasets` library

```python
from datasets import load_dataset
@@ -91,4 +137,168 @@ ds_fall = load_dataset("CausalVerse/CausalVerse_Image", split="fall")

# Scene split
ds_s1 = load_dataset("CausalVerse/CausalVerse_Image", split="scene1")
+```
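+
+If you stream, shards are fetched lazily instead of being downloaded in full (a minimal sketch using the standard `datasets` streaming API; the `metavalue` string is split-specific, so it is only printed here, not parsed):
+
+```python
+from datasets import load_dataset
+
+# Stream the `fall` split without materializing it on disk
+ds_stream = load_dataset("CausalVerse/CausalVerse_Image", split="fall", streaming=True)
+sample = next(iter(ds_stream))
+print(sample["render_path"])  # original image filename/path
+print(sample["metavalue"])    # per-sample metadata string (schema varies by split)
+
+# Or materialize just a slice of a split
+ds_small = load_dataset("CausalVerse/CausalVerse_Image", split="fall[:1000]")
+```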
+
+### Using the Image Dataset (PyTorch-ready)
+
+We provide a **reference PyTorch dataset/loader** that works with exported splits.
+
+* Core class: `dataset/dataset_multisplit.py` → `MultiSplitImageCSVDataset`
+* Builder: `build_dataloader(...)`
+* Minimal example: `dataset/quickstart.py`
+
+**Conventions**
+
+* Each split folder contains `<SPLIT>.csv` + `.png` files
+* CSV must include **`render_path`** (relative to the repository root or chosen data root)
+* All remaining CSV columns are treated as **metadata** and packed into a float tensor `meta`
+
+**Quick example**
+
+```python
+from dataset.dataset_multisplit import build_dataloader
+# Optional torchvision transforms:
+# import torchvision.transforms as T
+# tfm = T.Compose([T.Resize((256, 256)), T.ToTensor()])
+
+loader, ds = build_dataloader(
+    root="/path/to/causalverse",
+    split="SCENE1",
+    batch_size=16,
+    shuffle=True,
+    num_workers=4,
+    pad_images=True,  # zero-pads within a batch if resolutions differ
+    # image_transform=tfm,
+    # check_files=True,
+)
+
+for images, meta in loader:
+    # images: FloatTensor [B, C, H, W] in [0, 1]
+    # meta  : FloatTensor [B, D] with ordered metadata (including 'view' if present)
+    ...
+```
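+
+Since all non-`render_path` CSV columns are packed into `meta`, indices can be mapped back to names from the CSV header (a sketch that assumes `meta` follows CSV column order; consult `dataset/dataset_multisplit.py` if in doubt):
+
+```python
+import pandas as pd
+
+# Read only the header row of the split CSV
+cols = pd.read_csv("/path/to/causalverse/image/SCENE1/SCENE1.csv", nrows=0).columns
+meta_cols = [c for c in cols if c != "render_path"]
+print(meta_cols)  # under the ordering assumption, meta[:, i] corresponds to meta_cols[i]
+```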
+
+> **`view` column semantics**:
+> • Physical splits (e.g., FALL/REFRACTION/SLOPE/SPRING): **camera viewpoint**
+> • Human rendering splits (SCENE1–SCENE4): **indoor background type**
+
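+If a split's CSV includes a `view` column, it can be inspected or filtered directly with pandas (a sketch; the concrete `view` encoding is split-specific, so the filter below is illustrative):
+
+```python
+import pandas as pd
+
+df = pd.read_csv("image/FALL/FALL.csv")  # produced by the export step below
+print(sorted(df["view"].unique()))       # available camera viewpoints in FALL
+one_view = df[df["view"] == df["view"].iloc[0]]  # keep a single viewpoint
+print(len(one_view), "rows for one viewpoint")
+```
+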
+## Installation
+
+```bash
+# 1) Clone
+git clone https://github.com/CausalVerse/CausalVerseBenchmark.git
+cd CausalVerseBenchmark
+
+# 2) Core environment
+python3 --version  # >= 3.9 recommended
+pip install -U torch datasets huggingface_hub pillow tqdm
+
+# 3) Optional: examples / loaders / transforms
+pip install torchvision scikit-learn rich
+```
+
+## Download & Convert (Image subset)
+
+Fetch the **image** portion from Hugging Face and export it to a simple on-disk layout (PNG files + per-split CSVs).
+
+**Quick start (recommended)**
+
+```bash
+chmod +x dataset/run_export.sh
+./dataset/run_export.sh
+```
+
+This will:
+
+* download parquet shards (skipped if already present locally),
+* export images to `image/<SPLIT>/*.png`,
+* write `<SPLIT>.csv` next to each split with metadata columns + a `render_path` column.
+
+**Output layout**
+
+```
+image/
+  FALL/
+    FALL.csv
+    000001.png
+    ...
+  SCENE1/
+    SCENE1.csv
+    char_001.png
+    ...
+```
+
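+A quick sanity check of the export (a minimal sketch; metadata columns other than `render_path` vary by split):
+
+```python
+from pathlib import Path
+import pandas as pd
+
+split_dir = Path("image/FALL")
+df = pd.read_csv(split_dir / "FALL.csv")
+print(df.columns.tolist())  # split-specific metadata columns + 'render_path'
+n_png = len(list(split_dir.glob("*.png")))
+print(len(df), "CSV rows vs", n_png, "PNG files")
+```
+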
+<details>
+<summary><b>Custom CLI usage</b></summary>
+
+```bash
+python dataset/export_causalverse_image.py \
+  --repo-id CausalVerse/CausalVerse_Image \
+  --hf-home ./.hf \
+  --raw-repo-dir ./CausalVerse_Image \
+  --image-root ./image \
+  --folder-case upper \
+  --no-overwrite \
+  --include-render-path-column \
+  --download-allow-patterns data/*.parquet \
+  --skip-download-if-local
+
+# Export specific splits (case-insensitive)
+python dataset/export_causalverse_image.py --splits FALL SCENE1
+```
+
+</details>
+
+## Evaluation (Image Part)
+
+We release four reproducible baselines (shared backbone & similar training loop for fair comparison):
+
+* `CRL_SC` — Sufficient Change
+* `CRL_SF` — Mechanism Sparsity
+* `CRL_SP` — Multi-view
+* `SUP` — Supervised upper bound
+
+**How to run**
+
+```bash
+# From repo root, run each baseline:
+cd evaluation/image_part/CRL_SC && python main.py
+cd ../CRL_SF && python main.py
+cd ../CRL_SP && python main.py
+cd ../SUP && python main.py
+
+# Example: pass data root via env or args
+# DATA_ROOT=/path/to/causalverse python main.py
+```
+
+**Full comparison (MCC / R²)**
+
+| Algorithm | Ball on the Slope<br><sub>MCC / R²</sub> | Cylinder Spring<br><sub>MCC / R²</sub> | Light Refraction<br><sub>MCC / R²</sub> | Avg<br><sub>MCC / R²</sub> |
+|---|---:|---:|---:|---:|
+| **Supervised** | 0.9878 / 0.9962 | 0.9970 / 0.9910 | 0.9900 / 0.9800 | **0.9916 / 0.9891** |
+| **Sufficient Change** | 0.4434 / 0.9630 | 0.6092 / 0.9344 | 0.6778 / 0.8420 | 0.5768 / 0.9131 |
+| **Mechanism Sparsity** | 0.2491 / 0.3242 | 0.3353 / 0.2340 | 0.1836 / 0.4067 | 0.2560 / 0.3216 |
+| **Multi-view** | 0.4109 / 0.9658 | 0.4523 / 0.7841 | 0.3363 / 0.7841 | 0.3998 / 0.8447 |
+| **Contrastive Learning** | 0.2853 / 0.9604 | 0.6342 / 0.9920 | 0.3773 / 0.9677 | 0.4323 / 0.9734 |
+
+> Ablations can be reproduced by editing each method’s `main.py` or adding configs (e.g., split selection, loss weights, target subsets).
+
+## Acknowledgements
+
+We thank the open-source community and the simulation/rendering ecosystem. We also appreciate contributors who help improve CausalVerse through issues and pull requests.
+
+## Citation
+
+If CausalVerse helps your research, please cite:

+```bibtex
+@inproceedings{causalverse2025,
+  title     = {CausalVerse: Benchmarking Causal Representation Learning with Configurable High-Fidelity Simulations},
+  author    = {Guangyi Chen and Yunlong Deng and Peiyuan Zhu and Yan Li and Yifan Shen and Zijian Li and Kun Zhang},
+  booktitle = {NeurIPS},
+  year      = {2025},
+  note      = {Spotlight},
+  url       = {https://huggingface.co/CausalVerse}
+}
+```