---
license: mit
language:
- en
---
|
# Dataset Card for TRANSIC Data
|
|
|
This dataset card accompanies the [CoRL 2024 paper](https://arxiv.org/abs/2405.10315) *TRANSIC: Sim-to-Real Policy Transfer by Learning from Online Correction*.

It includes generated simulation data and real-robot human correction data for sim-to-real transfer of robotic arm manipulation policies.
|
|
|
## Dataset Details

### Dataset Description
|
|
|
|
This dataset consists of two parts: 1) simulation data used in student policy distillation and 2) real-robot data used in residual policy learning.
|
|
|
The first part can be found in the `distillation` folder. We include 5 tasks in the `distillation/tasks` directory. For each task, we provide 10,000 successful trajectories generated by teacher policies trained with reinforcement learning in simulation.

We also provide `matched_point_cloud_scenes.h5`, a separate collection of 59 matched point clouds captured in simulation and the real world. We use them to regularize the point-cloud encoder during policy training.

The second part can be found in the `correction_data` folder. We include real-world human correction data for 5 tasks. Each task contains a different number of trajectories. Each trajectory includes observations, pre-intervention actions, and post-intervention actions for residual policy learning.
|
|
|
- **Curated by:** [Yunfan Jiang](https://yunfanj.com/)
- **License:** [MIT](LICENSE)
|
|
|
### Dataset Sources
|
|
|
- **Repositories:** [TRANSIC](https://github.com/transic-robot/transic), [TRANSIC-Envs](https://github.com/transic-robot/transic-envs)
- **Paper:** [TRANSIC: Sim-to-Real Policy Transfer by Learning from Online Correction](https://arxiv.org/abs/2405.10315)
|
|
|
## Uses

Please see our [codebase](https://github.com/transic-robot/transic) for detailed usage.
|
|
|
## Dataset Structure

Structure for `distillation/tasks/*.hdf5`:
|
```
data[f"rollouts/successful/rollout_{idx}/actions"]: shape (L, 7), the first 6 dimensions represent the end-effector's pose change; the last dimension corresponds to the gripper action.
data[f"rollouts/successful/rollout_{idx}/eef_pos"]: shape (L + 1, 3), end-effector positions.
data[f"rollouts/successful/rollout_{idx}/eef_quat"]: shape (L + 1, 4), end-effector orientations as quaternions.
data[f"rollouts/successful/rollout_{idx}/franka_base"]: shape (L + 1, 7), robot base pose.
data[f"rollouts/successful/rollout_{idx}/gripper_width"]: shape (L + 1, 1), current gripper width.
data[f"rollouts/successful/rollout_{idx}/leftfinger"]: shape (L + 1, 7), left gripper finger pose.
data[f"rollouts/successful/rollout_{idx}/q"]: shape (L + 1, 7), robot joint positions.
data[f"rollouts/successful/rollout_{idx}/rightfinger"]: shape (L + 1, 7), right gripper finger pose.
data[f"rollouts/successful/rollout_{idx}/{obj}"]: shape (L + 1, 7), pose of each object.
```
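
To make the layout concrete, here is a minimal `h5py` reading sketch; the file name below is a placeholder for any task file under `distillation/tasks`.

```python
import h5py

# Placeholder file name -- substitute any task file under distillation/tasks/.
with h5py.File("distillation/tasks/your_task.hdf5", "r") as data:
    rollouts = data["rollouts/successful"]
    print(f"{len(rollouts)} successful rollouts")

    rollout = rollouts["rollout_0"]
    actions = rollout["actions"][:]  # (L, 7)
    eef_pos = rollout["eef_pos"][:]  # (L + 1, 3)

    # First 6 action dims: end-effector pose change; last dim: gripper action.
    delta_pose, gripper = actions[:, :6], actions[:, -1]
    assert len(eef_pos) == len(actions) + 1
```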
|
|
|
Structure for `distillation/matched_point_cloud_scenes.h5`:
|
```
# sim
data[f"{date}/{idx}/sim/ee_mask"]: shape (N,), indicates whether each point in the point cloud belongs to the end-effector (0: not end-effector, 1: end-effector).
data[f"{date}/{idx}/sim/franka_base"]: shape (7,), robot base pose.
data[f"{date}/{idx}/sim/leftfinger"]: shape (7,), left gripper finger pose.
data[f"{date}/{idx}/sim/pointcloud"]: shape (N, 3), synthetic point cloud.
data[f"{date}/{idx}/sim/q"]: shape (9,), robot joint positions; the last two dimensions correspond to the two gripper fingers.
data[f"{date}/{idx}/sim/rightfinger"]: shape (7,), right gripper finger pose.
data[f"{date}/{idx}/sim/{obj}"]: shape (7,), pose of each object.

# real
data[f"{date}/{idx}/real/{sample}/eef_pos"]: shape (3, 1), end-effector position.
data[f"{date}/{idx}/real/{sample}/eef_quat"]: shape (4,), end-effector orientation as a quaternion.
data[f"{date}/{idx}/real/{sample}/fk_finger_pointcloud"]: shape (N, 3), point cloud of the gripper fingers obtained through forward kinematics.
data[f"{date}/{idx}/real/{sample}/gripper_width"]: shape (), gripper width.
data[f"{date}/{idx}/real/{sample}/measured_pointcloud"]: shape (N, 3), point cloud captured by depth cameras.
data[f"{date}/{idx}/real/{sample}/q"]: shape (7,), robot joint positions.
```
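
A sketch of iterating over the matched sim/real scenes, assuming only the layout above; the group names (`{date}`, `{idx}`, `{sample}`) are discovered by iteration rather than hard-coded.

```python
import h5py

with h5py.File("distillation/matched_point_cloud_scenes.h5", "r") as data:
    for date, scenes in data.items():
        for idx, scene in scenes.items():
            sim_pc = scene["sim/pointcloud"][:]  # (N, 3) synthetic points
            ee_mask = scene["sim/ee_mask"][:]    # (N,) 1 = end-effector point
            ee_points = sim_pc[ee_mask.astype(bool)]

            for sample, real in scene["real"].items():
                real_pc = real["measured_pointcloud"][:]  # (N, 3) from depth cameras
                print(date, idx, sample, sim_pc.shape, real_pc.shape, ee_points.shape)
```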
|
|
|
Structure for `correction_data/*/*.hdf5`:
|
```
data["is_human_intervention"]: shape (L,), indicates whether each step is a human intervention (1) or not (0).
data["policy_action"]: shape (L, 8), simulation policies' actions.
data["policy_obs"]: shape (L, ...), simulation policies' observations.
data["post_intervention_eef_pose"]: shape (L, 4, 4), end-effector pose after intervention.
data["post_intervention_q"]: shape (L, 7), robot joint positions after intervention.
data["post_intervention_gripper_q"]: shape (L, 2), gripper finger positions after intervention.
data["pre_intervention_eef_pose"]: shape (L, 4, 4), end-effector pose before intervention.
data["pre_intervention_q"]: shape (L, 7), robot joint positions before intervention.
data["pre_intervention_gripper_q"]: shape (L, 2), gripper finger positions before intervention.
```
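
As a rough usage sketch (the file path is a placeholder), the intervention mask selects the steps whose pre/post pairs supply correction signals for residual policy learning:

```python
import h5py
import numpy as np

# Placeholder path -- substitute any trajectory file under correction_data/.
with h5py.File("correction_data/your_task/your_trajectory.hdf5", "r") as data:
    intervened = data["is_human_intervention"][:].astype(bool)  # (L,)
    pre_q = data["pre_intervention_q"][:]                       # (L, 7)
    post_q = data["post_intervention_q"][:]                     # (L, 7)

# Steps where the human corrected the policy; the joint-space difference
# is one simple view of the correction signal.
steps = np.flatnonzero(intervened)
print(f"{steps.size}/{intervened.size} steps with human intervention")
residual_q = post_q[steps] - pre_q[steps]
```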
|
|
|
## Dataset Creation

`distillation/tasks/*.hdf5` are generated by teacher policies trained with reinforcement learning in simulation.

`distillation/matched_point_cloud_scenes.h5` and `correction_data/*/*.hdf5` are manually collected in the real world.
|
|
|
## Citation
|
|
|
|
|
|
**BibTeX:**
|
|
|
```
@inproceedings{jiang2024transic,
  title     = {TRANSIC: Sim-to-Real Policy Transfer by Learning from Online Correction},
  author    = {Yunfan Jiang and Chen Wang and Ruohan Zhang and Jiajun Wu and Li Fei-Fei},
  booktitle = {Conference on Robot Learning},
  year      = {2024}
}
```
|
|
|
## Dataset Card Contact

[Yunfan Jiang](https://yunfanj.com/), email: `yunfanj[at]cs[dot]stanford[dot]edu`