H&R
Dataset for the paper "Human2Robot: Learning Robot Actions from Paired Human-Robot Videos"
Data Details
- /cam_data
  - /human_camera: observations of the human
  - /robot_camera: observations of the robot
- /end_position: end-effector 6-DoF pose [x, y, z, roll, pitch, yaw], where [x, y, z] is the position and [roll, pitch, yaw] is the orientation, expressed as Euler angles (degrees) in XYZ order (see the loading sketch after this list)
- /gripper_state: 0 or 1; 1 means the gripper is open, 0 means it is closed
- /action: not available in v0, introduced in v1. The spatial pose of the human hand in the robot frame, given as a 7-DoF vector [x, y, z, roll, pitch, yaw, gripper]
- qpos: Joint angles of the robot arm
- qvel: Joint velocities of the robot arm
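
The key names above suggest an HDF5 layout. Below is a minimal loading sketch under that assumption; the file name, the array shapes in the comments, and the SciPy call for the Euler-angle conversion are illustrative assumptions, not part of the dataset spec.

```python
import h5py
import numpy as np
from scipy.spatial.transform import Rotation

# Hypothetical episode file name; the real files may be named differently.
with h5py.File("episode_0.hdf5", "r") as f:
    human_video = np.array(f["cam_data/human_camera"])  # assumed (T, H, W, 3) frames
    robot_video = np.array(f["cam_data/robot_camera"])  # assumed (T, H, W, 3) frames
    eef_pose = np.array(f["end_position"])              # (T, 6): [x, y, z, roll, pitch, yaw]
    gripper = np.array(f["gripper_state"])              # (T,): 1 = open, 0 = closed
    qpos = np.array(f["qpos"])                          # (T, num_joints) joint angles
    qvel = np.array(f["qvel"])                          # (T, num_joints) joint velocities

# The orientation is Euler angles in degrees with XYZ order; assuming the
# angles are intrinsic, they convert to rotation matrices as follows.
rot = Rotation.from_euler("XYZ", eef_pose[:, 3:], degrees=True).as_matrix()  # (T, 3, 3)
```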
Version
- data
  - v0: The older version of the dataset; /end_position together with /gripper_state can be used as the action.
  - v1: Everything in v0, plus an additional /action entry (see the sketch after this list).
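
A minimal sketch of assembling a 7-DoF action vector for either version, again assuming HDF5 storage; the helper name and arguments are hypothetical.

```python
import h5py
import numpy as np

def load_action(path: str, version: str = "v1") -> np.ndarray:
    """Return a (T, 7) action array: [x, y, z, roll, pitch, yaw, gripper]."""
    with h5py.File(path, "r") as f:
        if version == "v1":
            # v1 stores the 7-DoF action (human hand pose in the robot frame) directly.
            return np.array(f["action"])
        # v0: concatenate the end-effector pose with the gripper state.
        eef_pose = np.array(f["end_position"])  # (T, 6)
        gripper = np.array(f["gripper_state"])  # (T,)
        return np.concatenate([eef_pose, gripper[:, None]], axis=1)
```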
Visualization Code
Visualize the human and robot videos:
```bash
python show_video.py --file_path /path/to/the/file
```
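
For reference, here is a minimal sketch approximating what such a viewer could look like; it is not the authors' show_video.py. It assumes HDF5 storage of (T, H, W, 3) uint8 frames and plays the paired streams side by side with OpenCV.

```python
import h5py
import numpy as np
import cv2

def show_video(file_path: str, fps: int = 30) -> None:
    with h5py.File(file_path, "r") as f:
        human = np.array(f["cam_data/human_camera"])
        robot = np.array(f["cam_data/robot_camera"])
    for h_frame, r_frame in zip(human, robot):
        # Human on the left, robot on the right. If frames are stored as RGB,
        # convert with cv2.cvtColor(frame, cv2.COLOR_RGB2BGR) before display.
        pair = np.hstack([h_frame, r_frame])
        cv2.imshow("H&R", pair)
        if cv2.waitKey(max(1, 1000 // fps)) & 0xFF == ord("q"):
            break
    cv2.destroyAllWindows()

show_video("/path/to/the/file")
```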