---
license: mit
language:
- en
tags:
- vla
- robot-control
- understanding
---
# Demos25 - Dual-Arm Robot Manipulation Dataset
## Dataset Description
Demos25 is a comprehensive dual-arm robot manipulation dataset containing 25 episodes of supermarket packing tasks. The dataset follows the LeRobot format and includes multi-modal data with video observations, robot states, and actions.
### Dataset Summary
- **Total Episodes**: 25
- **Total Frames**: 44,492
- **FPS**: 30
- **Robot Type**: A2D (dual-arm robot)
- **Task**: Supermarket packing with item manipulation and bag organization
### Task Description
The robot stands in front of a cash register on which four different types of items and a soft woven bag are placed. The task is to change the positions and orientations of the items and the bag on the cash register.
## Dataset Structure
The dataset is organized in LeRobot v2.1 format with the following structure:
```
demos25/
├── data/
│   └── chunk-000/
│       ├── episode_000000.parquet
│       ├── episode_000001.parquet
│       └── ...
├── videos/
│   └── chunk-000/
│       ├── observation.images.head/
│       ├── observation.images.hand_left/
│       └── observation.images.hand_right/
├── meta/
│   ├── info.json
│   ├── episodes.jsonl
│   ├── episodes_stats.jsonl
│   └── tasks.jsonl
└── interleaved_demo.jsonl
```
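The files under `meta/` are JSON Lines: one JSON object per line. As a sketch, a single `episodes.jsonl` record can be parsed with the standard library alone. The field names below (`episode_index`, `tasks`, `length`) follow the usual LeRobot v2.1 convention but are assumptions here; check `meta/info.json` in your copy of the dataset for the authoritative schema.

```python
import json

# Hypothetical single line from meta/episodes.jsonl (field names assumed
# from the LeRobot v2.1 convention, values illustrative only).
line = '{"episode_index": 0, "tasks": ["supermarket packing"], "length": 1780}'

record = json.loads(line)
print(record["episode_index"], record["tasks"], record["length"])
```

In practice you would iterate over the file line by line, calling `json.loads` on each non-empty line.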
## Features
### Video Observations
- **observation.images.head**: Head camera view (480x640x3, 30fps)
- **observation.images.hand_left**: Left hand camera view (480x640x3, 30fps)
- **observation.images.hand_right**: Right hand camera view (480x640x3, 30fps)
### Robot States
- **observation.states.joint.position**: 14 joint positions (7 per arm)
- **observation.states.joint.current_value**: 14 joint current values
- **observation.states.effector.position**: 2 gripper positions
- **observation.states.end.position**: 2 end-effector positions (3D)
- **observation.states.end.orientation**: 2 end-effector orientations (quaternion)
- **observation.states.head.position**: 2 head positions (yaw, pitch)
- **observation.states.robot.position**: 3 robot base positions
- **observation.states.robot.orientation**: 4 robot base orientations
- **observation.states.waist.position**: 2 waist positions (pitch, lift)
### Actions
- **actions.joint.position**: 14 joint position targets
- **actions.effector.position**: 2 gripper position targets
- **actions.end.position**: 2 end-effector position targets (3D)
- **actions.end.orientation**: 2 end-effector orientation targets (quaternion)
- **actions.head.position**: 2 head position targets
- **actions.robot.velocity**: 2 robot base velocities
- **actions.waist.position**: 2 waist position targets
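As a quick sanity check, the per-frame dimensionalities implied by the two feature lists above can be tallied in plain Python. The features are stored as separate arrays in the parquet files; summing them here is only to illustrate the total state and action sizes, and the per-component breakdowns in the comments (e.g. linear/angular for the base velocity) are assumptions where the list above does not spell them out.

```python
# Per-frame dimensionalities taken from the feature lists above.
state_dims = {
    "joint.position": 14,
    "joint.current_value": 14,
    "effector.position": 2,
    "end.position": 2 * 3,      # two end-effectors, xyz each
    "end.orientation": 2 * 4,   # two end-effectors, quaternion each
    "head.position": 2,         # yaw, pitch
    "robot.position": 3,
    "robot.orientation": 4,
    "waist.position": 2,        # pitch, lift
}
action_dims = {
    "joint.position": 14,
    "effector.position": 2,
    "end.position": 2 * 3,
    "end.orientation": 2 * 4,
    "head.position": 2,
    "robot.velocity": 2,        # assumed linear + angular
    "waist.position": 2,
}

state_total = sum(state_dims.values())
action_total = sum(action_dims.values())
print(state_total, action_total)  # 55 36
```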
## Usage
### Loading with LeRobot
```python
from lerobot.datasets.lerobot_dataset import (
    LeRobotDataset,
    LeRobotDatasetMetadata,
)

# Load metadata
meta = LeRobotDatasetMetadata(
    repo_id="IPEC-COMMUNITY/demos25",
    root="./demos25",
)

# Load the dataset, returning a 50-step action chunk per sample
dataset = LeRobotDataset(
    repo_id="IPEC-COMMUNITY/demos25",
    root="./demos25",
    delta_timestamps={
        k: [i / meta.fps for i in range(50)]
        for k in ["actions.joint.position", "actions.effector.position"]
    },
)
```
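The `delta_timestamps` argument tells LeRobot which time offsets (in seconds, relative to each sampled frame) to return for the listed keys. A minimal sketch, using the dataset's stated 30 fps, shows what the 50-step chunk above works out to:

```python
# Offsets requested for each action key: one entry per future step.
fps = 30
offsets = [i / fps for i in range(50)]  # 50-step action chunk

print(len(offsets))           # 50
print(round(offsets[-1], 3))  # 1.633 -> roughly 1.63 s of future actions
```

Each sample then carries, alongside the observation at time t, the action values at t, t + 1/30, ..., t + 49/30 seconds.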
### Loading with EO-1 Framework
```python
from eo.data.lerobot_dataset import LeRobotDataset

dataset = LeRobotDataset(
    repo_id="IPEC-COMMUNITY/demos25",
    root="./demos25",
    select_video_keys=[
        "observation.images.head",
        "observation.images.hand_left",
        "observation.images.hand_right",
    ],
    select_state_keys=[
        "observation.states.joint.position",
        "observation.states.effector.position",
    ],
    select_action_keys=[
        "actions.joint.position",
        "actions.effector.position",
    ],
    delta_timestamps={
        k: [i / 30 for i in range(1500)]
        for k in ["actions.joint.position", "actions.effector.position"]
    },
)
```
## Citation
If you use this dataset in your research, please cite:
```bibtex
@dataset{demos25_2024,
  title  = {Demos25: Dual-Arm Robot Manipulation Dataset for Supermarket Packing},
  author = {IPEC-COMMUNITY},
  year   = {2024},
  url    = {https://huggingface.co/datasets/IPEC-COMMUNITY/demos25}
}
```
## License
This dataset is released under the MIT License.
## Contact
For questions or issues, please contact the IPEC-COMMUNITY team.