---
license: mit
language:
- en
tags:
- vla
- robot-control
- understanding
---
# Demos25 - Dual-Arm Robot Manipulation Dataset
## Dataset Description
Demos25 is a dual-arm robot manipulation dataset containing 25 episodes of supermarket packing tasks. The dataset follows the LeRobot format and includes multi-modal data: video observations, robot states, and actions.
### Dataset Summary
- **Total Episodes**: 25
- **Total Frames**: 44,492
- **FPS**: 30
- **Robot Type**: A2D (dual-arm robot)
- **Task**: Supermarket packing with item manipulation and bag organization
### Task Description
The robot is positioned in front of a cash register with four different types of items and a soft woven bag. The task involves altering the positions and orientations of the items and the soft woven bag on the cash register.
## Dataset Structure
The dataset is organized in LeRobot v2.1 format with the following structure:
```
demos25/
├── data/
│   └── chunk-000/
│       ├── episode_000000.parquet
│       ├── episode_000001.parquet
│       └── ...
├── videos/
│   └── chunk-000/
│       ├── observation.images.head/
│       ├── observation.images.hand_left/
│       └── observation.images.hand_right/
├── meta/
│   ├── info.json
│   ├── episodes.jsonl
│   ├── episodes_stats.jsonl
│   └── tasks.jsonl
└── interleaved_demo.jsonl
```
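The `meta/` files describe the dataset globally. As a quick orientation, here is a minimal sketch that reads them with the standard library, assuming the dataset has been downloaded to `./demos25` and that the files use the usual LeRobot v2.1 field names:

```python
import json
from pathlib import Path

root = Path("./demos25")  # local copy of the dataset (path is an assumption)

# info.json: global metadata such as fps, feature schema, and episode/frame counts
info = json.loads((root / "meta" / "info.json").read_text())
print(info.get("fps"), info.get("total_episodes"), info.get("total_frames"))

# episodes.jsonl: one JSON record per episode (index, length, task string)
with open(root / "meta" / "episodes.jsonl") as f:
    episodes = [json.loads(line) for line in f]
print(episodes[0])
```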
## Features
### Video Observations
- **observation.images.head**: Head camera view (480x640x3, 30fps)
- **observation.images.hand_left**: Left hand camera view (480x640x3, 30fps)
- **observation.images.hand_right**: Right hand camera view (480x640x3, 30fps)
### Robot States
- **observation.states.joint.position**: 14 joint positions (7 per arm)
- **observation.states.joint.current_value**: 14 joint current values
- **observation.states.effector.position**: 2 gripper positions
- **observation.states.end.position**: 2 end-effector positions (3D)
- **observation.states.end.orientation**: 2 end-effector orientations (quaternion)
- **observation.states.head.position**: 2 head positions (yaw, pitch)
- **observation.states.robot.position**: 3 robot base positions
- **observation.states.robot.orientation**: 4 robot base orientations
- **observation.states.waist.position**: 2 waist positions (pitch, lift)
### Actions
- **actions.joint.position**: 14 joint position targets
- **actions.effector.position**: 2 gripper position targets
- **actions.end.position**: 2 end-effector position targets (3D)
- **actions.end.orientation**: 2 end-effector orientation targets (quaternion)
- **actions.head.position**: 2 head position targets
- **actions.robot.velocity**: 2 robot base velocities
- **actions.waist.position**: 2 waist position targets
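The per-frame layout of these vectors can be inspected directly from a single episode parquet file. Below is a minimal sketch using pandas; the column names are assumed to match the feature names listed above, and video frames live in the mp4 files under `videos/`, not in the parquet:

```python
import pandas as pd

# One 30 fps frame per row; state/action columns hold fixed-length vectors
df = pd.read_parquet("demos25/data/chunk-000/episode_000000.parquet")
print(len(df), "frames")
print(df.columns.tolist())

frame = df.iloc[0]
print(len(frame["observation.states.joint.position"]))  # expected: 14 (7 joints per arm)
print(len(frame["actions.effector.position"]))          # expected: 2 (one gripper target per arm)
```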
## Usage
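Both examples below read the dataset from a local copy at `./demos25`. One way to fetch it from the Hub is a snapshot download (a sketch using `huggingface_hub`; the target directory is arbitrary):

```python
from huggingface_hub import snapshot_download

# Download the full dataset repository to ./demos25
snapshot_download(
    repo_id="IPEC-COMMUNITY/demos25",
    repo_type="dataset",
    local_dir="./demos25",
)
```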
### Loading with LeRobot
```python
from lerobot.datasets.lerobot_dataset import LeRobotDataset, LeRobotDatasetMetadata

# Load metadata (fps, feature schema, episode counts)
meta = LeRobotDatasetMetadata(
    repo_id="IPEC-COMMUNITY/demos25",
    root="./demos25",
)

# Load the dataset; delta_timestamps requests a 50-step (~1.7 s at 30 fps)
# chunk of future actions for each sampled frame
dataset = LeRobotDataset(
    repo_id="IPEC-COMMUNITY/demos25",
    root="./demos25",
    delta_timestamps={
        k: [i / meta.fps for i in range(0, 50)]
        for k in ["actions.joint.position", "actions.effector.position"]
    },
)
```
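`LeRobotDataset` is a standard PyTorch dataset, so it can be batched with a `DataLoader`. A short sketch follows; the batch size and worker count are arbitrary, and the expected chunk shape follows from the 50-step `delta_timestamps` above:

```python
import torch

loader = torch.utils.data.DataLoader(dataset, batch_size=8, shuffle=True, num_workers=4)

batch = next(iter(loader))
# With the 50-step delta_timestamps, each sampled frame carries an action chunk,
# e.g. batch["actions.joint.position"] should have shape (8, 50, 14)
print(batch["actions.joint.position"].shape)
```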
### Loading with EO-1 Framework
```python
from eo.data.lerobot_dataset import LeRobotDataset
from eo.data.schema import LerobotConfig
dataset = LeRobotDataset(
    repo_id="IPEC-COMMUNITY/demos25",
    root="./demos25",
    # Camera streams to decode from the mp4 files
    select_video_keys=[
        "observation.images.head",
        "observation.images.hand_left",
        "observation.images.hand_right",
    ],
    # Proprioceptive state vectors to return per frame
    select_state_keys=[
        "observation.states.joint.position",
        "observation.states.effector.position",
    ],
    # Action targets to return per frame
    select_action_keys=[
        "actions.joint.position",
        "actions.effector.position",
    ],
    # 1,500 steps at 30 fps = 50 s of future actions per sample
    delta_timestamps={
        k: [i / 30 for i in range(0, 1500)]
        for k in ["actions.joint.position", "actions.effector.position"]
    },
)
```
## Citation
If you use this dataset in your research, please cite:
```bibtex
@dataset{demos25_2024,
  title  = {Demos25: Dual-Arm Robot Manipulation Dataset for Supermarket Packing},
  author = {IPEC-COMMUNITY},
  year   = {2024},
  url    = {https://huggingface.co/datasets/IPEC-COMMUNITY/demos25}
}
```
## License
This dataset is released under the MIT License.
## Contact
For questions or issues, please contact the IPEC-COMMUNITY team.