---
pretty_name: InternData-A1
size_categories:
- n>1T
task_categories:
- other
language:
- en
tags:
- Embodied-AI
- Robotic manipulation
extra_gated_prompt: >-
### InternData-A1 COMMUNITY LICENSE AGREEMENT
InternData-A1 Release Date: July 26, 2025. All the data and code
within this repo are under [CC BY-NC-SA
4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/).
extra_gated_fields:
First Name: text
Last Name: text
Email: text
Country: country
Affiliation: text
Phone: text
Job title:
type: select
options:
- Student
- Research Graduate
- AI researcher
- AI developer/engineer
- Reporter
- Other
Research interest: text
geo: ip_location
By clicking Submit below I accept the terms of the license and acknowledge that the information I provide will be collected, stored, processed, and shared in accordance with the InternData Privacy Policy: checkbox
extra_gated_description: >-
The information you provide will be collected, stored, processed and shared in
accordance with the InternData Privacy Policy.
extra_gated_button_content: Submit
---
# InternData-A1
<strong>InternData-A1</strong> is a hybrid synthetic-real manipulation dataset that integrates 5 heterogeneous robots, 15 skills, and 200+ scenes, with an emphasis on multi-robot collaboration in dynamic scenarios.
<div style="display: flex; flex-direction: column; align-items: center; gap: 10px;">
<!-- First Row -->
<div style="display: flex; justify-content: center; align-items: center; gap: 10px;">
<video controls autoplay loop muted width="400" style="border-radius: 10px; box-shadow: 0 4px 8px rgba(0, 0, 0, 0.1);">
<source src="https://huggingface.co/spaces/yuanxuewei/Robot_videos/resolve/main/20250726-014857.mp4" type="video/mp4">
Your browser does not support the video tag.
</video>
<video controls autoplay loop muted width="400" style="border-radius: 10px; box-shadow: 0 4px 8px rgba(0, 0, 0, 0.1);">
<source src="https://huggingface.co/spaces/yuanxuewei/Robot_videos/resolve/main/20250725-232612.mp4" type="video/mp4">
Your browser does not support the video tag.
</video>
</div>
<!-- Second Row -->
<div style="display: flex; justify-content: center; align-items: center; gap: 10px;">
<video controls autoplay loop muted width="400" style="border-radius: 10px; box-shadow: 0 4px 8px rgba(0, 0, 0, 0.1);">
<source src="https://huggingface.co/spaces/yuanxuewei/Robot_videos/resolve/main/20250725-232622.mp4" type="video/mp4">
Your browser does not support the video tag.
</video>
<video controls autoplay loop muted width="400" style="border-radius: 10px; box-shadow: 0 4px 8px rgba(0, 0, 0, 0.1);">
<source src="https://huggingface.co/spaces/yuanxuewei/Robot_videos/resolve/main/20250725-232631.mp4" type="video/mp4">
Your browser does not support the video tag.
</video>
</div>
</div>
# 🔑 Key Features
- **Heterogeneous multi-robot platforms:** ARX Lift-2, AgileX Split Aloha, Openloong Humanoid, A2D, Franka
- **Hybrid synthetic-real** manipulation demonstrations with **task-level digital twins**
- **Dynamic scenarios include:**
  - Moving-object manipulation in conveyor-belt scenarios
- Multi-robot collaboration
- Human-robot interaction
# 📋 Table of Contents
- [🔑 Key Features](#-key-features)
- [Get started 🔥](#get-started-)
- [Download the Dataset](#download-the-dataset)
- [Dataset Structure](#dataset-structure)
- [📅 TODO List](#-todo-list)
- [License and Citation](#license-and-citation)
# Get started 🔥
## Download the Dataset
To download the full dataset, you can use the following commands. If you encounter any issues, please refer to the official Hugging Face documentation.
```bash
# Make sure you have git-lfs installed (https://git-lfs.com)
git lfs install
# When prompted for a password, use an access token with read access.
# Generate one from your settings: https://huggingface.co/settings/tokens
git clone https://huggingface.co/datasets/InternRobotics/InternData-A1
# If you want to clone without large files - just their pointers
GIT_LFS_SKIP_SMUDGE=1 git clone https://huggingface.co/datasets/InternRobotics/InternData-A1
```
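If you prefer Python, the `huggingface_hub` library offers an equivalent download path. The snippet below is a minimal sketch: the `local_dir` target is illustrative, and since this dataset is gated you should authenticate first (e.g. `huggingface-cli login`) with a token that has read access.
```python
# Minimal sketch: download the full dataset with huggingface_hub
# (pip install huggingface_hub). The local_dir path is illustrative.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="InternRobotics/InternData-A1",
    repo_type="dataset",
    local_dir="InternData-A1",  # gated repo: log in first with a read-access token
)
```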
If you only want to download a specific subset, such as `splitaloha`, you can use a sparse checkout:
```bash
# Make sure you have git-lfs installed (https://git-lfs.com)
git lfs install
# Initialize an empty Git repository
git init InternData-A1
cd InternData-A1
# Set the remote repository
git remote add origin https://huggingface.co/datasets/InternRobotics/InternData-A1
# Enable sparse-checkout
git sparse-checkout init
# Specify the folders and files
git sparse-checkout set physical/splitaloha
# Pull the data
git pull origin main
```
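The same library can fetch a single subset via `allow_patterns`; the pattern below mirrors the `physical/splitaloha` folder used in the sparse-checkout example above.
```python
# Minimal sketch: download only the splitaloha subset.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="InternRobotics/InternData-A1",
    repo_type="dataset",
    local_dir="InternData-A1",
    allow_patterns=["physical/splitaloha/*"],  # fnmatch-style filter on repo paths
)
```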
## Dataset Structure
### Folder hierarchy
```
data
├── simulated
│   ├── franka
│   │   ├── data
│   │   │   ├── chunk-000
│   │   │   │   ├── episode_000000.parquet
│   │   │   │   ├── episode_000001.parquet
│   │   │   │   ├── episode_000002.parquet
│   │   │   │   └── ...
│   │   │   ├── chunk-001
│   │   │   │   └── ...
│   │   │   └── ...
│   │   ├── meta
│   │   │   ├── episodes.jsonl
│   │   │   ├── episodes_stats.jsonl
│   │   │   ├── info.json
│   │   │   ├── modality.json
│   │   │   ├── stats.json
│   │   │   └── tasks.jsonl
│   │   └── videos
│   │       ├── chunk-000
│   │       │   ├── images.rgb.head
│   │       │   │   ├── episode_000000.mp4
│   │       │   │   ├── episode_000001.mp4
│   │       │   │   └── ...
│   │       │   └── ...
│   │       ├── chunk-001
│   │       │   └── ...
│   │       └── ...
└── physical
    ├── splitaloha
    │   └── ...
    ├── arx_lift2
    │   └── ...
    └── A2D
        └── ...
```
Each sub-dataset (such as `splitaloha`) was created with [LeRobot](https://github.com/huggingface/lerobot) (dataset format v2.1). For compatibility with the GR00T training framework, two additional files are included: `stats.json`, which provides statistics (mean, std, min, max, q01, q99) for each feature across the dataset, and `modality.json`, which defines model-related custom modalities.
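For reference, `stats.json` stores these per-feature statistics keyed by feature name. The entry below is a hypothetical sketch with placeholder values, shown only to illustrate the shape; the shipped files are authoritative.
```json
{
  "states.left_gripper.position": {
    "mean": [0.042],
    "std":  [0.031],
    "min":  [0.0],
    "max":  [0.08],
    "q01":  [0.0],
    "q99":  [0.079]
  }
}
```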
### meta/info.json
```json
{
"codebase_version": "v2.1",
"robot_type": "piper",
"total_episodes": 100,
"total_frames": 49570,
"total_tasks": 1,
"total_videos": 300,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:100"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"images.rgb.head": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channel"
],
"info": {
"video.fps": 30.0,
"video.height": 720,
"video.width": 1280,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"images.rgb.hand_left": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channel"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"images.rgb.hand_right": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channel"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"states.left_joint.position": {
"dtype": "float32",
"shape": [
6
],
"names": [
"left_joint_0",
"left_joint_1",
"left_joint_2",
"left_joint_3",
"left_joint_4",
"left_joint_5"
]
},
"states.left_gripper.position": {
"dtype": "float32",
"shape": [
1
],
"names": [
"left_gripper_0"
]
},
"states.right_joint.position": {
"dtype": "float32",
"shape": [
6
],
"names": [
"right_joint_0",
"right_joint_1",
"right_joint_2",
"right_joint_3",
"right_joint_4",
"right_joint_5"
]
},
"states.right_gripper.position": {
"dtype": "float32",
"shape": [
1
],
"names": [
"right_gripper_0"
]
},
"actions.left_joint.position": {
"dtype": "float32",
"shape": [
6
],
"names": [
"left_joint_0",
"left_joint_1",
"left_joint_2",
"left_joint_3",
"left_joint_4",
"left_joint_5"
]
},
"actions.left_gripper.position": {
"dtype": "float32",
"shape": [
1
],
"names": [
"left_gripper_0"
]
},
"actions.right_joint.position": {
"dtype": "float32",
"shape": [
6
],
"names": [
"right_joint_0",
"right_joint_1",
"right_joint_2",
"right_joint_3",
"right_joint_4",
"right_joint_5"
]
},
"actions.right_gripper.position": {
"dtype": "float32",
"shape": [
1
],
"names": [
"right_gripper_0"
]
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
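Per the `data_path` template above, each episode is a single parquet file whose columns are the feature keys. A quick inspection sketch (assuming the `splitaloha` subset was downloaded to `./InternData-A1`; the episode path is illustrative):
```python
# Minimal sketch: inspect one episode with pandas (requires pyarrow for parquet).
import pandas as pd

episode = pd.read_parquet(
    "InternData-A1/physical/splitaloha/data/chunk-000/episode_000000.parquet"
)
print(episode.columns.tolist())  # feature keys, e.g. states.left_joint.position
print(len(episode))              # number of frames in this episode
print(episode["actions.left_joint.position"].iloc[0])  # 6-dim joint action at frame 0
```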
### Key format in features
Feature keys are chosen to match the characteristics of each robot, such as its embodiment and whether it is single-arm or bimanual. For the bimanual configuration in the example above, the keys are organized as follows:
```
|-- images
|-- rgb
|-- head
|-- hand_left
|-- hand_right
|-- states
|-- left_joint
|-- position
|-- right_joint
|-- position
|-- left_gripper
|-- position
|-- right_gripper
|-- position
|-- actions
|-- left_joint
|-- position
|-- right_joint
|-- position
|-- left_gripper
|-- position
|-- right_gripper
|-- position
```
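For orientation, a GR00T-style `modality.json` maps each named modality to a slice of the flattened state/action vectors and each camera to its video key. The snippet below is a hypothetical sketch for the bimanual layout above (the start/end indices are assumptions); the `modality.json` shipped with each sub-dataset is authoritative.
```json
{
  "state": {
    "left_joint":    { "start": 0,  "end": 6  },
    "left_gripper":  { "start": 6,  "end": 7  },
    "right_joint":   { "start": 7,  "end": 13 },
    "right_gripper": { "start": 13, "end": 14 }
  },
  "action": {
    "left_joint":    { "start": 0,  "end": 6  },
    "left_gripper":  { "start": 6,  "end": 7  },
    "right_joint":   { "start": 7,  "end": 13 },
    "right_gripper": { "start": 13, "end": 14 }
  },
  "video": {
    "head":       { "original_key": "images.rgb.head" },
    "hand_left":  { "original_key": "images.rgb.hand_left" },
    "hand_right": { "original_key": "images.rgb.hand_right" }
  }
}
```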
# 📅 TODO List
- [ ] **InternData-A1**: ~200,000 simulation demonstrations and ~10,000 real-world robot demonstrations [expected 2025/8/15]
- [ ] ~1,000,000 trajectories of hybrid synthetic-real robotic manipulation data
# License and Citation
All the data and code within this repo are under [CC BY-NC-SA 4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/). Please consider citing our project if it helps your research.
```BibTeX
@misc{contributors2025internroboticsrepo,
title={InternData-A1},
author={InternData-A1 contributors},
howpublished={\url{https://github.com/InternRobotics/InternManip}},
year={2025}
}
```