---
pretty_name: InternData-A1
size_categories:
- n>1T
task_categories:
- other
language:
- en
tags:
- Embodied-AI
- Robotic manipulation
extra_gated_prompt: >-
  ### InternData-A1 COMMUNITY LICENSE AGREEMENT

  InternData-A1 Release Date: July 26, 2025. All the data and code within this
  repo are under [CC BY-NC-SA
  4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/).
extra_gated_fields:
  First Name: text
  Last Name: text
  Email: text
  Country: country
  Affiliation: text
  Phone: text
  Job title:
    type: select
    options:
      - Student
      - Research Graduate
      - AI researcher
      - AI developer/engineer
      - Reporter
      - Other
  Research interest: text
  geo: ip_location
  By clicking Submit below I accept the terms of the license and acknowledge that the information I provide will be collected, stored, processed, and shared in accordance with the InternData Privacy Policy: checkbox
extra_gated_description: >-
  The information you provide will be collected, stored, processed and shared in
  accordance with the InternData Privacy Policy.
extra_gated_button_content: Submit
---

# InternData-A1
InternData-A1 is a hybrid synthetic-real manipulation dataset integrating 5 heterogeneous robots, 15 skills, and 200+ scenes, emphasizing multi-robot collaboration under dynamic scenarios.
## Key Features
- Heterogeneous multi-robot platforms: ARX Lift-2, AgileX Split Aloha, Openloong Humanoid, A2D, Franka
- Hybrid synthetic-real manipulation demonstrations with task-level digital twins
- Dynamic scenarios, including:
  - Moving-object manipulation in conveyor-belt scenarios
  - Multi-robot collaboration
  - Human-robot interaction
## Get started

### Download the Dataset

To download the full dataset, use the commands below. If you encounter any issues, please refer to the official Hugging Face documentation.
```bash
# Make sure you have git-lfs installed (https://git-lfs.com)
git lfs install

# When prompted for a password, use a Hugging Face access token
# (read access is sufficient for downloading).
# Generate one from your settings: https://huggingface.co/settings/tokens
git clone https://huggingface.co/datasets/InternRobotics/InternData-A1

# If you want to clone without large files - just their pointers
GIT_LFS_SKIP_SMUDGE=1 git clone https://huggingface.co/datasets/InternRobotics/InternData-A1
```
If you only want to download a specific sub-dataset, such as `splitaloha`, you can use the following commands.
```bash
# Make sure you have git-lfs installed (https://git-lfs.com)
git lfs install

# Initialize an empty Git repository
git init InternData-A1
cd InternData-A1

# Set the remote repository
git remote add origin https://huggingface.co/datasets/InternRobotics/InternData-A1

# Enable sparse-checkout
git sparse-checkout init

# Specify the folders and files to fetch
git sparse-checkout set physical/splitaloha

# Pull the data
git pull origin main
```
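If you prefer the Hugging Face Python API, the minimal sketch below uses `huggingface_hub` instead of git (it assumes you have accepted the gating terms and are authenticated, e.g. via `huggingface-cli login`); drop `allow_patterns` to download the full dataset:

```python
from huggingface_hub import snapshot_download

# Download only the splitaloha sub-dataset into ./InternData-A1;
# remove allow_patterns to fetch the entire repository.
snapshot_download(
    repo_id="InternRobotics/InternData-A1",
    repo_type="dataset",
    allow_patterns=["physical/splitaloha/*"],
    local_dir="InternData-A1",
)
```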
## Dataset Structure

### Folder hierarchy
```
data
├── simulated
│   └── franka
│       ├── data
│       │   ├── chunk-000
│       │   │   ├── episode_000000.parquet
│       │   │   ├── episode_000001.parquet
│       │   │   ├── episode_000002.parquet
│       │   │   └── ...
│       │   ├── chunk-001
│       │   │   └── ...
│       │   └── ...
│       ├── meta
│       │   ├── episodes.jsonl
│       │   ├── episodes_stats.jsonl
│       │   ├── info.json
│       │   ├── modality.json
│       │   ├── stats.json
│       │   └── tasks.jsonl
│       └── videos
│           ├── chunk-000
│           │   ├── images.rgb.head
│           │   │   ├── episode_000000.mp4
│           │   │   ├── episode_000001.mp4
│           │   │   └── ...
│           │   └── ...
│           ├── chunk-001
│           │   └── ...
│           └── ...
└── physical
    ├── splitaloha
    │   └── ...
    ├── arx_lift2
    │   └── ...
    └── A2D
        └── ...
```
Each sub-dataset (such as `splitaloha`) was created using LeRobot (dataset format v2.1). For compatibility with the GROOT training framework, additional `stats.json` and `modality.json` files are included: `stats.json` provides statistical values (mean, std, min, max, q01, q99) for each feature across the dataset, and `modality.json` defines model-related custom modalities.
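As a rough illustration of how the statistics can be used, the sketch below z-score normalizes one feature with the per-feature mean/std from `stats.json`. The file layout assumed here (a mapping from feature key to statistics) is an assumption; verify it against the actual file.

```python
import json
import numpy as np

# Assumed layout (verify against the actual file): stats.json maps each feature key
# to its statistics, e.g. {"states.left_joint.position": {"mean": [...], "std": [...],
# "min": [...], "max": [...], "q01": [...], "q99": [...]}, ...}.
with open("simulated/franka/meta/stats.json") as f:
    stats = json.load(f)

def normalize(feature_key: str, value: np.ndarray) -> np.ndarray:
    """Z-score normalize one feature with its dataset-wide mean/std."""
    s = stats[feature_key]
    mean = np.asarray(s["mean"], dtype=np.float32)
    std = np.asarray(s["std"], dtype=np.float32)
    return (value - mean) / (std + 1e-8)
```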
`meta/info.json`:

```json
{
"codebase_version": "v2.1",
"robot_type": "piper",
"total_episodes": 100,
"total_frames": 49570,
"total_tasks": 1,
"total_videos": 300,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:100"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"images.rgb.head": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channel"
],
"info": {
"video.fps": 30.0,
"video.height": 720,
"video.width": 1280,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"images.rgb.hand_left": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channel"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"images.rgb.hand_right": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channel"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"states.left_joint.position": {
"dtype": "float32",
"shape": [
6
],
"names": [
"left_joint_0",
"left_joint_1",
"left_joint_2",
"left_joint_3",
"left_joint_4",
"left_joint_5"
]
},
"states.left_gripper.position": {
"dtype": "float32",
"shape": [
1
],
"names": [
"left_gripper_0"
]
},
"states.right_joint.position": {
"dtype": "float32",
"shape": [
6
],
"names": [
"right_joint_0",
"right_joint_1",
"right_joint_2",
"right_joint_3",
"right_joint_4",
"right_joint_5"
]
},
"states.right_gripper.position": {
"dtype": "float32",
"shape": [
1
],
"names": [
"right_gripper_0"
]
},
"actions.left_joint.position": {
"dtype": "float32",
"shape": [
6
],
"names": [
"left_joint_0",
"left_joint_1",
"left_joint_2",
"left_joint_3",
"left_joint_4",
"left_joint_5"
]
},
"actions.left_gripper.position": {
"dtype": "float32",
"shape": [
1
],
"names": [
"left_gripper_0"
]
},
"actions.right_joint.position": {
"dtype": "float32",
"shape": [
6
],
"names": [
"right_joint_0",
"right_joint_1",
"right_joint_2",
"right_joint_3",
"right_joint_4",
"right_joint_5"
]
},
"actions.right_gripper.position": {
"dtype": "float32",
"shape": [
1
],
"names": [
"right_gripper_0"
]
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
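The `data_path` and `video_path` templates in `info.json` can be expanded to locate a specific episode. A minimal sketch follows; the mapping of episodes to chunks via `episode_index // chunks_size` is the usual LeRobot convention and is assumed here.

```python
import json

sub = "simulated/franka"  # path to any sub-dataset root
with open(f"{sub}/meta/info.json") as f:
    info = json.load(f)

episode_index = 0
# Assumed convention: episodes are grouped into chunks of `chunks_size` episodes.
episode_chunk = episode_index // info["chunks_size"]

parquet_path = f"{sub}/" + info["data_path"].format(
    episode_chunk=episode_chunk, episode_index=episode_index
)
video_path = f"{sub}/" + info["video_path"].format(
    episode_chunk=episode_chunk, episode_index=episode_index, video_key="images.rgb.head"
)
print(parquet_path)  # simulated/franka/data/chunk-000/episode_000000.parquet
print(video_path)    # simulated/franka/videos/chunk-000/images.rgb.head/episode_000000.mp4
```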
### Key format in features

Select appropriate keys for the features based on the characteristics of the embodiment, such as whether the robot is single-arm or bimanual. The non-video feature keys are organized as shown below; a loading sketch follows the tree.
```
|-- images
    |-- rgb
        |-- head
        |-- hand_left
        |-- hand_right
|-- states
    |-- left_joint
        |-- position
    |-- right_joint
        |-- position
    |-- left_gripper
        |-- position
    |-- right_gripper
        |-- position
|-- actions
    |-- left_joint
        |-- position
    |-- right_joint
        |-- position
    |-- left_gripper
        |-- position
    |-- right_gripper
        |-- position
```
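A hedged sketch for reading one episode with `pandas`; it assumes the parquet stores one row per frame and one column per flattened feature key from `meta/info.json`.

```python
import numpy as np
import pandas as pd

# Assumed: one row per frame, one column per feature key listed in meta/info.json.
df = pd.read_parquet("simulated/franka/data/chunk-000/episode_000000.parquet")
print(df.columns.tolist())

# Stack the per-frame vectors of one feature into a (num_frames, dim) array.
left_joints = np.stack(df["states.left_joint.position"].to_numpy())
print(left_joints.shape)  # e.g. (num_frames, 6)
```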
## TODO List
- InternData-A1: ~200,000 simulation demonstrations and ~10,000 real-world robot demonstrations [expected 2025/8/15]
- ~1,000,000 trajectories of hybrid synthetic-real robotic manipulation data
## License and Citation
All the data and code within this repo are under CC BY-NC-SA 4.0. Please consider citing our project if it helps your research.
```bibtex
@misc{contributors2025internroboticsrepo,
  title={InternData-A1},
  author={InternData-A1 contributors},
  howpublished={\url{https://github.com/InternRobotics/InternManip}},
  year={2025}
}
```