---
pretty_name: MotionMillion
size_categories:
- n<1T
task_categories:
- other
language:
- en
tags:
- Large Human Motion
- Humanoid
- Humanoid Locomotion
extra_gated_prompt: >-
  ### MotionMillion COMMUNITY LICENSE AGREEMENT

  MotionMillion Release Date: July 30, 2025. All the data and code within this
  repo are under [CC BY-NC-SA
  4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/).
extra_gated_fields:
  First Name: text
  Last Name: text
  Email: text
  Country: country
  Affiliation: text
  Phone: text
  Job title:
    type: select
    options:
      - Student
      - Research Graduate
      - AI researcher
      - AI developer/engineer
      - Reporter
      - Other
  Research interest: text
  geo: ip_location
  By clicking Submit below I accept the terms of the license and acknowledge that the information I provide will be collected, stored, processed, and shared in accordance with the InternData Privacy Policy: checkbox
extra_gated_description: >-
  The information you provide will be collected, stored, processed and shared in
  accordance with the InternData Privacy Policy.
extra_gated_button_content: Submit
---
## Key Features
- Over 2000 hours of high-quality human motion captured from web-scale human video data, covering:
- Martial Arts (23.7%)
- Fitness (26.4%)
- Performance (17.5%)
- Dance (14.9%)
- Non-Human (2.9%)
- Sports (2.4%)
- Over 20 detailed annotations per motion, including:
- Age
- Body Characteristics
- Movement Styles
- Emotions
- Environments

## Get Started
### Download the Dataset
To download the full dataset, use the following commands. If you encounter any issues, refer to the official Hugging Face documentation.

```shell
# Ensure git-lfs is installed (https://git-lfs.com)
git lfs install

# When prompted for a password, use an access token with write permissions.
# Generate one in your settings: https://huggingface.co/settings/tokens
git clone https://huggingface.co/datasets/InternRobotics/MotionMillion

# To clone without downloading large files (fetch only their pointers):
GIT_LFS_SKIP_SMUDGE=1 git clone https://huggingface.co/datasets/InternRobotics/MotionMillion
```
If you only need to download a specific file (e.g., `MotionGV/folder1.tar.gz`), use sparse checkout:

```shell
# Ensure git-lfs is installed (https://git-lfs.com)
git lfs install

# Initialize an empty Git repository
git init MotionMillion
cd MotionMillion

# Set the remote repository
git remote add origin https://huggingface.co/datasets/InternRobotics/MotionMillion

# Enable sparse-checkout
git sparse-checkout init

# Specify the target folders and files
git sparse-checkout set MotionGV/folder1.tar.gz

# Pull the data
git pull origin main
```
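As an alternative to git, partial downloads can also be scripted with the `huggingface_hub` Python API. The helpers below (`motion_patterns`, `download_subset`) are illustrative, not part of the official tooling; the subset names follow the folder hierarchy documented in this card.

```python
def motion_patterns(subset: str, mirrored: bool = True) -> list:
    """Build allow_patterns selecting one subset (e.g. "MotionGV") under
    motion_272rpr/, optionally including its Mirror_ counterpart."""
    patterns = [f"motion_272rpr/{subset}/*"]
    if mirrored:
        patterns.append(f"motion_272rpr/Mirror_{subset}/*")
    return patterns


def download_subset(subset: str, local_dir: str = "MotionMillion") -> str:
    """Fetch only the matching files (network call). Requires
    `pip install huggingface_hub` and a login for gated access."""
    from huggingface_hub import snapshot_download  # imported lazily
    return snapshot_download(
        repo_id="InternRobotics/MotionMillion",
        repo_type="dataset",
        allow_patterns=motion_patterns(subset),
        local_dir=local_dir,
    )
```

For example, `download_subset("MotionGV")` pulls both `motion_272rpr/MotionGV/` and `motion_272rpr/Mirror_MotionGV/` while skipping everything else.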
## Dataset Processing

### Folder Hierarchy
```
MotionMillion
|-- motion_272rpr
|   |-- Mirror_MotionGV
|   |   |-- folder0.tar.gz
|   |   |-- folder1.tar.gz
|   |   |-- folder2.tar.gz
|   |   |-- folder3.tar.gz
|   |   |-- folder4.tar.gz
|   |   |-- folder5.tar.gz
|   |   |-- folder6.tar.gz
|   |   |-- folder7.tar.gz
|   |   |-- folder8.tar.gz
|   |   `-- folder9.tar.gz
|   |-- Mirror_MotionLLAMA
|   |   |-- finedance.tar.gz
|   |   |-- fit3d.tar.gz
|   |   |-- hi4d.tar.gz
|   |   |-- humansc3d.tar.gz
|   |   |-- interhuman.tar.gz
|   |   |-- interx.tar.gz
|   |   `-- trumans.tar.gz
|   |-- Mirror_MotionUnion
|   |   |-- 100STYLE_smpl.tar.gz
|   |   |-- CombatMotion_seperate.tar.gz
|   |   |-- EgoBody.tar.gz
|   |   |-- animation.tar.gz
|   |   |-- fitness.tar.gz
|   |   |-- game_motion.tar.gz
|   |   |-- haa500.tar.gz
|   |   |-- humman.tar.gz
|   |   |-- idea400.tar.gz
|   |   |-- kungfu.tar.gz
|   |   |-- music.tar.gz
|   |   `-- perform.tar.gz
|   |-- MotionGV
|   |   |-- folder0.tar.gz
|   |   |-- folder1.tar.gz
|   |   |-- folder2.tar.gz
|   |   |-- folder3.tar.gz
|   |   |-- folder4.tar.gz
|   |   |-- folder5.tar.gz
|   |   |-- folder6.tar.gz
|   |   |-- folder7.tar.gz
|   |   |-- folder8.tar.gz
|   |   `-- folder9.tar.gz
|   |-- MotionLLAMA
|   |   |-- finedance.tar.gz
|   |   |-- fit3d.tar.gz
|   |   |-- hi4d.tar.gz
|   |   |-- humansc3d.tar.gz
|   |   |-- interhuman.tar.gz
|   |   |-- interx.tar.gz
|   |   `-- trumans.tar.gz
|   `-- MotionUnion
|       |-- 100STYLE_smpl.tar.gz
|       |-- CombatMotion_seperate.tar.gz
|       |-- EgoBody.tar.gz
|       |-- animation.tar.gz
|       |-- fitness.tar.gz
|       |-- game_motion.tar.gz
|       |-- haa500.tar.gz
|       |-- humman.tar.gz
|       |-- idea400.tar.gz
|       |-- kungfu.tar.gz
|       |-- music.tar.gz
|       `-- perform.tar.gz
|-- mean_std
|   |-- Mean.npy
|   `-- Std.npy
|-- texts.tar.gz
`-- splits.tar.gz
```
Due to data licensing restrictions, we only provide part of the processed motion data in the 272-dimensional representation. Among these subsets, MotionGV contains motions captured by our motion capture algorithm; the remaining data is merged from other datasets.

Due to copyright constraints, BABEL, AIST, and HumanML3D cannot be released directly. We will provide detailed data processing workflows for them.
### Data Processing Steps

- For all `tar.gz` files, extract them with `tar -xzvf x.tar.gz`.
- For HumanML3D, please refer to `data_process/HumanML3D`.
- For BABEL, please refer to `data_process/BABEL`.
- For AIST, please refer to `data_process/AIST`.
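The extraction step can also be scripted. Below is a minimal sketch (not part of the release) that unpacks every archive in place with Python's standard `tarfile` module, preserving the hierarchy shown above:

```python
import tarfile
from pathlib import Path


def extract_all(root: str) -> list:
    """Extract every .tar.gz under `root` into its own parent directory,
    e.g. MotionGV/folder1.tar.gz -> MotionGV/folder1/.
    Returns the list of archives that were extracted."""
    done = []
    for archive in sorted(Path(root).rglob("*.tar.gz")):
        with tarfile.open(archive, "r:gz") as tar:
            tar.extractall(path=archive.parent)
        done.append(archive)
    return done
```

After running `extract_all("MotionMillion")`, the layout should match the processed data hierarchy below; the `.tar.gz` files can then be deleted to reclaim space.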
### Processed Data Hierarchy
```
MotionMillion
|-- motion_272rpr
|   |-- BABEL
|   |-- Mirror_BABEL
|   |-- Mirror_MotionGV
|   |   |-- folder0
|   |   |-- folder1
|   |   |-- folder2
|   |   |-- folder3
|   |   |-- folder4
|   |   |-- folder5
|   |   |-- folder6
|   |   |-- folder7
|   |   |-- folder8
|   |   `-- folder9
|   |-- Mirror_MotionLLAMA
|   |   |-- aist
|   |   |-- finedance
|   |   |-- fit3d
|   |   |-- hi4d
|   |   |-- humansc3d
|   |   |-- interhuman
|   |   |-- interx
|   |   `-- trumans
|   |-- Mirror_MotionUnion
|   |   |-- 100STYLE_smpl
|   |   |-- CombatMotion_seperate
|   |   |-- EgoBody
|   |   |-- animation
|   |   |-- fitness
|   |   |-- game_motion
|   |   |-- haa500
|   |   |-- humanml
|   |   |-- humman
|   |   |-- idea400
|   |   |-- kungfu
|   |   |-- music
|   |   `-- perform
|   |-- Mirror_PhantomDanceDatav1.1
|   |-- MotionGV
|   |   |-- folder0
|   |   |-- folder1
|   |   |-- folder2
|   |   |-- folder3
|   |   |-- folder4
|   |   |-- folder5
|   |   |-- folder6
|   |   |-- folder7
|   |   |-- folder8
|   |   `-- folder9
|   |-- MotionLLAMA
|   |   |-- aist
|   |   |-- finedance
|   |   |-- fit3d
|   |   |-- hi4d
|   |   |-- humansc3d
|   |   |-- interhuman
|   |   |-- interx
|   |   `-- trumans
|   |-- MotionUnion
|   |   |-- 100STYLE_smpl
|   |   |-- CombatMotion_seperate
|   |   |-- EgoBody
|   |   |-- animation
|   |   |-- fitness
|   |   |-- game_motion
|   |   |-- haa500
|   |   |-- humanml
|   |   |-- humman
|   |   |-- idea400
|   |   |-- kungfu
|   |   |-- music
|   |   `-- perform
|   `-- PhantomDanceDatav1.1
|-- texts
|   |-- Mirror_MotionGV
|   |-- Mirror_MotionLLAMA
|   |-- Mirror_MotionUnion
|   |-- MotionGV
|   |-- MotionLLAMA
|   `-- MotionUnion
|-- mean_std
|   |-- Mean.npy
|   `-- Std.npy
`-- split
    `-- version1
        |-- t2m_60_300
        |   |-- all.txt
        |   |-- test.txt
        |   |-- train.txt
        |   `-- val.txt
        `-- tokenizer_96
            |-- test.txt
            |-- train.txt
            `-- val.txt
```
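Once extracted, the split lists and dataset-level statistics can be loaded as follows. This sketch assumes the common z-score convention for `mean_std/Mean.npy` and `mean_std/Std.npy` over the 272-dimensional features; please verify against the official training code before relying on it.

```python
import numpy as np
from pathlib import Path


def load_split(split_file: str) -> list:
    """Read one split list (e.g. split/version1/t2m_60_300/train.txt),
    one motion clip ID per line."""
    return [ln.strip() for ln in Path(split_file).read_text().splitlines() if ln.strip()]


def normalize(motion: np.ndarray, mean: np.ndarray, std: np.ndarray) -> np.ndarray:
    """Z-score a (T, 272) motion clip with the dataset-level statistics
    (assumed convention; the epsilon guards against zero-variance dims)."""
    return (motion - mean) / (std + 1e-8)


def denormalize(motion: np.ndarray, mean: np.ndarray, std: np.ndarray) -> np.ndarray:
    """Invert `normalize` to recover the raw 272-dim representation."""
    return motion * (std + 1e-8) + mean
```

A typical loop would iterate over `load_split(...)`, load each clip's `.npy` file from the corresponding subset folder, and feed `normalize(clip, mean, std)` to the model.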
## License and Citation
All the data and code within this repo are under CC BY-NC-SA 4.0. Please consider citing our project if it helps your research.
```bibtex
@article{fan2025go,
  title={Go to Zero: Towards Zero-shot Motion Generation with Million-scale Data},
  author={Fan, Ke and Lu, Shunlin and Dai, Minyue and Yu, Runyi and Xiao, Lixing and Dou, Zhiyang and Dong, Junting and Ma, Lizhuang and Wang, Jingbo},
  journal={arXiv preprint arXiv:2507.07095},
  year={2025}
}
```
In addition, please cite the following literature:
```bibtex
@article{xiao2025motionstreamer,
  title={MotionStreamer: Streaming Motion Generation via Diffusion-based Autoregressive Model in Causal Latent Space},
  author={Xiao, Lixing and Lu, Shunlin and Pi, Huaijin and Fan, Ke and Pan, Liang and Zhou, Yueer and Feng, Ziyong and Zhou, Xiaowei and Peng, Sida and Wang, Jingbo},
  journal={arXiv preprint arXiv:2503.15451},
  year={2025}
}

@inproceedings{amass,
  title={AMASS: Archive of motion capture as surface shapes},
  author={Mahmood, Naureen and Ghorbani, Nima and Troje, Nikolaus F and Pons-Moll, Gerard and Black, Michael J},
  booktitle={ICCV},
  pages={5442--5451},
  year={2019}
}

@inproceedings{Guo_2022_CVPR,
  title={Generating Diverse and Natural 3D Human Motions From Text},
  author={Guo, Chuan and Zou, Shihao and Zuo, Xinxin and Wang, Sen and Ji, Wei and Li, Xingyu and Cheng, Li},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  month={June},
  pages={5152--5161},
  year={2022}
}

@inproceedings{babel,
  title={BABEL: Bodies, action and behavior with english labels},
  author={Punnakkal, Abhinanda R and Chandrasekaran, Arjun and Athanasiou, Nikos and Quiros-Ramirez, Alejandra and Black, Michael J},
  booktitle={CVPR},
  pages={722--731},
  year={2021}
}

@inproceedings{flag3d,
  title={Flag3d: A 3d fitness activity dataset with language instruction},
  author={Tang, Yansong and Liu, Jinpeng and Liu, Aoyang and Yang, Bin and Dai, Wenxun and Rao, Yongming and Lu, Jiwen and Zhou, Jie and Li, Xiu},
  booktitle={CVPR},
  pages={22106--22117},
  year={2023}
}

@inproceedings{li2023finedance,
  title={FineDance: A Fine-grained Choreography Dataset for 3D Full Body Dance Generation},
  author={Li, Ronghui and Zhao, Junfan and Zhang, Yachao and Su, Mingyang and Ren, Zeping and Zhang, Han and Tang, Yansong and Li, Xiu},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
  pages={10234--10243},
  year={2023}
}

@article{motionx,
  title={Motion-x: A large-scale 3d expressive whole-body human motion dataset},
  author={Lin, Jing and Zeng, Ailing and Lu, Shunlin and Cai, Yuanhao and Zhang, Ruimao and Wang, Haoqian and Zhang, Lei},
  journal={NeurIPS},
  year={2024}
}

@article{liang2024intergen,
  title={Intergen: Diffusion-based multi-human motion generation under complex interactions},
  author={Liang, Han and Zhang, Wenqian and Li, Wenxuan and Yu, Jingyi and Xu, Lan},
  journal={International Journal of Computer Vision},
  volume={132},
  number={9},
  pages={3463--3483},
  year={2024},
  publisher={Springer}
}

@inproceedings{interx,
  title={Inter-x: Towards versatile human-human interaction analysis},
  author={Xu, Liang and Lv, Xintao and Yan, Yichao and Jin, Xin and Wu, Shuwen and Xu, Congsheng and Liu, Yifan and Zhou, Yizhou and Rao, Fengyun and Sheng, Xingdong and others},
  booktitle={CVPR},
  pages={22260--22271},
  year={2024}
}

@inproceedings{aist-dance-db,
  title={AIST Dance Video Database: Multi-genre, Multi-dancer, and Multi-camera Database for Dance Information Processing},
  author={Shuhei Tsuchida and Satoru Fukayama and Masahiro Hamasaki and Masataka Goto},
  booktitle={Proceedings of the 20th International Society for Music Information Retrieval Conference, {ISMIR} 2019},
  address={Delft, Netherlands},
  month={November},
  year={2019}
}

@inproceedings{jiang2024scaling,
  title={Scaling up dynamic human-scene interaction modeling},
  author={Jiang, Nan and Zhang, Zhiyuan and Li, Hongjie and Ma, Xiaoxuan and Wang, Zan and Chen, Yixin and Liu, Tengyu and Zhu, Yixin and Huang, Siyuan},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={1737--1747},
  year={2024}
}

@inproceedings{yin2023hi4d,
  title={Hi4d: 4d instance segmentation of close human interaction},
  author={Yin, Yifei and Guo, Chen and Kaufmann, Manuel and Zarate, Juan Jose and Song, Jie and Hilliges, Otmar},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={17016--17027},
  year={2023}
}

@inproceedings{fieraru2021learning,
  title={Learning complex 3D human self-contact},
  author={Fieraru, Mihai and Zanfir, Mihai and Oneata, Elisabeta and Popa, Alin-Ionut and Olaru, Vlad and Sminchisescu, Cristian},
  booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
  volume={35},
  number={2},
  pages={1343--1351},
  year={2021}
}

@inproceedings{danceformer,
  title={Danceformer: Music conditioned 3d dance generation with parametric motion transformer},
  author={Li, Buyu and Zhao, Yongchi and Zhelun, Shi and Sheng, Lu},
  booktitle={AAAI},
  pages={1272--1279},
  year={2022}
}
```
## Special Notes
- We would like to express our gratitude to the authors of FineDance for granting permission to directly open-source the preprocessed motion data. It is important to note that when generating the 272-dimensional motion representation, we utilized the SMPL-X data provided in MotionLLAMA with all beta values set to 0, which may differ from the original FineDance data.
- If you intend to use the merged data (excluding MotionGV), please strictly adhere to their respective licenses.