# Community Dataset v1
A large-scale community-contributed robotics dataset for vision-language-action learning, featuring 128 datasets from 55 contributors worldwide.
We used this dataset to pretrain SmolVLA. Note that this is not the complete set of community contributions: we selected datasets using specific filters (FPS, minimum number of episodes) and a qualitative assessment of video quality, using the https://huggingface.co/spaces/Beegbrain/FilterLeRobotData tool. We also manually curated the task descriptions for this subset of the dataset.
## 🌍 Overview
This dataset represents a collaborative effort from the robotics and AI community to build comprehensive training data for embodied AI systems. Each contribution contains demonstrations of robotic manipulation tasks with the SO100 arm, recorded using LeRobot tools, primarily focused on tabletop scenarios and everyday object interactions.
## 📊 Dataset Statistics
| Metric | Value |
|---|---|
| Total Datasets | 128 |
| Total Episodes | 11,132 |
| Total Frames | 5,105,808 |
| Total Videos | 22,065 |
| Contributors | 55 |
| Weighted Average FPS | 30.4 |
| Average Episodes per Dataset | 87.0 |
| Total Duration | 46.9 hours |
| Average Hours per Dataset | 0.37 |
| Primary Tasks | Manipulation, Pick & Place, Sorting |
| Robot Types | SO-100 (various colors) |
| Data Format | LeRobot v2.0 and v2.1 dataset format |
| Total Size | 119.3 GB |
## 🏗️ Structure
The dataset maintains a clear hierarchical structure:
```
community_dataset_v1/
├── contributor1/
│   ├── dataset_name_1/
│   │   ├── data/    # Parquet files with observations
│   │   ├── videos/  # MP4 recordings
│   │   └── meta/    # Metadata and info
│   └── dataset_name_2/
├── contributor2/
│   └── dataset_name_3/
└── ...
```
Each dataset follows the LeRobot format standard, ensuring compatibility with existing frameworks and easy integration.
## 🏆 Top Contributors
| Contributor | Number of Datasets |
|---|---|
| lirislab | 14 |
| roboticshack | 9 |
| sihyun77 | 8 |
| pierfabre | 7 |
| ganker5 | 6 |
| paszea | 5 |
| samsam0510 | 5 |
| pranavsaroha | 5 |
| bensprenger | 4 |
| Chojins | 4 |
## 🚀 Usage
### 1. Authenticate with Hugging Face
You need to be logged in to access the dataset:
```bash
# Login to Hugging Face
huggingface-cli login

# Or alternatively, set your token as an environment variable
# export HF_TOKEN=your_token_here
```
Get your token from https://huggingface.co/settings/tokens
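If you prefer to authenticate from Python, here is a minimal sketch using `huggingface_hub` (a dependency of `lerobot`); it is equivalent to the CLI login above:

```python
# Minimal sketch: log in from Python instead of the CLI.
from huggingface_hub import login

login()  # prompts for a token; or pass token="hf_..." directly
```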
### 2. Download the Dataset
```bash
hf download HuggingFaceVLA/community_dataset_v1 \
  --repo-type=dataset \
  --local-dir /path/local_dir/community_dataset_v1
```
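The same download can also be scripted from Python; a minimal sketch using `huggingface_hub.snapshot_download`:

```python
# Minimal sketch: download the full dataset repo from Python.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="HuggingFaceVLA/community_dataset_v1",
    repo_type="dataset",
    local_dir="/path/local_dir/community_dataset_v1",
)
```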
### 3. Load Individual Datasets
```python
from lerobot.datasets.lerobot_dataset import LeRobotDataset
import os

# Browse available datasets
for contributor in os.listdir("./community_dataset_v1"):
    contributor_path = f"./community_dataset_v1/{contributor}"
    if os.path.isdir(contributor_path):
        for dataset in os.listdir(contributor_path):
            print(f"📁 {contributor}/{dataset}")

# Load a specific dataset (requires authentication)
dataset = LeRobotDataset(
    repo_id="local",
    root="./community_dataset_v1/contributor_name/dataset_name"
)

# Access episodes and observations
print(f"Episodes: {dataset.num_episodes}")
print(f"Total frames: {len(dataset)}")
```
### Integration with the SmolVLA pretraining framework
This dataset is designed for training VLA models. You can download it and use it with VLAb, a training framework for Vision-Language-Action models:
- Visit the VLAb repository.
- Follow the training instructions in the repo.
- Point the training script to this dataset, for example:
```bash
accelerate launch --config_file accelerate_configs/multi_gpu.yaml \
  src/lerobot/scripts/train.py \
  --policy.type=smolvla2 \
  --policy.repo_id=HuggingFaceTB/SmolVLM2-500M-Video-Instruct \
  --dataset.repo_id="community_dataset_v1/AndrejOrsula/lerobot_double_ball_stacking_random,community_dataset_v1/aimihat/so100_tape" \
  --dataset.root="local/path/to/datasets" \
  --dataset.video_backend=pyav \
  --dataset.features_version=2 \
  --output_dir="./outputs/training" \
  --batch_size=8 \
  --steps=200000 \
  --wandb.enable=true \
  --wandb.project="smolvla2-training"
```
## 🔧 Dataset Format
Each dataset contains:
- `data/`: Parquet files with timestamped observations
  - Robot states (joint positions, velocities)
  - Action sequences
  - Camera observations (multiple views)
  - Language instructions
- `videos/`: Synchronized video recordings
  - Multiple camera angles
  - High-resolution capture
  - Timestamp alignment
- `meta/`: Metadata and configuration
  - Dataset info (fps, episode count)
  - Robot configuration
  - Task descriptions
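The `meta/` folder can be inspected without loading the full dataset. A minimal sketch that reads `meta/info.json` (the path is illustrative, and the field names follow the LeRobot v2.x layout; verify against your local copy):

```python
import json
from pathlib import Path

# Illustrative path; point it at any downloaded dataset.
info_path = Path("./community_dataset_v1/contributor_name/dataset_name/meta/info.json")
info = json.loads(info_path.read_text())

# fps / total_episodes / total_frames follow the LeRobot v2.x info.json layout.
print(info["fps"], info["total_episodes"], info["total_frames"])
```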
## 🎯 Intended Use
This dataset is designed for:
- Vision-Language-Action (VLA) model training
- Robotic manipulation research
- Imitation learning experiments
- Multi-task policy development
- Embodied AI research
## 🤝 Community Contributions
This dataset exists thanks to the generous contributions from researchers, hobbyists, and institutions worldwide. Each dataset represents hours of careful data collection and curation.
### Contributing Guidelines
Future contributions should follow:
- The LeRobot dataset format
- Consistent naming conventions for features, camera views, etc.
- Quality validation checks
- Proper task descriptions that describe the actions precisely
Check the blog post for more information.
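As a starting point for the quality validation checks above, here is a hypothetical sketch that screens a local dataset with the kind of FPS and episode-count filters used to curate this release (the thresholds are illustrative, not the actual v1 criteria):

```python
import json
from pathlib import Path

# Illustrative thresholds only; the actual v1 selection criteria
# are described in the blog post.
MIN_FPS = 25
MIN_EPISODES = 10

def passes_basic_checks(dataset_root: str) -> bool:
    """Check a local LeRobot dataset against simple FPS/episode filters."""
    info = json.loads((Path(dataset_root) / "meta" / "info.json").read_text())
    return info["fps"] >= MIN_FPS and info["total_episodes"] >= MIN_EPISODES

print(passes_basic_checks("./community_dataset_v1/contributor_name/dataset_name"))
```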
## 🔗 Related Work
- VLAb Framework
- SmolVLA model
- SmolVLA Blogpost
- SmolVLA Paper
- Docs
- How to Build a Successful Robotics Dataset with LeRobot?
Built with ❤️ by the SmolVLA team and the LeRobot community