KAI0
TODO
- The advantage label is coming soon.
About the Dataset
~134 hours of real-world scenarios
Main Tasks
- FlattenFold
- Single task
- Initial state: T-shirts are randomly tossed onto the table, presenting random crumpled configurations
- Manipulation task: Operate the robotic arm to unfold the garment, then fold it
- HangCloth
- Single task
- Initial state: Hanger is randomly placed, garment is randomly positioned on the table
- Manipulation task: Operate the robotic arm to thread the hanger through the garment, then hang it on the rod
- TeeShirtSort
- Garment classification and arrangement task
- Initial state: Randomly pick a garment from the laundry basket
- Classification: Determine whether the garment is a T-shirt or a dress shirt
- Manipulation task:
- If it is a T-shirt, fold the garment
- If it is a dress shirt, expose the collar, then push it to one side of the table
Dataset Counts
| Task | Base (episodes / hours) | DAgger (episodes / hours) | Total (episodes / hours) |
|---|---|---|---|
| FlattenFold | 3,055 / ~42 hours | 3,457 / ~13 hours | 6,512 / ~55 hours |
| HangCloth | 6,954 / ~61 hours | 686 / ~12 hours | 7,640 / ~73 hours |
| TeeShirtSort | 5,988 / ~31 hours | 769 / ~22 hours | 6,757 / ~53 hours |
| Total | 15,997 / ~134 hours | 4,912 / ~47 hours | 20,909 / ~181 hours |
Load the dataset
- This dataset was created using LeRobot
- The dataset's version is LeRobotDataset v2.1
For LeRobot version < 0.4.0
Choose the appropriate import based on your version:
| Version | Import Path |
|---|---|
| <= 0.1.0 | `from lerobot.common.datasets.lerobot_dataset import LeRobotDataset` |
| > 0.1.0 and < 0.4.0 | `from lerobot.datasets.lerobot_dataset import LeRobotDataset` |
```python
# For version <= 0.1.0
from lerobot.common.datasets.lerobot_dataset import LeRobotDataset

# For version > 0.1.0 and < 0.4.0
from lerobot.datasets.lerobot_dataset import LeRobotDataset

# Load the dataset
dataset = LeRobotDataset(repo_id='where/the/dataset/you/stored')
```
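Once loaded, a `LeRobotDataset` can be indexed frame by frame. Below is a minimal sketch, assuming a local copy of one subset; the tensor keys and shapes are taken from the `info.json` feature list documented later on this page and are not otherwise verified here.

```python
from lerobot.common.datasets.lerobot_dataset import LeRobotDataset  # adjust import to your LeRobot version

# Assumed local path/repo id; replace with where you stored the subset.
dataset = LeRobotDataset(repo_id='where/the/dataset/you/stored')

print(len(dataset))  # total number of frames

frame = dataset[0]   # a single frame, returned as a dict of tensors
print(frame["observation.state"].shape)            # (14,) joint angles + gripper opening
print(frame["action"].shape)                       # (14,) commanded action
print(frame["observation.images.top_head"].shape)  # top-head camera frame
```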
For LeRobot version >= 0.4.0
You need to migrate the dataset from v2.1 to v3.0 first. See the official documentation: Migrate the dataset from v2.1 to v3.0
```bash
python -m lerobot.datasets.v30.convert_dataset_v21_to_v30 --repo-id=<HF_USER/DATASET_ID>
```
Download the Dataset
Python Script
```python
from huggingface_hub import hf_hub_download, snapshot_download
from datasets import load_dataset

# Download a single file
hf_hub_download(
    repo_id="OpenDriveLab-org/kai0",
    filename="episodes.jsonl",
    subfolder="meta",
    repo_type="dataset",
    local_dir="where/you/want/to/save"
)

# Download a specific folder
snapshot_download(
    repo_id="OpenDriveLab-org/kai0",
    local_dir="/where/you/want/to/save",
    repo_type="dataset",
    allow_patterns=["data/*"]
)

# Load the entire dataset
dataset = load_dataset("OpenDriveLab-org/kai0")
```
Terminal (CLI)
```bash
# Download a single file
hf download OpenDriveLab-org/kai0 \
    --include "meta/info.json" \
    --repo-type dataset \
    --local-dir "/where/you/want/to/save"

# Download a specific folder
hf download OpenDriveLab-org/kai0 \
    --repo-type dataset \
    --include "meta/*" \
    --local-dir "/where/you/want/to/save"

# Download the entire dataset
hf download OpenDriveLab-org/kai0 \
    --repo-type dataset \
    --local-dir "/where/you/want/to/save"
```
Dataset Structure
Folder hierarchy
Under each task directory, data is partitioned into two subsets: base and dagger.
- base contains the original demonstration trajectories of robotic-arm manipulation for garment arrangement tasks.
- dagger contains on-policy recovery trajectories collected via iterative DAgger, designed to cover failure-recovery modes absent from static demonstrations. A sketch for downloading a single subset follows the folder tree below.
```
Kai0-data/
├── FlattenFold/
│   ├── base/
│   │   ├── data/
│   │   │   ├── chunk-000/
│   │   │   │   ├── episode_000000.parquet
│   │   │   │   ├── episode_000001.parquet
│   │   │   │   └── ...
│   │   │   └── ...
│   │   ├── videos/
│   │   │   ├── chunk-000/
│   │   │   │   ├── observation.images.hand_left/
│   │   │   │   │   ├── episode_000000.mp4
│   │   │   │   │   ├── episode_000001.mp4
│   │   │   │   │   └── ...
│   │   │   │   ├── observation.images.hand_right/
│   │   │   │   │   ├── episode_000000.mp4
│   │   │   │   │   ├── episode_000001.mp4
│   │   │   │   │   └── ...
│   │   │   │   ├── observation.images.top_head/
│   │   │   │   │   ├── episode_000000.mp4
│   │   │   │   │   ├── episode_000001.mp4
│   │   │   │   │   └── ...
│   │   │   │   └── ...
│   │   │   └── ...
│   │   └── meta/
│   │       ├── info.json
│   │       ├── episodes.jsonl
│   │       ├── tasks.jsonl
│   │       └── episodes_stats.jsonl
│   └── dagger/
├── HangCloth/
│   ├── base/
│   └── dagger/
├── TeeShirtSort/
│   ├── base/
│   └── dagger/
└── README.md
```
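Because each task keeps its base and dagger subsets in separate folders, a single subset can be pulled with `allow_patterns`. A minimal sketch, where the pattern and local path are assumptions based on the hierarchy above:

```python
from huggingface_hub import snapshot_download

# Fetch only the FlattenFold/base subset (data, videos, and meta),
# following the folder hierarchy shown above.
snapshot_download(
    repo_id="OpenDriveLab-org/kai0",
    repo_type="dataset",
    local_dir="/where/you/want/to/save",
    allow_patterns=["FlattenFold/base/*"],
)
```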
Details
info.json
The basic structure of info.json:
```json
{
    "codebase_version": "v2.1",
    "robot_type": "agilex",
    "total_episodes": ...,  # the total number of episodes in the dataset
    "total_frames": ...,    # the total number of video frames in any single camera perspective
    "total_tasks": ...,     # the total number of tasks
    "total_videos": ...,    # the total number of videos from all camera perspectives in the dataset
    "total_chunks": ...,    # the number of chunks in the dataset
    "chunks_size": ...,     # the max number of episodes in a chunk
    "fps": ...,             # video frames per second
    "splits": {             # how the dataset is split
        "train": ...
    },
    "data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
    "video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
    "features": {
        "observation.images.top_head": {  # the camera perspective
            "dtype": "video",
            "shape": [480, 640, 3],
            "names": ["height", "width", "channel"],
            "info": {
                "video.height": 480,
                "video.width": 640,
                "video.codec": "av1",
                "video.pix_fmt": "yuv420p",
                "video.is_depth_map": false,
                "video.fps": 30,
                "video.channels": 3,
                "has_audio": false
            }
        },
        "observation.images.hand_left": {  # the camera perspective
            ...
        },
        "observation.images.hand_right": {  # the camera perspective
            ...
        },
        "observation.state": {
            "dtype": "float32",
            "shape": [14],
            "names": null
        },
        "action": {
            "dtype": "float32",
            "shape": [14],
            "names": null
        },
        "timestamp": {
            "dtype": "float32",
            "shape": [1],
            "names": null
        },
        "frame_index": {
            "dtype": "int64",
            "shape": [1],
            "names": null
        },
        "episode_index": {
            "dtype": "int64",
            "shape": [1],
            "names": null
        },
        "index": {
            "dtype": "int64",
            "shape": [1],
            "names": null
        },
        "task_index": {
            "dtype": "int64",
            "shape": [1],
            "names": null
        }
    }
}
```
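To work with the metadata directly, you can read info.json and use its path templates to locate episode files. A minimal sketch; the local path below is an assumption, so adjust it to wherever you downloaded a subset:

```python
import json
from pathlib import Path

# Assumed local location of one subset's metadata.
info_path = Path("/where/you/want/to/save/FlattenFold/base/meta/info.json")
info = json.loads(info_path.read_text())

print(info["robot_type"], info["fps"], info["total_episodes"], info["total_frames"])

# Resolve an episode's parquet path from the templates stored in info.json.
episode_index = 0
chunk = episode_index // info["chunks_size"]
print(info["data_path"].format(episode_chunk=chunk, episode_index=episode_index))
# -> data/chunk-000/episode_000000.parquet
```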
Parquet file format
| Field Name | Shape | Meaning |
|---|---|---|
| observation.state | [N, 14] | Joint angles: left arm `[:, :6]`, right arm `[:, 7:13]`; gripper opening: left `[:, 6]`, right `[:, 13]` |
| action | [N, 14] | Joint angles: left arm `[:, :6]`, right arm `[:, 7:13]`; gripper opening: left `[:, 6]`, right `[:, 13]` |
| timestamp | [N, 1] | Time elapsed since the start of the episode (in seconds) |
| frame_index | [N, 1] | Index of this frame within the current episode (0-indexed) |
| episode_index | [N, 1] | Index of the episode this frame belongs to |
| index | [N, 1] | Global unique index across all frames in the dataset |
| task_index | [N, 1] | Index identifying the task type being performed |
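A minimal sketch for inspecting one episode's parquet file with pandas, using the index layout from the table above; the file path is an assumption, and each observation.state/action cell is assumed to hold a 14-element vector:

```python
import numpy as np
import pandas as pd

# Assumed path to a downloaded episode; adjust to your local copy.
df = pd.read_parquet(
    "/where/you/want/to/save/FlattenFold/base/data/chunk-000/episode_000000.parquet"
)

# Stack per-frame vectors into an (N, 14) array and split it per the table above.
state = np.stack(df["observation.state"].to_numpy())
left_arm, left_gripper = state[:, :6], state[:, 6]
right_arm, right_gripper = state[:, 7:13], state[:, 13]

print(df.columns.tolist())
print(state.shape, left_arm.shape, right_arm.shape)
```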
tasks.jsonl
Contains task language prompts (natural language instructions) that specify the manipulation task to be performed. Each entry maps a task_index to its corresponding task description, which can be used for language-conditioned policy training.
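A minimal sketch for building a task_index → prompt lookup from tasks.jsonl; the file path and the `task_index`/`task` field names follow the usual LeRobot v2.1 layout and are assumptions here:

```python
import json

tasks = {}
# Assumed local path to one subset's tasks.jsonl; adjust as needed.
with open("/where/you/want/to/save/FlattenFold/base/meta/tasks.jsonl") as f:
    for line in f:
        entry = json.loads(line)
        tasks[entry["task_index"]] = entry["task"]  # map task_index -> language prompt

print(tasks)
```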
License and Citation
All the data and code within this repo are under . Please consider citing our project if it helps your research.
```bibtex
@misc{,
  title={},
  author={},
  howpublished={\url{}},
  year={}
}
```