Datasets
Formats: parquet
Libraries: Datasets, Dask
License: apache-2.0
jadechoghari (HF Staff) committed · commit 24b61cc · verified · 1 parent: fb7aebf

Update README.md

Files changed (1)
  1. README.md +40 -138
README.md CHANGED
@@ -10,152 +10,54 @@ configs:
---

This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).

- ## Dataset Description
-
- - **Homepage:** [More Information Needed]
- - **Paper:** [More Information Needed]
- - **License:** apache-2.0

## Dataset Structure
 
- [meta/info.json](meta/info.json):
- ```json
- {
-     "codebase_version": "v2.1",
-     "robot_type": null,
-     "total_episodes": 50,
-     "total_frames": 13021,
-     "total_tasks": 1,
-     "total_videos": 0,
-     "total_chunks": 1,
-     "chunks_size": 1000,
-     "fps": 20,
-     "splits": {
-         "train": "0:50"
-     },
-     "data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
-     "video_path": null,
-     "features": {
-         "observation.images.image": {
-             "dtype": "image",
-             "shape": [256, 256, 3],
-             "names": ["height", "width", "channel"]
-         },
-         "observation.images.wrist_image": {
-             "dtype": "image",
-             "shape": [256, 256, 3],
-             "names": ["height", "width", "channel"]
-         },
-         "observation.state.end_effector": {
-             "dtype": "float32",
-             "shape": [8],
-             "names": {
-                 "motors": ["x", "y", "z", "roll", "pitch", "yaw", "gripper", "gripper"]
-             }
-         },
-         "observation.state.joint": {
-             "dtype": "float32",
-             "shape": [7],
-             "names": {
-                 "motors": ["joint_1", "joint_2", "joint_3", "joint_4", "joint_5", "joint_6", "joint_7"]
-             }
-         },
-         "action": {
-             "dtype": "float32",
-             "shape": [7],
-             "names": {
-                 "motors": ["x", "y", "z", "roll", "pitch", "yaw", "gripper"]
-             }
-         },
-         "timestamp": { "dtype": "float32", "shape": [1], "names": null },
-         "frame_index": { "dtype": "int64", "shape": [1], "names": null },
-         "episode_index": { "dtype": "int64", "shape": [1], "names": null },
-         "index": { "dtype": "int64", "shape": [1], "names": null },
-         "task_index": { "dtype": "int64", "shape": [1], "names": null }
-     }
- }
- ```
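For reference, a minimal sketch of pulling one episode out of this layout, assuming `huggingface_hub` and `pandas` are installed; the repo id is a placeholder, since this page does not name it:

```python
# Minimal sketch: fetch one episode parquet using the data_path template
# from meta/info.json above. The repo id below is an assumption.
from huggingface_hub import hf_hub_download
import pandas as pd

REPO_ID = "jadechoghari/smol-libero"  # hypothetical repo id

# data_path = "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet"
episode_chunk, episode_index = 0, 0
filename = f"data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet"

local_path = hf_hub_download(repo_id=REPO_ID, filename=filename, repo_type="dataset")
episode = pd.read_parquet(local_path)

print(episode.columns.tolist())  # feature names from the schema above
print(len(episode))              # number of frames in episode 0
```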
 
 
 
 
 
## Citation
---

This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
+ # Dataset Card for **Smol-LIBERO**
+
+ ## Dataset Summary
+ Smol-LIBERO is a **compact version of the LIBERO benchmark**, built to make experimentation fast and accessible.
+ At just **1.79 GB** (compared to ~34 GB for the full LIBERO), it contains fewer trajectories and cameras while keeping the same multimodal structure.
+ Each sample includes:
+ - **Images** from two cameras (a fixed scene camera and a wrist-mounted camera)
+ - **Two types of robot state** (end-effector pose + gripper, and full 7-DoF joint positions)
+ - **Actions** (7 floats: an end-effector pose command plus gripper, per meta/info.json)
+
+ This setup is especially useful for comparing **low-dimensional state inputs** with **high-dimensional visual inputs**, or combining them in multimodal training.
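A minimal sketch of peeking at one frame with the 🤗 `datasets` library (listed under Libraries above); the repo id is an assumption:

```python
# Minimal sketch: inspect one frame. The repo id is an assumption.
from datasets import load_dataset

ds = load_dataset("jadechoghari/smol-libero", split="train")  # hypothetical id

sample = ds[0]
print(list(sample.keys()))  # image, state, and action fields
print(sample["action"])     # 7 floats for this frame
```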
+
+ ---

## Dataset Structure
+ ### Data Fields
+ - **`observation.images.image`**: 256×256×3 RGB image (fixed scene camera)
+ - **`observation.images.wrist_image`**: 256×256×3 RGB image (wrist-mounted camera)
+ - **`observation.state.end_effector`** *(8 floats)*: end-effector Cartesian pose + gripper
+   `[x, y, z, roll, pitch, yaw, gripper, gripper]`
+ - **`observation.state.joint`** *(7 floats)*: full joint angles
+   `[joint_1, …, joint_7]`
+ - **`action`** *(7 floats)*: target end-effector command plus gripper
+   `[x, y, z, roll, pitch, yaw, gripper]`
+
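A small sketch of stacking the two image fields into one visual observation, assuming `sample` from the loading sketch above and that the image columns decode to PIL images (an assumption about the parquet storage):

```python
import numpy as np

# `sample` comes from the load_dataset sketch above; decoding to PIL
# images is an assumption about how this parquet stores the image columns.
scene = np.asarray(sample["observation.images.image"])        # (256, 256, 3) uint8
wrist = np.asarray(sample["observation.images.wrist_image"])  # (256, 256, 3) uint8

visual_obs = np.stack([scene, wrist], axis=0)  # (2, 256, 256, 3): both views
print(visual_obs.shape, visual_obs.dtype)
```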
+ ---
+
+ ## Why is it smaller than LIBERO?
+ - **Fewer trajectories/tasks** → a subset of the full benchmark (a single task across 50 episodes, per meta/info.json)
+ - **Only two camera views** → reduced visual redundancy
+ - **Reduced total frames** → 13,021 frames in total, recorded at 20 FPS
+
+ That’s why Smol-LIBERO is **1.79 GB instead of 34 GB**.
+
+ ---
+
+ ## Intended Uses
+ - Quick prototyping and debugging
+ - Comparing joint-space vs. Cartesian state inputs (see the sketch after this list)
+ - Training small VLA baselines before scaling to the full LIBERO
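A sketch of the joint-space vs. Cartesian comparison, reusing `sample` from the loading sketch above; field names follow the Data Fields list:

```python
import numpy as np

# `sample` comes from the loading sketch above; field names follow the card.
ee_state = np.asarray(sample["observation.state.end_effector"], dtype=np.float32)  # (8,)
joint_state = np.asarray(sample["observation.state.joint"], dtype=np.float32)      # (7,)

# Two competing low-dimensional policy inputs, plus a combined variant:
cartesian_input = ee_state                                # pose + gripper
joint_input = joint_state                                 # joint angles only
combined_input = np.concatenate([ee_state, joint_state])  # (15,) both together
```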
+
+ ---
+
+ ## Limitations
+ - A single task and far less task/visual diversity than the full LIBERO
+ - Only two camera views (one fixed, one wrist-mounted)
+ - May not fully represent generalization behavior on larger benchmarks

## Citation