---
dataset_info:
features:
- name: episode_id
dtype: int64
- name: goal
dtype: string
- name: screenshots_b64
sequence: string
- name: actions
list:
- name: action_type
dtype: string
- name: app_name
dtype: string
- name: direction
dtype: string
- name: text
dtype: string
- name: x
dtype: int64
- name: y
dtype: int64
- name: step_instructions
sequence: string
splits:
- name: test
num_bytes: 13222360728
num_examples: 3051
- name: train
num_bytes: 54855940258
num_examples: 12232
download_size: 67435592965
dataset_size: 68078300986
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
- split: train
path: data/train-*
---
# Dataset Card: Android Control Episodes (with Screenshots)
## Dataset Summary
This dataset contains Android UI control episodes consisting of:
- episode-level metadata (`episode_id`, `goal`)
- step-by-step instructions (`step_instructions`)
- action sequences (`actions` as a list of structs)
- base64-encoded screenshots per step (`screenshots_b64`)
Each episode records a short interaction trajectory on an Android device, including what the agent/user attempted to do and how (tap, swipe, text input, etc.).
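Since screenshots are stored as base64 strings, decoding one back to raw image bytes takes a single call. The sketch below uses a stand-in episode dict with a fake PNG payload so it is self-contained; real episodes come from the dataset, and the decoded bytes can be handed to an image library (e.g., `PIL.Image.open(io.BytesIO(raw))`).

```python
import base64

# Stand-in episode dict; real episodes come from the dataset. The payload is a
# fake PNG header purely so this snippet runs without downloading anything.
fake_png = b"\x89PNG\r\n\x1a\n" + b"\x00" * 8
episode = {"screenshots_b64": [base64.b64encode(fake_png).decode("ascii")]}

# Decode the first screenshot back to raw image bytes.
raw = base64.b64decode(episode["screenshots_b64"][0])
```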
## Supported Tasks and Benchmarks
- UI task planning and control
- Multimodal grounding (text instructions + UI visual context)
- Imitation learning / behavior cloning for mobile agents
- Action prediction and trajectory modeling
## Languages
- Prompts and instructions: English
## Dataset Structure
### Data Fields
- `episode_id` (int64): Unique identifier of the episode.
- `goal` (string): Natural language description of the objective for the episode.
- `screenshots_b64` (list[string]): Base64-encoded screenshots captured along the trajectory. Large; dominates file sizes.
- `actions` (list[struct]): Sequence of actions taken. Each element has:
- `action_type` (string): e.g., "open_app", "click", "swipe", "type".
- `app_name` (string or null): App associated with the action, if any.
- `direction` (string or null): For gestures like swipe (e.g., "up", "down").
- `text` (string or null): Text content for typing actions, if applicable.
- `x` (int64 or null): X coordinate for tap/click.
- `y` (int64 or null): Y coordinate for tap/click.
- `step_instructions` (list[string]): Short imperative instructions per step.
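For readers who prefer code over prose, the schema above can be expressed as illustrative Python type hints. This is documentation only, not part of the dataset; the field names mirror the list above.

```python
from typing import List, Optional, TypedDict

class Action(TypedDict):
    action_type: str        # e.g., "open_app", "click", "swipe", "type"
    app_name: Optional[str]
    direction: Optional[str]
    text: Optional[str]
    x: Optional[int]
    y: Optional[int]

class Episode(TypedDict):
    episode_id: int
    goal: str
    screenshots_b64: List[str]
    actions: List[Action]
    step_instructions: List[str]

# Minimal well-formed episode for illustration (not real data).
example: Episode = {
    "episode_id": 0,
    "goal": "Open the settings app",
    "screenshots_b64": [],
    "actions": [{"action_type": "open_app", "app_name": "Settings",
                 "direction": None, "text": None, "x": None, "y": None}],
    "step_instructions": ["Open the settings app"],
}
```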
### Example Instance (images excluded for brevity)
```json
{
"episode_id": 13,
"goal": "On cruisedeals, I would like to view the cruise schedules for a four-night trip from New York to Canada.",
"actions": [
{"action_type": "open_app", "app_name": "CruiseDeals", "direction": null, "text": null, "x": null, "y": null},
{"action_type": "click", "app_name": null, "direction": null, "text": null, "x": 313, "y": 742},
{"action_type": "swipe", "app_name": null, "direction": "up", "text": null, "x": null, "y": null}
],
"step_instructions": [
"Open the cruisedeals app",
"Click on the suggested searched result",
"Swipe up to view schedules"
],
"screenshots_b64": ["<base64>", "<base64>", "<base64>"]
}
```
## Data Splits
- `train`: 12,232 episodes across 275 Parquet shards (1 row group per file)
- `test`: 3,051 episodes across 67 Parquet shards (1 row group per file)
The download size on the Hub is approximately 67.4 GB; the materialized dataset is ≈ 68.1 GB (train ≈ 54.9 GB, test ≈ 13.2 GB). The `screenshots_b64` column contributes the majority of the size.
Typical per-shard stats (example shard):
- ~45 episodes per shard
- ~6–7 screenshots per episode on average
- ~5–6 actions per episode on average
- ~5–6 step instructions per episode on average
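The per-episode averages above can be recomputed from any loaded shard. The sketch below uses two toy episode dicts standing in for a shard's rows (real rows come from the Hub); the same expressions work on dicts yielded by `datasets` streaming.

```python
# Toy episodes standing in for one shard's rows (not real data); the list
# lengths mirror the per-episode counts summarized above.
episodes = [
    {"actions": ["a"] * 5, "step_instructions": ["s"] * 5, "screenshots_b64": ["i"] * 6},
    {"actions": ["a"] * 7, "step_instructions": ["s"] * 7, "screenshots_b64": ["i"] * 8},
]

avg_actions = sum(len(e["actions"]) for e in episodes) / len(episodes)
avg_shots = sum(len(e["screenshots_b64"]) for e in episodes) / len(episodes)
```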
## Usage
### Load with Datasets (streaming to avoid full download)
```python
from datasets import load_dataset
ds = load_dataset(
"parquet",
data_files="hf://datasets/<owner>/<repo>@~parquet/default/train/*.parquet",
streaming=True,
)["train"]
for i, ex in enumerate(ds):
ex.pop("screenshots_b64", None) # skip large images for lightweight inspection
print(ex["episode_id"], ex["goal"])
if i >= 4:
break
```
### Materialize a small slice without streaming
```python
from datasets import load_dataset
small = load_dataset(
"parquet",
data_files="hf://datasets/<owner>/<repo>@~parquet/default/train/*.parquet",
split="train[:1%]",
)
print(len(small))
```
### DuckDB: schema preview and lightweight sampling
```python
import duckdb
# Peek schema of one shard
duckdb.sql("""
DESCRIBE SELECT * FROM
'hf://datasets/<owner>/<repo>@~parquet/default/train/0000.parquet'
""").show()
# Count rows via footer metadata only (no full scan). Note: parquet_metadata()
# returns one row per column chunk, so summing row_group_num_rows over it would
# overcount by the number of columns; parquet_file_metadata() gives one row per file.
duckdb.sql("""
SELECT SUM(num_rows) AS total_rows
FROM parquet_file_metadata('hf://datasets/<owner>/<repo>@~parquet/default/train/*.parquet')
""").show()
# Sample a few rows excluding heavy images
duckdb.sql("""
SELECT episode_id, goal,
list_length(actions) AS num_actions,
list_length(step_instructions) AS num_steps
FROM 'hf://datasets/<owner>/<repo>@~parquet/default/train/*.parquet'
LIMIT 10
""").show()
```
### PyArrow: footer-only metadata or row-group reads
```python
from huggingface_hub import HfFileSystem
import pyarrow.parquet as pq
fs = HfFileSystem()
path = "hf://datasets/<owner>/<repo>@~parquet/default/train/0000.parquet"
# Metadata-only: schema & row groups
with fs.open(path, "rb") as f:
pf = pq.ParquetFile(f)
print(pf.schema_arrow)
print(pf.metadata.num_rows, pf.num_row_groups)
# Read a single row group without images
with fs.open(path, "rb") as f:
pf = pq.ParquetFile(f)
cols = [c for c in pf.schema_arrow.names if c != "screenshots_b64"]
tbl = pf.read_row_group(0, columns=cols)
print(tbl.slice(0, 3).to_pydict())
```
### Dask: predicate/projection pushdown
```python
import dask.dataframe as dd
ddf = dd.read_parquet(
"hf://datasets/<owner>/<repo>@~parquet/default/train/*.parquet",
columns=["episode_id", "goal", "actions", "step_instructions"],
)
print(ddf.head())
```
## Efficiency Tips
- Prefer streaming or column selection to avoid downloading `screenshots_b64` unless needed.
- Use DuckDB `parquet_metadata(...)` or PyArrow `ParquetFile(...).metadata` to inspect sizes/counts without reading data pages.
- Each file has one row group; shard-level parallelism is straightforward.
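Because each file holds a single row group, a worker pool that maps one task per shard is a natural processing pattern. A minimal sketch, using placeholder paths (in practice the list would come from `HfFileSystem().glob(...)` on the repo) and a stand-in for the real per-shard work:

```python
from concurrent.futures import ThreadPoolExecutor

# Placeholder shard paths; real ones would be globbed from the Hub repo.
shard_paths = [f"{i:04d}.parquet" for i in range(4)]

def process_shard(path: str) -> str:
    # Stand-in for real work, e.g. pq.ParquetFile(path).read_row_group(0, columns=...)
    return path

# One task per shard; each file is independent, so no coordination is needed.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(process_shard, shard_paths))
```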
## Licensing
[More Information Needed]
## Citation
If you use this dataset in your work, please cite the source dataset/creators as appropriate and this repository. Example placeholder:
```bibtex
@misc{android_control_episodes,
title = {Android Control Episodes Dataset},
year = {2025},
url = {https://huggingface.co/datasets/smolagents/android-control}
}
```
## Limitations and Risks
- Screenshots are stored as base64 strings and can be large; consider storage and memory implications.
- Some action fields (e.g., `app_name`, `direction`, `text`) may be null for many steps.
- Visual UI elements may vary across Android versions/devices.
## Maintainers
[More Information Needed]