---
license: mit
task_categories:
- robotics
---

<div align="center">
<h2>Magma: A Foundation Model for Multimodal AI Agents</h2>

[Jianwei Yang](https://jwyang.github.io/)<sup>*</sup><sup>1</sup>&nbsp;
[Reuben Tan](https://cs-people.bu.edu/rxtan/)<sup>1</sup>&nbsp;
[Qianhui Wu](https://qianhuiwu.github.io/)<sup>1</sup>&nbsp;
[Ruijie Zheng](https://ruijiezheng.com/)<sup>2</sup>&nbsp;
[Baolin Peng](https://scholar.google.com/citations?user=u1CNjgwAAAAJ&hl=en&oi=ao)<sup>1</sup>&nbsp;
[Yongyuan Liang](https://cheryyunl.github.io)<sup>2</sup>

[Yu Gu](http://yu-gu.me/)<sup>1</sup>&nbsp;
[Mu Cai](https://pages.cs.wisc.edu/~mucai/)<sup>3</sup>&nbsp;
[Seonghyeon Ye](https://seonghyeonye.github.io/)<sup>4</sup>&nbsp;
[Joel Jang](https://joeljang.github.io/)<sup>5</sup>&nbsp;
[Yuquan Deng](https://scholar.google.com/citations?user=LTC0Q6YAAAAJ&hl=en)<sup>5</sup>&nbsp;
[Lars Liden](https://sites.google.com/site/larsliden)<sup>1</sup>&nbsp;
[Jianfeng Gao](https://www.microsoft.com/en-us/research/people/jfgao/)<sup>1</sup>

<sup>1</sup> Microsoft Research; <sup>2</sup> University of Maryland; <sup>3</sup> University of Wisconsin-Madison  
<sup>4</sup> KAIST; <sup>5</sup> University of Washington

<sup>*</sup> Project lead&nbsp;&nbsp; First authors&nbsp;&nbsp; Second authors&nbsp;&nbsp; Leadership

\[[arXiv Paper](https://www.arxiv.org/pdf/2502.13130)\] &nbsp; \[[Project Page](https://microsoft.github.io/Magma/)\] &nbsp; \[[Hugging Face Paper](https://huggingface.co/papers/2502.13130)\] &nbsp; \[[Github Repo](https://github.com/microsoft/Magma)\] &nbsp; \[[Video](https://www.youtube.com/watch?v=SbfzvUU5yM8)\] 

</div>

## Introduction

This dataset contains the robotic manipulation data used in Magma pretraining. For a fair comparison, we follow OpenVLA and use its "siglip-224px+mx-oxe-magic-soup" data mix.

The dataset is organized by source dataset, with each source folder containing one or more Arrow shard files (a quick way to verify these counts is sketched after the table):

| Folder                                                |   Number of Shards |
|:------------------------------------------------------|-------------------:|
| austin_buds_dataset_converted_externally_to_rlds      |                  1 |
| austin_sailor_dataset_converted_externally_to_rlds    |                  4 |
| austin_sirius_dataset_converted_externally_to_rlds    |                  3 |
| berkeley_autolab_ur5                                  |                  1 |
| berkeley_cable_routing                                |                  1 |
| berkeley_fanuc_manipulation                           |                  1 |
| bridge_orig                                           |                 17 |
| cmu_stretch                                           |                  1 |
| dlr_edan_shared_control_converted_externally_to_rlds  |                  1 |
| fractal20220817_data                                  |                 21 |
| furniture_bench_dataset_converted_externally_to_rlds  |                  4 |
| iamlab_cmu_pickup_insert_converted_externally_to_rlds |                  2 |
| jaco_play                                             |                  1 |
| kuka                                                  |                 21 |
| language_table                                        |                  8 |
| nyu_franka_play_dataset_converted_externally_to_rlds  |                  1 |
| roboturk                                              |                  3 |
| stanford_hydra_dataset_converted_externally_to_rlds   |                  4 |
| taco_play                                             |                  3 |
| toto                                                  |                  3 |
| ucsd_kitchen_dataset_converted_externally_to_rlds     |                  1 |
| utaustin_mutex                                        |                  4 |
| viola                                                 |                  1 |
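
If you want to verify this layout programmatically, here is a minimal sketch using `huggingface_hub`; the `bridge_orig` check is just an example and the expected count follows the table above:

```py
from collections import Counter

from huggingface_hub import list_repo_files

# Count the .arrow shards under each top-level source folder of the dataset repo
files = list_repo_files("MagmaAI/Magma-OXE-ToM", repo_type="dataset")
shard_counts = Counter(path.split("/")[0] for path in files if path.endswith(".arrow"))
print(shard_counts["bridge_orig"])  # expected: 17, per the table above
```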


### Features

In addition to the default features, we extracted the visual traces of the next 16 frames for each frame. Each example contains the following fields:

- `dataset_name`: Original source dataset name
- `image`: Image of the robot scene (binary)
- `task_string`: Description of the task
- `frame_index`: Index of the frame in the video
- `traj_index`: Index of the trajectory in the dataset
- `action`: Robot action vector (serialized numpy array)
- `trace`: Robot trajectory trace (serialized numpy array)
- `trace_visibility`: Visibility mask for the trace (serialized numpy array)

## Dataset Loading

### Full Dataset Load

```py
from datasets import load_dataset
dataset = load_dataset("MagmaAI/Magma-OXE-ToM", streaming=True, split="train")
```
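
With `streaming=True`, this returns an `IterableDataset`, so shards are fetched lazily as you iterate. As a quick sanity check, a minimal sketch that peeks at the first streamed example and prints the fields listed under Features (the binary fields are still serialized bytes at this point):

```py
# Peek at the first streamed example; serialized fields arrive as raw bytes
example = next(iter(dataset))
for name, value in example.items():
    print(name, type(value))
```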

### Individual Dataset Load
Or load an individual source dataset by specifying `data_dir`:

```py
from datasets import load_dataset
dataset = load_dataset("MagmaAI/Magma-OXE-ToM", data_dir="austin_buds_dataset_converted_externally_to_rlds", streaming=True, split="train")
```
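
To mix several source folders in one stream, one option is `interleave_datasets`; a minimal sketch, where the two folders below are arbitrary picks from the table above:

```py
from datasets import interleave_datasets, load_dataset

# Stream two source folders and interleave their examples into one iterable
folders = ["austin_buds_dataset_converted_externally_to_rlds", "berkeley_autolab_ur5"]
streams = [
    load_dataset("MagmaAI/Magma-OXE-ToM", data_dir=folder, streaming=True, split="train")
    for folder in folders
]
mixed = interleave_datasets(streams)
```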

### Sample Decoding

```py
import io
import pickle

from PIL import Image

# Helper function to deserialize binary fields (pickled numpy arrays)
def deserialize_array(bytes_data):
    return pickle.loads(bytes_data)

# Helper function to convert binary image data to PIL Image
def bytes_to_image(image_bytes):
    return Image.open(io.BytesIO(image_bytes))

for i, example in enumerate(dataset):   
    # decode the image: 256 x 256 x 3
    image = bytes_to_image(example['image'])
    # decode action: 1 x 7
    action = deserialize_array(example['action'])
    # decode trace: 1 x 17 x 256 x 2
    trace = deserialize_array(example['trace'])
    # decode trace visibility: 1 x 17 x 256 x 1
    trace_visibility = deserialize_array(example['trace_visibility'])
```
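
Continuing from the loop above, a hedged sketch of overlaying the decoded trace on the frame. It assumes the trace coordinates are already in the 256 x 256 pixel space of `image`, which this card does not state explicitly:

```py
from PIL import ImageDraw

# Draw the visible trace points of the (single) batch entry onto the frame
draw = ImageDraw.Draw(image)
points = trace[0].reshape(-1, 2)                        # (17 * 256, 2)
visible = trace_visibility[0].reshape(-1).astype(bool)  # (17 * 256,)
for x, y in points[visible]:
    draw.ellipse([x - 2, y - 2, x + 2, y + 2], fill=(255, 0, 0))
image.save("frame_with_trace.png")
```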