Frederik Hvilshøj committed · Commit 27d4734 · 1 Parent(s): 5416e86

Update README

Files changed (1): README.md (+51 -14)

README.md CHANGED
@@ -1,9 +1,14 @@
  ---
  license: odc-by
  language:
- - en
  size_categories:
- - 100M<n<1B
  ---
  
  # Dataset Card for E-MM1-100M
@@ -12,7 +17,7 @@ size_categories:
  
  ## Dataset Summary
  
- **E-MM1-100M** is a large-scale multimodal dataset of 100M+ data groups, pairing data from five modalities: audio, image, video, point cloud, and text.
  Each pair is a 5-tuple of a caption and an item from one of the four other modalities.
  The data and captions are sourced from [public data sources](https://github.com/encord-team/E-MM1/blob/main/SOURCE_DATASETS.md).
  The dataset was created to advance work on joint embeddings for multimodal applications like cross-modal retrieval. <br>
@@ -24,29 +29,61 @@ To visually explore the dataset, please visit our [E-MM1 Explorer](https://data.
  ## Dataset Splits
  
  We provide two data splits:
  - (this dataset) **E-MM1-100M (automated)**: very large, built via nearest-neighbour retrieval for pre-training applications.
  - **[E-MM1-1M](https://huggingface.co/datasets/encord-team/E-MM1-1M/) (annotated)**: validated with high quality, human-verified annotations for post-training applications.
  
- The **E-MM1-100M** split contains the large-scale dataset built with nearest-neighbour retrieval.
  For each of ~6.7M captions, we retrieved the top-16 nearest neighbours across all modalities, resulting in roughly 1B multimodal connections or 100M groups.
  
  ## Data Schema
  
- | Column | Type | Description |
- | -------- | ------- | ------- |
- | `encord_{modality}_id` | Integer | Unique ID for a specific file in that (dataset,modality) combination |
- | `save_folder` | String | Relative folder under your chosen root where the asset is stored. |
- | `file_name` | String | Filename of the asset |
- | `encord_text_id` | Integer | ID of the caption row |
- | `caption` | String | The caption text |
-
  
  ## Additional Information
  
  ### Usage Documentation
  
  Please find more detailed usage instructions on [Github](https://github.com/encord-team/E-MM1).
- We have also created an [interactive demonstration](data.encord.com) in Encord for visual exploration of the dataset, where you can find tutorials about the dataset.
  
  ### Contact
  
@@ -60,4 +97,4 @@ title={EBind: A Practical Approach To Space Binding},
  author={Broadbent, Jim and Cohen, Felix and Hvilshøj, Frederik and Landau, Eric and Sasoglu, Eren}
  year={2025}
  }
- ```

  ---
  license: odc-by
  language:
+ - en
  size_categories:
+ - 100M<n<1B
+ configs:
+ - config_name: default
+   data_files:
+   - split: train
+     path: "data/nn_*.csv"
  ---
  
  # Dataset Card for E-MM1-100M
 
  ## Dataset Summary
  
+ **E-MM1-100M** is a large-scale multimodal dataset of 100M+ data groups, pairing data from five modalities: audio, image, video, point cloud, and text.
  Each group is a 5-tuple of a caption and an item from each of the four other modalities.
  The data and captions are sourced from [public data sources](https://github.com/encord-team/E-MM1/blob/main/SOURCE_DATASETS.md).
  The dataset was created to advance work on joint embeddings for multimodal applications like cross-modal retrieval. <br>
 
  ## Dataset Splits
  
  We provide two data splits:
+ 
  - (this dataset) **E-MM1-100M (automated)**: very large, built via nearest-neighbour retrieval for pre-training applications.
  - **[E-MM1-1M](https://huggingface.co/datasets/encord-team/E-MM1-1M/) (annotated)**: validated with high-quality, human-verified annotations for post-training applications.
  
+ The **E-MM1-100M** split contains the large-scale dataset built with nearest-neighbour retrieval.
  For each of ~6.7M captions, we retrieved the top-16 nearest neighbours across all modalities, resulting in roughly 1B multimodal connections or 100M groups.
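As a purely illustrative sketch of the retrieval step described above (not the actual E-MM1 pipeline; it assumes caption and item embeddings have already been computed with some joint embedding model and L2-normalised), a top-k nearest-neighbour search can be written as:

```python
import numpy as np

def top_k_neighbours(caption_embs: np.ndarray, item_embs: np.ndarray, k: int = 16) -> np.ndarray:
    """Return indices of the k most similar items for every caption.

    Illustrative only: assumes both matrices are L2-normalised, so the dot
    product equals cosine similarity. Shapes: (n_captions, d) and (n_items, d).
    """
    sims = caption_embs @ item_embs.T                     # (n_captions, n_items)
    # argpartition finds the top-k per row without a full sort ...
    top_k = np.argpartition(-sims, kth=k - 1, axis=1)[:, :k]
    # ... then we order those k by similarity, best first.
    order = np.argsort(-np.take_along_axis(sims, top_k, axis=1), axis=1)
    return np.take_along_axis(top_k, order, axis=1)       # (n_captions, k)
```

Running a search like this per non-text modality and keeping, for each caption, its i-th neighbour from every modality in one row would be consistent with how `nn_index` is described in the schema below.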
  
  ## Data Schema
  
+ ```json
+ {
+   "caption": "(String) Caption of the audio, image, points, text, or video.",
+   "dataset_license_audio": "(String) The license of the dataset that the audio belongs to. !! This is not the license of the audio file !!",
+   "dataset_license_image": "(String) The license of the dataset that the image belongs to. !! This is not the license of the image file !!",
+   "dataset_license_points": "(String) The license of the dataset that the points belong to.",
+   "dataset_license_text": "(String) The license of the dataset that the text belongs to.",
+   "dataset_license_video": "(String) The license of the dataset that the video belongs to. !! This is not the license of the video file !!",
+   "encord_audio_id": "(Int64) Unique ID of the audio file (and segment).",
+   "encord_image_id": "(Int64) Unique ID of the image file.",
+   "encord_points_id": "(Int64) Unique ID of the points file.",
+   "encord_text_id": "(Int64) Unique ID of the text file.",
+   "encord_video_id": "(Int64) Unique ID of the video file (and segment).",
+   "end_time_audio": "(Int64) End time of the audio segment in seconds.",
+   "end_time_video": "(Int64) End time of the video segment in seconds.",
+   "file_id_audio": "(String) Audio identifier from the source dataset.",
+   "file_id_image": "(String) Image identifier from the source dataset.",
+   "file_id_points": "(String) 3D object identifier from the source dataset.",
+   "file_id_video": "(String) Video identifier from the source dataset.",
+   "file_name_audio": "(String) Filename of the audio file if downloaded with the download script.",
+   "file_name_image": "(String) Filename of the image file if downloaded with the download script.",
+   "file_name_points": "(String) Filename of the points file if downloaded with the download script.",
+   "file_name_video": "(String) Filename of the video file if downloaded with the download script.",
+   "nn_index": "(Int64) All items in the row are the `nn_index`-th nearest neighbors to the caption.",
+   "save_folder_audio": "(String) Folder name of the audio file if downloaded with the download script.",
+   "save_folder_image": "(String) Folder name of the image file if downloaded with the download script.",
+   "save_folder_points": "(String) Folder name of the points file if downloaded with the download script.",
+   "save_folder_video": "(String) Folder name of the video file if downloaded with the download script.",
+   "source_dataset_audio": "(String) Source dataset of the audio file.",
+   "source_dataset_image": "(String) Source dataset of the image file.",
+   "source_dataset_points": "(String) Source dataset of the points file.",
+   "source_dataset_text": "(String) Source dataset of the text file.",
+   "source_dataset_video": "(String) Source dataset of the video file.",
+   "start_time_audio": "(Int64) Start time of the audio segment in seconds.",
+   "start_time_video": "(Int64) Start time of the video segment in seconds.",
+   "youtube_id_audio": "(String) YouTube ID of the audio file.",
+   "youtube_id_video": "(String) YouTube ID of the video file."
+ }
+ ```
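To make the `save_folder_*` / `file_name_*` columns concrete, here is a minimal sketch of resolving local asset paths for one row. It is an assumption-laden illustration: it presumes the assets were fetched with the download script from the GitHub repo and that `root` is whatever directory you pointed that script at; the column names come from the schema above.

```python
from pathlib import Path

MODALITIES = ("audio", "image", "points", "video")

def asset_paths(row: dict, root: Path) -> dict[str, Path]:
    """Map each modality present in `row` to a local file path under `root`.

    Assumes assets live at <root>/<save_folder_modality>/<file_name_modality>,
    as produced by the repo's download script; modalities missing either
    field are skipped.
    """
    paths = {}
    for modality in MODALITIES:
        folder = row.get(f"save_folder_{modality}")
        name = row.get(f"file_name_{modality}")
        if folder and name:
            paths[modality] = root / folder / name
    return paths

# Hypothetical example values:
# asset_paths({"save_folder_image": "images", "file_name_image": "000123.jpg"}, Path("/data/e-mm1"))
# -> {"image": Path("/data/e-mm1/images/000123.jpg")}
```

Note that audio and video rows also carry `start_time_*` / `end_time_*` in seconds, so the referenced segment may be a slice of the downloaded file rather than the whole clip.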
  
  ## Additional Information
  
  ### Usage Documentation
  
  Please find more detailed usage instructions on [GitHub](https://github.com/encord-team/E-MM1).
+ We have also created an [interactive demonstration](https://data.encord.com) in Encord for visually exploring a subset of the dataset, where you will also find dataset tutorials.
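For a quick look at the rows themselves, the `configs` entry in the YAML front matter above means the split can also be read with the `datasets` library. A minimal sketch (the repository id is inferred from this card, and streaming is used to avoid materialising ~100M rows locally):

```python
from datasets import load_dataset

# Stream the automated split straight from the Hub; the "default" config and
# "train" split come from the `configs` entry in the YAML front matter above.
ds = load_dataset("encord-team/E-MM1-100M", split="train", streaming=True)

for row in ds.take(3):
    # Each row groups one caption with its retrieved items across modalities.
    print(row["caption"], row["nn_index"], row["encord_image_id"])
```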
  
  ### Contact
  
 
  author={Broadbent, Jim and Cohen, Felix and Hvilshøj, Frederik and Landau, Eric and Sasoglu, Eren},
  year={2025}
  }
+ ```