Add task category, library name, and update paper links
#1 by nielsr (HF Staff)
README.md CHANGED
@@ -1,19 +1,23 @@
 ---
 license: cc-by-sa-4.0
-pretty_name: TerraMesh
 size_categories:
 - 1M<n<10M
+pretty_name: TerraMesh
 viewer: false
 tags:
 - Earth observation
 - Multimodal
 - Pre-training
+task_categories:
+- image-feature-extraction
+library_name: webdataset
 ---
 
 # TerraMesh
 
 > **A planetary‑scale, multimodal analysis‑ready dataset for Earth‑Observation foundation models.**
 
+Paper: [TerraMesh: A Planetary Mosaic of Multimodal Earth Observation Data](https://huggingface.co/papers/2504.11172)
 
 **TerraMesh** merges data from **Sentinel‑1 SAR, Sentinel‑2 optical, Copernicus DEM, NDVI and land‑cover** sources into more than **9 million co‑registered patches** ready for large‑scale representation learning.
 
@@ -89,7 +93,7 @@ Heat map of the sample count in a one-degree grid. | Monthly distribution of al
 TerraMesh was used to pre-train [TerraMind-B](https://huggingface.co/ibm-esa-geospatial/TerraMind-1.0-base).
 On the six evaluated segmentation tasks from the PANGAEA benchmark, TerraMind‑B reaches an average mIoU of 66.6%, the best overall score with an average rank of 2.33. This amounts to roughly a 3 pp improvement over the next‑best open model (CROMA), underscoring the benefits of pre‑training on TerraMesh.
 Compared to an ablation model pre-trained only on SSL4EO-S12 locations, TerraMind-B performs about 1 pp better overall, with stronger global generalization on more remote tasks such as CTM-SS.
-More details in our [paper](https://
+More details in our [paper](https://huggingface.co/papers/2504.11172).
 
 ---
 
@@ -260,7 +264,8 @@ If you use TerraMesh, please cite:
   title={Terramesh: A planetary mosaic of multimodal earth observation data},
   author={Blumenstiel, Benedikt and Fraccaro, Paolo and Marsocci, Valerio and Jakubik, Johannes and Maurogiovanni, Stefano and Czerkawski, Mikolaj and Sedona, Rocco and Cavallaro, Gabriele and Brunschwiler, Thomas and Bernabe-Moreno, Juan and others},
   journal={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
-  year={2025}
+  year={2025},
+  url={https://huggingface.co/papers/2504.11172}
 }
 ```
 
@@ -282,4 +287,4 @@ The LULC data is provided by [ESRI, Impact Observatory, and Microsoft](https://p
 
 The cloud masks used for augmenting the LULC maps and provided as metadata are produced using the [SEnSeIv2](https://github.com/aliFrancis/SEnSeIv2/tree/main?tab=readme-ov-file) model.
 
-The DEM data is produced using [Copernicus WorldDEM-30](https://dataspace.copernicus.eu/explore-data/data-collections/copernicus-contributing-missions/collections-description/COP-DEM) © DLR e.V. 2010-2014 and © Airbus Defence and Space GmbH 2014-2018, provided under COPERNICUS by the European Union and ESA; all rights reserved.
+The DEM data is produced using [Copernicus WorldDEM-30](https://dataspace.copernicus.eu/explore-data/data-collections/copernicus-contributing-missions/collections-description/COP-DEM) © DLR e.V. 2010-2014 and © Airbus Defence and Space GmbH 2014-2018, provided under COPERNICUS by the European Union and ESA; all rights reserved.
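
Since the card now declares `library_name: webdataset` (with the dataset viewer disabled), the shards are intended to be streamed with the `webdataset` library rather than loaded through `datasets`. Below is a minimal loading sketch; the shard pattern and sample keys are illustrative assumptions, not the dataset's documented layout.

```python
# Minimal sketch of streaming WebDataset-format shards for pre-training.
# NOTE: the shard pattern and sample keys below are hypothetical;
# substitute the actual file layout from the dataset repository.
import webdataset as wds

shards = "data/train/shard-{000000..000099}.tar"  # hypothetical pattern

dataset = (
    wds.WebDataset(shards)
    .shuffle(1000)   # buffer-based shuffling, suitable for streamed pre-training
    .decode()        # default decoders; GeoTIFF rasters may need a custom handler
)

for sample in dataset:
    # Each sample is a dict: "__key__" plus one entry per file in the tar
    # that shares that key (e.g. one entry per modality).
    print(sample["__key__"], list(sample.keys()))
    break
```

Sequential tar reads plus buffer-based shuffling are what make the WebDataset format practical at the multi-million-patch scale described in the card.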