---
license: mit
task_categories:
- object-detection
tags:
- curb-ramp
- accessibility
- streetscape
---
**RampNet** is a two-stage pipeline that addresses the scarcity of curb ramp detection datasets by using government location data to automatically generate over 210,000 annotated Google Street View panoramas. The resulting dataset is then used to train a state-of-the-art curb ramp detection model that significantly outperforms prior work. This repo provides the generated curb ramp dataset used to train the model.
Each Parquet row contains a panoramic image and `curb_ramp_points_normalized`, a list of normalized `<x, y>` points marking curb ramps in the image. To recover pixel coordinates, multiply x by the image width and y by the image height.
## Example Usage
```py
from datasets import load_dataset

# Stream the validation split so the full dataset is not downloaded up front
dataset = load_dataset("projectsidewalk/rampnet-dataset", split="validation", streaming=True)
example = next(iter(dataset))

image = example["image"]  # PIL image of the panorama
pano_id = example["pano_id"]
curb_ramp_points = example["curb_ramp_points_normalized"]

# Convert normalized <x, y> points back to pixel coordinates
width, height = image.size
unnormalized_points = [
    (x * width, y * height) for x, y in curb_ramp_points
]

print(f"Pano ID: {pano_id}")
print(f"Curb Ramp Points (unnormalized): {unnormalized_points}")
```
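To sanity-check the labels visually, the unnormalized points can be drawn onto the panorama. Below is a minimal sketch using Pillow; the `draw_points` helper and the blank demo canvas are illustrative (with the snippet above, you would pass the panorama `image` and `unnormalized_points` instead).

```py
from PIL import Image, ImageDraw

def draw_points(image, points, radius=10):
    """Return a copy of `image` with a red circle at each (x, y) pixel coordinate."""
    annotated = image.copy()
    draw = ImageDraw.Draw(annotated)
    for x, y in points:
        draw.ellipse(
            (x - radius, y - radius, x + radius, y + radius),
            outline="red", width=4,
        )
    return annotated

# Demo on a blank canvas with two hypothetical points
canvas = Image.new("RGB", (512, 256), "gray")
out = draw_points(canvas, [(100.0, 80.0), (400.0, 200.0)])
out.save("annotated_pano.jpg")
```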
## Dataset Summary
| Name | Description | # of Panoramas | # of Labels |
| :--- | :--- | :--- | :--- |
| **Open Government Datasets** | The initial source of curb ramp locations (<lat, long> coordinates) from 3 US cities (NYC, Portland, Bend) with "Good" location precision. Used as input for Stage 1. | N/A (Geo-data) | 276,615¹ |
| **Project Sidewalk Crop Pre-training Set** | A subset of Project Sidewalk data used to initially pre-train the crop-level model in Stage 1, which identifies curb ramps within a small, directional image crop. Can be downloaded with `stage_one/crop_model/ps_model/data/download_data.py` | 20,698 | 27,704 |
| [**Manual Crop Model Training Set**](https://huggingface.co/datasets/projectsidewalk/rampnet-crop-model-dataset) | A small, fully and manually labeled dataset used for a second round of training on the crop-level model to improve its precision and recall. | 312 | 1,212 |
| [**⭐ RampNet Stage 1 Dataset (Final Output)**](https://huggingface.co/datasets/projectsidewalk/rampnet-dataset) | The main, large-scale dataset generated by the Stage 1 auto-translation pipeline, containing curb ramp pixel coordinates on GSV panoramas. This is the primary dataset contribution. | 214,376 | 849,895 |
| **Manual Ground Truth Set (1k Panos)** | A set of 1,000 panoramas randomly sampled and then fully and manually labeled. This serves as the "gold standard" for evaluating both Stage 1 and Stage 2 performance. Images are included in the Stage 1 Dataset on Hugging Face, but the labels themselves are in `manual_labels`. | 1,000 | 3,919 |
¹This number is the sum of curb ramp locations from the three cities with "Good" location precision listed in Table 1: New York City (217,680), Portland (45,324), and Bend (13,611).
This HF repo is for [**⭐ RampNet Stage 1 Dataset (Final Output)**](https://huggingface.co/datasets/projectsidewalk/rampnet-dataset)!
## Citation
```bibtex
@inproceedings{omeara2025rampnet,
author = {John S. O'Meara and Jared Hwang and Zeyu Wang and Michael Saugstad and Jon E. Froehlich},
title = {{RampNet: A Two-Stage Pipeline for Bootstrapping Curb Ramp Detection in Streetscape Images from Open Government Metadata}},
booktitle = {{ICCV'25 Workshop on Vision Foundation Models and Generative AI for Accessibility: Challenges and Opportunities (ICCV 2025 Workshop)}},
year = {2025},
  doi = {10.48550/arXiv.2508.09415},
  url = {https://cv4a11y.github.io/ICCV2025/index.html}
}
```