---
license: apache-2.0
task_categories:
- image-text-to-text
tags:
- image
language:
- en
size_categories:
- 1M<n<10M
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train/*
  - split: valid
    path: data/validation/*
  - split: test
    path: data/test/*
---
# Open Images V7 + Narratives

**Original source:** [Google Localized Narratives](https://google.github.io/localized-narratives/)
## Introduction
This dataset combines the images and annotations of the original Open Images Dataset V7 with the annotations from the Localized Narratives project.

Out of the 9M images, a subset of 1.9M is annotated with bounding boxes, object segmentations, visual relationships, localized narratives, point-level labels, and image-level labels; the remaining images carry image-level labels only.
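As a quick-start, the splits defined in the header above can be loaded with the Hugging Face `datasets` library. The sketch below is a minimal example under two assumptions: the repository id is a placeholder for this dataset's actual Hub path, and the printed feature names depend on the dataset's actual schema.

```python
from datasets import load_dataset

# Placeholder repo id -- substitute this dataset's actual Hub path.
REPO_ID = "user/open-images-v7-narratives"

# Stream the train split so the multi-million-image corpus is not
# downloaded up front; "valid" and "test" splits are available as well.
train = load_dataset(REPO_ID, split="train", streaming=True)

# Inspect the first example; the exact feature names (image, narrative
# text, boxes, ...) depend on this dataset's schema.
example = next(iter(train))
print(example.keys())
```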
## Acknowledgements
All credit goes to the teams behind the original Open Images Dataset V7 and Localized Narratives.
## Cite

Please consider citing the following related papers:

- "Extreme clicking for efficient object annotation", Papadopoulos et al., ICCV 2017.
- "We don't need no bounding-boxes: Training object class detectors using only human verification", Papadopoulos et al., CVPR 2016.
- "The Open Images Dataset V4: Unified image classification, object detection, and visual relationship detection at scale", Kuznetsova et al., arXiv:1811.00982, 2018.
- "Large-scale interactive object segmentation with human annotators", Benenson et al., CVPR 2019.
- "Natural Vocabulary Emerges from Free-Form Annotations", Pont-Tuset et al., arXiv, 2019.
- "From colouring-in to pointillism: revisiting semantic segmentation supervision", Benenson et al., arXiv, 2022.