---
license: apache-2.0
task_categories:
  - image-text-to-text
tags:
  - image
language:
  - en
size_categories:
  - 1M<n<10M
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train/*
      - split: valid
        path: data/validation/*
      - split: test
        path: data/test/*
---

# Open Images V7 + Narratives

Original Source | Google Localized Narratives

## 📌 Introduction

This dataset combines the images and annotations from the original Open Images Dataset V7 with the annotations from the Localized Narratives project.

Out of the 9M images, a subset of 1.9M images has been annotated with bounding boxes, object segmentations, visual relationships, localized narratives, point-level labels, and image-level labels. (The remaining images have only image-level labels.)
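
The splits declared in the card metadata (train/valid/test) can be loaded with the 🤗 Datasets library. The snippet below is a minimal sketch: the repository id is a placeholder assumption, not the confirmed Hub id of this dataset, and streaming is used here simply to avoid downloading millions of images up front.

```python
from datasets import load_dataset

# Placeholder repository id (assumption); replace with this dataset's actual Hub id.
REPO_ID = "Fhrozen/open-images-v7-narratives"

# Stream the train split so records are fetched lazily instead of downloading everything.
train = load_dataset(REPO_ID, split="train", streaming=True)

# Inspect the first record: an image together with its annotation fields.
first = next(iter(train))
print(first.keys())
```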

πŸ™ Acknowledgement

All credit goes to the original Open Images Dataset V7 and Localized Narratives teams.

## 📜 Cite

Please consider citing the following related papers:

  1. "Extreme clicking for efficient object annotation", Papadopolous et al., ICCV 2017.

  2. "We don't need no bounding-boxes: Training object class detectors using only human verification", Papadopolous et al., CVPR 2016.

  3. "The Open Images Dataset V4: Unified image classification, object detection, and visual relationship detection at scale", Kuznetsova et al., arXiv:1811.00982 2018.

  4. "Large-scale interactive object segmentation with human annotators", Benenson et al., CVPR 2019.

  5. "Natural Vocabulary Emerges from Free-Form Annotations", Pont-Tuset et al., arXiv 2019.

  6. "From couloring-in to pointillism: revisiting semantic segmentation supervision", Benenson et al., arXiv 2022.