
Dataset Card for IndoorCrowd

Dataset Summary

IndoorCrowd is a multi-scene dataset designed for indoor human detection, instance segmentation, and multi-object tracking. It captures diverse challenges such as viewpoint variation, partial occlusion, and varying crowd density across four distinct campus locations (ACS-EC, ACS-EG, IE-Central, R-Central). Faces are explicitly blurred to preserve privacy, making it suitable for safe research into intelligent crowd management and behaviour tracking.

The dataset consists of 31 videos sampled at 5 FPS, totalling 9,913 frames.
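The fixed-rate sampling above can be sketched as stride-based frame selection. This is a minimal sketch, assuming the source capture rate (not stated on this card) is passed in as a parameter; `sample_frame_indices` is a hypothetical helper, not part of the release:

```python
def sample_frame_indices(n_frames: int, src_fps: float, target_fps: float = 5.0) -> list[int]:
    """Indices of the source frames kept when downsampling to target_fps.

    Hypothetical helper: the card states 5 FPS sampling but not the source
    capture rate, so src_fps is taken as an input.
    """
    stride = src_fps / target_fps  # e.g. a 30 FPS source keeps every 6th frame
    return [round(i * stride) for i in range(int(n_frames / stride))]

# 60 frames captured at 30 FPS, resampled to 5 FPS -> 10 evenly spaced indices
print(sample_frame_indices(60, 30.0))
```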

Subsets

  1. Object Detection and Segmentation: 9,913 frames featuring bounding boxes and instance segmentation masks. Includes a rigorously annotated 620-frame pure-human control subset for foundation-model benchmarking.
  2. Multi-Object Tracking (MOT): A 2,552-frame tracking subset providing continuous identity tracks following the MOTChallenge format.
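MOTChallenge annotation files are plain CSV, one box per line: `frame, id, bb_left, bb_top, bb_width, bb_height, conf, ...` (trailing fields vary by file type). A minimal parser sketch — the `MOTBox` helper is illustrative, not part of the release:

```python
from dataclasses import dataclass

@dataclass
class MOTBox:
    frame: int      # 1-based frame index
    track_id: int   # persistent identity across frames
    left: float     # bb_left, in pixels
    top: float      # bb_top, in pixels
    width: float
    height: float
    conf: float

def parse_mot_line(line: str) -> MOTBox:
    """Parse one CSV line of a MOTChallenge annotation file,
    ignoring the format-dependent trailing fields."""
    fields = line.strip().split(",")
    f, tid = int(fields[0]), int(fields[1])
    l, t, w, h, c = (float(v) for v in fields[2:7])
    return MOTBox(f, tid, l, t, w, h, c)

print(parse_mot_line("1,3,120.5,60.0,45.0,110.0,1,-1,-1,-1"))
```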

Supported Tasks

  • object-detection: Detecting human bounding boxes (Baselines benchmarked: YOLOv8n, YOLOv26n, RT-DETR-L).
  • image-segmentation: Generating instance-level masks for people in crowded indoor geometries.
  • video-object-tracking: Maintaining human identity across consecutive frames via tracking algorithms (Baselines benchmarked: ByteTrack, BoT-SORT, OC-SORT).
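Detection baselines such as these are conventionally scored by matching predictions to ground truth via Intersection-over-Union (IoU). A minimal sketch for corner-format boxes, not the card's own evaluation code:

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two boxes given as (x1, y1, x2, y2)."""
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

print(iou((0, 0, 10, 10), (5, 0, 15, 10)))  # overlap 50, union 150 -> ~0.333
```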

Dataset Creation

Curation Rationale

Outdoor pedestrian datasets currently dominate detection and tracking research. Indoor environments introduce a distinct set of challenges: camera-view obstructions (pillars, furniture), structural occlusions, near-to-distal scale variance, and abrupt density fluctuations.

Annotations

Annotations were produced using a semi-automated pipeline:

  1. Auto-labelling: Foundation models such as SAM3, GroundingSAM, and EfficientGroundingSAM generate initial candidate masks and tracklets.
  2. Human Correction: Expert human reviewers used SAM 2.1 to manually delete false positives, append missing masks, correct identity switches, and linearly interpolate gaps, ensuring high-fidelity ground truth.
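The linear-interpolation step in point 2 can be sketched as follows. This assumes boxes are stored as (x, y, w, h) tuples; the authors' actual annotation tooling is not published with this card:

```python
def interpolate_gap(box_start, box_end, gap_len):
    """Linearly interpolate (x, y, w, h) boxes across gap_len missing frames.

    box_start and box_end sit on the frames bracketing the gap; one
    interpolated box is returned per missing frame in between.
    """
    filled = []
    for k in range(1, gap_len + 1):
        t = k / (gap_len + 1)  # fractional position inside the gap
        filled.append(tuple(a + t * (b - a) for a, b in zip(box_start, box_end)))
    return filled

# A 2-frame gap between x=0 and x=30 yields boxes near x=10 and x=20
print(interpolate_gap((0, 0, 10, 10), (30, 0, 10, 10), 2))
```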

Data Splits

The four capture locations span varied crowd-density regimes:

  • ACS-EC: A dense multi-level atrium setting with small instance scales and high occlusion (79.3% dense frames).
  • ACS-EG: A narrow ground-level corridor with substantial person-scale variation along its length.
  • IE-Central: An intermediate seating/entrance hall environment.
  • R-Central: An overhead-view atrium with prominent structural columns causing regular occlusions.

Personal and Sensitive Information

All human faces in the raw footage were blurred by an automated de-identification pipeline prior to release. No audio, demographic attributes, or personal identifiers are collected.
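A de-identification step of this kind can be sketched as block pixelation of a detected face region. This is illustrative only — the card does not document the actual blurring method, and `pixelate_region` is a hypothetical helper operating on a grayscale image given as nested lists:

```python
def pixelate_region(img, x, y, w, h, block=8):
    """Return a copy of img with the w x h region at (x, y) pixelated.

    Each block x block tile inside the region is replaced by its mean
    value, destroying fine facial detail while keeping coarse structure.
    """
    out = [row[:] for row in img]  # deep copy so the input stays untouched
    for by in range(y, y + h, block):
        for bx in range(x, x + w, block):
            ys = range(by, min(by + block, y + h))
            xs = range(bx, min(bx + block, x + w))
            vals = [out[r][c] for r in ys for c in xs]
            avg = sum(vals) // len(vals)
            for r in ys:
                for c in xs:
                    out[r][c] = avg
    return out
```

In a real pipeline the (x, y, w, h) rectangles would come from a face detector run over every frame before release.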

Additional Information

Licensing Information

The dataset is released under a license restricting its use strictly to non-commercial computer vision research. It prohibits surveillance and any re-identification of individuals.
