---
license: apache-2.0
task_categories:
  - visual-question-answering
tags:
  - video
  - spatial-intelligence
  - counting
  - benchmark
  - streaming
language:
  - en
---

# VSI-SUPER-Count

Website | Paper | GitHub | Models

Authors: Shusheng Yang*, Jihan Yang*, Pinzhi Huang†, Ellis Brown†, et al.

VSI-SUPER-Count is a benchmark for testing continual counting capabilities across changing viewpoints and scenes in arbitrarily long videos. It challenges models to maintain accurate object counts as new objects appear throughout extended video sequences.

## Overview

VSI-SUPER-Count evaluates spatial supersensing by testing whether models can:

  • Count distinct objects across long video sequences (10-120 minutes)
  • Track object appearances as viewpoints change
  • Maintain counting accuracy in streaming scenarios with multiple query points

This benchmark is part of VSI-SUPER, which also includes VSI-SUPER-Recall.

## Dataset Structure

The dataset contains two evaluation modes:

### Regular Mode

For standard counting evaluation:

```json
{
    "video_path": "10mins/00000000.mp4",
    "question": "How many different socket(s) are there in the video?",
    "answer": 27.0,              # Final count (float)
    "split": "10mins",           # Video duration
    "query_times": null,         # Not used in regular mode
    "answers": null              # Not used in regular mode
}
```
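Both modes share the same schema; which fields are populated tells you the mode. A small dataclass makes this explicit when loading records. This is a minimal sketch: the `CountRecord` class and its `is_streaming` property are illustrative helpers, not part of the dataset or its official tooling.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class CountRecord:
    """One VSI-SUPER-Count record; covers regular and streaming modes.

    Illustrative helper, not shipped with the dataset.
    """
    video_path: str
    question: str
    split: str
    answer: Optional[float] = None           # regular mode only
    query_times: Optional[List[int]] = None  # streaming mode only
    answers: Optional[List[int]] = None      # streaming mode only

    @property
    def is_streaming(self) -> bool:
        # Streaming records carry per-timestamp ground truth instead
        # of a single final count.
        return self.query_times is not None

# The regular-mode example from above:
record = CountRecord(
    video_path="10mins/00000000.mp4",
    question="How many different socket(s) are there in the video?",
    split="10mins",
    answer=27.0,
)
print(record.is_streaming)  # False
```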

### Streaming Mode

For continual counting at multiple timestamps:

```json
{
    "video_path": "10mins/00000037.mp4",
    "question": "How many different ceiling light(s) are there in the video?",
    "answer": null,              # Not used in streaming mode
    "split": "10mins_streaming",
    "query_times": [75, 132, 150, 158, 185, 198, 225, 600, 635, 32768],  # Query timestamps (seconds)
    "answers": [4, 7, 8, 8, 8, 8, 8, 29, 29, 29]  # Ground truth counts at each timestamp
}
```
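In streaming mode, a model is expected to emit one count per query timestamp, which can then be compared against `answers`. The exact-match accuracy below is only an illustration of how such a comparison might look; consult the paper and GitHub repository for the official scoring metric.

```python
from typing import List

def streaming_accuracy(predictions: List[int], answers: List[int]) -> float:
    """Fraction of query timestamps where the predicted count matches
    the ground truth exactly.

    Illustrative metric only; the benchmark's official metric may differ.
    """
    if len(predictions) != len(answers):
        raise ValueError("one prediction is required per query timestamp")
    correct = sum(p == a for p, a in zip(predictions, answers))
    return correct / len(answers)

# Ground truth from the streaming example above, against hypothetical
# model outputs:
gt = [4, 7, 8, 8, 8, 8, 8, 29, 29, 29]
pred = [4, 7, 8, 8, 8, 9, 8, 28, 29, 29]
print(streaming_accuracy(pred, gt))  # 0.8
```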

Key points:

  • Videos are downsampled to 24 FPS
  • Questions follow the format: "How many different {object}(s) are there in the video?"
  • Regular mode: single count answer for entire video
  • Streaming mode: 10 query timestamps with corresponding counts
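Because videos are downsampled to a fixed 24 FPS, a query timestamp in seconds maps to a frame index by simple multiplication. A minimal sketch (the `frame_index` helper is hypothetical, for illustration only):

```python
FPS = 24  # dataset videos are downsampled to 24 FPS

def frame_index(t_seconds: float, fps: int = FPS) -> int:
    """0-based index of the frame shown at a given query timestamp."""
    return int(t_seconds * fps)

# Query timestamps from the streaming example above:
print(frame_index(75))   # 1800
print(frame_index(600))  # 14400
```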


## Citation

```bibtex
@article{yang2025cambrian,
  title={Cambrian-S: Towards Spatial Supersensing in Video},
  author={Yang, Shusheng and Yang, Jihan and Huang, Pinzhi and Brown, Ellis and Yang, Zihao and Yu, Yue and Tong, Shengbang and Zheng, Zihan and Xu, Yifan and Wang, Muhan and Lu, Danhao and Fergus, Rob and LeCun, Yann and Fei-Fei, Li and Xie, Saining},
  journal={arXiv preprint arXiv:2511.04670},
  year={2025}
}
```