---
license: apache-2.0
task_categories:
- visual-question-answering
tags:
- video
- spatial-intelligence
- counting
- benchmark
- streaming
language:
- en
---

# VSI-SUPER-Count

**[Website](https://vision-x-nyu.github.io/cambrian-s.github.io/)** | **[Paper](https://arxiv.org/abs/2511.04670)** | **[GitHub](https://github.com/cambrian-mllm/cambrian-s)** | **[Models](https://huggingface.co/collections/nyu-visionx/cambrian-s-models)**

**Authors**: [Shusheng Yang*](https://github.com/vealocia), [Jihan Yang*](https://jihanyang.github.io/), [Pinzhi Huang†](https://pinzhihuang.github.io/), [Ellis Brown†](https://ellisbrown.github.io/), et al.

VSI-SUPER-Count is a benchmark for continual counting across changing viewpoints and scenes in arbitrarily long videos. It challenges models to maintain accurate object counts as new objects appear throughout extended video sequences.

## Overview

VSI-SUPER-Count evaluates spatial supersensing by testing whether models can:

- Count distinct objects across long video sequences (10-120 minutes)
- Track object appearances as viewpoints change
- Maintain counting accuracy in streaming scenarios with multiple query points

This benchmark is part of [VSI-SUPER](https://huggingface.co/collections/nyu-visionx/vsi-super), which also includes [VSI-SUPER-Recall](https://huggingface.co/datasets/nyu-visionx/VSI-SUPER-Recall).

## Dataset Structure

The dataset contains two evaluation modes:

### Regular Mode

For standard counting evaluation, each example carries a single final count for the full video:

```python
{
    "video_path": "10mins/00000000.mp4",
    "question": "How many different socket(s) are there in the video?",
    "answer": 27.0,       # Final count (float)
    "split": "10mins",    # Video duration
    "query_times": None,  # Not used in regular mode
    "answers": None,      # Not used in regular mode
}
```

### Streaming Mode

For continual counting, each example provides ground-truth counts at multiple query timestamps:

```python
{
    "video_path": "10mins/00000037.mp4",
    "question": "How many different ceiling light(s) are there in the video?",
    "answer": None,  # Not used in streaming mode
    "split": "10mins_streaming",
    "query_times": [75, 132, 150, 158, 185, 198, 225, 600, 635, 32768],  # Query timestamps (seconds)
    "answers": [4, 7, 8, 8, 8, 8, 8, 29, 29, 29],  # Ground-truth counts at each timestamp
}
```

**Key points:**

- Videos are downsampled to 24 FPS
- Questions follow the format: "How many different [object](s) are there in the video?"
- Regular mode: a single count answer for the entire video
- Streaming mode: 10 query timestamps with corresponding ground-truth counts

## Related

- **[VSI-SUPER-Recall](https://huggingface.co/datasets/nyu-visionx/VSI-SUPER-Recall)**: Long-horizon spatial observation and recall
- **[VSI-Bench](https://huggingface.co/datasets/nyu-visionx/VSI-Bench)**: Visual-spatial intelligence evaluation
- **[VSI-590K](https://huggingface.co/datasets/nyu-visionx/VSI-590K)**: Training dataset for spatial reasoning

## Citation

```bibtex
@article{yang2025cambrian,
  title={Cambrian-S: Towards Spatial Supersensing in Video},
  author={Yang, Shusheng and Yang, Jihan and Huang, Pinzhi and Brown, Ellis and Yang, Zihao and Yu, Yue and Tong, Shengbang and Zheng, Zihan and Xu, Yifan and Wang, Muhan and Lu, Danhao and Fergus, Rob and LeCun, Yann and Fei-Fei, Li and Xie, Saining},
  journal={arXiv preprint arXiv:2511.04670},
  year={2025}
}
```
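
## Example Usage

Below is a minimal, unofficial sketch of how one might load the benchmark and score predictions in both modes. The repo id `nyu-visionx/VSI-SUPER-Count`, the `test` split name, the hypothetical `model_count` helper, and the use of Mean Relative Accuracy (the counting metric from VSI-Bench) are assumptions here, not a description of the official evaluation script.

```python
# Unofficial evaluation sketch. Assumptions: repo id, split name, the
# `model_count` placeholder, and MRA as the scoring metric.
from datasets import load_dataset


def mean_relative_accuracy(pred: float, gt: float) -> float:
    """MRA as defined in VSI-Bench: average over thresholds
    theta in {0.50, 0.55, ..., 0.95} of 1[|pred - gt| / gt < 1 - theta]."""
    if gt == 0:
        return float(pred == 0)
    thresholds = [0.5 + 0.05 * i for i in range(10)]
    rel_err = abs(pred - gt) / gt
    return sum(rel_err < 1 - t for t in thresholds) / len(thresholds)


def model_count(video_path: str, question: str, end_time: int | None) -> float:
    """Hypothetical model call: return the model's running count after
    watching the video up to `end_time` seconds (None = full video)."""
    raise NotImplementedError


ds = load_dataset("nyu-visionx/VSI-SUPER-Count", split="test")  # repo/split assumed

scores = []
for ex in ds:
    if ex["query_times"] is None:
        # Regular mode: score the single final count.
        pred = model_count(ex["video_path"], ex["question"], None)
        scores.append(mean_relative_accuracy(pred, ex["answer"]))
    else:
        # Streaming mode: score the running count at every query timestamp,
        # then average over the 10 query points.
        per_query = [
            mean_relative_accuracy(
                model_count(ex["video_path"], ex["question"], t), gt
            )
            for t, gt in zip(ex["query_times"], ex["answers"])
        ]
        scores.append(sum(per_query) / len(per_query))

print(f"Average score: {sum(scores) / len(scores):.4f}")
```

Averaging per-timestamp scores in streaming mode is one plausible aggregation; it rewards models that keep their running count accurate throughout the video rather than only at the end.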