---
annotations_creators:
- expert-generated
language:
- en
size_categories:
- 1K<n<10K
source_datasets:
- MS COCO-2017
task_categories:
- image-text-to-text
pretty_name: SPHERE
tags:
- image
- text
- vlm
- spatial-perception
- spatial-reasoning
configs:
- config_name: counting_only-paired-distance_and_counting
  data_files: single_skill/counting_only-paired-distance_and_counting.parquet
- config_name: counting_only-paired-position_and_counting
  data_files: single_skill/counting_only-paired-position_and_counting.parquet
- config_name: distance_only
  data_files: single_skill/distance_only.parquet
- config_name: position_only
  data_files: single_skill/position_only.parquet
- config_name: size_only
  data_files: single_skill/size_only.parquet
- config_name: distance_and_counting
  data_files: combine_2_skill/distance_and_counting.parquet
- config_name: distance_and_size
  data_files: combine_2_skill/distance_and_size.parquet
- config_name: position_and_counting
  data_files: combine_2_skill/position_and_counting.parquet
- config_name: object_manipulation
  data_files: reasoning/object_manipulation.parquet
- config_name: object_manipulation_w_intermediate
  data_files: reasoning/object_manipulation_w_intermediate.parquet
- config_name: object_occlusion
  data_files: reasoning/object_occlusion.parquet
- config_name: object_occlusion_w_intermediate
  data_files: reasoning/object_occlusion_w_intermediate.parquet
---

[SPHERE (Spatial Perception and Hierarchical Evaluation of REasoning)](https://huggingface.co/papers/2412.12693) is a benchmark for assessing spatial reasoning in vision-language models. It introduces a hierarchical evaluation framework with a human-annotated dataset, testing models on tasks ranging from basic spatial understanding to complex multi-skill reasoning. SPHERE poses significant challenges for both state-of-the-art open-source and proprietary models, revealing critical gaps in spatial cognition.

Project page: https://sphere-vlm.github.io/

<p align="center">
  <img src="https://cdn-uploads.huggingface.co/production/uploads/66178f9809f891c11c213a68/8yXBdWQI7KEbhoTPwxBZh.png" alt="SPHERE results summary" width="500"/>
</p>

<p align="center">
  <img src="https://cdn-uploads.huggingface.co/production/uploads/66178f9809f891c11c213a68/g0___KuwnEJ37i6-W96Ru.png" alt="SPHERE dataset examples" width="400"/>
</p>

## Dataset Usage

To use this dataset, run the following:

```python
from datasets import load_dataset

dataset = load_dataset("wei2912/SPHERE-VLM", "counting_only-paired-distance_and_counting")
```

where the second argument to `load_dataset` is the subset of your choice (see [Dataset Structure](#dataset-structure)).
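If you prefer to enumerate the subsets programmatically rather than copying their names from this card, a minimal sketch using the `datasets` library is:

```python
from datasets import get_dataset_config_names

# List every available subset (config name) of SPHERE,
# e.g. "distance_only", "position_and_counting", "object_occlusion_w_intermediate", ...
configs = get_dataset_config_names("wei2912/SPHERE-VLM")
print(configs)
```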

## Dataset Structure

The dataset is split into the following subsets:

### Single-skill

1. **Position** (`position_only`) - 357 samples
    - Egocentric: 172, Allocentric: 185
2. **Counting** (`counting_only-paired-distance_and_counting` + `counting_only-paired-position_and_counting`) - 201 samples
    - The `counting_only-paired-distance_and_counting` subset comprises questions corresponding to those in `distance_and_counting`; similarly, `counting_only-paired-position_and_counting` pairs with `position_and_counting`.
    - For instance, every question in `distance_and_counting` (e.g. "How many crows are on the railing farther from the viewer?") has a corresponding question in `counting_only-paired-distance_and_counting` that counts all such instances (e.g. "How many crows are in the photo?"). A sketch of loading both paired subsets together is shown after this list.
3. **Distance** (`distance_only`) - 202 samples
4. **Size** (`size_only`) - 198 samples
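
As referenced above, the two counting-only subsets together form the full 201-sample counting set. A minimal loading sketch, assuming each subset exposes the default `train` split:

```python
from datasets import load_dataset, concatenate_datasets

# The counting-only skill is split across two paired subsets (see above).
# Assumption: each subset exposes a single default split named "train".
dist_paired = load_dataset(
    "wei2912/SPHERE-VLM", "counting_only-paired-distance_and_counting", split="train"
)
pos_paired = load_dataset(
    "wei2912/SPHERE-VLM", "counting_only-paired-position_and_counting", split="train"
)

counting_only = concatenate_datasets([dist_paired, pos_paired])
print(len(counting_only))  # expected: 201 samples
```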

### Multi-skill

1. **Position + Counting** (`position_and_counting`) - 169 samples
    - Egocentric: 64, Allocentric: 105
2. **Distance + Counting** (`distance_and_counting`) - 158 samples
3. **Distance + Size** (`distance_and_size`) - 199 samples

### Reasoning

1. **Object occlusion** (`object_occlusion`) - 402 samples
    - Intermediate: 202, Final: 200
    - The `object_occlusion_w_intermediate` subset contains final questions with intermediate answers prefixed in the following format (a minimal sketch of this prompt construction appears after this list):
        > "Given that for the question: \<intermediate step question\> The answer is: \<intermediate step answer\>. \<final step question\> Answer the question directly."
    - For instance, given the two questions "Which object is thicker?" (intermediate) and "Where can a child be hiding?" (final) in `object_occlusion`, the corresponding question in `object_occlusion_w_intermediate` is:
        > "Given that for the question: Which object is thicker? Fire hydrant or tree trunk? The answer is: Tree trunk. Where can a child be hiding? Behind the fire hydrant or behind the tree? Answer the question directly."
2. **Object manipulation** (`object_manipulation`) - 399 samples
    - Intermediate: 199, Final: 200
    - The `object_manipulation_w_intermediate` subset prefixes intermediate answers in the same way as `object_occlusion_w_intermediate`.
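
For reference, the intermediate-prefixed questions in the `*_w_intermediate` subsets follow the template quoted above. A minimal sketch of how such a prompt is assembled (the helper function below is illustrative and not part of the dataset or its tooling):

```python
def build_intermediate_prompt(intermediate_q: str, intermediate_a: str, final_q: str) -> str:
    # Mirrors the template: "Given that for the question: <intermediate step question>
    # The answer is: <intermediate step answer>. <final step question>
    # Answer the question directly."
    return (
        f"Given that for the question: {intermediate_q} "
        f"The answer is: {intermediate_a}. "
        f"{final_q} Answer the question directly."
    )

print(build_intermediate_prompt(
    "Which object is thicker? Fire hydrant or tree trunk?",
    "Tree trunk",
    "Where can a child be hiding? Behind the fire hydrant or behind the tree?",
))
```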

## Data Fields

The data fields are as follows:

- `question_id`: A unique ID for the question.
- `question`: Question to be passed to the VLM.
- `option`: A list of options that the VLM can select from. For counting tasks, this field is left as null.
- `answer`: The expected answer, which must be either one of the strings in `option` (for non-counting tasks) or a number (for counting tasks).
- `metadata`:
  - `viewpoint`: Either "allo" (allocentric) or "ego" (egocentric).
  - `format`: Expected format of the answer, e.g. "bool" (boolean), "name", "num" (numeric), "pos" (position).
  - `source_dataset`: Currently, this is "coco_test2017" ([MS COCO-2017](https://cocodataset.org)) for our entire set of annotations.
  - `source_img_id`: Source image ID in [MS COCO-2017](https://cocodataset.org).
  - `skill`: For reasoning tasks, a list of skills tested by the question, e.g. "count", "dist" (distance), "pos" (position), "shape", "size", "vis" (visual).
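
The sketch below loads one subset and prints the fields described above; the subset name is arbitrary, and the split name is assumed to be the `datasets` default:

```python
from datasets import load_dataset

# Load an arbitrary subset and inspect one sample's fields.
ds = load_dataset("wei2912/SPHERE-VLM", "distance_only")
split = next(iter(ds.values()))  # take whichever split is present (typically "train")

sample = split[0]
print(sample["question_id"])
print(sample["question"])
print(sample["option"])                     # list of options; null for counting tasks
print(sample["answer"])                     # one of the options, or a number for counting
print(sample["metadata"]["viewpoint"])      # "allo" or "ego"
print(sample["metadata"]["source_img_id"])  # MS COCO-2017 image ID
```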

## Dataset Preparation

This version of the dataset was prepared by combining the [JSON annotations](https://github.com/zwenyu/SPHERE-VLM/tree/main/eval_datasets/coco_test2017_annotations) with the corresponding images from [MS COCO-2017](https://cocodataset.org).
The script used, `prepare_parquet.py`, is available in [our GitHub repository](https://github.com/zwenyu/SPHERE-VLM) and should be executed from the repository root.

## Licensing Information

Please note that the images are subject to the [Terms of Use of MS COCO-2017](https://cocodataset.org/#termsofuse):

> **Images**
>
> The COCO Consortium does not own the copyright of the images. Use of the images must abide by the Flickr Terms of Use. The users of the images accept full responsibility for the use of the dataset, including but not limited to the use of any copies of copyrighted images that they may create from the dataset.

## BibTeX

```
@article{zhang2025sphere,
  title={SPHERE: Unveiling Spatial Blind Spots in Vision-Language Models Through Hierarchical Evaluation},
  author={Zhang, Wenyu and Ng, Wei En and Ma, Lixin and Wang, Yuwen and Zhao, Jungqi and Koenecke, Allison and Li, Boyang and Wang, Lu},
  journal={arXiv},
  year={2025}
}
```