---
configs:
- config_name: random
  default: true
  data_files:
  - split: train
    path: random/train.jsonl
  - split: test
    path: random/test.jsonl
  - split: dev
    path: random/dev.jsonl
- config_name: zeroshot
  data_files:
  - split: train
    path: zeroshot/train.jsonl
  - split: test
    path: zeroshot/test.jsonl
  - split: dev
    path: zeroshot/dev.jsonl
license: apache-2.0
language:
- en
tags:
- visual reasoning
- grounded chat
- visual grounding
size_categories:
- 1K<n<10K
---
# Grounded Visual Spatial Reasoning
Code for generating the annotations is available on [GitHub](https://github.com/tomhodemon/grounded-visual-spatial-reasoning).
## Dataset Summary
This dataset extends the [Visual Spatial Reasoning (VSR)](https://arxiv.org/pdf/2205.00363) dataset with **visual grounding annotations**: each caption is annotated with **COCO-category object mentions**, their **positions**, and corresponding **bounding boxes** in the image.
## Data Instance
Each instance has the following structure:
| Field | Type | Description |
|---------------------------|------------------|--------------------------------------------------|
| `image_file` | `string` | COCO-style image filename |
| `image_link` | `string` | Direct COCO image URL |
| `width` | `int` | Image width |
| `height` | `int` | Image height |
| `caption` | `string` | Caption with two COCO-category object mentions |
| `label`                   | `bool`           | Label from the original VSR dataset (whether the caption correctly describes the image) |
| `relation` | `string` | Spatial relation |
| `ref_exp.labels` | `list[string]` | List of object labels from COCO categories |
| `ref_exp.label_positions` | `list[list[int]]`| (start, end) position of each label in the caption |
| `ref_exp.bboxes` | `list[list[float]]` | Bounding boxes (`[x, y, w, h]` format) |
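Once on the Hub, the configs and splits can be loaded with the 🤗 `datasets` library. A minimal sketch, assuming the repository id below (it is hypothetical; substitute the dataset's actual Hub id) and assuming `label_positions` holds character offsets into the caption:

```python
from datasets import load_dataset

ds = load_dataset(
    "tomhodemon/grounded-visual-spatial-reasoning",  # hypothetical repo id
    name="random",  # or "zeroshot"
    split="train",
)

sample = ds[0]
print(sample["caption"], sample["relation"], sample["label"])

# Each grounded mention ties together a COCO-category label, its
# (start, end) position in the caption, and a bounding box.
for label, (start, end), bbox in zip(
    sample["ref_exp"]["labels"],
    sample["ref_exp"]["label_positions"],
    sample["ref_exp"]["bboxes"],
):
    # Assuming positions are character offsets into the caption string.
    print(label, sample["caption"][start:end], bbox)
```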
## Download Images
To download the images, follow the instructions from the [VSR official GitHub repo](https://github.com/cambridgeltl/visual-spatial-reasoning/tree/master/data).
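Alternatively, since every record carries a direct COCO URL in `image_link`, individual images can be fetched on demand. A minimal sketch using `requests` (the helper name and output directory are illustrative; prefer the VSR instructions above for bulk download):

```python
import os
import requests

def download_image(sample, out_dir="images"):
    """Fetch one image via its `image_link` URL and save it under `out_dir`."""
    os.makedirs(out_dir, exist_ok=True)
    path = os.path.join(out_dir, sample["image_file"])
    resp = requests.get(sample["image_link"], timeout=30)
    resp.raise_for_status()
    with open(path, "wb") as f:
        f.write(resp.content)
    return path
```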
## Citation
If you use this dataset, please cite the original **Visual Spatial Reasoning** paper:
```bibtex
@article{Liu2022VisualSR,
title={Visual Spatial Reasoning},
author={Fangyu Liu and Guy Edward Toh Emerson and Nigel Collier},
journal={Transactions of the Association for Computational Linguistics},
year={2023},
}
```