---
license: apache-2.0
task_categories:
- video-classification
- question-answering
- visual-question-answering
tags:
- perception-test
- video-qa
- multiple-choice
- video-understanding
size_categories:
- 1K<n<10K
viewer: true
---
# Perception Test MCQ Dataset
## Dataset Description
This dataset contains **1000 video question-answering entries** from the Perception Test dataset. Each entry includes a video and a multiple-choice question about the video content, testing various aspects of video understanding including object tracking, action recognition, and temporal reasoning.
## Dataset Structure
This dataset follows the VideoFolder format with the following structure:
```
dataset/
├── data/
│ ├── videos/
│ │ ├── video_XXX.mp4
│ │ └── ...
│ └── metadata.csv
└── README.md
```
### Metadata Format
The `metadata.csv` contains:
- `file_name`: Path to the video file (relative to split directory)
- `video_id`: Unique video identifier
- `question`: The question text
- `options`: JSON string containing multiple choice options
- `correct_answer`: The correct answer (available for 997/1000 entries)
- `question_type`: Type of question (typically "multiple choice")
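Because `options` is stored as a JSON string inside a CSV field, it needs to be decoded after reading. A minimal sketch of parsing a metadata row with the standard library (the row below is illustrative, not taken from the dataset):

```python
import csv
import io
import json

# Illustrative metadata.csv content: header row plus one example entry.
csv_text = (
    "file_name,video_id,question,options,correct_answer,question_type\n"
    "videos/video_001.mp4,video_001,Is the camera moving or static?,"
    "\"[\"\"I don't know\"\", \"\"moving\"\", \"\"static or shaking\"\"]\","
    "static or shaking,multiple choice\n"
)

for row in csv.DictReader(io.StringIO(csv_text)):
    # The options column holds a JSON-encoded list; decode it per row.
    options = json.loads(row["options"])
    print(row["question"], options)
```

When loading through `datasets`, the same `json.loads` step applies to the `options` column of each sample.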
## Dataset Statistics
- **Total QA pairs**: 1000
- **Unique videos**: 1000
- **Average questions per video**: 1.0
- **Entries with answers**: 997/1000 (99.7%)
- **Video format**: MP4
### Question Type Distribution
- `unknown`: 1000 questions
## Usage
### Loading the Dataset
```python
from datasets import load_dataset
import json
# Load the dataset
dataset = load_dataset("advaitgupta/perception_test_mcq")
# Access the data split
data = dataset['data']
# Example: Get first sample
sample = data[0]
print("Question:", sample['question'])
print("Options:", json.loads(sample['options']))
print("Correct Answer:", sample['correct_answer'])
print("Video:", sample['file_name'])
```
### Processing Videos and Questions
```python
import json
import cv2
from datasets import load_dataset

# Load the metadata as a pandas DataFrame
data = load_dataset("advaitgupta/perception_test_mcq")['data']
metadata = data.to_pandas()

# Process a video-question pair
sample = metadata.iloc[0]
video_path = sample['file_name']  # relative to the dataset's data/ directory
question = sample['question']
options = json.loads(sample['options'])
correct_answer = sample['correct_answer']

print(f"Question: {question}")
for i, option in enumerate(options):
    print(f"{i + 1}. {option}")
print(f"Correct Answer: {correct_answer}")

# Load and process the video (resolve video_path against your local dataset root first)
cap = cv2.VideoCapture(video_path)
# ... your video processing code
cap.release()
```
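Many video models consume a fixed number of evenly spaced frames rather than the full clip. A minimal sketch of choosing frame indices, assuming a frame budget of 8 (the budget and helper name are illustrative, not part of the dataset):

```python
def sample_frame_indices(total_frames: int, num_frames: int = 8) -> list[int]:
    """Return num_frames evenly spaced frame indices in [0, total_frames)."""
    if total_frames <= 0:
        return []
    step = total_frames / num_frames
    # Take the midpoint of each of num_frames equal segments, clamped to range.
    return [min(int(i * step + step / 2), total_frames - 1) for i in range(num_frames)]

print(sample_frame_indices(100))  # [6, 18, 31, 43, 56, 68, 81, 93]
```

With OpenCV, each selected index can then be read via `cap.set(cv2.CAP_PROP_POS_FRAMES, idx)` followed by `cap.read()`.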
## Task Types
This dataset covers various video understanding tasks:
### Object and Action Recognition
*"What ingredients did the person put in the bowl or on the plate?"*
### Temporal Reasoning
*"How many objects were put in the backpack throughout the video?"*
### Camera Motion Analysis
*"Is the camera moving or static?"*
### Spatial Understanding
*"Where is the person?"*
### Activity Recognition
*"What is the person preparing?"*
## Data Quality
- All video files have been validated to exist
- Questions are human-annotated from the Perception Test dataset
- Multiple choice format ensures consistent evaluation
- Even distribution across videos (1.0 question per video on average)
- Correct answers provided for evaluation
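Since 997 of the 1000 entries have a ground-truth answer, a simple accuracy metric can skip the unanswered entries and compare predicted option strings directly. A minimal sketch, assuming predictions are the selected option strings (the example values are illustrative, not real dataset entries):

```python
def mcq_accuracy(predictions: list[str], answers: list) -> float:
    """Accuracy over entries with a ground-truth answer; unanswered entries are skipped."""
    scored = [(p, a) for p, a in zip(predictions, answers) if a is not None]
    if not scored:
        return 0.0
    return sum(p == a for p, a in scored) / len(scored)

# Illustrative predictions and answers; the third entry has no ground truth.
preds = ["moving", "static or shaking", "moving"]
golds = ["moving", "static or shaking", None]
print(mcq_accuracy(preds, golds))  # 1.0 over the two scored entries
```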
## Example Entry
```json
{
"file_name": "videos/video_10909.mp4",
"video_id": "video_10909",
"question": "Is the camera moving or static?",
"options": ["I don't know", "moving", "static or shaking"],
"correct_answer": "static or shaking",
"question_type": "multiple choice"
}
```
## Citation
If you use this dataset, please cite the original Perception Test paper:
```bibtex
@article{perception-test-2022,
title={Perception Test: A Diagnostic Benchmark for Multimodal Video Models},
author={Pătrăucean, Viorica and others},
journal={arXiv preprint arXiv:2211.13775},
year={2022}
}
```
## License
This dataset is released under the Apache 2.0 license.