---
license: apache-2.0
task_categories:
  - video-classification
  - question-answering
  - visual-question-answering
tags:
  - perception-test
  - video-qa
  - multiple-choice
  - video-understanding
size_categories:
  - 1K<n<10K
viewer: true
---

# Perception Test MCQ Dataset

## Dataset Description

This dataset contains 1000 video question-answering entries from the Perception Test dataset. Each entry includes a video and a multiple-choice question about the video content, testing various aspects of video understanding including object tracking, action recognition, and temporal reasoning.

## Dataset Structure

This dataset follows the VideoFolder format with the following structure:

```
dataset/
├── data/
│   ├── videos/
│   │   ├── video_XXX.mp4
│   │   └── ...
│   └── metadata.csv
└── README.md
```

### Metadata Format

The `metadata.csv` file contains the following columns:

- `file_name`: Path to the video file (relative to the split directory)
- `video_id`: Unique video identifier
- `question`: The question text
- `options`: JSON string containing the multiple-choice options
- `correct_answer`: The correct answer (available for 997/1000 entries)
- `question_type`: Type of question (typically "multiple choice")
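
Because `options` is stored as a JSON string rather than a list, it must be decoded before use. A minimal sketch with a hypothetical row shaped like the columns above (the values here are illustrative, not read from the actual file):

```python
import json

# A hypothetical metadata.csv row with the columns described above
row = {
    "file_name": "videos/video_10909.mp4",
    "video_id": "video_10909",
    "question": "Is the camera moving or static?",
    "options": '["I don\'t know", "moving", "static or shaking"]',
    "correct_answer": "static or shaking",
    "question_type": "multiple choice",
}

# options is a JSON-encoded string; decode it into a Python list
options = json.loads(row["options"])
print(options)
print(row["correct_answer"] in options)  # the answer is one of the options
```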

## Dataset Statistics

- Total QA pairs: 1000
- Unique videos: 1000
- Average questions per video: 1.0
- Entries with answers: 997/1000 (99.7%)
- Video format: MP4

### Question Type Distribution

- unknown: 1000 questions

## Usage

### Loading the Dataset

```python
from datasets import load_dataset
import json

# Load the dataset
dataset = load_dataset("advaitgupta/perception_test_mcq")

# Access the data split
data = dataset["data"]

# Example: inspect the first sample
sample = data[0]
print("Question:", sample["question"])
print("Options:", json.loads(sample["options"]))
print("Correct Answer:", sample["correct_answer"])
print("Video:", sample["file_name"])
```

### Processing Videos and Questions

```python
import json
import cv2
from datasets import load_dataset

# Load the dataset and convert the split to a pandas DataFrame
dataset = load_dataset("advaitgupta/perception_test_mcq")
metadata = dataset["data"].to_pandas()

# Process a video-question pair
sample = metadata.iloc[0]
video_path = sample["file_name"]  # relative to the dataset root
question = sample["question"]
options = json.loads(sample["options"])
correct_answer = sample["correct_answer"]

print(f"Question: {question}")
for i, option in enumerate(options):
    print(f"{i + 1}. {option}")
print(f"Correct Answer: {correct_answer}")

# Load and process the video
cap = cv2.VideoCapture(video_path)
# ... your video processing code
cap.release()
```
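
The video processing step is left open above. One common pattern for MCQ evaluation is to sample a fixed number of evenly spaced frames per video; the index arithmetic can be sketched independently of the decoder (the function name and frame count below are our own, not part of the dataset):

```python
def sample_frame_indices(total_frames: int, num_samples: int) -> list[int]:
    """Return num_samples evenly spaced frame indices in [0, total_frames)."""
    if total_frames <= 0 or num_samples <= 0:
        return []
    num_samples = min(num_samples, total_frames)
    step = total_frames / num_samples
    return [int(i * step) for i in range(num_samples)]

# With OpenCV, each index can then be seeked and decoded, e.g.:
#   cap.set(cv2.CAP_PROP_POS_FRAMES, idx); ok, frame = cap.read()
indices = sample_frame_indices(total_frames=300, num_samples=8)
print(indices)  # [0, 37, 75, 112, 150, 187, 225, 262]
```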

## Task Types

This dataset covers various video understanding tasks:

### Object and Action Recognition

> "What ingredients did the person put in the bowl or on the plate?"

### Temporal Reasoning

> "How many objects were put in the backpack throughout the video?"

### Camera Motion Analysis

> "Is the camera moving or static?"

### Spatial Understanding

> "Where is the person?"

### Activity Recognition

> "What is the person preparing?"

## Data Quality

- All video files have been validated to exist
- Questions are human-annotated from the Perception Test dataset
- The multiple-choice format ensures consistent evaluation
- Even distribution across videos (avg 1.0 questions per video)
- Correct answers are provided for evaluation
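
The multiple-choice format makes scoring straightforward: compare each predicted option string against `correct_answer`, skipping the three entries without answers. A minimal sketch with hypothetical predictions (both dictionaries here are illustrative, not real model output):

```python
# Hypothetical model predictions and ground truth, keyed by video_id
predictions = {
    "video_10909": "static or shaking",
    "video_10910": "moving",
}
ground_truth = {
    "video_10909": "static or shaking",
    "video_10910": "static or shaking",
}

# Score only entries that have a correct answer (997/1000 in this dataset)
scored = [vid for vid in ground_truth if ground_truth[vid] is not None]
correct = sum(predictions.get(vid) == ground_truth[vid] for vid in scored)
accuracy = correct / len(scored)
print(f"Accuracy: {accuracy:.1%}")  # Accuracy: 50.0%
```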

## Example Entry

```json
{
  "file_name": "videos/video_10909.mp4",
  "video_id": "video_10909",
  "question": "Is the camera moving or static?",
  "options": ["I don't know", "moving", "static or shaking"],
  "correct_answer": "static or shaking",
  "question_type": "multiple choice"
}
```

## Citation

If you use this dataset, please cite the original Perception Test paper:

```bibtex
@article{perception-test-2022,
  title={Perception Test: A Diagnostic Benchmark for Multimodal Video Models},
  author={Pătrăucean, Viorica and others},
  journal={arXiv preprint arXiv:2211.13775},
  year={2022}
}
```

## License

This dataset is released under the Apache 2.0 license.