---
license: mit
task_categories:
- image-classification
- object-detection
- visual-question-answering
- zero-shot-image-classification
language:
- en
tags:
- ego4d
- egocentric-vision
- computer-vision
- random-sampling
- video-frames
- first-person-view
- activity-recognition
size_categories:
- 10K<n<100K
pretty_name: Ego4D Random Views Dataset
dataset_info:
  features:
  - name: image
    dtype: image
  - name: frame_id
    dtype: string
  - name: video_uid
    dtype: string
  - name: video_filename
    dtype: string
  - name: video_path
    dtype: string
  - name: frame_idx
    dtype: int32
  - name: total_frames
    dtype: int32
  - name: timestamp_sec
    dtype: float32
  - name: fps
    dtype: float32
  - name: worker_id
    dtype: int32
  - name: generated_at
    dtype: string
  - name: image_width
    dtype: int32
  - name: image_height
    dtype: int32
  - name: original_shape_height
    dtype: int32
  - name: original_shape_width
    dtype: int32
  - name: original_shape_channels
    dtype: int32
  splits:
  - name: train
    num_bytes: 21000000000
    num_examples: 20000
  download_size: 21000000000
  dataset_size: 21000000000
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
viewer: true
---
# Ego4D Random Views Dataset

This dataset contains 20,000 frames randomly sampled from the Ego4D dataset using a high-performance multi-process generation system.
## Dataset Overview

- **Total Images**: 20,000 high-quality frames
- **Image Format**: PNG (1024×1024 resolution)
- **Source**: Ego4D v2 dataset (52,665+ video files)
- **Sampling Method**: Multi-process random sampling with maximum diversity
- **Generation Time**: 797.57 seconds (~13 minutes)
- **Generation Speed**: 25.08 frames/second
- **Success Rate**: 100.0%
## Key Features

- 🎬 **Maximum Diversity**: Sampled from 50,000+ different Ego4D videos
- 🚀 **High Performance**: Generated using 128 parallel workers
- 📊 **Complete Metadata**: Full metadata for each frame, including video source, timestamp, and more
- 🎯 **High Quality**: 1024×1024 resolution PNG images
- 💾 **Efficient Storage**: Stored in Parquet format for fast loading
- 🔍 **Rich Context**: Each frame includes video UID, timestamp, and source information
## Dataset Schema

Each sample contains:

| Field | Type | Description |
|---|---|---|
| `image` | Image | The frame image (1024×1024 PNG) |
| `frame_id` | string | Unique frame identifier |
| `video_uid` | string | Original Ego4D video UID |
| `video_filename` | string | Source video filename |
| `video_path` | string | Full path to the source video |
| `frame_idx` | int32 | Frame index in the original video |
| `total_frames` | int32 | Total frames in the source video |
| `timestamp_sec` | float32 | Timestamp in the video (seconds) |
| `fps` | float32 | Video frame rate |
| `worker_id` | int32 | Generation worker ID |
| `generated_at` | string | Generation timestamp |
| `image_width` | int32 | Image width (1024) |
| `image_height` | int32 | Image height (1024) |
| `original_shape_*` | int32 | Original video frame dimensions |
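You can confirm this schema programmatically via the `features` attribute that the `datasets` library exposes; a quick self-contained check:

```python
from datasets import load_dataset

ds = load_dataset("weikaih/ego4d-random-views-20k", split="train")

# Print the declared feature types (should match the table above)
print(ds.features)

# Spot-check one record's metadata fields
row = ds[0]
print(row["frame_id"], row["video_uid"], row["timestamp_sec"])
```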
## Usage

### Quick Start

```python
from datasets import load_dataset

# Load the dataset
dataset = load_dataset("weikaih/ego4d-random-views-20k")

# Get a sample
sample = dataset['train'][0]
image = sample['image']  # PIL Image

print(f"Video: {sample['video_filename']}")
print(f"Timestamp: {sample['timestamp_sec']:.2f}s")
```
### Exploring the Data

```python
import matplotlib.pyplot as plt

# Display a sample image alongside its metadata
sample = dataset['train'][42]

plt.figure(figsize=(10, 6))
plt.subplot(1, 2, 1)
plt.imshow(sample['image'])
plt.title(f"Frame from {sample['video_uid'][:8]}...")
plt.axis('off')

plt.subplot(1, 2, 2)
plt.text(0.1, 0.8, f"Video: {sample['video_filename'][:30]}...")
plt.text(0.1, 0.7, f"Timestamp: {sample['timestamp_sec']:.2f}s")
plt.text(0.1, 0.6, f"Frame: {sample['frame_idx']}/{sample['total_frames']}")
plt.text(0.1, 0.5, f"FPS: {sample['fps']}")
plt.axis('off')

plt.show()
```
### PyTorch Integration

```python
import torch
from torch.utils.data import DataLoader
from torchvision import transforms

# Define transforms
transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225])
])

# Custom dataset class
class Ego4DDataset(torch.utils.data.Dataset):
    def __init__(self, hf_dataset, transform=None):
        self.dataset = hf_dataset
        self.transform = transform

    def __len__(self):
        return len(self.dataset)

    def __getitem__(self, idx):
        sample = self.dataset[idx]
        image = sample['image']
        if self.transform:
            image = self.transform(image)
        # Return only collatable metadata; the raw PIL image cannot be
        # batched by the default collate function
        metadata = {k: v for k, v in sample.items() if k != 'image'}
        return image, metadata

# Create dataset and dataloader
pytorch_dataset = Ego4DDataset(dataset['train'], transform=transform)
dataloader = DataLoader(pytorch_dataset, batch_size=32, shuffle=True)

# Training loop example
for batch_idx, (images, metadata) in enumerate(dataloader):
    # Your training code here
    print(f"Batch {batch_idx}: {images.shape}")
    if batch_idx >= 2:  # Just show the first few batches
        break
```
### Data Analysis

```python
# Convert the metadata to pandas for analysis, dropping the image
# column so the frames are not decoded unnecessarily
df = dataset['train'].remove_columns(['image']).to_pandas()

# Basic statistics
print(f"Unique videos: {df['video_uid'].nunique()}")
print(f"Average FPS: {df['fps'].mean():.2f}")
print(f"Timestamp range: {df['timestamp_sec'].min():.2f}s - {df['timestamp_sec'].max():.2f}s")

# Video distribution
video_counts = df['video_uid'].value_counts()
print(f"Samples per video - Min: {video_counts.min()}, Max: {video_counts.max()}")
```
## Applications

This dataset is suitable for:

- **Egocentric vision research**: First-person view understanding
- **Activity recognition**: Daily activity classification (see the zero-shot sketch below for one starting point)
- **Object detection**: Objects in natural settings
- **Scene understanding**: Indoor/outdoor scene analysis
- **Transfer learning**: Pre-training for egocentric tasks
- **Multi-modal learning**: Combining with video metadata
- **Temporal analysis**: Using timestamp information
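As one concrete starting point for activity or scene labeling, here is a minimal sketch using the `transformers` zero-shot image classification pipeline; the CLIP checkpoint and candidate labels are illustrative choices, not part of this dataset:

```python
from transformers import pipeline

# Zero-shot image classification with an off-the-shelf CLIP model
# (openai/clip-vit-base-patch32 is an illustrative choice)
classifier = pipeline("zero-shot-image-classification",
                      model="openai/clip-vit-base-patch32")

sample = dataset['train'][0]
labels = ["cooking", "driving", "shopping", "working at a desk", "outdoors"]

results = classifier(sample['image'], candidate_labels=labels)
for r in results:
    print(f"{r['label']}: {r['score']:.3f}")
```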
## Generation Statistics

- **Target Frames**: 20,000
- **Generated Frames**: 20,000
- **Success Rate**: 100.0%
- **Generation Time**: 13.3 minutes
- **Workers Used**: 128
- **Processing Speed**: 25.08 frames/second
- **Source Videos**: 52,665+ Ego4D video files
- **Diversity**: Maximum diversity through distributed sampling
## Technical Details

### Sampling Strategy

- **Random Selection**: Both videos and frame positions are randomly sampled (see the sketch below)
- **Worker Distribution**: Videos distributed across 128 workers for diversity
- **Quality Control**: Automatic validation and error recovery
- **Metadata Preservation**: Complete provenance tracking
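The generation code itself is not bundled with this card, but the described strategy boils down to picking a random video, then a random frame index within it. A minimal single-process sketch with OpenCV; the function name and inputs are hypothetical:

```python
import random
import cv2

def sample_random_frame(video_paths):
    """Pick a random video, then a random frame within it (sketch only)."""
    video_path = random.choice(video_paths)
    cap = cv2.VideoCapture(video_path)
    total_frames = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    frame_idx = random.randrange(total_frames)
    cap.set(cv2.CAP_PROP_POS_FRAMES, frame_idx)
    ok, frame_bgr = cap.read()
    cap.release()
    if not ok:
        return None  # the real pipeline's error recovery would retry here
    # OpenCV decodes to BGR; convert to RGB and resize to 1024x1024
    frame_rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
    return cv2.resize(frame_rgb, (1024, 1024))
```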
### Data Quality

- **Image Quality**: All frames validated during generation
- **Resolution**: Consistent 1024×1024 PNG format
- **Color Space**: RGB
- **Compression**: PNG lossless compression
- **Metadata Completeness**: 100% metadata coverage (a spot-check snippet follows)
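You can spot-check these properties yourself on a handful of samples; a small verification loop, assuming `dataset` is loaded as in the Quick Start:

```python
# Verify resolution, color mode, and metadata presence on a few samples
for i in range(10):
    sample = dataset['train'][i]
    img = sample['image']
    assert img.size == (1024, 1024), f"unexpected size: {img.size}"
    assert img.mode == "RGB", f"unexpected mode: {img.mode}"
    assert sample['video_uid'] and sample['frame_id'], "missing metadata"
print("Spot-check passed")
```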
## Citation

If you use this dataset, please cite the original Ego4D paper:

```bibtex
@inproceedings{grauman2022ego4d,
  title={Ego4d: Around the world in 3,000 hours of egocentric video},
  author={Grauman, Kristen and Westbury, Andrew and Byrne, Eugene and others},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={18995--19012},
  year={2022}
}
```
## License

This dataset follows the same license terms as the original Ego4D dataset. Please refer to the Ego4D license agreement for usage terms.
## Dataset Creation

This dataset was generated using a high-performance multi-process sampling system designed for maximum diversity and efficiency. The generation process:

1. **Video Indexing**: Scanned 52,665+ Ego4D video files
2. **Distributed Sampling**: Used 128 parallel workers for maximum diversity
3. **Quality Assurance**: Validated each frame during generation
4. **Metadata Collection**: Captured complete provenance information
5. **Efficient Upload**: Used the HuggingFace `datasets` library with Parquet format (see the upload sketch below)

For more details on the generation process, see the technical documentation.
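For reference, the final upload step can be reproduced with the standard `datasets` API; a minimal sketch in which the schema is abbreviated and the record contents are purely illustrative:

```python
from datasets import Dataset, Features, Image, Value

# Declare the schema (abbreviated to a few fields for illustration)
features = Features({
    "image": Image(),
    "frame_id": Value("string"),
    "video_uid": Value("string"),
    "timestamp_sec": Value("float32"),
})

# In the real pipeline, records would be produced by the sampling workers
records = {
    "image": ["frame_000.png"],  # file paths are decoded into images
    "frame_id": ["frame_000"],
    "video_uid": ["abc123"],
    "timestamp_sec": [12.5],
}

ds = Dataset.from_dict(records, features=features)
ds.push_to_hub("weikaih/ego4d-random-views-20k")  # stored as Parquet shards
```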