FutureOmni: Evaluating Future Forecasting from Omni-Modal Context for Multimodal LLMs
Abstract
FutureOmni presents the first benchmark for evaluating multimodal models' ability to forecast future events from audio-visual data, revealing limitations of current systems and introducing a training strategy that improves forecasting performance.
Although Multimodal Large Language Models (MLLMs) demonstrate strong omni-modal perception, their ability to forecast future events from audio-visual cues remains largely unexplored, as existing benchmarks focus mainly on retrospective understanding. To bridge this gap, we introduce FutureOmni, the first benchmark designed to evaluate omni-modal future forecasting from audio-visual environments. Evaluated models must perform cross-modal causal and temporal reasoning and effectively leverage internal knowledge to predict future events. FutureOmni is constructed via a scalable LLM-assisted, human-in-the-loop pipeline and contains 919 videos and 1,034 multiple-choice QA pairs across 8 primary domains. Evaluations on 13 omni-modal and 7 video-only models show that current systems struggle with audio-visual future prediction, particularly in speech-heavy scenarios; the best accuracy, 64.8%, is achieved by Gemini 3 Flash. To mitigate this limitation, we curate a 7K-sample instruction-tuning dataset and propose an Omni-Modal Future Forecasting (OFF) training strategy. Evaluations on FutureOmni and popular audio-visual and video-only benchmarks demonstrate that OFF enhances future forecasting and generalization. We publicly release all code (https://github.com/OpenMOSS/FutureOmni) and datasets (https://huggingface.co/datasets/OpenMOSS-Team/FutureOmni).
Community
Paper: https://arxiv.org/pdf/2601.13836
Code: https://github.com/OpenMOSS/FutureOmni
Project: https://openmoss.github.io/FutureOmni
Datasets: https://huggingface.co/datasets/OpenMOSS-Team/FutureOmni
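
To inspect the benchmark locally, the sketch below loads it with the Hugging Face `datasets` library. This is a minimal illustration, not code from the FutureOmni repository: the split name and column names are assumptions, so consult the dataset card for the actual schema.

```python
# Minimal sketch (not from the FutureOmni repo) of loading the benchmark with the
# Hugging Face `datasets` library. Split and column names below are assumptions;
# check https://huggingface.co/datasets/OpenMOSS-Team/FutureOmni for the real schema.
from datasets import load_dataset

ds = load_dataset("OpenMOSS-Team/FutureOmni", split="test")  # split name assumed

print(ds)              # number of rows and available columns
example = ds[0]
print(example.keys())  # e.g. video, question, options, answer (hypothetical field names)
```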