Abstract
Video generative models can interpret and execute visual instructions embedded within frames, enhancing controllability in image-to-video generation.
Large-scale video generative models have recently demonstrated strong visual capabilities, enabling the prediction of future frames that adhere to the logical and physical cues in the current observation. In this work, we investigate whether such capabilities can be harnessed for controllable image-to-video generation by interpreting visual signals embedded within the frames as instructions, a paradigm we term In-Video Instruction. In contrast to prompt-based control, which provides textual descriptions that are inherently global and coarse, In-Video Instruction encodes user guidance directly into the visual domain through elements such as overlaid text, arrows, or trajectories. By assigning distinct instructions to different objects, this yields explicit, spatially aware, and unambiguous correspondences between visual subjects and their intended actions. Extensive experiments on three state-of-the-art generators, Veo 3.1, Kling 2.5, and Wan 2.2, show that video models can reliably interpret and execute such visually embedded instructions, particularly in complex multi-object scenarios.
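To make the paradigm concrete, below is a minimal sketch, not taken from the paper, of how an In-Video Instruction might be rendered onto a conditioning frame before it is handed to an off-the-shelf image-to-video generator. The function name `overlay_instruction`, the file names, and the coordinates are illustrative assumptions; the abstract only states that guidance such as overlaid text, arrows, or trajectories is drawn directly into the visual input, and does not prescribe an implementation.

```python
# Sketch (assumed implementation, not the authors' code): render user guidance as
# overlaid text plus a motion arrow on the conditioning frame, then pass the
# annotated frame unchanged to an image-to-video generator.
from PIL import Image, ImageDraw, ImageFont


def overlay_instruction(frame_path: str,
                        text: str,
                        text_xy: tuple[int, int],
                        arrow_start: tuple[int, int],
                        arrow_end: tuple[int, int]) -> Image.Image:
    """Draw a textual instruction and a trajectory arrow onto the input frame."""
    frame = Image.open(frame_path).convert("RGB")
    draw = ImageDraw.Draw(frame)

    # Overlaid text instruction (e.g., assigned to one specific object).
    draw.text(text_xy, text, fill=(255, 0, 0), font=ImageFont.load_default())

    # Arrow indicating the intended motion: a line plus a triangular head.
    draw.line([arrow_start, arrow_end], fill=(255, 0, 0), width=4)
    (x0, y0), (x1, y1) = arrow_start, arrow_end
    dx, dy = x1 - x0, y1 - y0
    norm = max((dx ** 2 + dy ** 2) ** 0.5, 1e-6)
    ux, uy = dx / norm, dy / norm
    head = [
        (x1, y1),
        (int(x1 - 15 * ux + 8 * uy), int(y1 - 15 * uy - 8 * ux)),
        (int(x1 - 15 * ux - 8 * uy), int(y1 - 15 * uy + 8 * ux)),
    ]
    draw.polygon(head, fill=(255, 0, 0))
    return frame


if __name__ == "__main__":
    annotated = overlay_instruction(
        "input_frame.png",            # hypothetical input image
        text="walk to the door",      # hypothetical per-object instruction
        text_xy=(40, 30),
        arrow_start=(120, 200),
        arrow_end=(320, 180),
    )
    annotated.save("annotated_frame.png")
    # The annotated frame would then serve as the conditioning image for a
    # generator such as Veo 3.1, Kling 2.5, or Wan 2.2 (API calls omitted).
```

Under this reading, the generator itself is unmodified; all control comes from what is drawn into the first frame.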
Community
This is an automated message from the Librarian Bot. The following similar papers were recommended by the Semantic Scholar API:
- Are Video Models Ready as Zero-Shot Reasoners? An Empirical Study with the MME-CoF Benchmark (2025)
- Plan-X: Instruct Video Generation via Semantic Planning (2025)
- Planning with Sketch-Guided Verification for Physics-Aware Video Generation (2025)
- BindWeave: Subject-Consistent Video Generation via Cross-Modal Integration (2025)
- TempoControl: Temporal Attention Guidance for Text-to-Video Models (2025)
- Mask2IV: Interaction-Centric Video Generation via Mask Trajectories (2025)
- Simulating the Visual World with Artificial Intelligence: A Roadmap (2025)