From Pixels to Words -- Towards Native Vision-Language Primitives at Scale Paper • 2510.14979 • Published 5 days ago • 64
InteractiveOmni: A Unified Omni-modal Model for Audio-Visual Multi-turn Dialogue Paper • 2510.13747 • Published 6 days ago • 28
CVD-STORM: Cross-View Video Diffusion with Spatial-Temporal Reconstruction Model for Autonomous Driving Paper • 2510.07944 • Published 12 days ago • 23
TempSamp-R1: Effective Temporal Sampling with Reinforcement Fine-Tuning for Video LLMs Paper • 2509.18056 • Published 29 days ago • 27
ELV-Halluc: Benchmarking Semantic Aggregation Hallucinations in Long Video Understanding Paper • 2508.21496 • Published Aug 29 • 54
InternVL3: Exploring Advanced Training and Test-Time Recipes for Open-Source Multimodal Models Paper • 2504.10479 • Published Apr 14 • 297
MaskGWM: A Generalizable Driving World Model with Video Mask Reconstruction Paper • 2502.11663 • Published Feb 17 • 40
UniMLVG: Unified Framework for Multi-view Long Video Generation with Comprehensive Control Capabilities for Autonomous Driving Paper • 2412.04842 • Published Dec 6, 2024
How Far Are We to GPT-4V? Closing the Gap to Commercial Multimodal Models with Open-Source Suites Paper • 2404.16821 • Published Apr 25, 2024 • 57