FlashVSR: Towards Real-Time Diffusion-Based Streaming Video Super-Resolution
Abstract
Diffusion models have recently advanced video restoration, but applying them to real-world video super-resolution (VSR) remains challenging due to high latency, prohibitive computation, and poor generalization to ultra-high resolutions. Our goal in this work is to make diffusion-based VSR practical by achieving efficiency, scalability, and real-time performance. To this end, we propose FlashVSR, the first diffusion-based one-step streaming framework towards real-time VSR. FlashVSR runs at approximately 17 FPS for 768×1408 videos on a single A100 GPU by combining three complementary innovations: (i) a train-friendly three-stage distillation pipeline that enables streaming super-resolution, (ii) locality-constrained sparse attention that cuts redundant computation while bridging the train-test resolution gap, and (iii) a tiny conditional decoder that accelerates reconstruction without sacrificing quality. To support large-scale training, we also construct VSR-120K, a new dataset with 120k videos and 180k images. Extensive experiments show that FlashVSR scales reliably to ultra-high resolutions and achieves state-of-the-art performance with up to 12× speedup over prior one-step diffusion VSR models. We will release the code, pretrained models, and dataset to foster future research in efficient diffusion-based VSR.
Community
TL;DR — FlashVSR is a streaming, one-step diffusion-based video super-resolution framework with block-sparse attention and a Tiny Conditional Decoder. It reaches ~17 FPS at 768×1408 on a single A100 GPU. A Locality-Constrained Attention design further improves generalization and perceptual quality on ultra-high-resolution videos.
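To make the locality-constrained attention idea concrete, here is a minimal NumPy sketch of the general technique: each query block is restricted to key blocks within a fixed spatial window, so the attention span stays constant as resolution grows. This is an illustrative simplification, not the paper's implementation; the function names, the Chebyshev-distance window, and the per-block granularity are all assumptions for exposition (the actual method operates on a streaming video diffusion backbone with block-sparse kernels).

```python
import numpy as np

def locality_constrained_mask(h_blocks: int, w_blocks: int, window: int) -> np.ndarray:
    """Boolean (n, n) mask over n = h_blocks * w_blocks attention blocks.

    A query block may attend only to key blocks whose 2-D block
    coordinates lie within `window` (Chebyshev distance). The attended
    neighborhood is fixed-size, so cost per block does not grow with
    the overall frame resolution.
    """
    n = h_blocks * w_blocks
    mask = np.zeros((n, n), dtype=bool)
    for q in range(n):
        qy, qx = divmod(q, w_blocks)
        for k in range(n):
            ky, kx = divmod(k, w_blocks)
            if max(abs(qy - ky), abs(qx - kx)) <= window:
                mask[q, k] = True
    return mask

def sparse_attention(Q: np.ndarray, K: np.ndarray, V: np.ndarray,
                     mask: np.ndarray) -> np.ndarray:
    """Masked softmax attention; disallowed query-key pairs get -inf logits."""
    scale = 1.0 / np.sqrt(Q.shape[-1])
    logits = (Q @ K.T) * scale
    logits[~mask] = -np.inf
    weights = np.exp(logits - logits.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

# Usage: a 4x4 grid of blocks with a 1-block window. A corner block sees
# a 2x2 neighborhood (4 blocks); an interior block sees 3x3 (9 blocks),
# versus 16 blocks for dense attention.
mask = locality_constrained_mask(4, 4, window=1)
```

In practice this sparsity pattern is what lets a model trained at one resolution run at much higher resolutions: the local window seen by any block at test time matches what was seen during training, which is the "train-test resolution gap" the abstract refers to.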
Page: https://zhuang2002.github.io/FlashVSR/
Paper: https://arxiv.org/abs/2510.12747
Code: https://github.com/OpenImagingLab/FlashVSR
⭐ If you like our work, please give it a star!
Related papers recommended by the Semantic Scholar API:
- InfVSR: Breaking Length Limits of Generic Video Super-Resolution (2025)
- Asymmetric VAE for One-Step Video Super-Resolution Acceleration (2025)
- Towards Redundancy Reduction in Diffusion Models for Efficient Video Super-Resolution (2025)
- SkipSR: Faster Super Resolution with Token Skipping (2025)
- UniMMVSR: A Unified Multi-Modal Framework for Cascaded Video Super-Resolution (2025)
- Rolling Forcing: Autoregressive Long Video Diffusion in Real Time (2025)
- TinySR: Pruning Diffusion for Real-World Image Super-Resolution (2025)