DC-VideoGen: Efficient Video Generation with Deep Compression Video Autoencoder
Abstract
DC-VideoGen accelerates video generation by adapting pre-trained diffusion models to a deep compression latent space, reducing inference latency and enabling high-resolution video generation.
We introduce DC-VideoGen, a post-training acceleration framework for efficient video generation. DC-VideoGen can be applied to any pre-trained video diffusion model, improving efficiency by adapting it to a deep compression latent space with lightweight fine-tuning. The framework builds on two key innovations: (i) a Deep Compression Video Autoencoder with a novel chunk-causal temporal design that achieves 32x/64x spatial and 4x temporal compression while preserving reconstruction quality and generalization to longer videos; and (ii) AE-Adapt-V, a robust adaptation strategy that enables rapid and stable transfer of pre-trained models into the new latent space. Adapting the pre-trained Wan-2.1-14B model with DC-VideoGen requires only 10 GPU days on the NVIDIA H100 GPU. The accelerated models achieve up to 14.8x lower inference latency than their base counterparts without compromising quality, and further enable 2160x3840 video generation on a single GPU. Code: https://github.com/dc-ai-projects/DC-VideoGen.
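To make the compression figures concrete, here is a minimal shape sketch of what a 32x spatial / 4x temporal autoencoder implies for the diffusion model's workload. The latent channel count (16) is a placeholder chosen for illustration; the abstract does not specify it.

```python
# Minimal sketch (not the official implementation): how 32x/64x spatial
# and 4x temporal compression map a pixel-space video to a latent tensor.
# `latent_channels=16` is an assumption for illustration only.

def latent_shape(num_frames: int, height: int, width: int,
                 spatial_ratio: int = 32, temporal_ratio: int = 4,
                 latent_channels: int = 16) -> tuple[int, int, int, int]:
    """Return the (t, c, h, w) latent shape for a (T, 3, H, W) video."""
    assert height % spatial_ratio == 0 and width % spatial_ratio == 0
    assert num_frames % temporal_ratio == 0
    return (num_frames // temporal_ratio,
            latent_channels,
            height // spatial_ratio,
            width // spatial_ratio)

# A 64-frame 480x832 clip becomes a (16, 16, 15, 26) latent:
# 16 * 15 * 26 = 6,240 latent tokens for the diffusion model.
print(latent_shape(64, 480, 832))  # (16, 16, 15, 26)
```

At the same temporal ratio, moving from a typical 8x spatial autoencoder to 32x cuts the latent token count by 16x; fewer tokens mean fewer diffusion-model FLOPs per step, which is the source of the reported latency savings.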
Community
DC-VideoGen is a new post-training framework for accelerating video diffusion models. Key features:
🎬 Supports video generation up to 2160×3840 resolution on a single H100 GPU
⚡ Delivers 14.8× faster inference than the base model
💰 230× lower training cost compared to training from scratch (only 10 H100 GPU days for Wan-2.1-14B)
DC-VideoGen is built on two core innovations:
- Deep Compression Video Autoencoder (DC-AE-V): a new family of deep compression autoencoders for video data with a chunk-causal temporal design, providing 32×/64× spatial and 4× temporal compression (see the sketch after this list).
- AE-Adapt-V: a robust adaptation strategy that enables rapid and stable transfer of pre-trained video diffusion models to DC-AE-V.
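Below is a hypothetical sketch of the chunk-causal idea as we read it from the abstract: frames are processed in fixed-size temporal chunks, with bidirectional modeling inside each chunk and causal dependence only on earlier chunks. The `ChunkCausalEncoder` class, the single strided convolution standing in for the encoder, and the mean-pooled context passing are illustrative placeholders, not the paper's architecture.

```python
import torch

class ChunkCausalEncoder(torch.nn.Module):
    """Toy chunk-causal encoder: bidirectional within a chunk,
    causal across chunks. Placeholder stand-in, not DC-AE-V itself."""

    def __init__(self, chunk_frames: int = 16):
        super().__init__()
        self.chunk_frames = chunk_frames
        # One strided conv stands in for the real encoder stack;
        # its stride realizes the 4x temporal / 32x spatial compression.
        self.chunk_encoder = torch.nn.Conv3d(
            3, 16, kernel_size=(4, 32, 32), stride=(4, 32, 32))

    def forward(self, video: torch.Tensor) -> torch.Tensor:
        # video: (B, 3, T, H, W), T a multiple of chunk_frames
        b, c, t, h, w = video.shape
        assert t % self.chunk_frames == 0
        latents, context = [], None
        for start in range(0, t, self.chunk_frames):
            chunk = video[:, :, start:start + self.chunk_frames]
            z = self.chunk_encoder(chunk)  # bidirectional within the chunk
            if context is not None:
                # Causal across chunks: condition only on earlier chunks.
                z = z + context.mean(dim=2, keepdim=True)
            latents.append(z)
            context = z
        return torch.cat(latents, dim=2)

x = torch.randn(1, 3, 32, 64, 64)   # 32 frames of 64x64 video
print(ChunkCausalEncoder()(x).shape)  # torch.Size([1, 16, 8, 2, 2])
```

Because each chunk conditions only on past chunks, the encoder can stream over arbitrarily many chunks at inference time, which is consistent with the claimed generalization to videos longer than those seen during training.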
The following similar papers were recommended by the Semantic Scholar API:
- Turbo-VAED: Fast and Stable Transfer of Video-VAEs to Mobile Devices (2025)
- OneVAE: Joint Discrete and Continuous Optimization Helps Discrete Video VAE Train Better (2025)
- Asymmetric VAE for One-Step Video Super-Resolution Acceleration (2025)
- MIDAS: Multimodal Interactive Digital-humAn Synthesis via Real-time Autoregressive Video Generation (2025)
- SANA-Video: Efficient Video Generation with Block Linear Diffusion Transformer (2025)
- RAP: Real-time Audio-driven Portrait Animation with Video Diffusion Transformer (2025)
- LongLive: Real-time Interactive Long Video Generation (2025)