---
license: apache-2.0
datasets:
  - FastVideo/Wan-Syn_77x448x832_600k
base_model:
  - Wan-AI/Wan2.1-T2V-14B-Diffusers
---

# FastVideo FastWan2.1-T2V-14B-480P-Diffusers

## Introduction

This model is jointly finetuned with DMD (Distribution Matching Distillation) and VSA (Video Sparse Attention), based on Wan-AI/Wan2.1-T2V-14B-Diffusers. It supports efficient 3-step inference and generates high-quality videos at 61×448×832 resolution. We train on the FastVideo 480P Synthetic Wan dataset (FastVideo/Wan-Syn_77x448x832_600k), which consists of 600k synthetic latents.
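
The metadata lists WanDMDPipeline as the pipeline class. As a minimal inference sketch, assuming the checkpoint also loads through the standard diffusers WanPipeline interface (the 3-step, CFG-free settings reflect the DMD distillation described above; the prompt and fps are illustrative):

```python
import torch
from diffusers import AutoencoderKLWan, WanPipeline
from diffusers.utils import export_to_video

model_id = "FastVideo/FastWan2.1-T2V-14B-480P-Diffusers"

# The Wan VAE is usually kept in fp32 for numerical stability.
vae = AutoencoderKLWan.from_pretrained(model_id, subfolder="vae", torch_dtype=torch.float32)
pipe = WanPipeline.from_pretrained(model_id, vae=vae, torch_dtype=torch.bfloat16)
pipe.to("cuda")

frames = pipe(
    prompt="A majestic lion strides across the golden savanna at sunrise.",
    height=448,
    width=832,
    num_frames=61,           # matches the 61x448x832 output shape above
    num_inference_steps=3,   # DMD-distilled: few-step sampling
    guidance_scale=0.0,      # distilled models are typically run without CFG
).frames[0]

export_to_video(frames, "output.mp4", fps=16)
```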


## Model Overview

- 3-step inference is supported, achieving up to a 50x speedup on a single H100 GPU.
- Supports generating videos at 61×448×832 resolution.
- Finetuning and inference scripts are available in the FastVideo repository.
- Try it out through FastVideo; we support a wide range of GPUs, from H100 to 4090, as well as Mac users! A minimal usage sketch follows this list.
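
The sketch below assumes FastVideo's VideoGenerator API (from_pretrained / generate_video); the prompt and output path are illustrative, so check the FastVideo repository for the maintained examples.

```python
from fastvideo import VideoGenerator

# Assumed FastVideo API: VideoGenerator.from_pretrained handles model
# download, pipeline setup, and multi-GPU sharding.
generator = VideoGenerator.from_pretrained(
    "FastVideo/FastWan2.1-T2V-14B-480P-Diffusers",
    num_gpus=1,  # increase for multi-GPU inference
)

# Hypothetical prompt and output location, for illustration only.
video = generator.generate_video(
    "A curious raccoon peers over a moss-covered log at dusk.",
    output_path="outputs/",
    save_video=True,
)
```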

## Training Infrastructure

Training was conducted on 8 nodes with 64 H200 GPUs in total, using a global batch size of 64.
We enable gradient checkpointing, set HSDP_shard_dim = 8 and sequence_parallel_size = 4, and use a learning rate of 1e-5.
We set the VSA attention sparsity to 0.9, and training runs for 3000 steps (~52 hours).
The detailed training example script is available here; the setup is summarized in the sketch below.
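
For reference, a hypothetical summary of the configuration described above; the key names are illustrative, not FastVideo's actual flags (see the training script for those).

```python
# Hypothetical summary of the training setup described above.
# Key names are illustrative, not FastVideo's actual CLI flags.
train_config = {
    "num_nodes": 8,                # 64 H200 GPUs in total (8 per node)
    "global_batch_size": 64,
    "gradient_checkpointing": True,
    "hsdp_shard_dim": 8,           # shard parameters across groups of 8 GPUs
    "sequence_parallel_size": 4,   # split each video's tokens across 4 GPUs
    "learning_rate": 1e-5,
    "vsa_sparsity": 0.9,           # VSA skips ~90% of attention computation
    "train_steps": 3000,           # roughly 52 hours wall clock
}
```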

If you use the FastWan2.1-T2V-14B-480P-Diffusers model in your research, please cite our papers:

```bibtex
@article{zhang2025vsa,
  title={VSA: Faster Video Diffusion with Trainable Sparse Attention},
  author={Zhang, Peiyuan and Huang, Haofeng and Chen, Yongqi and Lin, Will and Liu, Zhengzhong and Stoica, Ion and Xing, Eric and Zhang, Hao},
  journal={arXiv preprint arXiv:2505.13389},
  year={2025}
}

@article{zhang2025fast,
  title={Fast video generation with sliding tile attention},
  author={Zhang, Peiyuan and Chen, Yongqi and Su, Runlong and Ding, Hangliang and Stoica, Ion and Liu, Zhengzhong and Zhang, Hao},
  journal={arXiv preprint arXiv:2502.04507},
  year={2025}
}
```