pi-Flow: Policy-Based Few-Step Generation via Imitation Distillation
Abstract
Policy-based flow models enable efficient, high-quality image generation by distilling a velocity-predicting teacher into a student that outputs a network-free policy producing dynamic flow velocities, improving both diversity and quality.
Few-step diffusion or flow-based generative models typically distill a velocity-predicting teacher into a student that predicts a shortcut towards denoised data. This format mismatch has led to complex distillation procedures that often suffer from a quality-diversity trade-off. To address this, we propose policy-based flow models (pi-Flow). pi-Flow modifies the output layer of a student flow model to predict a network-free policy at one timestep. The policy then produces dynamic flow velocities at future substeps with negligible overhead, enabling fast and accurate ODE integration on these substeps without extra network evaluations. To match the policy's ODE trajectory to the teacher's, we introduce a novel imitation distillation approach, which matches the policy's velocity to the teacher's along the policy's trajectory using a standard ℓ₂ flow matching loss. By simply mimicking the teacher's behavior, pi-Flow enables stable and scalable training and avoids the quality-diversity trade-off. On ImageNet 256×256, it attains a 1-NFE FID of 2.85, outperforming MeanFlow with the same DiT architecture. On FLUX.1-12B and Qwen-Image-20B at 4 NFEs, pi-Flow achieves substantially better diversity than state-of-the-art few-step methods, while maintaining teacher-level quality.
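To make the distillation step concrete, here is a minimal, self-contained sketch of what a policy-based imitation distillation (pi-ID) update could look like. It assumes a toy policy that linearly interpolates a few anchor velocities emitted by the student in a single forward pass; the names `policy_velocity` and `pi_id_loss`, the anchor parameterization, and the gradient handling along the rollout are illustrative assumptions, not the paper's exact formulation.

```python
import torch

def policy_velocity(anchors, t_anchor, t):
    """Evaluate the network-free policy: interpolate anchor velocities in time."""
    spacing = (t_anchor[0] - t_anchor[-1]) / (len(t_anchor) - 1)
    w = torch.clamp(1.0 - (t - t_anchor).abs() / spacing, min=0.0)  # hat weights, shape (K,)
    w = w / w.sum()
    return torch.einsum('k,bkd->bd', w, anchors)                    # (B, D)

def pi_id_loss(student, teacher, x_t, t=1.0, K=4, substeps=4):
    # One student call at time t yields the policy parameters (here: K anchor velocities).
    t_anchor = torch.linspace(t, 0.0, K)
    anchors = student(x_t, t).view(x_t.shape[0], K, -1)
    # Roll out the policy's ODE trajectory and match its velocity to the teacher's
    # velocity at every substep with a plain L2 (flow matching) loss.
    loss, x, dt = 0.0, x_t, t / substeps
    for i in range(substeps):
        t_i = t - i * dt
        v_pi = policy_velocity(anchors, t_anchor, t_i)
        with torch.no_grad():
            v_teacher = teacher(x, t_i)                   # query teacher velocity on the rollout
        loss = loss + ((v_pi - v_teacher) ** 2).mean()
        x = (x - dt * v_pi).detach()                      # Euler substep (detached here for simplicity)
    return loss / substeps

# Toy usage with stand-in networks; real training would use a pre-trained teacher
# and a student initialized from it.
B, D = 2, 8
student = lambda x, t: x.repeat(1, 4)   # pretends to emit K=4 anchor velocities
teacher = lambda x, t: -x               # pretends to be the teacher's velocity field
print(pi_id_loss(student, teacher, torch.randn(B, D)))
```

Note that the only supervision here is the per-substep L2 velocity loss; no JVPs, auxiliary networks, or adversarial terms appear, which is what the abstract means by imitation distillation.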
Community
[arXiv] [Code] [pi-Qwen Demo🤗] [pi-FLUX Demo🤗]
Introducing pi-Flow, a new paradigm for few-step generation, which distills a pre-trained flow model into a policy-based flow model using simple imitation learning, achieving state-of-the-art diversity and teacher-aligned quality in 4-step text-to-image generation.
Highlights
Novel Framework: pi-Flow stands for policy-based flow models. The network does not output a denoised state; instead, it outputs a fast, network-free policy that rolls out multiple ODE substeps to reach the denoised state (see the sampling sketch after this list).
Simple Distillation: pi-Flow adopts policy-based imitation distillation (pi-ID). No JVPs, no auxiliary networks, no GANs; just a single L2 loss between the policy's velocities and the teacher's along the policy's trajectory.
Diversity and Teacher Alignment: pi-Flow mitigates the quality-diversity trade-off, generating highly diverse samples while maintaining high quality. It also remains highly faithful to the teacher's style. Qualitative comparisons show that pi-Flow samples generally align with the teacher's outputs and exhibit significantly higher diversity than those from DMD-based students (e.g., SenseFlow, Qwen-Image Lightning).
Scalability: pi-Flow scales from ImageNet DiT to 20-billion-parameter text-to-image models (Qwen-Image).
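Below is a hedged sketch of what few-step sampling with a policy-based flow model could look like: each outer step spends one network evaluation (NFE) to obtain the policy, which is then rolled out over cheap, network-free ODE substeps. The interpolated-anchor policy and the function `sample_with_policy` are illustrative stand-ins under the same assumptions as the training sketch above, not the released implementation.

```python
import torch

def sample_with_policy(student, x, nfe=4, substeps=8, K=4):
    """Few-step sampling: one network call per outer step, network-free substeps inside."""
    t = 1.0
    for n in range(nfe):
        t_next = 1.0 - (n + 1) / nfe
        # One network evaluation: predict the policy (here, K anchor velocities on [t, t_next]).
        t_anchor = torch.linspace(t, t_next, K)
        anchors = student(x, t).view(x.shape[0], K, -1)
        spacing = (t - t_next) / (K - 1)
        dt = (t - t_next) / substeps
        # Network-free substeps: integrate the ODE using the policy's dynamic velocities.
        for i in range(substeps):
            t_i = t - i * dt
            w = torch.clamp(1.0 - (t_i - t_anchor).abs() / spacing, min=0.0)
            v = torch.einsum('k,bkd->bd', w / w.sum(), anchors)
            x = x - dt * v
        t = t_next
    return x

# Toy usage with a stand-in network; real use would load a distilled pi-Flow student.
net = lambda x, t: x.repeat(1, 4)       # pretends to emit K=4 anchor velocities
print(sample_with_policy(net, torch.randn(2, 8)).shape)   # torch.Size([2, 8])
```

The point of the design is visible in the loop structure: the cost per outer step is dominated by the single `student(x, t)` call, while the inner substeps only evaluate the cheap analytic policy, so accuracy of the ODE integration can be increased without adding NFEs.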