---
license: apache-2.0
datasets:
- Lakonik/t2i-prompts-3m
base_model:
- Qwen/Qwen-Image
pipeline_tag: text-to-image
library_name: diffusers
---

# pi-Flow: Policy-Based Flow Models

Distilled 4-step Qwen-Image models proposed in the paper:

**pi-Flow: Policy-Based Few-Step Generation via Imitation Distillation**
[Hansheng Chen](https://lakonik.github.io/)<sup>1</sup>, [Kai Zhang](https://kai-46.github.io/website/)<sup>2</sup>, [Hao Tan](https://research.adobe.com/person/hao-tan/)<sup>2</sup>, [Leonidas Guibas](https://geometry.stanford.edu/?member=guibas)<sup>1</sup>, [Gordon Wetzstein](http://web.stanford.edu/~gordonwz/)<sup>1</sup>, [Sai Bi](https://sai-bi.github.io/)<sup>2</sup>

<sup>1</sup>Stanford University, <sup>2</sup>Adobe Research
[[arXiv](https://arxiv.org/abs/2510.14974)] [[Code](https://github.com/Lakonik/piFlow)] [[pi-Qwen Demo🤗](https://huggingface.co/spaces/Lakonik/pi-Qwen)] [[pi-FLUX Demo🤗](https://huggingface.co/spaces/Lakonik/pi-FLUX.1)]

![teaser](https://cdn-uploads.huggingface.co/production/uploads/638067fcb334960c987fbeda/H0J1LYUcSS5YqOwZqQ0Jb.jpeg)

## Usage

Please first install the [official code repository](https://github.com/Lakonik/piFlow). We provide diffusers pipelines for easy inference. The following code demonstrates how to sample images from the distilled Qwen-Image models.

### 4-NFE GM-Qwen (GMFlow Policy, Recommended)

Note: GM-Qwen supports elastic inference. Feel free to set `num_inference_steps` to any value above 4.

```python
import torch
from diffusers import FlowMatchEulerDiscreteScheduler
from lakonlab.pipelines.piqwen_pipeline import PiQwenImagePipeline

pipe = PiQwenImagePipeline.from_pretrained(
    'Qwen/Qwen-Image',
    torch_dtype=torch.bfloat16)
adapter_name = pipe.load_piflow_adapter(  # you may later call `pipe.set_adapters([adapter_name, ...])` to combine other adapters (e.g., style LoRAs)
    'Lakonik/pi-Qwen-Image',
    subfolder='gmqwen_k8_piid_4step',
    target_module_name='transformer')
pipe.scheduler = FlowMatchEulerDiscreteScheduler.from_config(  # use fixed shift=3.2
    pipe.scheduler.config,
    shift=3.2,
    shift_terminal=None,
    use_dynamic_shifting=False)
pipe = pipe.to('cuda')

out = pipe(
    prompt='Photo of a coffee shop entrance featuring a chalkboard sign reading "π-Qwen Coffee 😊 $2 per cup," with a neon '
           'light beside it displaying "π-通义千问". Next to it hangs a poster showing a beautiful Chinese woman, '
           'and beneath the poster is written "e≈2.71828-18284-59045-23536-02874-71352".',
    width=1920,
    height=1080,
    num_inference_steps=4,
    generator=torch.Generator().manual_seed(42),
).images[0]
out.save('gmqwen_4nfe.png')
```

![gmqwen_4nfe](https://cdn-uploads.huggingface.co/production/uploads/638067fcb334960c987fbeda/0FrSLiI1SkbGJhJ9znZw8.png)
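Because GM-Qwen supports elastic inference, the same loaded pipeline can be reused with a larger step budget. Below is a minimal sketch, assuming the `pipe` object from the example above is still in memory; the shorter prompt and the 8-step setting are illustrative choices, not values from the paper.

```python
# Elastic inference: reuse the GM-Qwen pipeline loaded above and simply
# request more denoising steps (any value above 4 is supported).
out = pipe(
    prompt='Photo of a coffee shop entrance with a chalkboard sign reading "π-Qwen Coffee 😊 $2 per cup".',
    width=1920,
    height=1080,
    num_inference_steps=8,  # illustrative: more steps than the 4-step default
    generator=torch.Generator().manual_seed(42),
).images[0]
out.save('gmqwen_8nfe.png')
```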
### 4-NFE DX-Qwen (DX Policy)

```python
import torch
from diffusers import FlowMatchEulerDiscreteScheduler
from lakonlab.pipelines.piqwen_pipeline import PiQwenImagePipeline

pipe = PiQwenImagePipeline.from_pretrained(
    'Qwen/Qwen-Image',
    policy_type='DX',
    policy_kwargs=dict(
        segment_size=1 / 3.5,  # 1 / (nfe - 1 + final_step_size_scale), here 1 / (4 - 1 + 0.5)
        shift=3.2),
    torch_dtype=torch.bfloat16)
adapter_name = pipe.load_piflow_adapter(  # you may later call `pipe.set_adapters([adapter_name, ...])` to combine other adapters (e.g., style LoRAs)
    'Lakonik/pi-Qwen-Image',
    subfolder='dxqwen_n10_piid_4step',
    target_module_name='transformer')
pipe.scheduler = FlowMatchEulerDiscreteScheduler.from_config(  # use fixed shift=3.2
    pipe.scheduler.config,
    shift=3.2,
    shift_terminal=None,
    use_dynamic_shifting=False)
pipe = pipe.to('cuda')

out = pipe(
    prompt='Photo of a coffee shop entrance featuring a chalkboard sign reading "π-Qwen Coffee 😊 $2 per cup," with a neon '
           'light beside it displaying "π-通义千问". Next to it hangs a poster showing a beautiful Chinese woman, '
           'and beneath the poster is written "e≈2.71828-18284-59045-23536-02874-71352".',
    width=1920,
    height=1080,
    num_inference_steps=4,
    generator=torch.Generator().manual_seed(42),
).images[0]
out.save('dxqwen_4nfe.png')
```

![dxqwen_4nfe](https://cdn-uploads.huggingface.co/production/uploads/638067fcb334960c987fbeda/Cq1SiHQ0YYjCFk_rVdHmd.png)

## Citation

```bibtex
@misc{piflow,
    title={pi-Flow: Policy-Based Few-Step Generation via Imitation Distillation},
    author={Hansheng Chen and Kai Zhang and Hao Tan and Leonidas Guibas and Gordon Wetzstein and Sai Bi},
    year={2025},
    eprint={2510.14974},
    archivePrefix={arXiv},
    primaryClass={cs.LG},
    url={https://arxiv.org/abs/2510.14974},
}
```