
pi-Flow: Policy-Based Flow Models

Distilled 4-step Qwen-Image models proposed in the paper:

pi-Flow: Policy-Based Few-Step Generation via Imitation Distillation
Hansheng Chen1, Kai Zhang2, Hao Tan2, Leonidas Guibas1, Gordon Wetzstein1, Sai Bi2
1Stanford University, 2Adobe Research
[arXiv] [Code] [pi-Qwen Demo🤗] [pi-FLUX Demo🤗]

(Teaser image)

Usage

Please first install the official code repository.

We provide diffusers pipelines for easy inference. The following code demonstrates how to sample images from the distilled Qwen-Image models.

4-NFE GM-Qwen (GMFlow Policy, Recommended)

Note: GM-Qwen supports elastic inference. Feel free to set num_inference_steps to any value above 4.

import torch
from diffusers import FlowMatchEulerDiscreteScheduler
from lakonlab.pipelines.piqwen_pipeline import PiQwenImagePipeline

pipe = PiQwenImagePipeline.from_pretrained(
    'Qwen/Qwen-Image',
    torch_dtype=torch.bfloat16)
adapter_name = pipe.load_piflow_adapter(  # you may later call `pipe.set_adapters([adapter_name, ...])` to combine other adapters (e.g., style LoRAs)
    'Lakonik/pi-Qwen-Image',
    subfolder='gmqwen_k8_piid_4step',
    target_module_name='transformer')
pipe.scheduler = FlowMatchEulerDiscreteScheduler.from_config(  # use fixed shift=3.2
    pipe.scheduler.config, shift=3.2, shift_terminal=None, use_dynamic_shifting=False)
pipe = pipe.to('cuda')

out = pipe(
    prompt='Photo of a coffee shop entrance featuring a chalkboard sign reading "π-Qwen Coffee 😊 $2 per cup," with a neon '
           'light beside it displaying "π-通义千问". Next to it hangs a poster showing a beautiful Chinese woman, '
           'and beneath the poster is written "e≈2.71828-18284-59045-23536-02874-71352".',
    width=1920,
    height=1080,
    num_inference_steps=4,
    generator=torch.Generator().manual_seed(42),
).images[0]
out.save('gmqwen_4nfe.png')
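For intuition on the shift=3.2 setting: with use_dynamic_shifting=False, diffusers' FlowMatchEulerDiscreteScheduler applies a static shift that remaps each noise level sigma to shift * sigma / (1 + (shift - 1) * sigma), concentrating sampling steps at high noise. A minimal sketch (shift_sigma is an illustrative helper, not part of the pipeline API):

```python
# Static timestep shift used by flow-matching schedulers:
# sigma' = s * sigma / (1 + (s - 1) * sigma), with s = 3.2 here.
def shift_sigma(sigma: float, shift: float = 3.2) -> float:
    return shift * sigma / (1 + (shift - 1) * sigma)

# With 4 uniform noise levels, the shifted schedule stays longer at high noise:
levels = [1.0, 0.75, 0.5, 0.25]
print([round(shift_sigma(t), 4) for t in levels])  # [1.0, 0.9057, 0.7619, 0.5161]
```

Note that the endpoints sigma = 0 and sigma = 1 are fixed points of the remapping, so only the interior spacing of the schedule changes.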

(Sample output: gmqwen_4nfe.png)

4-NFE DX-Qwen (DX Policy)

import torch
from diffusers import FlowMatchEulerDiscreteScheduler
from lakonlab.pipelines.piqwen_pipeline import PiQwenImagePipeline

pipe = PiQwenImagePipeline.from_pretrained(
    'Qwen/Qwen-Image',
    policy_type='DX',
    policy_kwargs=dict(
        segment_size=1 / 3.5,  # 1 / (nfe - 1 + final_step_size_scale)
        shift=3.2),
    torch_dtype=torch.bfloat16)
adapter_name = pipe.load_piflow_adapter(  # you may later call `pipe.set_adapters([adapter_name, ...])` to combine other adapters (e.g., style LoRAs)
    'Lakonik/pi-Qwen-Image',
    subfolder='dxqwen_n10_piid_4step',
    target_module_name='transformer')
pipe.scheduler = FlowMatchEulerDiscreteScheduler.from_config(  # use fixed shift=3.2
    pipe.scheduler.config, shift=3.2, shift_terminal=None, use_dynamic_shifting=False)
pipe = pipe.to('cuda')

out = pipe(
    prompt='Photo of a coffee shop entrance featuring a chalkboard sign reading "π-Qwen Coffee 😊 $2 per cup," with a neon '
           'light beside it displaying "π-通义千问". Next to it hangs a poster showing a beautiful Chinese woman, '
           'and beneath the poster is written "e≈2.71828-18284-59045-23536-02874-71352".',
    width=1920,
    height=1080,
    num_inference_steps=4,
    generator=torch.Generator().manual_seed(42),
).images[0]
out.save('dxqwen_4nfe.png')
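The segment_size in the DX policy config follows the formula in the inline comment, 1 / (nfe - 1 + final_step_size_scale). A quick sanity check (dx_segment_size is an illustrative helper; final_step_size_scale = 0.5 is an assumption inferred from the 1 / 3.5 value used above for 4 NFE):

```python
# Illustrative helper reproducing the config comment's formula:
# segment_size = 1 / (nfe - 1 + final_step_size_scale)
def dx_segment_size(nfe: int, final_step_size_scale: float) -> float:
    return 1 / (nfe - 1 + final_step_size_scale)

# Assuming final_step_size_scale = 0.5, 4 NFE gives 1 / 3.5:
print(dx_segment_size(4, 0.5))  # 0.2857142857142857
```

If you change the number of function evaluations, recompute segment_size accordingly rather than reusing 1 / 3.5.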

(Sample output: dxqwen_4nfe.png)

Citation

@misc{piflow,
      title={pi-Flow: Policy-Based Few-Step Generation via Imitation Distillation}, 
      author={Hansheng Chen and Kai Zhang and Hao Tan and Leonidas Guibas and Gordon Wetzstein and Sai Bi},
      year={2025},
      eprint={2510.14974},
      archivePrefix={arXiv},
      primaryClass={cs.LG},
      url={https://arxiv.org/abs/2510.14974}, 
}