---
base_model: black-forest-labs/FLUX.1-dev
library_name: diffusers
base_model_relation: quantized
tags:
- quantization
---

# Usage with Diffusers

To use this quantized FLUX.1 [dev] checkpoint, you need to install the 🧨 diffusers and bitsandbytes libraries:

```
pip install -U diffusers
pip install -U bitsandbytes
```

After installing the required libraries, you can run the following script:

```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "diffusers/FLUX.1-dev-bnb-8bit", torch_dtype=torch.bfloat16
)
pipe.to("cuda")

prompt = "Baroque style, a lavish palace interior with ornate gilded ceilings, intricate tapestries, and dramatic lighting over a grand staircase."
pipe_kwargs = {
    "prompt": prompt,
    "height": 1024,
    "width": 1024,
    "guidance_scale": 3.5,
    "num_inference_steps": 50,
    "max_sequence_length": 512,
}

image = pipe(
    **pipe_kwargs,
    generator=torch.manual_seed(0),
).images[0]
image.save("flux.png")
```
Visual comparison of Flux-dev model outputs using BF16 (left) and BnB 4-bit (right).
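This card ships an already-quantized checkpoint, so the script above is all you need for inference. If you instead want to quantize the base `black-forest-labs/FLUX.1-dev` weights yourself with diffusers' bitsandbytes backend, a minimal sketch could look like the following; the 8-bit setting and the CPU-offloading choice are illustrative assumptions, not part of this checkpoint:

```python
import torch
from diffusers import BitsAndBytesConfig, FluxPipeline, FluxTransformer2DModel

# Illustrative 8-bit bitsandbytes config; adjust to your memory budget.
quant_config = BitsAndBytesConfig(load_in_8bit=True)

# Quantize only the transformer, the largest component of the pipeline.
transformer = FluxTransformer2DModel.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    subfolder="transformer",
    quantization_config=quant_config,
    torch_dtype=torch.bfloat16,
)

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    transformer=transformer,
    torch_dtype=torch.bfloat16,
)

# Optional: offload idle components to CPU instead of calling pipe.to("cuda").
pipe.enable_model_cpu_offload()
```

Generating images with this pipeline then works exactly as in the usage example above.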