## About

Tiny AutoEncoder trained on the latent space of black-forest-labs/FLUX.2-dev's autoencoder. It converts between latent and image space up to 20× faster and with 28× fewer parameters, at the cost of a small amount of quality.
Code for this model is available here.
## Round-Trip Comparisons

## Example Usage
```python
import torch
import torchvision.transforms.functional as F
from PIL import Image

from flux2_tiny_autoencoder import Flux2TinyAutoEncoder

device = torch.device("cuda")
tiny_vae = Flux2TinyAutoEncoder.from_pretrained(
    "fal/FLUX.2-Tiny-AutoEncoder",
).to(device=device, dtype=torch.bfloat16)

# Load the image and rescale it from [0, 1] to the model's [-1, 1] range.
pil_image = Image.open("/path/to/image.png").convert("RGB")
image_tensor = F.to_tensor(pil_image)
image_tensor = image_tensor.unsqueeze(0) * 2.0 - 1.0
image_tensor = image_tensor.to(device, dtype=tiny_vae.dtype)

# Round-trip: encode to latents, then decode back to image space.
with torch.inference_mode():
    latents = tiny_vae.encode(image_tensor, return_dict=False)[0]
    recon = tiny_vae.decode(latents, return_dict=False)[0]

# Map the reconstruction from [-1, 1] back to [0, 1] and save it.
recon = recon.squeeze(0).clamp(-1, 1) / 2.0 + 0.5
recon = recon.float().detach().cpu()
recon_image = F.to_pil_image(recon)
recon_image.save("reconstituted.png")
```
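The scaling in the snippet above maps pixel values from `[0, 1]` into the `[-1, 1]` range the autoencoder expects, and back again after decoding. The round-trip can be sanity-checked in isolation with plain Python; the helper names here are illustrative, not part of the model's API:

```python
def to_model_range(x):
    """Map a pixel value from [0, 1] to the [-1, 1] range the VAE expects."""
    return x * 2.0 - 1.0


def to_image_range(x):
    """Clamp to [-1, 1], then map back to [0, 1] for saving as an image."""
    return max(-1.0, min(1.0, x)) / 2.0 + 0.5


# The two transforms are exact inverses on [0, 1] (for exactly
# representable values such as these dyadic fractions).
assert all(to_image_range(to_model_range(v)) == v for v in (0.0, 0.25, 0.5, 1.0))
```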
## Use with Diffusers 🧨
```python
import torch
from diffusers import AutoModel, Flux2Pipeline

device = torch.device("cuda")

# Load the tiny autoencoder and swap it in for the full-size VAE.
tiny_vae = AutoModel.from_pretrained(
    "fal/FLUX.2-Tiny-AutoEncoder", trust_remote_code=True, torch_dtype=torch.bfloat16
).to(device)
pipe = Flux2Pipeline.from_pretrained(
    "black-forest-labs/FLUX.2-dev", vae=tiny_vae, torch_dtype=torch.bfloat16
).to(device)

# Generate as usual; the tiny autoencoder decodes the final latents.
image = pipe(prompt="a photo of a cat").images[0]
image.save("cat.png")
```

