---
title: Kontext Lightning 8-Step Model | FLUX [dev]
emoji: 
colorFrom: red
colorTo: yellow
sdk: gradio
sdk_version: 5.35.0
app_file: app_kontext.py
pinned: true
short_description: Inspired by our 8-Step FLUX Merged/Fusion Models
---

**Update 7/9/25:** This model is now quantized and implemented in [this example space](https://huggingface.co/spaces/LPX55/Kontext-Multi_Lightning_4bit-nf4/). Seeing preliminary VRAM usage of around ~10 GB with faster inference. Will be experimenting with different weights and schedulers to find particularly well-performing combinations.

# FLUX.1 Kontext-dev X LoRA Experimentation

Highly experimental; will update with more details later.

- 6-8 steps
- <s>Euler, SGM Uniform (recommended, feel free to play around)</s> Getting mixed results now; feel free to experiment and share what works.
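For reference, a low-step run like the one this Space targets can be sketched with the `diffusers` `FluxKontextPipeline`. This is a minimal sketch, not the Space's actual `app_kontext.py`: the LoRA repo id is a placeholder, and it assumes a recent `diffusers` release that ships `FluxKontextPipeline`, a CUDA GPU, and access to the gated `black-forest-labs/FLUX.1-Kontext-dev` weights.

```python
# Hypothetical usage sketch for an 8-step FLUX.1 Kontext-dev + lightning-LoRA run.
# Assumes: diffusers with FluxKontextPipeline, torch with CUDA, and model access.
import torch
from diffusers import FluxKontextPipeline
from diffusers.utils import load_image

NUM_STEPS = 8  # the 6-8 step range noted above


def build_pipeline(lora_repo: str) -> FluxKontextPipeline:
    """Load FLUX.1 Kontext-dev and attach a lightning LoRA (placeholder repo id)."""
    pipe = FluxKontextPipeline.from_pretrained(
        "black-forest-labs/FLUX.1-Kontext-dev",
        torch_dtype=torch.bfloat16,
    )
    pipe.load_lora_weights(lora_repo)  # hypothetical LoRA repo, not a real id
    pipe.to("cuda")
    return pipe


if __name__ == "__main__":
    pipe = build_pipeline("your-username/kontext-lightning-lora")  # placeholder
    source = load_image("input.png")  # image to edit
    result = pipe(
        image=source,
        prompt="turn the sketch into a watercolor painting",
        num_inference_steps=NUM_STEPS,
    ).images[0]
    result.save("output.png")
```

Scheduler choice (e.g. Euler with SGM Uniform spacing) would be swapped in via `pipe.scheduler` before inference; as noted above, results at this step count vary, so treat any particular scheduler setting as a starting point rather than a recommendation.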