FlowSlider: Training-Free Continuous Image Editing via Fidelity-Steering Decomposition
Abstract
FlowSlider enables continuous image editing with slider-style control by decomposing updates into fidelity and steering components within Rectified Flow, providing stable strength control without additional training.
Continuous image editing aims to provide slider-style control of edit strength while preserving source-image fidelity and maintaining a consistent edit direction. Existing learning-based slider methods typically rely on auxiliary modules trained with synthetic or proxy supervision. This introduces additional training overhead and couples slider behavior to the training distribution, which can reduce reliability under distribution shifts in edits or domains. We propose FlowSlider, a training-free method for continuous editing in Rectified Flow. FlowSlider decomposes FlowEdit's update into (i) a fidelity term, which acts as a source-conditioned stabilizer that preserves identity and structure, and (ii) a steering term that drives the semantic transition toward the target edit. Geometric analysis and empirical measurements show that these terms are approximately orthogonal, enabling stable strength control by scaling only the steering term while keeping the fidelity term unchanged. As a result, FlowSlider provides smooth and reliable control without post-training, improving continuous editing quality across diverse tasks.
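The decomposition described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the function name `flowslider_step` is hypothetical, and it assumes the fidelity term is approximated by the source-conditioned velocity and the steering term by the target-minus-source velocity difference, with the slider strength scaling only the steering term.

```python
import numpy as np

def flowslider_step(z, v_src, v_tgt, dt, strength=1.0):
    """One hypothetical FlowSlider-style Euler update (illustrative sketch).

    z        : current latent (NumPy array)
    v_src    : source-conditioned velocity at z (fidelity/stabilizer term)
    v_tgt    : target-conditioned velocity at z
    dt       : integration step size
    strength : slider value; scales ONLY the steering term
    """
    fidelity = v_src            # assumed fidelity term: preserves identity/structure
    steering = v_tgt - v_src    # assumed steering term: drives the semantic edit
    return z + dt * (fidelity + strength * steering)

# Toy demonstration with made-up velocities:
z = np.zeros(4)
v_src = np.array([1.0, 0.0, 0.0, 0.0])
v_tgt = np.array([1.0, 1.0, 0.0, 0.0])

# strength = 0 reduces to the pure fidelity (source) update;
# increasing strength moves the update toward the target direction.
print(flowslider_step(z, v_src, v_tgt, dt=0.1, strength=0.0))  # → [0.1 0.  0.  0. ]
print(flowslider_step(z, v_src, v_tgt, dt=0.1, strength=1.0))  # → [0.1 0.1 0.  0. ]
```

Because the fidelity term is left unscaled, the source-preserving component of the update is the same at every slider value; only the edit magnitude varies, which is what makes the strength control stable.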
Community
Training-free continuous image editing
The following related papers were recommended by the Semantic Scholar API:
- The Unreasonable Effectiveness of Text Embedding Interpolation for Continuous Image Steering (2026)
- Editing on the Generative Manifold: A Theoretical and Empirical Study of General Diffusion-Based Image Editing Trade-offs (2026)
- ChordEdit: One-Step Low-Energy Transport for Image Editing (2026)
- BiFM: Bidirectional Flow Matching for Few-Step Image Editing and Generation (2026)
- FusionEdit: Semantic Fusion and Attention Modulation for Training-Free Image Editing (2026)
- TokenDial: Continuous Attribute Control in Text-to-Video via Spatiotemporal Token Offsets (2026)
- Towards Training-Free Scene Text Editing (2026)