Temporal Alignment Guidance: On-Manifold Sampling in Diffusion Models
Abstract
Diffusion models have achieved remarkable success as generative models. However, even a well-trained model can accumulate errors throughout the generation process. These errors become particularly problematic when arbitrary guidance is applied to steer samples toward desired properties, which often degrades sample fidelity. In this paper, we propose a general solution to the off-manifold phenomenon observed in diffusion models. Our approach leverages a time predictor to estimate deviations from the desired data manifold at each timestep, showing that a larger gap between the predicted and scheduled timestep is associated with reduced generation quality. We then design a novel guidance mechanism, Temporal Alignment Guidance (TAG), which attracts samples back to the desired manifold at every timestep during generation. Through extensive experiments, we demonstrate that TAG consistently produces samples closely aligned with the desired manifold at each timestep, leading to significant improvements in generation quality across various downstream tasks.
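The core idea described above can be sketched in a few lines: a time predictor estimates the effective noise level of the current sample, and a guidance term nudges the sample so its predicted timestep matches the sampler's scheduled timestep. The sketch below is a toy illustration under stated assumptions, not the paper's implementation: `toy_time_predictor` is a hypothetical stand-in for a learned network (it uses sample magnitude as a crude noise-level proxy), and the gradient of the time-alignment penalty is estimated with finite differences rather than backpropagation.

```python
import numpy as np

def toy_time_predictor(x):
    # Hypothetical stand-in for a learned time predictor: in a
    # variance-exploding-style process, sample magnitude grows with the
    # noise level, so mean squared magnitude serves as a crude proxy
    # for the effective timestep of x.
    return float(np.mean(x ** 2))

def tag_correction(x, t_target, predictor, step_size=0.05, eps=1e-4):
    """One temporal-alignment correction step (toy sketch).

    Moves x along -grad_x (predictor(x) - t_target)^2, estimated by
    finite differences, pulling the sample back toward the set of
    points whose predicted time matches the scheduled time t_target.
    """
    grad = np.zeros_like(x)
    base = (predictor(x) - t_target) ** 2
    for i in range(x.size):
        xp = x.copy()
        xp.flat[i] += eps
        grad.flat[i] = ((predictor(xp) - t_target) ** 2 - base) / eps
    return x - step_size * grad

rng = np.random.default_rng(0)
x = rng.normal(scale=2.0, size=4)   # a sample that has drifted off-manifold
t_target = 1.0                      # scheduled timestep (noise-level proxy)

gap_before = abs(toy_time_predictor(x) - t_target)
for _ in range(200):
    x = tag_correction(x, t_target, toy_time_predictor)
gap_after = abs(toy_time_predictor(x) - t_target)
print(gap_before, gap_after)        # the time gap shrinks after correction
```

In practice such a correction would be interleaved with the ordinary denoising update at every sampling step, and the gradient would come from differentiating the trained time predictor directly; the finite-difference loop here only keeps the example self-contained.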
Community
TL;DR: We propose Temporal Alignment Guidance (TAG), a framework that provably mitigates off-manifold errors in diffusion models by guiding samples back to the data manifold at each timestep, significantly improving generation quality across tasks.
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- Prompt-aware classifier free guidance for diffusion models (2025)
- Penalizing Boundary Activation for Object Completeness in Diffusion Models (2025)
- HiGS: History-Guided Sampling for Plug-and-Play Enhancement of Diffusion Models (2025)
- CountLoop: Training-Free High-Instance Image Generation via Iterative Agent Guidance (2025)
- No MoCap Needed: Post-Training Motion Diffusion Models with Reinforcement Learning using Only Textual Prompts (2025)
- Inference-Time Alignment Control for Diffusion Models with Reinforcement Learning Guidance (2025)
- Guiding Noisy Label Conditional Diffusion Models with Score-based Discriminator Correction (2025)