
Faster Diffusion via Temporal Attention Decomposition

Authors:
Liu, Haozhe
Zhang, Wentian
Xie, Jinheng
Faccio, Francesco
Xu, Mengmeng
Xiang, Tao
Shou, Mike Zheng
Perez-Rua, Juan-Manuel
Schmidhuber, Jürgen
Publication Year: 2024

Abstract

We explore the role of the attention mechanism during inference in text-conditional diffusion models. Empirical observations suggest that cross-attention outputs converge to a fixed point after several inference steps. The convergence time naturally divides the inference process into two phases: an initial phase for planning text-oriented visual semantics, which are then translated into images in a subsequent fidelity-improving phase. Cross-attention is essential in the initial phase but almost irrelevant thereafter; self-attention, by contrast, plays a minor role initially but becomes crucial in the second phase. These findings yield a simple, training-free method, temporally gating the attention (TGATE), which efficiently generates images by caching and reusing attention outputs at scheduled time steps. Experimental results show that, when applied to a variety of existing text-conditional diffusion models, TGATE accelerates them by 10%-50%. The code of TGATE is available at https://github.com/HaozheLiu-ST/T-GATE.
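The caching idea described in the abstract lends itself to a compact illustration. Below is a minimal, hypothetical PyTorch sketch of gating an attention block: the output is computed normally until a scheduled gate step, cached there, and reused for all remaining steps. The class and parameter names (`TemporallyGatedAttention`, `gate_step`) are illustrative assumptions, not the actual API of the T-GATE repository.

```python
import torch
import torch.nn as nn


class TemporallyGatedAttention(nn.Module):
    """Reuse a cached attention output once inference passes a gate step.

    A minimal sketch of the caching idea from the abstract; the actual
    TGATE code (https://github.com/HaozheLiu-ST/T-GATE) differs in detail.
    """

    def __init__(self, attn: nn.Module, gate_step: int):
        super().__init__()
        self.attn = attn            # wrapped (cross- or self-) attention block
        self.gate_step = gate_step  # step after which the output is reused
        self.cache = None

    def forward(self, *inputs, step: int):
        if step < self.gate_step:
            out = self.attn(*inputs)
            if step == self.gate_step - 1:
                # Assume convergence here: freeze the output for later steps.
                self.cache = out.detach()
            return out
        # Fidelity-improving phase: skip the attention computation entirely.
        return self.cache


if __name__ == "__main__":
    # Illustrative toy: a linear layer stands in for a real attention block.
    layer = TemporallyGatedAttention(nn.Linear(64, 64), gate_step=10)
    x = torch.randn(1, 77, 64)
    for t in range(25):
        y = layer(x, step=t)  # steps 10..24 reuse the cached tensor
```

The choice of gate step trades speed against fidelity: gating earlier skips more attention computation but risks caching an output that has not yet converged.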

Details

Database: arXiv
Publication Type: Report
Accession number: edsarx.2404.02747
Document Type: Working Paper