
What the DAAM: Interpreting Stable Diffusion Using Cross Attention

Authors:
Tang, Raphael
Liu, Linqing
Pandey, Akshat
Jiang, Zhiying
Yang, Gefei
Kumar, Karun
Stenetorp, Pontus
Lin, Jimmy
Ture, Ferhan
Publication Year:
2022

Abstract

Large-scale diffusion neural networks represent a substantial milestone in text-to-image generation, but they remain poorly understood, lacking interpretability analyses. In this paper, we perform a text-image attribution analysis on Stable Diffusion, a recently open-sourced model. To produce pixel-level attribution maps, we upscale and aggregate cross-attention word-pixel scores in the denoising subnetwork, naming our method DAAM. We evaluate its correctness by testing its semantic segmentation ability on nouns, as well as its generalized attribution quality on all parts of speech, rated by humans. We then apply DAAM to study the role of syntax in the pixel space, characterizing head-dependent heat map interaction patterns for ten common dependency relations. Finally, we study several semantic phenomena using DAAM, with a focus on feature entanglement, where we find that cohyponyms worsen generation quality and descriptive adjectives attend too broadly. To our knowledge, we are the first to interpret large diffusion models from a visuolinguistic perspective, which enables future lines of research. Our code is at https://github.com/castorini/daam.

Comment: First two authors contributed equally. 13 pages, 15 figures
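The abstract's core recipe (collect word-pixel cross-attention scores from the denoising subnetwork, upscale them to pixel resolution, and aggregate across layers and timesteps) can be sketched in a few lines of PyTorch. The sketch below is illustrative only and is not the authors' implementation, which lives in the linked repository: the tensor layout, the averaging over attention heads, and the bicubic upscaling mode are assumptions made for the example.

    import torch
    import torch.nn.functional as F

    def aggregate_word_heat_map(attention_maps, word_index, out_size=(512, 512)):
        # attention_maps: list of cross-attention tensors, one per layer and
        # denoising timestep, each of shape (heads, h*w, num_prompt_tokens),
        # where h*w is that layer's latent spatial resolution (assumed square).
        # word_index: index of the prompt token being attributed.
        heat = torch.zeros(out_size)
        for attn in attention_maps:
            heads, hw, _ = attn.shape
            side = int(hw ** 0.5)
            # scores for the chosen word, averaged over heads
            word_scores = attn[:, :, word_index].mean(dim=0).reshape(1, 1, side, side)
            # upscale the low-resolution scores to pixel space
            upscaled = F.interpolate(word_scores, size=out_size,
                                     mode='bicubic', align_corners=False)
            heat += upscaled.squeeze()
        # normalize for visualization
        return heat / heat.max()

Summing the upscaled maps over layers and timesteps and normalizing yields a single per-word heat map that can be overlaid on the generated image; a thresholded version of such a map is the kind of object the paper compares against semantic segmentations of nouns.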

Details

Database: arXiv
Publication Type: Report
Accession Number: edsarx.2210.04885
Document Type: Working Paper