
Diffusion Soup: Model Merging for Text-to-Image Diffusion Models

Authors :
Biggs, Benjamin
Seshadri, Arjun
Zou, Yang
Jain, Achin
Golatkar, Aditya
Xie, Yusheng
Achille, Alessandro
Swaminathan, Ashwin
Soatto, Stefano
Publication Year :
2024

Abstract

We present Diffusion Soup, a compartmentalization method for Text-to-Image Generation that averages the weights of diffusion models trained on sharded data. By construction, our approach enables training-free continual learning and unlearning with no additional memory or inference costs, since models corresponding to data shards can be added or removed by re-averaging. We show that Diffusion Soup samples from a point in weight space that approximates the geometric mean of the distributions of constituent datasets, which offers anti-memorization guarantees and enables zero-shot style mixing. Empirically, Diffusion Soup outperforms a paragon model trained on the union of all data shards and achieves a 30% improvement in Image Reward (.34 $\to$ .44) on domain sharded data, and a 59% improvement in IR (.37 $\to$ .59) on aesthetic data. In both cases, souping also prevails in TIFA score (respectively, 85.5 $\to$ 86.5 and 85.6 $\to$ 86.8). We demonstrate robust unlearning -- removing any individual domain shard only lowers performance by 1% in IR (.45 $\to$ .44) -- and validate our theoretical insights on anti-memorization using real data. Finally, we showcase Diffusion Soup's ability to blend the distinct styles of models finetuned on different shards, resulting in the zero-shot generation of hybrid styles.
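The core operation described above, averaging the weights of diffusion models trained on separate data shards, can be sketched in a few lines. The snippet below is a minimal illustration, assuming PyTorch checkpoints with matching parameter keys; the checkpoint paths, the uniform weighting, and the helper name `soup_state_dicts` are assumptions for illustration, not details taken from the paper.

```python
# Minimal sketch of the weight-averaging ("souping") idea from the abstract.
# Assumes each shard model is saved as a PyTorch state dict with identical keys.
from typing import Dict, List
import torch


def soup_state_dicts(state_dicts: List[Dict[str, torch.Tensor]]) -> Dict[str, torch.Tensor]:
    """Return the uniform average of parameter tensors across shard models."""
    assert len(state_dicts) > 0
    souped = {}
    for key in state_dicts[0].keys():
        stacked = torch.stack([sd[key].float() for sd in state_dicts], dim=0)
        souped[key] = stacked.mean(dim=0)
    return souped


# Continual learning or unlearning by re-averaging: adding or removing a shard
# model simply means recomputing the average over the updated set of checkpoints.
shard_ckpts = ["shard_a.pt", "shard_b.pt", "shard_c.pt"]  # hypothetical paths
state_dicts = [torch.load(path, map_location="cpu") for path in shard_ckpts]
torch.save(soup_state_dicts(state_dicts), "diffusion_soup.pt")
```

Because merging happens purely in weight space, this sketch reflects the abstract's claim of no additional memory or inference cost: the merged model has the same architecture and size as any single shard model.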

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2406.08431
Document Type :
Working Paper