
Steered Diffusion: A Generalized Framework for Plug-and-Play Conditional Image Synthesis

Authors:
Nair, Nithin Gopalakrishnan
Cherian, Anoop
Lohit, Suhas
Wang, Ye
Koike-Akino, Toshiaki
Patel, Vishal M.
Marks, Tim K.
Publication Year:
2023

Abstract

Conditional generative models typically demand large annotated training sets to achieve high-quality synthesis. As a result, there has been significant interest in designing models that perform plug-and-play generation, i.e., to use a predefined or pretrained model, which is not explicitly trained on the generative task, to guide the generative process (e.g., using language). However, such guidance is typically useful only towards synthesizing high-level semantics rather than editing fine-grained details as in image-to-image translation tasks. To this end, and capitalizing on the powerful fine-grained generative control offered by the recent diffusion-based generative models, we introduce Steered Diffusion, a generalized framework for photorealistic zero-shot conditional image generation using a diffusion model trained for unconditional generation. The key idea is to steer the image generation of the diffusion model at inference time via designing a loss using a pre-trained inverse model that characterizes the conditional task. This loss modulates the sampling trajectory of the diffusion process. Our framework allows for easy incorporation of multiple conditions during inference. We present experiments using steered diffusion on several tasks including inpainting, colorization, text-guided semantic editing, and image super-resolution. Our results demonstrate clear qualitative and quantitative improvements over state-of-the-art diffusion-based plug-and-play models while adding negligible additional computational cost.

Comment: Accepted at ICCV 2023
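The steering idea the abstract describes, modulating the reverse-diffusion trajectory with the gradient of a loss computed through a pretrained inverse model, can be illustrated with a minimal PyTorch sketch of one sampling step. This is not the authors' released code; the names `unet`, `inverse_model`, `betas`, `alphas`, `alphas_cumprod`, and `guidance_scale`, and the classifier-guidance-style mean shift, are assumptions for the example.

```python
import torch
import torch.nn.functional as F

def steered_ddpm_step(unet, x_t, t, betas, alphas, alphas_cumprod,
                      inverse_model, y, guidance_scale=1.0):
    """One reverse DDPM step, steered at inference time by the gradient
    of a task loss computed through a pretrained inverse model.
    Illustrative sketch only; names and guidance form are assumptions."""
    x_t = x_t.detach().requires_grad_(True)
    eps = unet(x_t, t)                      # predicted noise at step t
    a_bar = alphas_cumprod[t]
    # Differentiable estimate of the clean image from the current sample.
    x0_hat = (x_t - torch.sqrt(1.0 - a_bar) * eps) / torch.sqrt(a_bar)
    # Loss characterizing the conditional task, e.g. for colorization
    # inverse_model could be a grayscale operator and y the gray input.
    loss = F.mse_loss(inverse_model(x0_hat), y)
    grad = torch.autograd.grad(loss, x_t)[0]
    with torch.no_grad():
        # Unconditional DDPM posterior mean for epsilon-prediction...
        mean = (x_t - betas[t] / torch.sqrt(1.0 - a_bar) * eps) \
               / torch.sqrt(alphas[t])
        # ...shifted against the loss gradient, steering the trajectory
        # toward samples consistent with the condition y.
        mean = mean - guidance_scale * grad
        noise = torch.randn_like(x_t) if t > 0 else torch.zeros_like(x_t)
        return mean + torch.sqrt(betas[t]) * noise
```

Summing several such losses within the same step is one natural way to realize the multi-condition inference the abstract mentions.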

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2310.00224
Document Type:
Working Paper