
Enhancing Prompt Following with Visual Control Through Training-Free Mask-Guided Diffusion

Authors:
Chen, Hongyu
Gao, Yiqi
Zhou, Min
Wang, Peng
Li, Xubin
Ge, Tiezheng
Zheng, Bo
Publication Year:
2024

Abstract

Recently, integrating visual controls into text-to-image (T2I) models, such as ControlNet, has received significant attention for its finer control capabilities. While various training-free methods have sought to enhance prompt following in T2I models, the issue remains rarely studied under visual control, especially in scenarios where the visual controls are misaligned with the text prompts. In this paper, we address the challenge of "Prompt Following With Visual Control" and propose a training-free approach named Mask-guided Prompt Following (MGPF). Object masks are introduced to distinguish the aligned and misaligned parts of the visual controls and prompts. Meanwhile, a network, dubbed Masked ControlNet, is designed to utilize these object masks for object generation in the misaligned visual control regions. Further, to improve attribute matching, a simple yet effective loss is designed to align the attention maps of attributes with the object regions constrained by ControlNet and the object masks. The efficacy and superiority of MGPF are validated through comprehensive quantitative and qualitative experiments.
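The attribute-matching loss mentioned in the abstract lends itself to a small illustration. The sketch below is a minimal PyTorch guess at what a mask-constrained attention alignment loss could look like; the function name, tensor shapes, and normalization are assumptions for illustration only, not the paper's actual formulation.

```python
import torch

def mask_alignment_loss(attn_maps: torch.Tensor, object_mask: torch.Tensor) -> torch.Tensor:
    """
    Hypothetical sketch of an attention-mask alignment loss.

    attn_maps:   (num_tokens, H, W) cross-attention maps for the attribute tokens.
    object_mask: (H, W) binary mask of the target object region (1 inside, 0 outside).

    Encourages each attribute token to concentrate its attention mass inside
    the masked object region.
    """
    # Normalize each token's attention map so it sums to 1.
    attn = attn_maps / (attn_maps.sum(dim=(-2, -1), keepdim=True) + 1e-8)
    # Fraction of attention mass falling inside the object mask, per token.
    inside = (attn * object_mask).sum(dim=(-2, -1))
    # Penalize attention mass that falls outside the mask.
    return (1.0 - inside).mean()
```

In a training-free pipeline, the gradient of such a loss with respect to the noisy latent would typically be used to nudge the latent at each denoising step, rather than to update any model weights.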

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2404.14768
Document Type:
Working Paper