SSMG: Spatial-Semantic Map Guided Diffusion Model for Free-form Layout-to-Image Generation

Authors :
Jia, Chengyou
Luo, Minnan
Dang, Zhuohang
Dai, Guang
Chang, Xiaojun
Wang, Mengmeng
Wang, Jingdong
Source :
38th AAAI Conference on Artificial Intelligence (AAAI2024), Vancouver, BC, Canada, 2024
Publication Year :
2023

Abstract

Despite significant progress in Text-to-Image (T2I) generative models, even lengthy and complex text descriptions still struggle to convey detailed control. In contrast, Layout-to-Image (L2I) generation, which aims to generate realistic and complex scene images from user-specified layouts, has risen to prominence. However, existing methods transform layout information into tokens or RGB images for conditional control in the generative process, leading to insufficient spatial and semantic controllability over individual instances. To address these limitations, we propose a novel Spatial-Semantic Map Guided (SSMG) diffusion model that adopts a feature map, derived from the layout, as guidance. Owing to the rich spatial and semantic information encapsulated in well-designed feature maps, SSMG achieves superior generation quality with sufficient spatial and semantic controllability compared to previous works. Additionally, we propose the Relation-Sensitive Attention (RSA) and Location-Sensitive Attention (LSA) mechanisms. The former models the relationships among multiple objects within scenes, while the latter heightens the model's sensitivity to the spatial information embedded in the guidance. Extensive experiments demonstrate that SSMG achieves highly promising results, setting a new state-of-the-art across a range of metrics encompassing fidelity, diversity, and controllability.

Comment: Accepted to AAAI 2024
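To make the core idea concrete, here is a minimal sketch of how a layout might be rasterized into a spatial-semantic feature map and then used as cross-attention guidance. This is an illustration only: the box-painting rule, the class-embedding lookup, and the plain scaled dot-product attention below are assumptions standing in for the paper's actual map construction and its RSA/LSA mechanisms.

```python
import numpy as np

def build_spatial_semantic_map(boxes, class_embs, H, W, C):
    """Rasterize layout boxes into an (H, W, C) feature map.

    Each box is (x0, y0, x1, y1, class_id) with coordinates normalized
    to [0, 1]. The box paints its class embedding into the covered
    region; later boxes overwrite earlier ones. (Hypothetical placement
    rule -- the paper's map construction may differ.)
    """
    fmap = np.zeros((H, W, C))
    for (x0, y0, x1, y1, cid) in boxes:
        r0, r1 = int(y0 * H), int(y1 * H)
        c0, c1 = int(x0 * W), int(x1 * W)
        fmap[r0:r1, c0:c1] = class_embs[cid]
    return fmap

def map_guided_attention(queries, fmap):
    """Plain scaled dot-product cross-attention that reads the
    flattened spatial-semantic map as keys and values -- a simplified
    stand-in for guiding a diffusion U-Net with the map."""
    kv = fmap.reshape(-1, fmap.shape[-1])                 # (H*W, C)
    scores = queries @ kv.T / np.sqrt(kv.shape[-1])       # (N, H*W)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)        # softmax
    return weights @ kv                                   # (N, C)

# Usage: one box covering the top-left quadrant, attended by 3 queries.
rng = np.random.default_rng(0)
class_embs = rng.normal(size=(2, 4))                      # 2 classes, C=4
fmap = build_spatial_semantic_map(
    [(0.0, 0.0, 0.5, 0.5, 0)], class_embs, H=8, W=8, C=4)
queries = rng.normal(size=(3, 4))
attended = map_guided_attention(queries, fmap)            # shape (3, 4)
```

Because the map keeps per-pixel semantics rather than collapsing the layout into tokens, each spatial location carries the identity of the instance covering it, which is the property the abstract credits for the improved per-instance controllability.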

Details

Database :
arXiv
Journal :
38th AAAI Conference on Artificial Intelligence (AAAI2024), Vancouver, BC, Canada, 2024
Publication Type :
Report
Accession number :
edsarx.2308.10156
Document Type :
Working Paper