
FlipConcept: Tuning-Free Multi-Concept Personalization for Text-to-Image Generation

Authors:
Woo, Young Beom
Kim, Sun Eung
Publication Year:
2025

Abstract

Recently, methods that integrate multiple personalized concepts into a single image have garnered significant attention in the field of text-to-image (T2I) generation. However, existing methods experience performance degradation in complex scenes with multiple objects due to distortions in non-personalized regions. To address this issue, we propose FlipConcept, a novel approach that seamlessly integrates multiple personalized concepts into a single image without requiring additional tuning. We introduce guided appearance attention to accurately mimic the appearance of a personalized concept as intended. Additionally, we introduce mask-guided noise mixing to protect non-personalized regions during editing. Lastly, we apply background dilution to minimize attribute leakage, which is the undesired blending of personalized concept attributes with other objects in the image. In our experiments, we demonstrate that the proposed method, despite not requiring tuning, outperforms existing models in both single and multiple personalized concept inference.

Comment: 9 pages, 4 figures
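The abstract does not give the formula for mask-guided noise mixing, but the general idea of protecting non-edited regions with a mask is a well-known latent-blending pattern (as in blended latent diffusion). The sketch below is a minimal, hypothetical illustration of that generic pattern, not the paper's actual implementation; the function name and toy shapes are assumptions.

```python
import numpy as np

def mask_guided_noise_mixing(personalized_latent, original_latent, mask):
    """Generic mask-guided blend (illustrative, not FlipConcept's exact method):
    keep the personalized latent inside the mask, and preserve the original
    latent in non-personalized regions outside it."""
    # mask: 1.0 inside personalized-concept regions, 0.0 elsewhere
    return mask * personalized_latent + (1.0 - mask) * original_latent

# Toy 1x4x4 arrays standing in for diffusion latents.
orig = np.zeros((1, 4, 4))   # "background" latent
pers = np.ones((1, 4, 4))    # "personalized" latent
m = np.zeros((1, 4, 4))
m[:, 1:3, 1:3] = 1.0         # personalized region in the center

mixed = mask_guided_noise_mixing(pers, orig, m)
```

After blending, the center of `mixed` comes from the personalized latent while the border remains the untouched original, which is the protection effect the abstract describes.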

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2502.15203
Document Type:
Working Paper