
Shaping a Stabilized Video by Mitigating Unintended Changes for Concept-Augmented Video Editing

Authors:
Guo, Mingce
He, Jingxuan
Tang, Shengeng
Wang, Zhangye
Cheng, Lechao
Publication Year:
2024

Abstract

Text-driven video editing with generative diffusion models has garnered significant attention due to its potential applications. However, existing approaches are constrained by the limited word embeddings provided during pre-training, which hinders nuanced editing of open concepts with specific attributes. Directly altering keywords in target prompts often disrupts the attention mechanisms in unintended ways. To enable more flexible editing, this work proposes an improved concept-augmented video editing approach that generates diverse and stable target videos by devising abstract conceptual pairs. Specifically, the framework comprises concept-augmented textual inversion and a dual prior supervision mechanism. The former provides plug-and-play guidance of stable diffusion for video editing, effectively capturing target attributes for more stylized results; the latter significantly enhances video stability and fidelity. Comprehensive evaluations demonstrate that our approach generates more stable and lifelike videos, outperforming state-of-the-art methods.
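The core idea behind textual inversion, which the abstract builds on, is to learn a new embedding vector for a pseudo-token while the generative model itself stays frozen. The following is a minimal toy sketch of that optimization pattern, not the paper's implementation: the frozen model is stood in for by a fixed random linear map `W`, and the target concept by a fixed feature vector, both purely illustrative assumptions.

```python
import numpy as np

# Toy sketch (illustrative, not the paper's method): textual inversion
# optimizes only a new embedding vector for a pseudo-token; the generative
# model is frozen. Here W plays the frozen model and `target` the features
# of the concept to capture -- all names are assumptions.

rng = np.random.default_rng(0)
dim_embed, dim_feat = 8, 16

W = rng.normal(size=(dim_feat, dim_embed))   # frozen "model" weights
target = rng.normal(size=dim_feat)           # features of the target concept

v = np.zeros(dim_embed)                      # trainable pseudo-token embedding
lr = 0.01

def loss(v):
    # squared reconstruction error of the frozen model's output
    return float(np.sum((W @ v - target) ** 2))

initial = loss(v)
for _ in range(500):
    grad = 2 * W.T @ (W @ v - target)        # gradient w.r.t. the embedding only
    v -= lr * grad                           # gradient-descent update

assert loss(v) < initial                     # the learned embedding fits the concept better
```

In the actual method the frozen model is a pre-trained diffusion model and the loss is the denoising objective, but the division of labor is the same: all gradients flow into the new embedding, which can then be plugged into prompts at inference time.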

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2410.12526
Document Type:
Working Paper