
Motion Inversion for Video Customization

Authors: Wang, Luozhou; Shen, Guibao; Liang, Yixun; Tao, Xin; Wan, Pengfei; Zhang, Di; Li, Yijun; Chen, Yingcong
Publication Year: 2024

Abstract

In this research, we present a novel approach to motion customization in video generation, addressing a gap in the exploration of motion representation within video generative models. Recognizing the unique challenges posed by the spatiotemporal nature of video, our method introduces Motion Embeddings, a set of explicit, temporally coherent one-dimensional embeddings derived from a given video. These embeddings are designed to integrate seamlessly with the temporal transformer modules of video diffusion models, modulating self-attention computations across frames without compromising spatial integrity. Our approach offers a compact and efficient solution to motion representation and enables complex manipulations of motion characteristics through vector arithmetic in the embedding space. Furthermore, we identify the Temporal Discrepancy in video generative models, which refers to variations in how different motion modules process temporal relationships between frames, and we leverage this understanding to optimize the integration of our motion embeddings. Our contributions include a motion embedding tailored for customization tasks, insights into the temporal processing differences of video models, and a demonstration of the practical advantages and effectiveness of our method through extensive experiments.

Comment: Project Page: https://wileewang.github.io/MotionInversion/
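To make the mechanism described in the abstract concrete, here is a minimal sketch (not the authors' code) of how a per-frame, one-dimensional motion embedding could be injected into a temporal self-attention block of a video diffusion model, and how embeddings might be blended via vector arithmetic. The class names (`MotionEmbedding`, `TemporalSelfAttention`), the zero initialization, and the additive injection point are assumptions for illustration; the paper's actual integration may differ.

```python
# Hypothetical sketch of motion-embedding injection into temporal attention.
import torch
import torch.nn as nn

class MotionEmbedding(nn.Module):
    """One learnable 1-D embedding per frame, broadcast over spatial positions."""
    def __init__(self, num_frames: int, dim: int):
        super().__init__()
        # Zero init (an assumption): the block starts as plain temporal attention.
        self.embed = nn.Parameter(torch.zeros(num_frames, dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch * spatial, frames, dim) -- the layout temporal attention sees.
        # Adding a per-frame vector touches only the temporal axis, leaving the
        # spatial arrangement untouched.
        return x + self.embed.unsqueeze(0)

class TemporalSelfAttention(nn.Module):
    """Temporal self-attention modulated by a motion embedding."""
    def __init__(self, num_frames: int, dim: int, heads: int = 8):
        super().__init__()
        self.motion = MotionEmbedding(num_frames, dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Inject the motion embedding before attention across frames,
        # then apply a residual connection.
        h = self.motion(x)
        out, _ = self.attn(h, h, h)
        return x + out

if __name__ == "__main__":
    B, S, F, D = 2, 16, 8, 64          # batch, spatial positions, frames, channels
    block = TemporalSelfAttention(num_frames=F, dim=D)
    x = torch.randn(B * S, F, D)       # frames along the sequence (attention) axis
    print(block(x).shape)              # torch.Size([32, 8, 64])

    # Vector arithmetic in the embedding space (purely illustrative): blending
    # embeddings inverted from two reference clips, embed_a and embed_b.
    embed_a, embed_b = MotionEmbedding(F, D), MotionEmbedding(F, D)
    alpha = 0.5
    blended = alpha * embed_a.embed + (1 - alpha) * embed_b.embed
```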

Details

Database: arXiv
Publication Type: Report
Accession number: edsarx.2403.20193
Document Type: Working Paper