
Enhancing Motion in Text-to-Video Generation with Decomposed Encoding and Conditioning

Authors :
Ruan, Penghui
Wang, Pichao
Saxena, Divya
Cao, Jiannong
Shi, Yuhui
Publication Year :
2024

Abstract

Despite advancements in Text-to-Video (T2V) generation, producing videos with realistic motion remains challenging. Current models often yield static or minimally dynamic outputs, failing to capture the complex motions described by text. This issue stems from internal biases in text encoding, which overlook motion, and from inadequate conditioning mechanisms in T2V generation models. To address this, we propose a novel framework called DEcomposed MOtion (DEMO), which enhances motion synthesis in T2V generation by decomposing both text encoding and conditioning into content and motion components. Our method includes a content encoder for static elements and a motion encoder for temporal dynamics, alongside separate content and motion conditioning mechanisms. Crucially, we introduce text-motion and video-motion supervision to improve the model's understanding and generation of motion. Evaluations on benchmarks such as MSR-VTT, UCF-101, WebVid-10M, EvalCrafter, and VBench demonstrate DEMO's superior ability to produce videos with enhanced motion dynamics while maintaining high visual quality. Our approach significantly advances T2V generation by integrating comprehensive motion understanding directly from textual descriptions. Project page: https://PR-Ryan.github.io/DEMO-project/

Comment: Accepted at NeurIPS 2024, code available at https://github.com/PR-Ryan/DEMO
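
The decomposition described in the abstract can be pictured as two parallel text encoders whose outputs condition video features through separate streams. The PyTorch sketch below is purely illustrative: the module names, dimensions, and cross-attention conditioning scheme are assumptions made to convey the content/motion split, not the authors' DEMO implementation (which is available at the GitHub link above).

    # Illustrative sketch only -- NOT the authors' DEMO implementation.
    # Module names, dimensions, and the conditioning scheme are assumptions
    # meant to convey the content/motion decomposition idea from the abstract.
    import torch
    import torch.nn as nn


    class DecomposedTextConditioning(nn.Module):
        """Encodes a prompt into separate content and motion embeddings,
        then conditions video features via two cross-attention streams."""

        def __init__(self, vocab_size=30522, dim=512, num_heads=8):
            super().__init__()
            self.token_emb = nn.Embedding(vocab_size, dim)
            # Content encoder: static elements (objects, scene, appearance).
            self.content_encoder = nn.TransformerEncoder(
                nn.TransformerEncoderLayer(dim, num_heads, batch_first=True),
                num_layers=2)
            # Motion encoder: temporal dynamics described in the prompt.
            self.motion_encoder = nn.TransformerEncoder(
                nn.TransformerEncoderLayer(dim, num_heads, batch_first=True),
                num_layers=2)
            # Separate conditioning mechanisms (cross-attention) per stream.
            self.content_xattn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
            self.motion_xattn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

        def forward(self, prompt_ids, video_feats):
            # prompt_ids: (batch, tokens); video_feats: (batch, frames*patches, dim)
            tokens = self.token_emb(prompt_ids)
            content = self.content_encoder(tokens)   # static / appearance cues
            motion = self.motion_encoder(tokens)     # temporal / motion cues
            # Condition video features on content, then on motion, with residuals.
            x, _ = self.content_xattn(video_feats, content, content)
            video_feats = video_feats + x
            x, _ = self.motion_xattn(video_feats, motion, motion)
            return video_feats + x


    if __name__ == "__main__":
        model = DecomposedTextConditioning()
        prompt = torch.randint(0, 30522, (2, 16))   # dummy token ids
        feats = torch.randn(2, 8 * 64, 512)         # dummy video latents
        print(model(prompt, feats).shape)           # torch.Size([2, 512, 512])

In this toy setup the two encoders share the prompt tokens but learn independent representations, and each conditions the video features through its own attention stream; the paper's actual encoders, supervision losses, and conditioning design are described in the full text.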

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2410.24219
Document Type :
Working Paper