
InstructAvatar: Text-Guided Emotion and Motion Control for Avatar Generation

Authors :
Wang, Yuchi
Guo, Junliang
Bai, Jianhong
Yu, Runyi
He, Tianyu
Tan, Xu
Sun, Xu
Bian, Jiang
Publication Year :
2024

Abstract

Recent talking-avatar generation models have made strides in achieving realistic, accurate lip synchronization with audio, but they often fall short in controlling and conveying the avatar's detailed expressions and emotions, making the generated video less vivid and controllable. In this paper, we propose a novel text-guided approach for generating emotionally expressive 2D avatars, offering fine-grained control, improved interactivity, and better generalization in the resulting video. Our framework, named InstructAvatar, leverages a natural language interface to control both the emotion and the facial motion of avatars. Technically, we design an automatic annotation pipeline to construct an instruction-video paired training dataset, together with a novel two-branch diffusion-based generator that predicts avatars conditioned on audio and text instructions simultaneously. Experimental results demonstrate that InstructAvatar produces results that align well with both conditions and outperforms existing methods in fine-grained emotion control, lip-sync quality, and naturalness. Our project page is https://wangyuchi369.github.io/InstructAvatar/.
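To make the "two-branch" conditioning concrete, here is a minimal sketch of a diffusion denoiser that injects two condition streams (audio features for lip sync, text-instruction embeddings for emotion/motion) through separate cross-attention branches. This is an illustrative assumption, not InstructAvatar's actual architecture: the module names, dimensions, and the residual cross-attention layout are hypothetical.

```python
# Hypothetical sketch of a two-branch conditioned diffusion denoiser.
# Assumes (not taken from the paper) that audio and text instructions are
# injected via separate cross-attention branches over noisy avatar latents.
import torch
import torch.nn as nn

class TwoBranchDenoiser(nn.Module):
    def __init__(self, dim=256, n_heads=4):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        # Separate cross-attention branches for the two condition streams.
        self.audio_attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.text_attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.time_mlp = nn.Sequential(nn.Linear(1, dim), nn.SiLU(), nn.Linear(dim, dim))
        self.out = nn.Linear(dim, dim)

    def forward(self, x, t, audio_emb, text_emb):
        # x: noisy avatar latents (B, T, dim); t: diffusion timestep (B, 1)
        # audio_emb / text_emb: condition token sequences (B, S, dim)
        h = x + self.time_mlp(t).unsqueeze(1)           # timestep conditioning
        h = h + self.self_attn(h, h, h)[0]              # temporal self-attention
        h = h + self.audio_attn(h, audio_emb, audio_emb)[0]  # lip-sync branch
        h = h + self.text_attn(h, text_emb, text_emb)[0]     # instruction branch
        return self.out(h)                               # predicted noise

# Toy usage with random tensors.
model = TwoBranchDenoiser()
x = torch.randn(2, 16, 256)      # noisy latents
t = torch.rand(2, 1)             # normalized timesteps
audio = torch.randn(2, 50, 256)  # audio feature tokens
text = torch.randn(2, 8, 256)    # instruction embedding tokens
print(model(x, t, audio, text).shape)  # torch.Size([2, 16, 256])
```

Keeping the two cross-attention branches separate lets each condition be supplied, dropped, or weighted independently at inference time, which is one plausible way to honor both an audio track and a free-form text instruction at once.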

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2405.15758
Document Type :
Working Paper