ExpGest: Expressive Speaker Generation Using Diffusion Model and Hybrid Audio-Text Guidance
- Publication Year :
- 2024
Abstract
- Existing gesture generation methods primarily focus on upper-body gestures driven by audio features, neglecting speech content, emotion, and locomotion. These limitations result in stiff, mechanical gestures that fail to convey the true meaning of the audio content. We introduce ExpGest, a novel framework that leverages synchronized text and audio information to generate expressive full-body gestures. Unlike AdaIN or one-hot encoding methods, we design a noise emotion classifier for optimizing adversarial direction noise, avoiding melody distortion and guiding results toward specified emotions. Moreover, aligning semantics and gestures in the latent space provides better generalization capabilities. ExpGest, a diffusion model-based gesture generation framework, is the first attempt to offer mixed generation modes, including audio-driven gestures and text-shaped motion. Experiments show that our framework effectively learns from combined text-driven motion and audio-induced gesture datasets, and preliminary results demonstrate that ExpGest achieves more expressive, natural, and controllable global motion in speakers compared to state-of-the-art models.
- Comment: Accepted by ICME 2024
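- The abstract's emotion-guidance idea is closely related to standard classifier guidance for diffusion models, where a classifier's gradient steers the denoising direction toward a target class. The sketch below illustrates that generic mechanism only; the names `model`, `classifier`, `target_emotion`, and the guidance formula are illustrative assumptions, not the authors' exact noise-optimization procedure.

```python
import torch

def emotion_guided_denoise_step(model, classifier, x_t, t, target_emotion, guidance_scale=2.0):
    """One reverse-diffusion step with emotion-classifier guidance (illustrative sketch only)."""
    # Predict the noise component with a (hypothetical) gesture diffusion model.
    eps = model(x_t, t)

    # Gradient of the target-emotion log-probability w.r.t. the noisy sample,
    # used to nudge the denoising direction toward the desired emotion.
    with torch.enable_grad():
        x_in = x_t.detach().requires_grad_(True)
        logits = classifier(x_in, t)
        log_prob = torch.log_softmax(logits, dim=-1)[:, target_emotion].sum()
        grad = torch.autograd.grad(log_prob, x_in)[0]

    # Shift the predicted noise along the classifier gradient (classifier guidance).
    return eps - guidance_scale * grad
```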
Details
- Database :
- arXiv
- Publication Type :
- Report
- Accession number :
- edsarx.2410.09396
- Document Type :
- Working Paper
- Full Text :
- https://doi.org/10.1109/ICME57554.2024.10687922