
FleSpeech: Flexibly Controllable Speech Generation with Various Prompts

Authors :
Li, Hanzhao
Li, Yuke
Wang, Xinsheng
Hu, Jingbin
Xie, Qicong
Yang, Shan
Xie, Lei
Publication Year :
2025

Abstract

Controllable speech generation methods typically rely on single or fixed prompts, hindering creativity and flexibility. These limitations make it difficult to meet specific user needs in certain scenarios, such as adjusting the style while preserving a selected speaker's timbre, or choosing a style and generating a voice that matches a character's visual appearance. To overcome these challenges, we propose FleSpeech, a novel multi-stage speech generation framework that allows for more flexible manipulation of speech attributes by integrating various forms of control. FleSpeech employs a multimodal prompt encoder that processes and unifies different text, audio, and visual prompts into a cohesive representation. This approach enhances the adaptability of speech synthesis and supports creative and precise control over the generated speech. Additionally, we develop a data collection pipeline for multimodal datasets to facilitate further research and applications in this field. Comprehensive subjective and objective experiments demonstrate the effectiveness of FleSpeech. Audio samples are available at https://kkksuper.github.io/FleSpeech/

Comment: 14 pages, 3 figures
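The abstract's central idea is a multimodal prompt encoder that maps text, audio, and visual prompts into a shared embedding space and fuses whichever prompts are supplied into one control representation. The PyTorch sketch below illustrates that idea only; the module choices, dimensions, mean-pooling, and averaging fusion are all assumptions for illustration, not the architecture described in the paper.

```python
import torch
import torch.nn as nn

class MultimodalPromptEncoder(nn.Module):
    """Minimal sketch of a multimodal prompt encoder: each provided
    prompt (text / audio / visual) is projected into a shared space,
    and the available prompt embeddings are fused into one control
    vector. Illustrative only, not FleSpeech's actual design."""

    def __init__(self, d_model: int = 256, vocab_size: int = 10_000,
                 n_mels: int = 80, d_visual: int = 512):
        super().__init__()
        # Per-modality encoders (stand-ins for real pretrained encoders).
        self.text_emb = nn.Embedding(vocab_size, d_model)
        self.audio_proj = nn.Sequential(nn.Linear(n_mels, d_model), nn.GELU())
        self.visual_proj = nn.Linear(d_visual, d_model)
        self.fuse = nn.Linear(d_model, d_model)

    def forward(self, text_ids=None, mel=None, visual=None):
        # Collect one fixed-size vector per supplied prompt.
        vecs = []
        if text_ids is not None:            # (B, T_text) token ids
            vecs.append(self.text_emb(text_ids).mean(dim=1))
        if mel is not None:                 # (B, T_frames, n_mels) features
            vecs.append(self.audio_proj(mel).mean(dim=1))
        if visual is not None:              # (B, d_visual) image embedding
            vecs.append(self.visual_proj(visual))
        if not vecs:
            raise ValueError("at least one prompt must be given")
        # Average the available prompt embeddings into one control vector.
        return self.fuse(torch.stack(vecs, dim=0).mean(dim=0))

# Usage: style from a text description plus timbre from a reference mel,
# matching the abstract's "adjust style while preserving timbre" scenario.
enc = MultimodalPromptEncoder()
ctrl = enc(text_ids=torch.randint(0, 10_000, (1, 12)),
           mel=torch.randn(1, 200, 80))
print(ctrl.shape)  # torch.Size([1, 256])
```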

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2501.04644
Document Type :
Working Paper