
Speak in the Scene: Diffusion-based Acoustic Scene Transfer toward Immersive Speech Generation

Authors:
Kim, Miseul
Chung, Soo-Whan
Ji, Youna
Kang, Hong-Goo
Choi, Min-Seok
Publication Year:
2024

Abstract

This paper introduces a novel task in generative speech processing, Acoustic Scene Transfer (AST), which aims to transfer the acoustic scene of a speech signal to diverse environments. AST promises an immersive speech-perception experience by adapting the acoustic scene behind a speech signal to a desired environment. We propose AST-LDM for the AST task, which generates speech signals that carry the target acoustic scene of a reference prompt. Specifically, AST-LDM is a latent diffusion model conditioned on CLAP embeddings that describe the target acoustic scene in either the audio or text modality. The contributions of this paper are introducing the AST task and implementing its baseline model. For AST-LDM, we emphasize its core framework: preserving the input speech while generating audio consistent with both the given speech and the target acoustic environment. Experiments, including objective and subjective tests, validate the feasibility and efficacy of our approach.

Comment: Accepted to Interspeech 2024
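The abstract describes a latent diffusion model whose reverse (denoising) process is conditioned on a CLAP-style scene embedding. A minimal sketch of such a conditional reverse process is below; the dimensions, the linear noise schedule, and the stand-in denoiser are all illustrative assumptions, not the paper's actual AST-LDM implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions; the paper does not specify these.
LATENT_DIM = 64   # dimensionality of the speech latent
COND_DIM = 32     # dimensionality of the CLAP-style scene embedding
T = 50            # number of diffusion steps

# Linear noise schedule (a standard DDPM-style assumption).
betas = np.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

# Stand-in denoiser: a small random linear map over [latent, condition].
# A real AST-LDM would use a trained neural network here.
W = rng.normal(0.0, 0.02, size=(LATENT_DIM, LATENT_DIM + COND_DIM))

def predict_noise(z_t, cond):
    """Toy epsilon-prediction conditioned on the scene embedding."""
    return W @ np.concatenate([z_t, cond])

def reverse_diffusion(cond):
    """Run the DDPM reverse process from pure noise, guided by `cond`."""
    z = rng.normal(size=LATENT_DIM)  # start from Gaussian noise
    for t in reversed(range(T)):
        eps = predict_noise(z, cond)
        # Posterior mean of z_{t-1} given the predicted noise.
        z = (z - betas[t] / np.sqrt(1.0 - alpha_bars[t]) * eps) / np.sqrt(alphas[t])
        if t > 0:  # add noise at every step except the last
            z = z + np.sqrt(betas[t]) * rng.normal(size=LATENT_DIM)
    return z

scene_embedding = rng.normal(size=COND_DIM)  # stands in for a CLAP embedding
latent = reverse_diffusion(scene_embedding)
print(latent.shape)  # (64,)
```

In the real model the resulting latent would be decoded back to a waveform; here the sketch only shows how a single conditioning vector steers every denoising step, which is the mechanism the abstract attributes to the CLAP prompt.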

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2406.12688
Document Type:
Working Paper