
BATON: Aligning Text-to-Audio Model with Human Preference Feedback

Authors :
Liao, Huan
Han, Haonan
Yang, Kai
Du, Tianjiao
Yang, Rui
Xu, Zunnan
Xu, Qinmei
Liu, Jingquan
Lu, Jiasheng
Li, Xiu
Publication Year :
2024

Abstract

With the development of AI-Generated Content (AIGC), text-to-audio models are gaining widespread attention. However, it is challenging for these models to generate audio aligned with human preference, due to the inherent information density of natural language and limited model understanding ability. To alleviate this issue, we propose BATON, a framework designed to enhance the alignment between generated audio and text prompts using human preference feedback. BATON comprises three key stages. First, we curated a dataset containing both prompts and the corresponding generated audio, which was then annotated based on human feedback. Second, we introduced a reward model trained on the constructed dataset, which can mimic human preference by assigning rewards to input text-audio pairs. Finally, we employed the reward model to fine-tune an off-the-shelf text-to-audio model. The experimental results demonstrate that BATON can significantly improve the generation quality of the original text-to-audio models with respect to audio integrity, temporal relationships, and alignment with human preference.
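The three stages described in the abstract can be illustrated with a toy sketch. This is a minimal assumption-laden illustration, not the paper's actual implementation: the names `reward_model` and `reward_weighted_loss`, the binary annotation format, and the reward-weighted likelihood objective are all hypothetical stand-ins for the method the abstract outlines.

```python
# Hypothetical sketch of the three BATON stages with toy numbers
# instead of real audio. All names here are illustrative assumptions.

def reward_model(text_audio_pair, annotations):
    """Stage 2: mimic human preference by averaging the binary
    human annotations collected in stage 1 for this text-audio pair."""
    votes = annotations[text_audio_pair]
    return sum(votes) / len(votes)  # reward in [0, 1]

def reward_weighted_loss(log_likelihoods, rewards):
    """Stage 3: a fine-tuning objective that up-weights the model's
    likelihood of highly rewarded audio (reward-weighted NLL)."""
    return -sum(r * ll for r, ll in zip(rewards, log_likelihoods)) / len(rewards)

# Stage 1: a toy annotated dataset mapping (prompt, audio id) pairs to
# binary human-preference votes (1 = preferred, 0 = not preferred).
annotations = {
    ("a dog barks then a car passes", "audio_0"): [1, 1, 0],
    ("a dog barks then a car passes", "audio_1"): [0, 0, 1],
}

rewards = [reward_model(pair, annotations) for pair in annotations]
# Toy per-sample log-likelihoods from the text-to-audio model being tuned.
loss = reward_weighted_loss([-1.2, -0.8], rewards)
```

Here the preferred sample (`audio_0`, reward 2/3) contributes more to the fine-tuning loss than the dispreferred one (reward 1/3), which is the essence of steering the generator toward human-preferred outputs.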

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2402.00744
Document Type :
Working Paper