
Visual Echoes: A Simple Unified Transformer for Audio-Visual Generation

Authors :
Yang, Shiqi
Zhong, Zhi
Zhao, Mengjie
Takahashi, Shusuke
Ishii, Masato
Shibuya, Takashi
Mitsufuji, Yuki
Publication Year :
2024

Abstract

In recent years, with their realistic generation results and wide range of personalized applications, diffusion-based generative models have attracted huge attention in both the visual and audio generation areas. Compared to the considerable advances in text2image and text2audio generation, research in audio2visual or visual2audio generation has been relatively slow. Recent audio-visual generation methods usually resort to huge large language models or composable diffusion models. Instead of designing another giant model for audio-visual generation, in this paper we take a step back and show that a simple and lightweight generative transformer, which has not been fully investigated in multi-modal generation, can achieve excellent results on image2audio generation. The transformer operates in the discrete audio and visual Vector-Quantized GAN space and is trained in a mask-denoising manner. After training, classifier-free guidance can be deployed off the shelf to achieve better performance, without any extra training or modification. Since the transformer model is modality-symmetric, it can also be directly deployed for audio2image generation and co-generation. In our experiments, we show that our simple method surpasses recent image2audio generation methods. Generated audio samples can be found at https://docs.google.com/presentation/d/1ZtC0SeblKkut4XJcRaDsSTuCRIXB3ypxmSi7HTY3IyQ/

Comment: 10 pages
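
As a rough illustration of the mask-denoising training the abstract describes (not the authors' code), the minimal PyTorch sketch below trains a bidirectional transformer over a joint sequence of discrete image and audio tokens: a random fraction of tokens is replaced with a [MASK] id, and the model is trained to recover the originals at the masked positions. All names and sizes here (TokenTransformer, MASK_ID, VOCAB, SEQ_LEN) are hypothetical assumptions, not the paper's configuration.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    MASK_ID = 8192       # hypothetical: id of the special [MASK] token
    VOCAB = 8193         # hypothetical: VQGAN codebook size + mask token
    SEQ_LEN = 256 + 64   # hypothetical: image tokens + audio tokens, concatenated

    class TokenTransformer(nn.Module):
        """Bidirectional transformer over a joint image+audio token sequence."""
        def __init__(self, vocab=VOCAB, dim=512, layers=8, heads=8):
            super().__init__()
            self.embed = nn.Embedding(vocab, dim)
            self.pos = nn.Parameter(torch.zeros(1, SEQ_LEN, dim))
            enc = nn.TransformerEncoderLayer(dim, heads, batch_first=True)
            self.backbone = nn.TransformerEncoder(enc, layers)
            self.head = nn.Linear(dim, vocab)

        def forward(self, tokens):
            h = self.embed(tokens) + self.pos
            return self.head(self.backbone(h))  # (B, SEQ_LEN, VOCAB) logits

    def mask_denoise_step(model, tokens, optimizer):
        """One training step: randomly mask tokens, predict the originals."""
        B, L = tokens.shape
        # Sample a masking ratio per example (cosine schedules are also common).
        ratio = torch.rand(B, 1, device=tokens.device)
        mask = torch.rand(B, L, device=tokens.device) < ratio
        corrupted = tokens.masked_fill(mask, MASK_ID)
        logits = model(corrupted)
        # Cross-entropy loss only on the masked positions.
        loss = F.cross_entropy(logits[mask], tokens[mask])
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()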
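The abstract also notes that classifier-free guidance can be applied off the shelf at sampling time. A common way to do this for masked token transformers is to contrast logits conditioned on the real image tokens against logits where the conditioning tokens are fully masked, then iteratively unmask the most confident audio positions (MaskGIT-style). The sketch below reuses the model and constants from the previous snippet and is likewise a hypothetical illustration, not the paper's implementation.

    @torch.no_grad()
    def guided_logits(model, image_tokens, audio_tokens, scale=3.0):
        """Classifier-free guidance: conditional logits (real image tokens)
        versus unconditional logits (image tokens fully masked)."""
        cond_seq = torch.cat([image_tokens, audio_tokens], dim=1)
        uncond_img = torch.full_like(image_tokens, MASK_ID)
        uncond_seq = torch.cat([uncond_img, audio_tokens], dim=1)
        cond = model(cond_seq)
        uncond = model(uncond_seq)
        return uncond + scale * (cond - uncond)

    @torch.no_grad()
    def sample_audio(model, image_tokens, steps=8, scale=3.0):
        """Image2audio: iteratively unmask audio tokens under guidance."""
        B, n_img = image_tokens.shape
        n_audio = SEQ_LEN - n_img
        audio = torch.full((B, n_audio), MASK_ID, device=image_tokens.device)
        for step in range(steps):
            logits = guided_logits(model, image_tokens, audio, scale)
            audio_logits = logits[:, n_img:]
            audio_logits[..., MASK_ID] = float("-inf")  # never predict [MASK]
            probs = audio_logits.softmax(-1)
            conf, pred = probs.max(-1)
            # Keep already-committed tokens; unmask the most confident rest.
            conf = conf.masked_fill(audio != MASK_ID, -1.0)
            k = int(n_audio * (step + 1) / steps) - int(n_audio * step / steps)
            idx = conf.topk(max(k, 1), dim=-1).indices
            audio.scatter_(1, idx, pred.gather(1, idx))
        return audio

Because the sequence layout is symmetric in the two modalities, the same loop could unmask image tokens given audio tokens instead, which matches the abstract's claim that one model serves image2audio, audio2image, and co-generation.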

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2405.14598
Document Type :
Working Paper