
A Versatile Diffusion Transformer with Mixture of Noise Levels for Audiovisual Generation

Authors:
Kim, Gwanghyun
Martinez, Alonso
Su, Yu-Chuan
Jou, Brendan
Lezama, José
Gupta, Agrim
Yu, Lijun
Jiang, Lu
Jansen, Aren
Walker, Jacob
Somandepalli, Krishna
Publication Year:
2024

Abstract

Training diffusion models for audiovisual sequences allows for a range of generation tasks by learning conditional distributions of various input-output combinations of the two modalities. Nevertheless, this strategy often requires training a separate model for each task, which is expensive. Here, we propose a novel training approach to effectively learn arbitrary conditional distributions in the audiovisual space. Our key contribution lies in how we parameterize the diffusion timestep in the forward diffusion process. Instead of the standard fixed diffusion timestep, we propose applying variable diffusion timesteps across the temporal dimension and across the modalities of the inputs. This formulation offers the flexibility to introduce variable noise levels for different portions of the input, hence the term mixture of noise levels. We propose a transformer-based audiovisual latent diffusion model and show that it can be trained in a task-agnostic fashion using our approach to enable a variety of audiovisual generation tasks at inference time. Experiments demonstrate the versatility of our method in tackling cross-modal and multimodal interpolation tasks in the audiovisual space. Notably, our proposed approach surpasses baselines in generating temporally and perceptually consistent samples conditioned on the input. Project page: avdit2024.github.io
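
To make the central idea concrete, here is a minimal sketch of what a "mixture of noise levels" forward-diffusion step could look like. This is not the authors' code: the function name `forward_diffuse_mixture`, the arguments `audio_lat`, `video_lat`, `alphas_cumprod`, and `num_segments`, and the DDPM-style closed-form forward process q(x_t | x_0) are all illustrative assumptions. The point it demonstrates is the one named in the abstract: timesteps are drawn independently per temporal segment and per modality, rather than one fixed timestep for the whole input.

```python
import torch

def forward_diffuse_mixture(audio_lat, video_lat, alphas_cumprod, num_segments=4):
    """Sketch: apply forward diffusion with a mixture of noise levels,
    i.e. independent timesteps per temporal segment and per modality,
    instead of one fixed timestep for the entire audiovisual input."""
    T = alphas_cumprod.shape[0]  # total number of diffusion timesteps

    def noise(x):
        # x: (batch, frames, ...) latent sequence for one modality
        b, f = x.shape[0], x.shape[1]
        assert f % num_segments == 0, "frames must divide evenly into segments"
        # Draw one independent timestep per (batch item, temporal segment)
        seg_t = torch.randint(0, T, (b, num_segments), device=x.device)
        # Broadcast each segment's timestep to every frame in that segment
        t = seg_t.repeat_interleave(f // num_segments, dim=1)       # (b, f)
        abar = alphas_cumprod[t].view(b, f, *([1] * (x.dim() - 2)))
        eps = torch.randn_like(x)
        x_t = abar.sqrt() * x + (1.0 - abar).sqrt() * eps           # q(x_t | x_0)
        return x_t, eps, t

    # Independent draws per modality, so training exposes the model to
    # arbitrary clean/noisy combinations of audio and video segments.
    return noise(audio_lat), noise(video_lat)
```

Under this parameterization, a single task-agnostic model can be repurposed at inference: portions of the input that serve as conditioning would be held at a low (or zero) noise level while the remaining portions are denoised, which is how one trained model can cover the different cross-modal and interpolation tasks described above.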

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2405.13762
Document Type:
Working Paper