
DMMAN: A two-stage audio-visual fusion framework for sound separation and event localization

Authors :
Zhi Ri Tang
Qijun Huang
Sheng Chang
Ruihan Hu
Songbing Zhou
Wei Han
Edmond Q. Wu
Yisen Liu
Source :
Neural Networks: The Official Journal of the International Neural Network Society, Vol. 133
Publication Year :
2020

Abstract

Videos are widely used as a medium through which people perceive physical changes in the world. However, the audio track of a video typically contains a mixture of sounds from multiple sources, making it difficult to distinguish and localize each sound as a separate entity. To address this problem, this paper establishes the Deep Multi-Modal Attention Network (DMMAN), a model of unconstrained video data for sound source separation and event localization. Built on a multi-modal separator and a multi-modal matching classifier module, the model tackles the sound separation and modal synchronization problems through a two-stage fusion of audio and visual features. To link the multi-modal separator and the multi-modal matching classifier, regression and classification losses are combined into the loss function of the DMMAN. The spectrum masks and attention synchronization scores estimated by the DMMAN generalize readily to the sound source separation and event localization tasks. Quantitative experiments show that the DMMAN not only separates sound sources with high quality, as measured by Signal-to-Distortion Ratio and Signal-to-Interference Ratio, but also handles mixed sound scenes whose components were never heard together during training. The DMMAN also achieves higher classification accuracy than the contrast baselines on the event localization tasks.
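The sketch below illustrates the kind of two-stage fusion and joint loss the abstract describes: a separator stage that fuses visual features with the mixture spectrogram to predict a mask, and a matching classifier stage that fuses the separated audio with the visual stream for event classification, trained with a combined regression and classification loss. All module names, feature dimensions, pooling choices, and the loss weighting are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a DMMAN-style two-stage audio-visual fusion (assumptions,
# not the paper's architecture): stage 1 predicts a spectrum mask, stage 2
# classifies the event from the separated audio plus visual features.
import torch
import torch.nn as nn


class MultiModalSeparator(nn.Module):
    """Stage 1: fuse visual features with the mixture spectrogram to predict a mask."""
    def __init__(self, freq_bins=256, visual_dim=512, hidden=256):
        super().__init__()
        self.audio_enc = nn.Linear(freq_bins, hidden)
        self.visual_enc = nn.Linear(visual_dim, hidden)
        self.mask_head = nn.Sequential(
            nn.Linear(2 * hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, freq_bins), nn.Sigmoid())  # mask values in [0, 1]

    def forward(self, mix_spec, visual_feat):
        # mix_spec: (batch, time, freq); visual_feat: (batch, time, visual_dim)
        fused = torch.cat([self.audio_enc(mix_spec),
                           self.visual_enc(visual_feat)], dim=-1)
        return self.mask_head(fused)  # estimated spectrum mask


class MultiModalMatchingClassifier(nn.Module):
    """Stage 2: score how the separated audio matches the visual stream via event logits."""
    def __init__(self, freq_bins=256, visual_dim=512, hidden=256, num_events=28):
        super().__init__()
        self.audio_enc = nn.Linear(freq_bins, hidden)
        self.visual_enc = nn.Linear(visual_dim, hidden)
        self.classifier = nn.Linear(2 * hidden, num_events)

    def forward(self, sep_spec, visual_feat):
        # Temporal average pooling before classification (an assumed design choice).
        a = self.audio_enc(sep_spec).mean(dim=1)
        v = self.visual_enc(visual_feat).mean(dim=1)
        return self.classifier(torch.cat([a, v], dim=-1))  # event logits


def joint_loss(mask_pred, mask_true, event_logits, event_labels, alpha=1.0):
    """Combined objective: mask regression loss plus event classification loss."""
    reg = nn.functional.mse_loss(mask_pred, mask_true)
    cls = nn.functional.cross_entropy(event_logits, event_labels)
    return reg + alpha * cls


if __name__ == "__main__":
    # Random tensors, just to show the shapes flowing through both fusion stages.
    B, T, F, V = 4, 20, 256, 512
    mix_spec = torch.rand(B, T, F)
    visual_feat = torch.rand(B, T, V)
    separator = MultiModalSeparator()
    matcher = MultiModalMatchingClassifier()
    mask = separator(mix_spec, visual_feat)        # stage-1 fusion output
    sep_spec = mask * mix_spec                     # masked (separated) spectrum
    logits = matcher(sep_spec, visual_feat)        # stage-2 fusion output
    loss = joint_loss(mask, torch.rand(B, T, F), logits,
                      torch.randint(0, 28, (B,)))
    print(mask.shape, logits.shape, loss.item())
```

The single joint loss mirrors the abstract's point that the regression (separation) and classification (matching) objectives are what link the two modules during training.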

Details

ISSN :
1879-2782
Volume :
133
Database :
OpenAIRE
Journal :
Neural Networks: The Official Journal of the International Neural Network Society
Accession number :
edsair.doi.dedup.....e409f37928cf3d4d05340135f52d4d84