Multimodal Graph for Unaligned Multimodal Sequence Analysis via Graph Convolution and Graph Pooling

Authors :
Sijie Mai
Songlong Xing
Jiaxuan He
Ying Zeng
Haifeng Hu
Source :
ACM Transactions on Multimedia Computing, Communications, and Applications. 19:1-24
Publication Year :
2023
Publisher :
Association for Computing Machinery (ACM), 2023.

Abstract

Multimodal sequence analysis aims to draw inferences from visual, language, and acoustic sequences. Most existing works focus on fusing three aligned modalities to explore inter-modal interactions, which is impractical in real-world scenarios. To overcome this issue, we focus on analyzing unaligned sequences, which is still relatively underexplored and more challenging. We propose Multimodal Graph, whose novelty mainly lies in transforming the sequential learning problem into a graph learning problem. The graph-based structure enables parallel computation along the time dimension (as opposed to recurrent neural networks) and can effectively learn longer-range intra- and inter-modal temporal dependencies in unaligned sequences. First, we propose multiple ways to construct an adjacency matrix for a sequence, performing the sequence-to-graph transformation. To learn intra-modal dynamics, a graph convolutional network is employed for each modality based on the defined adjacency matrix. To learn inter-modal dynamics, since the unimodal sequences are unaligned, the commonly considered word-level fusion is not applicable. To this end, we devise graph pooling algorithms that automatically explore the associations between time slices from different modalities and hierarchically learn high-level graph representations. Multimodal Graph outperforms state-of-the-art models on three datasets under the same experimental setting.
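The abstract outlines a three-step pipeline: sequence-to-graph transformation via an adjacency matrix, per-modality graph convolution for intra-modal dynamics, and graph pooling for hierarchical coarsening. Below is a minimal PyTorch sketch of that pipeline for a single modality. The temporal-window adjacency rule, the top-k scoring pool, and all layer sizes are illustrative assumptions of mine, not the paper's actual design, and the paper's pooling additionally associates time slices across modalities.

```python
# Minimal sketch (NOT the authors' implementation) of: sequence -> graph via
# an adjacency matrix, graph convolution for intra-modal dynamics, then a
# simple pooling step that coarsens the graph. All design choices here
# (window adjacency, top-k pooling, dimensions) are illustrative assumptions.

import torch
import torch.nn as nn

def temporal_adjacency(num_nodes: int, window: int = 2) -> torch.Tensor:
    """One possible adjacency rule: connect time slices within `window` steps
    of each other (self-loops included), then symmetrically normalize."""
    idx = torch.arange(num_nodes)
    adj = ((idx.unsqueeze(0) - idx.unsqueeze(1)).abs() <= window).float()
    deg = adj.sum(dim=1)
    d_inv_sqrt = deg.pow(-0.5)
    # D^{-1/2} A D^{-1/2}
    return d_inv_sqrt.unsqueeze(1) * adj * d_inv_sqrt.unsqueeze(0)

class GraphConv(nn.Module):
    """Plain GCN layer: H' = ReLU(A_hat @ H @ W)."""
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, adj: torch.Tensor, h: torch.Tensor) -> torch.Tensor:
        return torch.relu(adj @ self.linear(h))

class TopKPool(nn.Module):
    """Score each node, keep the top-k, and take the induced subgraph: a
    stand-in for the paper's hierarchical graph pooling."""
    def __init__(self, dim: int, k: int):
        super().__init__()
        self.score = nn.Linear(dim, 1)
        self.k = k

    def forward(self, adj: torch.Tensor, h: torch.Tensor):
        s = self.score(h).squeeze(-1)                        # (num_nodes,)
        keep = s.topk(self.k).indices                        # kept node ids
        h = h[keep] * torch.sigmoid(s[keep]).unsqueeze(-1)   # gate kept nodes
        adj = adj[keep][:, keep]  # subgraph (no longer exactly normalized)
        return adj, h

# Usage: a 30-step sequence with 74-dim features (e.g., acoustic).
seq = torch.randn(30, 74)
adj = temporal_adjacency(30)
h = GraphConv(74, 64)(adj, seq)      # intra-modal dynamics
adj, h = TopKPool(64, k=10)(adj, h)  # hierarchical coarsening
graph_repr = h.mean(dim=0)           # graph-level representation, (64,)
```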

Details

ISSN :
1551-6865 and 1551-6857
Volume :
19
Database :
OpenAIRE
Journal :
ACM Transactions on Multimedia Computing, Communications, and Applications
Accession number :
edsair.doi...........44ee9cd3116d7de47f70ac467fe0c1a0
Full Text :
https://doi.org/10.1145/3542927