
Toward Robust Incomplete Multimodal Sentiment Analysis via Hierarchical Representation Learning

Authors :
Li, Mingcheng
Yang, Dingkang
Liu, Yang
Wang, Shunli
Chen, Jiawei
Wang, Shuaibing
Wei, Jinjie
Jiang, Yue
Xu, Qingyao
Hou, Xiaolu
Sun, Mingyang
Qian, Ziyun
Kou, Dongliang
Zhang, Lihua
Publication Year :
2024

Abstract

Multimodal Sentiment Analysis (MSA) is an important research area that aims to understand and recognize human sentiment through multiple modalities. The complementary information provided by multimodal fusion enables better sentiment analysis than any single modality alone. Nevertheless, in real-world applications, many unavoidable factors can cause modalities to be missing in uncertain ways, hindering multimodal modeling and degrading the model's performance. To this end, we propose a Hierarchical Representation Learning Framework (HRLF) for the MSA task under uncertain missing modalities. Specifically, we propose a fine-grained representation factorization module that extracts valuable sentiment information by factorizing each modality into sentiment-relevant and modality-specific representations through cross-modal translation and sentiment semantic reconstruction. Moreover, a hierarchical mutual information maximization mechanism is introduced to incrementally maximize the mutual information between multi-scale representations, aligning and reconstructing their high-level semantics. Finally, we propose a hierarchical adversarial learning mechanism that further aligns and adapts the latent distribution of sentiment-relevant representations to produce robust joint multimodal representations. Comprehensive experiments on three datasets demonstrate that HRLF significantly improves MSA performance under uncertain modality-missing conditions.

Comment: Accepted by NeurIPS 2024
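The abstract does not spell out how the mutual information maximization is implemented. A common way to instantiate such an objective is an InfoNCE-style contrastive lower bound applied at each representation scale; the sketch below illustrates that general idea only, and all function names, the NumPy implementation, and the hierarchical summation are assumptions for illustration, not details taken from the paper.

```python
import numpy as np

def info_nce_loss(x, y, temperature=0.1):
    """InfoNCE contrastive lower bound on the mutual information between
    paired representations x and y of shape (batch, dim).
    Matched rows (same sample, two views/modalities) are the positives."""
    # L2-normalize so dot products become cosine similarities
    x = x / np.linalg.norm(x, axis=1, keepdims=True)
    y = y / np.linalg.norm(y, axis=1, keepdims=True)
    logits = x @ y.T / temperature                    # (batch, batch) similarities
    logits -= logits.max(axis=1, keepdims=True)       # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # Positives lie on the diagonal; minimizing this loss tightens the MI bound
    return -np.mean(np.diag(log_prob))

def hierarchical_mi_loss(scales_a, scales_b):
    """Sum InfoNCE terms over paired multi-scale representations
    (e.g. low-, mid-, and high-level features of two modalities)."""
    return sum(info_nce_loss(a, b) for a, b in zip(scales_a, scales_b))
```

With this kind of objective, well-aligned representation pairs yield a lower loss than mismatched ones, which is the behavior a hierarchical alignment mechanism would exploit across scales.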

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2411.02793
Document Type :
Working Paper