
WisdoM: Improving Multimodal Sentiment Analysis by Fusing Contextual World Knowledge

Authors:
Wang, Wenbin
Ding, Liang
Shen, Li
Luo, Yong
Hu, Han
Tao, Dacheng
Publication Year:
2024

Abstract

Sentiment analysis is rapidly advancing by utilizing various data modalities (e.g., text, image). However, most previous works rely on superficial information, neglecting contextual world knowledge (e.g., background information derived from, but beyond, the given image-text pairs), which restricts their ability to achieve better multimodal sentiment analysis (MSA). In this paper, we propose a plug-in framework named WisdoM that leverages contextual world knowledge induced from large vision-language models (LVLMs) for enhanced MSA. WisdoM utilizes LVLMs to comprehensively analyze both images and their corresponding texts, generating pertinent context in the process. To reduce noise in the context, we also introduce a training-free contextual fusion mechanism. Experiments across MSA tasks of diverse granularities consistently demonstrate that our approach brings substantial improvements (an average +1.96% F1-score gain across five advanced methods) over several state-of-the-art methods.
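The abstract describes a two-step pipeline: an LVLM first generates contextual world knowledge from the image-text pair, and a training-free fusion step then combines the context-aware sentiment prediction with the original one so that noisy context contributes less. The sketch below illustrates one plausible, confidence-weighted reading of such a fusion step; the helper names and the weighting rule are assumptions for illustration, not the paper's actual mechanism.

```python
# Hedged sketch of the plug-in idea sketched in the abstract: sentiment logits
# obtained with and without LVLM-generated context are blended in a
# training-free way. The confidence-weighted rule below is an illustrative
# assumption, not WisdoM's published fusion mechanism.
import numpy as np

def softmax(logits: np.ndarray) -> np.ndarray:
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()

def fuse_predictions(logits_plain: np.ndarray,
                     logits_with_context: np.ndarray) -> np.ndarray:
    """Weight the context-augmented prediction by its own confidence so that
    low-confidence (likely noisy) context has a smaller influence."""
    p_plain = softmax(logits_plain)
    p_ctx = softmax(logits_with_context)
    alpha = p_ctx.max()  # confidence of the context-aware prediction
    return alpha * p_ctx + (1.0 - alpha) * p_plain

# Usage with made-up logits for a 3-class (negative/neutral/positive) task.
plain = np.array([0.2, 0.1, 0.4])
with_ctx = np.array([0.1, 0.0, 1.5])
print(fuse_predictions(plain, with_ctx))  # fused class probabilities
```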

Details

Database:
arXiv
Publication Type:
Report
Accession Number:
edsarx.2401.06659
Document Type:
Working Paper