VisdaNet: Visual Distillation and Attention Network for Multimodal Sentiment Classification.
- Source :
- Sensors (14248220). Jan 2023, Vol. 23, Issue 2, p661. 20p.
- Publication Year :
- 2023
Abstract
- Sentiment classification is a key task in exploring people's opinions, and improved sentiment classification can help individuals make better decisions. Social media users increasingly express their opinions and share their experiences with both images and text, rather than with text alone as in conventional social media. As a result, understanding how to fully exploit both modalities is critical for a variety of tasks, including sentiment classification. In this work, we propose a new multimodal sentiment classification approach: the visual distillation and attention network, or VisdaNet. First, we propose a knowledge augmentation module that compensates for the limited information in short texts by integrating image captions with the original text. Second, to address the information control problem in multimodal fusion for product reviews, we propose CLIP-based knowledge distillation, which reduces noise in the original modalities and improves the quality of their representations. Finally, for the single-text, multi-image fusion problem in product reviews, we propose CLIP-based visual aspect attention, which correctly models the text-image interaction in this setting and realizes feature-level fusion across modalities. Experimental results on the Yelp multimodal dataset show that our model outperforms the previous SOTA model, and ablation results demonstrate the efficacy of the individual components of the proposed model. [ABSTRACT FROM AUTHOR]
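- Editor's note: the abstract describes a single-text, multi-image fusion step in which the text representation attends over several image representations. The sketch below is a minimal, hypothetical illustration of that idea using precomputed CLIP-style embeddings; the module name, feature dimension, attention form, and classifier head are assumptions made for illustration and do not reflect the paper's actual implementation.

```python
# Illustrative sketch only (not the authors' code): text-guided attention
# over the CLIP embeddings of a review's K images, followed by
# feature-level fusion with the text embedding for sentiment classification.
import torch
import torch.nn as nn

class VisualAspectAttention(nn.Module):
    def __init__(self, dim=512, num_classes=5):
        super().__init__()
        self.query = nn.Linear(dim, dim)   # project the text feature to a query
        self.key = nn.Linear(dim, dim)     # project image features to keys
        self.classifier = nn.Linear(2 * dim, num_classes)

    def forward(self, text_feat, image_feats):
        # text_feat: (B, dim) embedding of the review text (plus captions)
        # image_feats: (B, K, dim) embeddings of the K review images
        q = self.query(text_feat).unsqueeze(1)                # (B, 1, dim)
        k = self.key(image_feats)                             # (B, K, dim)
        scores = q @ k.transpose(1, 2) / k.size(-1) ** 0.5    # (B, 1, K)
        attn = torch.softmax(scores, dim=-1)
        visual = (attn @ image_feats).squeeze(1)              # (B, dim) attended visual summary
        fused = torch.cat([text_feat, visual], dim=-1)        # feature-level fusion
        return self.classifier(fused)

# Example with random tensors standing in for CLIP embeddings
model = VisualAspectAttention()
logits = model(torch.randn(2, 512), torch.randn(2, 3, 512))
print(logits.shape)  # torch.Size([2, 5])
```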
Details
- Language :
- English
- ISSN :
- 14248220
- Volume :
- 23
- Issue :
- 2
- Database :
- Academic Search Index
- Journal :
- Sensors (14248220)
- Publication Type :
- Academic Journal
- Accession number :
- 161560065
- Full Text :
- https://doi.org/10.3390/s23020661