Dual self-attention with co-attention networks for visual question answering.

Authors :
Liu, Yun
Zhang, Xiaoming
Zhang, Qianyun
Li, Chaozhuo
Huang, Feiran
Tang, Xianghong
Li, Zhoujun
Source :
Pattern Recognition. Sep 2021, Vol. 117.
Publication Year :
2021

Abstract

• A novel model based on the self-attention mechanism is proposed to learn more effective multi-modal representations.
• The DSACA model captures both the internal dependencies and the cross-modal correlation between the image and the question sentence.
• Extensive experiments and analysis confirm the superiority of the proposed DSACA.

Visual Question Answering (VQA), an important task in understanding vision and language, has attracted wide interest. Previous VQA methods generally use Convolutional Neural Networks (CNN) and Recurrent Neural Networks (RNN) to extract visual and textual features respectively, and then explore the correlation between these two features to infer the answer. However, CNN mainly focuses on extracting local spatial information, while RNN concentrates on exploiting sequential structure and long-range dependencies; neither readily integrates local features with their global dependencies to learn more effective representations of the image and question. To address this problem, we propose a novel model, Dual Self-Attention with Co-Attention networks (DSACA), for VQA. It models the internal dependencies of the spatial and the sequential structure respectively, using a newly proposed self-attention mechanism. Specifically, DSACA contains three submodules. The visual self-attention module selectively aggregates the visual features at each region as a weighted sum of the features at all positions. The textual self-attention module automatically emphasizes interdependent word features by integrating associated features among the words of the sentence. Finally, the visual-textual co-attention module explores the close correlation between the visual and textual features learned by the self-attention modules. The three modules are integrated into an end-to-end framework to infer the answer. Extensive experiments on three widely used VQA datasets confirm the favorable performance of DSACA compared with state-of-the-art methods. [ABSTRACT FROM AUTHOR]
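To make the "weighted sum of the features at all positions" concrete, the following PyTorch sketch shows a generic self-attention layer of the kind the abstract describes, applicable to either image-region or word features. This is a minimal, hypothetical illustration (the class name SelfAttention and all parameters are our own choices), not the authors' DSACA implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfAttention(nn.Module):
    """Each position is re-expressed as a weighted sum of the
    features at all positions (illustrative sketch only)."""
    def __init__(self, dim):
        super().__init__()
        self.query = nn.Linear(dim, dim)
        self.key = nn.Linear(dim, dim)
        self.value = nn.Linear(dim, dim)
        self.scale = dim ** -0.5

    def forward(self, x):
        # x: (batch, positions, dim) -- image regions or question words
        q, k, v = self.query(x), self.key(x), self.value(x)
        # Pairwise affinities between every pair of positions
        attn = F.softmax(q @ k.transpose(-2, -1) * self.scale, dim=-1)
        # Aggregate: each output is a weighted sum over all positions
        return attn @ v

# Example: 36 image regions with 512-dimensional features
regions = torch.randn(1, 36, 512)
attended = SelfAttention(512)(regions)  # shape: (1, 36, 512)
```

In DSACA, one such module attends over spatial image regions and another over the words of the question; the co-attention module then relates the two resulting representations.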

Details

Language :
English
ISSN :
0031-3203
Volume :
117
Database :
Academic Search Index
Journal :
Pattern Recognition
Publication Type :
Academic Journal
Accession Number :
150699317
Full Text :
https://doi.org/10.1016/j.patcog.2021.107956