
Reciprocal Attention Fusion for Visual Question Answering

Authors :
Farazi, Moshiur R
Khan, Salman H
Source :
Proceedings of the British Machine Vision Conference (250) 2018
Publication Year :
2018

Abstract

Existing attention mechanisms either attend to local image-grid features or object-level features for Visual Question Answering (VQA). Motivated by the observation that questions can relate to both object instances and their parts, we propose a novel attention mechanism that jointly considers reciprocal relationships between the two levels of visual detail. The bottom-up attention thus generated is further coalesced with top-down information to focus only on the scene elements that are most relevant to a given question. Our design hierarchically fuses multi-modal information, i.e., language, object-level and grid-level features, through an efficient tensor decomposition scheme. The proposed model improves the state-of-the-art single-model performance from 67.9% to 68.2% on VQAv1 and from 65.7% to 67.4% on VQAv2, demonstrating a significant boost.
Comment: To appear in the British Machine Vision Conference (BMVC), September 2018
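The abstract's core idea, fusing a question embedding with two levels of visual features (object and grid) through a low-rank bilinear tensor-decomposition scheme before attention pooling, can be illustrated with a minimal NumPy sketch. This is not the authors' implementation; all dimensions, factor matrices, and the simple score function below are hypothetical stand-ins for the paper's actual fusion design.

```python
import numpy as np

rng = np.random.default_rng(0)
d_q, d_v, d_h, rank = 16, 32, 24, 8  # hypothetical embedding sizes and rank

# Low-rank bilinear fusion: approximate a full bilinear tensor product
# by `rank` pairs of factor matrices whose projections are multiplied
# elementwise and summed (Tucker/MUTAN-style decomposition, assumed here).
Wq = rng.standard_normal((rank, d_q, d_h)) * 0.1
Wv = rng.standard_normal((rank, d_v, d_h)) * 0.1

def fuse(q, v):
    # q: (d_q,) question embedding; v: (d_v,) visual feature -> (d_h,)
    return sum((q @ Wq[r]) * (v @ Wv[r]) for r in range(rank))

def attend(q, feats):
    # feats: (n, d_v) region features (object- or grid-level)
    fused = np.stack([fuse(q, f) for f in feats])  # (n, d_h)
    scores = fused.sum(axis=1)                     # toy scalar score per region
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                       # softmax attention weights
    return weights @ feats                         # (d_v,) attended feature

q = rng.standard_normal(d_q)
objects = rng.standard_normal((5, d_v))    # object-level features
grid = rng.standard_normal((49, d_v))      # e.g. a 7x7 grid of features

# Fuse the question with each attended visual level, then combine.
joint = fuse(q, attend(q, objects)) + fuse(q, attend(q, grid))
print(joint.shape)  # (24,)
```

The low-rank factorization keeps the fused representation expressive while avoiding the O(d_q x d_v x d_h) parameter cost of a full bilinear tensor, which is the efficiency argument tensor-decomposition fusion schemes generally make.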

Details

Database :
arXiv
Journal :
Proceedings of the British Machine Vision Conference (250) 2018
Publication Type :
Report
Accession number :
edsarx.1805.04247
Document Type :
Working Paper