
Focus Your Attention: A Bidirectional Focal Attention Network for Image-Text Matching

Authors :
Liu, Chunxiao
Mao, Zhendong
Liu, An-An
Zhang, Tianzhu
Wang, Bin
Zhang, Yongdong
Publication Year :
2019

Abstract

Learning semantic correspondence between image and text is significant as it bridges the semantic gap between vision and language. The key challenge is to accurately find and correlate shared semantics in image and text. Most existing methods achieve this goal by representing the shared semantic as a weighted combination of all the fragments (image regions or text words), where fragments relevant to the shared semantic receive more attention and the rest receive less. However, although relevant fragments contribute more to the shared semantic, irrelevant ones still disturb it to some degree, leading to semantic misalignment in the correlation phase. To address this issue, we present a novel Bidirectional Focal Attention Network (BFAN), which not only attends to relevant fragments but also diverts all attention to those fragments in order to concentrate on them. The main difference from existing works is that they mostly focus on learning attention weights, while our BFAN focuses on eliminating irrelevant fragments from the shared semantic. Focal attention is achieved by pre-assigning attention based on inter-modality relations, identifying relevant fragments based on intra-modality relations, and reassigning attention. Furthermore, focal attention is applied jointly in both the image-to-text and text-to-image directions, which avoids a preference for long text or complex images. Experiments show that our simple but effective framework significantly outperforms the state of the art, with relative Recall@1 gains of 2.2% on both the Flickr30K and MSCOCO benchmarks.

Comment: Accepted by ACMMM2019
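The three-step focal attention described above (pre-assign, identify, reassign) can be sketched for one attention direction. This is a minimal illustration, not the paper's implementation: the cosine-similarity softmax and the mean-attention threshold used to mark fragments as "relevant" are assumptions standing in for the paper's actual inter- and intra-modality scoring functions.

```python
import numpy as np

def focal_attention(query, fragments):
    """Simplified sketch of one direction of focal attention.

    query:     (d,) vector for one fragment in modality A
    fragments: (n, d) matrix of fragments in modality B
    Returns the attended context vector of shape (d,).
    """
    # 1) Pre-assign attention from inter-modality relation
    #    (cosine similarity followed by a softmax).
    q = query / (np.linalg.norm(query) + 1e-8)
    f = fragments / (np.linalg.norm(fragments, axis=1, keepdims=True) + 1e-8)
    sim = f @ q
    attn = np.exp(sim - sim.max())
    attn = attn / attn.sum()

    # 2) Identify relevant fragments via intra-modality relation.
    #    Here a fragment counts as relevant if its attention exceeds
    #    the mean attention -- a hypothetical stand-in for the paper's
    #    scoring function.
    mask = (attn > attn.mean()).astype(float)

    # 3) Reassign: zero out irrelevant fragments and renormalize, so
    #    all attention concentrates on the relevant ones.
    focal = attn * mask
    focal = focal / (focal.sum() + 1e-8)

    return focal @ fragments
```

In a full model this routine would run in both directions (each text word attending over image regions, and each image region attending over text words), with the two resulting similarity scores combined for matching.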

Subjects :
Computer Science - Multimedia

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.1909.11416
Document Type :
Working Paper