
Linguistic Structure Guided Context Modeling for Referring Image Segmentation

Authors:
Hui, Tianrui
Liu, Si
Huang, Shaofei
Li, Guanbin
Yu, Sansi
Zhang, Faxi
Han, Jizhong
Publication Year:
2020

Abstract

Referring image segmentation aims to predict the foreground mask of the object referred to by a natural language sentence. Multimodal context of the sentence is crucial to distinguish the referent from the background. Existing methods either insufficiently or redundantly model the multimodal context. To tackle this problem, we propose a "gather-propagate-distribute" scheme to model multimodal context through cross-modal interaction and implement this scheme as a novel Linguistic Structure guided Context Modeling (LSCM) module. Our LSCM module builds a Dependency Parsing Tree suppressed Word Graph (DPT-WG) which guides all the words to include valid multimodal context of the sentence while excluding disturbing ones, through three steps over the multimodal feature, i.e., gathering, constrained propagation and distributing. Extensive experiments on four benchmarks demonstrate that our method outperforms all previous state-of-the-art methods.

Comment: Accepted by ECCV 2020. Code is available at https://github.com/spyflying/LSCM-Refseg
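To make the three-step "gather-propagate-distribute" scheme concrete, here is a minimal PyTorch sketch of the general idea. The tensor shapes, the dot-product attention used for gathering and distributing, and the precomputed word-graph weights standing in for the DPT-suppressed graph are illustrative assumptions, not the authors' LSCM implementation; see the linked repository for the actual code.

# Minimal sketch of a "gather-propagate-distribute" context-modeling step.
# All module names and shapes here are assumptions for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GatherPropagateDistribute(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.query = nn.Linear(dim, dim)   # projects word features for gathering
        self.key = nn.Linear(dim, dim)     # projects visual features
        self.update = nn.Linear(dim, dim)  # updates word nodes after propagation

    def forward(self, visual, words, word_graph):
        # visual:     (B, HW, C) multimodal feature flattened over spatial positions
        # words:      (B, T, C)  linguistic features, one node per word
        # word_graph: (B, T, T)  non-negative edge weights between words, assumed
        #             precomputed (e.g., suppressed by dependency-tree distances)

        # 1) Gather: each word node collects relevant spatial features via attention.
        attn = torch.einsum('btc,bnc->btn', self.query(words), self.key(visual))
        attn = F.softmax(attn / visual.size(-1) ** 0.5, dim=-1)    # (B, T, HW)
        gathered = torch.einsum('btn,bnc->btc', attn, visual)      # (B, T, C)

        # 2) Constrained propagation: exchange context along the word graph.
        graph = F.normalize(word_graph, p=1, dim=-1)               # row-normalize
        propagated = torch.einsum('bts,bsc->btc', graph, gathered)
        propagated = F.relu(self.update(propagated))

        # 3) Distribute: send refined word context back to spatial positions,
        #    reusing the gathering attention, then fuse with a residual connection.
        distributed = torch.einsum('btn,btc->bnc', attn, propagated)
        return visual + distributed


if __name__ == '__main__':
    B, HW, T, C = 2, 196, 12, 64
    module = GatherPropagateDistribute(C)
    out = module(torch.randn(B, HW, C), torch.randn(B, T, C), torch.rand(B, T, T))
    print(out.shape)  # torch.Size([2, 196, 64])

Row-normalizing the word graph before propagation keeps each word's updated context a weighted combination of its neighbors, which loosely mirrors the constrained propagation the abstract describes.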

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2010.00515
Document Type:
Working Paper