
Learning Language to Symbol and Language to Vision Mapping for Visual Grounding

Authors :
Su He
Xiaofeng Yang
Guosheng Lin
School of Computer Science and Engineering, Nanyang Technological University
Source :
SSRN Electronic Journal.
Publication Year :
2021
Publisher :
Elsevier BV, 2021.

Abstract

Visual Grounding (VG) is the task of locating, in an image, the object that semantically matches a given linguistic expression. The task poses two challenges: mapping linguistic content to visual content, and understanding diverse linguistic expressions. In recent years, deep visual features have consistently improved visual grounding performance. While deep visual features carry rich information, they can also be noisy, biased, and prone to over-fitting. In contrast, symbolic features are discrete, easy to map, and usually less noisy. In this work, we propose a novel modular network that learns to match both an object's symbolic features and its conventional visual features against the linguistic information. In addition, a Residual Attention Parser is designed to ease the difficulty of understanding diverse expressions. Our model achieves competitive performance on three popular VG datasets.

Funding: This research is supported by the National Research Foundation (NRF), Singapore, under its AI Singapore Programme (AISG Award No: AISG-RP-2018-003), and by the Ministry of Education (MOE) AcRF Tier-1 research grants RG28/18 (S), RG22/19 (S) and RG95/20. (Submitted/Accepted version.)
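To make the core idea concrete, below is a minimal sketch (not the authors' implementation) of scoring candidate objects by matching both a symbolic feature stream and a visual feature stream against a language embedding. All module names, dimensions, and the score-summing fusion are illustrative assumptions; the paper's actual architecture, including the Residual Attention Parser, may differ.

```python
# Hypothetical sketch: dual language-to-symbol and language-to-vision matching.
# Assumed design, not the paper's actual model.
import torch
import torch.nn as nn


class DualMatchingScorer(nn.Module):
    def __init__(self, num_symbols=100, sym_dim=64,
                 vis_dim=2048, lang_dim=512, hid_dim=256):
        super().__init__()
        # Symbolic features (e.g. predicted category IDs) are discrete,
        # so they are embedded before matching.
        self.sym_embed = nn.Embedding(num_symbols, sym_dim)
        self.sym_proj = nn.Linear(sym_dim, hid_dim)
        self.vis_proj = nn.Linear(vis_dim, hid_dim)
        self.lang_proj = nn.Linear(lang_dim, hid_dim)

    def forward(self, symbols, visual, language):
        # symbols:  (B, N) integer symbol IDs for N candidate objects
        # visual:   (B, N, vis_dim) deep visual features per object
        # language: (B, lang_dim) embedding of the referring expression
        q = self.lang_proj(language).unsqueeze(1)            # (B, 1, H)
        sym = self.sym_proj(self.sym_embed(symbols))         # (B, N, H)
        vis = self.vis_proj(visual)                          # (B, N, H)
        # Cosine similarity between the expression and each stream.
        sym_score = torch.cosine_similarity(q, sym, dim=-1)  # (B, N)
        vis_score = torch.cosine_similarity(q, vis, dim=-1)  # (B, N)
        # Sum the two matching scores; the top-scoring object is grounded.
        return sym_score + vis_score


scorer = DualMatchingScorer()
scores = scorer(torch.randint(0, 100, (2, 8)),
                torch.randn(2, 8, 2048),
                torch.randn(2, 512))
pred = scores.argmax(dim=-1)  # index of the grounded object per example
```

The motivation for the two streams follows the abstract: the discrete symbolic branch is less noisy and easier to map, while the visual branch keeps the rich appearance information that symbols alone would discard.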

Details

ISSN :
1556-5068
Database :
OpenAIRE
Journal :
SSRN Electronic Journal
Accession number :
edsair.doi.dedup.....1d0041012284bf8c81a82df6d35c526b
Full Text :
https://doi.org/10.2139/ssrn.3989572