
Localize, Group, and Select: Boosting Text-VQA by Scene Text Modeling

Authors :
Lu, Xiaopeng
Fan, Zhen
Wang, Yansen
Oh, Jean
Rose, Carolyn P.
Publication Year :
2021

Abstract

As an important task in multimodal context understanding, Text-VQA (Visual Question Answering) aims at question answering through reading text information in images. It differs from the original VQA task in that Text-VQA requires extensive understanding of scene-text relationships, in addition to cross-modal grounding capability. In this paper, we propose Localize, Group, and Select (LOGOS), a novel model which attempts to tackle this problem from multiple aspects. LOGOS leverages two grounding tasks to better localize the key information of the image, utilizes scene text clustering to group individual OCR tokens, and learns to select the best answer from different sources of OCR (Optical Character Recognition) texts. Experiments show that LOGOS outperforms previous state-of-the-art methods on two Text-VQA benchmarks without using additional OCR annotation data. Ablation studies and analysis demonstrate the capability of LOGOS to bridge different modalities and better understand scene text.

Comment: 9 pages
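The abstract does not specify how LOGOS clusters scene text, but to make the "group individual OCR tokens" step concrete, here is a minimal sketch of one plausible approach: grouping tokens whose bounding boxes are spatially close, using union-find. The OCRToken type, the height-scaled distance threshold, and the greedy merging are illustrative assumptions, not the paper's actual method.

```python
from dataclasses import dataclass

@dataclass
class OCRToken:
    # Hypothetical token type: recognized text plus its box center and height
    text: str
    x: float
    y: float
    h: float  # box height, used to scale the merge threshold

def group_tokens(tokens, scale=1.5):
    """Group OCR tokens whose centers lie within `scale` * mean-height of
    each other, via union-find. Illustrative only; not LOGOS's algorithm."""
    parent = list(range(len(tokens)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i

    def union(i, j):
        parent[find(i)] = find(j)

    # Merge every pair of tokens that is close in both x and y
    for i in range(len(tokens)):
        for j in range(i + 1, len(tokens)):
            a, b = tokens[i], tokens[j]
            thresh = scale * (a.h + b.h) / 2
            if abs(a.x - b.x) <= thresh and abs(a.y - b.y) <= thresh:
                union(i, j)

    # Collect the texts of each connected component
    groups = {}
    for i, tok in enumerate(tokens):
        groups.setdefault(find(i), []).append(tok.text)
    return list(groups.values())

if __name__ == "__main__":
    tokens = [OCRToken("MAIN", 10, 10, 4), OCRToken("STREET", 15, 10, 4),
              OCRToken("EXIT", 90, 80, 5)]
    print(group_tokens(tokens))  # [['MAIN', 'STREET'], ['EXIT']]
```

Grouped tokens like these could then serve as candidate multi-word answers, which is one way the "select the best answer from different sources of OCR texts" step could consume them.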

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2108.08965
Document Type :
Working Paper