
VQA Therapy: Exploring Answer Differences by Visually Grounding Answers

Authors :
Chen, Chongyan
Anjum, Samreen
Gurari, Danna
Publication Year :
2023

Abstract

Visual question answering is the task of predicting the answer to a question about an image. Given that different people can provide different answers to a visual question, we aim to better understand why by collecting answer groundings. We introduce the first dataset that visually grounds each unique answer to each visual question, which we call VQAAnswerTherapy. We then propose two novel problems: predicting whether a visual question has a single answer grounding, and localizing all answer groundings. We benchmark modern algorithms on these novel problems to show where they succeed and struggle. The dataset and evaluation server are publicly available at https://vizwiz.org/tasks-and-datasets/vqa-answer-therapy/.

Comment: IEEE/CVF International Conference on Computer Vision

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2308.11662
Document Type :
Working Paper