
Evaluating Image Review Ability of Vision Language Models

Authors :
Saito, Shigeki
Hayashi, Kazuki
Ide, Yusuke
Sakai, Yusuke
Onishi, Kazuma
Suzuki, Toma
Gobara, Seiji
Kamigaito, Hidetaka
Hayashi, Katsuhiko
Watanabe, Taro
Publication Year :
2024

Abstract

Large-scale vision language models (LVLMs) are language models capable of processing both image and text inputs within a single model. This paper explores the use of LVLMs to generate review texts for images. The ability of LVLMs to review images is not fully understood, highlighting the need for a methodical evaluation of their review abilities. Unlike image captions, review texts can be written from various perspectives, such as image composition and exposure. This diversity of review perspectives makes it difficult to uniquely determine a single correct review for an image. To address this challenge, we introduce an evaluation method based on rank correlation analysis, in which review texts are ranked by humans and LVLMs, and the correlation between these rankings is then measured. We further validate this approach by creating a benchmark dataset aimed at assessing the image review ability of recent LVLMs. Our experiments with the dataset reveal that LVLMs, particularly those with proven superiority in other evaluative contexts, excel at distinguishing between high-quality and substandard image reviews.

Comment: 9 pages, under review
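The rank correlation analysis the abstract describes can be illustrated with a minimal sketch. This is not the paper's implementation; it simply computes Spearman's rank correlation (one common choice, assuming rankings without ties) between a hypothetical human ranking and a hypothetical LVLM ranking of the same candidate review texts:

```python
# Hypothetical illustration: agreement between a human ranking and an
# LVLM ranking of the same review texts, via Spearman rank correlation.

def spearman_rho(rank_a, rank_b):
    """Spearman's rho for two rankings of the same items (no ties)."""
    n = len(rank_a)
    # Sum of squared rank differences for each item.
    d2 = sum((a - b) ** 2 for a, b in zip(rank_a, rank_b))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Example: five review texts ranked by a human and an LVLM (1 = best).
human = [1, 2, 3, 4, 5]
lvlm = [2, 1, 3, 5, 4]
print(spearman_rho(human, lvlm))  # prints 0.8 (high agreement)
```

A correlation near 1 indicates the model orders reviews much as humans do; a value near 0 or below suggests it cannot distinguish good reviews from poor ones.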

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2402.12121
Document Type :
Working Paper