
How (not) to ensemble LVLMs for VQA

Authors:
Alazraki, Lisa
Castrejon, Lluis
Dehghani, Mostafa
Huot, Fantine
Uijlings, Jasper
Mensink, Thomas
Publication Year:
2023

Abstract

This paper studies ensembling in the era of Large Vision-Language Models (LVLMs). Ensembling is a classical method for combining different models to obtain increased performance. In the recent work on Encyclopedic-VQA, the authors examine a wide variety of models to solve their task: from vanilla LVLMs, to models that include the caption as extra context, to models augmented with Lens-based retrieval of Wikipedia pages. Intuitively, these models are highly complementary, which should make them ideal for ensembling. Indeed, an oracle experiment shows potential gains from 48.8% accuracy (the best single model) all the way up to 67% (the best possible ensemble). So it should be a trivial exercise to create an ensemble with substantial real gains. Or is it?

Comment: 4th I Can't Believe It's Not Better Workshop (co-located with NeurIPS 2023)
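
The oracle figure quoted in the abstract refers to an upper bound in which an example counts as correct if any model in the ensemble answers it correctly. Below is a minimal sketch of how such a bound is typically computed; it is illustrative only, and the array shapes, model count, and accuracies are assumptions rather than the paper's data.

```python
import numpy as np

def single_model_accuracies(correct: np.ndarray) -> np.ndarray:
    # Per-model accuracy; `correct` has shape (num_models, num_examples)
    # and holds booleans indicating whether each model answered each example correctly.
    return correct.mean(axis=1)

def oracle_ensemble_accuracy(correct: np.ndarray) -> float:
    # Oracle upper bound: an example is solved if ANY model answers it correctly,
    # i.e. a perfect selector always picks a correct model when one exists.
    return float(correct.any(axis=0).mean())

# Toy usage with synthetic per-example correctness (hypothetical, not the paper's data).
rng = np.random.default_rng(0)
correct = rng.random((5, 1000)) < 0.45   # 5 hypothetical models, each ~45% accurate
print("single-model accuracies:", single_model_accuracies(correct))
print("oracle ensemble accuracy:", oracle_ensemble_accuracy(correct))
```

The gap between the best single-model accuracy and the oracle number is what a real ensembling strategy would have to close; the paper's question is how much of that gap is achievable in practice.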

Details

Database:
arXiv
Publication Type:
Report
Accession Number:
edsarx.2310.06641
Document Type:
Working Paper