
Renaissance: Investigating the Pretraining of Vision-Language Encoders

Authors:
Fields, Clayton
Kennington, Casey
Publication Year:
2024

Abstract

In the past several years, there has been an explosion of available models for vision-language tasks. Unfortunately, the literature still leaves open a number of questions related to best practices in designing and training such models. In this paper, we seek to answer several questions related to the pretraining of vision-language encoders through meta-analysis. In our first set of experiments, we show that freezing large parts of vision-language models during pretraining saves significant compute at no cost to downstream performance. In our second set of experiments, we examine the effect of basing a VL transformer on a vision model versus a text model. Additionally, we introduce a VL modeling platform called Renaissance that we use to conduct all of the experiments. This program offers a great deal of flexibility in creating, training, and evaluating transformer encoders for VL modeling. The source code for Renaissance can be found at https://github.com/bsu-slim/renaissance.
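To make the freezing strategy described above concrete, the following is a minimal sketch of how parameters of unimodal sub-encoders can be excluded from gradient updates during pretraining. The module names (`vision_encoder`, `text_encoder`, `fusion_head`) and the toy architecture are hypothetical placeholders for illustration, not the actual Renaissance API.

```python
# Minimal sketch: freeze the unimodal towers of a vision-language encoder
# so that only the remaining (fusion) parameters are trained.
# Assumes PyTorch; the architecture below is illustrative only.

import torch
import torch.nn as nn


class ToyVLEncoder(nn.Module):
    """Illustrative vision-language encoder with two unimodal towers."""

    def __init__(self, dim: int = 256):
        super().__init__()
        self.vision_encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True),
            num_layers=4,
        )
        self.text_encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True),
            num_layers=4,
        )
        self.fusion_head = nn.Linear(2 * dim, dim)

    def forward(self, image_tokens: torch.Tensor, text_tokens: torch.Tensor) -> torch.Tensor:
        v = self.vision_encoder(image_tokens).mean(dim=1)  # pool vision tokens
        t = self.text_encoder(text_tokens).mean(dim=1)     # pool text tokens
        return self.fusion_head(torch.cat([v, t], dim=-1))


def freeze_module(module: nn.Module) -> None:
    """Exclude a module's parameters from optimization."""
    for p in module.parameters():
        p.requires_grad = False


model = ToyVLEncoder()

# Freeze the large unimodal towers; only the fusion head receives gradients.
freeze_module(model.vision_encoder)
freeze_module(model.text_encoder)

# Pass only trainable parameters to the optimizer, reducing compute and
# optimizer-state memory during pretraining.
optimizer = torch.optim.AdamW(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4
)
```

In practice, which parts are frozen (the vision tower, the text tower, or both) is the experimental variable; the mechanism of setting `requires_grad = False` and filtering the optimizer's parameter list stays the same.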

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2411.06657
Document Type:
Working Paper