
Fine-Tuning CLIP's Last Visual Projector: A Few-Shot Cornucopia

Authors:
Fahes, Mohammad
Vu, Tuan-Hung
Bursuc, Andrei
Pérez, Patrick
de Charette, Raoul
Publication Year:
2024

Abstract

We consider the problem of adapting a contrastively pretrained vision-language model like CLIP (Radford et al., 2021) for few-shot classification. The literature addresses this problem by learning a linear classifier on top of the frozen visual features, optimizing word embeddings, or learning external feature adapters. This paper introduces an alternative way to adapt CLIP without adding 'external' parameters to optimize. We find that simply fine-tuning the last projection matrix of the vision encoder leads to better performance than all baselines. Furthermore, we show that regularizing training with the distance between the fine-tuned and pretrained matrices makes CLIP adaptation more reliable. This simple approach, coined ProLIP, yields state-of-the-art performance on 11 few-shot classification benchmarks, as well as on few-shot domain generalization, cross-dataset transfer, base-to-new class generalization, and test-time adaptation. Code will be made available at: https://github.com/astra-vision/ProLIP.
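
The recipe described in the abstract is simple enough to sketch in a few lines of PyTorch. Below is a minimal illustration, assuming OpenAI's `clip` package, where the last visual projection is the parameter `model.visual.proj`; the squared-Frobenius form of the regularizer, the hyperparameter values, and the optimizer choice are assumptions for illustration, not necessarily the authors' exact setup.

```python
import torch
import torch.nn.functional as F
import clip  # OpenAI CLIP: pip install git+https://github.com/openai/CLIP.git

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/16", device=device)
model.float()  # cast to fp32 so plain SGD on the projection is stable

# Freeze everything, then unfreeze only the last visual projection matrix.
for p in model.parameters():
    p.requires_grad_(False)
proj = model.visual.proj            # shape: [width, embed_dim]
proj.requires_grad_(True)
proj_init = proj.detach().clone()   # frozen copy of the pretrained matrix

optimizer = torch.optim.SGD([proj], lr=1e-4)
lam = 1.0  # regularization weight (assumed value; a tunable hyperparameter)

# Precompute class-name text embeddings, e.g. with a standard prompt template
# (class names here are hypothetical placeholders).
classnames = ["cat", "dog"]
tokens = clip.tokenize([f"a photo of a {c}." for c in classnames]).to(device)
with torch.no_grad():
    text_features = model.encode_text(tokens)
    text_features = text_features / text_features.norm(dim=-1, keepdim=True)

def train_step(images, labels):
    """One few-shot training step on a batch of (image, label) pairs."""
    image_features = model.encode_image(images)
    image_features = image_features / image_features.norm(dim=-1, keepdim=True)
    logits = model.logit_scale.exp() * image_features @ text_features.t()
    ce = F.cross_entropy(logits, labels)
    # Penalize drift of the fine-tuned projection from the pretrained one.
    reg = lam * (proj - proj_init).pow(2).sum()
    loss = ce + reg
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Under this reading, the regularizer interpolates between zero-shot CLIP (large `lam` pins the projection to its pretrained value) and unconstrained fine-tuning (`lam = 0`), which is one plausible mechanism behind the reliability the abstract attributes to it. At inference, nothing changes except the swapped-in projection matrix, so the model keeps CLIP's usual zero-shot classification interface.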

Details

Database:
arXiv
Publication Type:
Report
Accession Number:
edsarx.2410.05270
Document Type:
Working Paper