
Understanding Why ViT Trains Badly on Small Datasets: An Intuitive Perspective

Authors:
Zhu, Haoran
Chen, Boyuan
Yang, Carter
Publication Year:
2023

Abstract

Vision transformer (ViT) is an attention-based neural network architecture that has been shown to be effective for computer vision tasks. However, compared to ResNet-18 with a similar number of parameters, ViT achieves significantly lower evaluation accuracy when trained on small datasets. To facilitate studies in related fields, we provide a visual intuition to help understand why this is the case. We first compare the performance of the two models and confirm that ViT attains lower accuracy than ResNet-18 when trained on small datasets. We then interpret the results with attention map visualizations for ViT and feature map visualizations for ResNet-18. The difference is further analyzed from a representation similarity perspective. We conclude that the representations learned by ViT on small datasets differ substantially from those learned by ViT on large datasets, which may explain the large performance drop on small datasets.
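
The attention map visualization the abstract mentions can be sketched in a few lines of PyTorch. The toy random tokens, the ViT-Tiny-like dimensions (14x14 patch grid plus one CLS token), and the use of torch.nn.MultiheadAttention are illustrative assumptions, not the paper's exact pipeline:

```python
import torch

# Minimal sketch: extract a CLS-token attention map from one self-attention layer.
# The random tokens below stand in for real intermediate ViT features.
embed_dim, num_heads, n_patches = 192, 3, 196
tokens = torch.randn(1, 1 + n_patches, embed_dim)  # (batch, 197, 192)

attn = torch.nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
attn.eval()
with torch.no_grad():
    _, weights = attn(tokens, tokens, tokens, need_weights=True)  # (1, 197, 197)

# Row 0 is the CLS query; columns 1: are the patch keys. Reshaping to the
# 14x14 patch grid gives a map that can be shown with plt.imshow(cls_attn_map).
cls_attn_map = weights[0, 0, 1:].reshape(14, 14)
```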
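
For the ResNet-18 side, intermediate feature maps are typically captured with forward hooks. A minimal sketch using torchvision's resnet18 follows; the untrained weights and dummy input are assumptions for illustration, so a trained checkpoint and a real image would be needed to reproduce meaningful visualizations:

```python
import torch
from torchvision.models import resnet18

# Minimal sketch: capture an intermediate feature map with a forward hook.
model = resnet18(weights=None).eval()

feats = {}
handle = model.layer1.register_forward_hook(
    lambda module, inputs, output: feats.update(layer1=output.detach())
)

with torch.no_grad():
    model(torch.randn(1, 3, 224, 224))  # dummy image batch
handle.remove()

# feats["layer1"] has shape (1, 64, 56, 56): 64 channels, each a 56x56
# feature map that can be plotted individually as a grayscale image.
```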
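
The "representation similarity perspective" is not specified in the abstract; a common choice for this kind of comparison is linear centered kernel alignment (CKA, Kornblith et al., 2019), shown below as an assumed stand-in. It scores two feature matrices computed on the same inputs between 0 (unrelated) and 1 (identical up to rotation and scale):

```python
import numpy as np

def linear_cka(X, Y):
    """Linear CKA between feature matrices X (n, d1) and Y (n, d2)
    computed on the same n inputs."""
    X = X - X.mean(axis=0, keepdims=True)  # center each feature dimension
    Y = Y - Y.mean(axis=0, keepdims=True)
    # ||Y^T X||_F^2 / (||X^T X||_F * ||Y^T Y||_F)
    return (np.linalg.norm(Y.T @ X, "fro") ** 2
            / (np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro")))

# Example: two unrelated random representations of 512 samples score near 0,
# while linear_cka(a, a) returns exactly 1.
a, b = np.random.randn(512, 192), np.random.randn(512, 64)
print(linear_cka(a, b))
```

Computing this score between corresponding layers of two models (e.g. ViT trained on a small dataset versus a large one) yields the kind of layer-by-layer similarity comparison the abstract describes.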

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2302.03751
Document Type:
Working Paper