
Pretrained ViTs Yield Versatile Representations For Medical Images

Authors :
Matsoukas, Christos
Haslum, Johan Fredin
Söderberg, Magnus
Smith, Kevin
Publication Year :
2023

Abstract

Convolutional Neural Networks (CNNs) have reigned for a decade as the de facto approach to automated medical image diagnosis, pushing the state-of-the-art in classification, detection, and segmentation tasks. In recent years, vision transformers (ViTs) have emerged as a competitive alternative to CNNs, yielding impressive levels of performance in the natural image domain while possessing several interesting properties that could prove beneficial for medical imaging tasks. In this work, we explore the benefits and drawbacks of transformer-based models for medical image classification. We conduct a series of experiments on several standard 2D medical image benchmark datasets and tasks. Our findings show that, while CNNs perform better if trained from scratch, off-the-shelf vision transformers can perform on par with CNNs when pretrained on ImageNet, both in a supervised and a self-supervised setting, rendering them a viable alternative to CNNs.

Comment: Extended version of arXiv:2108.09038, originally published at the ICCV 2021 Workshop on Computer Vision for Automated Medical Diagnosis

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2303.07034
Document Type :
Working Paper