
Swin MAE: Masked Autoencoders for Small Datasets

Authors :
Xu, Zi'an
Dai, Yin
Liu, Fayu
Chen, Weibing
Liu, Yue
Shi, Lifu
Liu, Sheng
Zhou, Yuhang
Publication Year :
2022

Abstract

The development of deep learning models in medical image analysis is largely limited by the lack of large, well-annotated datasets. Unsupervised learning requires no labels and is therefore better suited to medical image analysis problems. However, most current unsupervised learning methods still need to be applied to large datasets. To make unsupervised learning applicable to small datasets, we propose Swin MAE, a masked autoencoder with a Swin Transformer backbone. Even on a dataset of only a few thousand medical images, and without any pre-trained models, Swin MAE can learn useful semantic features purely from images. On downstream transfer learning tasks, it equals or even slightly outperforms a supervised Swin Transformer trained on ImageNet. The code is publicly available at https://github.com/Zian-Xu/Swin-MAE.
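The core idea of a masked autoencoder is to hide a large fraction of image patches and train the model to reconstruct them, so that useful features are learned without labels. A minimal sketch of the per-image random masking step is shown below; this is an illustrative example, not the paper's implementation. The function name `random_masking` and the 75% mask ratio are assumptions (75% follows the original MAE paper).

```python
import random

def random_masking(num_patches, mask_ratio=0.75, seed=None):
    """Split patch indices into a visible set and a masked set,
    as in MAE-style self-supervised pretraining.

    num_patches: total patches per image (e.g. 14x14 = 196)
    mask_ratio: fraction of patches hidden from the encoder
    """
    rng = random.Random(seed)
    num_keep = int(num_patches * (1 - mask_ratio))
    indices = list(range(num_patches))
    rng.shuffle(indices)  # independent random permutation per image
    keep = sorted(indices[:num_keep])      # patches the encoder sees
    masked = sorted(indices[num_keep:])    # patches the decoder must reconstruct
    return keep, masked

keep, masked = random_masking(196, mask_ratio=0.75, seed=0)
print(len(keep), len(masked))  # 49 visible patches, 147 masked
```

In the full model, only the visible patches are fed to the encoder, and the reconstruction loss is computed on the masked patches; the high mask ratio is what makes the pretext task hard enough to force semantic feature learning.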

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2212.13805
Document Type :
Working Paper