
MobileUtr: Revisiting the relationship between light-weight CNN and Transformer for efficient medical image segmentation

Authors:
Tang, Fenghe
Nian, Bingkun
Ding, Jianrui
Quan, Quan
Yang, Jie
Liu, Wei
Zhou, S. Kevin
Publication Year:
2023

Abstract

Due to the scarcity of medical images and their specific imaging characteristics, light-weighting Vision Transformers (ViTs) for efficient medical image segmentation is a significant challenge, and current studies have not yet paid attention to this issue. This work revisits the relationship between CNNs and Transformers in lightweight universal networks for medical image segmentation, aiming to integrate the advantages of both worlds at the infrastructure design level. To leverage the inductive bias inherent in CNNs, we abstract a Transformer-like lightweight CNN block (ConvUtr) as the patch embedding of ViTs, feeding the Transformer with denoised, non-redundant, and highly condensed semantic information. Moreover, an adaptive Local-Global-Local (LGL) block is introduced to facilitate efficient local-to-global information flow, maximizing the Transformer's ability to extract global context. Finally, we build an efficient medical image segmentation model (MobileUtr) based on CNN and Transformer. Extensive experiments on five public medical image datasets spanning three modalities demonstrate the superiority of MobileUtr over state-of-the-art methods, while boasting lighter weights and lower computational cost. Code is available at https://github.com/FengheTan9/MobileUtr.

Comment: 13 pages
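To make the hybrid design described above concrete, the following is a minimal PyTorch sketch of the general idea: a lightweight convolutional stem produces condensed patch embeddings, which a Transformer encoder then refines with global attention before a simple upsampling head predicts a segmentation map. All module names (ConvPatchEmbed, HybridSegNet), channel sizes, depths, and the decoder are assumptions for illustration only; they are not the authors' ConvUtr or LGL implementations, which are available at the linked repository.

```python
# Hypothetical sketch of a CNN-stem + Transformer-encoder segmentation model.
# Block internals and hyperparameters are assumptions, not MobileUtr's design.
import torch
import torch.nn as nn


class ConvPatchEmbed(nn.Module):
    """Lightweight CNN stem: cheap (depthwise-separable) convolutions that
    downsample the image and emit a condensed token sequence (assumption)."""

    def __init__(self, in_ch=3, embed_dim=128):
        super().__init__()
        self.stem = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, stride=2, padding=1),
            nn.BatchNorm2d(32),
            nn.GELU(),
            # depthwise + pointwise convs: inexpensive local feature mixing
            nn.Conv2d(32, 32, 3, stride=2, padding=1, groups=32),
            nn.Conv2d(32, embed_dim, 1),
            nn.BatchNorm2d(embed_dim),
            nn.GELU(),
        )

    def forward(self, x):
        x = self.stem(x)                       # (B, C, H/4, W/4)
        B, C, H, W = x.shape
        tokens = x.flatten(2).transpose(1, 2)  # (B, H*W, C) token sequence
        return tokens, (H, W)


class HybridSegNet(nn.Module):
    """CNN stem -> Transformer encoder -> upsampling head (sketch only)."""

    def __init__(self, num_classes=1, embed_dim=128, depth=4, heads=4):
        super().__init__()
        self.embed = ConvPatchEmbed(embed_dim=embed_dim)
        layer = nn.TransformerEncoderLayer(
            d_model=embed_dim, nhead=heads, dim_feedforward=embed_dim * 2,
            batch_first=True, norm_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        self.head = nn.Conv2d(embed_dim, num_classes, 1)

    def forward(self, x):
        tokens, (H, W) = self.embed(x)
        tokens = self.encoder(tokens)          # global context via attention
        feat = tokens.transpose(1, 2).reshape(-1, tokens.shape[-1], H, W)
        feat = nn.functional.interpolate(
            feat, scale_factor=4, mode="bilinear", align_corners=False)
        return self.head(feat)                 # per-pixel logits


if __name__ == "__main__":
    model = HybridSegNet(num_classes=1)
    out = model(torch.randn(1, 3, 256, 256))
    print(out.shape)  # torch.Size([1, 1, 256, 256])
```

The point of the sketch is the division of labor the abstract argues for: convolutions supply inductive bias and strip redundancy before tokens reach the Transformer, which then only has to model global context over a short, semantically dense sequence.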

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2312.01740
Document Type:
Working Paper