
Compute-Efficient Medical Image Classification with Softmax-Free Transformers and Sequence Normalization

Authors:
Khader, Firas
Nahhas, Omar S. M. El
Han, Tianyu
Müller-Franzes, Gustav
Nebelung, Sven
Kather, Jakob Nikolas
Truhn, Daniel
Publication Year: 2024

Abstract

The Transformer model has been pivotal in advancing fields such as natural language processing, speech recognition, and computer vision. However, a critical limitation of this model is its quadratic computational and memory complexity relative to the sequence length, which constrains its application to longer sequences. This is especially crucial in medical imaging, where high-resolution images can reach gigapixel scale. Efforts to address this issue have predominantly focused on complex techniques, such as decomposing the softmax operation integral to the Transformer's architecture. This paper addresses the quadratic computational complexity of Transformer models and introduces a remarkably simple and effective method that circumvents this issue by eliminating the softmax function from the attention mechanism and adopting a sequence normalization technique for the key, query, and value tokens. Coupled with a reordering of the matrix multiplications, this approach reduces the memory and compute complexity to a linear scale. We evaluate this approach across various medical imaging datasets comprising fundoscopic, dermoscopic, radiologic, and histologic imaging data. Our findings highlight that these models exhibit performance comparable to traditional Transformer models while efficiently handling longer sequences.
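The abstract's core idea can be sketched briefly: once the softmax is removed, the attention product can be reassociated as q @ (kᵀ @ v), so the (seq_len × seq_len) attention matrix is never formed and cost grows linearly with sequence length. The snippet below is a minimal PyTorch sketch of this reordering; the function name `softmax_free_attention` and the choice of L2 normalization along the sequence axis are illustrative assumptions, not necessarily the paper's exact formulation.

```python
import torch

def softmax_free_attention(q, k, v, eps=1e-6):
    """Softmax-free attention with sequence normalization (illustrative sketch).

    q, k, v: tensors of shape (batch, seq_len, dim).
    Without the softmax, the product reassociates as q @ (k^T @ v),
    so memory and compute scale linearly in seq_len, not quadratically.
    """
    # Sequence normalization: normalize each feature channel across the
    # sequence axis (assumption: L2 norm; the paper's scheme may differ).
    q = q / (q.norm(dim=1, keepdim=True) + eps)
    k = k / (k.norm(dim=1, keepdim=True) + eps)
    v = v / (v.norm(dim=1, keepdim=True) + eps)

    # Reordered multiplication: build a small (dim x dim) context matrix
    # first, never materializing the (seq_len x seq_len) attention map.
    context = k.transpose(1, 2) @ v   # (batch, dim, dim)
    return q @ context                # (batch, seq_len, dim)

# Usage: cost is linear in seq_len, so long token sequences (e.g. from
# high-resolution medical images) remain tractable.
q = k = v = torch.randn(2, 4096, 64)
out = softmax_free_attention(q, k, v)
print(out.shape)  # torch.Size([2, 4096, 64])
```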

Details

Database: arXiv
Publication Type: Report
Accession number: edsarx.2406.01314
Document Type: Working Paper