
Are the Latent Representations of Foundation Models for Pathology Invariant to Rotation?

Authors:
Elphick, Matouš
Turajlic, Samra
Yang, Guang
Publication Year:
2024

Abstract

Self-supervised foundation models for digital pathology encode small patches from H&E whole slide images into latent representations used for downstream tasks. However, the invariance of these representations to patch rotation remains unexplored. This study investigates the rotational invariance of latent representations across twelve foundation models by quantifying the alignment between non-rotated and rotated patches using mutual k-nearest neighbours and cosine distance. Models that incorporated rotation augmentation during self-supervised training exhibited significantly greater invariance to rotations. We hypothesise that the absence of rotational inductive bias in the transformer architecture necessitates rotation augmentation during training to achieve learned invariance.

Code: https://github.com/MatousE/rot-invariance-analysis

Comment: Samra Turajlic and Guang Yang are joint last authors
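Since the abstract names the two alignment measures, here is a minimal sketch of how they might be computed. It assumes (n_patches, d) NumPy arrays of latent representations for the non-rotated and rotated patches; the function names, the exact mutual k-NN formulation (fraction of shared neighbours between the two k-NN graphs), and the random stand-in data are illustrative assumptions, not the authors' implementation (see the linked repository for that).

```python
import numpy as np

def cosine_distance(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Row-wise cosine distance between paired embeddings a[i] and b[i]."""
    a_n = a / np.linalg.norm(a, axis=1, keepdims=True)
    b_n = b / np.linalg.norm(b, axis=1, keepdims=True)
    return 1.0 - np.sum(a_n * b_n, axis=1)

def knn_indices(x: np.ndarray, k: int) -> np.ndarray:
    """Indices of each row's k nearest neighbours under cosine distance."""
    x_n = x / np.linalg.norm(x, axis=1, keepdims=True)
    d = 1.0 - x_n @ x_n.T
    np.fill_diagonal(d, np.inf)           # exclude self-matches
    return np.argsort(d, axis=1)[:, :k]

def mutual_knn_overlap(a: np.ndarray, b: np.ndarray, k: int = 10) -> float:
    """Mean fraction of neighbours shared between the k-NN graphs of a and b.

    1.0 means the neighbourhood structure of the latent space is perfectly
    preserved under rotation; 0.0 means it is completely scrambled.
    """
    nn_a, nn_b = knn_indices(a, k), knn_indices(b, k)
    return float(np.mean([len(set(r_a) & set(r_b)) / k
                          for r_a, r_b in zip(nn_a, nn_b)]))

# Stand-in data: z_orig / z_rot would come from encoding the same patches
# before and after rotation with a pathology foundation model.
rng = np.random.default_rng(0)
z_orig = rng.normal(size=(128, 768))
z_rot = z_orig + 0.05 * rng.normal(size=(128, 768))
print(f"mean cosine distance: {cosine_distance(z_orig, z_rot).mean():.4f}")
print(f"mutual k-NN overlap:  {mutual_knn_overlap(z_orig, z_rot, k=10):.4f}")
```

Under this reading of the measures, a perfectly rotation-invariant encoder would give a cosine distance near 0 and a mutual k-NN overlap near 1 at every rotation angle.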

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2412.11938
Document Type:
Working Paper