1. TFS-ViT: Token-level feature stylization for domain generalization.
- Authors
- Noori, Mehrdad; Cheraghalikhani, Milad; Bahri, Ali; Vargas Hakim, Gustavo A.; Osowiechi, David; Ayed, Ismail Ben; Desrosiers, Christian
- Subjects
- Transformer models; Deep learning; Convolutional neural networks; Computer vision; Generalization; Computational complexity
- Abstract
Standard deep learning models such as convolutional neural networks (CNNs) lack the ability to generalize to domains not seen during training. This problem stems mainly from the common but often wrong assumption that source and target data come from the same i.i.d. distribution. Recently, Vision Transformers (ViTs) have shown outstanding performance on a broad range of computer vision tasks, yet very few studies have investigated their ability to generalize to new domains. This paper presents a first Token-level Feature Stylization (TFS-ViT) approach for domain generalization, which improves the performance of ViTs on unseen data by synthesizing new domains. Our approach transforms token features by mixing the normalization statistics of images from different domains. We further improve this approach with a novel strategy for attention-aware stylization, which uses the attention maps of class (CLS) tokens to compute and mix normalization statistics of tokens corresponding to different image regions. The proposed method is flexible in the choice of backbone model and can easily be applied to any ViT-based architecture with a negligible increase in computational complexity. Comprehensive experiments show that our approach achieves state-of-the-art performance on five challenging domain generalization benchmarks and demonstrate its ability to deal with different types of domain shift. The implementation is available at this repository.
- A new token-level stylization method for domain generalization is introduced.
- We further enhance the method with a novel attention-driven styling strategy.
- Our method is flexible and easily adaptable to any ViT, adding minimal complexity.
- Our method shows superior performance in most cases on five challenging datasets.
[ABSTRACT FROM AUTHOR]
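The core idea described above, stylizing token features by mixing normalization statistics from two domains, can be illustrated with a minimal sketch. This is not the authors' implementation: the function name `token_feature_stylize`, the per-channel statistics over tokens, and the single mixing coefficient `alpha` are all assumptions chosen for illustration.

```python
import numpy as np

def token_feature_stylize(x, x_ref, alpha=0.5, eps=1e-6):
    """Hypothetical sketch of token-level feature stylization.

    x, x_ref: arrays of shape (num_tokens, dim) holding token
    features from a source image and a reference image drawn
    from a different domain (shapes and naming are assumptions).
    """
    # Per-channel statistics computed over the token axis
    mu, sigma = x.mean(axis=0), x.std(axis=0) + eps
    mu_r, sigma_r = x_ref.mean(axis=0), x_ref.std(axis=0) + eps
    # Mix the normalization statistics of the two domains
    mu_mix = alpha * mu + (1 - alpha) * mu_r
    sigma_mix = alpha * sigma + (1 - alpha) * sigma_r
    # Normalize with the source statistics, re-stylize with the mix,
    # synthesizing features from an intermediate "new" domain
    return (x - mu) / sigma * sigma_mix + mu_mix
```

Under this sketch, `alpha=1` returns (approximately) the original features, while `alpha=0` re-stylizes them entirely with the reference domain's statistics; the attention-aware variant in the paper would additionally weight these statistics by CLS-token attention maps.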
- Published
- 2024