
Robustifying Vision Transformer Without Retraining from Scratch Using Attention-Based Test-Time Adaptation

Authors :
Takeshi Kojima
Yusuke Iwasawa
Yutaka Matsuo
Source :
New Generation Computing. 41:5-24
Publication Year :
2022
Publisher :
Springer Science and Business Media LLC, 2022.

Abstract

Vision Transformer (ViT) is becoming increasingly popular in the field of image processing. This study aims to improve robustness against unknown perturbations without retraining the ViT model from scratch. Because our approach does not alter the training phase, it avoids repeating the computationally heavy pretraining of ViT. Specifically, we use test-time adaptation (TTA), in which the model corrects its own predictions during test time. Tent, a representative test-time adaptation method, was recently found to be applicable to ViT by modulating its parameters and applying gradient clipping. However, we observed that Tent sometimes fails catastrophically, especially under severe perturbations. To stabilize the adaptation, we propose a new loss function called Attent, which minimizes the distributional difference of attention entropy between the source and the target. Image classification experiments on CIFAR-10-C, CIFAR-100-C, and ImageNet-C show that both Tent and Attent are effective across a wide variety of corruptions. The results also show that combining Attent with Tent further improves classification accuracy on corrupted data.
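The abstract describes two test-time losses: Tent's prediction-entropy minimization and the proposed Attent loss, which matches attention-entropy statistics between source and target. The sketch below is a minimal, illustrative PyTorch example of that idea only; the tensor names (`attn_probs`, `source_mean`), the L2 form of the matching term, and the way the two losses are summed are assumptions for illustration, not the authors' exact formulation.

```python
# Illustrative sketch (assumed details, not the paper's implementation).
import torch
import torch.nn.functional as F


def prediction_entropy(logits: torch.Tensor) -> torch.Tensor:
    """Tent-style loss: mean entropy of the softmax predictions."""
    probs = F.softmax(logits, dim=-1)
    log_probs = F.log_softmax(logits, dim=-1)
    return -(probs * log_probs).sum(dim=-1).mean()


def attention_entropy(attn_probs: torch.Tensor) -> torch.Tensor:
    """Per-head mean entropy of attention distributions.

    attn_probs: (batch, heads, queries, keys), each row sums to 1.
    Returns a tensor of shape (heads,).
    """
    ent = -(attn_probs * attn_probs.clamp_min(1e-12).log()).sum(dim=-1)
    return ent.mean(dim=(0, 2))  # average over batch and query positions


def attent_like_loss(attn_probs: torch.Tensor, source_mean: torch.Tensor) -> torch.Tensor:
    """Match target attention-entropy statistics to precomputed source statistics
    (assumed squared-difference form)."""
    target_mean = attention_entropy(attn_probs)
    return ((target_mean - source_mean) ** 2).mean()


if __name__ == "__main__":
    # Toy usage on random tensors standing in for a ViT's logits and attention maps.
    torch.manual_seed(0)
    logits = torch.randn(8, 10)                                   # (batch, classes)
    attn = torch.softmax(torch.randn(8, 12, 197, 197), dim=-1)    # (batch, heads, q, k)
    source_attn = torch.softmax(torch.randn(8, 12, 197, 197), dim=-1)
    source_mean = attention_entropy(source_attn).detach()         # "source" statistics

    loss = prediction_entropy(logits) + attent_like_loss(attn, source_mean)
    print(float(loss))
```

In practice such a combined loss would be backpropagated only into a small set of adaptable parameters during test time, consistent with the abstract's statement that the training phase is left unchanged.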

Details

ISSN :
1882-7055 and 0288-3635
Volume :
41
Database :
OpenAIRE
Journal :
New Generation Computing
Accession number :
edsair.doi...........27ed23429084bf890fbd278f77deae77
Full Text :
https://doi.org/10.1007/s00354-022-00197-9