Improving Vision Transformers by Revisiting High-frequency Components

Authors :
Bai, Jiawang
Yuan, Li
Xia, Shu-Tao
Yan, Shuicheng
Li, Zhifeng
Liu, Wei
Publication Year :
2022
Publisher :
arXiv, 2022.

Abstract

Transformer models have shown promising effectiveness on various vision tasks. However, compared with training Convolutional Neural Network (CNN) models, training Vision Transformer (ViT) models is more difficult and relies on large-scale training sets. To explain this observation, we hypothesize that ViT models are less effective than CNN models at capturing the high-frequency components of images, and verify this hypothesis by a frequency analysis. Inspired by this finding, we first investigate the effects of existing techniques for improving ViT models from a new frequency perspective, and find that the success of some techniques (e.g., RandAugment) can be attributed to better usage of the high-frequency components. Then, to compensate for this insufficient ability of ViT models, we propose HAT, which directly augments the high-frequency components of images via adversarial training. We show that HAT can consistently boost the performance of various ViT models (e.g., +1.2% for ViT-B, +0.5% for Swin-B), and in particular lifts the advanced model VOLO-D5 to 87.3% using only ImageNet-1K data; this superiority is also maintained on out-of-distribution data and transfers to downstream tasks. The code is available at: https://github.com/jiawangbai/HAT.

Comment: Accepted to ECCV 2022; Code: https://github.com/jiawangbai/HAT
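The abstract does not spell out how an image is split into frequency components or how the high-frequency part is perturbed; for the exact procedure, see the paper and repository. As a rough illustration only, the sketch below decomposes a batch of images with a centered FFT and a circular low-pass mask, then takes a single FGSM-style adversarial step on the high-frequency component. All names and parameters here (split_frequency, hat_augment, radius, eps) are hypothetical choices for this sketch, not the authors' API.

    import torch
    import torch.fft

    def split_frequency(images, radius=16):
        """Split images (B, C, H, W) into low- and high-frequency parts
        using a centered 2D FFT and a circular low-pass mask.
        `radius` is a hypothetical cutoff, not taken from the paper."""
        b, c, h, w = images.shape
        freq = torch.fft.fftshift(torch.fft.fft2(images), dim=(-2, -1))
        # Circular low-pass mask around the spectrum center.
        ys = torch.arange(h).view(-1, 1) - h // 2
        xs = torch.arange(w).view(1, -1) - w // 2
        mask = ((ys ** 2 + xs ** 2) <= radius ** 2).float().to(images.device)
        low = torch.fft.ifft2(
            torch.fft.ifftshift(freq * mask, dim=(-2, -1))
        ).real
        high = images - low
        return low, high

    def hat_augment(model, images, labels, eps=0.1):
        """One adversarial step on the high-frequency component only:
        an FGSM-style sign-gradient perturbation is added to the
        high-frequency part, while the low-frequency part is untouched.
        This is an illustrative approximation, not HAT's exact update."""
        low, high = split_frequency(images)
        high = high.clone().detach().requires_grad_(True)
        loss = torch.nn.functional.cross_entropy(model(low + high), labels)
        loss.backward()
        return (low + high + eps * high.grad.sign()).detach()

The same split_frequency helper also illustrates the kind of frequency analysis the abstract mentions: feeding a model only the low- or only the high-frequency component and comparing accuracies indicates how much each band contributes to its predictions.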

Details

Database :
OpenAIRE
Accession number :
edsair.doi.dedup.....a444f32ad320dd319dffbcc83b811d9b
Full Text :
https://doi.org/10.48550/arxiv.2204.00993