
Hiera: A Hierarchical Vision Transformer without the Bells-and-Whistles

Authors :
Ryali, Chaitanya
Hu, Yuan-Ting
Bolya, Daniel
Wei, Chen
Fan, Haoqi
Huang, Po-Yao
Aggarwal, Vaibhav
Chowdhury, Arkabandhu
Poursaeed, Omid
Hoffman, Judy
Malik, Jitendra
Li, Yanghao
Feichtenhofer, Christoph
Publication Year :
2023

Abstract

Modern hierarchical vision transformers have added several vision-specific components in the pursuit of supervised classification performance. While these components lead to effective accuracies and attractive FLOP counts, the added complexity actually makes these transformers slower than their vanilla ViT counterparts. In this paper, we argue that this additional bulk is unnecessary. By pretraining with a strong visual pretext task (MAE), we can strip out all the bells-and-whistles from a state-of-the-art multi-stage vision transformer without losing accuracy. In the process, we create Hiera, an extremely simple hierarchical vision transformer that is more accurate than previous models while being significantly faster both at inference and during training. We evaluate Hiera on a variety of tasks for image and video recognition. Our code and models are available at https://github.com/facebookresearch/hiera.

Comment: ICML 2023 Oral version. Code+Models: https://github.com/facebookresearch/hiera
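
Since the abstract points to released code and pretrained models, a minimal sketch of how one might load a pretrained Hiera image model from that repository via torch.hub is shown below. The entry-point name `hiera_base_224` and the checkpoint tag `mae_in1k_ft_in1k` are assumptions about the repository's naming scheme; check the repo README for the exact identifiers.

```python
import torch

# Load a pretrained Hiera model from the facebookresearch/hiera repo via torch.hub.
# The model name and checkpoint tag below are assumptions, not confirmed identifiers.
model = torch.hub.load(
    "facebookresearch/hiera",
    model="hiera_base_224",          # assumed hub entry-point name
    pretrained=True,
    checkpoint="mae_in1k_ft_in1k",   # assumed MAE-pretrained, IN-1k fine-tuned checkpoint
)
model.eval()

# Forward pass on a dummy 224x224 RGB image batch.
with torch.no_grad():
    logits = model(torch.randn(1, 3, 224, 224))
print(logits.shape)  # expected: torch.Size([1, 1000]) for ImageNet-1k classification
```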

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2306.00989
Document Type :
Working Paper