
Vision Transformers: From Semantic Segmentation to Dense Prediction

Authors:
Zhang, Li
Lu, Jiachen
Zheng, Sixiao
Zhao, Xinxuan
Zhu, Xiatian
Fu, Yanwei
Xiang, Tao
Feng, Jianfeng
Torr, Philip H. S.
Publication Year: 2022

Abstract

The emergence of vision transformers (ViTs) in image classification has shifted the methodologies for visual representation learning. In particular, ViTs learn visual representations with a full receptive field at every layer across all image patches, in contrast to the receptive fields of CNNs, which grow gradually across layers, and to other alternatives (e.g., large kernels and atrous convolution). In this work, we explore for the first time the global context learning potential of ViTs for dense visual prediction (e.g., semantic segmentation). Our motivation is that, by learning global context at full receptive field layer by layer, ViTs may capture stronger long-range dependency information, which is critical for dense prediction tasks. We first demonstrate that, by encoding an image as a sequence of patches, a vanilla ViT without local convolution and resolution reduction can yield a stronger visual representation for semantic segmentation. For example, our model, termed SEgmentation TRansformer (SETR), excels on ADE20K (50.28% mIoU, ranking first on the test leaderboard on the day of submission) and performs competitively on Cityscapes. However, the basic ViT architecture falls short in broader dense prediction applications, such as object detection and instance segmentation, due to its lack of a pyramidal structure, high computational demand, and insufficient local context. To tackle general dense visual prediction tasks in a cost-effective manner, we further formulate a family of Hierarchical Local-Global (HLG) Transformers, characterized by local attention within windows and global attention across windows in a pyramidal architecture. Extensive experiments show that our methods achieve appealing performance on a variety of dense prediction tasks (e.g., object detection, instance segmentation, and semantic segmentation) as well as on image classification.

Comment: Extended version of the CVPR 2021 paper arXiv:2012.15840. Published in International Journal of Computer Vision (2024).
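To make the SETR idea described in the abstract concrete, the following is a minimal sketch of the pipeline it outlines: the image is encoded as a sequence of patch tokens, a plain transformer encoder attends over all patches at every layer (full receptive field, no resolution reduction), and the tokens are reshaped back to a 2D grid and upsampled to per-pixel class logits. This is an illustrative PyTorch sketch, not the authors' implementation; the hyperparameters (patch size, depth, embedding width, 150 ADE20K classes) and the simple 1x1-conv plus bilinear-upsampling head are assumptions chosen for readability.

```python
# Minimal SETR-style sketch: patch-sequence encoding + vanilla transformer
# encoder + naive per-pixel decoding head. Illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ViTSegSketch(nn.Module):
    def __init__(self, img_size=512, patch_size=16, embed_dim=768,
                 depth=12, num_heads=12, num_classes=150):
        super().__init__()
        self.grid = img_size // patch_size            # tokens per spatial side
        num_patches = self.grid ** 2
        # Linear patch embedding implemented as a strided convolution.
        self.patch_embed = nn.Conv2d(3, embed_dim, kernel_size=patch_size,
                                     stride=patch_size)
        self.pos_embed = nn.Parameter(torch.zeros(1, num_patches, embed_dim))
        # Vanilla transformer encoder: every layer attends over all patch
        # tokens, i.e. a full receptive field at every depth, no downsampling.
        layer = nn.TransformerEncoderLayer(d_model=embed_dim, nhead=num_heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        # Simple decoder head: 1x1 conv to class logits, then bilinear upsample.
        self.head = nn.Conv2d(embed_dim, num_classes, kernel_size=1)

    def forward(self, x):
        B, _, H, W = x.shape
        tokens = self.patch_embed(x)                  # (B, C, H/16, W/16)
        tokens = tokens.flatten(2).transpose(1, 2)    # (B, N, C) patch sequence
        tokens = self.encoder(tokens + self.pos_embed)
        feat = tokens.transpose(1, 2).reshape(B, -1, self.grid, self.grid)
        logits = self.head(feat)                      # coarse per-class map
        return F.interpolate(logits, size=(H, W), mode="bilinear",
                             align_corners=False)

# Usage: a 512x512 RGB image yields per-pixel logits over 150 classes.
model = ViTSegSketch()
out = model(torch.randn(1, 3, 512, 512))
print(out.shape)  # torch.Size([1, 150, 512, 512])
```

By contrast, the HLG design summarized in the abstract restricts most attention to local windows and exchanges information across windows within a pyramidal (multi-resolution) backbone, trading some of this constant full-receptive-field processing for lower cost and the hierarchical features needed by detection and instance segmentation.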

Details

Database: arXiv
Publication Type: Report
Accession number: edsarx.2207.09339
Document Type: Working Paper
Full Text: https://doi.org/10.1007/s11263-024-02173-w