
Swin3D: A Pretrained Transformer Backbone for 3D Indoor Scene Understanding

Authors :
Yang, Yu-Qi
Guo, Yu-Xiao
Xiong, Jian-Yu
Liu, Yang
Pan, Hao
Wang, Peng-Shuai
Tong, Xin
Guo, Baining
Publication Year :
2023

Abstract

The use of pretrained backbones with fine-tuning has been successful for 2D vision and natural language processing tasks, showing advantages over task-specific networks. In this work, we introduce a pretrained 3D backbone, called Swin3D, for 3D indoor scene understanding. We design a 3D Swin transformer as our backbone network, which enables efficient self-attention on sparse voxels with linear memory complexity, making the backbone scalable to large models and datasets. We also introduce a generalized contextual relative positional embedding scheme to capture various irregularities of point signals for improved network performance. We pretrained a large Swin3D model on a synthetic Structured3D dataset, which is an order of magnitude larger than the ScanNet dataset. Our model pretrained on the synthetic dataset not only generalizes well to downstream segmentation and detection on real 3D point datasets, but also outperforms state-of-the-art methods on downstream tasks, with +2.3 mIoU and +2.2 mIoU on S3DIS Area5 and 6-fold semantic segmentation, +1.8 mIoU on ScanNet segmentation (val), +1.9 mAP@0.5 on ScanNet detection, and +8.1 mAP@0.5 on S3DIS detection. A series of extensive ablation studies further validate the scalability, generality, and superior performance enabled by our approach. The code and models are available at https://github.com/microsoft/Swin3D.

Comment: Project page: https://yukichiii.github.io/project/swin3D/swin3D.html
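To make the two ideas highlighted in the abstract concrete, below is a minimal PyTorch sketch (not the authors' released code) of self-attention restricted to the occupied voxels of one local window, combined with a query-conditioned ("contextual") relative positional bias. The sketch covers only the coordinate signal, whereas the paper's generalized scheme also handles other point signals; the class name SparseWindowAttention and all argument names are hypothetical.

```python
# Minimal sketch, assuming PyTorch. Not the released Swin3D implementation;
# the class and argument names here are hypothetical.
import torch
import torch.nn as nn


class SparseWindowAttention(nn.Module):
    """Self-attention among the occupied voxels of one local window.

    Because attention is evaluated per window, cost scales with the number
    of occupied (sparse) voxels rather than with the full dense 3D grid.
    """

    def __init__(self, dim: int, num_heads: int, window_size: int):
        super().__init__()
        self.num_heads = num_heads
        self.head_dim = dim // num_heads
        self.scale = self.head_dim ** -0.5
        self.window_size = window_size
        self.qkv = nn.Linear(dim, 3 * dim)
        self.proj = nn.Linear(dim, dim)
        # One learned embedding per discretized relative offset and per head;
        # the bias below is query-conditioned ("contextual"), not a fixed table.
        num_rel = (2 * window_size - 1) ** 3
        self.rel_table = nn.Parameter(
            torch.randn(num_rel, num_heads, self.head_dim) * 0.02)

    def forward(self, feats: torch.Tensor, coords: torch.Tensor) -> torch.Tensor:
        # feats:  (n, dim) features of the occupied voxels in one window
        # coords: (n, 3)   integer voxel coordinates inside that window
        n, dim = feats.shape
        qkv = self.qkv(feats).reshape(n, 3, self.num_heads, self.head_dim)
        q, k, v = qkv.unbind(dim=1)                        # each (n, heads, hd)
        attn = torch.einsum('ihd,jhd->hij', q * self.scale, k)

        # Flatten each pairwise offset into an index into the embedding table
        # (standard Swin-style relative-position indexing, extended to 3D).
        rel = coords[:, None, :] - coords[None, :, :] + (self.window_size - 1)
        w = 2 * self.window_size - 1
        idx = rel[..., 0] * w * w + rel[..., 1] * w + rel[..., 2]  # (n, n)

        # Contextual bias: dot product of each query with the embedding of
        # its relative offset to every key.
        bias = torch.einsum('ihd,ijhd->hij', q, self.rel_table[idx])
        attn = (attn + bias).softmax(dim=-1)

        out = torch.einsum('hij,jhd->ihd', attn, v).reshape(n, dim)
        return self.proj(out)


# Usage: 50 occupied voxels in a 5x5x5 window with 96-dim features.
layer = SparseWindowAttention(dim=96, num_heads=6, window_size=5)
x = torch.randn(50, 96)
c = torch.randint(0, 5, (50, 3))
print(layer(x, c).shape)  # torch.Size([50, 96])
```

Restricting attention to each window is what gives the linear memory behavior the abstract refers to: only pairs of occupied voxels within the same window interact, so nothing quadratic in the size of the dense grid is ever materialized.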

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2304.06906
Document Type :
Working Paper