
ASSANet: An Anisotropic Separable Set Abstraction for Efficient Point Cloud Representation Learning

Authors:
Qian, Guocheng
Hammoud, Hasan Abed Al Kader
Li, Guohao
Thabet, Ali
Ghanem, Bernard
Publication Year:
2021

Abstract

Access to 3D point cloud representations has been widely facilitated by LiDAR sensors embedded in various mobile devices. This has led to an emerging need for fast and accurate point cloud processing techniques. In this paper, we revisit and dive deeper into PointNet++, one of the most influential yet under-explored networks, and develop faster and more accurate variants of the model. We first present a novel Separable Set Abstraction (SA) module that disentangles the vanilla SA module used in PointNet++ into two separate learning stages: (1) learning channel correlation and (2) learning spatial correlation. The Separable SA module is significantly faster than the vanilla version, yet it achieves comparable performance. We then introduce a new Anisotropic Reduction function into our Separable SA module and propose an Anisotropic Separable SA (ASSA) module that substantially increases the network's accuracy. We then replace the vanilla SA modules in PointNet++ with the proposed ASSA module, and denote the modified network as ASSANet. Extensive experiments on point cloud classification, semantic segmentation, and part segmentation show that ASSANet outperforms PointNet++ and other methods, achieving higher accuracy and faster inference. In particular, ASSANet outperforms PointNet++ by $7.4$ mIoU on S3DIS Area 5, while maintaining $1.6\times$ faster inference speed on a single NVIDIA 2080Ti GPU. Our scaled ASSANet variant achieves $66.8$ mIoU and outperforms KPConv, while being more than $54\times$ faster.

Comment: ASSANet was accepted to NeurIPS'21 as a Spotlight paper. Code available at https://github.com/guochengqian/ASSANet
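
The abstract describes the separable set abstraction only at a high level. Below is a minimal sketch, not the authors' released implementation (linked above), of how such a module could be structured, assuming a PointNet++-style k-NN grouping; names such as SeparableSA, knn_group, and pos_mlp are hypothetical.

    # Minimal sketch of a separable set-abstraction module (illustrative only).
    import torch
    import torch.nn as nn

    def knn_group(xyz, feats, k):
        """Brute-force k-NN grouping, for illustration.
        xyz: (B, N, 3) coordinates; feats: (B, N, C) per-point features.
        Returns relative neighbor coords (B, N, k, 3) and neighbor feats (B, N, k, C)."""
        dist = torch.cdist(xyz, xyz)                        # (B, N, N)
        idx = dist.topk(k, dim=-1, largest=False).indices   # (B, N, k)
        b = torch.arange(xyz.size(0), device=xyz.device).view(-1, 1, 1)
        rel_xyz = xyz[b, idx] - xyz.unsqueeze(2)             # (B, N, k, 3)
        return rel_xyz, feats[b, idx]                        # (B, N, k, C)

    class SeparableSA(nn.Module):
        """Separable set abstraction: channel correlation first, spatial correlation second."""

        def __init__(self, in_ch, out_ch, k=16):
            super().__init__()
            self.k = k
            # Stage 1: channel correlation -- a pointwise MLP applied once per point,
            # so its cost does not scale with the neighborhood size k.
            self.channel_mlp = nn.Sequential(
                nn.Linear(in_ch, out_ch), nn.BatchNorm1d(out_ch), nn.ReLU(inplace=True)
            )
            # Maps relative coordinates to per-neighbor weights (a stand-in for the
            # anisotropic reduction described in the abstract).
            self.pos_mlp = nn.Linear(3, out_ch)

        def forward(self, xyz, feats):
            B, N, _ = feats.shape
            # Stage 1: learn channel correlation on per-point features.
            f = self.channel_mlp(feats.reshape(B * N, -1)).reshape(B, N, -1)
            # Stage 2: learn spatial correlation over each local neighborhood.
            rel_xyz, nbr_f = knn_group(xyz, f, self.k)       # (B, N, k, 3), (B, N, k, C')
            # Anisotropic reduction: weight each neighbor by its relative position
            # instead of using a symmetric max/avg pool.
            w = torch.softmax(self.pos_mlp(rel_xyz), dim=2)  # (B, N, k, C')
            return (w * nbr_f).sum(dim=2)                    # (B, N, C')

    # Example usage on random data:
    # sa = SeparableSA(in_ch=32, out_ch=64, k=16)
    # out = sa(torch.rand(2, 1024, 3), torch.rand(2, 1024, 32))   # (2, 1024, 64)

In this sketch the speedup over a vanilla SA block comes from running the channel MLP once per point rather than once per grouped neighbor, leaving only the position-weighted reduction to operate over neighborhoods.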

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2110.10538
Document Type:
Working Paper