
Weakly supervised semantic segmentation for point cloud based on view-based adversarial training and self-attention fusion.

Authors :
Miao, Yongwei
Ren, Guoxiang
Wang, Jinrong
Liu, Fuchang
Source :
Computers & Graphics. Nov 2023, Vol. 116, p46-54. 9p.
Publication Year :
2023

Abstract

Traditional methods of weakly supervised semantic segmentation (WSSS) for point cloud scenes have several limitations, including limited precision and difficulty in handling complex scenes due to imprecise labels or partial annotations. To address these issues, we perform view-based adversarial training on the original point cloud scene samples through view resampling and Gaussian noise perturbation to reduce overfitting. By combining a self-attention mechanism with multi-layer perceptrons and a point cloud segmentation strategy, we perform dimensionality enhancement and reduction operations to better capture the local features of point cloud data. Finally, we obtain the semantic segmentation results of a point cloud scene by fusing local and global semantic features. In the design of the network loss function, we combine the Siamese loss, the smoothness loss, and the cross-entropy loss to improve the capability and fidelity of the segmentation network. Specifically, the Siamese loss computes the distance between differently augmented point cloud data in their feature embedding space, and the smoothness loss penalizes the discontinuity of semantic information between adjacent regions. The proposed weakly supervised segmentation network achieves an overall segmentation accuracy close to fully supervised segmentation methods and outperforms most existing weakly supervised segmentation methods by 5% to 10% in mIoU for scene segmentation on the S3DIS, ShapeNet, and PartNet datasets. Extensive experiments demonstrate the robustness, effectiveness, and generalization of the proposed point cloud segmentation network.

• We propose effective data augmentation methods, including view resampling and adversarial training, to tackle partial annotations.
• To fully integrate the global and local features of point clouds, we employ a high-dimensional feature self-attention mechanism and multi-layer perceptrons to fuse multi-level features.
• We present a novel point cloud segmentation framework that achieves higher segmentation accuracy than traditional segmentation networks for different point cloud scenes.
• Extensive experiments demonstrate the robustness, effectiveness, and generalization of the proposed weakly supervised point cloud semantic segmentation network.
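The abstract describes a training objective built from three terms: cross-entropy on the sparsely annotated points, a Siamese term measuring the distance between embeddings of two augmented versions of the same scene, and a smoothness term penalizing semantic discontinuity between adjacent regions. The paper's own architecture and weighting are not given here, so the following is only a minimal PyTorch-style sketch of how such a combined loss could be assembled; the function names, the noise scale, the neighbor-indexing scheme, and the loss weights are all assumptions, not the authors' implementation.

    # Hypothetical sketch of the combined objective described in the abstract.
    # All names, weights, and the Gaussian noise scale are assumptions.
    import torch
    import torch.nn.functional as F

    def gaussian_perturb(points, sigma=0.01):
        """Gaussian noise perturbation used as one of the augmentations."""
        return points + sigma * torch.randn_like(points)

    def combined_loss(logits, labels, label_mask, emb_a, emb_b, neighbor_idx,
                      w_siamese=1.0, w_smooth=0.1):
        # Cross-entropy only on the weakly (partially) annotated points.
        ce = F.cross_entropy(logits[label_mask], labels[label_mask])

        # Siamese loss: distance between feature embeddings of two
        # differently augmented copies of the same point cloud.
        siamese = F.mse_loss(emb_a, emb_b)

        # Smoothness loss: penalize differing class probabilities between
        # each point and its K spatial neighbors.
        probs = F.softmax(logits, dim=-1)          # (N, C)
        neigh = probs[neighbor_idx]                 # (N, K, C)
        smooth = (probs.unsqueeze(1) - neigh).abs().sum(-1).mean()

        return ce + w_siamese * siamese + w_smooth * smooth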

Details

Language :
English
ISSN :
0097-8493
Volume :
116
Database :
Academic Search Index
Journal :
Computers & Graphics
Publication Type :
Academic Journal
Accession number :
174061404
Full Text :
https://doi.org/10.1016/j.cag.2023.08.007