
MF-MOS: A Motion-Focused Model for Moving Object Segmentation

Authors:
Cheng, Jintao
Zeng, Kang
Huang, Zhuoxu
Tang, Xiaoyu
Wu, Jin
Zhang, Chengxi
Chen, Xieyuanli
Fan, Rui
Publication Year:
2024

Abstract

Moving object segmentation (MOS) provides a reliable solution for detecting traffic participants and is therefore of great interest in the autonomous driving field. Capturing motion cues is critical in the MOS problem. Previous methods extract motion features directly from range images. In contrast, we argue that residual maps offer greater potential for motion information, while range images contain rich semantic guidance. Based on this intuition, we propose MF-MOS, a novel motion-focused model with a dual-branch structure for LiDAR moving object segmentation. Specifically, we decouple the spatial-temporal information by capturing motion from the residual maps and generating semantic features from the range images, which serve as movable-object guidance for the motion branch. This straightforward yet distinctive design makes the most of both range images and residual maps, greatly improving the performance of the LiDAR-based MOS task. Remarkably, MF-MOS achieved a leading IoU of 76.7% on the MOS leaderboard of the SemanticKITTI dataset upon submission, demonstrating state-of-the-art performance. The implementation of MF-MOS has been released at https://github.com/SCNU-RISLAB/MF-MOS.

Comment: Accepted by ICRA 2024
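To make the dual-branch idea in the abstract concrete, below is a minimal, hedged sketch of such an architecture in PyTorch. It is not the authors' implementation (that is in the linked repository); the channel counts, the `DualBranchMOS` module name, and the sigmoid-gated fusion used as "movable object guidance" are illustrative assumptions, chosen only to show how a semantic branch over range images can modulate a motion branch over residual maps.

```python
# Conceptual sketch only; see https://github.com/SCNU-RISLAB/MF-MOS for MF-MOS itself.
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch):
    """Generic 3x3 conv -> BN -> ReLU encoder block."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )


class DualBranchMOS(nn.Module):
    """Hypothetical dual-branch model: motion branch on residual maps,
    semantic branch on range images, semantic features gating the motion features."""

    def __init__(self, n_residual_frames=8, n_range_channels=5, n_classes=3):
        super().__init__()
        # Motion branch: consumes a stack of residual maps (one channel per past frame).
        self.motion_encoder = conv_block(n_residual_frames, 64)
        # Semantic branch: consumes the range-view projection (e.g. range, x, y, z, intensity).
        self.semantic_encoder = conv_block(n_range_channels, 64)
        # Semantic features become a per-pixel gate that guides the motion branch.
        self.guidance = nn.Sequential(nn.Conv2d(64, 64, kernel_size=1), nn.Sigmoid())
        # Segmentation head over the motion-focused, semantically guided features.
        self.head = nn.Conv2d(64, n_classes, kernel_size=1)

    def forward(self, residual_maps, range_image):
        motion_feat = self.motion_encoder(residual_maps)
        semantic_feat = self.semantic_encoder(range_image)
        # Movable-object guidance: gate motion features with semantic cues.
        fused = motion_feat * self.guidance(semantic_feat)
        return self.head(fused)


if __name__ == "__main__":
    # Toy shapes: batch of 2, a 64 x 2048 range-view projection.
    residual_maps = torch.randn(2, 8, 64, 2048)
    range_image = torch.randn(2, 5, 64, 2048)
    logits = DualBranchMOS()(residual_maps, range_image)
    print(logits.shape)  # torch.Size([2, 3, 64, 2048])
```

The gated fusion is one simple way to realize "semantic features as movable object guidance"; the actual MF-MOS network is more elaborate, so treat this as a reading aid rather than a reproduction.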

Details

Database:
arXiv
Publication Type:
Report
Accession Number:
edsarx.2401.17023
Document Type:
Working Paper