
Multi-Modal Sensor Fusion-Based Semantic Segmentation for Snow Driving Scenarios

Authors :
Takanori Emaru
Yukinori Kobayashi
Sirawich Vachmanus
Ankit A. Ravankar
Source :
IEEE Sensors Journal, 21(15):16839-16851
Publication Year :
2021
Publisher :
IEEE (Institute of Electrical and Electronics Engineers), 2021.

Abstract

In recent years, autonomous vehicle driving technology and advanced driver assistance systems have played a key role in improving road safety. However, weather conditions such as snow pose severe challenges for autonomous driving and remain an active research area. Advances in computation and sensor technology have paved the way for deep learning and neural network–based techniques, whose superior reliability, detection resilience, and improved accuracy allow them to replace classical approaches. In this research, we investigate the semantic segmentation of roads in snowy environments. We propose a multi-modal fused RGB-T semantic segmentation network that takes a color (RGB) image and a thermal map (T) as inputs. This paper introduces a novel fusion module that combines the feature maps from both inputs. We evaluate the proposed model on a new snow dataset that we collected and on other publicly available datasets. The segmentation results show that the proposed fused RGB-T input can segment human subjects in snowy environments better than an RGB-only input. The fusion module plays a vital role in improving the efficiency of multiple-input neural networks for person detection. Our results show that the proposed network achieves a higher success rate than other state-of-the-art networks. The combination of our fusion module and pyramid supervision path produced the best results in both mean accuracy and mean intersection over union on every dataset.
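The abstract does not specify the internals of the fusion module. As a purely illustrative sketch (not the authors' method), the core idea of combining an RGB feature map with a thermal feature map can be shown as an element-wise weighted sum of same-shaped maps; the function `fuse_feature_maps` and the weight `alpha` below are hypothetical names introduced for this example.

```python
# Hypothetical sketch of RGB-T feature-map fusion: element-wise weighted
# summation of two same-shaped 2-D feature maps. This is an assumption for
# illustration only; the paper's actual fusion module is not described here.

def fuse_feature_maps(rgb_features, thermal_features, alpha=0.5):
    """Fuse two same-shaped 2-D feature maps element-wise.

    alpha weights the RGB branch; (1 - alpha) weights the thermal branch.
    """
    if len(rgb_features) != len(thermal_features):
        raise ValueError("feature maps must have the same shape")
    fused = []
    for rgb_row, t_row in zip(rgb_features, thermal_features):
        if len(rgb_row) != len(t_row):
            raise ValueError("feature maps must have the same shape")
        fused.append([alpha * r + (1.0 - alpha) * t
                      for r, t in zip(rgb_row, t_row)])
    return fused

# Example: two 2x2 feature maps fused with equal weighting.
rgb = [[1.0, 2.0], [3.0, 4.0]]
thermal = [[0.0, 2.0], [1.0, 0.0]]
print(fuse_feature_maps(rgb, thermal))  # → [[0.5, 2.0], [2.0, 2.0]]
```

In a real network such fusion would operate on learned convolutional feature tensors, and the mixing weights would typically be learned rather than fixed.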

Details

Language :
English
ISSN :
1530-437X
Volume :
21
Issue :
15
Database :
OpenAIRE
Journal :
IEEE Sensors Journal
Accession number :
edsair.doi.dedup.....78ff02f7989fdbd2d90f12be3cbd42d3