
DA4AD: End-to-End Deep Attention-based Visual Localization for Autonomous Driving

Authors:
Zhou, Yao
Wan, Guowei
Hou, Shenhua
Yu, Li
Wang, Gang
Rui, Xiaofei
Song, Shiyu
Publication Year:
2020

Abstract

We present a visual localization framework based on novel deep attention-aware features for autonomous driving that achieves centimeter-level localization accuracy. Conventional approaches to the visual localization problem rely on handcrafted features or human-made objects on the road. They are known to be either prone to unstable matching caused by severe appearance or lighting changes, or too scarce to deliver constant and robust localization results in challenging scenarios. In this work, we seek to exploit the deep attention mechanism to search for salient, distinctive and stable features that are good for long-term matching in the scene through a novel end-to-end deep neural network. Furthermore, our learned feature descriptors are demonstrated to be competent to establish robust matches and therefore successfully estimate the optimal camera poses with high precision. We comprehensively validate the effectiveness of our method using a freshly collected dataset with high-quality ground truth trajectories and hardware synchronization between sensors. Results demonstrate that our method achieves a competitive localization accuracy when compared to the LiDAR-based localization solutions under various challenging circumstances, leading to a potential low-cost localization solution for autonomous driving.

Comment: 19 pages, 4 figures. Accepted by ECCV 2020.
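The attention-driven feature selection described in the abstract can be loosely illustrated as scoring candidate keypoints and keeping only the most salient ones for matching. The sketch below is a generic NumPy toy, not the paper's actual network: the attention logits stand in for the output of a learned attention head, and `select_salient_features` is a hypothetical helper name.

```python
import numpy as np

def softmax(x):
    # numerically stable softmax over a 1-D score vector
    e = np.exp(x - np.max(x))
    return e / e.sum()

def select_salient_features(descriptors, attention_logits, k):
    """Keep the k candidate features with the highest attention weight.

    descriptors: (N, D) array of feature descriptors
    attention_logits: (N,) raw saliency scores (hypothetical stand-in
    for a learned attention head's output)
    """
    weights = softmax(attention_logits)
    top = np.argsort(weights)[::-1][:k]  # indices of the k most salient features
    return descriptors[top], weights[top]

# toy example: 5 candidate features with 3-D descriptors
rng = np.random.default_rng(0)
desc = rng.standard_normal((5, 3))
logits = np.array([0.1, 2.0, -1.0, 0.5, 1.5])
sel_desc, sel_w = select_salient_features(desc, logits, k=2)
print(sel_desc.shape)  # → (2, 3)
```

In the actual system, the surviving features would then be matched against a pre-built map and fed to a pose solver; here the point is only that attention weights rank candidates so that unstable, low-saliency features are discarded before matching.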

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2003.03026
Document Type:
Working Paper