
VDNA-PR: Using General Dataset Representations for Robust Sequential Visual Place Recognition

Authors:
Ramtoula, Benjamin
De Martini, Daniele
Gadd, Matthew
Newman, Paul
Publication Year:
2024

Abstract

This paper adapts a general dataset representation technique to produce robust Visual Place Recognition (VPR) descriptors, which are crucial for real-world mobile robot localisation. Two parallel lines of work on VPR have shown, on one side, that general-purpose off-the-shelf feature representations can provide robustness to domain shifts and, on the other, that fusing information from sequences of images improves performance. In our recent work on measuring domain gaps between image datasets, we proposed the Visual Distribution of Neuron Activations (VDNA) representation for datasets of images. This representation naturally handles image sequences and provides a general and granular feature representation derived from a general-purpose model. Moreover, it is built by tracking neuron activation values over the list of images being represented and is not limited to a particular neural network layer, so it captures both high- and low-level concepts. This work shows how VDNAs can be used for VPR by learning a very lightweight and simple encoder that generates task-specific descriptors. Our experiments show that our representation is more robust than current solutions to severe domain shifts away from the training data distribution, such as indoor environments and aerial imagery.

Comment: Published at ICRA 2024
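The abstract describes the pipeline only at a high level: per-neuron activation distributions are accumulated over an image sequence from multiple layers of a frozen general-purpose model, and a lightweight encoder maps the result to a task-specific descriptor. The sketch below illustrates that idea in PyTorch under stated assumptions; the backbone (a ResNet-18 here), the monitored layers, the histogram bin count, and the one-layer encoder are all illustrative choices, not the configuration used in the paper.

```python
# Minimal VDNA-style sketch (assumptions noted in the text above):
# accumulate per-neuron activation histograms over an image sequence
# using a frozen general-purpose backbone, then encode them with a
# lightweight learned projection into a compact VPR descriptor.
import torch
import torch.nn as nn
import torchvision.models as models

NUM_BINS = 32  # assumed histogram resolution, not the paper's value

backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

# Hook several layers so the descriptor mixes low- and high-level
# concepts, as the abstract emphasises.
activations = {}
def make_hook(name):
    def hook(module, inputs, output):
        activations[name] = output.detach()
    return hook

for layer_name in ["layer1", "layer3", "layer4"]:
    getattr(backbone, layer_name).register_forward_hook(make_hook(layer_name))

@torch.no_grad()
def vdna_descriptor(frames: torch.Tensor) -> torch.Tensor:
    """Build a flat vector of per-channel activation histograms.

    frames: (T, 3, H, W) preprocessed images from one sequence.
    Each channel is treated as a 'neuron'; its activation values over
    all frames and spatial positions are binned into a histogram.
    """
    backbone(frames)
    histograms = []
    for act in activations.values():
        channels = act.transpose(0, 1).reshape(act.shape[1], -1)  # (C, T*H*W)
        # Shared binning range per layer -- a simplification for the sketch.
        lo, hi = channels.min().item(), channels.max().item()
        for vals in channels:
            h = torch.histc(vals, bins=NUM_BINS, min=lo, max=hi)
            histograms.append(h / h.sum().clamp(min=1.0))  # normalise counts
    return torch.cat(histograms)

class VPREncoder(nn.Module):
    """Deliberately lightweight encoder from the general VDNA vector to a
    compact, task-specific place descriptor (dimensions are assumptions)."""
    def __init__(self, in_dim: int, out_dim: int = 256):
        super().__init__()
        self.proj = nn.Linear(in_dim, out_dim)

    def forward(self, vdna: torch.Tensor) -> torch.Tensor:
        return nn.functional.normalize(self.proj(vdna), dim=-1)

# Example: descriptor for a 5-frame sequence.
frames = torch.randn(5, 3, 224, 224)  # stand-in for preprocessed images
vdna = vdna_descriptor(frames)
encoder = VPREncoder(in_dim=vdna.numel())
descriptor = encoder(vdna)  # unit-norm vector, suitable for cosine matching
```

With descriptors normalised to unit length, place matching reduces to cosine similarity between a query sequence's descriptor and those of the map; training the encoder (e.g. with a metric-learning loss on VPR data) is omitted from this sketch.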

Details

Database:
arXiv
Publication Type:
Report
Accession Number:
edsarx.2403.09025
Document Type:
Working Paper