
Beyond Geo-localization: Fine-grained Orientation of Street-view Images by Cross-view Matching with Satellite Imagery with Supplementary Materials

Authors :
Hu, Wenmiao
Zhang, Yichen
Liang, Yuxuan
Yin, Yifang
Georgescu, Andrei
Tran, An
Kruppa, Hannes
Ng, See-Kiong
Zimmermann, Roger
Source :
Proceedings of the 30th ACM International Conference on Multimedia (2022) 6155-6164
Publication Year :
2023

Abstract

Street-view imagery provides novel experiences for exploring different places remotely. Carefully calibrated street-view images (e.g., Google Street View) can be used for various downstream tasks, such as navigation and map feature extraction. As high-quality personal cameras have become far more affordable and portable, an enormous number of crowdsourced street-view images are uploaded to the internet, but commonly with missing or noisy sensor information. To prepare this hidden treasure for "ready-to-use" status, determining the missing location information and the camera orientation angles are two equally important tasks. Recent methods have achieved high performance on geo-localization of street-view images by cross-view matching against a pool of geo-referenced satellite imagery. However, most existing works focus on geo-localization rather than on estimating the image orientation. In this work, we restate the importance of finding fine-grained orientation for street-view images, formally define the problem, and provide a set of evaluation metrics to assess the quality of the orientation estimation. We propose two methods to improve the granularity of the orientation estimation, achieving 82.4% and 72.3% accuracy for images with estimated angle errors below 2 degrees on the CVUSA and CVACT datasets, corresponding to absolute improvements of 34.9% and 28.2% over previous works. Integrating fine-grained orientation estimation into training also improves geo-localization performance, giving top-1 recall of 95.5%/85.5% and 86.8%/80.4% for orientation-known/unknown tests on the two datasets.

Comment: This paper has been accepted by ACM Multimedia 2022. This version contains additional supplementary materials.
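For readers interpreting the reported numbers, the two headline metrics are simple to compute: orientation accuracy counts images whose angular error (taken on the circle, so 359° vs. 1° is a 2° error) falls below a threshold, and top-1 recall counts queries whose correct satellite reference scores highest. The sketch below is not the authors' code; the function names and the 2-degree threshold argument are illustrative, assuming predicted and ground-truth orientations in degrees and a query-by-reference similarity matrix with matching pairs on the diagonal.

```python
import numpy as np

def angular_error(pred_deg, true_deg):
    """Smallest absolute difference between two compass angles, in degrees (0-180)."""
    diff = np.abs(np.asarray(pred_deg, dtype=float) - np.asarray(true_deg, dtype=float)) % 360.0
    return np.minimum(diff, 360.0 - diff)

def accuracy_within(pred_deg, true_deg, threshold_deg=2.0):
    """Fraction of images whose estimated orientation error is below the threshold."""
    return float(np.mean(angular_error(pred_deg, true_deg) < threshold_deg))

def recall_at_1(similarity):
    """Top-1 retrieval recall: row i is a street-view query, column i its true satellite match;
    a hit is counted when the diagonal entry is the row maximum."""
    top1 = np.argmax(similarity, axis=1)
    return float(np.mean(top1 == np.arange(similarity.shape[0])))
```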

Details

Database :
arXiv
Journal :
Proceedings of the 30th ACM International Conference on Multimedia (2022) 6155-6164
Publication Type :
Report
Accession number :
edsarx.2307.03398
Document Type :
Working Paper
Full Text :
https://doi.org/10.1145/3503161.3548102