
Self-Supervised Pretraining on Satellite Imagery: a Case Study on Label-Efficient Vehicle Detection

Authors:
Bourcier, Jules
Floquet, Thomas
Dashyan, Gohar
Ceillier, Tugdual
Alahari, Karteek
Chanussot, Jocelyn
Publication Year:
2022

Abstract

In defense-related remote sensing applications, such as vehicle detection in satellite imagery, supervised learning requires a huge number of labeled examples to reach operational performance. Such data are challenging to obtain, as labeling requires military experts, and some observables are intrinsically rare. This limited labeling capability, together with the large volume of unlabeled imagery produced by the growing number of sensors, makes object detection on remote sensing imagery highly relevant for self-supervised learning. We study in-domain self-supervised representation learning for object detection on very high resolution optical satellite imagery, a setting that remains poorly explored. To our knowledge, we are the first to study label efficiency on this task. We use the large land use classification dataset Functional Map of the World to pretrain representations with an extension of the Momentum Contrast framework. We then investigate this model's transferability on a real-world task of fine-grained vehicle detection and classification on Preligens proprietary data, which is designed to be representative of an operational use case of strategic site surveillance. We show that our in-domain self-supervised model is competitive with ImageNet pretraining, and outperforms it in the low-label regime.
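The Momentum Contrast (MoCo) framework cited in the abstract learns representations by matching a query embedding to its positive key while contrasting it against a queue of negative keys, via an InfoNCE loss. Below is a minimal NumPy sketch of that loss on toy data; the function name, dimensions, and random embeddings are illustrative assumptions, not details from the paper.

```python
import numpy as np

def info_nce_loss(q, k_pos, queue, temperature=0.07):
    """MoCo-style InfoNCE loss for a single query.

    q: (d,) query embedding; k_pos: (d,) its positive key;
    queue: (K, d) negative keys. All vectors assumed L2-normalized.
    """
    # Logits: one positive similarity followed by K negative similarities.
    l_pos = np.dot(q, k_pos)            # scalar
    l_neg = queue @ q                   # (K,)
    logits = np.concatenate(([l_pos], l_neg)) / temperature
    # Cross-entropy with the positive at index 0 (numerically stabilized).
    logits = logits - logits.max()
    return float(-logits[0] + np.log(np.exp(logits).sum()))

# Toy example: matched query/key vs. a mismatched random key.
rng = np.random.default_rng(0)
d, K = 8, 16
q = rng.normal(size=d)
q /= np.linalg.norm(q)
queue = rng.normal(size=(K, d))
queue /= np.linalg.norm(queue, axis=1, keepdims=True)

loss_match = info_nce_loss(q, q, queue)            # positive key equals query
loss_mismatch = info_nce_loss(q, queue[0], queue[1:])  # random "positive"
```

A perfectly matched positive key has similarity 1.0, so its loss is far smaller than that of a random key; this is the signal that drives the encoder during pretraining, before the learned backbone is transferred to the downstream detection task.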

Details

Database:
OAIster
Publication Type:
Electronic Resource
Accession number:
edsoai.on1381575873
Document Type:
Electronic Resource