
Enhanced Prototypical Network with Customized Region-Aware Convolution for Few-Shot SAR ATR

Authors :
Xuelian Yu
Hailong Yu
Yi Liu
Haohao Ren
Source :
Remote Sensing, Vol 16, Iss 19, p 3563 (2024)
Publication Year :
2024
Publisher :
MDPI AG, 2024.

Abstract

With the rapid development and successful application of deep learning technologies in the field of remote sensing, numerous deep-learning-based methods have emerged for synthetic aperture radar (SAR) automatic target recognition (ATR) tasks over the past few years. Generally, most deep-learning-based methods can achieve outstanding recognition performance on the condition that an abundance of labeled samples is available to train the model. However, in real application scenarios, it is difficult and costly to acquire and annotate abundant SAR images due to the imaging mechanism of SAR, which poses a significant challenge to existing SAR ATR methods. Therefore, SAR target recognition in the few-shot setting, where only a few labeled samples are available, is a fundamental problem that needs to be solved. In this paper, a new method named enhanced prototypical network with customized region-aware convolution (CRCEPN) is proposed specifically to tackle few-shot SAR ATR tasks. Specifically, a feature-extraction network based on a customized and region-aware convolution is first developed. This network can adaptively adjust convolutional kernels and their receptive fields according to each SAR image's own characteristics as well as the semantic similarity among spatial regions, thus augmenting its capability to extract more informative and discriminative features. To achieve accurate and robust target identity prediction under the few-shot condition, an enhanced prototypical network is proposed. This network improves the representation ability of the class prototype by properly making use of training and test samples together, thus effectively raising the classification accuracy. Meanwhile, a new hybrid loss is designed to learn a feature space with both inter-class separability and intra-class tightness as much as possible, which further improves the recognition performance of the proposed method.
Experiments performed on the moving and stationary target acquisition and recognition (MSTAR) dataset, the OpenSARShip dataset, and the SAMPLE+ dataset demonstrate that the proposed method is competitive with some state-of-the-art methods for few-shot SAR ATR tasks.
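To make the prototypical-network idea concrete, the following is a minimal sketch of standard prototypical classification: each class prototype is the mean embedding of its support (labeled) samples, and a query is assigned to the nearest prototype. This illustrates only the generic mechanism the paper builds on, not the CRCEPN-specific prototype enhancement or the region-aware convolution; all function names and array shapes here are illustrative assumptions.

```python
import numpy as np

def compute_prototypes(support_emb, support_labels, n_classes):
    """Class prototype = mean of that class's support embeddings.

    support_emb: (n_support, dim) array of feature embeddings.
    support_labels: (n_support,) integer class labels in [0, n_classes).
    Returns: (n_classes, dim) array of prototypes.
    """
    return np.stack(
        [support_emb[support_labels == c].mean(axis=0) for c in range(n_classes)]
    )

def classify_queries(query_emb, prototypes):
    """Assign each query to the class of its nearest prototype
    (squared Euclidean distance, as in standard prototypical networks)."""
    # (n_query, 1, dim) - (1, n_classes, dim) -> (n_query, n_classes) distances
    dists = ((query_emb[:, None, :] - prototypes[None, :, :]) ** 2).sum(axis=-1)
    return dists.argmin(axis=1)
```

In a few-shot episode, `support_emb` would come from the handful of labeled SAR images per class and `query_emb` from the test images, both produced by the feature-extraction network; the enhanced prototypical network in the paper additionally refines the prototypes using the unlabeled query samples.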

Details

Language :
English
ISSN :
2072-4292
Volume :
16
Issue :
19
Database :
Directory of Open Access Journals
Journal :
Remote Sensing
Publication Type :
Academic Journal
Accession number :
edsdoj.9fd39b914374433ad8d8aa682ae7e09
Document Type :
article
Full Text :
https://doi.org/10.3390/rs16193563