
Few‐shot learning with relation propagation and constraint.

Authors :
Gong, Huiyun
Wang, Shuo
Zhao, Xiaowei
Yan, Yifan
Ma, Yuqing
Liu, Wei
Liu, Xianglong
Source :
IET Computer Vision (Wiley-Blackwell). Dec2021, Vol. 15 Issue 8, p608-617. 10p.
Publication Year :
2021

Abstract

Previous deep learning methods usually required large‐scale annotated data, which is laborious and expensive to obtain and unrealistic in certain scenarios. Therefore, few‐shot learning, where only a few annotated training images are available, has attracted increasing attention, showing huge potential in practical applications such as portable equipment or security inspection. However, current few‐shot learning methods usually neglect the valuable semantic correlations between samples, thereby failing to extract discriminative relations needed for accurate predictions. In this work, extending a recent state‐of‐the‐art few‐shot learning method, the transductive relation‐propagation network (TRPN), which considers the correlations between training samples, a constrained relation‐propagation network is proposed to further regularise the distilled correlations and thus achieve favourable few‐shot classification performance. The proposed framework contains three main components, namely a preprocess module, a relation propagation module, and a relation constraint module. First, sample features are extracted and a relation graph is constructed in the preprocess module, treating the relation of each support–query pair as a graph node. After that, in the relation propagation module (RPM), the valuable information of support–query pairs is modelled and propagated to directly generate the relational representations for further prediction. Then, a relation constraint module is introduced to regularise the relational representations and make them as consistent with the ground‐truth relations as possible. With the guidance of the effective RPM and relation constraint module, the relational representations of the support–query pairs are distinguishable and thus yield accurate predictive results.
Comprehensive experiments conducted on widely used benchmarks validate the effectiveness of our method compared to state‐of‐the‐art few‐shot classification approaches.
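The three-stage pipeline described in the abstract can be sketched in plain Python. This is an illustrative toy, not the authors' implementation: the pair-node construction (feature concatenation), the uniform mixing step standing in for learned graph propagation, and the squared-error constraint loss are all assumptions chosen for clarity; TRPN itself uses a learned graph neural network.

```python
import math


def build_relation_nodes(support, query):
    """Preprocess module (sketch): one graph node per support-query pair,
    here formed by concatenating the two feature vectors."""
    return [s + q for s in support for q in query]


def propagate(nodes, steps=2):
    """Relation propagation module (sketch): each node is repeatedly
    averaged with the global mean of all nodes, a crude stand-in for
    the learned message passing in TRPN."""
    for _ in range(steps):
        dim = len(nodes[0])
        mean = [sum(n[d] for n in nodes) / len(nodes) for d in range(dim)]
        nodes = [[0.5 * v + 0.5 * m for v, m in zip(n, mean)] for n in nodes]
    return nodes


def constraint_loss(scores, gt_relations):
    """Relation constraint module (sketch): mean squared error pulling
    predicted relation scores toward ground-truth same-class indicators
    (1 if the support and query samples share a class, else 0)."""
    return sum((s - g) ** 2 for s, g in zip(scores, gt_relations)) / len(scores)


if __name__ == "__main__":
    # Hypothetical 2-way task: 2 support samples, 2 query samples, 2-d features.
    support = [[1.0, 0.0], [0.0, 1.0]]
    query = [[1.0, 1.0], [0.0, 0.0]]

    nodes = build_relation_nodes(support, query)      # 4 relation nodes
    nodes = propagate(nodes)
    # Score each pair with a sigmoid over the node mean (illustrative).
    scores = [1.0 / (1.0 + math.exp(-sum(n) / len(n))) for n in nodes]
    gt = [1.0, 0.0, 0.0, 1.0]                         # assumed pair labels
    print(constraint_loss(scores, gt))
```

During training, the constraint loss would be minimised jointly with the classification objective so that propagated relation representations stay consistent with the ground-truth relations.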

Details

Language :
English
ISSN :
17519632
Volume :
15
Issue :
8
Database :
Academic Search Index
Journal :
IET Computer Vision (Wiley-Blackwell)
Publication Type :
Academic Journal
Accession number :
153052935
Full Text :
https://doi.org/10.1049/cvi2.12074