
SCL: Self-supervised contrastive learning for few-shot image classification.

Authors :
Lim, Jit Yan
Lim, Kian Ming
Lee, Chin Poo
Tan, Yong Xuan
Source :
Neural Networks. Aug2023, Vol. 165, p19-30. 12p.
Publication Year :
2023

Abstract

Few-shot learning aims to train a model on a limited number of base class samples so that it can classify novel class samples. However, attaining generalization from so few samples is not a trivial task. This paper proposes a novel few-shot learning approach named Self-supervised Contrastive Learning (SCL) that enriches the model representation with multiple self-supervision objectives. Given the base class samples, the model is first trained with the base class loss. Subsequently, contrastive-based self-supervision is introduced to minimize the distance between each training sample and its augmented variants, improving sample discrimination. To recognize distant samples, rotation-based self-supervision is proposed, enabling the model to learn to predict the rotation degree of the samples for better sample diversity. A multitask setting is introduced in which each training sample is assigned two class labels: a base class label and a rotation class label. Complex augmentation is put forth to help the model learn a deeper understanding of the object: the image structure of the training samples is augmented independently of the base class information. The proposed SCL is trained to minimize the base class loss, contrastive distance loss, and rotation class loss simultaneously, so as to learn generic features and improve novel class performance. With these multiple self-supervision objectives, the proposed SCL outperforms state-of-the-art few-shot approaches on few-shot image classification benchmark datasets.
• A novel self-supervised contrastive learning for few-shot image classification.
• Contrastive learning is introduced to obtain better sample discrimination.
• Rotation prediction is proposed to enhance sample diversity.
• Heavy transformation is proposed for deeper object understanding.
[ABSTRACT FROM AUTHOR]
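The abstract describes a joint objective combining a base class loss, a contrastive distance loss between a sample and its augmented variant, and a rotation class loss. The following is a minimal numpy sketch of such a combined loss, not the paper's actual implementation: the function names, the squared-Euclidean form of the contrastive term, and the unit loss weights are illustrative assumptions.

```python
import numpy as np

def softmax_cross_entropy(logits, label):
    # Numerically stable cross-entropy for a single sample.
    z = logits - logits.max()
    log_probs = z - np.log(np.exp(z).sum())
    return -log_probs[label]

def scl_total_loss(base_logits, base_label,
                   rot_logits, rot_label,
                   emb, emb_aug):
    # Base class loss: cross-entropy on the base class label.
    l_base = softmax_cross_entropy(base_logits, base_label)
    # Rotation class loss: cross-entropy over the rotation classes
    # (e.g. 0, 90, 180, 270 degrees), the second label in the
    # multitask setting described in the abstract.
    l_rot = softmax_cross_entropy(rot_logits, rot_label)
    # Contrastive distance loss (illustrative form): pull the
    # embedding of a sample toward that of its augmented variant.
    l_con = np.sum((emb - emb_aug) ** 2)
    # The three objectives are minimized simultaneously; equal
    # weighting here is an assumption, not taken from the paper.
    return l_base + l_con + l_rot
```

In practice each term would be computed over a mini-batch, and a contrastive formulation such as InfoNCE (which also pushes apart non-matching pairs) would typically replace the plain distance term shown here.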

Details

Language :
English
ISSN :
08936080
Volume :
165
Database :
Academic Search Index
Journal :
Neural Networks
Publication Type :
Academic Journal
Accession number :
169815601
Full Text :
https://doi.org/10.1016/j.neunet.2023.05.037