Squeezing Backbone Feature Distributions to the Max for Efficient Few-Shot Learning

Authors :
Yuqing Hu
Stéphane Pateux
Vincent Gripon
Source :
Algorithms, Vol 15, Iss 5, p 147 (2022)
Publication Year :
2022
Publisher :
MDPI AG, 2022.

Abstract

In many real-life problems, it is difficult to acquire or label large amounts of data, resulting in so-called few-shot learning problems. Few-shot classification remains challenging because of the uncertainty introduced by relying on only a few labeled samples. In the past few years, many methods have been proposed with the common aim of transferring knowledge acquired on a previously solved task, often by using a pretrained feature extractor. As such, if the initial task contains many labeled samples, it is possible to circumvent the limitations of few-shot learning. A shortcoming of existing methods is that they often require priors about the data distribution, such as the balance between considered classes. In this paper, we propose a novel transfer-based method with a double aim: providing state-of-the-art performance, as reported on standardized datasets in the field of few-shot learning, while not requiring such restrictive priors. Our methodology is able to cope with both inductive cases, where prediction is performed on test samples independently from each other, and transductive cases, where a joint (batch) prediction is performed.
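To make the inductive setting described above concrete, the following is a minimal illustrative sketch (not the authors' method): features extracted by a pretrained backbone are classified with a nearest-class-mean rule, where each query sample is assigned independently to the closest class centroid computed from the few labeled support samples. The 2-D feature vectors and the `nearest_class_mean` helper are assumptions for illustration only.

```python
import numpy as np

def nearest_class_mean(support_features, support_labels, query_features):
    """Inductive few-shot classification: each query is assigned,
    independently of the others, to its nearest class centroid
    in the backbone's feature space."""
    classes = np.unique(support_labels)
    # One centroid per class, averaged over its support samples
    centroids = np.stack([support_features[support_labels == c].mean(axis=0)
                          for c in classes])
    # Euclidean distance from every query to every centroid
    dists = np.linalg.norm(query_features[:, None, :] - centroids[None, :, :],
                           axis=2)
    return classes[np.argmin(dists, axis=1)]

# Toy 2-way, 2-shot episode with hypothetical 2-D backbone features
support = np.array([[0.0, 0.0], [0.2, 0.1],    # class 0
                    [5.0, 5.0], [5.1, 4.9]])   # class 1
labels = np.array([0, 0, 1, 1])
queries = np.array([[0.1, 0.1], [4.8, 5.2]])
print(nearest_class_mean(support, labels, queries))  # → [0 1]
```

A transductive variant would instead look at the whole batch of queries jointly, e.g. by iteratively refining the centroids with the unlabeled queries, which is where class-balance priors often enter in existing methods.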

Details

Language :
English
ISSN :
1999-4893
Volume :
15
Issue :
5
Database :
Directory of Open Access Journals
Journal :
Algorithms
Publication Type :
Academic Journal
Accession number :
edsdoj.192c67596d544736b0d7edb1908a24a1
Document Type :
article
Full Text :
https://doi.org/10.3390/a15050147