1. MGML: Momentum group meta-learning for few-shot image classification.
- Author
- Zhu, Xiaomeng and Li, Shuxiao
- Subjects
- *BINARY codes; *PROBLEM solving; *CLASSIFICATION; *RANDOM matrices; *TRANSFER matrix
- Abstract
• GML (Group Meta-Learning) is proposed to alleviate the problem that low-quality samples have an unfavourable impact on training under few-shot conditions, thereby improving the performance of the model.
• A momentum update strategy is introduced to few-shot learning for the first time to improve the generalization and stability of the model, and an adaptive momentum coefficient is designed, forming AMS (Adaptive Momentum Smoothing), to further improve training efficiency.
• We propose MGML (Momentum Group Meta-Learning) by combining GML and AMS. MGML not only improves the accuracy of the Meta-Learning Baseline but also transfers well: it can be inserted into previous state-of-the-art methods and yields consistent performance improvements.
Image classification now covers more and more fields, and in some specific scenarios, such as medicine or the personalized customization of robots, it is often difficult to obtain enough data for learning. Few-shot image classification aims to quickly learn the features of new classes from few images, and meta-learning methods have become mainstream due to their good performance. However, the generalization ability of meta-learning methods is still poor, and they are easily disturbed by low-quality images. To solve these problems, this paper proposes Momentum Group Meta-Learning (MGML), which contains a Group Meta-Learning module (GML) and an Adaptive Momentum Smoothing module (AMS). GML obtains an ensemble model by training multiple episodes in parallel and then grouping them, which reduces the interference of low-quality samples and improves the stability of meta-learning training. AMS applies an adaptive momentum update rule to further integrate the models of different groups, so that the model can memorize experience from more scenarios and gain generalization ability.
We conduct experiments on miniImageNet and tieredImageNet datasets. The results show that MGML improves the accuracy, stability and cross-domain transfer ability of few-shot image classification, and can be applied to different few-shot learning models. [ABSTRACT FROM AUTHOR]
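The abstract describes AMS as an adaptive momentum update that blends the models of different groups. A minimal sketch of that idea, assuming it resembles an exponential moving average over parameter vectors whose coefficient adapts to how far the newest group model has drifted (the function name `ams_smooth`, the drift-based adaptation rule, and `base_coeff` are all illustrative assumptions, not the paper's actual formulation):

```python
import numpy as np

def ams_smooth(theta_momentum, theta_group, base_coeff=0.99):
    """Blend a running momentum model with a freshly trained group model.

    Illustrative assumption: the momentum coefficient shrinks when the
    group model diverges strongly from the running model, letting the
    ensemble absorb new experience faster while staying stable.
    """
    drift = np.linalg.norm(theta_group - theta_momentum)
    alpha = base_coeff / (1.0 + drift)  # adaptive coefficient in (0, base_coeff]
    return alpha * theta_momentum + (1.0 - alpha) * theta_group

# Hypothetical usage: fold each group's trained parameters into the
# running momentum model, one group at a time.
theta = np.zeros(4)                    # running momentum model
for group_params in [np.ones(4), np.full(4, 0.5)]:
    theta = ams_smooth(theta, group_params)
```

With a fixed coefficient this reduces to the standard EMA of model weights; the adaptive coefficient is what the abstract credits with improving training efficiency.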
- Published
- 2022