
Continuous transfer of neural network representational similarity for incremental learning.

Authors :
Tian, Songsong
Li, Weijun
Ning, Xin
Ran, Hang
Qin, Hong
Tiwari, Prayag
Source :
Neurocomputing. Aug 2023, Vol. 545.
Publication Year :
2023

Abstract

• Method: Pre-trained Model Knowledge Distillation (PMKD) for incremental learning.
• Feature representation knowledge is transferred via PMKD.
• PMKD combined with replay yields competitive performance in incremental learning.

The incremental learning paradigm in machine learning has consistently been a focus of academic research. It resembles the way biological systems learn and reduces energy consumption by avoiding excessive retraining. Existing studies exploit the powerful feature extraction capabilities of pre-trained models to address incremental learning, but the feature knowledge in the neural network remains underutilized. To address this issue, this paper proposes a novel method called Pre-trained Model Knowledge Distillation (PMKD), which combines knowledge distillation of neural network representations with replay. The paper designs a loss function based on centered kernel alignment to transfer neural network representation knowledge from the pre-trained model to the incremental model layer by layer. Additionally, a memory buffer used for Dark Experience Replay helps the model better retain past knowledge. Experiments show that PMKD achieves superior performance across various datasets and buffer sizes; compared to other methods, it attains the best class-incremental learning accuracy. The open-source code is published at https://github.com/TianSongS/PMKD-IL. [ABSTRACT FROM AUTHOR]
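To make the distillation idea concrete, the sketch below shows a linear centered kernel alignment (CKA) similarity and a layer-wise loss of the form 1 - CKA between a frozen pre-trained (teacher) model and the incremental (student) model. This is a minimal illustration under assumed conventions (PyTorch, activations flattened to batch-by-feature matrices, one teacher feature paired with each student feature), not the authors' released implementation; see the linked repository for the actual PMKD code.

```python
# Hedged sketch: linear CKA-based representation distillation loss.
# Assumes paired lists of activations (batch, ...) from matching layers
# of a frozen pre-trained model and the incremental model.
import torch


def linear_cka(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    """Linear centered kernel alignment between two (batch, features) matrices.

    Returns a scalar in [0, 1]; values near 1 indicate that the two
    representations have very similar geometry over the batch.
    """
    # Center each feature matrix over the batch dimension.
    x = x - x.mean(dim=0, keepdim=True)
    y = y - y.mean(dim=0, keepdim=True)
    # Linear-kernel HSIC terms written with Frobenius norms.
    cross = (y.t() @ x).norm(p="fro") ** 2
    norm_x = (x.t() @ x).norm(p="fro")
    norm_y = (y.t() @ y).norm(p="fro")
    return cross / (norm_x * norm_y + 1e-8)


def cka_distillation_loss(student_feats, teacher_feats):
    """Sum of (1 - CKA) over paired layers; minimizing it pulls the
    incremental model's layer representations toward the pre-trained model's."""
    loss = torch.zeros(())
    for s, t in zip(student_feats, teacher_feats):
        s = s.flatten(start_dim=1)            # (batch, features)
        t = t.detach().flatten(start_dim=1)   # teacher is frozen
        loss = loss + (1.0 - linear_cka(s, t))
    return loss
```

In practice such a term would be added to the usual classification objective (and, per the abstract, to a Dark Experience Replay loss computed on samples drawn from the memory buffer); the exact weighting and layer pairing are design choices documented in the paper and repository.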

Details

Language :
English
ISSN :
0925-2312
Volume :
545
Database :
Academic Search Index
Journal :
Neurocomputing
Publication Type :
Academic Journal
Accession number :
164156082
Full Text :
https://doi.org/10.1016/j.neucom.2023.126300