
Explain to Not Forget: Defending Against Catastrophic Forgetting with XAI

Authors:
Ede, Sami
Baghdadlian, Serop
Weber, Leander
Nguyen, An
Zanca, Dario
Samek, Wojciech
Lapuschkin, Sebastian
Publication Year:
2022

Abstract

The ability to continuously process and retain new information, as we humans do naturally, is a feat that is highly sought after when training neural networks. Unfortunately, traditional optimization algorithms often require large amounts of data to be available during training, and updates with respect to new data are difficult once training has been completed. In fact, when new data or tasks arise, previous progress may be lost, as neural networks are prone to catastrophic forgetting: the phenomenon in which a network loses previously acquired knowledge when trained on new information. We propose a novel training algorithm called training by explaining, in which we leverage Layer-wise Relevance Propagation to retain the information a neural network has already learned on previous tasks when training on new data. The method is evaluated on a range of benchmark datasets as well as more complex data. Our method not only successfully retains the knowledge of old tasks within the neural network, but does so more resource-efficiently than other state-of-the-art solutions.

Comment: 14 pages including appendix, 5 figures, 2 tables, 1 algorithm listing. v2 increases figure readability, updates the Fig. 5 caption, and adds our collaborators Dario and An as co-authors. v3 brings the preprint in line with the final version accepted for peer-reviewed publication at CD-MAKE 2022. v4 is a metadata update.
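
The abstract only sketches the approach at a high level. As a rough illustration of the general idea, the following PyTorch code is a minimal, hypothetical sketch (not the paper's actual algorithm): LRP-style relevance is propagated from the output to the hidden layer for old-task data, and gradient updates for highly relevant hidden units are attenuated when fine-tuning on a new task. All names (TwoLayerMLP, hidden_relevance, train_new_task) and the gradient-masking scheme are assumptions made for illustration.

# Illustrative sketch only: this is not the "training by explaining" algorithm
# from the paper, merely one plausible way LRP-style relevance could gate
# parameter updates so units important for old tasks change less.
import torch
import torch.nn as nn

class TwoLayerMLP(nn.Module):
    def __init__(self, d_in=20, d_hidden=64, d_out=5):
        super().__init__()
        self.fc1 = nn.Linear(d_in, d_hidden)
        self.fc2 = nn.Linear(d_hidden, d_out)

    def forward(self, x):
        self.h = torch.relu(self.fc1(x))  # cache hidden activations for LRP
        return self.fc2(self.h)

def hidden_relevance(model, x_old, eps=1e-6):
    """Epsilon-rule LRP from the output back to the hidden layer.

    Returns one non-negative relevance score per hidden unit,
    averaged over the old-task batch x_old (hypothetical helper)."""
    with torch.no_grad():
        out = model(x_old)                                    # caches model.h
        z = model.h @ model.fc2.weight.t() + model.fc2.bias   # pre-activations
        sign = torch.where(z >= 0, torch.ones_like(z), -torch.ones_like(z))
        s = out / (z + eps * sign)                            # stabilized ratio
        r_hidden = model.h * (s @ model.fc2.weight)           # redistribute relevance
    return r_hidden.abs().mean(dim=0)

def train_new_task(model, loader_new, x_old, lr=1e-3, epochs=1):
    """Fine-tune on a new task while attenuating first-layer gradients of
    hidden units that carry high relevance for the old-task data x_old."""
    rel = hidden_relevance(model, x_old)
    keep = 1.0 - rel / (rel.max() + 1e-12)  # 0 = fully protected, 1 = free to change
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for xb, yb in loader_new:
            opt.zero_grad()
            loss = loss_fn(model(xb), yb)
            loss.backward()
            # scale each hidden unit's incoming gradients by how "free" it is
            model.fc1.weight.grad.mul_(keep.unsqueeze(1))
            model.fc1.bias.grad.mul_(keep)
            opt.step()

An alternative, equally speculative design would use the relevance scores as weights in a quadratic penalty on parameter drift, in the spirit of regularization-based continual-learning methods; the paper itself should be consulted for the actual mechanism.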

Details

Database:
arXiv
Publication Type:
Report
Accession Number:
edsarx.2205.01929
Document Type:
Working Paper