1. Maintaining Adversarial Robustness in Continuous Learning
- Author
Ru, Xiaolei, Cao, Xiaowei, Liu, Zijia, Moore, Jack Murdoch, Zhang, Xin-Ya, Zhu, Xia, Wei, Wenjia, and Yan, Gang
- Subjects
Computer Science - Machine Learning, Computer Science - Artificial Intelligence
- Abstract
Adversarial robustness is essential for the security and reliability of machine learning systems. However, adversarial robustness enhanced by defense algorithms is easily erased as the neural network's weights are updated to learn new tasks. To address this vulnerability, it is essential to improve the capability of neural networks for robust continual learning. Specifically, we propose a novel gradient projection technique that effectively stabilizes sample gradients from previous data by orthogonally projecting back-propagation gradients onto a crucial subspace before using them for weight updates. This technique can maintain robustness by collaborating with a class of defense algorithms through sample gradient smoothing. Experimental results on four benchmarks, including Split-CIFAR100 and Split-miniImageNet, demonstrate the superiority of the proposed approach in mitigating the rapid degradation of robustness during continual learning, even when facing strong adversarial attacks.
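To make the gradient projection idea concrete, here is a minimal sketch of the kind of operation the abstract describes: removing from a back-propagated gradient the components that lie in a stored subspace, so weight updates do not disturb directions important for earlier tasks. This follows the common GPM-style formulation; the function name `project_gradient`, the dimensions, and the orthogonal-complement form are illustrative assumptions, not the paper's exact algorithm.

```python
import torch

def project_gradient(grad: torch.Tensor, basis: torch.Tensor) -> torch.Tensor:
    """Project a flattened gradient onto the orthogonal complement of the
    subspace spanned by the columns of `basis` (assumed orthonormal).

    The returned gradient has no component along any protected direction,
    so applying it as a weight update leaves the protected subspace intact.
    """
    # Component of grad lying inside the protected subspace: U (U^T g)
    inside = basis @ (basis.T @ grad)
    return grad - inside

# Toy usage: a 10-parameter layer with a 3-dimensional protected subspace.
basis, _ = torch.linalg.qr(torch.randn(10, 3))  # orthonormal basis columns
grad = torch.randn(10)
projected = project_gradient(grad, basis)

# The projected gradient is (numerically) orthogonal to every basis vector.
print(basis.T @ projected)  # ~zero vector
```

In practice such a basis is typically estimated per layer from representations or gradients of previous tasks (e.g., via SVD) and the projection is applied to each layer's gradient before the optimizer step.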
- Published
2024