1. Maintaining Adversarial Robustness in Continuous Learning
- Author
- Ru, Xiaolei, Cao, Xiaowei, Liu, Zijia, Moore, Jack Murdoch, Zhang, Xin-Ya, Zhu, Xia, Wei, Wenjia, and Yan, Gang
- Abstract
Adversarial robustness is essential for the security and reliability of machine learning systems. However, the adversarial robustness gained through sophisticated defense algorithms is easily erased as the neural network evolves to learn new tasks. This vulnerability can be addressed by fostering a novel capability for neural networks, termed continual robust learning, which preserves both the (classification) performance and the adversarial robustness on previous tasks during continuous learning. To achieve continual robust learning, we propose an approach called Double Gradient Projection, which projects the gradients for weight updates orthogonally onto two crucial subspaces -- one for stabilizing the smoothed sample gradients and another for stabilizing the final outputs of the neural network. Experimental results on four benchmarks demonstrate that the proposed approach effectively maintains robustness against strong adversarial attacks during continuous learning, outperforming baselines formed by combining existing defense strategies with continual learning methods.
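The abstract gives only a high-level picture of the projection step. A minimal PyTorch sketch of the underlying idea -- projecting a weight-update gradient onto the orthogonal complement of protected subspaces -- is shown below; the function names, the QR-based merging of the two bases, and the per-layer usage are illustrative assumptions, not the authors' released implementation.

```python
import torch

def project_out(grad, basis):
    """Return the component of `grad` orthogonal to the subspace spanned
    by the columns of `basis` (columns assumed orthonormal)."""
    return grad - basis @ (basis.T @ grad)

def double_gradient_projection(grad, basis_sample_grad, basis_output):
    """Remove from `grad` any component lying in either protected subspace:
    one hypothetically built from smoothed sample gradients and one from the
    network's final outputs on previous tasks. Merging the two bases with a
    QR factorization keeps the single projection orthogonal to both at once."""
    merged, _ = torch.linalg.qr(
        torch.cat([basis_sample_grad, basis_output], dim=1)
    )
    return project_out(grad, merged)

# Hypothetical per-layer usage, applied before the optimizer step:
# g = double_gradient_projection(layer.weight.grad.flatten(), B_grad, B_out)
```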
- Published
- 2024