1. Cognitive Bias-Inspired Deep Robust Neural Networks Against Transfer-Based Attacks Considering Confidence Score
- Author
Yuuki Ogasawara, Hiroshi Sato, and Masao Kubo
- Subjects
Adversarial examples, transfer-based attacks, cognitive bias, neural networks, robustness, confidence score, Information technology, T58.5-58.64, Electronic computers. Computer science, QA75.5-76.95
- Abstract
Transfer-based attacks, a type of adversarial attack, have become a growing threat in recent years with the proliferation of cloud services. Deep neural networks that exploit human cognitive bias (Loosely Symmetric-Deep Neural Network, LS-DNN) are a known defensive technique against transfer-based attacks. By incorporating human learning characteristics into the network's nodes, LS-DNN can prevent, with high probability, malfunctions caused by adversarial examples. However, it struggles to maintain accuracy on normal data and to keep training time low. This paper proposes a new model, "LS+-DNN," inspired by the Dropout method, to solve this problem. Evaluation experiments on two datasets show that the proposed model achieves both objectives to a high degree. In addition, we analyze the proposed model with a focus on the variance and confidence scores of its training parameters. As a result, we point out that the confidence score is an important indicator of model robustness against transfer-based attacks.
- Published
2024