Handling the adversarial attacks
- Author
Maoling Yan, Yongbin Zhao, Ning Cao, Jing Li, Yingying Wang, Sun Qian, Guofu Li, and Pengjia Zhu
- Subjects
Distributed computing, General Computer Science, Artificial neural network, Network security, Computer science, Computational intelligence, Machine learning, Random forest, Adversarial system, Robustness (computer science), Artificial intelligence, Cluster analysis
- Abstract
The i.i.d. assumption is the cornerstone of most conventional machine learning algorithms. However, reducing the bias and variance of a learning model on an i.i.d. dataset may not prevent its failure on adversarial samples, which are intentionally generated by malicious users or rival programs. This paper gives a brief introduction to machine learning and adversarial learning, and discusses the research frontier of adversarial issues noted in both the machine learning and network security fields. We argue that one key reason for the adversarial issue is that learning algorithms may not exploit the input feature set sufficiently, so attackers can focus on a small set of features to trick the model. To address this issue, we consider two important classes of classifiers. For random forests, we propose a variant called Weighted Random Forest (WRF) that encourages the model to give even credit to the input features. This approach can be further improved by careful selection of a subset of trees based on clustering analysis at run time. For neural networks, we propose adding soft constraints based on weight variance to the objective function, so that the model bases its classification decisions on more evenly distributed feature impact. Empirical experiments show that these approaches effectively improve the robustness of the learned models over their baseline systems.
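The abstract's neural-network idea (a soft constraint on weight variance added to the training objective so that feature impact is spread more evenly) can be illustrated with a minimal sketch. This is not the authors' implementation: the architecture, the use of first-layer column norms as a proxy for per-feature impact, and the penalty coefficient `lambda_var` are all illustrative assumptions.

```python
# Minimal sketch of a weight-variance soft constraint on a classifier's
# objective (illustrative only; not the paper's code).
import torch
import torch.nn as nn


class EvenFeatureNet(nn.Module):
    def __init__(self, n_features: int, n_classes: int, hidden: int = 64):
        super().__init__()
        self.input_layer = nn.Linear(n_features, hidden)
        self.head = nn.Sequential(nn.ReLU(), nn.Linear(hidden, n_classes))

    def forward(self, x):
        return self.head(self.input_layer(x))

    def feature_weight_variance(self):
        # Approximate each input feature's "impact" by the L2 norm of the
        # first-layer weight column attached to it; the penalty is the
        # variance of these norms, which is zero when all features are
        # weighted evenly.
        per_feature = self.input_layer.weight.norm(dim=0)  # shape: (n_features,)
        return per_feature.var()


def training_step(model, x, y, optimizer, lambda_var=0.1):
    # Standard cross-entropy loss plus the soft constraint term.
    optimizer.zero_grad()
    logits = model(x)
    loss = nn.functional.cross_entropy(logits, y) \
        + lambda_var * model.feature_weight_variance()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In a training loop, `training_step` would be called once per batch; a larger `lambda_var` would be expected to push the first-layer weights toward more uniform per-feature magnitudes, plausibly trading some clean accuracy for robustness against attacks that concentrate on a few features.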
- Published
2018