
Impacts of an Adversary Attacking Filter-Based Feature Selection Algorithms

Authors :
Gupta, Srishti
Publication Year :
2021

Abstract

Applying complex mathematical calculations to big data, extracting insightful information, adapting to new data independently, and providing scalable solutions have attracted various industries, including healthcare, finance, computer vision, cybersecurity, and automation. The ubiquitous use of Machine Learning (ML) has become almost ordinary. ML has not only lured businesses but has also interested the archenemies of society. Due to the multi-faceted applications of ML, practicing ML with malicious intent can cause severe deleterious effects on individuals, society, organizations, and the environment. Despite the recent spread of awareness about the ethical use of ML, we remain far from seeing it applied for noble purposes alone.

Adversarial Machine Learning (AML) is a branch of ML that aims to make ML models robust and secure against adversaries. Since most of the feature selection and ML algorithms in a data science pipeline were developed in an adversary-unaware environment, studies have shown that these algorithms are vulnerable to attacks and can be easily compromised in the presence of an intelligent adversary. In the last decade, a tremendous amount of work has been done to develop carefully crafted attacks that can subvert the predictions of state-of-the-art ML models, along with suitable countermeasures. However, the majority of these works are limited to the robustness of classifiers and their secure predictions. Unfortunately, with an intent to wreck an ML model, the adversary can seed an attack anywhere in the data science pipeline.

Adversarial Feature Selection (AFS) is a novel sub-field of AML that intends to make feature selection algorithms guarded against adversaries. The study of AFS is ever more important due to the nature of the damage an attack can do at the feature selection stage. For example, if an intelligently crafted adversarial input perturbation has been planted in the raw data right before feature selection, the feature selector ends up selecting wrong features.
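The poisoning scenario described in the abstract can be illustrated with a minimal sketch. The example below is not drawn from the thesis itself: it assumes a toy filter-based selector that ranks features by absolute Pearson correlation with the label, and a hypothetical adversary who appends a handful of crafted rows before the selection step. On clean data the informative feature ranks first; after poisoning, the noise feature does.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data set: feature 0 is informative, feature 1 is pure noise.
n = 200
y = rng.integers(0, 2, n)
X = np.column_stack([
    y + 0.3 * rng.normal(size=n),   # correlated with the label
    rng.normal(size=n),             # uncorrelated noise
])

def corr_scores(X, y):
    """Filter-based ranking: absolute Pearson correlation with the label."""
    return np.array([abs(np.corrcoef(X[:, j], y)[0, 1])
                     for j in range(X.shape[1])])

clean = corr_scores(X, y)

# Poisoning: the adversary appends k crafted rows (all labeled 1) that
# push the informative feature against the label while aligning the
# noise feature with it, inverting the filter's ranking.
k = 40
y_poisoned = np.concatenate([y, np.ones(k, dtype=int)])
X_poisoned = np.vstack([X, np.column_stack([
    -3.0 * np.ones(k),   # informative feature dragged opposite to y
     5.0 * np.ones(k),   # noise feature inflated to track y
])])

poisoned = corr_scores(X_poisoned, y_poisoned)

print("clean ranking prefers feature", clean.argmax())
print("poisoned ranking prefers feature", poisoned.argmax())
```

With this seed and poisoning budget, the top-ranked feature flips from the informative one to the noise one, so any downstream model trained on the selected feature inherits the damage before classification even begins.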

Details

Database :
OAIster
Notes :
English
Publication Type :
Electronic Resource
Accession number :
edsoai.on1359297172
Document Type :
Electronic Resource