Adversarial Learning for Coordinate Regression through k-layer Penetrating Representation

Authors :
Jiang, M
Sui, Y
Lei, Y
Xie, X
Li, C
Liu, Y
Tsang, IW
Publication Year :
2024

Abstract

Adversarial attack is a crucial step when evaluating the reliability and robustness of deep neural network (DNN) models. Most existing attack approaches apply an end-to-end gradient update strategy to generate adversarial examples for a classification or regression problem. However, few of them consider non-differentiable DNN models (e.g., coordinate regression models), whose non-differentiable operations prevent end-to-end backpropagation and therefore make gradient calculation fail. In this paper, we present a new adversarial example generation approach for both untargeted and targeted attacks on coordinate regression models with non-differentiable operations. The novelty of our approach lies in a k-layer penetrating representation: we perturb the hidden feature distribution of the k-th layer through relational guidance to influence the final output, so that end-to-end backpropagation is not required. Rather than modifying a large portion of the pixels in an image, the proposed approach modifies only a very small set of input pixels. These pixels are carefully and precisely selected by three correlations between the input pixels and the hidden features of the k-th layer of a DNN, thus significantly reducing the adversarial perturbation on a clean image. We successfully apply the proposed approach to two different tasks (i.e., 2D and 3D human pose estimation), which are typical applications of coordinate regression learning. Comprehensive experiments demonstrate that our approach achieves better performance while using much less adversarial perturbation on clean images.
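
The following is a minimal, hypothetical sketch (not the authors' released code) of the idea described in the abstract: gradients are taken only up to the features of a chosen k-th layer, captured with a forward hook, so the non-differentiable coordinate-decoding head is never backpropagated through, and only a small budget of input pixels is perturbed. The model, layer handle, feature target, and pixel-selection rule (gradient magnitude standing in for the paper's three correlations) are all assumptions for illustration.

```python
import torch

def k_layer_attack(model, k_layer, x, feat_target,
                   pixel_budget=100, step_size=0.01, steps=20):
    """Illustrative k-layer feature attack: perturb a small set of input pixels
    so that the k-th layer features move toward feat_target, without
    backpropagating through the (possibly non-differentiable) output head."""
    feats = {}
    # Forward hook captures the k-th layer activations on every forward pass.
    handle = k_layer.register_forward_hook(
        lambda module, inp, out: feats.update(h=out))

    x_adv = x.clone().detach().requires_grad_(True)
    for _ in range(steps):
        model(x_adv)                                   # populates feats['h']
        # Targeted attack: pull hidden features toward the target distribution.
        loss = torch.nn.functional.mse_loss(feats['h'], feat_target)
        grad, = torch.autograd.grad(loss, x_adv)       # stops at the input, not the head

        # Keep only the pixels with the strongest input-to-feature influence
        # (a simple stand-in for the paper's three correlation criteria).
        saliency = grad.abs().sum(dim=1).flatten()     # assume x is (1, C, H, W)
        idx = saliency.topk(pixel_budget).indices
        mask = torch.zeros_like(saliency).scatter_(0, idx, 1.0)
        mask = mask.view(1, 1, *x.shape[-2:])

        with torch.no_grad():
            x_adv -= step_size * grad.sign() * mask    # update only selected pixels
        x_adv.requires_grad_(True)

    handle.remove()
    return x_adv.detach()
```

An untargeted variant would instead maximize the distance of the captured features from their clean values; the pixel budget controls how sparse the resulting perturbation is.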

Details

Database :
OAIster
Publication Type :
Electronic Resource
Accession number :
edsoai.on1455948116
Document Type :
Electronic Resource