
THAT: Two Head Adversarial Training for Improving Robustness at Scale

Authors :
Wu, Zuxuan
Goldstein, Tom
Davis, Larry S.
Lim, Ser-Nam
Publication Year :
2021

Abstract

Many variants of adversarial training have been proposed, with most research focusing on problems with relatively few classes. In this paper, we propose Two Head Adversarial Training (THAT), a two-stream adversarial learning network designed to handle the large-scale, many-class ImageNet dataset. The proposed method trains a network with two heads and two loss functions: one to minimize feature-space domain shift between natural and adversarial images, and one to promote high classification accuracy. This combination delivers a hardened network that achieves state-of-the-art robust accuracy while maintaining high natural accuracy on ImageNet. Through extensive experiments, we demonstrate that the proposed framework outperforms alternative methods under both standard and "free" adversarial training settings.
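The abstract describes a two-head objective: a classification loss plus a term penalizing feature-space shift between natural and adversarial inputs. The following is a minimal NumPy sketch of such a combined objective, not the authors' implementation: the choice of cross-entropy for the classification head, squared L2 distance for the alignment head, and the weighting factor `lam` are all assumptions for illustration.

```python
import numpy as np

def softmax(z):
    # numerically stable softmax over the last axis
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def classification_loss(logits, labels):
    # classification head: standard cross-entropy on (adversarial) logits
    p = softmax(logits)
    return -np.log(p[np.arange(len(labels)), labels] + 1e-12).mean()

def alignment_loss(feat_nat, feat_adv):
    # feature head: penalize domain shift between natural and adversarial
    # embeddings (squared L2 distance is an assumption, not the paper's choice)
    return ((feat_nat - feat_adv) ** 2).sum(axis=-1).mean()

def two_head_loss(logits_adv, labels, feat_nat, feat_adv, lam=1.0):
    # combined objective: accuracy term + feature-alignment term,
    # weighted by a hypothetical trade-off coefficient lam
    return classification_loss(logits_adv, labels) + lam * alignment_loss(feat_nat, feat_adv)
```

In a full training loop, `feat_nat` and `feat_adv` would come from the shared backbone applied to clean and adversarially perturbed images, with gradients of `two_head_loss` driving both heads.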

Details

Database :
OAIster
Publication Type :
Electronic Resource
Accession number :
edsoai.on1269538266
Document Type :
Electronic Resource