
Contextualized Perturbation for Textual Adversarial Attack

Authors :
Li, Dianqi
Zhang, Yizhe
Peng, Hao
Chen, Liqun
Brockett, Chris
Sun, Ming-Ting
Dolan, Bill
Publication Year :
2020

Abstract

Adversarial examples expose the vulnerabilities of natural language processing (NLP) models, and can be used to evaluate and improve their robustness. Existing techniques of generating such examples are typically driven by local heuristic rules that are agnostic to the context, often resulting in unnatural and ungrammatical outputs. This paper presents CLARE, a ContextuaLized AdversaRial Example generation model that produces fluent and grammatical outputs through a mask-then-infill procedure. CLARE builds on a pre-trained masked language model and modifies the inputs in a context-aware manner. We propose three contextualized perturbations, Replace, Insert and Merge, allowing for generating outputs of varied lengths. With a richer range of available strategies, CLARE is able to attack a victim model more efficiently with fewer edits. Extensive experiments and human evaluation demonstrate that CLARE outperforms the baselines in terms of attack success rate, textual similarity, fluency and grammaticality.

Comment: Accepted by NAACL 2021, long paper
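The mask-then-infill idea described above can be illustrated with a minimal sketch: mask a position in the input and let a pre-trained masked language model propose context-aware substitutions or insertions. This is not the authors' CLARE implementation; it assumes the Hugging Face `transformers` fill-mask pipeline with `roberta-base`, and it omits the victim model, the Merge action, and the candidate scoring that drives the actual attack.

```python
# Illustrative sketch only: context-aware Replace and Insert via mask-then-infill.
# Assumes the Hugging Face `transformers` library and the `roberta-base` checkpoint.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="roberta-base")

def replace_token(tokens, position, top_k=5):
    """'Replace' perturbation: mask one token and infill it from context."""
    masked = tokens.copy()
    masked[position] = fill_mask.tokenizer.mask_token
    candidates = fill_mask(" ".join(masked), top_k=top_k)
    # Each candidate is a context-aware substitution; an attack like CLARE would
    # keep the one that best degrades the victim model while staying fluent.
    return [c["token_str"].strip() for c in candidates]

def insert_token(tokens, position, top_k=5):
    """'Insert' perturbation: add a mask between tokens and infill it."""
    masked = tokens[:position] + [fill_mask.tokenizer.mask_token] + tokens[position:]
    candidates = fill_mask(" ".join(masked), top_k=top_k)
    return [c["token_str"].strip() for c in candidates]

if __name__ == "__main__":
    sentence = "the movie was surprisingly good".split()
    print(replace_token(sentence, 3))  # alternatives for "surprisingly"
    print(insert_token(sentence, 3))   # candidate words to insert before it
```

Because the infilled tokens come from a masked language model conditioned on the full sentence, the resulting edits tend to be grammatical in context, which is the property the abstract contrasts with context-agnostic heuristic substitution rules.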

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2009.07502
Document Type :
Working Paper