AdvCodec: Towards A Unified Framework for Adversarial Text Generation

12/22/2019
by Boxin Wang, et al.

While there has been great interest in generating imperceptible adversarial examples in continuous data domains (e.g., images and audio) to explore model vulnerabilities, generating adversarial text in the discrete domain remains challenging. The main contribution of this paper is a general targeted attack framework, AdvCodec, for adversarial text generation, which addresses the challenge of the discrete input space and is easily adapted to general natural language processing (NLP) tasks. In particular, we propose a tree-based autoencoder to encode discrete text data into a continuous vector space, in which we optimize the adversarial perturbation. A tree-based decoder is then applied to ensure the grammatical correctness of the generated text. The framework also offers the flexibility to manipulate text at different levels, such as the sentence level (AdvCodec(sent)) and the word level (AdvCodec(word)). We consider multiple attack scenarios, including appending an adversarial sentence or adding unnoticeable words to a given paragraph, to achieve arbitrary targeted attacks. To demonstrate the effectiveness of the proposed method, we consider two of the most representative NLP tasks: sentiment analysis and question answering (QA). Extensive experimental results and human studies show that adversarial text generated by AdvCodec can successfully attack neural models without misleading humans. In particular, our attack causes the accuracy of a BERT-based sentiment classifier to drop from 0.703 to 0.006, and the F1 score of a BERT-based QA model to drop from 88.62 to 33.21 (with the best targeted-attack F1 score at 46.54). Furthermore, we show that adversarial texts generated in the white-box setting transfer to other black-box models, shedding light on an effective way to examine the robustness of existing NLP models.
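The core idea, encoding discrete text into a continuous latent space, optimizing the perturbation there, and decoding back to tokens, can be sketched compactly. The following is a minimal PyTorch sketch, not the paper's implementation: a plain GRU seq2seq autoencoder stands in for the tree-based autoencoder, and SeqAutoencoder, ToyVictim, latent_attack, and all hyperparameters are hypothetical names chosen for illustration.

```python
# Hedged sketch of the latent-space attack loop described in the abstract.
# Assumptions: a GRU autoencoder replaces the tree-based one; the victim
# classifier accepts soft token distributions so the loss stays differentiable.
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB, EMB, HID = 1000, 64, 128

class SeqAutoencoder(nn.Module):
    """Stand-in for the paper's tree-based autoencoder (hypothetical)."""
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, EMB)
        self.enc = nn.GRU(EMB, HID, batch_first=True)
        self.dec = nn.GRU(EMB, HID, batch_first=True)
        self.out = nn.Linear(HID, VOCAB)

    def encode(self, tokens):                    # tokens: (B, T) int ids
        _, h = self.enc(self.emb(tokens))
        return h                                 # latent code: (1, B, HID)

    def decode_logits(self, tokens, h):          # teacher-forced decoding
        y, _ = self.dec(self.emb(tokens), h)
        return self.out(y)                       # (B, T, VOCAB)

class ToyVictim(nn.Module):
    """Hypothetical victim classifier over soft token distributions."""
    def __init__(self, n_classes=2):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, EMB)
        self.fc = nn.Linear(EMB, n_classes)

    def forward(self, soft_tokens):              # soft_tokens: (B, T, VOCAB)
        x = soft_tokens @ self.emb.weight        # soft embedding lookup
        return self.fc(x.mean(dim=1))            # (B, n_classes)

def latent_attack(ae, victim, tokens, target_label, steps=50, lr=0.1):
    """Optimize a perturbation delta on the latent code so the decoded
    text drives `victim` toward `target_label` (targeted attack)."""
    z = ae.encode(tokens).detach()
    delta = torch.zeros_like(z, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        logits = ae.decode_logits(tokens, z + delta)
        soft_tokens = F.softmax(logits, dim=-1)  # differentiable "text"
        loss = F.cross_entropy(victim(soft_tokens), target_label)
        opt.zero_grad(); loss.backward(); opt.step()
    # Discretize at the end: argmax over the perturbed decoder's vocabulary.
    return ae.decode_logits(tokens, z + delta).argmax(dim=-1)

# Usage on toy data: push one 12-token input toward class 1.
ae, victim = SeqAutoencoder(), ToyVictim()
tokens = torch.randint(0, VOCAB, (1, 12))
adv_tokens = latent_attack(ae, victim, tokens, torch.tensor([1]))
```

The key design choice the sketch mirrors is that all optimization happens in the continuous latent space, sidestepping the discreteness of text; grammaticality in the paper comes from the tree-based decoder, which this toy GRU does not reproduce.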

Related research

- CAT-Gen: Improving Robustness in NLP Models via Controlled Adversarial Text Generation (10/05/2020)
  NLP models are shown to suffer from robustness issues, i.e., a model's p...
- An Attention Score Based Attacker for Black-box NLP Classifier (12/22/2021)
  Deep neural networks have a wide range of applications in solving variou...
- STRATA: Building Robustness with a Simple Method for Generating Black-box Adversarial Attacks for Models of Code (09/28/2020)
  Adversarial examples are imperceptible perturbations in the input to a n...
- Natural Adversarial Sentence Generation with Gradient-based Perturbation (09/06/2019)
  This work proposes a novel algorithm to generate natural language advers...
- Text Alignment Is An Efficient Unified Model for Massive NLP Tasks (07/06/2023)
  Large language models (LLMs), typically designed as a function of next-w...
- Generating Natural Language Adversarial Examples on a Large Scale with Generative Models (03/10/2020)
  Today text classification models have been widely used. However, these c...
- Things You May Not Know About Adversarial Example: A Black-box Adversarial Image Attack (05/19/2019)
  Numerous methods for crafting adversarial examples were proposed recentl...
