A Strong Baseline for Query Efficient Attacks in a Black Box Setting

09/10/2021
by Rishabh Maheshwary, et al.

Existing black box search methods have achieved a high success rate in generating adversarial attacks against NLP models. However, such search methods are inefficient because they do not account for the number of queries required to generate an adversarial example. Moreover, prior attacks do not maintain a consistent search space when comparing different search methods. In this paper, we propose a query efficient attack strategy to generate plausible adversarial examples on text classification and entailment tasks. Our attack jointly leverages an attention mechanism and locality sensitive hashing (LSH) to reduce the query count. We demonstrate the efficacy of our approach by comparing our attack with four baselines across three different search spaces. Further, we benchmark our results against the same search space used in prior attacks. In comparison to previously proposed attacks, we reduce the query count by 75% on average, and our attack achieves a higher success rate than prior attacks in a limited query setting.
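To illustrate how locality sensitive hashing can cut the number of victim-model queries, the sketch below groups candidate substitution words whose embeddings hash to the same random-hyperplane bucket, so near-duplicate candidates can share a single query. This is a minimal illustrative sketch, not the paper's implementation: the function names, the embedding source, and the one-query-per-bucket policy are assumptions made for the example.

```python
import numpy as np

def lsh_signature(vec, hyperplanes):
    """Hash an embedding vector to a binary signature using random hyperplanes."""
    return tuple(bool(s) for s in (hyperplanes @ vec) > 0)

def bucket_candidates(embeddings, n_planes=8, seed=0):
    """Group candidate substitution words by LSH bucket.

    embeddings: dict mapping word -> 1-D numpy embedding vector (assumed input).
    Returns a dict mapping signature -> list of words in that bucket.
    """
    rng = np.random.default_rng(seed)
    dim = len(next(iter(embeddings.values())))
    hyperplanes = rng.standard_normal((n_planes, dim))
    buckets = {}
    for word, vec in embeddings.items():
        sig = lsh_signature(vec, hyperplanes)
        buckets.setdefault(sig, []).append(word)
    return buckets

if __name__ == "__main__":
    # Toy example with random embeddings; a real attack would use pretrained
    # word vectors and query the victim model once per bucket representative
    # instead of once per candidate word.
    words = ["good", "great", "fine", "bad", "awful"]
    embs = {w: np.random.default_rng(hash(w) % 2**32).standard_normal(50) for w in words}
    print(bucket_candidates(embs))
```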
