Defense against Adversarial Attacks in NLP via Dirichlet Neighborhood Ensemble

06/20/2020
by   Yi Zhou, et al.

Although neural networks have achieved prominent performance on many natural language processing (NLP) tasks, they are vulnerable to adversarial examples. In this paper, we propose Dirichlet Neighborhood Ensemble (DNE), a randomized smoothing method for training a robust model to defend against substitution-based attacks. During training, DNE forms virtual sentences by sampling embedding vectors for each word in an input sentence from the convex hull spanned by the word and its synonyms, and it augments the training data with these virtual sentences. In this way, the model becomes robust to adversarial attacks while maintaining its performance on the original clean data. DNE is agnostic to the network architecture and scales to large models for NLP applications. We demonstrate through extensive experiments that our method consistently outperforms recently proposed defense methods by a significant margin across different network architectures and multiple data sets.
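The core sampling step described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes each word comes with a list of synonym embeddings, and draws convex mixture weights from a Dirichlet distribution so the virtual embedding always lies inside the convex hull of the word and its synonyms. The function name and the `alpha` concentration parameter are illustrative choices, not names from the paper.

```python
import numpy as np

def dirichlet_convex_sample(word_vec, synonym_vecs, alpha=1.0, seed=None):
    """Sample a virtual embedding from the convex hull spanned by a word
    and its synonyms, using Dirichlet-distributed mixture weights.

    word_vec:     (d,) embedding of the original word
    synonym_vecs: list of (d,) embeddings of its synonyms
    alpha:        Dirichlet concentration (smaller -> closer to a vertex)
    """
    rng = np.random.default_rng(seed)
    # Vertices of the convex hull: the word itself plus its synonyms.
    vertices = np.stack([word_vec] + list(synonym_vecs))      # (k+1, d)
    # Dirichlet weights are non-negative and sum to 1, so the
    # weighted sum is a convex combination of the vertices.
    weights = rng.dirichlet(np.full(len(vertices), alpha))    # (k+1,)
    return weights @ vertices                                 # (d,)

# Toy usage: a 2-D word embedding with two synonyms.
word = np.array([1.0, 0.0])
synonyms = [np.array([0.0, 1.0]), np.array([2.0, 2.0])]
virtual = dirichlet_convex_sample(word, synonyms, alpha=1.0, seed=0)
```

During training, one would replace each word's embedding with such a sample (per sentence, per epoch) and train on the resulting virtual sentences alongside the clean data, which is the augmentation the abstract describes.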


Related research

08/29/2021 — Searching for an Effective Defender: Benchmarking Defense against Adversarial Word Substitution
Recent studies have shown that deep neural networks are vulnerable to in...

07/19/2022 — Defending Substitution-Based Profile Pollution Attacks on Sequential Recommenders
While sequential recommender systems achieve significant improvements on...

07/28/2021 — Towards Robustness Against Natural Language Word Substitutions
Robustness against word substitutions has a well-defined and widely acce...

10/08/2020 — An Empirical Study on Model-agnostic Debiasing Strategies for Robust Natural Language Inference
The prior work on natural language inference (NLI) debiasing mainly targ...

02/23/2021 — Enhancing Model Robustness By Incorporating Adversarial Knowledge Into Semantic Representation
Despite that deep neural networks (DNNs) have achieved enormous success ...

05/08/2021 — Certified Robustness to Text Adversarial Attacks by Randomized [MASK]
Recently, few certified defense methods have been developed to provably ...

02/12/2023 — TextDefense: Adversarial Text Detection based on Word Importance Entropy
Currently, natural language processing (NLP) models are widely used in v...
