Adv-BERT: BERT is not robust to misspellings! Generating natural adversarial samples on BERT

02/27/2020
by Lichao Sun, et al.

A growing body of literature documents the brittleness of deep neural networks when facing maliciously crafted adversarial examples. It is unclear, however, how these models perform in realistic scenarios where natural, rather than malicious, adversarial instances often exist. This work systematically explores the robustness of BERT, the state-of-the-art Transformer-based model in NLP, when dealing with noisy data, particularly inadvertent keyboard typos. Extensive experiments on sentiment analysis and question answering benchmarks indicate that: (i) typos in different words of a sentence do not have equal influence; typos in informative words cause more severe damage; (ii) mistyping (substituting a character) is the most damaging operation, compared with inserting or deleting characters; (iii) humans and machines focus on different cues when recognizing adversarial attacks.
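The abstract describes perturbations built from inadvertent keyboard typos: inserting a character, deleting a character, and mistyping (hitting an adjacent key), applied preferentially to informative words. Below is a minimal Python sketch of such typo generators. The QWERTY neighbor map, function names, and the word-importance proxy are illustrative assumptions, not the paper's actual implementation.

```python
import random

# Hypothetical sketch of the three typo operations the abstract names:
# mistype (adjacent-key substitution), insert, and delete.

# Partial QWERTY adjacency map (an assumption for illustration).
QWERTY_NEIGHBORS = {
    "a": "qwsz", "s": "awedxz", "d": "serfcx", "e": "wsdr",
    "r": "edft", "t": "rfgy", "o": "iklp", "i": "ujko",
    "n": "bhjm", "l": "kop", "u": "yhji", "c": "xdfv",
}

def mistype(word: str) -> str:
    """Replace one character with a neighboring key (the 'mistype' typo)."""
    idx = random.randrange(len(word))
    ch = word[idx].lower()
    if ch in QWERTY_NEIGHBORS:
        return word[:idx] + random.choice(QWERTY_NEIGHBORS[ch]) + word[idx + 1:]
    return word

def insert(word: str) -> str:
    """Insert a random lowercase letter at a random position."""
    idx = random.randrange(len(word) + 1)
    return word[:idx] + random.choice("abcdefghijklmnopqrstuvwxyz") + word[idx:]

def delete(word: str) -> str:
    """Delete one character (no-op for single-character words)."""
    if len(word) <= 1:
        return word
    idx = random.randrange(len(word))
    return word[:idx] + word[idx + 1:]

# Example: perturb the most "informative" word of a sentence. Word length
# stands in here as a crude importance proxy; the paper's finding is that
# typos in informative words damage the model most.
sentence = "the movie was absolutely wonderful".split()
target = max(range(len(sentence)), key=lambda i: len(sentence[i]))
sentence[target] = mistype(sentence[target])
print(" ".join(sentence))
```

A real attack along these lines would rank words by how much masking or removing them changes the model's prediction, then apply the typo operations to the top-ranked words.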

Related research

Identifying Human Strategies for Generating Word-Level Adversarial Examples (10/20/2022)
Adversarial examples in NLP are receiving increasing research attention...

The Topological BERT: Transforming Attention into Topology for Natural Language Processing (06/30/2022)
In recent years, the introduction of the Transformer models sparked a re...

BERT is Robust! A Case Against Synonym-Based Adversarial Examples in Text Classification (09/15/2021)
Deep Neural Networks have taken Natural Language Processing by storm. Wh...

Is BERT Really Robust? Natural Language Attack on Text Classification and Entailment (07/27/2019)
Machine learning algorithms are often vulnerable to adversarial examples...

User Generated Data: Achilles' heel of BERT (03/29/2020)
Pre-trained language models such as BERT are known to perform exceedingl...

FireBERT: Hardening BERT-based classifiers against adversarial attack (08/10/2020)
We present FireBERT, a set of three proof-of-concept NLP classifiers har...