Exploiting Excessive Invariance caused by Norm-Bounded Adversarial Robustness

03/25/2019
by Jörn-Henrik Jacobsen, et al.

Adversarial examples are malicious inputs crafted to cause a model to misclassify them. Their most common instantiation, "perturbation-based" adversarial examples, introduce changes to the input that leave its true label unchanged yet result in a different model prediction. Conversely, "invariance-based" adversarial examples introduce changes to the input that leave the model's prediction unaffected even though the input's true label has changed. In this paper, we demonstrate that robustness to perturbation-based adversarial examples is not only insufficient for general robustness; worse, it can also increase the model's vulnerability to invariance-based adversarial examples. In addition to analytical constructions, we empirically study vision classifiers with state-of-the-art robustness to perturbation-based adversaries constrained by an ℓ_p norm. We mount attacks that exploit excessive model invariance in directions relevant to the task, and these attacks find invariance-based adversarial examples within the very ℓ_p ball in which the model is meant to be robust. In fact, we find that classifiers trained to be ℓ_p-norm robust are more vulnerable to invariance-based adversarial examples than their undefended counterparts. Excessive invariance is not limited to models trained to be robust to perturbation-based ℓ_p-norm adversaries: we argue that the term "adversarial example" is used to capture a series of model limitations, some of which may not have been discovered yet. Accordingly, we call for a set of precise definitions that taxonomize and address each of these shortcomings in learning.
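To make the distinction concrete, below is a minimal sketch (not the authors' attack code) contrasting the two notions on a toy PyTorch classifier. The model, the random data, and the simple linear interpolation used for the invariance attack are illustrative assumptions; the paper's actual attacks are more involved.

import torch
import torch.nn.functional as F

torch.manual_seed(0)

# Toy stand-in classifier; any differentiable model would do.
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(784, 10))

def perturbation_attack(model, x, y, eps=0.3, alpha=0.01, steps=40):
    # Perturbation-based (PGD-style): stay inside the l_inf ball of radius
    # eps around x, so the true label is assumed unchanged, while pushing
    # the model toward a different prediction.
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        (grad,) = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()       # ascend the loss
            x_adv = x + (x_adv - x).clamp(-eps, eps)  # project into the eps-ball
            x_adv = x_adv.clamp(0, 1)
    return x_adv.detach()

def invariance_attack(model, x, x_other, steps=100):
    # Invariance-based: move x toward x_other, an input whose true label
    # differs, and keep the candidate closest to x_other for which the
    # model's prediction has NOT changed -- the label flipped, the model
    # stayed invariant.
    pred = model(x).argmax(dim=1)
    best = x
    with torch.no_grad():
        for t in torch.linspace(0, 1, steps):
            cand = (1 - t) * x + t * x_other
            if model(cand).argmax(dim=1) == pred:
                best = cand
    return best

x = torch.rand(1, 1, 28, 28)        # hypothetical input (e.g. an MNIST digit)
y = torch.tensor([5])               # its assumed true label
x_other = torch.rand(1, 1, 28, 28)  # hypothetical input with a different true label

x_pert = perturbation_attack(model, x, y)     # tries to change the prediction, not the label
x_inv = invariance_attack(model, x, x_other)  # changes the label, not the prediction

In these terms, the paper's observation is that training the model so that the first attack fails everywhere in the eps-ball can widen the set of inputs the model treats as equivalent, which is precisely what the second attack exploits.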


Related research

Fundamental Tradeoffs between Invariance and Sensitivity to Adversarial Perturbations (02/11/2020)
Adversarial examples are malicious inputs crafted to induce misclassific...

Excessive Invariance Causes Adversarial Vulnerability (11/01/2018)
Despite their impressive performance, deep neural networks exhibit strik...

Relaxing Local Robustness (06/11/2021)
Certifiable local robustness, which rigorously precludes small-norm adve...

Adversarial Examples from Dimensional Invariance (04/13/2023)
Adversarial examples have been found for various deep as well as shallow...

Adversarial Robustness Curves (07/31/2019)
The existence of adversarial examples has led to considerable uncertaint...

Symmetry Defense Against XGBoost Adversarial Perturbation Attacks (08/10/2023)
We examine whether symmetry can be used to defend tree-based ensemble cl...

How many dimensions are required to find an adversarial example? (03/24/2023)
Past work exploring adversarial vulnerability has focused on situations...
