On the Effectiveness of Low Frequency Perturbations

02/28/2019
by Yash Sharma, et al.

Carefully crafted, often imperceptible, adversarial perturbations have been shown to cause state-of-the-art models to yield extremely inaccurate outputs, rendering them unsuitable for safety-critical application domains. In addition, recent work has shown that constraining the attack space to a low frequency regime is particularly effective. Yet, it remains unclear whether this is due to generally constraining the attack search space or specifically removing high frequency components from consideration. By systematically controlling the frequency components of the perturbation, evaluating against the top-placing defense submissions in the NeurIPS 2017 competition, we empirically show that performance improvements in both optimization and generalization are yielded only when low frequency components are preserved. In fact, the defended models based on (ensemble) adversarial training are roughly as vulnerable to low frequency perturbations as undefended models, suggesting that the purported robustness of proposed defenses is reliant upon adversarial perturbations being high frequency in nature. We do find that under ℓ_∞ ϵ=16/255, a commonly used distortion bound, low frequency perturbations are indeed perceptible. This questions the use of the ℓ_∞-norm, in particular, as a distortion metric, and suggests that explicitly considering the frequency space is promising for learning robust models which better align with human perception.
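The abstract's "systematically controlling the frequency components of the perturbation" can be sketched with a DCT-based mask: transform the perturbation to the frequency domain, zero out everything outside a low-frequency block, and transform back. The sketch below is an illustrative assumption, not the paper's exact parameterization; the function name and the `freq_ratio` knob are invented here, and SciPy's `dctn`/`idctn` stand in for whatever transform the authors used.

```python
import numpy as np
from scipy.fft import dctn, idctn

def low_frequency_perturbation(delta, freq_ratio=0.25):
    """Keep only the low-frequency DCT components of a 2D perturbation.

    delta      : (H, W) perturbation array.
    freq_ratio : fraction of DCT coefficients kept along each axis;
                 the remaining (high-frequency) coefficients are zeroed.
    """
    h, w = delta.shape
    coeffs = dctn(delta, norm="ortho")            # 2D DCT-II of the perturbation
    kh = max(1, int(h * freq_ratio))
    kw = max(1, int(w * freq_ratio))
    mask = np.zeros_like(coeffs)
    mask[:kh, :kw] = 1.0                          # low-frequency block only
    return idctn(coeffs * mask, norm="ortho")     # back to pixel space

# Illustrative usage: filter a random perturbation, then enforce the
# l_inf bound eps = 16/255 mentioned in the abstract.
rng = np.random.default_rng(0)
delta = rng.uniform(-16 / 255, 16 / 255, size=(32, 32))
low = low_frequency_perturbation(delta, freq_ratio=0.25)
bounded = np.clip(low, -16 / 255, 16 / 255)
```

Note that clipping in pixel space after the inverse transform can reintroduce small high-frequency components, which is one reason an attack may instead optimize directly over the low-frequency coefficients and stay inside the constraint throughout.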

Related research:

- 05/09/2022: How Does Frequency Bias Affect the Robustness of Neural Image Classifiers against Common Corruption and Adversarial Perturbations? ("Model robustness is vital for the reliable deployment of machine learnin...")
- 08/19/2019: Adversarial Defense by Suppressing High-frequency Components ("Recent works show that deep neural networks trained on image classificat...")
- 10/13/2020: Toward Few-step Adversarial Training from a Frequency Perspective ("We investigate adversarial-sample generation methods from a frequency do...")
- 07/07/2020: Robust Learning with Frequency Domain Regularization ("Convolution neural networks have achieved remarkable performance in many...")
- 06/19/2022: JPEG Compression-Resistant Low-Mid Adversarial Perturbation against Unauthorized Face Recognition System ("It has been observed that the unauthorized use of face recognition syste...")
- 06/19/2020: Using Learning Dynamics to Explore the Role of Implicit Regularization in Adversarial Examples ("Recent work (Ilyas et al., 2019) suggests that adversarial examples are f...")
- 01/12/2023: Phase-shifted Adversarial Training ("Adversarial training has been considered an imperative component for saf...")
