Defending Against Physically Realizable Attacks on Image Classification

09/20/2019
by Tong Wu, et al.
Washington University in St. Louis

We study the problem of defending deep neural network approaches for image classification from physically realizable attacks. First, we demonstrate that the two most scalable and effective methods for learning robust models, adversarial training with PGD attacks and randomized smoothing, exhibit very limited effectiveness against three of the highest profile physical attacks. Next, we propose a new abstract adversarial model, rectangular occlusion attacks, in which an adversary places a small adversarially crafted rectangle in an image, and develop two approaches for efficiently computing the resulting adversarial examples. Finally, we demonstrate that adversarial training using our new attack yields image classification models that exhibit high robustness against the physically realizable attacks we study, offering the first effective generic defense against such attacks.
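As a rough illustration of the rectangular occlusion attack (ROA) described above, the sketch below exhaustively searches candidate rectangle locations and keeps the one that most increases the classifier's loss. The `loss_fn` callback is a hypothetical stand-in for the model's loss on a candidate image; in the paper the rectangle's contents are further refined with gradient-based (PGD-style) optimization, which is omitted here for brevity.

```python
import numpy as np

def apply_rectangle(image, patch, top, left):
    """Paste an adversarial rectangle (patch) onto a copy of the image."""
    out = image.copy()
    h, w = patch.shape[:2]
    out[top:top + h, left:left + w] = patch
    return out

def roa_attack(image, patch_shape, loss_fn, stride=4):
    """Grid-search rectangle placements, returning the candidate image
    that maximizes loss_fn (a hypothetical classifier-loss callback).
    The paper additionally optimizes the patch pixels with PGD."""
    H, W = image.shape[:2]
    ph, pw = patch_shape
    best, best_loss = None, -np.inf
    for top in range(0, H - ph + 1, stride):
        for left in range(0, W - pw + 1, stride):
            # Random initialization; a full implementation would run
            # projected gradient steps on the patch contents here.
            patch = np.random.rand(ph, pw, image.shape[2])
            cand = apply_rectangle(image, patch, top, left)
            l = loss_fn(cand)
            if l > best_loss:
                best, best_loss = cand, l
    return best
```

Adversarial training against this abstract attack then proceeds as usual: generate an ROA example for each training image and minimize the loss on the perturbed batch.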



Code Repositories

phattacks — Defending Against Physically Realizable Attacks on Image Classification

DEFENDING-AGAINST-PHYSICALLY-REALIZABLE-ATTACKS-ON-IMAGE-CLASSIFICATION
