Shape Defense

08/31/2020
by Ali Borji, et al.

Humans rely heavily on shape information to recognize objects. In contrast, convolutional neural networks (CNNs) are biased more towards texture. This is perhaps the main reason why CNNs are vulnerable to adversarial examples. Here, we explore how shape bias can be incorporated into CNNs to improve their robustness. Two algorithms are proposed, based on the observation that edges are invariant to moderate, imperceptible perturbations. In the first one, a classifier is adversarially trained on images with the edge map as an additional channel. At inference time, the edge map is recomputed and concatenated to the image. In the second algorithm, a conditional GAN is trained to translate the edge maps, from clean and/or perturbed images, into clean images. Inference is done over the generated image corresponding to the input's edge map. Extensive experiments over 10 datasets demonstrate the effectiveness of the proposed algorithms against FGSM and ℓ_∞ PGD-40 attacks. Further, we show that a) edge information can also benefit other adversarial training methods, and b) CNNs trained on edge-augmented inputs are more robust against natural image corruptions (such as motion blur, impulse noise, and JPEG compression) than CNNs trained solely on RGB images. From a broader perspective, our study suggests that CNNs do not adequately account for image structures that are crucial for robustness. Code is available at: https://github.com/aliborji/Shapedefence.git.
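To make the first algorithm concrete, the sketch below shows the edge-channel idea: an edge map is computed from the (possibly perturbed) input and concatenated to the RGB channels before classification, and it is recomputed at inference time. The Sobel edge detector, the toy network, and all names here are illustrative assumptions rather than the authors' implementation; the adversarial training loop is omitted (see the linked repository for the actual code).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def sobel_edge_map(images: torch.Tensor) -> torch.Tensor:
    """Approximate edge map: Sobel gradient magnitude of the grayscale image.

    images: (N, 3, H, W) tensor in [0, 1]. Returns an (N, 1, H, W) tensor.
    """
    gray = images.mean(dim=1, keepdim=True)
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]],
                      device=images.device).view(1, 1, 3, 3)
    ky = kx.transpose(2, 3)
    gx = F.conv2d(gray, kx, padding=1)
    gy = F.conv2d(gray, ky, padding=1)
    edges = torch.sqrt(gx ** 2 + gy ** 2 + 1e-12)
    # Normalize each edge map to [0, 1] so it matches the image value range.
    return edges / (edges.amax(dim=(2, 3), keepdim=True) + 1e-12)


class EdgeAugmentedClassifier(nn.Module):
    """Toy CNN that consumes RGB plus a recomputed edge map as a 4th channel."""

    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, num_classes),
        )

    def forward(self, images: torch.Tensor) -> torch.Tensor:
        # The edge map is recomputed from the current (clean or attacked) input
        # and concatenated to the RGB channels, as described in the abstract.
        edges = sobel_edge_map(images)
        return self.backbone(torch.cat([images, edges], dim=1))


if __name__ == "__main__":
    model = EdgeAugmentedClassifier()
    x = torch.rand(2, 3, 32, 32)  # stand-in for a batch of input images
    print(model(x).shape)         # torch.Size([2, 10])
```

In this setup, any attack on the RGB input also changes the edge map only through the image itself, which is the point of the defense: moderate, imperceptible perturbations leave the edge structure largely intact.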
