Understanding Adversarial Robustness from Feature Maps of Convolutional Layers

02/25/2022
by Cong Xu, et al.

The adversarial robustness of a neural network depends mainly on two factors: the feature representation capacity of the network and its ability to resist perturbations. In this paper, we study the network's resistance to perturbations through the feature maps of its convolutional layers. Our theoretical analysis shows that larger convolutional features before average pooling contribute to better resistance to perturbations, but this conclusion does not hold for max pooling. Based on these theoretical findings, we present two feasible ways to improve the robustness of existing neural networks. The proposed approaches are very simple, requiring only upsampling the inputs or modifying the stride configuration of convolution operators. We test our approaches on several benchmark neural network architectures, including AlexNet, VGG16, ResNet18 and PreActResNet18, and achieve non-trivial improvements in both natural accuracy and robustness under various attacks. Our study brings new insights into the design of robust neural networks. The code is available at <https://github.com/MTandHJ/rcm>.
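The abstract's two claims can be illustrated with a toy numpy sketch (this is an illustration of the intuition, not the paper's proof or method): averaging a per-element perturbation over a larger feature map shrinks its effect, while max pooling does not, and both proposed fixes (smaller stride or upsampled input) enlarge the pre-pool feature map. The AlexNet-style kernel/stride numbers below are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Why larger features help under average pooling (toy model) ---
def pooled_noise(n, trials=5000):
    """Per-element N(0,1) perturbation pooled globally over n features."""
    noise = rng.standard_normal((trials, n))
    return noise.mean(axis=1).std(), noise.max(axis=1).mean()

avg16, max16 = pooled_noise(16)      # e.g. a 4x4 pre-pool feature map
avg256, max256 = pooled_noise(256)   # a 16x16 pre-pool feature map

# Average pooling: the pooled perturbation shrinks roughly as 1/sqrt(n)
assert avg256 < avg16
# Max pooling: no such shrinkage; the max of the noise grows with n
assert max256 > max16

# --- Two ways to enlarge the pre-pool feature map (illustrative numbers) ---
def conv_out(h, k, stride, pad=0):
    """Spatial size of a conv output: floor((h + 2*pad - k) / stride) + 1."""
    return (h + 2 * pad - k) // stride + 1

baseline = conv_out(224, 11, stride=4, pad=2)          # 55
smaller_stride = conv_out(224, 11, stride=2, pad=2)    # 109: larger map
upsampled_input = conv_out(448, 11, stride=4, pad=2)   # 111: larger map
assert smaller_stride > baseline and upsampled_input > baseline
```

Both modifications leave the convolution weights untouched; they only change how densely the kernel samples the input, which is why the paper can describe them as simple edits to existing architectures.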

Related research

On the Robustness of Convolutional Neural Networks to Internal Architecture and Weight Perturbations (03/23/2017)
Deep convolutional neural networks are generally regarded as robust func...

S3Pool: Pooling with Stochastic Spatial Sampling (11/16/2016)
Feature pooling layers (e.g., max pooling) in convolutional neural netwo...

Blurs Behave Like Ensembles: Spatial Smoothings to Improve Accuracy, Uncertainty, and Robustness (07/14/2022)
Neural network ensembles, such as Bayesian neural networks (BNNs), have ...

Dynamic Pooling Improves Nanopore Base Calling Accuracy (05/16/2021)
In nanopore sequencing, electrical signal is measured as DNA molecules p...

CNN-Cert: An Efficient Framework for Certifying Robustness of Convolutional Neural Networks (11/29/2018)
Verifying robustness of neural network classifiers has attracted great i...

Reduce Computational Complexity for Convolutional Layers by Skipping Zeros (06/28/2023)
Deep neural networks rely on parallel processors for acceleration. To de...

Itsy Bitsy SpiderNet: Fully Connected Residual Network for Fraud Detection (05/17/2021)
With the development of high technology, the scope of fraud is increasin...
