Defend Deep Neural Networks Against Adversarial Examples via Fixed and Dynamic Quantized Activation Functions

07/18/2018
by Adnan Siraj Rakin, et al.

Recent studies have shown that deep neural networks (DNNs) are vulnerable to adversarial attacks. To this end, many defense approaches that attempt to improve the robustness of DNNs have been proposed. In a separate yet related line of work, recent studies have explored quantizing neural network weights and activation functions to low bit-widths to compress model size and reduce computational complexity. In this work, we find that these two different tracks, namely the pursuit of network compactness and robustness, can be merged into one and give rise to networks with both advantages. To the best of our knowledge, this is the first work that uses quantization of activation functions to defend against adversarial examples. We also propose to train robust neural networks using adaptive quantization techniques for the activation functions. Our proposed Dynamic Quantized Activation (DQA) is verified through a wide range of experiments with the MNIST and CIFAR-10 datasets under different white-box attack methods, including FGSM, PGD, and C&W attacks. Furthermore, Zeroth-Order Optimization and substitute-model-based black-box attacks are also considered in this work. The experimental results clearly show that the robustness of DNNs can be greatly improved using the proposed DQA.
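As a rough illustration of the fixed-quantization half of this idea, the PyTorch sketch below clips activations to [0, 1] and snaps them to 2^k uniform levels, using a straight-through estimator so the network remains trainable despite the zero-gradient rounding step. All names, the clipping range, and the default bit-width are illustrative assumptions, not the authors' released code; the dynamic (DQA) variant described in the abstract would additionally adapt the quantization scheme rather than keep it fixed.

```python
# A minimal sketch of a fixed k-bit quantized activation (not the authors' code).
import torch


class QuantizedReLU(torch.autograd.Function):
    """Clip activations to [0, 1] and snap them to 2^k uniform levels.

    Forward: discretize. Backward: straight-through estimator (pass the
    gradient through inside the clipping range, zero it outside).
    """

    @staticmethod
    def forward(ctx, x, k=2):
        ctx.save_for_backward(x)
        levels = 2 ** k - 1
        x_clipped = x.clamp(0.0, 1.0)
        return torch.round(x_clipped * levels) / levels

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        # Straight-through estimator: gradient is 1 inside [0, 1], 0 outside.
        mask = (x >= 0.0) & (x <= 1.0)
        return grad_output * mask.to(grad_output.dtype), None


def quantized_relu(x, k=2):
    return QuantizedReLU.apply(x, k)


if __name__ == "__main__":
    x = torch.randn(4, requires_grad=True)
    y = quantized_relu(x, k=2)  # activations snapped to {0, 1/3, 2/3, 1}
    y.sum().backward()
    print(y, x.grad)
```

The intuition behind using such a layer as a defense is that discretizing activations flattens the model's response to small input perturbations, which is the compactness/robustness connection the abstract draws between quantization and adversarial defense.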


Related Research

06/16/2020
SPLASH: Learnable Activation Functions for Improving Accuracy and Adversarial Robustness
We introduce SPLASH units, a class of learnable activation functions sho...

11/04/2018
QuSecNets: Quantization-based Defense Mechanism for Securing Deep Neural Network against Adversarial Attacks
Deep Neural Networks (DNNs) have recently been shown vulnerable to adver...

06/19/2022
0/1 Deep Neural Networks via Block Coordinate Descent
The step function is one of the simplest and most natural activation fun...

12/18/2020
Towards Robust Explanations for Deep Neural Networks
Explanation methods shed light on the decision process of black-box clas...

01/29/2023
Towards Verifying the Geometric Robustness of Large-scale Neural Networks
Deep neural networks (DNNs) are known to be vulnerable to adversarial ge...

02/24/2022
Improving Robustness of Convolutional Neural Networks Using Element-Wise Activation Scaling
Recent works reveal that re-calibrating the intermediate activation of a...

02/01/2020
Towards Sharper First-Order Adversary with Quantized Gradients
Despite the huge success of Deep Neural Networks (DNNs) in a wide spectr...
