Dynamic Defense Approach for Adversarial Robustness in Deep Neural Networks via Stochastic Ensemble Smoothed Model

by Ruoxi Qin, et al.

Deep neural networks have been shown to suffer from critical vulnerabilities under adversarial attacks. This phenomenon has stimulated the creation of attack and defense strategies similar to those adopted in cyberspace security. Because each side's strategy depends on the other's mechanisms, the algorithms on both sides behave as closely reciprocating processes, in which the defense is particularly passive; increasing the initiative of the defense is an effective way out of this arms race. Inspired by the dynamic defense approach in cyberspace, this paper proposes a stochastic ensemble smoothing method that builds on the defenses of randomized smoothing and model ensembling. The proposed method treats network architecture and smoothing parameters as ensemble attributes and dynamically changes the attribute-based ensemble model before every inference request, mitigating the extreme transferability and vulnerability of ensemble models under white-box attacks. Experimental comparison of attack-success-rate-versus-distortion (ASR-vs-distortion) curves across different attack scenarios shows that even an attacker with the highest attack capability cannot easily exceed the attack success rate associated with the ensemble smoothed model, especially under untargeted attacks.
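The core idea described in the abstract, re-sampling the ensemble's attributes (member models and smoothing noise level) before every inference request, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the class name, parameters, and the use of plain callables as stand-ins for trained networks are all assumptions for the sketch.

```python
import random
import numpy as np

class DynamicStochasticEnsemble:
    """Per-request dynamic ensemble sketch: each prediction uses a freshly
    sampled subset of candidate models and a freshly sampled Gaussian
    smoothing level, so repeated queries never face an identical target."""

    def __init__(self, candidate_models, sigmas, subset_size, n_noise=8, seed=None):
        self.candidate_models = candidate_models  # callables: x -> class-probability vector
        self.sigmas = sigmas                      # candidate smoothing noise std-devs
        self.subset_size = subset_size            # ensemble members drawn per request
        self.n_noise = n_noise                    # Monte Carlo noise samples per member
        self.rng = random.Random(seed)
        self.np_rng = np.random.default_rng(seed)

    def predict(self, x):
        # Re-sample the ensemble attributes for every inference request.
        members = self.rng.sample(self.candidate_models, self.subset_size)
        sigma = self.rng.choice(self.sigmas)

        # Randomized smoothing: average predictions over Gaussian-perturbed inputs.
        probs = np.zeros_like(members[0](x))
        for model in members:
            for _ in range(self.n_noise):
                noisy = x + self.np_rng.normal(0.0, sigma, size=x.shape)
                probs += model(noisy)
        probs /= len(members) * self.n_noise
        return int(np.argmax(probs)), probs
```

In a real deployment the callables would be trained networks with different architectures, so that both the architecture and the smoothing parameter vary between requests, which is what denies a white-box attacker a fixed gradient target.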


Dynamic Stochastic Ensemble with Adversarial Robust Lottery Ticket Subnetworks

Adversarial attacks are considered the intrinsic vulnerability of CNNs. ...

Towards Robust Neural Networks via Random Self-ensemble

Recent studies have revealed the vulnerability of deep neural networks -...

Dynamic ensemble selection based on Deep Neural Network Uncertainty Estimation for Adversarial Robustness

The deep neural network has attained significant efficiency in image rec...

Strategies to architect AI Safety: Defense to guard AI from Adversaries

The impact of designing for security of AI is critical for humanity in t...

Ensemble Noise Simulation to Handle Uncertainty about Gradient-based Adversarial Attacks

Gradient-based adversarial attacks on neural networks can be crafted in ...

LEAT: Towards Robust Deepfake Disruption in Real-World Scenarios via Latent Ensemble Attack

Deepfakes, malicious visual contents created by generative models, pose ...

Adversarial Attacks and Defense for Non-Parametric Two-Sample Tests

Non-parametric two-sample tests (TSTs) that judge whether two sets of sa...
