CAAD 2018: Iterative Ensemble Adversarial Attack

11/07/2018
by   Jiayang Liu, et al.

Deep Neural Networks (DNNs) have recently led to significant improvements in many fields. However, DNNs are vulnerable to adversarial examples: samples with imperceptible perturbations that nonetheless dramatically mislead the network. Adversarial attacks can be used to evaluate the robustness of deep learning models before they are deployed. Unfortunately, most existing adversarial attacks fool a black-box model only with a low success rate. To improve the success rate of black-box adversarial attacks, we propose an iterative adversarial attack against an ensemble of image classifiers. With this method, we won 5th place in the CAAD 2018 Targeted Adversarial Attack competition.
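To illustrate the general idea of an iterative targeted attack against an ensemble, the sketch below averages the logits of several classifiers and takes repeated signed-gradient steps toward a target class. The step size, perturbation budget, and iteration count are illustrative assumptions, not the authors' exact configuration, and the full method described in the paper may differ in details.

```python
# Minimal sketch of an iterative targeted ensemble attack (assumed
# hyperparameters; not the authors' exact implementation).
import torch
import torch.nn.functional as F


def iterative_ensemble_targeted_attack(models, x, target,
                                       eps=16 / 255, alpha=2 / 255, steps=20):
    """Perturb x within an L-infinity ball of radius eps so that the
    averaged ensemble prediction moves toward the target class."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        # Average the logits over all ensemble members.
        logits = torch.stack([m(x_adv) for m in models]).mean(dim=0)
        # Targeted attack: minimize the loss with respect to the target class.
        loss = F.cross_entropy(logits, target)
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = x_adv.detach() - alpha * grad.sign()
        # Project back into the eps-ball around x and the valid pixel range.
        x_adv = torch.clamp(torch.min(torch.max(x_adv, x - eps), x + eps), 0, 1)
    return x_adv
```

Attacking the averaged ensemble rather than a single model tends to produce perturbations that transfer better to an unseen black-box target, which is the motivation for the ensemble formulation above.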
