Adversarial Image Generation and Training for Deep Convolutional Neural Networks
Deep convolutional neural networks (DCNNs) have achieved great success in image classification, yet they can be highly vulnerable to adversarial attacks that add small perturbations to images. At the same time, adversarial training on adversarial image samples has been shown to improve the robustness and generalization of DCNNs. The aim of this paper is to develop a novel framework based on information-geometry sensitivity analysis and particle swarm optimization that improves two aspects of adversarial image generation and training for DCNNs. The first is customized generation of adversarial examples: attacks can be designed by specifying the number of perturbed pixels, the misclassification probability, and the targeted incorrect class, making the framework more flexible and effective at locating vulnerable pixels while also lending the generated examples a degree of adversarial universality. The second is targeted adversarial training: DCNN models are improved during training with adversarial information guided by a manifold-based influence measure that is effective for detecting vulnerable images and pixels and that supports targeted attacks, thereby exhibiting enhanced adversarial defense at test time.
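As a rough illustration of the attack side of such a framework, the sketch below (not the authors' implementation) uses particle swarm optimization to search for a perturbation restricted to a chosen set of pixels that maximizes the classifier's probability for a targeted incorrect class. The classifier interface `predict_proba`, the externally supplied `pixel_idx` (in the paper, vulnerable pixels would be selected via the information-geometry/influence analysis), and all hyperparameters are assumptions for illustration only.

```python
# Minimal PSO-based sparse targeted attack sketch (assumptions noted above).
import numpy as np

def pso_sparse_attack(image, predict_proba, target_class, pixel_idx,
                      n_particles=30, n_iters=100, eps=0.3, seed=0):
    """Search for a perturbation on `pixel_idx` (flattened indices) that
    maximizes the probability of `target_class` under `predict_proba`.
    `image` is assumed to have pixel values in [0, 1]."""
    rng = np.random.default_rng(seed)
    k = len(pixel_idx)
    # Particle positions: per-pixel perturbations bounded by [-eps, eps].
    pos = rng.uniform(-eps, eps, size=(n_particles, k))
    vel = np.zeros_like(pos)

    def fitness(p):
        # Apply the candidate perturbation to the selected pixels and
        # score it by the target-class probability.
        x = image.copy().ravel()
        x[pixel_idx] = np.clip(x[pixel_idx] + p, 0.0, 1.0)
        return predict_proba(x.reshape(image.shape))[target_class]

    pbest = pos.copy()
    pbest_fit = np.array([fitness(p) for p in pos])
    gbest = pbest[pbest_fit.argmax()].copy()

    for _ in range(n_iters):
        r1, r2 = rng.random((2, n_particles, k))
        # Standard PSO update: inertia plus cognitive and social terms.
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, -eps, eps)
        fit = np.array([fitness(p) for p in pos])
        improved = fit > pbest_fit
        pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]
        gbest = pbest[pbest_fit.argmax()].copy()

    adv = image.copy().ravel()
    adv[pixel_idx] = np.clip(adv[pixel_idx] + gbest, 0.0, 1.0)
    return adv.reshape(image.shape), pbest_fit.max()
```

Adversarial examples produced this way (or by the paper's customized attack) could then be mixed into the training set for targeted adversarial training; the details of the manifold-based influence measure and the training procedure are given in the full paper.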