Show-and-Fool: Crafting Adversarial Examples for Neural Image Captioning
Modern neural image captioning systems typically adopt the encoder-decoder framework, consisting of two principal components: a convolutional neural network (CNN) for image feature extraction and a recurrent neural network (RNN) for caption generation. Inspired by the robustness analysis of CNN-based image classifiers against adversarial perturbations, we propose Show-and-Fool, a novel algorithm for crafting adversarial examples in neural image captioning. Unlike image classification tasks with a finite set of class labels, finding visually-similar adversarial examples in image captioning is much more challenging, since the space of possible captions is almost infinite. In this paper, we design three approaches for crafting adversarial examples in image captioning: (i) the targeted caption method; (ii) the targeted keyword method; and (iii) the untargeted method. We formulate the search for adversarial perturbations as optimization problems and design novel loss functions for efficient search. Experimental results on the Show-and-Tell model and the MSCOCO data set show that Show-and-Fool can successfully craft visually-similar adversarial examples with randomly targeted captions, and that these adversarial examples can be made highly transferable to the Show-Attend-and-Tell model. The existence of such adversarial examples therefore has new robustness implications for neural image captioning. To the best of our knowledge, this is the first work on crafting effective adversarial examples for image captioning tasks.
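To make the optimization formulation concrete, the following is a minimal PyTorch sketch of a targeted caption attack in the same spirit: it minimizes the cross-entropy of a chosen target caption under the perturbed image plus an L2 distortion penalty, in the style of Carlini-Wagner attacks. The captioner interface, the trade-off constant c, and the optimizer hyperparameters here are illustrative assumptions, not the paper's exact loss functions.

import torch
import torch.nn.functional as F

def targeted_caption_attack(captioner, image, target_ids,
                            c=10.0, lr=5e-3, steps=1000):
    # image: (3, H, W) tensor with pixel values in [0, 1].
    # target_ids: (T,) token ids of the desired caption, including
    # start and end tokens.
    delta = torch.zeros_like(image, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        adv = (image + delta).clamp(0.0, 1.0)  # keep pixels valid
        # Assumed (hypothetical) interface: given an image batch and the
        # caption prefix, the captioner returns per-step vocabulary
        # logits of shape (T-1, V) under teacher forcing.
        logits = captioner(adv.unsqueeze(0), target_ids[:-1])
        caption_loss = F.cross_entropy(logits, target_ids[1:],
                                       reduction="sum")
        # Trade off caption loss against L2 distortion of the perturbation.
        loss = c * caption_loss + delta.pow(2).sum()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return (image + delta.detach()).clamp(0.0, 1.0)

Under this template, a targeted keyword variant would replace the per-step cross-entropy with a loss that only requires the keyword tokens to appear somewhere in the decoded caption, and an untargeted variant would instead maximize the loss of the caption generated for the original image.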