Controlled Caption Generation for Images Through Adversarial Attacks

07/07/2021
by Nayyer Aafaq et al.

Deep learning models are known to be vulnerable to adversarial examples, yet their susceptibility in image caption generation remains under-explored. We study adversarial examples for vision-and-language models, which typically adopt an encoder-decoder framework with two major components: a Convolutional Neural Network (CNN) for image feature extraction and a Recurrent Neural Network (RNN) for caption generation. In particular, we investigate attacks on the visual encoder's hidden layer, whose output is fed to the subsequent recurrent network. Existing methods either attack the classification layer of the visual encoder or back-propagate gradients from the language model. In contrast, we propose a GAN-based algorithm for crafting adversarial examples for neural image captioning that mimics the internal representation of the CNN, so that the deep features of the perturbed image drive the recurrent network to generate a controlled, incorrect caption. Our contribution provides new insights for understanding adversarial attacks on vision systems with a language component. The proposed method employs two strategies for a comprehensive evaluation. The first examines whether a neural image captioning system can be misled into outputting targeted captions. The second analyzes the possibility of injecting chosen keywords into the predicted captions. Experiments show that our algorithm can craft effective adversarial images based on the CNN hidden layers that fool the captioning framework. Moreover, we find the proposed attack to be highly transferable. Our work leads to new robustness implications for neural image captioning.
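
The core idea is a feature-mimicking objective: perturb the input image so that the CNN encoder's hidden features match those of a guide image whose features decode to the desired caption. The paper trains a GAN to generate such perturbations; purely as an illustration, the sketch below optimizes the same objective directly with gradient descent (a PGD-style stand-in, not the authors' method). It assumes PyTorch and torchvision; `mimic_attack`, `target_features`, and the choice of a ResNet-50 encoder are all illustrative assumptions, not details from the paper.

```python
# Minimal sketch (NOT the authors' GAN-based method): feature-mimicking
# adversarial optimization against a CNN encoder's hidden layer.
# `target_features` would come from a guide image whose hidden
# representation decodes to the targeted caption.
import torch
import torchvision.models as models

device = "cuda" if torch.cuda.is_available() else "cpu"

# Use a ResNet-50 trunk as the visual encoder; take its penultimate
# (hidden) layer output as the feature that feeds the caption decoder.
# Input normalization is omitted here for brevity.
resnet = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).to(device).eval()
encoder = torch.nn.Sequential(*list(resnet.children())[:-1])  # drop classifier

def mimic_attack(x, target_features, eps=8 / 255, steps=200, lr=0.01):
    """Optimize a perturbation delta so the encoder's hidden features of
    x + delta approach `target_features`, within an L-infinity budget eps."""
    delta = torch.zeros_like(x, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        feats = encoder((x + delta).clamp(0, 1)).flatten(1)  # (N, 2048)
        loss = torch.nn.functional.mse_loss(feats, target_features)
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)  # keep the perturbation imperceptible
    return (x + delta).detach().clamp(0, 1)
```

Feeding the returned image to the downstream captioning model then tests whether the mimicked hidden features are enough to steer the RNN toward the targeted caption, which is the question the paper's two evaluation strategies probe.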


Related research

12/06/2017 · Show-and-Fool: Crafting Adversarial Examples for Neural Image Captioning
Modern neural image captioning systems typically adopt the encoder-decod...

06/11/2019 · Mimic and Fool: A Task Agnostic Adversarial Attack
At present, adversarial attacks are designed in a task-specific fashion....

05/10/2019 · Exact Adversarial Attack to Image Captioning via Structured Output Learning with Latent Variables
In this work, we study the robustness of a CNN+RNN based image captionin...

06/13/2023 · I See Dead People: Gray-Box Adversarial Attack on Image-To-Text Models
Modern image-to-text systems typically adopt the encoder-decoder framewo...

01/16/2020 · Code-Bridged Classifier (CBC): A Low or Negative Overhead Defense for Making a CNN Classifier Robust Against Adversarial Attacks
In this paper, we propose Code-Bridged Classifier (CBC), a framework for...

12/21/2016 · An Empirical Study of Language CNN for Image Captioning
Language Models based on recurrent neural networks have dominated recent...

01/04/2022 · Interactive Attention AI to translate low light photos to captions for night scene understanding in women safety
There is amazing progress in Deep Learning based models for Image captio...
