Traits & Transferability of Adversarial Examples against Instance Segmentation & Object Detection

08/04/2018
by Raghav Gurbaxani, et al.

Despite the recent advancements in deploying neural networks for image classification, adversarial examples have been shown to fool these models, leading them to misclassify images. Since these models are now widely deployed, we provide insight into the threat posed by adversarial examples, evaluating their characteristics and their transferability to more complex models that use image classification as a subtask. We demonstrate the ineffectiveness of adversarial examples when applied to instance segmentation and object detection models, and show that this ineffectiveness arises from the inability of adversarial examples to withstand transformations such as scaling or a change in lighting conditions. Moreover, we show that there exists a small threshold below which the adversarial property is retained under these input transformations. Additionally, these attacks demonstrate weak cross-network transferability across neural network architectures such as VGG16 and ResNet50; however, an attack may fool both networks if it is passed through them sequentially during its formation. This lack of scalability and transferability raises the question of how effective adversarial images would be in the real world.
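To make the evaluation concrete, the sketch below probes whether a single adversarial image keeps fooling a classifier after the kinds of transformations the abstract describes (down-scaling and a lighting change). It is an illustrative reconstruction in PyTorch, not the authors' code: the one-step FGSM attack, the epsilon of 0.03, the VGG16 backbone, and the placeholder input and label are all assumptions made for the example.

```python
import torch
import torch.nn.functional as F
import torchvision.models as models
import torchvision.transforms.functional as TF

model = models.vgg16(pretrained=True).eval()

def fgsm(image, label, eps=0.03):
    # One-step FGSM (assumed attack): move the image along the sign
    # of the loss gradient, then clamp back to the valid pixel range.
    image = image.clone().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    return (image + eps * image.grad.sign()).clamp(0, 1).detach()

def fools_model(image, label):
    # True if the model's prediction differs from the true label.
    with torch.no_grad():
        return model(image).argmax(dim=1).item() != label.item()

# Placeholder input and label, for illustration only; in practice this
# would be a real 224x224 image and its ImageNet class index.
x = torch.rand(1, 3, 224, 224)
y = torch.tensor([281])

adv = fgsm(x, y)

# Re-test the perturbed image under the transformations studied above.
scaled = TF.resize(TF.resize(adv, [112, 112]), [224, 224])  # scale down, then back up
brighter = (adv * 1.3).clamp(0, 1)                          # simple lighting change
for name, variant in [("unmodified", adv), ("scaled", scaled), ("brighter", brighter)]:
    print(f"{name}: still adversarial = {fools_model(variant, y)}")
```

The sequential cross-network attack mentioned in the abstract would correspond to repeating the fgsm step against VGG16 and ResNet50 in turn on the same image, so that the perturbation survives both decision boundaries.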


Related research:

11/22/2021
Adversarial Examples on Segmentation Models Can be Easy to Transfer
Deep neural network-based image classification can be misled by adversar...

08/21/2019
Evaluating Defensive Distillation For Defending Text Processing Neural Networks Against Adversarial Examples
Adversarial examples are artificially modified input samples which lead ...

11/22/2019
Enhancing Cross-task Black-Box Transferability of Adversarial Examples with Dispersion Reduction
Neural networks are known to be vulnerable to carefully crafted adversar...

11/30/2018
Transferable Adversarial Attacks for Image and Video Object Detection
Adversarial examples have been demonstrated to threaten many computer vi...

11/22/2018
Distorting Neural Representations to Generate Highly Transferable Adversarial Examples
Deep neural networks (DNN) can be easily fooled by adding human impercep...

08/20/2018
Stochastic Combinatorial Ensembles for Defending Against Adversarial Examples
Many deep learning algorithms can be easily fooled with simple adversari...

10/13/2016
Assessing Threat of Adversarial Examples on Deep Neural Networks
Deep neural networks are facing a potential security threat from adversa...
