Assessing Threat of Adversarial Examples on Deep Neural Networks

10/13/2016
by Abigail Graese, et al.

Deep neural networks face a potential security threat from adversarial examples: inputs that look normal to humans but cause an incorrect classification by the network. For example, the proposed threat could cause hand-written digits on a scanned check to be misclassified while appearing normal to human viewers. This research assesses the extent to which adversarial examples pose a security threat once the normal image acquisition process is taken into account. That process is mimicked by simulating the transformations that typically occur when acquiring an image in a real-world application, such as scanning digits for a check amount or capturing frames with a camera in an autonomous car. These small transformations negate the effect of the carefully crafted perturbations of adversarial examples, restoring correct classification by the deep neural network. Thus, merely acquiring the image decreases the potential impact of the proposed security threat. We also show that the already widely used practice of averaging predictions over multiple crops neutralizes most adversarial examples. Standard preprocessing, such as text binarization, neutralizes adversarial examples almost completely. This is the first paper to show that, for text-driven classification, adversarial examples are an academic curiosity rather than a security threat.
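To illustrate the binarization argument, here is a minimal sketch (not the authors' code) of why thresholding discards low-amplitude adversarial perturbations: any pixel change too small to push a value across the binarization threshold vanishes from the preprocessed input. The image, perturbation amplitude, and threshold below are synthetic assumptions chosen for illustration.

```python
import numpy as np

def binarize(img, threshold=0.5):
    """Threshold a grayscale image in [0, 1] to {0, 1}, as in check-scanning
    pipelines that binarize digits before classification."""
    return (img >= threshold).astype(np.uint8)

# Toy "digit": synthetic grayscale image with ink near 0.9 and paper near 0.1.
rng = np.random.default_rng(0)
clean = np.where(rng.random((28, 28)) > 0.7, 0.9, 0.1)

# Low-amplitude adversarial-style perturbation (eps = 0.2, a made-up budget).
perturbation = rng.uniform(-0.2, 0.2, size=clean.shape)
perturbed = np.clip(clean + perturbation, 0.0, 1.0)

# Since no pixel crosses the 0.5 threshold (0.9 - 0.2 >= 0.5 and
# 0.1 + 0.2 < 0.5), binarization yields identical preprocessed inputs,
# so the network sees the same image either way.
assert np.array_equal(binarize(clean), binarize(perturbed))
```

In this sketch the perturbation budget is strictly smaller than each pixel's margin to the threshold, which is the regime the abstract describes: perturbations large enough to survive binarization would also be visible to a human inspecting the check.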


