Taking a machine's perspective: Humans can decipher adversarial images

09/11/2018
by   Zhenglong Zhou, et al.

How similar is the human mind to the sophisticated machine-learning systems that mirror its performance? Models of object categorization based on convolutional neural networks (CNNs) have achieved human-level benchmarks in assigning known labels to novel images. These advances support transformative technologies such as autonomous vehicles and machine diagnosis; beyond this, they also serve as candidate models for the visual system itself -- not only in their output but perhaps even in their underlying mechanisms and principles. However, unlike human vision, CNNs can be "fooled" by adversarial examples -- carefully crafted images that appear as nonsense patterns to humans but are recognized as familiar objects by machines, or that appear as one object to humans and a different object to machines. This seemingly extreme divergence between human and machine classification challenges the promise of these new advances, both as applied image-recognition systems and also as models of the human mind. Surprisingly, however, little work has empirically investigated human classification of such adversarial stimuli: Does human and machine performance fundamentally diverge? Or could humans decipher such images and predict the machine's preferred labels? Here, we show that human and machine classification of adversarial stimuli are robustly related: In seven experiments on five prominent and diverse adversarial image sets, human subjects reliably identified the machine's chosen label over relevant foils. This pattern persisted for images with strong antecedent identities, and even for images described as "totally unrecognizable to human eyes". We suggest that human intuition may be a more reliable guide to machine (mis)classification than has typically been imagined, and we explore the consequences of this result for minds and machines alike.
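The "carefully crafted images" described above are typically produced by gradient-based perturbation methods. As a minimal sketch of one common approach, the fast gradient sign method (FGSM), the snippet below perturbs an input against a hypothetical toy linear two-class classifier (standing in for a CNN); this is an illustrative assumption, not the procedure used in the paper's image sets.

```python
# Minimal FGSM sketch: perturb an input by a small signed step against the
# gradient of the true label's margin, lowering the classifier's confidence.
# The linear classifier and all names here are hypothetical illustrations.
import numpy as np

rng = np.random.default_rng(0)

x = rng.random(64)                 # toy flattened "image" with pixels in [0, 1)
W = rng.standard_normal((2, 64))   # weights of a two-label linear classifier

def predict(v):
    """Return the label (0 or 1) with the higher linear score."""
    return int(np.argmax(W @ v))

true_label = predict(x)
other = 1 - true_label

# Margin = score(true label) - score(other label); its gradient w.r.t. the
# input is just the weight difference for a linear model.
grad = W[true_label] - W[other]

eps = 0.5                          # perturbation budget per pixel
x_adv = np.clip(x - eps * np.sign(grad), 0.0, 1.0)

margin = lambda v: grad @ v
print("margin before:", margin(x), "after:", margin(x_adv))
```

Each pixel moves by at most `eps` in the direction that most reduces the true label's margin, so the perturbed image stays visually close to the original while the classifier's preference shifts; for a deep network the gradient would come from backpropagation rather than a closed form.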


