Cognitive Psychology for Deep Neural Networks: A Shape Bias Case Study

06/26/2017
by Samuel Ritter et al.

Deep neural networks (DNNs) have achieved unprecedented performance on a wide range of complex tasks, rapidly outpacing our understanding of the nature of their solutions. This has caused a recent surge of interest in methods for rendering modern neural systems more interpretable. In this work, we propose to address the interpretability problem in modern DNNs using the rich history of problem descriptions, theories and experimental methods developed by cognitive psychologists to study the human mind. To explore the potential value of these tools, we chose a well-established analysis from developmental psychology that explains how children learn word labels for objects, and applied that analysis to DNNs. Using datasets of stimuli inspired by the original cognitive psychology experiments, we find that state-of-the-art one-shot learning models trained on ImageNet exhibit a similar bias to that observed in humans: they prefer to categorize objects according to shape rather than color. The magnitude of this shape bias varies greatly among architecturally identical but differently seeded models, and even fluctuates within seeds throughout training, despite nearly equivalent classification performance. These results demonstrate the capability of tools from cognitive psychology to expose hidden computational properties of DNNs, while concurrently providing us with a computational model for human word learning.
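The probe behind these results is straightforward to express in code. The sketch below is a minimal illustration, not the authors' implementation: `embed` is a hypothetical stub standing in for a trained model's image encoder (for example, the encoder of a one-shot Matching Network), and each trial pairs a probe object with a shape match and a color match, as in the original developmental experiments. The reported shape bias is simply the fraction of trials in which the probe's nearest neighbor in embedding space is the shape match.

```python
import numpy as np

def embed(image: np.ndarray) -> np.ndarray:
    """Hypothetical stub for a trained one-shot model's image encoder.
    Replace with the real embedding network; here it deterministically
    maps an image to a random 64-d vector so the sketch is runnable."""
    rng = np.random.default_rng(abs(hash(image.tobytes())) % (2**32))
    return rng.standard_normal(64)

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    # Cosine similarity, the usual matching metric in embedding space.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def shape_bias(trials) -> float:
    """Each trial is (probe, shape_match, color_match): the shape match
    shares the probe's shape, the color match its color. Returns the
    fraction of trials categorized by shape."""
    shape_choices = 0
    for probe, shape_match, color_match in trials:
        e = embed(probe)
        if cosine(e, embed(shape_match)) > cosine(e, embed(color_match)):
            shape_choices += 1
    return shape_choices / len(trials)

# Toy usage with synthetic "images"; a real probe set would use rendered
# novel objects varying shape and color independently.
rng = np.random.default_rng(0)
trials = [tuple(rng.standard_normal((8, 8)) for _ in range(3))
          for _ in range(50)]
print(shape_bias(trials))
```

With the stub encoder this yields chance-level values near 0.5; the finding in the paper is that real ImageNet-trained one-shot learners score well above chance on such probes, with large variance across seeds and training checkpoints.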

Related research

- Can Deep Neural Networks Match the Related Objects?: A Survey on ImageNet-trained Classification Models (09/12/2017)
- A Developmentally-Inspired Examination of Shape versus Texture Bias in Machines (02/16/2022)
- Poster: Link between Bias, Node Sensitivity and Long-Tail Distribution in trained DNNs (03/29/2023)
- A neural network walks into a lab: towards using deep nets as models for human behavior (05/02/2020)
- Abutting Grating Illusion: Cognitive Challenge to Neural Network Models (08/08/2022)
- Improving Interpretability of Deep Neural Networks with Semantic Information (03/12/2017)
- Systematic Review of Experimental Paradigms and Deep Neural Networks for Electroencephalography-Based Cognitive Workload Detection (09/11/2023)
