White Noise Analysis of Neural Networks

by Ali Borji, et al.

A white noise analysis of modern deep neural networks is presented to unveil their biases at the whole-network level and the single-neuron level. Our analysis is based on two popular and related methods in psychophysics and neurophysiology, namely classification images and spike-triggered analysis. These methods have been widely used to understand the underlying mechanisms of sensory systems in humans and monkeys. We leverage them to investigate the inherent biases of deep neural networks and to obtain a first-order approximation of their functionality. We emphasize CNNs since they are currently the state of the art in computer vision and a decent model of human visual processing. In addition, we study multi-layer perceptrons, logistic regression, and recurrent neural networks. Experiments over four classic datasets, MNIST, Fashion-MNIST, CIFAR-10, and ImageNet, show that the computed bias maps resemble the target classes and, when used for classification, yield accuracy more than twice the chance level. Further, we show that classification images can be used to attack a black-box classifier and to detect adversarial patch attacks. Finally, we utilize spike-triggered averaging to derive the filters of CNNs and explore how the behavior of a network changes when neurons in different layers are modulated. Our effort illustrates a successful example of borrowing from neuroscience to study ANNs and highlights the importance of cross-fertilization and synergy across machine learning, deep learning, and computational neuroscience.
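The classification-image procedure the abstract describes can be sketched in a few lines: present white-noise stimuli to a (black-box) classifier, then average the noise samples grouped by the label the model assigns; the per-class mean minus the grand mean is the class's bias map. A minimal NumPy sketch, where `classify` is a hypothetical stand-in for any model's predict function (not the authors' code):

```python
import numpy as np

def classification_images(classify, n_classes, image_shape,
                          n_samples=10000, seed=0):
    """Estimate per-class bias maps from white-noise responses.

    classify: callable mapping a noise image -> integer class label.
    Returns an array of shape (n_classes, *image_shape).
    """
    rng = np.random.default_rng(seed)
    sums = np.zeros((n_classes,) + image_shape)
    counts = np.zeros(n_classes)
    for _ in range(n_samples):
        noise = rng.standard_normal(image_shape)  # white-noise stimulus
        label = classify(noise)                   # model's response
        sums[label] += noise
        counts[label] += 1
    # Mean noise per assigned class; guard against empty classes.
    means = sums / np.maximum(counts, 1).reshape((-1,) + (1,) * len(image_shape))
    # Subtracting the grand mean isolates the class-specific template.
    return means - means.mean(axis=0)
```

Spike-triggered averaging of a single neuron is the same computation with `classify` replaced by a thresholded activation of that neuron, which is why the paper treats the two methods as closely related.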



