Input Switched Affine Networks: An RNN Architecture Designed for Interpretability

11/28/2016
by Jakob N. Foerster, et al.

There exist many problem domains where the interpretability of neural network models is essential for deployment. Here we introduce a recurrent architecture composed of input-switched affine transformations: in other words, an RNN without any explicit nonlinearities, but with input-dependent recurrent weights. This simple form allows the RNN to be analyzed via straightforward linear methods: we can exactly characterize the linear contribution of each input to the model predictions; we can use a change of basis to disentangle input, output, and computational hidden-unit subspaces; and we can fully reverse-engineer the architecture's solution to a simple task. Despite this ease of interpretation, the input-switched affine network achieves reasonable performance on a text modeling task, and allows greater computational efficiency than networks with standard nonlinearities.
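The core update described in the abstract can be sketched in a few lines. This is a minimal illustration with randomly initialized parameters (the paper learns them by gradient descent); the names `W`, `b`, `W_out`, and `run_isan` are placeholders, not the paper's code. Each input symbol selects its own affine transition, h_t = W[x_t] h_{t-1} + b[x_t], with no pointwise nonlinearity, which is what makes the exact per-input contribution analysis possible:

```python
import numpy as np

# Minimal sketch of an input-switched affine network (ISAN).
# Assumed setup: a tiny vocabulary and random parameters for illustration.
rng = np.random.default_rng(0)
vocab_size, hidden = 5, 8

W = rng.normal(scale=0.3, size=(vocab_size, hidden, hidden))  # one transition matrix per input symbol
b = rng.normal(scale=0.1, size=(vocab_size, hidden))          # one bias per input symbol
W_out = rng.normal(scale=0.3, size=(hidden, vocab_size))      # linear readout to logits

def run_isan(tokens, h0=None):
    """Apply the input-switched affine update h_t = W[x_t] @ h_{t-1} + b[x_t]."""
    h = np.zeros(hidden) if h0 is None else h0
    states = []
    for t in tokens:
        h = W[t] @ h + b[t]  # affine step chosen by the current input symbol
        states.append(h)
    return np.stack(states)

tokens = [1, 3, 0, 2]
states = run_isan(tokens)
logits = states[-1] @ W_out

# Because every step is affine, the final state (with h0 = 0) decomposes
# exactly into each step's bias propagated through the later transitions,
# i.e. the "linear contribution of each input" the abstract refers to.
contrib = []
for s in range(len(tokens)):
    v = b[tokens[s]]
    for t in range(s + 1, len(tokens)):
        v = W[tokens[t]] @ v
    contrib.append(v)
assert np.allclose(states[-1], sum(contrib))
```

The closing assertion checks the decomposition numerically: summing the propagated per-input contributions reproduces the final hidden state exactly, with no nonlinearity to entangle them.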


