Understanding Interlocking Dynamics of Cooperative Rationalization

10/26/2021
by Mo Yu, et al.

Selective rationalization explains the prediction of a complex neural network by finding a small subset of the input that is sufficient to predict the model's output. The selection mechanism is commonly integrated into the model itself as a two-component cascaded system consisting of a rationale generator, which makes a binary selection of the input features (the rationale), and a predictor, which predicts the output based only on the selected features. The components are trained jointly to optimize prediction performance. In this paper, we reveal a major problem with such a cooperative rationalization paradigm: model interlocking. Interlocking arises when the predictor overfits to the features selected by the generator, thus reinforcing the generator's selection even when the selected rationales are sub-optimal. The fundamental cause of the interlocking problem is that the rationalization objective to be minimized is concave with respect to the generator's selection policy. We propose a new rationalization framework, called A2R, which introduces a third component into the architecture: a predictor driven by soft attention rather than hard selection. The generator now produces both soft and hard attention over the features, which are fed into the two different predictors. While the generator still seeks to support the original predictor's performance, it also minimizes the gap between the two predictors. As we show theoretically, since the attention-based predictor exhibits a better convexity property, A2R can overcome the concavity barrier. Our experiments on two synthetic benchmarks and two real datasets demonstrate that A2R significantly alleviates the interlocking problem and finds explanations that better align with human judgments. We release our code at https://github.com/Gorov/Understanding_Interlocking.
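The three-component idea described above can be sketched in a few lines of numpy. This is a minimal, hypothetical illustration, not the paper's implementation: the weights `w_gen` and `w_pred`, the top-k hard selection, and the squared-error gap term standing in for the gap objective between the two predictors are all assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Toy single-example forward pass with d input features.
d = 6
x = rng.normal(size=d)        # input features
w_gen = rng.normal(size=d)    # generator scoring weights (hypothetical)
w_pred = rng.normal(size=d)   # predictor weights, shared here for simplicity

# Generator: per-feature relevance scores produce BOTH attentions.
scores = w_gen * x
alpha = softmax(scores)       # soft attention over features
k = 2                         # rationale budget (hypothetical)
mask = np.zeros(d)
mask[np.argsort(scores)[-k:]] = 1.0   # hard top-k rationale selection

# Two predictors: one sees the hard rationale, one the soft weighting.
y_hard = w_pred @ (mask * x)
y_soft = w_pred @ (alpha * x)

# A2R-style objective: support both predictors and close the gap
# between them (squared error used as a stand-in for the paper's gap term).
y_true = 1.0
loss = (y_hard - y_true) ** 2 + (y_soft - y_true) ** 2 \
       + 0.1 * (y_hard - y_soft) ** 2
```

Intuitively, because the soft-attention predictor's loss is better behaved (more convex) in the generator's policy, pulling the hard-selection predictor toward it keeps the generator from locking into a sub-optimal selection.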


Related research:

05/23/2023 - Decoupled Rationalization with Asymmetric Learning Rates: A Flexible Lipschitz Restraint
"A self-explaining rationalization model is generally constructed by a co..."

10/29/2019 - Rethinking Cooperative Rationalization: Introspective Extraction and Complement Control
"Selective rationalization has become a common mechanism to ensure that p..."

05/27/2023 - Unsupervised Selective Rationalization with Noise Injection
"A major issue with using deep learning models in sensitive applications ..."

03/22/2020 - Invariant Rationalization
"Selective rationalization improves neural network interpretability by id..."

12/29/2012 - Focus of Attention for Linear Predictors
"We present a method to stop the evaluation of a prediction process when ..."

12/12/2021 - Learning with Subset Stacking
"We propose a new algorithm that learns from a set of input-output pairs...."

09/17/2022 - FR: Folded Rationalization with a Unified Encoder
"Conventional works generally employ a two-phase model in which a generat..."
