Rationalizing Predictions by Adversarial Information Calibration

01/15/2023
by Lei Sha, et al.

Explaining the predictions of AI models is paramount in safety-critical applications, such as in the legal or medical domains. One form of explanation for a prediction is an extractive rationale, i.e., a subset of an instance's features that leads the model to its prediction on that instance. For example, the subphrase "he stole the mobile phone" can be an extractive rationale for the prediction "Theft". Previous works on generating extractive rationales usually employ a two-phase model: a selector that selects the most important features (i.e., the rationale), followed by a predictor that makes the prediction based exclusively on the selected features. One disadvantage of these works is that the main signal for learning to select features comes from comparing the predictor's answers with the ground-truth answers. In this work, we propose to squeeze more information out of the predictor via an information calibration method. More precisely, we train two models jointly: a typical neural model that solves the task at hand in an accurate but black-box manner, and a selector-predictor model that additionally produces a rationale for its prediction. The first model serves as a guide for the second. We use an adversarial technique to calibrate the information extracted by the two models, such that the difference between them indicates missed or over-selected features. In addition, for natural language tasks, we propose a language-model-based regularizer that encourages the extraction of fluent rationales. Experimental results on a sentiment analysis task, a hate speech recognition task, and three tasks from the legal domain demonstrate the effectiveness of our approach to rationale extraction.
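
To make the training scheme concrete, below is a minimal PyTorch sketch of the joint setup the abstract describes: a black-box guide encoder, a selector-predictor that classifies from masked tokens only, and a discriminator that adversarially calibrates the two feature vectors. All module names, layer sizes, the Gumbel-sigmoid mask relaxation, and the loss weights are illustrative assumptions rather than the authors' implementation; the language-model fluency regularizer is omitted for brevity.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

DIM, VOCAB, CLASSES = 64, 1000, 2

class Selector(nn.Module):
    """Scores each token and samples a near-binary rationale mask (Gumbel-sigmoid)."""
    def __init__(self):
        super().__init__()
        self.scorer = nn.Linear(DIM, 1)

    def forward(self, h):                                  # h: (B, T, DIM)
        logits = self.scorer(h).squeeze(-1)                # (B, T)
        u = torch.rand_like(logits).clamp(1e-6, 1 - 1e-6)
        gumbel = torch.log(u) - torch.log1p(-u)            # logistic noise
        return torch.sigmoid((logits + gumbel) / 0.5)      # relaxed 0/1 mask

class Encoder(nn.Module):
    """Mean-pools (optionally masked) token embeddings into one feature vector."""
    def __init__(self):
        super().__init__()
        self.proj = nn.Linear(DIM, DIM)

    def forward(self, h, mask=None):
        if mask is not None:
            h = h * mask.unsqueeze(-1)                     # keep selected tokens only
        return torch.relu(self.proj(h.mean(dim=1)))        # (B, DIM)

embed = nn.Embedding(VOCAB, DIM)                           # shared embeddings
guide_enc, rat_enc, selector = Encoder(), Encoder(), Selector()
guide_head, rat_head = nn.Linear(DIM, CLASSES), nn.Linear(DIM, CLASSES)
disc = nn.Sequential(nn.Linear(DIM, DIM), nn.ReLU(), nn.Linear(DIM, 1))

opt_disc = torch.optim.Adam(disc.parameters(), lr=1e-3)
main_modules = (embed, guide_enc, rat_enc, selector, guide_head, rat_head)
opt_main = torch.optim.Adam([p for m in main_modules for p in m.parameters()], lr=1e-3)

def training_step(x, y):
    h = embed(x)                                           # (B, T, DIM)
    z_guide = guide_enc(h)                                 # black-box features
    mask = selector(h)                                     # rationale mask, (B, T)
    z_rat = rat_enc(h, mask)                               # rationale-only features
    ones, zeros = torch.ones(x.size(0)), torch.zeros(x.size(0))

    # (1) Discriminator learns to tell guide features ("real") from
    #     rationale features ("fake"); encoders are held fixed via detach().
    d_loss = (F.binary_cross_entropy_with_logits(disc(z_guide.detach()).squeeze(-1), ones)
              + F.binary_cross_entropy_with_logits(disc(z_rat.detach()).squeeze(-1), zeros))
    opt_disc.zero_grad(); d_loss.backward(); opt_disc.step()

    # (2) Main step: solve the task with both models, push the rationale
    #     features to fool the discriminator (the calibration signal), and
    #     keep the rationale short via a sparsity penalty.
    task = F.cross_entropy(guide_head(z_guide), y) + F.cross_entropy(rat_head(z_rat), y)
    calib = F.binary_cross_entropy_with_logits(disc(z_rat).squeeze(-1), ones)
    loss = task + calib + 0.1 * mask.mean()
    opt_main.zero_grad(); loss.backward(); opt_main.step()
    return loss.item()

# Toy usage: a batch of 8 random "sentences" of 20 tokens with binary labels.
x = torch.randint(0, VOCAB, (8, 20))
y = torch.randint(0, CLASSES, (8,))
print(training_step(x, y))
```

The calibration term rewards the selector whenever the rationale-only features become indistinguishable from the guide's full-input features, which is exactly the extra learning signal beyond comparing predictions with ground-truth answers.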
