Producing radiologist-quality reports for interpretable artificial intelligence

06/01/2018
by William Gale et al.

Current approaches to explaining the decisions of deep learning systems for medical tasks have focused on visualising the elements that have contributed to each decision. We argue that such approaches are not enough to "open the black box" of medical decision making systems because they are missing a key component that has been used as a standard communication tool between doctors for centuries: language. We propose a model-agnostic interpretability method that involves training a simple recurrent neural network model to produce descriptive sentences to clarify the decision of deep learning classifiers. We test our method on the task of detecting hip fractures from frontal pelvic x-rays. This process requires minimal additional labelling despite producing text containing elements that the original deep learning classification model was not specifically trained to detect. The experimental results show that: 1) the sentences produced by our method consistently contain the desired information, 2) the generated sentences are preferred by doctors compared to current tools that create saliency maps, and 3) the combination of visualisations and generated text is better than either alone.
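The paper's own code is not reproduced here; the following is a minimal PyTorch sketch of the idea described in the abstract, under stated assumptions: a frozen black-box image classifier exposes its prediction and penultimate-layer features, and a small recurrent decoder maps those features to a short descriptive sentence. The class names (FrozenClassifier, SentenceDecoder), the toy vocabulary, the feature dimensions, and the greedy decoding loop are all hypothetical illustration choices, not the authors' implementation.

```python
import torch
import torch.nn as nn

# Toy vocabulary for illustration only; the real method would use the
# vocabulary of radiologist-written report sentences.
VOCAB = ["<pad>", "<start>", "<end>", "there", "is", "no", "an", "undisplaced",
         "displaced", "fracture", "of", "the", "left", "right", "neck", "femur", "."]
VOCAB_SIZE = len(VOCAB)


class FrozenClassifier(nn.Module):
    """Stand-in for the pretrained fracture classifier, treated as a black box."""

    def __init__(self, feat_dim: int = 256):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim),
        )
        self.head = nn.Linear(feat_dim, 2)  # fracture / no fracture

    def forward(self, x):
        feats = self.backbone(x)
        # Return both the decision and the features the decoder will condition on.
        return self.head(feats), feats


class SentenceDecoder(nn.Module):
    """Small LSTM that turns classifier features into an explanatory sentence."""

    def __init__(self, feat_dim: int = 256, embed_dim: int = 64, hidden_dim: int = 256):
        super().__init__()
        self.embed = nn.Embedding(VOCAB_SIZE, embed_dim)
        self.init_h = nn.Linear(feat_dim, hidden_dim)
        self.init_c = nn.Linear(feat_dim, hidden_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, VOCAB_SIZE)

    def forward(self, feats, max_len: int = 12):
        # Initialise the LSTM state from the classifier's features.
        h = self.init_h(feats).unsqueeze(0)  # (1, B, H)
        c = self.init_c(feats).unsqueeze(0)
        token = torch.full((feats.size(0), 1), VOCAB.index("<start>"), dtype=torch.long)
        words = []
        for _ in range(max_len):
            emb = self.embed(token)                      # (B, 1, E)
            out, (h, c) = self.lstm(emb, (h, c))
            logits = self.out(out[:, -1])                # (B, V)
            token = logits.argmax(dim=-1, keepdim=True)  # greedy decoding
            words.append(token)
        return torch.cat(words, dim=1)


# Usage: the classifier stays frozen; only the decoder would be trained on
# report sentences. The dummy input stands in for a frontal pelvic x-ray.
classifier = FrozenClassifier().eval()
decoder = SentenceDecoder()
xray = torch.randn(1, 1, 224, 224)
with torch.no_grad():
    logits, feats = classifier(xray)
    ids = decoder(feats)
print("prediction:", logits.argmax(dim=-1).item())
print("explanation:", " ".join(VOCAB[i] for i in ids[0].tolist()))
```

Because the decoder consumes only the frozen classifier's features and prediction, the classifier never needs to be modified or retrained, which is what makes the approach model-agnostic; a straightforward (assumed, not source-confirmed) way to train such a decoder would be teacher forcing on a small set of radiologist-written sentences, in line with the "minimal additional labelling" noted above.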

