Interpretable Mammographic Image Classification using Case-Based Reasoning and Deep Learning

07/12/2021
by Alina Jade Barnett, et al.

When we deploy machine learning models in high-stakes medical settings, we must ensure these models make accurate predictions that are consistent with known medical science. Inherently interpretable networks address this need by explaining the rationale behind each decision while maintaining equal or higher accuracy compared to black-box models. In this work, we present a novel interpretable neural network algorithm that uses case-based reasoning for mammography. Designed to aid a radiologist in their decisions, our network presents both a prediction of malignancy and an explanation of that prediction using known medical features. In order to yield helpful explanations, the network is designed to mimic the reasoning processes of a radiologist: our network first detects the clinically relevant semantic features of each image by comparing each new image with a learned set of prototypical image parts from the training images, then uses those clinical features to predict malignancy. Compared to other methods, our model detects clinical features (mass margins) with equal or higher accuracy, provides a more detailed explanation of its prediction, and is better able to differentiate the classification-relevant parts of the image.
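The two-stage reasoning the abstract describes (match image patches against learned prototypical parts, then feed the resulting clinical-feature activations into a classifier) can be sketched in plain Python. This is a minimal illustration, not the authors' implementation: the cosine-similarity matching, the max over patches, and the linear malignancy head are assumptions standing in for the paper's actual architecture, and all function names here are hypothetical.

```python
import math

def cosine_sim(a, b):
    # Similarity between one image-patch embedding and one prototype.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb + 1e-12)

def prototype_activations(patches, prototypes):
    # Stage 1: for each learned prototype, keep the similarity of the
    # best-matching patch in the image (a global max over patch locations).
    # Each activation is interpretable as "how strongly this clinical
    # feature appears somewhere in the image".
    return [max(cosine_sim(patch, proto) for patch in patches)
            for proto in prototypes]

def predict_malignancy(patches, prototypes, weights, bias=0.0):
    # Stage 2: a linear layer over the prototype activations gives the
    # malignancy logit, so each prototype's contribution is visible.
    acts = prototype_activations(patches, prototypes)
    logit = sum(w * a for w, a in zip(weights, acts)) + bias
    prob = 1.0 / (1.0 + math.exp(-logit))
    return prob, acts

# Toy usage: two patch embeddings, two prototypes, hand-set weights.
patches = [[1.0, 0.0], [0.0, 1.0]]
prototypes = [[1.0, 0.0], [0.0, 1.0]]
prob, acts = predict_malignancy(patches, prototypes, weights=[2.0, -2.0])
```

Because the prediction is a weighted sum of prototype activations, the explanation ("this region resembles prototype k, which contributes w_k to the malignancy score") falls directly out of the forward pass.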


