Investigating Poor Performance Regions of Black Boxes: LIME-based Exploration in Sepsis Detection

06/21/2023
by Mozhgan Salimiparsa, et al.

Interpreting machine learning models remains a challenge, hindering their adoption in clinical settings. This paper proposes leveraging Local Interpretable Model-Agnostic Explanations (LIME) to provide interpretable descriptions of black box classification models in high-stakes sepsis detection. By analyzing misclassified instances, significant features contributing to suboptimal performance are identified. The analysis reveals regions where the classifier performs poorly, allowing the calculation of error rates within these regions. This knowledge is crucial for cautious decision-making in sepsis detection and other critical applications. The proposed approach is demonstrated using the eICU dataset, effectively identifying and visualizing regions where the classifier underperforms. By enhancing interpretability, our method promotes the adoption of machine learning models in clinical practice, empowering informed decision-making and mitigating risks in critical scenarios.
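
As a rough illustration of the approach described above, the sketch below applies LIME to the misclassified instances of a tabular classifier and tallies the feature conditions that dominate its explanations; the most frequent conditions delimit candidate poor-performance regions, whose error rate can then be measured directly. This is a minimal sketch, not the paper's implementation: the classifier clf, the arrays X_train, X_test, y_test, and the feature_names list are assumed to exist, and the final region check uses a hardcoded feature index and threshold in place of parsing LIME's condition strings.

    import numpy as np
    from collections import Counter
    from lime.lime_tabular import LimeTabularExplainer

    # Assumed inputs: a fitted classifier `clf` with predict_proba,
    # numpy arrays X_train, X_test, y_test, and a feature_names list.
    explainer = LimeTabularExplainer(
        X_train,
        feature_names=feature_names,
        class_names=["no_sepsis", "sepsis"],
        mode="classification",
    )

    # 1) Collect the indices of misclassified test instances.
    y_pred = clf.predict(X_test)
    wrong = np.where(y_pred != y_test)[0]

    # 2) Explain each misclassified instance and count the feature
    #    conditions (e.g. "lactate > 2.10") that LIME weights most.
    condition_counts = Counter()
    for i in wrong:
        exp = explainer.explain_instance(
            X_test[i], clf.predict_proba, num_features=5
        )
        for condition, _weight in exp.as_list():
            condition_counts[condition] += 1

    top_condition, count = condition_counts.most_common(1)[0]
    print(f"most frequent condition among errors: {top_condition} ({count}x)")

    # 3) Estimate the error rate inside one candidate region. A real
    #    pipeline would parse `top_condition`; here the feature index
    #    and threshold are placeholder values for illustration.
    j, t = 0, 2.1
    in_region = X_test[:, j] > t
    error_rate = np.mean(y_pred[in_region] != y_test[in_region])
    print(f"error rate inside region: {error_rate:.2%}")

Comparing such an in-region error rate against the classifier's overall error rate is what flags the regions where predictions should be treated with extra caution.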


