Provably Robust Model-Centric Explanations for Critical Decision-Making

10/26/2021
by Cecilia G. Morales et al.

We recommend using a model-centric, Boolean Satisfiability (SAT) formalism to obtain useful explanations of trained model behavior, different from and complementary to what can be gleaned from LIME and SHAP, popular data-centric explanation tools in Artificial Intelligence (AI). We compare and contrast these methods and show that data-centric methods may yield brittle explanations of limited practical utility. The model-centric framework, by contrast, can offer actionable insights into the risks of using AI models in practice. For critical applications of AI, split-second decision-making is best informed by robust explanations that are invariant to properties of the data, a capability offered by model-centric frameworks.
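The contrast between the two styles of explanation is easiest to see in code. Below is a minimal, hypothetical sketch of the data-centric approach: LIME explanations for a toy scikit-learn classifier, re-queried on a slightly perturbed copy of the same input. The model, data, and feature names are illustrative assumptions, not artifacts from the paper; the point is only that perturbation-based explanations can shift under small input changes.

```python
# A minimal sketch of data-centric explanation brittleness (illustrative only).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # label depends on f0, f1 only

model = RandomForestClassifier(random_state=0).fit(X, y)
explainer = LimeTabularExplainer(
    X, feature_names=[f"f{i}" for i in range(4)],
    class_names=["0", "1"], mode="classification",
)

x = X[0]
for eps in (0.0, 0.01):  # re-explain a tiny perturbation of the same point
    exp = explainer.explain_instance(x + eps, model.predict_proba, num_features=4)
    print(eps, exp.as_list())  # feature weights/rankings can shift
```

A model-centric check, by contrast, reasons over the model itself. The sketch below uses PySAT (this is an assumed toy encoding, not necessarily the authors' formalism) to test whether a candidate explanation logically entails the prediction of a tiny Boolean model f(x1, x2, x3) = x1 AND (x2 OR x3). If the solver finds no assignment consistent with the explanation in which the output flips, the explanation holds for every input, independent of any dataset.

```python
# A hedged sketch of a model-centric, SAT-based explanation check using PySAT.
# The tiny Boolean "model" and its CNF encoding are illustrative assumptions.
from pysat.solvers import Glucose3

# Variables: 1=x1, 2=x2, 3=x3, 4=a (a <-> x2 OR x3), 5=y (y <-> x1 AND a)
CLAUSES = [
    [-4, 2, 3], [-2, 4], [-3, 4],   # Tseitin encoding of a <-> (x2 OR x3)
    [-5, 1], [-5, 4], [-1, -4, 5],  # Tseitin encoding of y <-> (x1 AND a)
]

def entails_output(explanation, output_lit):
    """True iff the explanation literals force the model's output:
    UNSAT under (explanation AND NOT output) means no counterexample exists."""
    with Glucose3(bootstrap_with=CLAUSES) as solver:
        return not solver.solve(assumptions=explanation + [-output_lit])

print(entails_output([1, 2], 5))  # True: x1=1, x2=1 fixes y=1 for any x3
print(entails_output([2], 5))     # False: x2=1 alone does not determine y
```

Because the SAT query quantifies over all inputs consistent with the explanation, its verdict does not depend on sampling or on any particular data point, which is the invariance property the abstract highlights.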


Related research:

- Data-centric Artificial Intelligence (12/22/2022)
- Generating Process-Centric Explanations to Enable Contestability in Algorithmic Decision-Making: Challenges and Opportunities (05/01/2023)
- Modelling GDPR-Compliant Explanations for Trustworthy AI (09/09/2021)
- Efficient XAI Techniques: A Taxonomic Survey (02/07/2023)
- Faithful and Plausible Explanations of Medical Code Predictions (04/16/2021)
- Explaining Explanations to Society (01/19/2019)
