Survey of explainable machine learning with visual and granular methods beyond quasi-explanations

09/21/2020
by   Boris Kovalerchuk, et al.

This paper surveys visual methods of explainability of Machine Learning (ML), with a focus on moving from the quasi-explanations that dominate ML to domain-specific explanations supported by granular visuals. ML interpretation is fundamentally a human activity, and visual methods are more readily interpretable. While efficient visual representations of high-dimensional data exist, the loss of interpretable information, occlusion, and clutter remain challenges that lead to quasi-explanations. We start with the motivation for explainability and its different definitions. The paper draws a clear distinction between quasi-explanations and domain-specific explanations, and between an explainable ML model and an actually explained one, distinctions that are critically important for the explainability domain. We discuss the foundations of interpretability, review visual interpretability, and present several types of methods for visualizing ML models. Next, we present methods for visual discovery of ML models, focusing on interpretable models based on the recently introduced concept of General Line Coordinates (GLC). These methods take the critical step of producing explanations that are not merely quasi-explanations but domain-specific visual explanations, while the methods themselves remain domain-agnostic. The paper includes results on the theoretical limits of preserving n-D distances in lower dimensions, based on the Johnson-Lindenstrauss lemma, point-to-point and point-to-graph GLC approaches, and real-world case studies. The paper also covers traditional visual methods for understanding ML models, including deep learning and time series models. We show that many of these methods are quasi-explanations and need further enhancement to become domain-specific explanations. We conclude by outlining open problems and current research frontiers.
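The Johnson-Lindenstrauss lemma mentioned above states that n points in high-dimensional space can be projected into roughly O(log n / ε²) dimensions while distorting every pairwise distance by at most a factor of (1 ± ε). A minimal NumPy sketch (the sizes, ε, and random-projection construction are illustrative assumptions, not taken from the paper) of checking this empirically:

```python
import numpy as np

rng = np.random.default_rng(0)

# 1000 points in 2000-D space (illustrative sizes)
n, d = 1000, 2000
X = rng.normal(size=(n, d))

# JL bound: k >= 4 ln(n) / (eps^2/2 - eps^3/3) dimensions suffice
# to preserve all pairwise distances within a (1 +/- eps) factor.
eps = 0.25
k = int(np.ceil(4 * np.log(n) / (eps**2 / 2 - eps**3 / 3)))

# Random Gaussian projection, scaled so expected squared norms are preserved
P = rng.normal(size=(d, k)) / np.sqrt(k)
Y = X @ P

# Measure distortion on a random sample of distinct point pairs
i, j = rng.integers(0, n, size=(2, 200))
mask = i != j
i, j = i[mask], j[mask]
orig = np.linalg.norm(X[i] - X[j], axis=1)
proj = np.linalg.norm(Y[i] - Y[j], axis=1)
ratio = proj / orig
print(f"distance ratios in [{ratio.min():.3f}, {ratio.max():.3f}]")
```

With these parameters k is about 1061, and the printed ratios fall well inside [1 − ε, 1 + ε], illustrating why moderate dimensionality reduction can preserve distance structure, and why the paper's interest lies in the limits of pushing this all the way down to 2-D for visualization, where such guarantees no longer hold.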


