Explaining with Greater Support: Weighted Column Sampling Optimization for q-Consistent Summary-Explanations

by Chen Peng, et al.

Machine learning systems have been extensively used as auxiliary tools in domains that require critical decision-making, such as healthcare and criminal justice. The explainability of decisions is crucial for users to develop trust in these systems. In recent years, the globally-consistent rule-based summary-explanation and its max-support (MS) problem have been proposed, which can provide explanations for particular decisions along with useful statistics of the dataset. However, globally-consistent summary-explanations with limited complexity typically have small supports, if any exist at all. In this paper, we propose a relaxed version of the summary-explanation, the q-consistent summary-explanation, which aims to achieve greater support at the cost of slightly lower consistency. The challenge is that the max-support problem of the q-consistent summary-explanation (MSqC) is much more complex than the original MS problem, leading to excessive solution times with standard branch-and-bound solvers. To improve solution-time efficiency, we propose the weighted column sampling (WCS) method, which solves smaller sampled problems by selecting variables according to their simplified increase support (SIS) values. Experiments verify that solving MSqC with the proposed SIS-based WCS method is not only more scalable, but also yields solutions with greater support and better global extrapolation effectiveness.
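The core idea of weighted column sampling, as summarized above, is to draw a subset of variables (columns) with probability proportional to a per-column score, and solve the resulting smaller subproblem. The sketch below illustrates only that sampling step; the score values, function names, and subproblem size are hypothetical placeholders, and the actual SIS computation and MSqC formulation are defined in the paper itself.

```python
import random

# Hypothetical SIS scores for candidate columns (illustrative values only;
# the paper defines how simplified increase support is actually computed).
sis_scores = {"col_a": 0.9, "col_b": 0.4, "col_c": 0.7, "col_d": 0.1, "col_e": 0.6}

def sample_columns(scores, k, seed=0):
    """Draw k distinct columns, weighting each draw by its SIS value."""
    rng = random.Random(seed)
    cols, weights = zip(*scores.items())
    chosen = set()
    while len(chosen) < k:
        # Higher-SIS columns are proportionally more likely to be picked.
        chosen.add(rng.choices(cols, weights=weights, k=1)[0])
    return sorted(chosen)

# A smaller subproblem would then be built over only these sampled columns.
subproblem_cols = sample_columns(sis_scores, k=3)
print(subproblem_cols)
```

In the WCS scheme this sampling would be repeated across subproblems, so that high-SIS columns appear often while low-SIS columns are not excluded entirely.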




