On Computing Probabilistic Explanations for Decision Trees

by Marcelo Arenas, et al.

Formal XAI (explainable AI) is a growing area that focuses on computing explanations with mathematical guarantees for the decisions made by ML models. Within formal XAI, one of the most studied cases is that of explaining the choices taken by decision trees, as they are traditionally deemed one of the most interpretable classes of models. Recent work has focused on studying the computation of "sufficient reasons", a kind of explanation in which, given a decision tree T and an instance x, one explains the decision T(x) by providing a subset y of the features of x such that for any other instance z compatible with y, it holds that T(z) = T(x); intuitively, the features in y are already enough to fully justify the classification of x by T. It has been argued, however, that sufficient reasons constitute a restrictive notion of explanation, and thus the community has started to study their probabilistic counterpart, in which one requires that the probability of T(z) = T(x) be at least some value δ ∈ (0, 1], where z is a random instance that is compatible with y. Our paper settles the computational complexity of δ-sufficient-reasons over decision trees, showing that both (1) finding δ-sufficient-reasons that are minimal in size, and (2) finding δ-sufficient-reasons that are inclusion-wise minimal, do not admit polynomial-time algorithms (unless P = NP). This is in stark contrast with the deterministic case (δ = 1), where inclusion-wise minimal sufficient-reasons are easy to compute. In doing so, we answer two open problems originally raised by Izza et al. On the positive side, we identify structural restrictions of decision trees that make the problem tractable, and show how SAT solvers might be able to tackle these problems in practical settings.
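To make the definition concrete, here is a minimal sketch of a δ-sufficient-reason check on a toy decision tree. The tree encoding, the `delta_sufficient` helper, and the uniform distribution over completions are all assumptions for illustration; the paper's hardness results concern finding minimal such subsets, whereas this brute-force check only verifies a given one.

```python
import itertools

# A toy decision tree over binary features, encoded as nested dicts:
# internal node: {"feature": i, "lo": subtree, "hi": subtree}; leaf: 0 or 1.
def classify(tree, x):
    while isinstance(tree, dict):
        tree = tree["hi"] if x[tree["feature"]] == 1 else tree["lo"]
    return tree

def delta_sufficient(tree, x, fixed, delta):
    """Check whether fixing the features in `fixed` to their values in x
    guarantees T(z) = T(x) with probability >= delta, where z is drawn
    uniformly over all completions of the partial instance.
    Brute force over the free features (exponential; illustration only)."""
    free = [i for i in range(len(x)) if i not in fixed]
    target = classify(tree, x)
    agree = 0
    for bits in itertools.product([0, 1], repeat=len(free)):
        z = list(x)
        for i, b in zip(free, bits):
            z[i] = b
        agree += classify(tree, z) == target
    return agree / (2 ** len(free)) >= delta

# Tiny example: T computes x0 AND x1 over three features.
T = {"feature": 0, "lo": 0, "hi": {"feature": 1, "lo": 0, "hi": 1}}
x = [1, 1, 0]
print(delta_sufficient(T, x, {0, 1}, 1.0))  # True: a deterministic sufficient reason
print(delta_sufficient(T, x, {0}, 0.5))     # True: fixing only x0 yields T(z)=1 half the time
```

With δ = 1 this recovers the deterministic notion of a sufficient reason; lowering δ admits smaller subsets such as {x0}, which is exactly the relaxation whose minimization the paper shows to be intractable.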


