Interpretable Selection and Visualization of Features and Interactions Using Bayesian Forests

06/08/2015
by Viktoriya Krakovna, et al.

It is becoming increasingly important for machine learning methods to make predictions that are interpretable as well as accurate. In many practical applications, it is also of interest to know which features and feature interactions are relevant to the prediction task. We present a novel method, the Selective Bayesian Forest Classifier, that strikes a balance between predictive power and interpretability by simultaneously performing classification, feature selection, feature-interaction detection, and visualization. It builds parsimonious yet flexible models using tree-structured Bayesian networks and samples an ensemble of such models via Markov chain Monte Carlo. Feature selection is built in by dividing the trees into two groups according to their relevance to the outcome of interest. Our method performs competitively on classification and feature-selection benchmarks in both low and high dimensions, and includes a visualization tool that provides insight into the relevant features and interactions.
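The abstract's core idea is to sample "relevant vs. noise" assignments for features with MCMC and read relevance off the posterior. As a loose, hypothetical sketch of that idea only, here is a toy Metropolis sampler over binary relevance indicators for a naive-Bayes-style model with Beta-Bernoulli conjugate scores. This is not the paper's SBFC algorithm (which samples tree-structured Bayesian networks that also capture interactions); every name and modeling choice below is illustrative.

```python
import math
import random

def log_marginal(n1, n0, alpha=1.0):
    # Log marginal likelihood of n1 ones and n0 zeros under a
    # Bernoulli likelihood with a Beta(alpha, alpha) prior.
    return (math.lgamma(alpha + n1) + math.lgamma(alpha + n0)
            - math.lgamma(2 * alpha + n1 + n0)
            + math.lgamma(2 * alpha) - 2 * math.lgamma(alpha))

def log_score(X, y, gamma):
    # Score a relevance assignment: features with gamma[j] = 1 get
    # class-conditional Bernoulli models, the rest one pooled model.
    total = 0.0
    for j, g in enumerate(gamma):
        col = [row[j] for row in X]
        if g:
            for c in (0, 1):
                vals = [v for v, lab in zip(col, y) if lab == c]
                total += log_marginal(sum(vals), len(vals) - sum(vals))
        else:
            total += log_marginal(sum(col), len(col) - sum(col))
    return total

def mcmc_select(X, y, iters=2000, seed=0):
    # Metropolis sampler that flips one relevance indicator at a time
    # and returns per-feature posterior inclusion frequencies.
    rng = random.Random(seed)
    p = len(X[0])
    gamma = [0] * p
    cur = log_score(X, y, gamma)
    incl = [0] * p
    for _ in range(iters):
        j = rng.randrange(p)
        gamma[j] ^= 1
        prop = log_score(X, y, gamma)
        if prop > cur or rng.random() < math.exp(prop - cur):
            cur = prop          # accept the flip
        else:
            gamma[j] ^= 1       # reject: flip back
        for k in range(p):
            incl[k] += gamma[k]
    return [c / iters for c in incl]

# Synthetic data: features 0 and 1 are noisy copies of the label,
# features 2-4 are pure noise.
rng = random.Random(1)
y = [rng.randrange(2) for _ in range(300)]
X = [[lab ^ (rng.random() < 0.1), lab ^ (rng.random() < 0.2),
      rng.randrange(2), rng.randrange(2), rng.randrange(2)] for lab in y]
probs = mcmc_select(X, y)
```

The inclusion frequencies in `probs` should be near 1 for the two label-linked features and low for the noise features, mirroring (in miniature) how SBFC's two-group split separates relevant trees from noise.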


Related research:
- Cross Feature Selection to Eliminate Spurious Interactions and Single Feature Dominance in Explainable Boosting Machines (07/17/2023)
- Unboxing Tree Ensembles for Interpretability: A Hierarchical Visualization Tool and a Multivariate Optimal Re-built Tree (02/15/2023)
- Comparing Interpretability and Explainability for Feature Selection (05/11/2021)
- Feature Interactions in XGBoost (07/11/2020)
- Random Subspace with Trees for Feature Selection Under Memory Constraints (09/04/2017)
- A Bayesian Machine Scientist to Aid in the Solution of Challenging Scientific Problems (04/25/2020)
- Interpretability with Full Complexity by Constraining Feature Information (11/30/2022)
