Simplifying Random Forests: On the Trade-off between Interpretability and Accuracy

11/11/2019
by Michael Rapp, et al.

We analyze the trade-off between model complexity and accuracy for random forests by breaking the trees up into individual classification rules and selecting a subset of them. We show experimentally that a few rules already suffice to achieve an accuracy close to that of the original model. Moreover, our results indicate that in many cases this procedure yields simpler models that clearly outperform the original ones.
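The abstract describes the approach only at a high level. Purely as an illustration, the following minimal Python sketch (using scikit-learn, which the paper does not necessarily use) decomposes a fitted random forest into its root-to-leaf classification rules and keeps a small subset of them. The selection heuristic here (rank rules by training accuracy on the examples they cover) and all helper names (extract_rules, covers, select_rules, predict) are assumptions for illustration, not the authors' actual method.

import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier


def extract_rules(forest):
    """Turn every root-to-leaf path of every tree into one classification rule.

    A rule is (conditions, predicted_class); each condition is a triple
    (feature_index, threshold, goes_left), where goes_left=True means
    x[feature_index] <= threshold.
    """
    rules = []
    for estimator in forest.estimators_:
        tree = estimator.tree_
        def recurse(node, conditions):
            if tree.children_left[node] == -1:  # leaf: emit the finished rule
                rules.append((conditions, int(np.argmax(tree.value[node]))))
                return
            feat, thr = tree.feature[node], tree.threshold[node]
            recurse(tree.children_left[node], conditions + [(feat, thr, True)])
            recurse(tree.children_right[node], conditions + [(feat, thr, False)])
        recurse(0, [])
    return rules


def covers(rule, X):
    """Boolean mask of the rows of X satisfying every condition of the rule."""
    mask = np.ones(len(X), dtype=bool)
    for feat, thr, goes_left in rule[0]:
        mask &= (X[:, feat] <= thr) if goes_left else (X[:, feat] > thr)
    return mask


def select_rules(rules, X, y, k):
    """Illustrative heuristic: keep the k rules with the highest accuracy
    on the training rows they cover (not the paper's criterion)."""
    def score(rule):
        mask = covers(rule, X)
        return (y[mask] == rule[1]).mean() if mask.any() else 0.0
    return sorted(rules, key=score, reverse=True)[:k]


def predict(rules, X, n_classes, default_class):
    """Majority vote among the rules that fire; default where none fires."""
    votes = np.zeros((len(X), n_classes), dtype=int)
    for rule in rules:
        votes[covers(rule, X), rule[1]] += 1
    preds = votes.argmax(axis=1)
    preds[votes.sum(axis=1) == 0] = default_class
    return preds


X, y = load_breast_cancer(return_X_y=True)
forest = RandomForestClassifier(n_estimators=10, random_state=0).fit(X, y)
rules = extract_rules(forest)
subset = select_rules(rules, X, y, k=10)
preds = predict(subset, X, n_classes=2, default_class=int(np.bincount(y).argmax()))
print(f"{len(rules)} rules extracted; accuracy with 10 of them: {(preds == y).mean():.3f}")

In practice one would score candidate rules on held-out data rather than the training set; the sketch is only meant to make the decompose-and-select idea concrete.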

Related research

08/19/2019  SIRUS: making random forests interpretable
State-of-the-art learning algorithms, such as random forests or neural n...

09/22/2021  Minimax Rates for STIT and Poisson Hyperplane Random Forests
In [12], Mourtada, Gaïffas and Scornet showed that, under proper tuning ...

10/19/2021  Improving the Accuracy-Memory Trade-Off of Random Forests Via Leaf-Refinement
Random Forests (RF) are among the state-of-the-art in many machine learn...

07/22/2017  pre: An R Package for Fitting Prediction Rule Ensembles
Prediction rule ensembles (PREs) are sparse collections of rules, offeri...

07/11/2020  Towards Robust Classification with Deep Generative Forests
Decision Trees and Random Forests are among the most widely used machine...

02/26/2021  MDA for random forests: inconsistency, and a practical solution via the Sobol-MDA
Variable importance measures are the main tools to analyze the black-box...

11/01/2019  Randomization as Regularization: A Degrees of Freedom Explanation for Random Forest Success
Random forests remain among the most popular off-the-shelf supervised ma...
