Decision trees compensate for model misspecification

02/08/2023
by Hugh Panton et al.

The best-performing models in ML are not interpretable. If we can explain why they outperform, we may be able to replicate these mechanisms and obtain both interpretability and performance. One example is decision trees and their descendants, gradient boosting machines (GBMs). These perform well in the presence of complex interactions, with tree depth governing the order of interactions. However, interactions cannot fully account for the depth of trees found in practice. We confirm five alternative hypotheses about the role of tree depth in performance in the absence of true interactions, and present results from experiments on a battery of datasets. Part of the success of tree models is due to their robustness to various forms of misspecification. We present two methods for robust generalized linear models (GLMs) addressing the composite and mixed response scenarios.
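The claim that tree depth governs the order of interactions can be illustrated with a small experiment. The sketch below (not the authors' code; it assumes scikit-learn and a synthetic target that is a pure pairwise interaction) fits GBMs of depth 1 and depth 2: depth-1 trees (stumps) yield a purely additive model and cannot capture the interaction, while depth-2 trees can.

```python
# Sketch: tree depth governs the order of interactions a GBM can capture.
# Depth-1 trees (stumps) produce an additive model; depth-2 trees can
# represent pairwise interactions. Illustrative only, using scikit-learn.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(2000, 2))
y = X[:, 0] * X[:, 1]  # pure pairwise interaction, no main effects

X_test = rng.uniform(-1, 1, size=(500, 2))
y_test = X_test[:, 0] * X_test[:, 1]

for depth in (1, 2):
    gbm = GradientBoostingRegressor(max_depth=depth, n_estimators=300,
                                    learning_rate=0.1, random_state=0)
    gbm.fit(X, y)
    print(f"max_depth={depth}: test R^2 = {gbm.score(X_test, y_test):.3f}")
```

On this target the depth-1 model's test R^2 stays near zero while the depth-2 model's approaches one, matching the interaction-order interpretation of depth.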


Related research

06/11/2023 · Improving the Validity of Decision Trees as Explanations
In classification and forecasting with tabular data, one often utilizes ...

07/11/2020 · Feature Interactions in XGBoost
In this paper, we investigate how feature interactions can be identified...

02/14/2023 · Bounds on Depth of Decision Trees Derived from Decision Rule Systems
Systems of decision rules and decision trees are widely used as a means ...

02/14/2021 · Connecting Interpretability and Robustness in Decision Trees through Separation
Recent research has recognized interpretability and robustness as essent...

10/20/2022 · Improving Data Quality with Training Dynamics of Gradient Boosting Decision Trees
Real world datasets contain incorrectly labeled instances that hamper th...

03/31/2023 · DeforestVis: Behavior Analysis of Machine Learning Models with Surrogate Decision Stumps
As the complexity of machine learning (ML) models increases and the appl...

12/28/2018 · Improving the Interpretability of Deep Neural Networks with Knowledge Distillation
Deep Neural Networks have achieved huge success at a wide spectrum of ap...
