Better Model Selection with a new Definition of Feature Importance

by Fan Fang, et al.

Feature importance measures how crucial each input feature is to a model's predictions. It is widely used in feature engineering, model selection, and explainable artificial intelligence (XAI). In this paper, we propose a new tree-model explanation approach for model selection. Our novel concept leverages the Coefficient of Variation of a feature's weight (measured by the feature's contribution to the prediction) to capture the dispersion of importance across samples. Extensive experimental results show that our feature explanation outperforms the standard cross-validation method in model selection, both in time efficiency and in accuracy.
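The abstract does not give the exact formula, but the core quantity it describes, the Coefficient of Variation (CV = standard deviation divided by mean) of a feature's per-sample contribution, can be sketched as follows. This is a minimal illustration assuming contributions are supplied as a samples-by-features matrix (e.g. SHAP-style values from a tree model) and that the CV is taken over contribution magnitudes; the function name `feature_cv` and the toy data are hypothetical, not from the paper.

```python
import numpy as np

def feature_cv(contributions):
    """Coefficient of Variation of each feature's per-sample contribution.

    contributions: (n_samples, n_features) array of per-sample
    feature contributions (e.g. SHAP-style values).
    Returns a (n_features,) array; a lower CV suggests the feature's
    importance is stable across samples.
    """
    mag = np.abs(contributions)
    mean = mag.mean(axis=0)
    std = mag.std(axis=0)
    # Guard against features that never contribute (zero mean).
    return np.where(mean > 0, std / np.maximum(mean, 1e-12), np.inf)

# Toy example: feature 0 contributes consistently, feature 1 erratically.
rng = np.random.default_rng(0)
contrib = np.column_stack([
    rng.normal(1.0, 0.1, size=100),   # low-dispersion contribution
    rng.normal(1.0, 2.0, size=100),   # high-dispersion contribution
])
cv = feature_cv(contrib)
print(cv)  # cv[0] is much smaller than cv[1]
```

Under this reading, a model whose important features have low CV (stable contributions across samples) would be preferred during model selection, avoiding the repeated refits that cross-validation requires.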


