GAM(e) changer or not? An evaluation of interpretable machine learning models based on additive model constraints

04/19/2022
by   Patrick Zschech, et al.

The number of information systems (IS) studies dealing with explainable artificial intelligence (XAI) is currently exploding, as the field demands more transparency about the internal decision logic of machine learning (ML) models. However, most techniques subsumed under XAI provide post-hoc analytical explanations, which have to be considered with caution, as they only approximate the underlying ML model. Therefore, our paper investigates a series of intrinsically interpretable ML models and discusses their suitability for the IS community. More specifically, we focus on advanced extensions of generalized additive models (GAMs), in which predictors are modeled independently in a non-linear way to generate shape functions that can capture arbitrary patterns yet remain fully interpretable. In our study, we evaluate the predictive quality of five GAMs against six traditional ML models and assess their visual outputs for model interpretability. On this basis, we discuss their merits and limitations and derive design implications for further improvements.
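To make the idea of independently modeled, non-linear shape functions concrete, the following is a minimal sketch of a GAM fitted by classic backfitting, using a simple quantile-bin-averaging smoother per feature. This is illustrative only and is not the authors' method: the GAM extensions evaluated in the paper rely on more sophisticated learners (e.g. splines or boosted trees), and all function names here are our own.

```python
import numpy as np

def bin_smooth(x, r, n_bins=20):
    """Smooth residuals r against feature x via per-quantile-bin means."""
    edges = np.quantile(x, np.linspace(0, 1, n_bins + 1))
    idx = np.clip(np.searchsorted(edges, x, side="right") - 1, 0, n_bins - 1)
    means = np.zeros(n_bins)
    for b in range(n_bins):
        mask = idx == b
        if mask.any():
            means[b] = r[mask].mean()
    return means[idx]

def fit_gam(X, y, n_iter=20):
    """Backfitting: each feature j gets its own shape function f_j(x_j)."""
    n, p = X.shape
    intercept = y.mean()
    f = np.zeros((n, p))
    for _ in range(n_iter):
        for j in range(p):
            # Partial residual: everything the other shape functions miss.
            partial = y - intercept - f.sum(axis=1) + f[:, j]
            f[:, j] = bin_smooth(X[:, j], partial)
            f[:, j] -= f[:, j].mean()  # centre each shape for identifiability
    return intercept, f

# Synthetic additive data: y = sin(x1) + x2^2 + noise.
rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(2000, 2))
y = np.sin(X[:, 0]) + X[:, 1] ** 2 + rng.normal(0, 0.1, 2000)

intercept, f = fit_gam(X, y)
pred = intercept + f.sum(axis=1)
print(round(float(np.mean((y - pred) ** 2)), 3))
```

Because each column of `f` depends on a single feature, it can be plotted directly against that feature — exactly the kind of visual shape-function output the paper assesses for interpretability.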

Related research

- Local Interpretable Model Agnostic Shap Explanations for machine learning models (10/10/2022)
- Augmented cross-selling through explainable AI – a case from energy retailing (08/24/2022)
- Optimizing Binary Decision Diagrams with MaxSAT for classification (03/21/2022)
- Interpretable Learning-to-Rank with Generalized Additive Models (05/06/2020)
- On Interpretability and Similarity in Concept-Based Machine Learning (02/25/2021)
- Interpretable AI-based Large-scale 3D Pathloss Prediction Model for enabling Emerging Self-Driving Networks (01/30/2022)
- Interpretable Data-Based Explanations for Fairness Debugging (12/17/2021)
