Altruist: Argumentative Explanations through Local Interpretations of Predictive Models

10/15/2020
by Ioannis Mollas, et al.

Interpretable machine learning is an emerging field that offers ways to gain insight into the rationale behind machine learning models. It has earned its place on the machine learning map by suggesting ways to tackle key ethical and societal issues. However, existing interpretability techniques are far from comprehensible and explainable to the end user. Another key issue in this field is the lack of evaluation and selection criteria, which makes it difficult for end users to choose the interpretation technique most appropriate for their use case. In this study, we introduce a meta-explanation methodology that provides truthful feature-importance interpretations to the end user through argumentation. At the same time, the methodology can serve as an evaluation and selection tool for multiple feature-importance-based interpretation techniques.
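The abstract does not spell out the method, but the truthfulness idea it describes can be sketched as a simple directional test: a feature-importance score is treated as truthful if small perturbations of that feature move the model's output in the direction the score implies, and the candidate interpretation with the fewest untruthful scores is selected. The snippet below is a minimal, hypothetical sketch of that idea, not the paper's actual implementation; the names `untruthful_features` and `select_interpretation`, the perturbation scheme, and the `model_predict` interface are all illustrative assumptions.

```python
import numpy as np

def untruthful_features(model_predict, x, importances, noise_scale=0.1):
    """Count features whose stated importance disagrees with the model's
    local behaviour (a hypothetical truthfulness test, not the paper's code).

    model_predict: callable mapping a 2-D array to prediction scores
    x: 1-D instance being explained
    importances: 1-D feature-importance vector from some explainer
    """
    base = model_predict(x.reshape(1, -1))[0]
    untruthful = []
    for i, z in enumerate(importances):
        if z == 0:  # zero importance makes no directional claim here
            continue
        step = noise_scale * (abs(x[i]) if x[i] != 0 else 1.0)
        x_up, x_down = x.copy(), x.copy()
        x_up[i] += step
        x_down[i] -= step
        d_up = model_predict(x_up.reshape(1, -1))[0] - base
        d_down = model_predict(x_down.reshape(1, -1))[0] - base
        # A positively important feature should push the score up when
        # increased and down when decreased; mirrored for negative z.
        expected_sign = np.sign(z)
        if np.sign(d_up) == -expected_sign or np.sign(d_down) == expected_sign:
            untruthful.append(i)
    return untruthful

def select_interpretation(model_predict, x, candidates):
    """Pick the candidate importance vector with the fewest untruthful
    features, i.e. use truthfulness as the selection criterion."""
    return min(candidates,
               key=lambda z: len(untruthful_features(model_predict, x, z)))
```

In practice, the `candidates` could be importance vectors produced for the same instance by several explainers (e.g., LIME- or SHAP-style techniques), which is how a truthfulness test doubles as the evaluation and selection tool described in the abstract.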

Related Research

12/07/2022
Truthful Meta-Explanations for Local Interpretability of Machine Learning Models
Automated Machine Learning-based systems' integration into a wide range ...

05/27/2021
Intellige: A User-Facing Model Explainer for Narrative Explanations
Predictive machine learning models often lack interpretability, resultin...

09/10/2019
NormLime: A New Feature Importance Metric for Explaining Deep Neural Networks
The problem of explaining deep learning models, and model predictions ge...

07/03/2022
Interpretable by Design: Learning Predictors by Composing Interpretable Queries
There is a growing concern about typically opaque decision-making with h...

09/11/2020
Towards a More Reliable Interpretation of Machine Learning Outputs for Safety-Critical Systems using Feature Importance Fusion
When machine learning supports decision-making in safety-critical system...

03/02/2021
Have We Learned to Explain?: How Interpretability Methods Can Learn to Encode Predictions in their Interpretations
While the need for interpretable machine learning has been established, ...

04/13/2021
LioNets: A Neural-Specific Local Interpretation Technique Exploiting Penultimate Layer Information
Artificial Intelligence (AI) has a tremendous impact on the unexpected g...
