Recurrence-Aware Long-Term Cognitive Network for Explainable Pattern Classification

07/07/2021
by Gonzalo Nápoles, et al.

Machine learning solutions for pattern classification problems are now widely deployed in society and industry. However, the lack of transparency and accountability of the most accurate models often hinders their meaningful and safe use, so there is a clear need for explainable artificial intelligence mechanisms. Model-agnostic methods exist that summarize feature contributions, but their interpretability is limited to individual predictions made by black-box models. An open challenge is to develop models that are intrinsically interpretable and produce their own explanations, even for families of models traditionally considered black boxes, such as (recurrent) neural networks. In this paper, we propose a Long-Term Cognitive Network (LTCN)-based model for interpretable pattern classification of structured data. Our method provides its own explanation mechanism by quantifying the relevance of each feature in the decision process. To support interpretability without sacrificing performance, the model gains flexibility through a quasi-nonlinear reasoning rule that allows the degree of nonlinearity to be controlled. In addition, we propose a recurrence-aware decision model that avoids the issues posed by unique fixed points, together with a deterministic learning method to compute the learnable parameters. Simulations show that our interpretable model achieves performance competitive with state-of-the-art white-box and black-box models.
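The two mechanisms named in the abstract can be sketched in code. The update below, which blends a nonlinear transfer of the current state with the initial activation through a coefficient phi, follows the general quasi-nonlinear formulation from the LTCN literature; the function names, the tanh transfer, and the ridge-regularized least-squares fit are illustrative assumptions, not the paper's exact method.

```python
import numpy as np

# Minimal sketch of the two ideas named in the abstract. All symbols
# (A0, W, phi, the tanh transfer, the ridge solve) are illustrative
# assumptions; the paper defines its own exact formulation.

def reason(A0, W, phi=0.8, steps=5):
    """Quasi-nonlinear reasoning: iterate the recurrent update.

    A0    : (n_samples, n_features) initial activations (the inputs).
    W     : (n_features, n_features) inner weight matrix.
    phi   : coefficient in [0, 1] controlling nonlinearity; phi=1 is
            fully nonlinear, while phi<1 keeps re-injecting A0, so the
            state cannot collapse to an input-independent fixed point.
    steps : number of reasoning iterations.
    """
    A = A0
    for _ in range(steps):
        A = phi * np.tanh(A @ W) + (1.0 - phi) * A0
    return A

def fit_output_weights(H, Y, ridge=1e-3):
    """Deterministic learning: compute the learnable output weights in
    closed form with ridge-regularized least squares (no gradient
    descent, so every run yields the same parameters)."""
    n = H.shape[1]
    return np.linalg.solve(H.T @ H + ridge * np.eye(n), H.T @ Y)

# Hypothetical usage: propagate the inputs, then fit the decision layer.
# X : (n_samples, n_features) features, Y : (n_samples, n_classes) one-hot.
# H = reason(X, W)
# B = fit_output_weights(H, Y)    # predictions: H @ B
```

Because the initial activation re-enters the update at every step whenever phi < 1, the recurrent state cannot converge to a single input-independent fixed point, which is the issue the recurrence-aware design addresses; and since the output weights come from a closed-form solve, training is deterministic.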

