EXplainable Neural-Symbolic Learning (X-NeSyL) methodology to fuse deep learning representations with expert knowledge graphs: the MonuMAI cultural heritage use case

04/24/2021
by   Natalia Díaz Rodríguez, et al.

The latest Deep Learning (DL) models for detection and classification have achieved unprecedented performance over classical machine learning algorithms. However, DL models are black-box methods that are hard to debug, interpret, and certify, and DL alone cannot provide explanations that can be validated by a non-technical audience. In contrast, symbolic AI systems that convert concepts into rules or symbols – such as knowledge graphs – are easier to explain, but they present lower generalisation and scaling capabilities. A very important challenge is therefore to fuse DL representations with expert knowledge. One way to address this challenge, as well as the performance-explainability trade-off, is to leverage the best of both streams without obviating domain expert knowledge. We tackle this problem by assuming that the symbolic knowledge is expressed in the form of a domain expert knowledge graph. We present the eXplainable Neural-symbolic learning (X-NeSyL) methodology, designed to learn both symbolic and deep representations, together with an explainability metric to assess the level of alignment between machine and human expert explanations. The ultimate objective is to fuse DL representations with expert domain knowledge during the learning process so that it serves as a sound basis for explainability. The X-NeSyL methodology involves two concrete notions of explanation, at inference and training time respectively: 1) EXPLANet (Expert-aligned eXplainable Part-based cLAssifier NETwork Architecture), a compositional CNN that makes use of symbolic representations, and 2) SHAP-Backprop, an explainable-AI-informed training procedure that guides the DL process to align with such symbolic representations in the form of knowledge graphs. We showcase the X-NeSyL methodology on the MonuMAI dataset for monument facade image classification, and demonstrate that our approach improves both explainability and performance.
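To make the two components concrete, the sketch below shows, in PyTorch, one plausible shape for an EXPLANet-style part-based classifier and a SHAP-Backprop-style loss weighting. It is a minimal illustration under stated assumptions, not the paper's released implementation: EXPLANetSketch, ALLOWED_PARTS, misattribution_penalty, training_step, and alpha are hypothetical names; the flattened style-to-parts mapping stands in for the expert knowledge graph; and the per-part SHAP attributions are assumed to be computed externally (e.g. with shap.DeepExplainer).

import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical expert knowledge graph, flattened to "style -> allowed parts",
# e.g. one facade style may be explained by horseshoe arches but not ogee arches.
ALLOWED_PARTS = {
    0: {0, 1, 2},   # style 0 is explained by parts 0, 1, 2
    1: {2, 3},
    2: {1, 4, 5},
}

class EXPLANetSketch(nn.Module):
    """Compositional part-based classifier: a backbone scores architectural
    parts, and the style is predicted from the aggregated part-score vector,
    so the detected parts themselves form the explanation."""
    def __init__(self, num_parts: int, num_styles: int):
        super().__init__()
        # Stand-in for a real part detector (e.g. Faster R-CNN); here a
        # plain CNN pooled down to one score per part.
        self.part_scorer = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, num_parts),
        )
        self.style_head = nn.Linear(num_parts, num_styles)

    def forward(self, images):
        part_scores = self.part_scorer(images)   # (B, num_parts)
        logits = self.style_head(part_scores)    # (B, num_styles)
        return logits, part_scores

def misattribution_penalty(shap_values, labels):
    """SHAP-Backprop-style signal: sum the positive SHAP attribution that
    flows from parts the knowledge graph does not link to the ground-truth
    style. shap_values: (B, num_parts) attributions of each part score."""
    penalties = []
    for sv, y in zip(shap_values, labels):
        disallowed = [p for p in range(sv.shape[0])
                      if p not in ALLOWED_PARTS[int(y)]]
        penalties.append(sv[disallowed].clamp(min=0).sum())
    return torch.stack(penalties)

def training_step(model, images, labels, shap_values, alpha=0.5):
    logits, _ = model(images)
    ce = F.cross_entropy(logits, labels, reduction="none")
    # Upweight examples whose explanation disagrees with the expert graph.
    weights = 1.0 + alpha * misattribution_penalty(shap_values, labels)
    return (weights * ce).mean()

# Example usage with placeholder data:
model = EXPLANetSketch(num_parts=6, num_styles=3)
images = torch.randn(4, 3, 64, 64)
labels = torch.tensor([0, 1, 2, 0])
shap_vals = torch.randn(4, 6)   # placeholder per-part attributions
loss = training_step(model, images, labels, shap_vals)
loss.backward()

The design point this sketch tries to capture is that the classifier only sees the image through part scores, so any attribution method applied to it speaks the same vocabulary as the expert knowledge graph, which is what makes the alignment penalty possible.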


