Semantics of the Black-Box: Can knowledge graphs help make deep learning systems more interpretable and explainable?

10/16/2020
by Manas Gaur, et al.

The recent series of innovations in deep learning (DL) has shown enormous potential to impact individuals and society, both positively and negatively. DL models, leveraging massive computing power and enormous datasets, have significantly outperformed prior benchmarks on increasingly difficult, well-defined research tasks across domains such as computer vision, natural language processing, signal processing, and human-computer interaction. However, the black-box nature of DL models, and their reliance on massive amounts of data condensed into labels and dense representations, poses challenges for the interpretability and explainability of these systems. Furthermore, DL models have not yet demonstrated the ability to effectively utilize the relevant domain knowledge and experience that are critical to human understanding. This capability is missing from early, data-focused approaches, and it motivates knowledge-infused learning and other strategies for incorporating knowledge computationally. This article demonstrates how knowledge, provided in the form of a knowledge graph, can be incorporated into DL methods using one such strategy, knowledge-infused learning. We then discuss how this makes a fundamental difference to the interpretability and explainability of current approaches, and illustrate it with examples from natural language processing applications in healthcare and education.
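To make the idea concrete, here is a minimal, hypothetical sketch of one simple form of knowledge infusion: concatenating a text representation with embeddings of knowledge-graph entities linked to the input. All names, vectors, and the toy entity linker below are illustrative assumptions, not the paper's actual method or data; real systems would use a trained text encoder, learned KG embeddings (e.g., from a medical knowledge graph), and a proper entity linker.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in token embeddings (in practice, from a trained text encoder).
text_emb = {w: rng.normal(size=4) for w in ["patient", "reports", "insomnia"]}

# Stand-in knowledge-graph entity embeddings (in practice, learned from a
# domain knowledge graph; these random values are illustrative only).
kg_emb = {"insomnia": rng.normal(size=4), "sleep_disorder": rng.normal(size=4)}

# Toy entity linker: maps a token to related KG entities (an assumption).
links = {"insomnia": ["insomnia", "sleep_disorder"]}

def infuse(tokens):
    """Shallow knowledge infusion: average the token vectors, then
    concatenate the average of any linked KG entity vectors."""
    t = np.mean([text_emb[w] for w in tokens], axis=0)
    ents = [kg_emb[e] for w in tokens for e in links.get(w, [])]
    k = np.mean(ents, axis=0) if ents else np.zeros(4)
    return np.concatenate([t, k])  # knowledge-infused representation

vec = infuse(["patient", "reports", "insomnia"])
print(vec.shape)  # (8,)
```

Because the KG half of the representation is tied to named entities and relations, a downstream model's behavior can be traced back to explicit domain concepts, which is one route toward the interpretability the article discusses.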


