Neural-Symbolic Computing: An Effective Methodology for Principled Integration of Machine Learning and Reasoning

by Artur d'Avila Garcez, et al.

Current advances in Artificial Intelligence (AI) and machine learning in general, and deep learning in particular, have had unprecedented impact not only across research communities but also in the popular media. At the same time, influential thinkers have raised concerns about the interpretability and accountability of AI. Despite this impact, several works have identified the need for principled knowledge representation and reasoning mechanisms, integrated with deep learning-based systems, to provide sound and explainable models. Neural-symbolic computing aims at integrating, as foreseen by Valiant, two of the most fundamental cognitive abilities: the ability to learn from the environment and the ability to reason from what has been learned. Neural-symbolic computing has been an active research topic for many years, reconciling the robust learning of neural networks with the reasoning and interpretability of symbolic representations. In this paper, we survey recent accomplishments of neural-symbolic computing as a principled methodology for integrated machine learning and reasoning. We illustrate the effectiveness of the approach by outlining its main characteristic: the principled integration of neural learning with symbolic knowledge representation and reasoning, allowing for the construction of explainable AI systems. The insights provided by neural-symbolic computing shed new light on the increasingly prominent need for interpretable and accountable AI systems.
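To make the idea of integrating symbolic knowledge with neural computation concrete, here is a minimal, hypothetical sketch, not taken from the paper: in the spirit of translation approaches such as CILP, a propositional rule can be encoded as a threshold neuron whose weights and bias are set so that the unit fires exactly when the rule's antecedents hold. All names and parameter values below are illustrative assumptions.

```python
def rule_neuron(inputs, weights, bias):
    """Step-activation unit: fires (returns 1) iff the weighted sum exceeds the bias."""
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total > bias else 0

# Encode the rule  C <- A AND B  (illustrative weights/bias, not from the paper):
# with unit weights, a bias of 1.5 means both antecedents must be true for C to fire.
def derive_c(a, b):
    return rule_neuron([a, b], weights=[1.0, 1.0], bias=1.5)

print(derive_c(1, 1))  # both antecedents hold -> 1
print(derive_c(1, 0))  # one antecedent missing -> 0
```

Because the rule is carried in the weights, such a network can both be trained from data and be read back as symbolic knowledge, which is the kind of two-way correspondence between learning and reasoning that neural-symbolic computing aims for.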
