Reduced Implication-bias Logic Loss for Neuro-Symbolic Learning

by Haoyuan He, et al.
Nanjing University

Integrating logical reasoning and machine learning by approximating logical inference with differentiable operators is a widely used technique in Neuro-Symbolic systems. However, some differentiable operators introduce a significant bias during backpropagation and degrade the performance of Neuro-Symbolic learning. In this paper, we reveal that this bias, which we name Implication Bias, is common in loss functions derived from fuzzy logic operators. Furthermore, we propose a simple yet effective method to transform the biased loss functions into Reduced Implication-bias Logic Loss (RILL), addressing the above problem. An empirical study shows that RILL achieves significant improvements over the biased logic loss functions, especially when the knowledge base is incomplete, and remains more robust than the compared methods when labelled data is insufficient.
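To make the bias concrete, here is a minimal sketch of how it arises. It assumes the common Reichenbach fuzzy implication I(a, b) = 1 - a + a·b as the differentiable operator, where a and b are the network's predicted truth values for the antecedent and consequent of a rule a → b; the function names are illustrative, not from the paper.

```python
# Sketch of implication bias in a fuzzy-logic loss.
# Assumption: the rule "a -> b" is relaxed with the Reichenbach
# implication I(a, b) = 1 - a + a*b, a common choice in the literature.

def reichenbach_loss(a, b):
    # Loss encouraging the rule a -> b to hold:
    # 1 - I(a, b) = a * (1 - b), zero when a = 0 or b = 1.
    return a * (1.0 - b)

def grads(a, b):
    # Analytic partial derivatives of the loss above.
    dL_da = 1.0 - b   # pushes the antecedent toward 0 (vacuous truth)
    dL_db = -a        # pushes the consequent toward 1, but scaled by a
    return dL_da, dL_db

# When the network is still unsure (a = b = 0.1), the gradient on the
# antecedent dominates: the cheapest way to "satisfy" a -> b is to make
# the premise false rather than the conclusion true.
da, db = grads(0.1, 0.1)
print(da, abs(db))  # the antecedent gradient is much larger in magnitude
```

Under this relaxation, gradient descent preferentially drives antecedents toward falsehood (satisfying rules vacuously) instead of learning the consequents, which is the degradation the paper attributes to Implication Bias and RILL is designed to reduce.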



