Stacked Structure Learning for Lifted Relational Neural Networks

by Gustav Sourek, et al.
Czech Technical University in Prague
Cardiff University

Lifted Relational Neural Networks (LRNNs) describe relational domains using weighted first-order rules which act as templates for constructing feed-forward neural networks. While previous work has shown that using LRNNs can lead to state-of-the-art results in various ILP tasks, these results depended on hand-crafted rules. In this paper, we extend the framework of LRNNs with structure learning, thus enabling a fully automated learning process. As in many ILP methods, our structure learning algorithm proceeds iteratively, searching top-down through the hypothesis space of all possible Horn clauses and considering both the predicates that occur in the training examples and invented soft concepts entailed by the best weighted rules found so far. In the experiments, we demonstrate the ability to automatically induce useful hierarchical soft concepts, leading to deep LRNNs with competitive predictive power.
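The iterative search described above can be illustrated with a minimal sketch. All names here (`refinements`, `score`, the `aux` predicates, the beam width) are hypothetical stand-ins chosen for illustration, not the authors' actual implementation: each iteration greedily specialises a clause top-down by adding body literals, and the head of the best clause becomes an invented soft-concept predicate available to later iterations, yielding a hierarchy of concepts.

```python
# Hypothetical sketch of iterative top-down structure search with
# predicate invention; not the paper's actual algorithm or scoring.

def refinements(clause, predicates):
    """Specialise a clause (head, body) by appending one body literal."""
    head, body = clause
    for p in predicates:
        yield (head, body + (p,))

def score(clause, examples):
    """Stand-in score: fraction of examples covered by the clause body."""
    head, body = clause
    covered = sum(1 for ex in examples if set(body) <= ex)
    return covered / max(len(examples), 1)

def learn_structure(target, predicates, examples, max_iters=3, max_body=2):
    vocab = list(predicates)
    learned = []
    for i in range(max_iters):
        # Top-down search: start from the most general clause for a fresh
        # invented "soft concept" head and greedily specialise its body.
        head = f"aux{i}" if i + 1 < max_iters else target
        best, best_s = (head, ()), -1.0
        frontier = [(head, ())]
        for _ in range(max_body):
            candidates = [r for c in frontier for r in refinements(c, vocab)]
            if not candidates:
                break
            frontier = sorted(candidates, key=lambda c: score(c, examples),
                              reverse=True)[:5]  # beam of width 5
            if score(frontier[0], examples) > best_s:
                best = frontier[0]
                best_s = score(best, examples)
        learned.append(best)
        # Make the invented concept available to later iterations,
        # producing a hierarchical (deep) rule template.
        vocab.append(best[0])
    return learned
```

In a full LRNN pipeline, the stand-in `score` would be replaced by training the weights of the candidate template and measuring predictive performance, so that the "best weighted rules found so far" guide the next round of invention.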
