Learning logic programs by discovering where not to search

02/20/2022
by Andrew Cropper, et al.

The goal of inductive logic programming (ILP) is to search for a hypothesis that generalises training examples and background knowledge (BK). To improve performance, we introduce an approach that, before searching for a hypothesis, first discovers 'where not to search'. We use the given BK to discover constraints on hypotheses, such as that a number cannot be both even and odd. We use these constraints to bootstrap a constraint-driven ILP system. Our experiments on multiple domains (including program synthesis and inductive general game playing) show that our approach can substantially reduce learning times.
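As a rough illustration of the idea (not the authors' implementation), the sketch below shows one way such constraints could be discovered: given finite extensions of BK predicates over a toy domain, it finds pairs of predicates that are never true of the same constant and emits ASP-style integrity constraints. The predicate names, the domain, and the `to_asp_constraint` helper are illustrative assumptions.

```python
# Hedged sketch: discovering mutual-exclusion constraints from BK.
# Not the paper's method; the predicates and the small finite domain
# below are illustrative assumptions only.
from itertools import combinations

# Background knowledge given as finite extensions over a toy domain.
DOMAIN = range(10)
BK = {
    "even": {x for x in DOMAIN if x % 2 == 0},
    "odd": {x for x in DOMAIN if x % 2 == 1},
    "zero": {0},
    "positive": {x for x in DOMAIN if x > 0},
}

def mutually_exclusive_pairs(bk):
    """Return predicate pairs that are never true of the same constant."""
    pairs = []
    for p, q in combinations(sorted(bk), 2):
        if not (bk[p] & bk[q]):  # no shared element => candidate constraint
            pairs.append((p, q))
    return pairs

def to_asp_constraint(p, q):
    """Render a pair as an ASP-style integrity constraint (illustrative syntax)."""
    return f":- {p}(X), {q}(X)."

if __name__ == "__main__":
    for p, q in mutually_exclusive_pairs(BK):
        # Each constraint marks 'where not to search': any hypothesis
        # whose rules require both p(X) and q(X) can be pruned.
        print(to_asp_constraint(p, q))
```

On the toy BK above this prints, among others, `:- even(X), odd(X).`, matching the abstract's example. In the approach described by the abstract, constraints of this kind are then handed to a constraint-driven ILP system so that the hypothesis space is pruned before the search begins.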


Related research

09/15/2021 · Parallel Constraint-Driven Inductive Logic Programming
Multi-core machines are ubiquitous. However, most inductive logic progra...

05/05/2020 · Learning programs by learning from failures
We introduce learning programs by learning from failures. In this approa...

08/24/2022 · Constraint-driven multi-task learning
Inductive logic programming is a form of machine learning based on mathe...

06/13/2023 · DreamDecompiler: Improved Bayesian Program Learning by Decompiling Amortised Knowledge
Solving program induction problems requires searching through an enormou...

02/18/2021 · Learning Logic Programs by Explaining Failures
Scientists form hypotheses and experimentally test them. If a hypothesis...

12/28/2021 · Learning Logic Programs From Noisy Failures
Inductive Logic Programming (ILP) is a form of machine learning (ML) whi...

10/26/2020 · A Multistrategy Approach to Relational Knowledge Discovery in Databases
When learning from very large databases, the reduction of complexity is ...
