ENIGMA: Efficient Learning-based Inference Guiding Machine

01/23/2017
by Jan Jakubův, et al.

ENIGMA is a learning-based method for guiding given clause selection in saturation-based theorem provers. Clauses from many proof searches are classified as positive and negative based on their participation in the proofs. An efficient classification model is trained on this data, using fast feature-based characterization of the clauses. The learned model is then tightly linked with the core prover and used as the basis of a new parameterized evaluation heuristic that provides fast ranking of all generated clauses. The approach is evaluated on the E prover and the CASC 2016 AIM benchmark, showing a large increase in E's performance.
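To make the idea concrete, the following is a minimal sketch of ENIGMA-style clause scoring: clauses are characterized by cheap syntactic features (here, bounded-depth top-down symbol walks through literals, a simplified stand-in for ENIGMA's term-walk features), and a linear classifier trained offline on positive/negative proof clauses is used to rank newly generated clauses. The representation, the function names (`term_walks`, `featurize`, `score_clause`), and the toy weights are assumptions for illustration, not E's or ENIGMA's actual code or feature set.

    # Illustrative sketch only; not the ENIGMA implementation.
    from collections import Counter

    def term_walks(term, prefix=(), depth=3):
        """Yield top-down symbol walks of bounded length through a term.

        Terms are nested tuples: (symbol, arg1, arg2, ...); leaves are strings.
        """
        symbol, *args = term if isinstance(term, tuple) else (term,)
        walk = prefix + (symbol,)
        yield walk
        if len(walk) < depth:
            for arg in args:
                yield from term_walks(arg, walk, depth)

    def featurize(clause):
        """Map a clause (list of literal terms) to a sparse count-based feature vector."""
        counts = Counter()
        for literal in clause:
            counts.update(term_walks(literal))
        return counts

    def score_clause(clause, weights, bias=0.0):
        """Linear score used to rank generated clauses: higher means more promising."""
        feats = featurize(clause)
        return bias + sum(weights.get(f, 0.0) * v for f, v in feats.items())

    # Toy usage: rank two clauses with made-up weights that would, in ENIGMA,
    # come from a classifier trained on clauses from earlier proof searches.
    c1 = [("=", ("mult", "X", "e"), "X")]
    c2 = [("=", ("mult", ("mult", "X", "Y"), "Z"), ("mult", "X", ("mult", "Y", "Z")))]
    weights = {("=",): 0.1, ("=", "mult"): 0.4, ("=", "mult", "mult"): -0.2}
    ranked = sorted([c1, c2], key=lambda c: score_clause(c, weights), reverse=True)

In the actual system, such a score is wired into the prover as one of the clause evaluation functions, so the learned model directly influences which given clause is selected next during saturation.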


Related research:

- ENIGMAWatch: ProofWatch Meets ENIGMA (05/23/2019). In this work we describe a new learning-based proof guidance -- ENIGMAWa...
- HyperTree Proof Search for Neural Theorem Proving (05/23/2022). We propose an online training procedure for a transformer-based automate...
- Premise Selection and External Provers for HOL4 (09/11/2015). Learning-assisted automated reasoning has recently gained popularity amo...
- Premise Selection for Mathematics by Corpus Analysis and Kernel Methods (08/17/2011). Smart premise selection is essential when using automated reasoning as a...
- Automated proof synthesis for propositional logic with deep neural networks (05/30/2018). This work explores the application of deep learning, a machine learning ...
- New Techniques that Improve ENIGMA-style Clause Selection Guidance (02/26/2021). We re-examine the topic of machine-learned clause selection guidance in ...
- Fast and Slow Enigmas and Parental Guidance (07/14/2021). We describe several additions to the ENIGMA system that guides clause se...
