LF-checker: Machine Learning Acceleration of Bounded Model Checking for Concurrency Verification (Competition Contribution)

by Tong Wu et al.

We describe and evaluate LF-checker, a meta-verifier tool based on machine learning. It extracts multiple features of the program under test and uses a decision tree to predict the optimal configuration (flags) of a bounded model checker. Our current work specialises in concurrency verification and employs ESBMC as the back-end verification engine. In this paper, we demonstrate that LF-checker achieves better results than the default configuration of the underlying verification engine.
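The approach described above (extract program features, then let a decision tree pick the back-end's flags) can be sketched as follows. This is an illustrative toy, not LF-checker's actual code: the feature set, training data, and the mapping from labels to flag sets are assumptions made here for demonstration, although the ESBMC options shown (`--incremental-bmc`, `--k-induction`, `--falsification`, `--unwind`) are real flags of that tool.

```python
# Hypothetical sketch of the LF-checker idea: a decision tree that maps
# program features to a bounded-model-checker configuration.
from sklearn.tree import DecisionTreeClassifier

# Assumed feature vectors: [n_threads, n_loops, uses_arrays]
# (invented training data for illustration only)
X_train = [[2, 0, 0], [1, 5, 1], [4, 2, 0], [1, 1, 1]]
# Labels index into a table of candidate flag sets
y_train = [0, 1, 0, 2]

# Candidate ESBMC flag sets; the grouping is an assumption of this sketch
CONFIGS = [
    ["--incremental-bmc"],
    ["--k-induction"],
    ["--falsification", "--unwind", "10"],
]

clf = DecisionTreeClassifier(max_depth=3, random_state=0)
clf.fit(X_train, y_train)

def predict_flags(features):
    """Return the predicted flag set for one feature vector."""
    label = int(clf.predict([features])[0])
    return CONFIGS[label]

print(predict_flags([3, 1, 0]))
```

In practice a meta-verifier would train on a benchmark suite (e.g. SV-COMP concurrency tasks), with labels derived from which configuration solved each task fastest; the sketch only shows the prediction plumbing.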


