Neural Network Verification in Control

by Michael Everett et al.

Learning-based methods could provide solutions to many long-standing challenges in control. However, the neural networks (NNs) commonly used in modern learning approaches make it difficult to analyze the safety properties of the resulting control systems. Fortunately, a growing body of literature offers tractable methods for analyzing and verifying these high-dimensional, highly nonlinear representations. This tutorial first introduces and unifies recent techniques (many of which originated in the computer vision and machine learning communities) for verifying robustness properties of NNs. These techniques are then extended to provide formal guarantees for neural feedback loops (e.g., a closed-loop system with an NN control policy). The resulting tools are shown to enable closed-loop reachability analysis and robust deep reinforcement learning.
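As a minimal illustration of the closed-loop reachability idea described above, the sketch below propagates interval bounds through a toy ReLU control policy and linear dynamics. The double-integrator plant, the random network weights, and the initial set are placeholder assumptions for illustration, not examples from the tutorial, and interval bound propagation is only one (conservative) choice among the verification techniques the tutorial surveys.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical plant and policy for illustration -- NOT from the tutorial:
# a discrete-time double integrator driven by a small random ReLU network.
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.005],
              [0.1]])
weights = [(0.3 * rng.standard_normal((8, 2)), np.zeros(8)),   # hidden layer
           (0.3 * rng.standard_normal((1, 8)), np.zeros(1))]   # output layer

def nn_forward(x, weights):
    """Exact policy evaluation: ReLU hidden layers, linear output layer."""
    for W, b in weights[:-1]:
        x = np.maximum(W @ x + b, 0.0)
    W, b = weights[-1]
    return W @ x + b

def affine_interval(l, u, W, b):
    """Tight interval image of the box [l, u] under x -> W x + b."""
    c, r = (l + u) / 2, (u - l) / 2
    c, r = W @ c + b, np.abs(W) @ r
    return c - r, c + r

def policy_bounds(l, u, weights):
    """Interval bound propagation (IBP) through the policy network."""
    for W, b in weights[:-1]:
        l, u = affine_interval(l, u, W, b)
        l, u = np.maximum(l, 0.0), np.maximum(u, 0.0)  # ReLU is monotone
    return affine_interval(l, u, *weights[-1])

def reach_step(lx, ux):
    """Over-approximate one step of x+ = A x + B pi(x). Treating x and
    pi(x) as independent intervals is sound but conservative."""
    lu, uu = policy_bounds(lx, ux, weights)
    la, ua = affine_interval(lx, ux, A, np.zeros(2))
    lb, ub = affine_interval(lu, uu, B, np.zeros(2))
    return la + lb, ua + ub

# Reachable-set boxes for 5 steps from the initial set [0.9,1.1] x [-0.1,0.1].
boxes = [(np.array([0.9, -0.1]), np.array([1.1, 0.1]))]
for _ in range(5):
    boxes.append(reach_step(*boxes[-1]))
```

By construction, each box contains every state the true closed loop can reach from the initial set at that step, so safety constraints can be checked against the boxes; the tighter relaxations the tutorial unifies (e.g., linear bounds rather than intervals) shrink the over-approximation.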
