
Backward Reachability Analysis of Neural Feedback Loops: Techniques for Linear and Nonlinear Systems

by Nicholas Rober, et al.

The increasing prevalence of neural networks (NNs) in safety-critical applications calls for methods to certify safe behavior. This paper presents a backward reachability approach for safety verification of neural feedback loops (NFLs), i.e., closed-loop systems with NN control policies. While recent works have focused on forward reachability as a strategy for safety certification of NFLs, backward reachability offers advantages over the forward strategy, particularly in obstacle avoidance scenarios. Prior works have developed techniques for backward reachability analysis for systems without NNs, but the presence of NNs in the feedback loop presents a unique set of problems due to the nonlinearities in their activation functions and because NN models are generally not invertible. To overcome these challenges, we use existing forward NN analysis tools to efficiently find an over-approximation of the backprojection (BP) set, i.e., the set of states for which the NN control policy will drive the system to a given target set. We present frameworks for calculating BP over-approximations for both linear and nonlinear systems with control policies represented by feedforward NNs and propose computationally efficient strategies. We use numerical results from a variety of models to showcase the proposed algorithms, including a demonstration of safety certification for a 6D system.



