Failing with Grace: Learning Neural Network Controllers that are Boundedly Unsafe

06/22/2021
by   Panagiotis Vlantis, et al.

In this work, we consider the problem of learning a feed-forward neural network (NN) controller that safely steers an arbitrarily shaped planar robot in a compact, obstacle-occluded workspace. Existing methods with closed-loop safety guarantees depend strongly on the density of data points near the boundary of the safe state space; we propose an approach that lifts such data assumptions, which are hard to satisfy in practice, and instead allows for graceful safety violations, i.e., violations of bounded magnitude that can be spatially controlled. To do so, we employ reachability analysis to encapsulate safety constraints in the training process. Specifically, to obtain a computationally efficient over-approximation of the forward reachable set of the closed-loop system, we first design under- and over-approximations of the robot's footprint, which we use to partition the configuration space into cells; we then adaptively subdivide the cells that contain states which may escape the safe set under the trained control law. Using the overlap between each cell's forward reachable set and the set of infeasible robot configurations as a measure of safety violation, we introduce penalty terms into the loss function that penalize this overlap during training. As a result, our method can learn a safe vector field for the closed-loop system while providing numerical worst-case bounds on safety violation over the whole configuration space, defined by the overlap between the over-approximation of the closed-loop forward reachable set and the set of unsafe states. Moreover, it can control the tradeoff between computational complexity and the tightness of these bounds. Finally, we provide a simulation study that verifies the efficacy of the proposed scheme.
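The abstract's core loop can be illustrated with a toy sketch: over-approximate each cell's one-step forward reachable set, measure its overlap with an unsafe region to form a penalty term, and adaptively subdivide cells that still violate. This is a minimal interval-arithmetic sketch, not the paper's method; the single-integrator dynamics, the Lipschitz-based inflation, the axis-aligned unsafe box, and all function names are illustrative assumptions.

```python
import numpy as np

def box_overlap(lo_a, hi_a, lo_b, hi_b):
    """Volume of the intersection of two axis-aligned boxes."""
    lo = np.maximum(lo_a, lo_b)
    hi = np.minimum(hi_a, hi_b)
    return float(np.prod(np.maximum(hi - lo, 0.0)))

def reach_over_approx(cell_lo, cell_hi, controller, dt=0.1, lip=1.0):
    """Crude over-approximation of the one-step reachable set of
    x+ = x + dt * u(x): evaluate the controller at the cell centre
    and inflate by an assumed Lipschitz margin on u over the cell."""
    centre = 0.5 * (cell_lo + cell_hi)
    radius = 0.5 * (cell_hi - cell_lo)
    u = controller(centre)
    margin = dt * lip * radius
    return cell_lo + dt * u - margin, cell_hi + dt * u + margin

def safety_penalty(cells, controller, unsafe_lo, unsafe_hi):
    """Loss penalty: total reachable-set overlap with the unsafe box,
    summed over all cells (worst-case violation measure)."""
    total = 0.0
    for lo, hi in cells:
        rlo, rhi = reach_over_approx(lo, hi, controller)
        total += box_overlap(rlo, rhi, unsafe_lo, unsafe_hi)
    return total

def subdivide(cells, controller, unsafe_lo, unsafe_hi, tol=1e-3):
    """Adaptive refinement: bisect, along its longest axis, every cell
    whose reachable set overlaps the unsafe box by more than tol."""
    refined = []
    for lo, hi in cells:
        rlo, rhi = reach_over_approx(lo, hi, controller)
        if box_overlap(rlo, rhi, unsafe_lo, unsafe_hi) > tol:
            ax = int(np.argmax(hi - lo))
            mid = 0.5 * (lo[ax] + hi[ax])
            a_hi, b_lo = hi.copy(), lo.copy()
            a_hi[ax], b_lo[ax] = mid, mid
            refined += [(lo, a_hi), (b_lo, hi)]
        else:
            refined.append((lo, hi))
    return refined
```

In training, `safety_penalty` would be added (weighted) to the imitation or tracking loss, and `subdivide` would be run periodically; finer cells shrink the inflation margin, which is the knob trading computational cost against bound tightness that the abstract mentions.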

