Safe Control with Neural Network Dynamic Models

by Tianhao Wei, et al.

Safety is critical in autonomous robotic systems. A safe control law ensures forward invariance of a safe set (a subset of the state space). Deriving a safe control law from a control-affine analytical dynamic model has been extensively studied. However, in complex environments and tasks, obtaining a principled analytical model of the system is challenging and time-consuming. In these situations, data-driven learning is widely used, and the learned models are encoded in neural networks. How to formally derive a safe control law with Neural Network Dynamic Models (NNDM) remains unclear due to the lack of computationally tractable methods for handling these black-box functions. In fact, even finding the control that minimizes an objective for an NNDM without any safety constraint is still challenging. In this work, we propose MIND-SIS (Mixed Integer for Neural network Dynamic model with Safety Index Synthesis), the first method to derive safe control laws for NNDM. The method includes two parts: 1) SIS, an algorithm for the offline synthesis of the safety index (also called a barrier function) using evolutionary methods; and 2) MIND, an algorithm for the online computation of the optimal and safe control signal, which solves a constrained optimization problem using a computationally efficient encoding of the neural network. We prove that MIND-SIS guarantees forward invariance and finite-time convergence, and we numerically validate that it achieves safe and optimal control of NNDM. In our experiments, the optimality gap is less than 10^-8, and the safety constraint violation is 0.




