Verification of Neural Network Control Policy Under Persistent Adversarial Perturbation

08/18/2019
by Yuh-Shyang Wang, et al.

Deep neural networks are known to be vulnerable to small adversarial perturbations. This issue becomes more critical when a neural network is interconnected with a physical system in a closed loop. In this paper, we show how to combine recent work on neural network certification tools (mainly used in static settings such as image classification) with robust control theory to certify a neural network policy in a control loop. Specifically, we give a sufficient condition and an algorithm that ensure the closed-loop state and control constraints are satisfied when the persistent adversarial perturbation is ℓ_∞-norm bounded. Our method is based on finding a positively invariant set of the closed-loop dynamical system, so it requires neither differentiability nor continuity of the neural network policy. Along with the verification result, we also develop an effective attack strategy for neural network control systems that significantly outperforms exhaustive Monte Carlo search. We show that our certification algorithm works well on learned models and achieves a result 5 times better than the traditional Lipschitz-based method for certifying the robustness of a neural network policy on a cart-pole control problem.
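
To make the positively-invariant-set idea concrete, below is a minimal sketch (not the paper's implementation) of a one-step invariance check for a candidate box of states, assuming linear dynamics x_{k+1} = A x_k + B π(x_k) + w_k with ‖w_k‖_∞ ≤ ε and a small ReLU policy network. The policy output over the box is bounded with interval bound propagation, standing in for the neural network certification tools the abstract refers to; the matrices A and B, the box parameters c and r, the disturbance bound eps, the control limit u_max, and the network weights are all hypothetical placeholders.

```python
import numpy as np

def ibp_relu_net(weights, biases, lb, ub):
    """Propagate the interval [lb, ub] through a fully connected ReLU network
    using interval bound propagation (IBP)."""
    for i, (W, b) in enumerate(zip(weights, biases)):
        center = (ub + lb) / 2.0
        radius = (ub - lb) / 2.0
        new_center = W @ center + b
        new_radius = np.abs(W) @ radius
        lb, ub = new_center - new_radius, new_center + new_radius
        if i < len(weights) - 1:  # ReLU on hidden layers only
            lb, ub = np.maximum(lb, 0.0), np.maximum(ub, 0.0)
    return lb, ub

def certify_invariant_box(A, B, weights, biases, c, r, eps, u_max):
    """Check that the box {x : |x - c| <= r} (componentwise) is positively
    invariant for x_{k+1} = A x + B pi(x) + w with ||w||_inf <= eps, and that
    the policy satisfies the control constraint ||pi(x)||_inf <= u_max on it.
    All arguments are hypothetical placeholders for illustration."""
    x_lb, x_ub = c - r, c + r
    u_lb, u_ub = ibp_relu_net(weights, biases, x_lb, x_ub)   # bound pi over the box
    if np.any(u_lb < -u_max) or np.any(u_ub > u_max):        # control constraint
        return False
    # Interval over-approximation of A*X + B*pi(X) + [-eps, eps]^n
    next_c = A @ c + B @ ((u_lb + u_ub) / 2.0)
    next_r = np.abs(A) @ r + np.abs(B) @ ((u_ub - u_lb) / 2.0) + eps
    # Positive invariance: the one-step reachable box stays inside the original box.
    return bool(np.all(next_c - next_r >= x_lb) and np.all(next_c + next_r <= x_ub))
```

Treating the state and the bounded policy output as independent intervals over-approximates the one-step reachable set, so a True answer is sound but conservative; the paper's actual sufficient condition and the certification tools it builds on may give considerably tighter bounds.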


Related research

Learning Robust Feedback Policies from Demonstrations (03/30/2021)
In this work we propose and analyze a new framework to learn feedback co...

Neural Network Verification in Control (09/30/2021)
Learning-based methods could provide solutions to many of the long-stand...

Policy Smoothing for Provably Robust Reinforcement Learning (06/21/2021)
The study of provable adversarial robustness for deep neural network (DN...

ReachLipBnB: A branch-and-bound method for reachability analysis of neural autonomous systems using Lipschitz bounds (11/01/2022)
We propose a novel Branch-and-Bound method for reachability analysis of ...

Towards Robust Neural Networks via Close-loop Control (02/03/2021)
Despite their success in massive engineering applications, deep neural n...

Towards Certifying ℓ_∞ Robustness using Neural Networks with ℓ_∞-dist Neurons (02/10/2021)
It is well-known that standard neural networks, even with a high classif...

Self-Healing Robust Neural Networks via Closed-Loop Control (06/26/2022)
Despite the wide applications of neural networks, there have been increa...
