DualApp: Tight Over-Approximation for Neural Network Robustness Verification via Under-Approximation

by Yiting Wu, et al.

The robustness of neural networks is fundamental to the reliability and security of the systems that host them. Formal verification has proven effective in providing provable robustness guarantees. To improve verification scalability, it is common to over-approximate the non-linear activation functions in neural networks by linear constraints, which transforms the verification problem into an efficiently solvable linear programming problem. Because over-approximation inevitably introduces overestimation, many efforts have been devoted to defining the tightest possible approximations. Recent studies have, however, shown that none of the existing so-called tightest approximations is superior to the others on all networks. In this paper we identify and report a crucial factor in defining tight approximations, namely the approximation domains of activation functions. We observe that existing approaches rely only on overestimated domains, and that an approximation which is tight on an overestimated domain is not necessarily tight on the activation function's actual domain. We propose a novel under-approximation-guided approach, called dual-approximation, to define tight over-approximations, together with two complementary under-approximation algorithms based on sampling and gradient descent. The overestimated domain guarantees soundness, while the underestimated one guides tightness. We implement our approach in a tool called DualApp and extensively evaluate it on a comprehensive benchmark of 84 collected and trained neural networks with different architectures. The experimental results show that DualApp outperforms the state-of-the-art approximation-based approaches, with up to 71.22% improvement in the verification result.
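The gap between an activation function's overestimated domain and its actual reachable domain can be illustrated with a small sketch. The snippet below is not the paper's implementation: the two-layer sigmoid network, the `interval_affine` helper, and all parameter values are hypothetical. It contrasts a sound but loose interval-arithmetic over-approximation of a neuron's pre-activation range with a sampling-based under-approximation of the kind the abstract describes; the true reachable range always lies between the two.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def interval_affine(W, b, lo, hi):
    """Sound interval propagation through an affine layer z = W @ x + b.
    (Illustrative helper, not from DualApp.)"""
    Wp, Wn = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return Wp @ lo + Wn @ hi + b, Wp @ hi + Wn @ lo + b

rng = np.random.default_rng(0)
# Hypothetical tiny network: x -> affine -> sigmoid -> affine
W1, b1 = rng.normal(size=(2, 2)), rng.normal(size=2)
W2, b2 = rng.normal(size=(1, 2)), rng.normal(size=1)
x0, eps = np.array([0.5, -0.2]), 0.1   # input point and L-inf radius

# Over-approximated domain of the output neuron via interval propagation.
# Sound, but loses dependencies between neurons, so it can be loose.
lo1, hi1 = interval_affine(W1, b1, x0 - eps, x0 + eps)
a_lo, a_hi = sigmoid(lo1), sigmoid(hi1)        # sigmoid is monotone
lo_over, hi_over = interval_affine(W2, b2, a_lo, a_hi)

# Under-approximated domain via sampling: every sampled value is actually
# reachable, so [lo_under, hi_under] is contained in the true domain.
xs = x0 + rng.uniform(-eps, eps, size=(10000, 2))
z = sigmoid(xs @ W1.T + b1) @ W2.T + b2
lo_under, hi_under = z.min(axis=0), z.max(axis=0)

# Containment: under-approximation sits inside the over-approximation;
# a tight linear bound on [lo_over, hi_over] may be loose on the
# (narrower) actual domain bracketed by [lo_under, hi_under].
assert np.all(lo_over <= lo_under) and np.all(hi_under <= hi_over)
print(lo_over, hi_over, lo_under, hi_under)
```

The dual-approximation idea is that the over-approximated interval certifies soundness, while the narrower under-approximated interval indicates where the linear bounds actually need to be tight.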


A Tale of Two Approximations: Tightening Over-Approximation for DNN Robustness Verification via Under-Approximation

The robustness of deep neural networks (DNNs) is crucial to the hosting ...

Certifying Robustness of Convolutional Neural Networks with Tight Linear Approximation

The robustness of neural network classifiers is becoming important in th...

Provably Tightest Linear Approximation for Robustness Verification of Sigmoid-like Neural Networks

The robustness of deep neural networks is crucial to modern AI-enabled s...

Approximating Activation Functions

ReLU is widely seen as the default choice for activation functions in ne...

On Preimage Approximation for Neural Networks

Neural network verification mainly focuses on local robustness propertie...

LinSyn: Synthesizing Tight Linear Bounds for Arbitrary Neural Network Activation Functions

The most scalable approaches to certifying neural network robustness dep...

NeuroDiff: Scalable Differential Verification of Neural Networks using Fine-Grained Approximation

As neural networks make their way into safety-critical systems, where mi...
