Abstract Universal Approximation for Neural Networks

07/12/2020
by Zi Wang, et al.

With growing concerns about the safety and robustness of neural networks, a number of researchers have successfully applied abstract interpretation with numerical domains to verify properties of neural networks. Why do numerical domains work for neural-network verification? We present a theoretical result that demonstrates the power of numerical domains, namely, the simple interval domain, for the analysis of neural networks. Our main theorem, which we call the abstract universal approximation (AUA) theorem, generalizes the recent result by Baader et al. [2020] for ReLU networks to a rich class of neural networks. The classical universal approximation theorem says that, given a continuous function f, for any desired precision, there is a neural network that approximates f to that precision. The AUA theorem states that for any continuous function f, there exists a neural network whose abstract interpretation is an arbitrarily close approximation of the collecting semantics of f. Further, the network may be constructed using any well-behaved activation function (sigmoid, tanh, parametric ReLU, ELU, and more), making our result quite general. The implication of the AUA theorem is that there exist provably correct neural networks: suppose, for instance, that there is an ideal robust image classifier represented as a function f. The AUA theorem tells us that there exists a neural network that approximates f and for which we can automatically construct proofs of robustness using the interval abstract domain. Our work sheds light on the existence of provably correct neural networks, using arbitrary activation functions, and establishes intriguing connections between well-known theoretical properties of neural networks and abstract interpretation using numerical domains.
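To ground the kind of interval-domain analysis the abstract refers to, here is a minimal sketch, not taken from the paper: it propagates an input box through a tiny two-layer ReLU network and checks whether the resulting output intervals prove that every input in the box gets the same classification. All weights, the input box, and helper names such as `affine_interval` are made up for illustration.

```python
import numpy as np

def affine_interval(lo, hi, W, b):
    """Propagate the box [lo, hi] through x -> W @ x + b.

    Per output coordinate, the minimum pairs positive weights with
    lower bounds and negative weights with upper bounds; the maximum
    is symmetric. This is the tightest box for a single affine layer.
    """
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    new_lo = W_pos @ lo + W_neg @ hi + b
    new_hi = W_pos @ hi + W_neg @ lo + b
    return new_lo, new_hi

def relu_interval(lo, hi):
    """ReLU is monotone, so it maps boxes to boxes exactly."""
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)

# Toy 2-layer network with made-up weights.
W1 = np.array([[1.0, -1.0], [0.5, 2.0]])
b1 = np.array([0.0, -0.5])
W2 = np.array([[2.0, 1.0], [-1.0, 0.5]])
b2 = np.array([0.1, 0.0])

# Input box [0.9, 1.1] x [-0.1, 0.1], e.g. a perturbation ball.
lo, hi = np.array([0.9, -0.1]), np.array([1.1, 0.1])
lo, hi = relu_interval(*affine_interval(lo, hi, W1, b1))
lo, hi = affine_interval(lo, hi, W2, b2)

# If the lower bound of logit 0 exceeds the upper bound of logit 1,
# the interval analysis proves class 0 on the whole input box.
print("logit intervals:", list(zip(lo, hi)))
print("provably class 0:", lo[0] > hi[1])
```

Note that although each layer's box is exact in isolation, composing layers can lose precision, which is why a result like the AUA theorem is nontrivial: it says that for a suitably constructed network this accumulated imprecision can be made arbitrarily small.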
