Error bounds for approximations with deep ReLU networks

10/03/2016
by Dmitry Yarotsky

We study the expressive power of shallow and deep neural networks with piecewise-linear activation functions. We establish new rigorous upper and lower bounds on the network complexity for approximations in Sobolev spaces. In particular, we prove that deep ReLU networks approximate smooth functions more efficiently than shallow networks. For approximations of 1D Lipschitz functions, we describe adaptive depth-6 network architectures that are more efficient than the standard shallow architecture.
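As a rough illustration of why depth helps with smooth targets, below is a minimal numerical sketch of the well-known ReLU "tent map" composition construction often used in results of this kind; it is not taken from the abstract, and the names `tent`, `approx_square`, and the depth parameter `m` are illustrative. Composing a two-unit ReLU tent map m times and summing the compositions yields a piecewise-linear approximation of the smooth function x^2 on [0, 1] whose error decays like 4^-(m+1), i.e. exponentially in depth.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def tent(x):
    # "Tent" (hat) map on [0, 1], built from two ReLU units:
    # g(x) = 2x on [0, 1/2] and 2(1 - x) on [1/2, 1].
    return 2.0 * relu(x) - 4.0 * relu(x - 0.5)

def approx_square(x, m):
    # Depth-m piecewise-linear approximation of x**2 on [0, 1]:
    # x**2 ~= x - sum_{s=1..m} g_s(x) / 4**s, where g_s is the s-fold
    # composition of the tent map. The sup-norm error is at most 4**-(m+1).
    out = x.copy()
    g = x.copy()
    for s in range(1, m + 1):
        g = tent(g)
        out -= g / 4.0 ** s
    return out

x = np.linspace(0.0, 1.0, 10001)
for m in (2, 4, 6, 8):
    err = np.max(np.abs(approx_square(x, m) - x ** 2))
    print(f"m={m}: max error {err:.2e} (bound {4.0 ** -(m + 1):.2e})")
```

Each extra composition doubles the number of linear pieces at a constant cost in units, which is the kind of depth-for-width trade-off the paper's bounds make precise.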
