Optimal Approximation Rates for Deep ReLU Neural Networks on Sobolev Spaces

11/25/2022
by   Jonathan W. Siegel, et al.

We study the problem of how efficiently, in terms of the number of parameters, deep neural networks with the ReLU activation function can approximate functions in the Sobolev space W^s(L_q(Ω)) on a bounded domain Ω, where the error is measured in L_p(Ω). This problem is important for studying the application of neural networks in scientific computing and has previously been solved only in the case p = q = ∞. Our contribution is to provide a solution for all 1 ≤ p, q ≤ ∞ and s > 0. Our results show that deep ReLU networks significantly outperform classical methods of approximation, but that this comes at the cost of parameters which are not encodable.
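To make the rate question concrete, the following display is an illustrative sketch of the quantity being studied; the symbol Υ^n for the class of deep ReLU networks with n parameters is chosen here for illustration and is not notation taken from the abstract:

\[
\sup_{\|f\|_{W^s(L_q(\Omega))} \le 1} \;\; \inf_{f_n \in \Upsilon^n} \; \|f - f_n\|_{L_p(\Omega)} \;\asymp\; n^{-r}.
\]

Here r is the approximation rate as a function of the parameter count n. For comparison, classical methods such as splines or finite elements on a domain Ω ⊂ ℝ^d typically achieve r = s/d, while results in the previously solved case p = q = ∞ indicate that deep ReLU networks attain a strictly larger exponent; this is the sense in which they "significantly outperform classical methods," at the cost of network weights that cannot be encoded with a bounded number of bits.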
