No one-hidden-layer neural network can represent multivariable functions

06/19/2020
by Masayo Inoue, et al.

In function approximation with a neural network, an input dataset is mapped to output values by optimizing the parameters of each hidden-layer unit. For a unary (single-variable) function, we derive constraints relating the parameters to the function's second derivative by constructing a continuum version of a one-hidden-layer neural network with the rectified linear unit (ReLU) activation function. Because these constraints reduce the degrees of freedom of the parameters, the network can be implemented accurately. We also show that there exists a smooth binary (two-variable) function that cannot be precisely represented by any such neural network.
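For context (this illustration is not from the paper itself): a one-hidden-layer ReLU network with n hidden units computes f(x) = sum_i a_i ReLU(w_i x + b_i). The continuum version described in the abstract replaces this finite sum with an integral over hidden units, and the link to the second derivative is suggested by the standard Taylor identity with integral remainder, f(x) = f(0) + f'(0) x + \int_0^\infty f''(t) ReLU(x - t) dt for x >= 0 and suitably smooth f, which writes a unary function exactly as a continuum of ReLU units weighted by f''. The Python sketch below fits a finite network of this form to a unary target function; the target x^2, the grid, and all hyperparameters are illustrative assumptions, not values taken from the paper.

    # Minimal sketch: fit f(x) = sum_i a_i * relu(w_i * x + b_i)
    # to a unary target function. All settings below are illustrative.
    import numpy as np

    rng = np.random.default_rng(0)

    def relu(z):
        return np.maximum(z, 0.0)

    # Sample the unary target function g(x) = x^2 on a grid.
    x = np.linspace(-1.0, 1.0, 200)
    g = x ** 2

    # Fix random hidden-layer parameters (w_i, b_i) and solve only for
    # the output weights a_i by least squares.
    n_hidden = 50
    w = rng.normal(size=n_hidden)
    b = rng.uniform(-1.0, 1.0, size=n_hidden)
    phi = relu(np.outer(x, w) + b)          # design matrix, shape (200, n_hidden)
    a, *_ = np.linalg.lstsq(phi, g, rcond=None)

    approx = phi @ a
    print("max |f(x) - g(x)| =", np.abs(approx - g).max())

Freezing (w_i, b_i) at random and solving only for the output weights keeps the fit a convex least-squares problem, which is enough to illustrate the parameterization; the paper's analysis concerns what any choice of parameters in such a network can represent exactly.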
