Universal Approximation Theorem for Neural Networks

02/19/2021
by Takato Nishijima, et al.

Is there any theoretical guarantee for the approximation ability of neural networks? The answer is the universal approximation theorem, which states that, under appropriate assumptions, the class of functions realizable by a neural network is dense in a suitable function space. This paper is a comprehensive exposition, written in Japanese, of the universal approximation theorem for feedforward neural networks, of the associated approximation rate problem (the relation between the number of hidden units and the approximation error), and of Barron space.
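For concreteness, a classical form of the theorem, which the abstract leaves implicit (the paper may state a different variant), reads: a single-hidden-layer network with a continuous, non-polynomial activation function σ can approximate any continuous function on a compact set to arbitrary accuracy. In LaTeX:

\[
\forall f \in C(K),\; K \subset \mathbb{R}^d \text{ compact},\; \forall \varepsilon > 0,\ \exists N,\; a_i, b_i \in \mathbb{R},\; w_i \in \mathbb{R}^d:\quad
\sup_{x \in K} \Bigl|\, f(x) - \sum_{i=1}^{N} a_i \,\sigma(w_i^\top x + b_i) \Bigr| < \varepsilon.
\]

On the rate side, Barron's classical bound states that if \( C_f = \int_{\mathbb{R}^d} \|\omega\| \, |\hat{f}(\omega)| \, d\omega < \infty \), then an N-unit network achieves L^2 approximation error of order \( C_f / \sqrt{N} \); Barron space is, roughly, the class of functions for which this constant is finite.

The width-versus-error trade-off can also be seen numerically. The following is a minimal sketch, not taken from the paper: it draws random ReLU hidden units, fits only the output weights by least squares, and reports the grid sup-norm error against f(x) = sin(x) as the width N grows.

import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-np.pi, np.pi, 500)[:, None]   # grid on the compact set K = [-pi, pi]
y = np.sin(x).ravel()                          # target function f(x) = sin(x)

for n in (4, 16, 64, 256):
    # Random inner weights (w_i, b_i); only the output weights a_i are fit,
    # by least squares. This suffices to illustrate the error-vs-width trend.
    w = rng.normal(size=(1, n))
    b = rng.normal(size=n)
    H = np.maximum(x @ w + b, 0.0)             # ReLU activations, shape (500, n)
    a, *_ = np.linalg.lstsq(H, y, rcond=None)
    print(f"N = {n:4d}   sup-norm error on grid = {np.abs(H @ a - y).max():.5f}")

The error should shrink as N increases, consistent with the density statement; note that the rate observed for this random-feature fit is not Barron's bound, which concerns optimally chosen units.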


Related research

09/06/2022  Extending the Universal Approximation Theorem for a Broad Class of Hypercomplex-Valued Neural Networks
The universal approximation theorem asserts that a single hidden layer n...

07/12/2020  Abstract Universal Approximation for Neural Networks
With growing concerns about the safety and robustness of neural networks...

11/18/2022  Universal Property of Convolutional Neural Networks
Universal approximation, whether a set of functions can approximate an a...

03/21/2023  Universal Approximation Property of Hamiltonian Deep Neural Networks
This paper investigates the universal approximation capabilities of Hami...

02/11/2016  A Universal Approximation Theorem for Mixture of Experts Models
The mixture of experts (MoE) model is a popular neural network architect...

05/26/2023  Universal Approximation and the Topological Neural Network
A topological neural network (TNN), which takes data from a Tychonoff to...

05/07/2019  The strong approximation theorem and computing with linear groups
We obtain a computational realization of the strong approximation theore...
