Neural networks with superexpressive activations and integer weights

05/20/2021
by Aleksandr Beknazaryan, et al.

An example of an activation function σ is given such that networks with activations {σ, ⌊·⌋}, integer weights and a fixed architecture depending on d can approximate continuous functions on [0,1]^d. The range of integer weights required for ε-approximation of Hölder continuous functions is derived, which leads to a convergence rate of order n^{-2β/(2β+d)} log_2 n for neural network regression estimation of an unknown β-Hölder continuous function from n samples.
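For intuition about how this bound behaves, the rate n^{-2β/(2β+d)} log_2 n can be evaluated numerically. The short sketch below is illustrative only (the function name convergence_rate and the example values of β, d and n are chosen here for demonstration and do not come from the paper); it shows that smoother targets (larger β) yield faster decay, while higher input dimension d slows it down.

import math

def convergence_rate(n: int, beta: float, d: int) -> float:
    """Order of the error bound n^{-2*beta/(2*beta + d)} * log2(n)
    for estimating a beta-Hölder function in d dimensions
    (multiplicative constants omitted)."""
    return n ** (-2.0 * beta / (2.0 * beta + d)) * math.log2(n)

# Illustrative sample sizes: the bound decays polynomially in n,
# up to the logarithmic factor.
for n in (10**3, 10**4, 10**5):
    print(n,
          convergence_rate(n, beta=1.0, d=2),   # rougher target, low dimension
          convergence_rate(n, beta=2.0, d=10))  # smoother target, high dimension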


