Expressive power of binary and ternary neural networks

06/27/2022
by   Aleksandr Beknazaryan, et al.

We show that deep sparse ReLU networks with ternary weights and deep ReLU networks with binary weights can approximate β-Hölder functions on [0,1]^d. Moreover, for any interval [a,b)⊂ℝ, continuous functions on [0,1]^d can be approximated by networks of depth 2 with the binary activation function 1_[a,b).
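To make the second claim concrete in the one-dimensional case, the sketch below builds a depth-2 network: one hidden layer of units with activation 1_[a,b), followed by a linear output layer. Each hidden unit is tuned so it fires exactly on one grid cell of [0,1], so the network outputs a piecewise-constant approximation of the target function. This is only a minimal illustration under assumed names (binary_activation, build_depth2_net) and an elementary grid construction; it is not the construction used in the paper.

```python
import numpy as np

# Illustrative interval for the binary activation 1_[a, b); any choice works.
a, b = 0.0, 1.0

def binary_activation(t):
    """The binary activation 1_[a,b): returns 1 where a <= t < b, else 0."""
    return ((t >= a) & (t < b)).astype(float)

def build_depth2_net(f, n_units):
    """Depth-2 network: hidden layer of binary-activation units + linear output.

    Hidden unit k is set up to fire exactly on the grid cell
    [k/n_units, (k+1)/n_units), and the output layer weights it by f evaluated
    at the cell's left endpoint, giving a piecewise-constant approximation.
    """
    grid = np.linspace(0.0, 1.0, n_units + 1)
    # Pick weight w_k and bias c_k so that w_k * x + c_k lies in [a, b)
    # exactly when x lies in [grid[k], grid[k+1]).
    w = (b - a) / np.diff(grid)          # shape (n_units,)
    c = a - w * grid[:-1]                # shape (n_units,)
    v = f(grid[:-1])                     # output-layer weights

    def net(x):
        x = np.asarray(x, dtype=float).reshape(-1, 1)
        hidden = binary_activation(x * w + c)   # shape (len(x), n_units)
        return hidden @ v
    return net

if __name__ == "__main__":
    f = lambda x: np.sin(2 * np.pi * x)         # target continuous function
    net = build_depth2_net(f, n_units=200)
    xs = np.linspace(0.0, 0.999, 1000)
    # Max error shrinks as the hidden layer grows.
    print("max error:", np.max(np.abs(net(xs) - f(xs))))
```

For an L-Lipschitz target, each cell of width 1/n contributes error at most L/n, so widening the hidden layer drives the error to zero; the d-dimensional statement in the abstract is, of course, more general than this one-dimensional sketch.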


Related research

Deep Network Approximation: Beyond ReLU to Diverse Activation Functions (07/13/2023)
This paper explores the expressive power of deep neural networks for a d...

Optimal approximation of continuous functions by very deep ReLU networks (02/10/2018)
We prove that deep ReLU neural networks with conventional fully-connecte...

Approximation in L^p(μ) with deep ReLU neural networks (04/09/2019)
We discuss the expressive power of neural networks which use the non-smo...

On the Expressive Power of Neural Networks (05/31/2023)
In 1989 George Cybenko proved in a landmark paper that wide shallow neur...

Network size and weights size for memorization with two-layers neural networks (06/04/2020)
In 1988, Eric B. Baum showed that two-layers neural networks with thresh...

SkelEx and BoundEx: Natural Visualization of ReLU Neural Networks (05/09/2023)
Despite their limited interpretability, weights and biases are still the...

Neural networks with superexpressive activations and integer weights (05/20/2021)
An example of an activation function σ is given such that networks with ...
