Integral representations of shallow neural network with Rectified Power Unit activation function

12/20/2021
by Ahmed Abdeljawad, et al.

In this work, we derive a formula for the integral representation of a shallow neural network with the Rectified Power Unit (RePU) activation function. Our first result deals with the representation capability of RePU shallow networks in the univariate case. The multidimensional result characterizes the set of functions that can be represented by such networks with bounded norm and possibly unbounded width.
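
To make the objects in the abstract concrete, here is a minimal sketch (not taken from the paper) of a shallow RePU network viewed as a Monte Carlo discretization of an integral representation f(x) = ∫ a(w, b) · max(0, w·x + b)^p dμ(w, b). The function names, the choice of power p, the sampling measure, and the density a(w, b) below are illustrative assumptions, not the paper's construction.

```python
import numpy as np

def repu(t, p=2):
    """Rectified Power Unit: max(0, t)**p (p = 1 recovers ReLU)."""
    return np.maximum(0.0, t) ** p

def shallow_repu_net(x, weights, biases, outer, p=2):
    """Finite-width shallow network: sum_i outer[i] * repu(weights[i] * x + biases[i], p)."""
    return outer @ repu(np.outer(weights, x) + biases[:, None], p)

# Monte Carlo view of an (assumed) integral representation
#   f(x) = integral of a(w, b) * repu(w * x + b, p) d mu(w, b),
# approximated by sampling (w_i, b_i) ~ mu and averaging.
rng = np.random.default_rng(0)
N = 1000
w = rng.normal(size=N)           # sampled inner weights
b = rng.uniform(-1, 1, size=N)   # sampled biases
a = np.cos(w) * np.exp(-b ** 2)  # illustrative density a(w, b)

x = np.linspace(-1.0, 1.0, 5)
f_x = shallow_repu_net(x, w, b, a / N, p=2)
print(f_x)
```

As the width N grows, the finite sum converges to the integral representation it samples; the paper's results concern which functions admit such a representation with bounded norm even when the width is unbounded.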


Related research

10/07/2019 · Neural network integral representations with the ReLU activation function
  We derive a formula for neural network integral representations on the s...

06/25/2020 · Q-NET: A Formula for Numerical Integration of a Shallow Feed-forward Neural Network
  Numerical integration is a computational procedure that is widely encoun...

05/24/2019 · Greedy Shallow Networks: A New Approach for Constructing and Training Neural Networks
  We present a novel greedy approach to obtain a single layer neural netwo...

08/17/2022 · Shallow neural network representation of polynomials
  We show that d-variate polynomials of degree R can be represented on [0,...

12/28/2021 · Reduced Softmax Unit for Deep Neural Network Accelerators
  The Softmax activation layer is a very popular Deep Neural Network (DNN)...

04/06/2018 · A comparison of deep networks with ReLU activation function and linear spline-type methods
  Deep neural networks (DNNs) generate much richer function spaces than sh...

11/15/2020 · hyper-sinh: An Accurate and Reliable Function from Shallow to Deep Learning in TensorFlow and Keras
  This paper presents the 'hyper-sinh', a variation of the m-arcsinh activ...
