Neural networks with linear threshold activations: structure and algorithms

11/15/2021
by Sammy Khalife et al.

In this article, we present new results on neural networks with linear threshold activation functions. We precisely characterize the class of functions representable by such networks and show that two hidden layers are necessary and sufficient to represent any function in this class. This is surprising in light of recent investigations into exact representability for neural networks with other popular activation functions such as rectified linear units (ReLU). We also give precise bounds on the sizes of the neural networks required to represent any function in the class. Finally, we design an algorithm that solves the empirical risk minimization (ERM) problem to global optimality for these neural networks with a fixed architecture. The algorithm's running time is polynomial in the size of the data sample, provided the input dimension and the size of the network architecture are treated as fixed constants. The algorithm is unique in that it works for any architecture with any number of layers, whereas previous polynomial-time globally optimal algorithms work only for very restricted classes of architectures.
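
To make the setting concrete, here is a minimal sketch (in Python with NumPy, not code from the paper) of a feedforward network whose hidden units use linear threshold (Heaviside) activations. The two-hidden-layer depth matches the depth the abstract states is sufficient; the specific layer widths, the random weights, and the helper names threshold and forward are illustrative assumptions. Because each hidden unit outputs only 0 or 1, the network computes a piecewise constant function of its input.

```python
import numpy as np

def threshold(z):
    # Linear threshold (Heaviside) activation: 1 where the pre-activation is positive, 0 otherwise.
    return (z > 0).astype(float)

def forward(x, weights, biases):
    # Hidden layers use the threshold activation; the output layer is a plain affine map.
    for W, b in zip(weights[:-1], biases[:-1]):
        x = threshold(x @ W + b)
    return x @ weights[-1] + biases[-1]

# Illustrative two-hidden-layer network mapping R^2 -> R with random weights.
rng = np.random.default_rng(0)
weights = [rng.standard_normal((2, 4)),
           rng.standard_normal((4, 3)),
           rng.standard_normal((3, 1))]
biases = [rng.standard_normal(4),
          rng.standard_normal(3),
          rng.standard_normal(1)]

x = np.array([[0.5, -1.0]])
print(forward(x, weights, biases))  # a piecewise constant function of x
```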
