DANTE: Deep AlterNations for Training nEural networks

02/01/2019
by Sneha Kudugunta, et al.

We present DANTE, a novel method for training neural networks using the alternating minimization principle. DANTE provides an alternative to the gradient-based backpropagation techniques traditionally used to train deep networks. It utilizes an adaptation of quasi-convexity to cast training a neural network as a bi-quasi-convex optimization problem. We show that for neural network configurations with both differentiable (e.g., sigmoid) and non-differentiable (e.g., ReLU) activation functions, the alternations can be performed very effectively. DANTE can also be extended to networks with multiple hidden layers. In experiments on standard datasets, neural networks trained using the proposed method were found to be promising and competitive with traditional backpropagation, both in terms of solution quality and training speed.
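For intuition, the sketch below illustrates the alternating-minimization idea on a one-hidden-layer sigmoid network: the loss is minimized over one layer's weights while the other layer is held fixed, and then the roles are swapped. This is a toy NumPy illustration under assumed choices (squared loss, plain gradient steps per phase, illustrative sizes and learning rate), not the authors' implementation; DANTE solves each quasi-convex subproblem with specialized updates that this sketch does not reproduce.

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.standard_normal((256, 10))                    # toy inputs
    y = (X.sum(axis=1, keepdims=True) > 0).astype(float)  # toy binary targets

    W1 = 0.1 * rng.standard_normal((10, 16))              # hidden-layer weights
    W2 = 0.1 * rng.standard_normal((16, 1))               # output-layer weights

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    lr, inner_steps = 0.5, 5
    for alternation in range(100):
        # Phase 1: hold W1 fixed, descend the squared loss in W2 only.
        for _ in range(inner_steps):
            H = sigmoid(X @ W1)
            out = sigmoid(H @ W2)
            delta = (out - y) * out * (1 - out)           # d(loss)/d(pre-activation)
            W2 -= lr * H.T @ delta / len(X)
        # Phase 2: hold W2 fixed, descend the same loss in W1 only.
        for _ in range(inner_steps):
            H = sigmoid(X @ W1)
            out = sigmoid(H @ W2)
            delta = (out - y) * out * (1 - out)
            grad_H = (delta @ W2.T) * H * (1 - H)         # error through the fixed layer
            W1 -= lr * X.T @ grad_H / len(X)

Note that with one layer held fixed, each phase reduces to fitting a single layer, a generalized-linear-model-style problem, which is the structure the paper's quasi-convexity argument exploits in each alternation.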


Related research

03/06/2023 · Globally Optimal Training of Neural Networks with Threshold Activation Functions
Threshold activation functions are highly preferable in neural networks ...

10/05/2014 · On the Computational Efficiency of Training Neural Networks
It is well-known that neural networks are computationally hard to train....

03/25/2021 · Training Neural Networks Using the Property of Negative Feedback to Inverse a Function
With high forward gain, a negative feedback system has the ability to pe...

07/12/2018 · Training Neural Networks Using Features Replay
Training a neural network using backpropagation algorithm requires passi...

02/21/2023 · Unification of popular artificial neural network activation functions
We present a unified representation of the most popular neural network a...

04/29/2022 · Wide and Deep Neural Networks Achieve Optimality for Classification
While neural networks are used for classification tasks across domains, ...

05/24/2019 · Greedy Shallow Networks: A New Approach for Constructing and Training Neural Networks
We present a novel greedy approach to obtain a single layer neural netwo...
