Distributed Training of Deep Neural Networks with Theoretical Analysis: Under SSP Setting

12/09/2015
by Abhimanu Kumar, et al.

We propose a distributed approach to training deep neural networks (DNNs) that is theoretically guaranteed to converge and scales well empirically: close to 6 times faster on an instance of the ImageNet dataset when run with 6 machines. The proposed scheme is close to optimally scalable in the number of machines and is guaranteed to converge to the same optima as the undistributed setting. We demonstrate the convergence and scalability of the distributed setting empirically across different datasets (TIMIT and ImageNet) and machine learning tasks (image classification and phoneme extraction). The convergence analysis provides novel insights into this complex learning scheme, including: 1) layerwise convergence, and 2) convergence of the weights in probability.
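
The scheme trains under the stale synchronous parallel (SSP) consistency model, in which each worker computes on a locally cached copy of the parameters that may lag the freshest server copy, but never by more than a fixed staleness bound. Below is a minimal single-process sketch of SSP-style SGD to illustrate that consistency rule; it uses a toy least-squares model in place of a DNN, and the worker count, staleness bound, and learning rate are illustrative assumptions, not values from the paper.

    # A minimal, single-process sketch of SSP-style SGD (illustrative only).
    # A toy least-squares model stands in for a DNN; all constants are assumptions.
    import numpy as np

    STALENESS = 2    # SSP bound s: a worker may run at most s clocks ahead of the slowest
    N_WORKERS = 4
    N_CLOCKS = 50    # clocks (local iterations) each worker must complete
    LR = 0.1

    rng = np.random.default_rng(0)
    d = 5
    w_true = rng.normal(size=d)

    # Each worker holds a private shard of the data.
    shards = []
    for _ in range(N_WORKERS):
        X = rng.normal(size=(100, d))
        y = X @ w_true + 0.01 * rng.normal(size=100)
        shards.append((X, y))

    w_global = np.zeros(d)                                 # parameter-server copy
    clocks = [0] * N_WORKERS                               # per-worker clock
    cached = [w_global.copy() for _ in range(N_WORKERS)]   # possibly stale local copies

    def grad(w, X, y):
        # Gradient of the mean squared error 0.5 * ||Xw - y||^2 / n.
        return X.T @ (X @ w - y) / len(y)

    while min(clocks) < N_CLOCKS:
        k = rng.integers(N_WORKERS)          # scheduler picks an arbitrary worker
        if clocks[k] > min(clocks) + STALENESS:
            continue                         # SSP: too far ahead, wait for stragglers
        Xk, yk = shards[k]
        g = grad(cached[k], Xk, yk)          # gradient on the (possibly stale) copy
        w_global -= LR * g                   # push the update to the server
        cached[k] = w_global.copy()          # pull fresh parameters
        clocks[k] += 1

    print("distance to optimum:", np.linalg.norm(w_global - w_true))

The staleness bound trades synchronization cost against freshness: a larger bound lets fast workers run further ahead with fewer stalls, while a bound of zero recovers fully synchronous training. The paper's contribution is the analysis showing that, despite these stale reads, the distributed scheme still converges to the same optima as the undistributed setting.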
