Oscars: Adaptive Semi-Synchronous Parallel Model for Distributed Deep Learning with Global View
Deep learning has become indispensable in applications such as face recognition and natural language processing, but training deep models remains a challenge. In recent years, the complexity of training data and models has grown explosively, so training has increasingly shifted to distributed settings. The classical synchronous strategy guarantees accuracy, but its frequent communication slows training; the asynchronous strategy trains quickly but cannot guarantee accuracy. Neither works efficiently on heterogeneous clusters: on the one hand, stragglers cause serious waste of resources, and on the other, frequent communication further slows training. This paper therefore proposes a semi-synchronous training strategy based on local-SGD that effectively improves resource utilization on heterogeneous clusters and reduces communication overhead, accelerating training while preserving model accuracy.
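The abstract only names local-SGD as the building block, so the following is a minimal sketch of that pattern, not the paper's Oscars algorithm: each worker runs several local SGD steps without communicating, then the local models are averaged at a synchronization point. The per-worker step budgets, the toy objective, and all names here are hypothetical illustrations of how uneven local work can absorb heterogeneity.

```python
import numpy as np

# Illustrative local-SGD loop on a toy least-squares objective
# (assumed setup, not the paper's method). Each worker starts from
# the global model, takes its own number of local SGD steps, and
# the results are averaged once per communication round.

rng = np.random.default_rng(0)
dim = 10
w_global = np.zeros(dim)

# Hypothetical per-worker local step budgets: a heterogeneity-aware
# scheme would give slower workers fewer local steps so that all
# workers reach the synchronization point at roughly the same time.
local_steps = [8, 4, 2]   # fast, medium, slow worker
lr = 0.02

def worker_grad(w, seed):
    """Stochastic gradient of the toy objective (x @ w - y)^2."""
    g = np.random.default_rng(seed)
    x = g.normal(size=dim)
    y = x @ np.ones(dim)              # ground-truth weights are all ones
    return 2.0 * (x @ w - y) * x

for rnd in range(50):                 # communication rounds
    local_models = []
    for wid, steps in enumerate(local_steps):
        w = w_global.copy()           # start from the global model
        for s in range(steps):        # local steps, no communication
            w -= lr * worker_grad(w, seed=rnd * 1000 + wid * 100 + s)
        local_models.append(w)
    # Synchronization point: plain averaging; weighting by data size
    # or by local step count is another common design choice.
    w_global = np.mean(local_models, axis=0)

print("distance to optimum:", np.linalg.norm(w_global - np.ones(dim)))
```

Compared with fully synchronous SGD, this pattern communicates once every few local steps instead of every step, which is the communication saving the abstract refers to; how the step budgets adapt to worker speed is the part specific to the proposed semi-synchronous strategy.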