DynaComm: Accelerating Distributed CNN Training between Edges and Clouds through Dynamic Communication Scheduling

01/20/2021
by   Shangming Cai, et al.

To reduce uplink bandwidth usage and address privacy concerns, deep learning at the network edge has become an emerging research topic. Typically, edge devices collaboratively train a shared model on data generated in real time, coordinated through the Parameter Server framework. Although the edge devices share the computing workload, distributed training over edge networks remains time-consuming because parameters and gradients must be transmitted between parameter servers and edge devices. Focusing on accelerating distributed training of Convolutional Neural Networks (CNNs) at the network edge, we present DynaComm, a novel scheduler that dynamically decomposes each transmission procedure into several segments to achieve optimal overlapping of communications and computations at run time. Through experiments, we verify that DynaComm achieves optimal scheduling in all cases compared to competing strategies, while model accuracy remains unaffected.
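The core idea, decomposing a transmission into segments so that computation can begin before the full transfer completes, can be illustrated with a toy pipeline model. This is a minimal sketch under simplifying assumptions (equal-sized segments, computation spread evenly across segments), not the paper's actual scheduling algorithm:

```python
def makespan(total_comm, total_comp, num_segments):
    """Toy pipeline model: a parameter transfer of duration `total_comm`
    is split into `num_segments` equal segments sent back to back.
    Computation for segment i can start once that segment has arrived
    and the previous segment's computation has finished."""
    seg_time = total_comm / num_segments
    per_seg_comp = total_comp / num_segments  # assumption: evenly split
    finish = 0.0
    for i in range(num_segments):
        arrive = (i + 1) * seg_time          # segment i fully received
        start = max(arrive, finish)          # wait for data and prior work
        finish = start + per_seg_comp
    return finish

# With one segment there is no overlap: total time = comm + comp.
# With more segments, communication hides behind computation.
print(makespan(4.0, 4.0, 1))  # → 8.0
print(makespan(4.0, 4.0, 4))  # → 5.0
```

In this model, finer segmentation drives the makespan toward max(comm, comp) plus one segment's latency, which is why choosing the number of segments dynamically per layer matters.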


research
01/27/2022

Data-Quality Based Scheduling for Federated Edge Learning

FEderated Edge Learning (FEEL) has emerged as a leading technique for pr...
research
03/22/2020

HierTrain: Fast Hierarchical Edge AI Learning with Hybrid Parallelism in Mobile-Edge-Cloud Computing

Nowadays, deep neural networks (DNNs) are the core enablers for many eme...
research
02/18/2021

Data-Aware Device Scheduling for Federated Edge Learning

Federated Edge Learning (FEEL) involves the collaborative training of ma...
research
02/07/2020

Delay-Optimal Distributed Edge Computing in Wireless Edge Networks

By integrating edge computing with parallel computing, distributed edge ...
research
11/22/2020

Wireless Distributed Edge Learning: How Many Edge Devices Do We Need?

We consider distributed machine learning at the wireless edge, where a p...
research
10/05/2019

Data-Importance Aware User Scheduling for Communication-Efficient Edge Machine Learning

With the prevalence of intelligent mobile applications, edge learning is...
research
07/22/2022

Distributed Deep Learning Inference Acceleration using Seamless Collaboration in Edge Computing

This paper studies inference acceleration using distributed convolutiona...
