Achieving Efficient Distributed Machine Learning Using a Novel Non-Linear Class of Aggregation Functions

by Haizhou Du et al.

Distributed machine learning (DML) over time-varying networks can be an enabler for emerging decentralized ML applications such as autonomous driving and drone fleets. However, the weighted arithmetic mean commonly used as the model aggregation function in existing DML systems can result in high model loss, low model accuracy, and slow convergence over time-varying networks. To address this issue, in this paper we propose a novel non-linear class of model aggregation functions that achieves efficient DML over time-varying networks. Instead of taking a linear aggregation of neighboring models, as most existing studies do, our mechanism uses a non-linear aggregation, the weighted power-p mean (WPM), to combine the local models of neighbors. The subsequent optimization steps are taken using mirror descent defined by a Bregman divergence, which preserves convergence to optimality. In this paper, we analyze properties of the WPM and rigorously prove the convergence properties of our aggregation mechanism. Additionally, through extensive experiments, we show that when p > 1, our design significantly improves the convergence speed of the model and the scalability of DML under time-varying networks compared with arithmetic mean aggregation functions, with little additional computation overhead.
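To make the aggregation rule concrete, the following is a minimal sketch of weighted power-p mean (WPM) aggregation over neighbor models, written from the standard definition of the weighted power mean, M_p(x; w) = (Σ_i w_i x_i^p)^(1/p). The function name, the example values, and the restriction to nonnegative parameters are illustrative assumptions, not the paper's implementation (which couples WPM with mirror descent in the Bregman-divergence sense).

```python
import numpy as np

def weighted_power_mean(models, weights, p):
    """Weighted power-p mean (WPM) aggregation of local models.

    models:  array of shape (n_neighbors, n_params); entries assumed
             nonnegative for this illustration, so x**p is well defined.
    weights: nonnegative aggregation weights summing to 1.
    p:       power parameter; p = 1 recovers the weighted arithmetic mean.
    """
    models = np.asarray(models, dtype=float)
    weights = np.asarray(weights, dtype=float)
    # Elementwise p-th power, weighted sum across neighbors, then 1/p root.
    return (weights @ models**p) ** (1.0 / p)

# Hypothetical example: three neighbor models, two parameters each.
models = [[1.0, 4.0], [2.0, 1.0], [3.0, 2.0]]
weights = [0.5, 0.25, 0.25]

wpm_1 = weighted_power_mean(models, weights, p=1)  # arithmetic mean: [1.75, 2.75]
wpm_2 = weighted_power_mean(models, weights, p=2)  # quadratic mean, >= wpm_1
```

By the power mean inequality, taking p > 1 yields an aggregate no smaller (elementwise) than the arithmetic mean of the same models, which is the regime the paper's experiments focus on.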

