DJAM: distributed Jacobi asynchronous method for learning personal models

03/26/2018
by Inês Almeida, et al.

Processing data collected by a network of agents often boils down to solving an optimization problem. The distributed nature of these problems calls for methods that are, themselves, distributed. While most collaborative learning problems require agents to reach a common (or consensus) model, there are situations in which the consensus solution may not be optimal. For instance, agents may want to reach a compromise between agreeing with their neighbors and minimizing a personal loss function. We present DJAM, a Jacobi-like distributed algorithm for learning personalized models. This method is implementation-friendly: it has no hyperparameters that need tuning, it is asynchronous, and its updates only require single-neighbor interactions. We prove that DJAM converges with probability one to the solution, provided that the personal loss functions are strongly convex and have Lipschitz gradient. We then give evidence that DJAM is on par with state-of-the-art methods: our method reaches a solution with error similar to the error of a carefully tuned ADMM in about the same number of single-neighbor interactions.
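The abstract does not spell out the update rule, but the ingredients it names (asynchronous wake-ups, Jacobi-style exact local minimization, a compromise between a personal loss and agreement with neighbors) can be illustrated with a minimal sketch. Below, each agent holds a scalar model with a hypothetical quadratic personal loss f_i(x) = (x − θ_i)², and a waking agent exactly minimizes its personal loss plus a quadratic coupling to its neighbors' current models; the coupling weight `rho`, the graph, and the losses are illustrative assumptions, not the paper's exact formulation.

```python
import random

def run_async_jacobi(theta, neighbors, rho=1.0, n_updates=5000, seed=0):
    """Asynchronous Jacobi-style sketch (hypothetical, not the exact DJAM update).

    Each update exactly minimizes, over x_i, the local objective
        (x_i - theta[i])**2 + rho * sum_{j in N(i)} (x_i - x[j])**2,
    which has the closed-form solution used below.
    """
    rng = random.Random(seed)
    x = list(theta)  # start each agent at its personal minimizer
    n = len(theta)
    for _ in range(n_updates):
        i = rng.randrange(n)  # a random agent wakes up (asynchronous)
        s = sum(x[j] for j in neighbors[i])  # latest neighbor models
        x[i] = (theta[i] + rho * s) / (1.0 + rho * len(neighbors[i]))
    return x

# Toy 3-agent path graph 0 - 1 - 2 with distinct personal targets.
theta = [0.0, 1.0, 2.0]
neighbors = {0: [1], 1: [0, 2], 2: [1]}
models = run_async_jacobi(theta, neighbors)
```

On this toy problem the iterates settle at [0.5, 1.0, 1.5]: the agents do not reach consensus, but each model is pulled toward its neighbors, which is exactly the personalization-versus-agreement compromise the abstract describes. Since each update is exact coordinate minimization of a strongly convex quadratic, convergence here follows from standard coordinate-descent arguments, consistent with the strong-convexity assumption in the paper's result.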


