FedBC: Calibrating Global and Local Models via Federated Learning Beyond Consensus

by Amrit Singh Bedi, et al.

In federated learning (FL), the objective of collaboratively learning a global model by aggregating model updates across devices tends to oppose the goal of personalization via local information. In this work, we calibrate this tradeoff quantitatively through a multi-criterion optimization-based framework, which we cast as a constrained program: each device's objective is its local loss, which it seeks to minimize while satisfying nonlinear constraints that quantify the proximity between its local model and the global model. By considering the Lagrangian relaxation of this problem, we develop an algorithm that allows each node to minimize its local component of the Lagrangian through queries to a first-order gradient oracle. The server then executes Lagrange multiplier ascent steps followed by a Lagrange multiplier-weighted averaging step. We call this instantiation of the primal-dual method Federated Learning Beyond Consensus (FedBC). Theoretically, we establish that FedBC converges to a first-order stationary point at rates that match the state of the art, up to an additional error term that depends on the tolerance parameter arising from the proximity constraints. Overall, the analysis is a novel characterization of primal-dual methods applied to non-convex saddle point problems with nonlinear constraints. Finally, we demonstrate that FedBC balances global and local model test accuracy across a suite of datasets (Synthetic, MNIST, CIFAR-10, Shakespeare), achieving performance competitive with the state of the art.
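The primal-dual round described above can be sketched on a toy problem. The sketch below is an illustrative assumption, not the paper's exact pseudocode: it uses synthetic quadratic local losses, squared-distance proximity constraints with a single tolerance `gamma`, and hypothetical step sizes. Each device runs gradient descent on its local Lagrangian; the server then performs dual (Lagrange multiplier) ascent on the constraint violations and a multiplier-weighted average of the local models.

```python
import numpy as np

# Illustrative FedBC-style round on toy quadratic objectives (all names,
# losses, and step sizes here are assumptions for the sketch).
# Device i solves: min_{w_i} f_i(w_i)  s.t.  ||w_i - w||^2 <= gamma,
# via the Lagrangian L_i(w_i, lam_i) = f_i(w_i) + lam_i * (||w_i - w||^2 - gamma).

rng = np.random.default_rng(0)
d, n_devices = 5, 4
targets = rng.normal(size=(n_devices, d))      # f_i(w) = 0.5 * ||w - b_i||^2
grad_f = lambda i, w: w - targets[i]

w_global = np.zeros(d)
w_local = np.tile(w_global, (n_devices, 1))
lam = np.ones(n_devices)        # Lagrange multipliers (dual variables)
gamma = 0.5                     # proximity tolerance
lr_primal, lr_dual = 0.1, 0.05

for _ in range(200):
    # Devices: primal descent on the local Lagrangian via the gradient oracle.
    for i in range(n_devices):
        for _ in range(5):
            g = grad_f(i, w_local[i]) + 2.0 * lam[i] * (w_local[i] - w_global)
            w_local[i] -= lr_primal * g
    # Server: Lagrange multiplier ascent on the constraint violations ...
    viol = np.sum((w_local - w_global) ** 2, axis=1) - gamma
    lam = np.maximum(0.0, lam + lr_dual * viol)
    # ... followed by multiplier-weighted averaging of the local models.
    weights = lam + 1e-8        # small offset avoids an all-zero weighting
    w_global = (weights[:, None] * w_local).sum(axis=0) / weights.sum()
```

A small multiplier keeps a device's model close to its local optimum (personalization), while a large multiplier pulls it toward the global model; the tolerance `gamma` is the knob that calibrates this tradeoff.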



Related papers:

- Resource-Efficient and Delay-Aware Federated Learning Design under Edge Heterogeneity. "Federated learning (FL) has emerged as a popular technique for distribut..."
- FedPD: A Federated Learning Framework with Optimal Rates and Adaptivity to Non-IID Data. "Federated Learning (FL) has become a popular paradigm for learning from ..."
- Federated Composite Optimization. "Federated Learning (FL) is a distributed learning paradigm which scales ..."
- DFedADMM: Dual Constraints Controlled Model Inconsistency for Decentralized Federated Learning. "To address the communication burden issues associated with federated lea..."
- Fine-tuning is Fine in Federated Learning. "We study the performance of federated learning algorithms and their vari..."
- PersA-FL: Personalized Asynchronous Federated Learning. "We study the personalized federated learning problem under asynchronous ..."
- From Deterioration to Acceleration: A Calibration Approach to Rehabilitating Step Asynchronism in Federated Optimization. "In the setting of federated optimization, where a global model is aggreg..."
