DIAMOND: Taming Sample and Communication Complexities in Decentralized Bilevel Optimization

12/05/2022
by Peiwen Qiu, et al.

Decentralized bilevel optimization has received increasing attention recently due to its foundational role in many emerging multi-agent learning paradigms (e.g., multi-agent meta-learning and multi-agent reinforcement learning) over peer-to-peer edge networks. However, to work within the limited computation and communication capabilities of edge networks, a major challenge in developing decentralized bilevel optimization techniques is to lower their sample and communication complexities. This motivates us to develop a new decentralized bilevel optimization algorithm called DIAMOND (decentralized single-timescale stochastic approximation with momentum and gradient-tracking). The contributions of this paper are as follows: i) our DIAMOND algorithm adopts a single-loop structure rather than the natural double-loop structure of bilevel optimization, which lowers computation and implementation complexity; ii) unlike existing approaches, the DIAMOND algorithm requires no full gradient evaluations, which further reduces both sample and computational complexities; iii) through a careful integration of momentum information and gradient-tracking techniques, we show that the DIAMOND algorithm achieves 𝒪(ϵ^(-3/2)) sample and communication complexities for reaching an ϵ-stationary solution, both of which are independent of dataset sizes and significantly outperform existing works. Extensive experiments also verify our theoretical findings.
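The two ingredients the abstract highlights, momentum-based gradient recursions and gradient tracking over a mixing matrix, can be sketched in a minimal single-loop form. The snippet below is an illustrative assumption, not the paper's actual DIAMOND update: it uses simple quadratic local objectives (a single-level problem, not a bilevel one), a hand-picked ring topology, and arbitrary step sizes, purely to show how consensus mixing, a momentum buffer, and a gradient tracker interact in one loop.

```python
import numpy as np

rng = np.random.default_rng(0)
n_agents, dim = 4, 2

# Each agent i holds f_i(x) = 0.5 * ||x - b_i||^2; the minimizer of the
# average objective (1/n) * sum_i f_i is the mean of the b_i.
b = rng.normal(size=(n_agents, dim))
x_star = b.mean(axis=0)

# Doubly stochastic mixing matrix for a 4-agent ring (an assumed topology).
W = np.array([[0.50, 0.25, 0.00, 0.25],
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])

def grad(i, x):
    """Local gradient of agent i's quadratic objective."""
    return x - b[i]

alpha, beta = 0.1, 0.9            # step size and momentum weight (assumed)
x = np.zeros((n_agents, dim))     # local iterates, one row per agent
v = np.array([grad(i, x[i]) for i in range(n_agents)])  # momentum buffers
y = v.copy()                      # gradient trackers, initialized to v

for _ in range(300):
    # Consensus mixing plus a descent step along the tracked direction.
    x_new = W @ x - alpha * y
    # Momentum-style recursion on the local gradient estimates.
    v_new = np.array([grad(i, x_new[i]) + beta * (v[i] - grad(i, x[i]))
                      for i in range(n_agents)])
    # Gradient tracking: y keeps the same average as the v's at every step.
    y = W @ y + v_new - v
    x, v = x_new, v_new

# All agents should agree and sit near the global minimizer.
print(np.max(np.abs(x - x_star)))
```

A single loop updates all three sequences at once, which is the structural point the abstract makes: no inner loop resolves a subproblem to high accuracy before the outer variable moves.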

Related research

- INTERACT: Achieving Low Sample and Communication Complexities in Decentralized Bilevel Learning over Networks (07/27/2022)
- GT-STORM: Taming Sample, Communication, and Memory Complexities in Decentralized Non-Convex Learning (05/04/2021)
- PRECISION: Decentralized Constrained Min-Max Learning with Low Communication and Sample Complexities (03/05/2023)
- MDPGT: Momentum-based Decentralized Policy Gradient Tracking (12/06/2021)
- Decentralized Gossip-Based Stochastic Bilevel Optimization over Communication Networks (06/22/2022)
- Sample and Communication-Efficient Decentralized Actor-Critic Algorithms with Finite-Time Analysis (09/08/2021)
- Exploiting Structure for Optimal Multi-Agent Bayesian Decentralized Estimation (07/20/2023)
