Dynamic Regret Analysis of Safe Distributed Online Optimization for Convex and Non-convex Problems

02/23/2023
by Ting-Jui Chang, et al.

This paper addresses safe distributed online optimization over an unknown set of linear safety constraints. A network of agents aims to jointly minimize a global, time-varying function that is only partially observable to each individual agent. Agents must therefore engage in local communications to generate a safe sequence of actions competitive with the best minimizer sequence in hindsight, and the gap between the two sequences is quantified via dynamic regret. We propose distributed safe online gradient descent (D-Safe-OGD) with an exploration phase, in which all agents collaboratively estimate the constraint parameters to build estimated feasible sets, ensuring that actions selected during the subsequent optimization phase remain safe. We prove that for convex functions, D-Safe-OGD achieves a dynamic regret bound of O(T^2/3√(log T) + T^1/3 C_T^*), where C_T^* denotes the path-length of the best minimizer sequence. We further prove a dynamic regret bound of O(T^2/3√(log T) + T^2/3 C_T^*) for certain non-convex problems, which establishes the first dynamic regret bound for a safe distributed algorithm in the non-convex setting.
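The two-phase structure described above can be illustrated with a minimal single-agent sketch: an exploration phase that estimates an unknown linear constraint from noisy probes, followed by projected online gradient descent onto a margin-shrunk estimated feasible set. This is an assumption-laden toy, not the paper's D-Safe-OGD (no network of agents, constraint offset b assumed known, a simple least-squares estimator); it only shows the safe-projection mechanism.

```python
import numpy as np

rng = np.random.default_rng(0)

d = 2                                  # decision dimension
A_true = np.array([[1.0, 1.0]])        # unknown linear safety constraint: A x <= b
b_true = np.array([1.0])

# --- Exploration phase (simplified): estimate A from noisy constraint
# evaluations at random safe probe points via least squares.
X = rng.uniform(-0.5, 0.5, size=(50, d))
y = X @ A_true.T + 0.01 * rng.standard_normal((50, 1))
A_hat = np.linalg.lstsq(X, y, rcond=None)[0].T
b_hat = b_true - 0.1                   # shrink by a margin so estimation error
                                       # cannot push actions outside the true set
                                       # (b assumed known here for simplicity)

def project(x, A, b, iters=100):
    """Cyclic projection onto the half-spaces {x : a_i^T x <= b_i}."""
    for _ in range(iters):
        for a_i, b_i in zip(A, b):
            violation = a_i @ x - b_i
            if violation > 0:
                x = x - violation * a_i / (a_i @ a_i)
    return x

# --- Optimization phase: projected online gradient descent against a
# drifting quadratic loss f_t(x) = 0.5 ||x - theta_t||^2.
T = 200
x = np.zeros(d)
total_loss = 0.0
for t in range(1, T + 1):
    theta = np.array([np.sin(t / 20), np.cos(t / 20)])  # slowly moving target
    total_loss += 0.5 * np.sum((x - theta) ** 2)
    grad = x - theta
    eta = 1.0 / np.sqrt(t)
    x = project(x - eta * grad, A_hat, b_hat)
    # every played action satisfies the TRUE constraint thanks to the margin
    assert np.all(A_true @ x <= b_true + 1e-6)
```

The margin subtracted from b_hat plays the role of the conservative shrinkage that lets the estimated feasible set certify safety with respect to the true, unknown constraints; the regret analysis in the paper quantifies the price of this conservatism.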

Related research

- Safe Online Convex Optimization with Unknown Linear Safety Constraints (11/14/2021)
- Distributed Online Optimization in Dynamic Environments Using Mirror Descent (09/09/2016)
- Distributed Online Non-convex Optimization with Composite Regret (09/21/2022)
- Non-convex online learning via algorithmic equivalence (05/30/2022)
- A Distributed Online Convex Optimization Algorithm with Improved Dynamic Regret (11/12/2019)
- Online Bilevel Optimization: Regret Analysis of Online Alternating Gradient Methods (07/06/2022)
- Zeroth-order non-convex learning via hierarchical dual averaging (09/13/2021)
