Delayed Stochastic Algorithms for Distributed Weakly Convex Optimization

01/30/2023
by Wenzhi Gao, et al.

This paper studies delayed stochastic algorithms for weakly convex optimization in a distributed network with workers connected to a master node. More specifically, we consider a structured stochastic weakly convex objective function that is the composition of a convex function and a smooth nonconvex function. Recently, Xu et al. (2022) showed that an inertial stochastic subgradient method converges at a rate of 𝒪(τ/√K), which suffers a significant penalty from the maximum information delay τ. To alleviate this issue, we propose a new delayed stochastic prox-linear (DSPL) method, in which the master performs the proximal update of the parameters and the workers only need to linearly approximate the inner smooth function. Somewhat surprisingly, we show that the delays only affect the high-order term in the complexity bound and are hence negligible after a certain number of iterations. Moreover, to further improve empirical performance, we propose a delayed extrapolated prox-linear (DSEPL) method, which employs Polyak-type momentum to speed up convergence. Building on the tools developed for analyzing DSPL, we also give an improved analysis of the delayed stochastic subgradient method (DSGD). In particular, for general weakly convex problems, we show that the convergence of DSGD depends only on the expected delay.
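To make the delayed prox-linear idea concrete, here is a minimal single-process sketch, assuming a toy robust phase retrieval instance f(x) = (1/m) Σᵢ |(aᵢᵀx)² − bᵢ|, i.e. the convex function h(z) = |z| composed with the smooth nonconvex maps cᵢ(x) = (aᵢᵀx)² − bᵢ. The problem data, step size gamma, delay tau, and the FIFO buffer used to simulate staleness are all illustrative choices, not taken from the paper, and the closed-form proximal step is specific to this scalar h = |·| example.

```python
# Illustrative sketch of a delayed stochastic prox-linear style update.
# Assumed (hypothetical) objective: robust phase retrieval,
#   f(x) = (1/m) * sum_i | (a_i^T x)^2 - b_i |,
# h(z) = |z| (convex) composed with smooth nonconvex c_i(x) = (a_i^T x)^2 - b_i.
# Delays are simulated with a FIFO buffer; this is NOT the authors' code.

import collections
import numpy as np

rng = np.random.default_rng(0)

# Synthetic problem data: b_i = (a_i^T x_true)^2 plus small noise.
m, n = 200, 20
A = rng.normal(size=(m, n))
x_true = rng.normal(size=n)
b = (A @ x_true) ** 2 + 0.01 * rng.normal(size=m)

def linearize(x, i):
    """Worker step: linearize the smooth inner map c_i at a (stale) iterate x.

    Returns c_i(x) and its gradient 2 * (a_i^T x) * a_i."""
    a = A[i]
    return (a @ x) ** 2 - b[i], 2.0 * (a @ x) * a

def prox_linear_step(x_k, x_stale, ell, g, gamma):
    """Master step: solve the proximal subproblem
        min_x |ell + g^T (x - x_stale)| + (1 / (2 * gamma)) * ||x - x_k||^2
    in closed form (soft-thresholding along the direction g)."""
    u = ell + g @ (x_k - x_stale)   # linearized loss evaluated at current x_k
    G = g @ g + 1e-12               # guard against a zero gradient
    v = np.sign(u) * max(abs(u) - gamma * G, 0.0)  # prox of |.| at u
    return x_k + ((v - u) / G) * g

gamma, tau, K = 0.05, 5, 3000
x = np.zeros(n)
buffer = collections.deque()  # simulated delay: (stale iterate, sample index)

for k in range(K):
    buffer.append((x.copy(), rng.integers(m)))
    if len(buffer) > tau:                 # worker's report arrives tau steps late
        x_stale, i = buffer.popleft()
        ell, g = linearize(x_stale, i)    # worker: stale linearization of c_i
        x = prox_linear_step(x, x_stale, ell, g, gamma)  # master: prox update

print("final objective:", np.mean(np.abs((A @ x) ** 2 - b)))
```

Note how staleness enters only through the model point x_stale at which the worker linearizes, while the master anchors the proximal term at the current iterate x_k; this separation is, roughly, the structural reason the delay can be pushed into a higher-order term of the rate.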
