Incremental Methods for Weakly Convex Optimization

07/26/2019
by   Xiao Li, et al.

We consider incremental algorithms for solving weakly convex optimization problems, a broad class of (possibly nondifferentiable) nonconvex optimization problems. In this paper we analyze the incremental (sub)gradient method, the incremental proximal point algorithm, and the incremental prox-linear algorithm. We show that all three incremental algorithms converge at a rate of O(k^{-1/4}) in the weakly convex setting, which extends the convergence theory of incremental methods from convex optimization to the nondifferentiable, nonconvex regime. When the weakly convex function additionally satisfies a regularity condition called sharpness, we show that all three incremental algorithms, run with geometrically diminishing stepsizes and an appropriate initialization, converge linearly to the optimal solution set. We present experiments on robust matrix sensing and robust phase retrieval that illustrate the favorable convergence behavior of the three incremental methods.
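For illustration only, below is a minimal Python sketch (not the authors' code) of one of the three schemes described above: the incremental subgradient method with a geometrically diminishing stepsize, applied to a robust phase retrieval objective f(x) = (1/m) Σ_i |⟨a_i, x⟩² − b_i|, which is weakly convex and sharp. The function name, the stepsize parameters lam0 and rho, and the toy problem sizes are assumptions made for this sketch, not values taken from the paper.

```python
import numpy as np

def incremental_subgradient_phase_retrieval(A, b, x0, lam0=1.0, rho=0.98, epochs=50):
    """Sketch of the incremental subgradient method with a geometrically
    diminishing stepsize lam0 * rho**k, applied to robust phase retrieval:
        f(x) = (1/m) * sum_i |<a_i, x>^2 - b_i|   (weakly convex and sharp).
    One epoch sweeps cyclically through all m component functions."""
    x = x0.copy()
    m = A.shape[0]
    for k in range(epochs):
        step = lam0 * rho ** k              # geometric stepsize schedule
        for i in range(m):                  # one incremental pass over components
            r = A[i] @ x
            # subgradient of |r^2 - b_i| in x is sign(r^2 - b_i) * 2 * r * a_i
            g = np.sign(r ** 2 - b[i]) * 2.0 * r * A[i]
            x -= step * g
    return x

# Toy usage: recover a planted signal from noiseless quadratic measurements,
# starting from an initialization near the solution set.
rng = np.random.default_rng(0)
n, m = 50, 400
x_true = rng.standard_normal(n)
A = rng.standard_normal((m, n))
b = (A @ x_true) ** 2
x0 = x_true + 0.1 * rng.standard_normal(n)
x_hat = incremental_subgradient_phase_retrieval(A, b, x0)
# distance to the solution set {x_true, -x_true}
print(min(np.linalg.norm(x_hat - x_true), np.linalg.norm(x_hat + x_true)))
```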


