Faster Gradient-Free Proximal Stochastic Methods for Nonconvex Nonsmooth Optimization

02/16/2019
by Feihu Huang, et al.

The proximal gradient method plays an important role in solving many machine learning tasks, especially nonsmooth problems. In some settings, however, such as bandit models and black-box learning problems, the proximal gradient method can fail because explicit gradients are difficult or infeasible to obtain. Gradient-free (zeroth-order) methods can address these problems because they require only objective function values during optimization. Recently, the first zeroth-order proximal stochastic algorithm was proposed for nonconvex nonsmooth problems. Its convergence rate, however, is O(1/√T), which is significantly slower than the best rate O(1/T) achieved by zeroth-order stochastic algorithms on smooth nonconvex problems, where T is the number of iterations. To fill this gap, we propose a class of faster zeroth-order proximal stochastic methods based on the variance reduction techniques of SVRG and SAGA, denoted ZO-ProxSVRG and ZO-ProxSAGA, respectively. In the theoretical analysis, we address the main challenge that the zeroth-order gradient estimate is not an unbiased estimate of the true gradient, whereas unbiasedness was required in previous analyses of both SVRG and SAGA. We prove that both ZO-ProxSVRG and ZO-ProxSAGA achieve O(1/T) convergence rates. Finally, experimental results verify that our algorithms converge faster than the existing zeroth-order proximal stochastic algorithm.
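To make the setup concrete, below is a minimal Python sketch of a ZO-ProxSVRG-style update for an l1-regularized finite-sum problem min_x (1/n) sum_i f_i(x) + lam*||x||_1. It is illustrative only, not the paper's exact algorithm: the two-point Gaussian-smoothing gradient estimator, the function names (zo_grad, prox_l1, zo_prox_svrg), and all hyperparameter values are assumptions made for this sketch, and the paper's methods may differ in the estimator, mini-batching, and step-size rules.

import numpy as np

def zo_grad(f, x, mu=1e-4, num_dirs=2, rng=None):
    # Two-point zeroth-order gradient estimate via Gaussian smoothing:
    # g ~ (f(x + mu*u) - f(x)) / mu * u, averaged over directions u ~ N(0, I).
    # This estimates the gradient of a smoothed surrogate of f, so it is
    # biased with respect to the true gradient -- the key difficulty noted
    # above, since classical SVRG/SAGA analyses assume unbiasedness.
    rng = np.random.default_rng() if rng is None else rng
    g = np.zeros_like(x)
    fx = f(x)
    for _ in range(num_dirs):
        u = rng.standard_normal(x.shape)
        g += (f(x + mu * u) - fx) / mu * u
    return g / num_dirs

def prox_l1(x, t):
    # Proximal operator of t*||.||_1 (soft-thresholding); handles the
    # nonsmooth part of the objective exactly.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def zo_prox_svrg(fs, x0, eta=0.05, lam=0.01, epochs=10, inner=50, mu=1e-4, seed=0):
    # Illustrative ZO-ProxSVRG-style loop: at each epoch, form a full
    # zeroth-order gradient at a snapshot x_tilde, then take inner proximal
    # steps with the variance-reduced estimate
    #     v = g_i(x) - g_i(x_tilde) + g_full(x_tilde).
    rng = np.random.default_rng(seed)
    n = len(fs)
    x = x0.copy()
    for _ in range(epochs):
        x_tilde = x.copy()
        g_full = np.mean([zo_grad(f, x_tilde, mu, rng=rng) for f in fs], axis=0)
        for _ in range(inner):
            i = rng.integers(n)
            v = zo_grad(fs[i], x, mu, rng=rng) - zo_grad(fs[i], x_tilde, mu, rng=rng) + g_full
            x = prox_l1(x - eta * v, eta * lam)
    return x

# Toy usage: l1-regularized least squares, treating each per-sample loss as a black box.
rng = np.random.default_rng(0)
A, b = rng.standard_normal((20, 5)), rng.standard_normal(20)
fs = [lambda x, a=A[i], bi=b[i]: 0.5 * (a @ x - bi) ** 2 for i in range(20)]
x_hat = zo_prox_svrg(fs, np.zeros(5))

The snapshot-plus-correction structure is what reduces the variance of the stochastic estimate; per the paper's analysis, this is what enables the improvement from O(1/√T) to O(1/T) despite the bias of the zeroth-order estimates.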

