Convex Optimization with Nonconvex Oracles

11/07/2017
by Oren Mangoubi, et al.

In machine learning and optimization, one often wants to minimize a convex objective function F but can only evaluate a noisy approximation F̂ to it. Even though F is convex, the noise may render F̂ nonconvex, making the task of minimizing F intractable in general. As a consequence, several works in theoretical computer science, machine learning, and optimization have focused on coming up with polynomial-time algorithms to minimize F under conditions on the noise F(x) − F̂(x), such as its uniform boundedness, or on F, such as strong convexity. However, in many applications of interest, these conditions do not hold. Here we show that, if the noise has magnitude at most αF(x) + β for some α, β > 0, then there is a polynomial-time algorithm to find an approximate minimizer of F. In particular, our result allows for unbounded noise and generalizes those of Applegate and Kannan, and Zhang, Liang and Charikar, who proved similar results for the bounded-noise case, and that of Belloni et al., who assume that the noise grows in a very specific manner and that F is strongly convex. Turning our result on its head, one may also view our algorithm as minimizing a nonconvex function F̂ that is promised to be related to a convex function F as above. Our algorithm is a "simulated annealing" modification of the stochastic gradient Langevin Markov chain that gradually decreases the temperature of the chain to approach the global minimizer. Analyzing such an algorithm for the unbounded noise model and a general convex function turns out to be challenging and requires several technical ideas that might be of independent interest in deriving non-asymptotic bounds for other simulated-annealing-based algorithms.
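The abstract only describes the algorithm at a high level. For intuition, here is a minimal Python sketch of an annealed Langevin chain under the αF(x) + β noise model. The quadratic objective F, the noisy gradient oracle grad_F_hat, the logarithmic cooling schedule, and all parameter values are illustrative assumptions, not the paper's actual algorithm or analysis.

import numpy as np

# Toy setup for the paper's noise model: a convex F with an oracle whose
# error has magnitude on the order of alpha*F(x) + beta. All names and
# parameter values here are illustrative assumptions.
rng = np.random.default_rng(0)
alpha, beta = 0.1, 0.01

def F(x):
    # A simple convex objective, F(x) = ||x||^2, used only for illustration.
    return float(np.dot(x, x))

def grad_F_hat(x):
    # Hypothetical noisy gradient oracle: the true gradient of F corrupted
    # by noise whose magnitude scales like alpha*F(x) + beta.
    noise = rng.uniform(-1.0, 1.0, size=x.shape) * (alpha * F(x) + beta)
    return 2.0 * x + noise

def annealed_langevin(x0, n_steps=20000, eta=1e-3, T0=1.0):
    # Langevin-type chain whose temperature T is gradually lowered
    # (simulated annealing). At a fixed temperature T the chain roughly
    # samples exp(-F/T), so cooling concentrates it near the global
    # minimizer of F even though the noisy oracle is nonconvex.
    x = np.asarray(x0, dtype=float)
    for t in range(1, n_steps + 1):
        T = T0 / np.log(np.e + t)  # one common (assumed) cooling schedule
        gaussian = rng.standard_normal(x.shape)
        x = x - eta * grad_F_hat(x) + np.sqrt(2.0 * eta * T) * gaussian
    return x

x_min = annealed_langevin([3.0, -2.0])
print("approximate minimizer:", x_min, "F(x_min):", F(x_min))

The sqrt(2ηT) scaling of the Gaussian term is what makes the chain's stationary distribution at a fixed temperature approximately proportional to exp(−F/T); lowering T then trades exploration for concentration around the minimizer.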
