Minimizing Nonconvex Population Risk from Rough Empirical Risk

03/25/2018
by Chi Jin, et al.

Population risk, the expectation of the loss under the underlying sampling distribution, is of primary interest in machine learning. However, learning algorithms only have access to the empirical risk, which is the average loss over the training examples. Although the two risks are typically guaranteed to be pointwise close, for applications with nonconvex, nonsmooth losses (such as modern deep networks), the effects of sampling can transform a well-behaved population risk into an empirical risk whose landscape is problematic for optimization. The empirical risk can be nonsmooth, and it may have many additional local minima. This paper considers a general optimization framework which aims to find approximate local minima of a smooth nonconvex function F (population risk) given only access to function values of another function f (empirical risk) that is pointwise close to F (i.e., ‖F − f‖_∞ ≤ ν). We propose a simple algorithm based on stochastic gradient descent (SGD) on a smoothed version of f that is guaranteed to find an ϵ-second-order stationary point whenever ν ≤ O(ϵ^1.5/d), thus escaping all saddle points of F and all of the additional local minima introduced by f. We also provide an almost matching lower bound showing that our SGD-based approach achieves the optimal trade-off between ν and ϵ, as well as the optimal dependence on the problem dimension d, among all algorithms making a polynomial number of queries. As a concrete example, we show that our results can be directly used to give sample complexities for learning a ReLU unit, whose empirical risk is nonsmooth.
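The core algorithmic idea, running SGD on a Gaussian-smoothed surrogate of f using only function evaluations, can be illustrated with a short sketch. The snippet below is a minimal illustration rather than the paper's exact procedure: the function name smoothed_sgd, the step size, the smoothing radius, and the toy F and f are assumptions chosen for the example. It relies on the standard Gaussian-smoothing identity ∇f_σ(x) = E[z (f(x+z) − f(x))]/σ² with z ~ N(0, σ²I) to build a zeroth-order stochastic gradient.

```python
import numpy as np

def smoothed_sgd(f, x0, sigma=0.3, eta=0.05, batch=20, iters=2000, rng=None):
    """Minimal sketch (not the authors' exact algorithm): SGD on the
    Gaussian-smoothed surrogate f_sigma(x) = E_{z ~ N(0, sigma^2 I)}[f(x + z)],
    using only function evaluations of f (zeroth-order access)."""
    rng = np.random.default_rng(0) if rng is None else rng
    x = np.asarray(x0, dtype=float)
    d = x.size
    for _ in range(iters):
        # Gaussian perturbations around the current iterate.
        z = rng.normal(scale=sigma, size=(batch, d))
        # Smoothing identity: grad f_sigma(x) = E[z (f(x+z) - f(x))] / sigma^2.
        # Subtracting f(x) is a variance-reducing baseline; the estimate stays unbiased.
        fx = f(x)
        diffs = np.array([f(x + zi) - fx for zi in z])
        grad_est = (z * diffs[:, None]).mean(axis=0) / sigma**2
        x = x - eta * grad_est  # plain SGD step on the smoothed surrogate
    return x

# Hypothetical toy usage: a smooth "population risk" F plus a small, rough
# perturbation plays the role of the pointwise-close "empirical risk" f.
F = lambda x: 0.5 * np.dot(x, x)                    # well-behaved population risk
f = lambda x: F(x) + 1e-3 * np.sum(np.sin(50 * x))  # rough, but ||F - f||_inf is tiny
x_hat = smoothed_sgd(f, x0=np.full(5, 2.0))
print(F(x_hat))  # near the population minimum F(0) = 0 despite the roughness of f
```

Because the estimator queries only f, the smoothing radius σ controls how much of the ν-scale roughness is averaged out, which is where the trade-off between ν and ϵ described in the abstract enters.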
