Second-Order Optimization for Non-Convex Machine Learning: An Empirical Study
The resurgence of deep learning as a highly effective machine learning paradigm has revived the old optimization question of non-convexity. Indeed, the challenges posed by the large-scale nature of many modern machine learning applications are severely exacerbated by the inherent non-convexity of the underlying models. Efficient optimization algorithms that can be applied effectively to such large-scale, non-convex learning problems are therefore highly desirable. Yet the bulk of research has been almost entirely restricted to the class of first-order algorithms. This is despite the fact that employing curvature information, e.g., in the form of the Hessian, can yield methods with desirable convergence properties for non-convex problems, such as avoiding saddle points and converging to local minima. The conventional wisdom in the machine learning community is that applying second-order methods, i.e., those that employ Hessian as well as gradient information, can be highly inefficient. Consequently, first-order algorithms, such as stochastic gradient descent (SGD), have taken center stage for solving such machine learning problems. Here, we aim to address this misconception by considering efficient and stochastic variants of Newton's method, namely sub-sampled trust-region and cubic regularization, whose theoretical convergence properties were recently established in [Xu 2017]. Using a variety of experiments, we empirically evaluate the performance of these methods for solving non-convex machine learning problems. In doing so, we highlight the shortcomings of first-order methods, e.g., high sensitivity to hyper-parameters such as the step size and undesirable behavior near saddle points, and showcase the advantages of employing curvature information as an effective remedy.
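To make the idea of a sub-sampled Newton-type step concrete, the following is a minimal sketch of a cubic-regularized iteration on a synthetic non-convex finite-sum loss. It is not the scheme analyzed in [Xu 2017]: the toy loss, the batch size, the fixed regularization parameter sigma, and the crude gradient-descent solver for the cubic sub-problem are all illustrative assumptions. The point is only to show the structure: a full gradient, a Hessian estimated from a random sub-sample, and an update obtained by approximately minimizing the cubic-regularized model.

```python
# Sketch of a sub-sampled cubic-regularization step (illustrative, not the paper's exact method).
import numpy as np

rng = np.random.default_rng(0)
n, d = 1000, 20
A = rng.standard_normal((n, d))
b = rng.standard_normal(n)

def phi(t):    # non-convex robust loss applied to residuals
    return t**2 / (1.0 + t**2)

def dphi(t):
    return 2.0 * t / (1.0 + t**2)**2

def d2phi(t):
    return (2.0 - 6.0 * t**2) / (1.0 + t**2)**3

def full_gradient(w):
    r = A @ w - b
    return A.T @ dphi(r) / n

def subsampled_hessian(w, batch):
    # Hessian estimated from a random mini-batch of samples
    r = A[batch] @ w - b[batch]
    return (A[batch].T * d2phi(r)) @ A[batch] / len(batch)

def cubic_step(g, H, sigma, iters=200, lr=0.1):
    # Approximately minimize m(s) = g^T s + 0.5 s^T H s + (sigma/3) ||s||^3
    # by plain gradient descent on the model (a deliberately simple sub-solver).
    s = np.zeros_like(g)
    for _ in range(iters):
        grad_m = g + H @ s + sigma * np.linalg.norm(s) * s
        s -= lr * grad_m
    return s

w = rng.standard_normal(d)
sigma = 1.0                                            # fixed cubic penalty (assumed)
for it in range(20):
    g = full_gradient(w)
    batch = rng.choice(n, size=100, replace=False)     # sub-sample for the Hessian
    H = subsampled_hessian(w, batch)
    w = w + cubic_step(g, H, sigma)
    loss = phi(A @ w - b).mean()
    print(f"iter {it:2d}  ||g|| = {np.linalg.norm(g):.3e}  loss = {loss:.4f}")
```

Because the cubic term bounds the step even when the sub-sampled Hessian is indefinite, no step-size tuning is needed at the outer level; in practice the sub-problem is solved with more sophisticated routines (e.g., Lanczos- or CG-based solvers) and sigma is adapted across iterations.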