Mirror Natural Evolution Strategies

08/01/2023
by Haishan Ye, et al.
Zeroth-order optimization has been widely used in machine learning applications. However, the theoretical study of zeroth-order optimization has focused on algorithms that approximate (first-order) gradients using (zeroth-order) function-value differences along random directions. The theory of algorithms that approximate both gradient and Hessian information by zeroth-order queries is much less studied. In this paper, we focus on the theory of zeroth-order optimization that utilizes both the first-order and second-order information approximated by zeroth-order queries. We first propose a novel reparameterized objective function with parameters (μ, Σ). This reparameterized objective function attains its optimum at the minimizer and at the Hessian inverse of the original objective function, respectively, up to small perturbations. Accordingly, we propose a new algorithm to minimize our reparameterized objective, which we call MiNES (mirror descent natural evolution strategy). We show that the estimated covariance matrix of MiNES converges to the inverse of the Hessian matrix of the objective function at a rate of 𝒪(1/k), where k is the iteration number and 𝒪(·) hides constants and logarithmic terms. We also provide an explicit convergence rate for MiNES and show how the covariance matrix promotes convergence.
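
As a rough illustration of the kind of update the abstract describes, the following minimal NumPy sketch estimates the gradient and Hessian of a toy quadratic purely from function values of a Gaussian search distribution N(μ, Σ), takes a natural-gradient-style step on μ, and pulls the precision matrix Σ⁻¹ toward the estimated Hessian. The test function, step sizes, sample size, and the eigenvalue clipping are assumptions made for this sketch, not the authors' MiNES implementation.

    import numpy as np

    rng = np.random.default_rng(0)
    d = 5
    A = np.diag(np.arange(1.0, d + 1.0))   # Hessian of the toy quadratic (assumed)
    b = np.ones(d)

    def f(x):
        # Toy objective: convex quadratic with Hessian A and minimizer A^{-1} b.
        return 0.5 * x @ A @ x - b @ x

    mu = np.zeros(d)             # mean of the search distribution
    Sigma = np.eye(d)            # covariance of the search distribution
    eta_mu, eta_s = 0.05, 0.05   # step sizes (assumed values)
    n = 64                       # zeroth-order query pairs per iteration

    for k in range(500):
        Sigma_inv = np.linalg.inv(Sigma)
        L = np.linalg.cholesky(Sigma)
        Z = rng.standard_normal((n, d)) @ L.T      # samples z_i ~ N(0, Sigma)
        fp = np.array([f(mu + z) for z in Z])
        fm = np.array([f(mu - z) for z in Z])
        f0 = f(mu)

        # Antithetic zeroth-order gradient estimate:
        # E[(f(mu+z) - f(mu-z))/2 * Sigma^{-1} z] approximates grad f(mu).
        grad = (0.5 * (fp - fm)[:, None] * (Z @ Sigma_inv)).mean(axis=0)

        # Zeroth-order Hessian estimate via the second-order Stein identity.
        H = np.zeros((d, d))
        for i in range(n):
            w = Sigma_inv @ Z[i]
            H += (fp[i] + fm[i] - 2.0 * f0) * (np.outer(w, w) - Sigma_inv)
        H /= 2.0 * n

        # Natural-gradient-style mean step; mirror-descent-style precision step
        # pulling Sigma^{-1} toward the estimated Hessian.
        mu = mu - eta_mu * Sigma @ grad
        M = (1.0 - eta_s) * Sigma_inv + eta_s * H
        eigval, V = np.linalg.eigh(0.5 * (M + M.T))
        eigval = np.clip(eigval, 1e-6, None)       # keep the precision matrix PD
        Sigma = (V / eigval) @ V.T                 # invert the clipped precision

    print("mu ~", np.round(mu, 3))                 # true minimizer: 1/1, 1/2, ..., 1/5
    print("diag(Sigma^{-1}) ~", np.round(np.diag(np.linalg.inv(Sigma)), 2))  # ~ diag(A)

On this quadratic, Σ⁻¹ drifts toward the true Hessian A, which is the behavior the abstract's 𝒪(1/k) covariance result formalizes; preconditioning the mean step by Σ is what lets the covariance "promote" convergence.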
