Consensus-based optimization methods converge globally in mean-field law

03/28/2021
by Massimo Fornasier, et al.

In this paper we study consensus-based optimization (CBO), which is a multi-agent metaheuristic derivative-free optimization method that can globally minimize nonconvex nonsmooth functions and is amenable to theoretical analysis. Based on an experimentally supported intuition that, on average, CBO performs a gradient descent of the squared Euclidean distance to the global minimizer, we derive a novel technique for proving convergence to the global minimizer in mean-field law for a rich class of objective functions. The result unveils internal mechanisms of CBO that are responsible for the success of the method. In particular, we prove that CBO performs a convexification of a very large class of optimization problems as the number of optimizing agents goes to infinity. Furthermore, we improve prior analyses by requiring minimal assumptions about the initialization of the method and by covering objectives that are merely locally Lipschitz continuous. As a by-product of the analysis, we establish a quantitative nonasymptotic Laplace principle, which may be of independent interest.
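
To make the description concrete, the sketch below shows the standard isotropic CBO particle dynamics as commonly presented in the CBO literature: each agent drifts toward a Gibbs-weighted consensus point and diffuses with a noise strength proportional to its distance from that point. This is a minimal illustration under those standard assumptions, not the exact numerical scheme analyzed in the paper; the function name `cbo_minimize`, the parameter defaults, and the Rastrigin test objective are illustrative choices.

```python
import numpy as np

def cbo_minimize(f, dim, n_particles=100, n_steps=1000, dt=0.01,
                 lam=1.0, sigma=1.0, alpha=30.0, seed=0):
    """Minimal sketch of isotropic consensus-based optimization (illustrative).

    f          : objective R^dim -> R, evaluated row-wise on an (N, dim) array
    alpha      : weight exponent; large alpha concentrates the consensus
                 point near the current best agent (Laplace principle)
    lam, sigma : drift and diffusion strengths of the agent dynamics
    """
    rng = np.random.default_rng(seed)
    X = rng.uniform(-3.0, 3.0, size=(n_particles, dim))  # initial ensemble

    for _ in range(n_steps):
        fx = f(X)
        # Gibbs-type weights; subtracting the minimum improves numerical stability
        w = np.exp(-alpha * (fx - fx.min()))
        v_alpha = (w[:, None] * X).sum(axis=0) / w.sum()  # consensus point

        diff = X - v_alpha
        noise = rng.standard_normal(X.shape)
        # Euler-Maruyama step: drift toward consensus + distance-scaled diffusion
        X = X - lam * dt * diff + sigma * np.sqrt(dt) * np.linalg.norm(
            diff, axis=1, keepdims=True) * noise

    return v_alpha

# Example: nonconvex Rastrigin function with global minimizer at the origin
if __name__ == "__main__":
    rastrigin = lambda X: 10 * X.shape[1] + np.sum(
        X**2 - 10 * np.cos(2 * np.pi * X), axis=1)
    print(cbo_minimize(rastrigin, dim=2))
```

In this sketch the consensus point plays the role of a proxy for the global minimizer: as alpha grows and the number of agents increases, the weighted mean concentrates near the best-seen region, which is the mechanism the paper's mean-field analysis makes rigorous.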
