Accounting for Gaussian Process Imprecision in Bayesian Optimization
Bayesian optimization (BO) with Gaussian processes (GPs) as surrogate models is widely used to optimize analytically unknown and expensive-to-evaluate functions. In this paper, we propose Prior-mean-RObust Bayesian Optimization (PROBO), which outperforms classical BO on certain classes of problems. First, we study the effect of the GP prior specification on classical BO's convergence. We find that, among all prior components, the mean parameters have the strongest influence on convergence. In response to this result, we introduce PROBO as a generalization of BO that aims to make the method more robust to misspecification of the prior mean parameters. This is achieved by explicitly accounting for GP imprecision via a prior near-ignorance model. At the heart of this is a novel acquisition function, the generalized lower confidence bound (GLCB). We test our approach against classical BO on a real-world problem from materials science and observe that PROBO converges faster. Further experiments on multimodal and wiggly target functions confirm the superiority of our method.
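The abstract names the GLCB but does not state its formula. Purely as an illustration, the sketch below assumes a GLCB that extends the classical lower confidence bound, mu(x) - tau * sigma(x) (minimization convention), with a term rewarding prior-induced imprecision, measured as the width between the posterior means obtained under the most optimistic and most pessimistic prior mean functions in the near-ignorance set. All names here (glcb, tau, rho, mu_low, mu_high) and the stand-in data are assumptions for demonstration, not the paper's actual definitions.

```python
import numpy as np

def glcb(mu, sigma, mu_low, mu_high, tau=1.0, rho=1.0):
    """Sketch of a generalized lower confidence bound (minimization).

    mu, sigma       : GP posterior mean and std. dev. at candidate points
    mu_low, mu_high : posterior means under the extreme prior mean
                      functions of the near-ignorance set (assumed inputs)
    tau             : exploration weight, as in the classical LCB
    rho             : weight on prior-induced imprecision (hypothetical)
    """
    # Classical LCB term: posterior mean minus tau-weighted std. dev.
    lcb = mu - tau * sigma
    # Prior-induced imprecision: width of the band of posterior means
    # swept out by the set of admissible prior mean functions.
    imprecision = mu_high - mu_low
    # Subtracting rho * imprecision steers the search toward points
    # where the choice of prior mean matters most (extra exploration).
    return lcb - rho * imprecision

# Hypothetical usage on a grid of candidate points with stand-in values:
candidates = np.linspace(0.0, 1.0, 101)
mu = np.sin(3 * candidates)             # stand-in posterior mean
sigma = 0.1 + 0.2 * candidates          # stand-in posterior std. dev.
mu_low = mu - 0.05                      # stand-in pessimistic posterior mean
mu_high = mu + 0.05                     # stand-in optimistic posterior mean

scores = glcb(mu, sigma, mu_low, mu_high, tau=2.0, rho=1.0)
x_next = candidates[np.argmin(scores)]  # next point to evaluate
```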