Tight Regret Bounds for Bayesian Optimization in One Dimension

05/30/2018
by Jonathan Scarlett, et al.

We consider the problem of Bayesian optimization (BO) in one dimension, under a Gaussian process prior and Gaussian sampling noise. We provide a theoretical analysis showing that, under fairly mild technical assumptions on the kernel, the best possible cumulative regret up to time T behaves as Ω(√T) and O(√(T log T)). This gives a tight characterization up to a √(log T) factor, and includes the first non-trivial lower bound for noisy BO. Our assumptions are satisfied, for example, by the squared exponential and Matérn-ν kernels, with the latter requiring ν > 2. Our results certify the near-optimality of existing bounds (Srinivas et al., 2009) for the SE kernel, while proving them to be strictly suboptimal for the Matérn kernel with ν > 2.
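For context on the quantity being bounded: assuming the paper uses the standard definition of cumulative regret for BO (the abstract does not spell it out, and the domain [0,1] below is illustrative), the bounds concern

\[
R_T \;=\; \sum_{t=1}^{T} \bigl( f(x^\ast) - f(x_t) \bigr),
\qquad x^\ast \in \operatorname*{arg\,max}_{x \in [0,1]} f(x),
\]

where f is the unknown function drawn from the GP prior and x_t is the point queried at round t. The abstract's claim is then that the best achievable R_T is O(√(T log T)), while no algorithm can guarantee better than Ω(√T).

As a concrete illustration of the setting (not the algorithm analyzed in this paper), the sketch below runs a GP-UCB-style loop in the spirit of Srinivas et al. (2009): a one-dimensional GP prior with an SE kernel, Gaussian observation noise, a UCB acquisition rule over a discretized domain, and cumulative regret tracked against a function sampled from the prior. The kernel hyperparameters, noise level, grid size, and β schedule are all illustrative assumptions.

```python
import numpy as np

def se_kernel(x1, x2, lengthscale=0.2, variance=1.0):
    """Squared exponential kernel k(x, x') = sigma^2 exp(-(x - x')^2 / (2 l^2))."""
    d = x1[:, None] - x2[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

def gp_posterior(X, y, Xs, noise_var):
    """GP posterior mean/variance at test points Xs, given noisy observations (X, y)."""
    K = se_kernel(X, X) + noise_var * np.eye(len(X))
    Ks = se_kernel(X, Xs)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mu = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = np.diag(se_kernel(Xs, Xs)) - np.sum(v**2, axis=0)
    return mu, np.maximum(var, 1e-12)  # clip tiny negatives from round-off

rng = np.random.default_rng(0)
grid = np.linspace(0.0, 1.0, 200)

# Draw a ground-truth function from the GP prior on the grid (jitter for stability).
K_prior = se_kernel(grid, grid) + 1e-6 * np.eye(len(grid))
f_true = np.linalg.cholesky(K_prior) @ rng.standard_normal(len(grid))
f_best = f_true.max()

noise_var = 0.01
T = 100
X_obs = [grid[0]]  # arbitrary first query
y_obs = [f_true[0] + np.sqrt(noise_var) * rng.standard_normal()]
cum_regret = f_best - f_true[0]

for t in range(1, T):
    mu, var = gp_posterior(np.array(X_obs), np.array(y_obs), grid, noise_var)
    beta = 2.0 * np.log(len(grid) * (t + 1) ** 2)  # heuristic confidence width
    idx = int(np.argmax(mu + np.sqrt(beta * var)))  # UCB acquisition rule
    X_obs.append(grid[idx])
    y_obs.append(f_true[idx] + np.sqrt(noise_var) * rng.standard_normal())
    cum_regret += f_best - f_true[idx]

print(f"cumulative regret after T={T} rounds: {cum_regret:.3f}")
```

Plotting cum_regret against T over repeated runs should show growth well below linear, consistent in shape with the √T-type rates the paper characterizes; the sketch itself makes no claim about matching the paper's constants or its specific algorithm.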
