A supermartingale approach to Gaussian process based sequential design of experiments

by Julien Bect et al.

Gaussian process (GP) models have become a well-established framework for the adaptive design of costly experiments, and notably of computer experiments. GP-based sequential designs have been found practically efficient for various objectives, such as global optimization (estimating the global maximum or maximizer(s) of a function), reliability analysis (estimating a probability of failure) or the estimation of level sets and excursion sets. In this paper, we deal with convergence properties of an important class of sequential design approaches, known as stepwise uncertainty reduction (SUR) strategies. Our approach relies on the key observation that the sequence of residual uncertainty measures, in SUR strategies, is generally a supermartingale with respect to the filtration generated by the observations. We study the existence of SUR strategies and establish generic convergence results for a broad class thereof. We also introduce a special class of uncertainty measures defined in terms of regular loss functions, which makes it easier to check that our convergence results apply in particular cases. Applications of the latter include proofs of convergence for the two main SUR strategies proposed by Bect, Ginsbourger, Li, Picheny and Vazquez (Stat. Comp., 2012). To the best of our knowledge, these are the first convergence proofs for GP-based sequential design algorithms dedicated to the estimation of excursion sets and their measure. Coming to global optimization algorithms, we also show that the knowledge gradient strategy can be cast in the SUR framework with an uncertainty functional stemming from a regular loss, resulting in further convergence results. We finally establish a new proof of convergence for the expected improvement algorithm, which is the first proof for this algorithm that applies to any GP with continuous sample paths.
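To make the SUR idea concrete, here is a minimal sketch (illustrative, not the paper's algorithm) of one variance-based SUR step on a finite candidate grid: the residual uncertainty is taken to be the total posterior variance, and the next design point is the one whose observation would reduce it the most. For a GP this expected reduction is deterministic, since the posterior covariance does not depend on the observed values. The function name and noise parameter are assumptions for this sketch.

```python
import numpy as np

def sur_next_point(K, noise_var=1e-6):
    """One step of a variance-based SUR rule on a discrete grid.

    K : current GP posterior covariance matrix over candidate points.
    Returns the index of the candidate whose observation minimizes
    the expected residual uncertainty, here the total posterior
    variance trace(K_new).
    """
    n = K.shape[0]
    residual = np.empty(n)
    for i in range(n):
        k_i = K[:, i]
        # Rank-one covariance update after (noisily) observing point i.
        K_new = K - np.outer(k_i, k_i) / (K[i, i] + noise_var)
        residual[i] = np.trace(K_new)
    return int(np.argmin(residual))

# Toy usage: squared-exponential prior covariance on a 1-D grid.
x = np.arange(5.0)
K = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2)
```

On this symmetric grid the rule picks the central point, since observing there maximizes the rank-one variance reduction.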
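The expected improvement criterion discussed in the abstract has a well-known closed form under the GP posterior: with posterior mean m(x), standard deviation s(x) and current best value f*, EI(x) = (m(x) - f*) Φ(z) + s(x) φ(z) with z = (m(x) - f*)/s(x) (maximization convention). The sketch below evaluates this formula pointwise; the function name and the `xi` exploration offset are illustrative conventions, not taken from the paper.

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mean, std, best, xi=0.0):
    """Closed-form expected improvement (maximization) under a
    Gaussian posterior.

    mean, std : GP posterior mean and standard deviation at the
                candidate points.
    best      : best observed function value so far.
    xi        : optional exploration offset (illustrative).
    """
    mean = np.asarray(mean, dtype=float)
    std = np.asarray(std, dtype=float)
    ei = np.zeros_like(mean)
    mask = std > 0
    z = (mean[mask] - best - xi) / std[mask]
    ei[mask] = (mean[mask] - best - xi) * norm.cdf(z) + std[mask] * norm.pdf(z)
    # Where the posterior is deterministic, improvement is its positive part.
    ei[~mask] = np.maximum(mean[~mask] - best - xi, 0.0)
    return ei
```

For example, at a point where the posterior mean equals the current best and the posterior standard deviation is 1, EI reduces to φ(0) ≈ 0.3989.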




