Posterior Variance Analysis of Gaussian Processes with Application to Average Learning Curves

by Armin Lederer, et al.

The posterior variance of Gaussian processes is a valuable measure of the learning error and is exploited in applications such as safe reinforcement learning and control design. However, an analysis of the posterior variance that captures its behavior for both finite and infinitely many training data has been missing. This paper derives a novel bound on the posterior variance function that requires only local information, as it depends solely on the number of training samples in the proximity of a given test point. Furthermore, we prove sufficient conditions that ensure the convergence of the posterior variance to zero. Finally, we demonstrate that the extension of our bound to an average learning curve bound outperforms existing approaches.
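To make the quantity under study concrete: the following is a minimal sketch (not the paper's bound) of the standard GP posterior variance under a squared-exponential kernel, illustrating the local behavior the abstract describes. Placing training points near a test point shrinks the posterior variance there; the kernel, length scale, and noise level below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def se_kernel(a, b, length_scale=0.5):
    # Squared-exponential kernel k(a, b) = exp(-(a - b)^2 / (2 * l^2))
    d = a[:, None] - b[None, :]
    return np.exp(-d**2 / (2 * length_scale**2))

def posterior_variance(x_star, X, noise=1e-2):
    # sigma^2(x*) = k(x*, x*) - k(x*, X) (K + noise * I)^{-1} k(X, x*)
    K = se_kernel(X, X) + noise * np.eye(len(X))
    k_star = se_kernel(np.atleast_1d(x_star), X)
    return 1.0 - (k_star @ np.linalg.solve(K, k_star.T))[0, 0]

# More training samples near the test point -> smaller posterior variance.
x_star = 0.0
far_data = np.array([-2.0, 2.0])               # no samples near x*
near_data = np.array([-0.2, -0.1, 0.1, 0.2])   # samples clustered at x*
v_far = posterior_variance(x_star, far_data)
v_near = posterior_variance(x_star, near_data)
```

Here `v_near` is much smaller than `v_far`, which mirrors the paper's point that the posterior variance at a test point is governed by the amount of training data in its proximity.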



Uniform Error and Posterior Variance Bounds for Gaussian Process Regression with Application to Safe Control

In application areas where data generation is expensive, Gaussian proces...

Sharp Calibrated Gaussian Processes

While Gaussian processes are a mainstay for various engineering and scie...

Posterior and Computational Uncertainty in Gaussian Processes

Gaussian processes scale prohibitively with the size of the dataset. In ...

Upgrading from Gaussian Processes to Student's-T Processes

Gaussian process priors are commonly used in aerospace design for perfor...

Characterizing Deep Gaussian Processes via Nonlinear Recurrence Systems

Recent advances in Deep Gaussian Processes (DGPs) show the potential to ...

The variation of the posterior variance and Bayesian sample size determination

We investigate a new criterion for Bayesian sample size determination th...

Representing Additive Gaussian Processes by Sparse Matrices

Among generalized additive models, additive Matérn Gaussian Processes (G...
