Generic Inference in Latent Gaussian Process Models

09/02/2016
by Edwin V. Bonilla, et al.

We develop an automated variational method for inference in models with Gaussian process (GP) priors and general likelihoods. The method supports multiple outputs and multiple latent functions and does not require detailed knowledge of the conditional likelihood, only needing its evaluation as a black-box function. Using a mixture of Gaussians as the variational distribution, we show that the evidence lower bound and its gradients can be estimated efficiently using empirical expectations over univariate Gaussian distributions. Furthermore, the method scales to large datasets: scalability is achieved by augmenting the prior with the inducing variables underpinning most sparse GP approximations, together with parallel computation and stochastic optimization. We evaluate our method with experiments on small datasets, medium-scale datasets, and a large dataset, showing its competitiveness under different likelihood models and sparsity levels. Moreover, we analyze learning in our model under batch and stochastic settings, and study the effect of optimizing the inducing inputs. Finally, in the large-scale experiment, we investigate the problem of predicting airline delays and show that our method is on par with the state-of-the-art hard-coded approach for scalable GP regression.
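Because the mixture-of-Gaussians posterior makes the expected log-likelihood decompose into univariate Gaussian expectations, the core estimator is easy to sketch. Below is a minimal NumPy illustration, not the authors' implementation: the names (estimate_ell, log_lik) and the Bernoulli-logit likelihood used as the black box are assumptions for this example, and the paper's full method additionally handles multiple latent functions, inducing variables, and gradient estimation.

```python
import numpy as np

def estimate_ell(y, log_lik, weights, means, variances, num_samples=100, rng=None):
    """Monte Carlo estimate of the expected log-likelihood term of the ELBO.

    The variational posterior over each latent value f_n is a mixture of
    univariate Gaussians, so the expectation factorizes across data points
    and mixture components and needs only univariate Gaussian samples.

    y         : (N,) observed outputs
    log_lik   : black-box function log p(y_n | f_n), vectorized over samples
    weights   : (K,) mixture weights
    means     : (K, N) per-component marginal posterior means
    variances : (K, N) per-component marginal posterior variances
    """
    rng = rng or np.random.default_rng()
    K, N = means.shape
    ell = 0.0
    for k in range(K):
        # Draw univariate Gaussian samples for every data point at once.
        eps = rng.standard_normal((num_samples, N))
        f = means[k] + np.sqrt(variances[k]) * eps  # shape (S, N)
        # Black-box likelihood: evaluated only, never differentiated by hand.
        ell += weights[k] * log_lik(y, f).mean(axis=0).sum()
    return ell

# Hypothetical black box: Bernoulli-logit likelihood, log sigmoid((2y-1) f).
log_lik = lambda y, f: -np.log1p(np.exp(-(2 * y - 1) * f))

rng = np.random.default_rng(0)
N, K = 5, 2
y = rng.integers(0, 2, size=N)
weights = np.array([0.6, 0.4])
means = rng.standard_normal((K, N))
variances = np.ones((K, N))
print(estimate_ell(y, log_lik, weights, means, variances, rng=rng))
```

The only thing the estimator asks of the likelihood is pointwise evaluation on sampled latent values, which is what lets the method treat it as a black box; swapping in a Poisson or softmax likelihood would change only the log_lik function.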
