Efficient estimation of divergence-based sensitivity indices with Gaussian process surrogates

04/08/2019
by   A. W. Eggels, et al.

We consider the estimation of sensitivity indices based on divergence measures such as the Kullback-Leibler divergence. For sensitivity analysis of complex models, these divergence-based indices can be estimated by Monte Carlo sampling (MCS) combined with kernel density estimation (KDE). In a direct approach, the complex model must be evaluated at every input point generated by MCS, yielding samples in the joint input-output space from which the densities are estimated. However, when the computational cost of the complex model strongly limits the number of model evaluations, this direct method produces large errors. A recent alternative uses polynomial dimensional decomposition (PDD), which assumes the input variables are independent. To avoid this independence assumption, we propose using Gaussian process (GP) surrogates to enlarge the sample set in the joint input-output space. With a larger sample set, the KDE becomes more accurate, leading to improved index estimates. We investigate two estimators: one that uses only the GP mean, and one that also accounts for the GP prediction variance. We assess the performance of both estimators and demonstrate that they outperform the PDD-based method, with the estimator based on the GP mean performing best.
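As an illustration of the idea, the sketch below estimates a KL-divergence-based sensitivity index (the mutual information between an input and the output) using a GP-mean surrogate: the expensive model is evaluated only at a small training set, the GP mean then supplies outputs for a large MCS sample, and kernel density estimates of the joint and marginal densities feed a plug-in Monte Carlo estimate of the index. The toy model, sample sizes, and kernel choice are illustrative assumptions rather than the paper's setup, and the variance-aware estimator is omitted.

```python
import numpy as np
from scipy.stats import gaussian_kde
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

def model(x):
    # Hypothetical stand-in for the expensive complex model.
    return np.sin(x[:, 0]) + 0.5 * x[:, 1] ** 2

# Small training set: the only points where the expensive model is run.
d = 2
X_train = rng.uniform(-np.pi, np.pi, size=(30, d))
y_train = model(X_train)

gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), normalize_y=True)
gp.fit(X_train, y_train)

# Large MCS sample in input space; outputs come from the cheap GP mean,
# enlarging the sample set in the joint input-output space.
X_mc = rng.uniform(-np.pi, np.pi, size=(5000, d))
y_mc = gp.predict(X_mc)

def kl_sensitivity(xi, y):
    """KL-based index: the mutual information between X_i and Y, estimated
    as a Monte Carlo average of log p(x_i, y) / (p(x_i) p(y)) under KDEs."""
    joint = gaussian_kde(np.vstack([xi, y]))
    p_xi = gaussian_kde(xi)
    p_y = gaussian_kde(y)
    log_ratio = (np.log(joint(np.vstack([xi, y])))
                 - np.log(p_xi(xi)) - np.log(p_y(y)))
    return log_ratio.mean()

for i in range(d):
    print(f"S_{i + 1} = {kl_sensitivity(X_mc[:, i], y_mc):.3f}")
```

The variance-aware estimator described in the abstract would additionally draw output samples from the GP predictive distribution rather than using only its mean; the sketch above corresponds to the GP-mean estimator that the authors report performs best.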
