Penalized Projected Kernel Calibration for Computer Models

03/01/2021
by Yan Wang, et al.

Projected kernel calibration is known to be theoretically superior; we abbreviate its loss function as the PK loss function. In this work, we prove the uniform convergence of the PK loss function and show that (1) when the sample size is large, every local minimum point and every local maximum point of the L_2 loss between the true process and the computer model is a local minimum point of the PK loss function, and (2) all local minimum values of the PK loss function converge to the same value. These theoretical results imply that it is extremely hard for projected kernel calibration to identify the global minimum point of the L_2 loss, which is defined as the optimal value of the calibration parameters. To solve this problem, we propose a frequentist method called penalized projected kernel calibration. The proposed method is proved to be semi-parametric efficient. Moreover, it has a natural Bayesian version, which allows users to compute credible regions for the calibration parameters without resorting to a large-sample approximation. Through extensive simulation studies and a real-world case study, we show that the proposed method accurately estimates the calibration parameters and compares favorably to alternative calibration methods regardless of the sample size.
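To make the setup concrete: in the computer-model calibration literature, the optimal parameter is typically defined as theta* = argmin_theta integral of (zeta(x) - f(x, theta))^2 dx, where zeta is the true process and f is the simulator. The following is a minimal, self-contained sketch of that general idea, not the authors' estimator: it calibrates theta by minimizing an empirical L_2 discrepancy plus a kernel-based penalty on the residual, standing in for the projected-kernel penalty of the paper. The toy simulator, the RBF kernel, the penalty form, and the weight lam are all illustrative assumptions.

    # Sketch of penalized kernel-style calibration (illustrative only;
    # not the paper's exact PK loss or penalty).
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(0)

    def simulator(x, theta):
        # Toy computer model with one input x and one calibration parameter.
        return np.sin(theta * x)

    def true_process(x):
        # "True" physical process generating the field data; note it is
        # not exactly reproducible by the simulator (model discrepancy).
        return np.sin(1.7 * x) + 0.1 * x

    # Noisy field observations.
    n = 30
    x_obs = rng.uniform(0.0, 2 * np.pi, size=n)
    y_obs = true_process(x_obs) + 0.05 * rng.standard_normal(n)

    def rbf_kernel(a, b, ell=1.0):
        # Gaussian kernel matrix K[i, j] = exp(-(a_i - b_j)^2 / (2 ell^2)).
        d = a[:, None] - b[None, :]
        return np.exp(-0.5 * (d / ell) ** 2)

    K = rbf_kernel(x_obs, x_obs)
    K_inv = np.linalg.inv(K + 1e-8 * np.eye(n))  # jitter for stability

    def penalized_loss(theta, lam=0.1):
        # Empirical L_2 discrepancy plus an RKHS-norm-style penalty on the
        # residual; lam trades the two off and is an assumed tuning value.
        r = y_obs - simulator(x_obs, theta)
        l2 = np.mean(r ** 2)
        pen = r @ K_inv @ r / n
        return l2 + lam * pen

    res = minimize(lambda t: penalized_loss(t[0]), x0=np.array([1.0]),
                   method="Nelder-Mead")
    print("estimated theta:", res.x[0])

The penalty term discourages residuals that are smooth with respect to the kernel, which in this toy setting plays a role loosely analogous to the projection step the paper uses to separate the calibration parameters from the discrepancy function.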
