A simple data-driven method to optimise the penalty strengths of penalised models and its application to non-parametric smoothing

06/08/2022
by Jens Thomas, et al.

Information of interest can often only be extracted from data by model fitting. When the functional form of such a model cannot be deduced from first principles, one has to choose between different possible models. A common approach in such cases is to minimise the information loss of the model by reducing the number of fit variables (or, equivalently, the model flexibility) as much as possible while still obtaining an acceptable fit to the data. Model selection via the Akaike Information Criterion (AIC) provides such an implementation of Occam's razor. We argue that the same principles can be applied to optimise the penalty strength of a penalised maximum-likelihood model. However, while the AIC is typically used to choose from a finite, discrete set of maximum-likelihood models, penalty optimisation requires selecting from a continuum of candidate models, and these models violate the maximum-likelihood condition. We derive a generalised information criterion AICp that encompasses this case. It naturally involves the concept of effective free parameters, which is very flexible and can be applied to any model, be it linear or non-linear, parametric or non-parametric, and with or without constraint equations on the parameters. We show that the generalised AICp allows the optimisation of any penalty strength without the need for separate Monte Carlo simulations. As an example application, we discuss the optimisation of the smoothing in non-parametric models, which has many applications in astrophysics, e.g. in dynamical modelling, spectral fitting or gravitational lensing.
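To illustrate the general idea (not the paper's AICp itself), the sketch below applies the classical recipe for a *linear* penalised least-squares smoother: the number of effective free parameters is taken as the trace of the hat matrix H(λ) = X (XᵀX + λP)⁻¹Xᵀ, and the penalty strength λ is chosen by minimising an AIC-like score n·log(RSS/n) + 2·m_eff on a grid. The basis, penalty matrix, and test signal are all illustrative assumptions.

```python
import numpy as np

# Synthetic data: a smooth signal plus Gaussian noise (illustrative only).
rng = np.random.default_rng(0)
n = 200
x = np.linspace(0.0, 1.0, n)
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.3, size=n)

# Well-conditioned cosine basis; the penalty P ~ diag(j^4) suppresses
# high-frequency terms (a roughness penalty for this basis).
K = 15
j = np.arange(K + 1)
X = np.cos(np.outer(x, j) * np.pi)
P = np.diag(j.astype(float) ** 4)

def aic_for_penalty(lam):
    """AIC-like score n*log(RSS/n) + 2*m_eff, with m_eff = tr(H(lam))."""
    A = X.T @ X + lam * P
    H = X @ np.linalg.solve(A, X.T)       # hat matrix of the penalised fit
    resid = y - H @ y
    rss = float(resid @ resid)
    m_eff = float(np.trace(H))            # effective free parameters
    return n * np.log(rss / n) + 2.0 * m_eff, m_eff

# Grid search over the penalty strength.
lams = np.logspace(-6, 4, 60)
scores = [aic_for_penalty(lam)[0] for lam in lams]
best = lams[int(np.argmin(scores))]
print(f"best lambda ~ {best:.3g}, m_eff = {aic_for_penalty(best)[1]:.1f}")
```

The point the abstract makes is that this trade-off, which here needs neither Monte Carlo simulations nor a discrete model set, generalises beyond linear smoothers once the effective-parameter count is defined appropriately.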
