Convergence rates for Penalised Least Squares Estimators in PDE-constrained regression problems

09/24/2018
by Richard Nickl et al.

We consider PDE-constrained nonparametric regression problems in which the parameter f is the unknown coefficient function of a second-order elliptic partial differential operator L_f, and the unique solution u_f of the boundary value problem L_f u = g_1 on O, u = g_2 on ∂O, is observed corrupted by additive Gaussian white noise. Here O is a bounded domain in R^d with smooth boundary ∂O, and g_1, g_2 are given functions defined on O and ∂O, respectively. Concrete examples include L_f u = Δu − 2fu (Schrödinger equation with attenuation potential f) and L_f u = div(f∇u) (divergence-form equation with conductivity f). In both cases the parameter space F = {f ∈ H^α(O) : f > 0}, α > 0, where H^α(O) is the usual order-α Sobolev space, induces a set of non-linearly constrained regression functions {u_f : f ∈ F}. We study Tikhonov-type penalised least squares estimators f̂ for f. The penalty functionals are of squared Sobolev-norm type, so f̂ can also be interpreted as a Bayesian MAP estimator corresponding to a Gaussian process prior. We derive rates of convergence of f̂ to f and of u_f̂ to u_f, and we prove that the rates obtained are minimax-optimal in prediction loss. Our bounds follow from a general convergence rate result for non-linear inverse problems whose forward map satisfies a mild modulus of continuity condition, a result of independent interest that also applies to linear inverse problems and is illustrated in an example involving the Radon transform.
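To make the estimator in the abstract concrete, here is a minimal numerical sketch, not taken from the paper: it discretises the Schrödinger-type example L_f u = u'' − 2fu = g_1 on the one-dimensional domain O = (0, 1) with zero boundary data (g_2 = 0), simulates noisy observations of u_f, and minimises the penalised least squares criterion ||Y − u_f||² + λ||f||², using a discrete H^1 norm as a stand-in for the H^α penalty and a log-reparametrisation to enforce the constraint f > 0. The grid size, right-hand side, noise level, and penalty weight below are all illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

# Grid on O = (0, 1); zero Dirichlet boundary data (g_2 = 0).
n = 50
h = 1.0 / (n + 1)
x = np.linspace(h, 1.0 - h, n)
g1 = -np.ones(n)          # illustrative right-hand side g_1
lam = 1e-3                # penalty weight lambda (assumed)
sigma = 0.05              # noise level (assumed)

def solve_u(f):
    """Finite-difference solve of the forward map: u'' - 2 f u = g_1, u = 0 on the boundary."""
    A = (np.diag(-2.0 / h**2 - 2.0 * f)
         + np.diag(np.ones(n - 1) / h**2, k=1)
         + np.diag(np.ones(n - 1) / h**2, k=-1))
    return np.linalg.solve(A, g1)

# Synthetic data Y = u_{f_0} + white noise, for a positive "true" potential f_0.
f0 = 1.0 + 0.5 * np.sin(2.0 * np.pi * x)
rng = np.random.default_rng(0)
Y = solve_u(f0) + sigma * rng.standard_normal(n)

def objective(theta):
    f = np.exp(theta)     # reparametrise so that f > 0, matching the constraint in F
    fit = h * np.sum((Y - solve_u(f)) ** 2)                         # discrete prediction loss
    pen = lam * (np.sum(np.diff(f) ** 2) / h + h * np.sum(f ** 2))  # discrete ||f||_{H^1}^2 proxy
    return fit + pen

res = minimize(objective, x0=np.zeros(n), method="L-BFGS-B")
f_hat = np.exp(res.x)
print("relative L2 error of f_hat:", np.linalg.norm(f_hat - f0) / np.linalg.norm(f0))
```

The log-reparametrisation is only one simple way to respect the positivity constraint in F; the point of the sketch is the structure of the criterion, a data-fit term in prediction loss plus a squared Sobolev-norm penalty, which is exactly what gives the estimator its MAP interpretation under a Gaussian process prior.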
