Quasi-convergence of an implementation of optimal balance by backward-forward nudging

06/27/2022
by G. Tuba Masur, et al.

Optimal balance is a non-asymptotic numerical method for computing a point on the slow manifold of certain two-scale dynamical systems. It works by solving a modified version of the system as a boundary value problem in time, in which the nonlinear terms are adiabatically ramped up from zero to the fully nonlinear dynamics. A dedicated boundary value solver, however, is often not directly available. The most natural alternative is a nudging solver, in which the problem is repeatedly solved forward and backward in time and the respective boundary conditions are restored whenever one of the temporal endpoints is visited. In this paper, we show quasi-convergence of this scheme in the sense that the termination residual of the nudging iteration is as small as the asymptotic error of the method itself, i.e., exponentially small under appropriate assumptions. This confirms that optimal balance in its nudging formulation is an effective algorithm. Further, it shows that the boundary value problem formulation of optimal balance is well posed up to a residual error no larger than the asymptotic error of the method itself. The key step in our proof is a careful two-component Gronwall inequality.
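The nudging scheme described in the abstract alternates forward and backward sweeps over the ramp interval, restoring the slow-variable boundary condition at the nonlinear end and the linear-balance condition at the linear end. The following is a minimal sketch of that iteration on a hypothetical three-dimensional fast-slow toy system; the model, the exponential ramp function, and the parameter values (eps, T, x_star, the tolerance) are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of optimal balance by backward-forward nudging on a
# hypothetical toy system (assumed for illustration, not from the paper):
#   slow variable x, fast pair (y1, y2) rotating at frequency 1/eps,
#   with a nonlinear coupling scaled by the ramp rho from 0 to 1.

import numpy as np
from scipy.integrate import solve_ivp

eps = 0.05      # time-scale separation parameter (assumed value)
T = 1.0         # ramp time (assumed value)
x_star = 1.0    # prescribed slow component at the nonlinear end t = T

def rho(t):
    """Smooth ramp with vanishing derivatives at both endpoints (assumed form)."""
    s = min(max(t / T, 0.0), 1.0)
    if s <= 0.0:
        return 0.0
    if s >= 1.0:
        return 1.0
    a, b = np.exp(-1.0 / s), np.exp(-1.0 / (1.0 - s))
    return a / (a + b)

def rhs(t, z):
    x, y1, y2 = z
    r = rho(t)
    # linear fast rotation plus ramped nonlinear coupling (toy model)
    return [r * y1 * x,
            -y2 / eps + r * x**2,
            y1 / eps]

def forward(z0):
    return solve_ivp(rhs, (0.0, T), z0, rtol=1e-10, atol=1e-12).y[:, -1]

def backward(zT):
    return solve_ivp(rhs, (T, 0.0), zT, rtol=1e-10, atol=1e-12).y[:, -1]

# Backward-forward nudging: alternate sweeps, restoring the boundary
# condition at whichever temporal endpoint is visited.
z0 = np.array([x_star, 0.0, 0.0])      # linear balance: fast part zero at t = 0
for k in range(50):
    zT = forward(z0)
    zT[0] = x_star                     # restore slow boundary condition at t = T
    z0 = backward(zT)
    residual = np.hypot(z0[1], z0[2])  # fast part should vanish at t = 0
    z0[1:] = 0.0                       # restore linear-balance condition at t = 0
    if residual < 1e-8:                # termination tolerance (assumed)
        break

print(f"iterations: {k + 1}, terminal residual: {residual:.2e}")
print("balanced state at t = T:", zT)
```

The residual monitored here is the size of the fast component at the linear end after a backward sweep. Quasi-convergence in the sense of the paper means this termination residual need not decay to zero but stagnates at a level comparable to the asymptotic error of the method, i.e., exponentially small under the paper's assumptions.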
