Error analysis for small-sample, high-variance data: Cautions for bootstrapping and Bayesian bootstrapping
Recent advances in molecular simulations allow the direct evaluation of kinetic parameters such as rate constants for protein folding or unfolding. However, these calculations are usually computationally expensive, and even significant computing resources may yield only a small number of independent rate estimates spread over many orders of magnitude. Such small, high-variance samples are not readily amenable to analysis using the standard uncertainty ("standard error of the mean"), because unphysical negative limits of confidence intervals result. Bootstrapping, a natural alternative guaranteed to yield a confidence interval within the minimum and maximum values, also exhibits a striking systematic bias of the lower confidence limit. As we show, bootstrapping artifactually assigns high probability to improbably low mean values. A second alternative, the Bayesian bootstrap strategy, does not suffer from the same deficit and is more logically consistent with the type of confidence interval desired, but must nevertheless be used with caution. Neither standard nor Bayesian bootstrapping can overcome the intrinsic challenge of underestimating the mean from small, high-variance samples. Our report is based on extensive re-analysis of multiple estimates for rate constants obtained from independent atomistic simulations. Although we only analyze rate constants, similar considerations may apply to other types of high-variance calculations, such as those arising in highly non-linear averages like the Jarzynski relation.
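The two resampling schemes contrasted above can be sketched in a few lines. The following is a minimal illustration, not the paper's analysis: the hypothetical "rate estimates" are synthetic log-normal draws standing in for a small, high-variance sample, and the 95% interval level is an arbitrary choice. The standard bootstrap resamples observations with replacement; the Bayesian bootstrap instead draws Dirichlet(1, ..., 1) weights over the observed values, so every resampled mean is a smooth convex combination of the data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical small, high-variance sample: 10 "rate estimates" spread
# over orders of magnitude (log-normal draws stand in for real data).
sample = rng.lognormal(mean=0.0, sigma=3.0, size=10)
n = len(sample)
n_boot = 10_000

# Standard (percentile) bootstrap: resample with replacement, take the mean.
boot_means = np.array([
    rng.choice(sample, size=n, replace=True).mean() for _ in range(n_boot)
])

# Bayesian bootstrap: reweight the observed values with Dirichlet(1, ..., 1)
# weights rather than integer resampling counts.
weights = rng.dirichlet(np.ones(n), size=n_boot)
bayes_means = weights @ sample

def ci(x, level=0.95):
    """Percentile confidence interval of the resampled means."""
    lo, hi = np.percentile(x, [100 * (1 - level) / 2, 100 * (1 + level) / 2])
    return lo, hi

print("sample mean:          ", sample.mean())
print("bootstrap 95% CI:     ", ci(boot_means))
print("Bayesian boot 95% CI: ", ci(bayes_means))
```

Both intervals necessarily fall inside the sample's min-max range, which illustrates the abstract's point: a confidence limit bounded by the observed values cannot correct a mean that the small sample itself underestimates.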