Sensitivity analysis in longitudinal clinical trials via distributional imputation
Missing data are inevitable in longitudinal clinical trials. Conventionally, missingness is handled under the missing at random assumption, which, however, is empirically unverifiable. Sensitivity analysis is therefore critically important for assessing the robustness of study conclusions against such untestable assumptions. Toward this end, regulatory agencies often request imputation models such as return-to-baseline, control-based, and washout imputation. Multiple imputation is popular in sensitivity analysis; however, it can be inefficient, and Rubin's combining rule may yield unsatisfactory interval estimates. We propose distributional imputation (DI) for sensitivity analysis, which imputes each missing value with multiple samples from its target imputation model given the observed data. Drawing on the idea of Monte Carlo integration, the DI estimator solves the mean estimating equations over the imputed dataset and is fully efficient, with theoretical guarantees. Moreover, we propose a weighted bootstrap to obtain a consistent variance estimator that accounts for the variability due to both imputation-model parameter estimation and target parameter estimation. The finite-sample performance of DI inference is assessed in a simulation study. We apply the proposed framework to an antidepressant longitudinal clinical trial involving missing data to investigate the robustness of the treatment effect. The proposed DI approach detects a statistically significant treatment effect in both the primary analysis and the sensitivity analysis under certain prespecified sensitivity models, in terms of the average treatment effect, the risk difference, and the quantile treatment effect at lower quantiles of the responses, supporting the benefit of the test drug for treating depression.
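To make the DI idea concrete, the following is a minimal sketch on a toy two-visit problem, not the paper's implementation: the target parameter is the mean of an outcome that is missing at random given a baseline covariate, each missing value receives many Monte Carlo draws from a fitted conditional-normal imputation model, the mean estimating equation is solved on the imputed data, and a weighted bootstrap with unit-mean exponential weights reruns the entire procedure to capture both the imputation-model and target-parameter variability. All variable names and the linear-normal working model are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: outcome y2 depends linearly on baseline y1; y2 is MAR-missing
# with probability depending only on the observed y1.
n = 2000
y1 = rng.normal(0.0, 1.0, n)
y2 = 1.0 + 0.5 * y1 + rng.normal(0.0, 1.0, n)
miss = rng.random(n) < 1.0 / (1.0 + np.exp(-(y1 - 0.5)))
y2_obs = np.where(miss, np.nan, y2)


def di_estimate(y1, y2_obs, m=200, weights=None, rng=None):
    """Distributional imputation of mean(y2): fit a (weighted) linear-normal
    imputation model on completers, draw m conditional samples per missing
    value, and solve the mean estimating equation on the imputed data
    (for a mean, this reduces to a weighted average over draws)."""
    rng = np.random.default_rng() if rng is None else rng
    w = np.ones(len(y1)) if weights is None else weights
    obs = ~np.isnan(y2_obs)

    # Weighted least squares fit of y2 = b0 + b1*y1 + eps on completers.
    X = np.column_stack([np.ones(obs.sum()), y1[obs]])
    wl = w[obs]
    beta = np.linalg.solve(X.T @ (wl[:, None] * X), X.T @ (wl * y2_obs[obs]))
    resid = y2_obs[obs] - X @ beta
    sigma = np.sqrt(np.sum(wl * resid**2) / wl.sum())

    # Monte Carlo integration: m conditional draws per missing value.
    mu_mis = beta[0] + beta[1] * y1[~obs]
    draws = mu_mis[:, None] + sigma * rng.normal(size=(mu_mis.size, m))
    y2_filled = y2_obs.copy()
    y2_filled[~obs] = draws.mean(axis=1)
    return np.sum(w * y2_filled) / w.sum()


point = di_estimate(y1, y2_obs, rng=rng)

# Weighted bootstrap: exponential(1) weights, re-running the full procedure
# so that imputation-model refitting is reflected in the variance estimate.
boot = [di_estimate(y1, y2_obs, weights=rng.exponential(1.0, n), rng=rng)
        for _ in range(200)]
se = np.std(boot, ddof=1)
```

Under this data-generating model the true mean of `y2` is 1.0, so `point` should land close to it despite roughly half the outcomes being missing; replacing the conditional-normal draws with draws from a return-to-baseline or control-based model is what turns the same machinery into a sensitivity analysis.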