Augmented Outcome-weighted Learning for Optimal Treatment Regimes

11/29/2017
by Xin Zhou, et al.

Precision medicine is of considerable interest to clinical, academic and regulatory parties. The key to precision medicine is the optimal treatment regime. Recently, Zhou et al. (2017) developed residual weighted learning (RWL) to construct optimal regimes that directly optimize the clinical outcome. However, this method involves computationally intensive non-convex optimization, which cannot guarantee a global solution, and it is not fully semiparametrically efficient. In this article, we propose augmented outcome-weighted learning (AOL). The method is built on a doubly robust augmented inverse probability weighted estimator, and hence constructs semiparametrically efficient regimes. Our proposed AOL is closely related to RWL. The weights are obtained from counterfactual residuals; negative residuals are reflected to positive values, and their treatment assignments are accordingly switched to the opposites. Convex loss functions can thus be applied to guarantee a global solution and to reduce computation. We show that AOL is universally consistent, i.e., the regime estimated by AOL converges to the Bayes regime as the sample size approaches infinity, without requiring any specifics of the distribution of the data. We also propose variable selection methods for linear and nonlinear regimes, respectively, to further improve performance. The performance of the proposed AOL methods is illustrated in simulation studies and in an analysis of the Nefazodone-CBASP clinical trial data.
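The abstract's reflection-and-flip weighting idea can be illustrated with a minimal sketch. This is not the authors' implementation: the function name `aol_fit`, the plug-in residual formula, the randomized-trial propensity of 0.5, and the choice of a linear-kernel weighted SVM as the convex-loss classifier are all illustrative assumptions, shown only to make the mechanism concrete.

```python
import numpy as np
from sklearn.svm import SVC

def aol_fit(X, A, R, prop_score, m_hat, C=1.0):
    """Sketch of the AOL weighting idea: form residuals from an augmented
    inverse-probability-weighted (doubly robust) pseudo-outcome, reflect
    negative residuals and flip their treatment labels, then solve a
    weighted classification problem with a convex (hinge) loss.

    X          : (n, p) covariates
    A          : (n,) treatments coded as +1 / -1
    R          : (n,) observed clinical outcomes (larger is better)
    prop_score : (n,) estimated propensity of the treatment actually received
    m_hat      : (n,) estimated conditional mean outcome used for augmentation
    """
    # Residual of the observed outcome against the augmentation model,
    # scaled by the inverse propensity of the received treatment.
    resid = (R - m_hat) / prop_score

    # Reflection trick: a negative residual is made positive and the
    # corresponding treatment label is switched to its opposite, so all
    # classification weights are nonnegative.
    labels = np.where(resid >= 0, A, -A)
    weights = np.abs(resid)

    # Weighted SVM: the hinge loss is convex, so the optimizer reaches a
    # global solution (unlike the non-convex RWL objective).
    clf = SVC(kernel="linear", C=C)
    clf.fit(X, labels, sample_weight=weights)
    return clf

# Toy usage on simulated data (purely illustrative).
rng = np.random.default_rng(0)
n, p = 200, 5
X = rng.normal(size=(n, p))
A = rng.choice([-1, 1], size=n)
R = X[:, 0] * A + rng.normal(size=n)   # treatment benefit depends on X[:, 0]
prop = np.full(n, 0.5)                 # randomized trial: known propensity
m_hat = np.zeros(n)                    # crude augmentation model
regime = aol_fit(X, A, R, prop, m_hat)
print(regime.predict(X[:5]))           # recommended treatments for 5 patients
```

In this sketch, predicting +1 or -1 from the fitted classifier plays the role of the estimated treatment regime; in practice the propensity and augmentation models would be estimated from the data.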
