On Post-Selection Inference in A/B Tests

10/09/2019
by   Alex Deng, et al.
Microsoft

When a large number of statistical inferences are conducted simultaneously, unbiased estimators become biased once we purposefully select a subset of results, based on certain selection criteria, to draw conclusions from. This is common in A/B testing when there are many metrics and segments to choose from and only statistically significant results are considered. This paper proposes two different approaches, one based on supervised learning techniques and the other based on empirical Bayes. We claim these two views can be unified, and we conduct a large-scale simulation and empirical study to benchmark our proposals against existing methods. Results show our methods yield substantial improvements in both point estimation and confidence interval coverage.
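The selection bias the abstract describes (sometimes called the winner's curse) is easy to reproduce in simulation. The sketch below is illustrative only and is not the paper's method: it draws many true metric effects from a normal prior, adds sampling noise, keeps only the statistically significant results, and compares the bias of the naive estimates with a simple empirical-Bayes shrinkage toward zero. All constants (1000 metrics, prior SD 0.1, 2000 users per variant, |z| > 1.96) are arbitrary assumptions for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, tau = 1000, 2000, 0.1                 # metrics, users per variant, prior SD of true effects
true_effects = rng.normal(0.0, tau, m)
se = np.sqrt(2.0 / n)                        # std. error of a difference in means (unit-variance outcomes)
observed = true_effects + rng.normal(0.0, se, m)

# Post-selection: keep only "statistically significant" metrics
selected = np.abs(observed / se) > 1.96
bias_naive = np.mean(observed[selected] - true_effects[selected])

# Empirical-Bayes shrinkage toward zero (assumes a normal prior; illustrative only)
tau2_hat = max(np.var(observed) - se**2, 1e-8)   # method-of-moments estimate of prior variance
shrunk = observed * tau2_hat / (tau2_hat + se**2)
bias_eb = np.mean(shrunk[selected] - true_effects[selected])

print(f"naive bias among selected results:  {bias_naive:+.4f}")
print(f"EB-shrunk bias among selected:      {bias_eb:+.4f}")
```

Running this shows the naive estimates of the selected (significant) metrics are biased away from zero, while the shrunken estimates are markedly closer to the true effects, which is the kind of improvement in point estimation the abstract refers to.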
