
On Post-Selection Inference in A/B Tests

by Alex Deng, et al.

When a large number of simultaneous statistical inferences are conducted, unbiased estimators become biased if we purposefully select a subset of results to draw conclusions from based on certain selection criteria. This is common in A/B testing, where there are many metrics and segments to choose from and only statistically significant results are considered. This paper proposes two approaches, one based on supervised learning techniques and the other on empirical Bayes. We claim these two views can be unified, and we conduct large-scale simulations and an empirical study to benchmark our proposals against existing methods. Results show our methods substantially improve both point estimation and confidence interval coverage.
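The selection bias the abstract describes (sometimes called the winner's curse) is easy to see in a small simulation: estimates that clear a significance threshold systematically overstate their true effects. The sketch below illustrates this, together with a generic James-Stein-style empirical-Bayes shrinkage under an assumed normal prior; it is a minimal illustration of the phenomenon, not the paper's own estimator.

```python
import numpy as np

rng = np.random.default_rng(0)
m = 20_000    # number of metric/segment combinations tested (assumed)
sigma = 1.0   # known standard error of each estimate (assumed)

# Sparse-signal setting: most true effects are zero, a few are real.
true = np.where(rng.random(m) < 0.1, rng.normal(0.0, 0.5, m), 0.0)
obs = true + rng.normal(0.0, sigma, m)  # marginally unbiased estimates

# Naive analyst: report only results significant in the positive direction.
sel = obs / sigma > 1.96
bias_naive = np.mean(obs[sel] - true[sel])

# Empirical-Bayes shrinkage: estimate the prior variance from all m
# estimates by method of moments, then shrink toward the grand mean (0).
tau2 = max(np.var(obs) - sigma**2, 0.0)
eb = (tau2 / (tau2 + sigma**2)) * obs
bias_eb = np.mean(eb[sel] - true[sel])

print(f"naive bias on selected results: {bias_naive:+.3f}")
print(f"EB-shrunk bias on selected:     {bias_eb:+.3f}")
```

Because shrinkage is fit on all m estimates rather than only the selected ones, it pulls the mostly-null significant results back toward zero and sharply reduces the post-selection bias, at the cost of some misspecification when the true effect distribution is not normal.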
