Tests and estimation strategies associated to some loss functions

03/27/2020
by Yannick Baraud, et al.

We consider the problem of estimating the joint distribution of n independent random variables. Our approach is based on a family of candidate probabilities, which we call a model, chosen either to contain the true distribution of the data or at least to provide a good approximation of it with respect to some loss function. The aim of the present paper is to describe a general estimation strategy that adapts to both the specific features of the model and the choice of the loss function, in order to design an estimator with good estimation properties. The losses we have in mind are based on the total variation, Hellinger, Wasserstein and L_p distances, to name a few. We show that the risk of the resulting estimator with respect to the loss function can be bounded by the sum of an approximation term, accounting for the loss between the true distribution and the model, and a complexity term corresponding to the bound we would obtain if the true distribution did belong to the model. Our results hold under mild assumptions on the true distribution of the data and rely on exponential deviation inequalities that are non-asymptotic and involve explicit constants. When the model reduces to two distinct probabilities, we show how our estimation strategy yields a robust test whose errors of the first and second kinds depend only on the losses between the true distribution and the two tested probabilities.
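As a purely illustrative aside (not the paper's estimation procedure), two of the loss functions mentioned in the abstract, total variation and Hellinger, are straightforward to compute for discrete distributions on a common finite support. The probability vectors `p` and `q` below are hypothetical, and the final assertion checks the standard inequality relating the two distances:

```python
import numpy as np

def total_variation(p, q):
    """TV(p, q) = (1/2) * sum_i |p_i - q_i|, taking values in [0, 1]."""
    return 0.5 * np.abs(p - q).sum()

def hellinger(p, q):
    """h(p, q) = sqrt((1/2) * sum_i (sqrt(p_i) - sqrt(q_i))^2), in [0, 1]."""
    return np.sqrt(0.5 * ((np.sqrt(p) - np.sqrt(q)) ** 2).sum())

# Hypothetical probability vectors on a 3-point support
p = np.array([0.2, 0.5, 0.3])
q = np.array([0.3, 0.4, 0.3])

tv = total_variation(p, q)  # 0.1
h = hellinger(p, q)

# Classical comparison between the two distances: h^2 <= TV <= h * sqrt(2 - h^2)
assert h**2 <= tv <= h * np.sqrt(2 - h**2)
print(f"TV = {tv:.3f}, Hellinger = {h:.3f}")
```

The inequality checked at the end explains why risk bounds stated for one of these losses often transfer, up to constants, to the other.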
