Finite-sample risk bounds for maximum likelihood estimation with arbitrary penalties

12/29/2017
by W. D. Brinda, et al.

The MDL two-part coding index of resolvability provides a finite-sample upper bound on the statistical risk of penalized likelihood estimators over countable models. However, the bound does not apply to unpenalized maximum likelihood estimation or to procedures with exceedingly small penalties. In this paper, we point out a more general inequality that holds for arbitrary penalties. In addition, this approach makes it possible to derive exact risk bounds of order 1/n for iid parametric models, which improves on the order-(log n)/n resolvability bounds. We conclude by discussing implications for adaptive estimation.
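For context, a commonly cited form of the two-part-code resolvability bound (in the spirit of Barron and Cover, and of Li and Barron) is sketched below. This is an assumption about the standard setup, not the paper's own statement; the divergence used and the exact constants may differ in the paper.

For iid data X^n = (X_1, ..., X_n) drawn from p^*, a countable model class \Gamma, and a penalty satisfying the Kraft-type condition \sum_{q \in \Gamma} e^{-\mathrm{pen}_n(q)} \le 1, the penalized likelihood estimator

    \hat{q} = \arg\min_{q \in \Gamma} \{ -\log q(X^n) + \mathrm{pen}_n(q) \}

satisfies the risk bound

    \mathbb{E}\, d(p^*, \hat{q}) \;\le\; \min_{q \in \Gamma} \Big\{ D(p^* \,\|\, q) + \frac{\mathrm{pen}_n(q)}{n} \Big\},

where d(p, q) = -\log \int \sqrt{p\, q} is the Bhattacharyya divergence and the right-hand side is the index of resolvability. For a smooth d-dimensional parametric family discretized at the natural resolution, the right-hand side is typically of order (d/2)(log n)/n, which is the (log n)/n rate the abstract refers to; the bound above also requires a nontrivial penalty, which is the restriction the paper's more general inequality removes.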


Related research

12/06/2019 · A new inequality for maximum likelihood estimation in statistical models with latent variables
Maximum-likelihood estimation (MLE) is arguably the most important tool ...

08/02/2022 · Maximum pseudo-likelihood estimation in copula models for small weakly dependent samples
Maximum pseudo-likelihood (MPL) is a semiparametric estimation method of...

04/11/2023 · A Data-Driven State Aggregation Approach for Dynamic Discrete Choice Models
We study dynamic discrete choice models, where a commonly studied proble...

11/21/2010 · Stochastic blockmodels with growing number of classes
We present asymptotic and finite-sample results on the use of stochastic...

08/29/2020 · Statistical Analysis of Multi-Relational Network Recovery
In this paper, we develop asymptotic theories for a class of latent vari...

03/17/2020 · A Unified View of Label Shift Estimation
Label shift describes the setting where although the label distribution ...

09/24/2018 · Implicit Maximum Likelihood Estimation
Implicit probabilistic models are models defined naturally in terms of a...
