Admissible Decision Rule

What is an Admissible Decision Rule?

An admissible decision rule, in the context of statistical decision theory, is a rule or strategy that is not dominated by any other decision rule with respect to a given loss function. A rule is dominated when some alternative performs at least as well in every situation and strictly better in at least one. If such a superior alternative exists, the original rule is considered inadmissible, since switching to the alternative can never hurt and sometimes helps.

Understanding Decision Rules

Decision rules are fundamental to decision theory, which is concerned with identifying the values, uncertainties, and other issues relevant in a given decision, its rationality, and the resulting optimal decision. In statistical decision theory, a decision rule is a function that maps an observation (or a set of observations) to an appropriate action. Decision rules are used in a variety of statistical applications, including hypothesis testing, parameter estimation, and selection of the best course of action in business and economics.
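To make the idea concrete, here is a minimal sketch of a decision rule as a function from data to an action. The setup is an assumed, hypothetical one: a one-sided test of H0: mu <= 0 against H1: mu > 0 for a normal sample with known unit variance, rejecting when the standardized sample mean exceeds the usual 5% critical value.

```python
import math

def decision_rule(observations, critical_value=1.645):
    """Map a sample to the action 'reject' or 'accept'.

    Hypothetical rule: reject H0: mu <= 0 when the standardized
    sample mean exceeds `critical_value` (1.645 for a one-sided
    5% test with known variance 1).
    """
    n = len(observations)
    z = (sum(observations) / n) * math.sqrt(n)  # standardized mean under H0
    return "reject" if z > critical_value else "accept"

print(decision_rule([0.1, -0.2, 0.05]))  # small sample mean -> "accept"
print(decision_rule([2.0, 1.5, 2.5]))    # large sample mean -> "reject"
```

The rule itself is just the mapping from observations to actions; the loss function and risk enter only when we ask how good the rule is.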

Criteria for Admissibility

The concept of admissibility is closely related to the idea of a loss function, which quantifies the cost of taking an action when it deviates from the true state of nature. A decision rule is said to be admissible if no other rule achieves an expected loss (or risk) that is at least as small in every state of nature and strictly smaller in at least one. The expected loss is calculated by averaging the loss function over the sampling distribution of the data.

For a decision rule to be admissible, it must not be uniformly worse than (dominated by) another rule. Uniformly worse means that for every possible parameter value, the expected loss of the inadmissible rule is greater than or equal to that of the competing rule, with strict inequality for at least one parameter value.
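The domination check above can be carried out numerically. The following sketch uses an assumed example: estimating a Bernoulli parameter p from n = 10 coin flips under squared-error loss, comparing the sample proportion (the MLE) with a shrinkage rule (X + 1)/(n + 2), the Bayes estimator under a uniform prior. The exact risk of each rule is computed on a grid of p values.

```python
from math import comb

n = 10

def mle(x):
    return x / n               # sample proportion (the MLE)

def shrink(x):
    return (x + 1) / (n + 2)   # shrinkage toward 1/2 (Bayes, uniform prior)

def risk(rule, p):
    """Exact expected squared-error loss when the true parameter is p."""
    return sum(comb(n, x) * p**x * (1 - p)**(n - x) * (rule(x) - p)**2
               for x in range(n + 1))

grid = [i / 100 for i in range(101)]
r_mle = [risk(mle, p) for p in grid]
r_shrink = [risk(shrink, p) for p in grid]

def dominates(a, b, tol=1e-12):
    """`a` dominates `b`: never worse, strictly better somewhere."""
    return (all(x <= y + tol for x, y in zip(a, b))
            and any(x < y - tol for x, y in zip(a, b)))

print(dominates(r_shrink, r_mle))  # False: the risk curves cross
print(dominates(r_mle, r_shrink))  # False: neither rule is uniformly worse
```

Here the shrinkage rule has lower risk near p = 1/2 and the MLE has lower risk near the endpoints, so neither dominates the other; a crossing of risk curves like this is consistent with both rules being admissible.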

Examples of Admissible Decision Rules

One classic example of an admissible decision rule is the sample mean as an estimator of the mean of a univariate normal distribution under squared-error loss. It is worth noting that good asymptotic behavior does not settle the question: the maximum likelihood estimator (MLE) is asymptotically efficient under regularity conditions, meaning that as the sample size grows its variance approaches the lowest achievable among unbiased estimators, yet asymptotic efficiency alone does not guarantee admissibility in finite samples, as the next example shows.

Another example arises in estimating the mean of a multivariate normal distribution under squared-error loss. The sample mean is admissible in one or two dimensions, but Stein's paradox shows it is inadmissible in three or more: the James-Stein estimator, a shrinkage estimator that pulls the observations toward a fixed point, achieves strictly lower risk for every value of the mean vector.
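The Stein effect is easy to see by simulation. The sketch below (illustrative, not a proof) draws a single observation X ~ N(theta, I) in d = 10 dimensions with an assumed true mean vector theta, and compares the average squared-error loss of X itself against the positive-part James-Stein estimator that shrinks X toward the origin.

```python
import random

random.seed(0)
d = 10                 # dimension (>= 3 for the Stein effect)
theta = [1.0] * d      # assumed true mean vector for the experiment
trials = 20000

def sq_loss(est, truth):
    return sum((e - t) ** 2 for e, t in zip(est, truth))

loss_raw = loss_js = 0.0
for _ in range(trials):
    x = [random.gauss(t, 1.0) for t in theta]   # one draw X ~ N(theta, I)
    norm2 = sum(xi * xi for xi in x)
    factor = max(0.0, 1 - (d - 2) / norm2)      # positive-part James-Stein shrinkage
    js = [factor * xi for xi in x]
    loss_raw += sq_loss(x, theta)
    loss_js += sq_loss(js, theta)

print(loss_raw / trials)  # close to d = 10, the risk of X itself
print(loss_js / trials)   # noticeably smaller: X is dominated
```

Repeating the experiment with other choices of theta shows the same ordering, which is exactly what domination requires.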

Practical Implications of Admissibility

In practice, the concept of admissibility helps statisticians and decision-makers avoid using decision rules that are suboptimal. It provides a theoretical foundation for comparing different statistical procedures and choosing the one that minimizes the potential loss. However, the determination of admissibility can be complex, especially in high-dimensional problems or under non-standard conditions.

It's also important to note that admissibility does not guarantee that a decision rule is the best in a practical sense. There may be multiple admissible rules, and the choice among them may depend on other considerations, such as computational efficiency, robustness, or prior information about the parameter being estimated.


Conclusion

Admissible decision rules play a critical role in statistical decision theory by ensuring that the chosen rule is not uniformly worse than any alternative. While the concept is rooted in theoretical considerations, it has practical applications in guiding the selection of statistical methods that minimize the expected loss in decision-making processes. However, the mere admissibility of a decision rule does not necessarily make it the most appropriate choice in all practical scenarios, and other factors must often be considered.
