Some Results on Tighter Bayesian Lower Bounds on the Mean-Square Error
In random parameter estimation, Bayesian lower bounds (BLBs) on the mean-square error have been observed not to be tight in a number of cases, even when the sample size or the signal-to-noise ratio grows to infinity. In this paper, we study alternative forms of BLBs obtained from a covariance inequality in which the inner product is based on the a posteriori rather than the joint probability density function. We thereby obtain a family of BLBs that forms a counterpart at least as tight as the well-known Weiss-Weinstein family, and we extend it to the general case of vector parameter estimation. Conditions for equality between the two families are provided. Focusing on the Bayesian Cramér-Rao bound (BCRB), a definition of efficiency is proposed relative to its tighter form, and efficient estimators are described for several common estimation problems, e.g., scalar parameter estimation and exponential family model parameter estimation. Finally, an example is provided for which the classical BCRB is known not to be tight, while its tighter form is shown to be, based on formal proofs of asymptotic efficiency of Bayesian estimators. This analysis is corroborated by numerical results.
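For context, the following is a minimal sketch (in our own notation, not taken from the paper) of the classical covariance-inequality construction from which BLBs such as the BCRB arise; the tighter variants studied in the paper replace the joint density p(x, θ) below with the posterior p(θ | x) in the underlying inner product. For a scalar parameter θ, an estimator θ̂(x), and any function ψ(x, θ), the Cauchy-Schwarz inequality (with expectations taken over the joint density p(x, θ)) gives

\[
  \mathbb{E}\big[(\hat{\theta}(x)-\theta)^{2}\big]
  \;\ge\;
  \frac{\mathbb{E}\big[(\hat{\theta}(x)-\theta)\,\psi(x,\theta)\big]^{2}}
       {\mathbb{E}\big[\psi(x,\theta)^{2}\big]},
\]

and the particular choice \(\psi(x,\theta)=\partial\log p(x,\theta)/\partial\theta\), under standard regularity conditions, yields the classical BCRB:

\[
  \mathbb{E}\big[(\hat{\theta}(x)-\theta)^{2}\big]
  \;\ge\;
  \Big(\mathbb{E}\big[\big(\partial\log p(x,\theta)/\partial\theta\big)^{2}\big]\Big)^{-1}.
\]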