Normalized Power Prior Bayesian Analysis
The elicitation of power priors, based on the availability of historical data, is realized by raising the likelihood function of the historical data to a fractional power δ, which quantifies the degree of discounting of the historical information when making inference with the current data. When δ is not pre-specified and is treated as random, it can be estimated from the data through the Bayesian updating paradigm. However, in the original form of the joint power prior Bayesian approach, the likelihood of the historical data may be multiplied by different positive constants depending on which settings of sufficient statistics are employed. Because the resulting power priors change with these constants, the likelihood principle is violated. In this article, we investigate the normalized power prior, a modified form of the joint power prior that obeys the likelihood principle. The optimality properties of the normalized power prior, in the sense of minimizing a weighted Kullback-Leibler divergence, are investigated. By examining the posteriors of several commonly used distributions, we show that under the normalized power prior the discrepancy between the historical and the current data is well quantified by the power parameter. Efficient algorithms to compute the scale factor are also proposed. In addition, we illustrate normalized power prior Bayesian analysis with three data examples and provide an implementation in the R package NPP.
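For reference, the construction described above can be written as follows, with θ the model parameter, D and D0 the current and historical data, and π0(·) the initial priors. The joint power prior with random δ takes π(θ, δ | D0) ∝ L(θ | D0)^δ π0(θ) π0(δ), whereas the normalized power prior divides the θ-part by its scale factor,

  π(θ, δ | D0) = [ L(θ | D0)^δ π0(θ) / C(δ) ] π0(δ),   where C(δ) = ∫ L(θ | D0)^δ π0(θ) dθ,

so that multiplying the historical-data likelihood by a positive constant leaves the prior, and hence the posterior, unchanged.

The sketch below illustrates this in the conjugate Beta-Bernoulli case, where C(δ) has a closed form. It is a minimal illustration, not the paper's method of computation or the NPP package interface; the data values and the uniform prior on δ are assumptions made for this example.

  # Minimal R sketch of the normalized power prior for Bernoulli data with a
  # Beta(a0, b0) initial prior; all numbers below are assumed for illustration.
  a0 <- 1; b0 <- 1             # initial Beta prior on the success probability theta
  y0 <- 37; n0 <- 50           # historical successes / trials (assumed)
  y  <- 12; n  <- 30           # current successes / trials (assumed)

  # For fixed delta, the normalized power prior on theta is
  # Beta(a0 + delta * y0, b0 + delta * (n0 - y0)); the scale factor is
  # C(delta) = B(a0 + delta * y0, b0 + delta * (n0 - y0)) / B(a0, b0).

  # Marginal posterior of delta (up to a constant) under a uniform prior on delta:
  log_post_delta <- function(delta) {
    lbeta(a0 + delta * y0 + y, b0 + delta * (n0 - y0) + n - y) -
      lbeta(a0 + delta * y0, b0 + delta * (n0 - y0))
  }

  # Grid evaluation and numerical normalization.
  delta_grid <- seq(0.001, 1, length.out = 1000)
  lp <- log_post_delta(delta_grid)
  w  <- exp(lp - max(lp))
  w  <- w / sum(w)

  # Posterior mean of delta: values near 1 suggest the historical and current
  # data agree, values near 0 suggest conflict.
  sum(delta_grid * w)

In this conjugate setting the posterior mean of δ shrinks toward zero as the historical and current proportions diverge, which is the behavior the abstract attributes to the power parameter under the normalized power prior.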