Minimax Optimal Additive Functional Estimation with Discrete Distribution
This paper addresses the problem of estimating an additive functional from n i.i.d. samples drawn from a discrete distribution P=(p_1,...,p_k) with alphabet size k. The additive functional is defined as θ(P;ϕ)=∑_{i=1}^k ϕ(p_i) for a function ϕ, which covers most entropy-like criteria. The minimax optimal risk of this problem is already known for certain specific choices of ϕ, such as ϕ(p)=p^α and ϕ(p)=−p ln p. However, there has been no generic methodology for deriving the minimax optimal risk of the additive functional estimation problem. In this paper, we identify the property of ϕ that characterizes the minimax optimal risk of the additive functional estimation problem; the analysis is applicable to general ϕ. More precisely, we show that the minimax optimal risk is characterized by the divergence speed of the function ϕ.
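To make the object of study concrete, the sketch below computes the naive plug-in estimate of θ(P;ϕ): apply ϕ to each empirical frequency and sum. This only illustrates the definition of the additive functional; it is not the minimax-optimal estimator the paper constructs, and the function names are illustrative.

```python
import math
from collections import Counter

def plugin_additive_functional(samples, phi):
    """Plug-in estimate of theta(P; phi) = sum_i phi(p_i),
    with each p_i replaced by its empirical frequency."""
    n = len(samples)
    counts = Counter(samples)
    return sum(phi(c / n) for c in counts.values())

def entropy_phi(p):
    # phi(p) = -p ln p recovers the Shannon entropy (in nats)
    return -p * math.log(p) if p > 0 else 0.0

# n = 6 draws from a 4-symbol alphabet
samples = [0, 0, 1, 1, 2, 3]
print(plugin_additive_functional(samples, entropy_phi))
```

The plug-in estimator is known to be biased for small sample sizes relative to the alphabet size k, which is precisely the regime where minimax analysis of the kind developed in the paper matters.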