Truncated Self-Product Measures in Edge-Exchangeable Networks
Edge-exchangeable probabilistic network models generate edges as an i.i.d. sequence from a discrete measure, providing a simple means for statistical inference of latent network properties. The measure is often constructed using the self-product of a realization from a Bayesian nonparametric (BNP) discrete prior; but unlike in standard BNP models, the self-product measure prior is not conjugate to the likelihood, hindering the development of exact inference algorithms. Approximate inference via finite truncation of the discrete measure is a straightforward alternative, but incurs an unknown approximation error. In this paper, we develop theoretical bounds on the error of finite truncation in random self-product-measure-based models. We apply the theory to edge-exchangeable networks, demonstrating that the truncation error for dense graphs decreases geometrically with the truncation level, but that the truncation error for sparse graphs decreases much more slowly. This implies that high truncation levels—and corresponding high computational cost—are needed to handle sparse graphs in practice. Simulations of commonly used edge-exchangeable graph models confirm the theoretical results in both sparse and dense settings.
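As a minimal sketch of the generative process the abstract describes, the following Python snippet draws a truncated discrete measure and then samples edges i.i.d. from its self-product. The choice of a stick-breaking (GEM) prior for the weights is an assumption for illustration; the paper's results concern a general class of BNP discrete priors, and the function name is hypothetical.

```python
import numpy as np

def sample_truncated_edge_exchangeable_graph(K, n_edges, alpha=1.0, seed=None):
    """Sample n_edges edges from a truncated self-product measure.

    K is the truncation level: the infinite discrete measure is
    approximated by its first K atoms (illustrative stick-breaking prior).
    """
    rng = np.random.default_rng(seed)

    # Stick-breaking weights truncated at K atoms (assumed GEM(alpha) prior).
    betas = rng.beta(1.0, alpha, size=K)
    sticks = np.concatenate(([1.0], np.cumprod(1.0 - betas[:-1])))
    w = betas * sticks
    w = w / w.sum()  # renormalize so the truncated measure is a distribution

    # Edges are i.i.d. draws from the self-product measure: each edge is a
    # pair of vertices, each vertex drawn independently from w.
    u = rng.choice(K, size=n_edges, p=w)
    v = rng.choice(K, size=n_edges, p=w)
    return np.column_stack([u, v])
```

Because the edge sequence is i.i.d. given the weights, permuting the edges leaves the distribution unchanged, which is exactly the edge-exchangeability property; the truncation level `K` controls the trade-off between approximation error and computational cost studied in the paper.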