Improving the Efficiency of Approximate Inference for Probabilistic Logical Models by means of Program Specialization
We consider the task of performing probabilistic inference with probabilistic logical models. Many algorithms for approximate inference with such models are based on sampling. From a logic programming perspective, sampling boils down to repeatedly calling the same queries on a knowledge base composed of a static part and a dynamic part. The larger the static part, the more redundancy there is in these repeated calls. This is problematic since inefficient sampling yields poor approximations. We show how to apply logic program specialization to make sampling-based inference more efficient. We develop an algorithm that specializes the definitions of the query predicates with respect to the static part of the knowledge base. In experiments on real-world data, we obtain speedups of up to an order of magnitude, and these speedups grow with the size of the data.
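To convey the intuition behind specializing with respect to the static part, here is a minimal, purely illustrative sketch (not the paper's algorithm): a toy graph-reachability query over a knowledge base whose static part is a fixed graph and whose dynamic part is a set of probabilistic "edge is active" facts. The graph, probabilities, and helper names are hypothetical. The unspecialized version redoes a full reachability search in every Monte Carlo sample; the specialized version does the work over the static graph once and lets each sample touch only the dynamic facts.

```python
# Illustrative sketch only: reusing work over the static part of a
# knowledge base across Monte Carlo samples. All names are hypothetical.
import random

# Static part: a fixed directed graph (deterministic facts).
STATIC_EDGES = {("a", "b"), ("b", "c"), ("a", "c"), ("c", "d")}

# Dynamic part: each edge is independently "active" with some probability
# (probabilistic facts resampled in every Monte Carlo iteration).
EDGE_PROB = {e: 0.6 for e in STATIC_EDGES}

def sample_world():
    """Sample a truth value for every probabilistic edge fact."""
    return {e for e in STATIC_EDGES if random.random() < EDGE_PROB[e]}

def reachable(active_edges, src, dst):
    """Unspecialized per-sample query: recompute reachability from scratch."""
    frontier, seen = [src], {src}
    while frontier:
        node = frontier.pop()
        if node == dst:
            return True
        for (u, v) in active_edges:
            if u == node and v not in seen:
                seen.add(v)
                frontier.append(v)
    return False

def specialize_paths(edges, src, dst, path=None):
    """One-off work over the static part: enumerate simple src->dst paths.
    Afterwards, each sample only has to test which dynamic facts hold."""
    path = path or [src]
    if path[-1] == dst:
        yield tuple(zip(path, path[1:]))
        return
    for (u, v) in edges:
        if u == path[-1] and v not in path:
            yield from specialize_paths(edges, src, dst, path + [v])

def estimate(n_samples, query):
    """Monte Carlo estimate of the query probability."""
    return sum(query(sample_world()) for _ in range(n_samples)) / n_samples

if __name__ == "__main__":
    random.seed(0)
    # Unspecialized: a full reachability search in every sample.
    p_naive = estimate(10_000, lambda w: reachable(w, "a", "d"))
    # "Specialized": paths over the static graph are computed once;
    # each sample only checks which dynamic edge facts are active.
    paths = list(specialize_paths(STATIC_EDGES, "a", "d"))
    p_spec = estimate(10_000, lambda w: any(all(e in w for e in p) for p in paths))
    print(f"unspecialized estimate: {p_naive:.3f}")
    print(f"specialized estimate:   {p_spec:.3f}")
```

Both estimators target the same probability; the difference is that the second one has factored the static structure out of the per-sample work, which is the kind of redundancy the paper's program-specialization approach removes at the level of logic program definitions.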