On Projectivity in Markov Logic Networks

by   Sagar Malhotra, et al.

Markov Logic Networks (MLNs) define a probability distribution on relational structures over varying domain sizes. Many works have observed that MLNs, like many other relational models, do not admit consistent marginal inference across varying domain sizes. Furthermore, an MLN learnt on one domain does not generalize to domains of different sizes. In recent works, connections have emerged between domain size dependence, lifted inference, and learning from sub-sampled domains. Central to these works is the notion of projectivity. Under a projective model, the marginal probability of a sub-structure is independent of the domain cardinality. Hence, projective models admit efficient marginal inference, removing any dependence on the domain size. Furthermore, projective models potentially allow efficient and consistent parameter learning from sub-sampled domains. In this paper, we characterize the necessary and sufficient conditions for a two-variable MLN to be projective. We then isolate a special model in this class of MLNs, namely the Relational Block Model (RBM). We show that, in terms of data likelihood maximization, the RBM is the best possible projective MLN in the two-variable fragment. Finally, we show that RBMs also admit consistent parameter learning over sub-sampled domains.
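To make the projectivity property concrete, here is a minimal sketch (not from the paper) of a toy single-formula MLN that places weight w on each ground edge atom. In this model the edges are independent, so it is projective: brute-force enumeration shows the marginal probability of a fixed edge atom is the same for every domain size n. All function names here are hypothetical illustrations.

```python
import itertools
import math

def marginal_edge_prob(n, w):
    """Marginal probability of the ground atom edge(0, 1) under the MLN
    P(world) ∝ exp(w * #true edge atoms), computed by exhaustive
    enumeration of all undirected graphs on n domain elements."""
    pairs = list(itertools.combinations(range(n), 2))
    partition, numerator = 0.0, 0.0
    for bits in itertools.product([0, 1], repeat=len(pairs)):
        weight = math.exp(w * sum(bits))  # exponentiated sum of weighted groundings
        partition += weight
        if bits[0] == 1:  # worlds where edge(0, 1) holds
            numerator += weight
    return numerator / partition

w = 0.7
# Projectivity: the marginal does not change with the domain size n,
# and equals the logistic function of the weight, exp(w) / (1 + exp(w)).
probs = [marginal_edge_prob(n, w) for n in (2, 3, 4, 5)]
```

A non-projective two-variable MLN (e.g. one with a formula coupling distinct edge atoms, such as a transitivity-like rule) would instead yield marginals that drift as n grows, which is exactly the inconsistency the abstract describes.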



