Learning implicitly in reasoning in PAC-Semantics
We consider the problem of answering queries about formulas of propositional logic based on background knowledge that is partially represented explicitly as other formulas and partially represented as partially obscured examples drawn independently from a fixed probability distribution. Queries are answered with respect to a weaker semantics than usual, PAC-Semantics, introduced by Valiant (2000), which is defined using the distribution of examples. We describe a fairly general, efficient reduction to limited versions of the decision problem for a proof system (e.g., bounded-space treelike resolution, bounded-degree polynomial calculus) from corresponding versions of the reasoning problem in which some of the background knowledge is not given explicitly as formulas but is only learnable from the examples. Crucially, we never generate an explicit representation of the knowledge extracted from the examples, so the "learning" of the background knowledge is done only implicitly. As a consequence, this approach can utilize background knowledge formulas that are not perfectly valid over the distribution, essentially the analogue of agnostic learning in this setting.
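To make the flavor of such a reduction concrete, the following Python sketch illustrates one way a procedure of this kind might look: each partially obscured example restricts both the explicit knowledge base and the query, a decision procedure for a restricted proof system is invoked on each restricted instance, and the query is accepted if a large enough fraction of these restricted instances succeed. The clause encoding, the names restrict, proves_restricted, and decide_query, and the brute-force stand-in oracle are illustrative assumptions, not the paper's actual algorithm or interface.

# A minimal sketch (not the paper's implementation) of the high-level idea:
# restrict the query and the explicit knowledge base by each partially obscured
# example and invoke a (stand-in) decision procedure for a restricted proof
# system on each restricted instance.

from itertools import product
from typing import Dict, FrozenSet, List, Optional, Set

Clause = FrozenSet[int]          # DIMACS-style literals: +v means v, -v means NOT v
CNF = Set[Clause]
PartialExample = Dict[int, Optional[bool]]   # None marks an obscured variable


def restrict(cnf: CNF, rho: PartialExample) -> Optional[CNF]:
    """Simplify a CNF under a partial assignment; None signals a falsified clause."""
    out: CNF = set()
    for clause in cnf:
        if any(rho.get(abs(l)) == (l > 0) for l in clause):
            continue                      # clause satisfied by rho, drop it
        reduced = frozenset(l for l in clause if rho.get(abs(l)) is None)
        if not reduced:
            return None                   # clause falsified under rho
        out.add(reduced)
    return out


def proves_restricted(premises: CNF, query_clause: Clause) -> bool:
    """Stand-in for a limited proof-system decision procedure (e.g., bounded-space
    treelike resolution): here, brute-force entailment over the surviving variables."""
    variables = sorted({abs(l) for c in premises | {query_clause} for l in c})
    for bits in product([False, True], repeat=len(variables)):
        assignment = dict(zip(variables, bits))
        if all(any(assignment[abs(l)] == (l > 0) for l in c) for c in premises):
            if not any(assignment[abs(l)] == (l > 0) for l in query_clause):
                return False              # premises hold but the query clause fails
    return True


def decide_query(explicit_kb: CNF, query_clause: Clause,
                 masked_examples: List[PartialExample], epsilon: float) -> bool:
    """Accept if the (restricted) query clause is derivable on at least a
    (1 - epsilon) fraction of the partially obscured examples; the knowledge the
    examples carry is used only implicitly, never reconstructed as formulas."""
    successes = 0
    for rho in masked_examples:
        kb_rho = restrict(explicit_kb, rho)
        if any(rho.get(abs(l)) == (l > 0) for l in query_clause):
            successes += 1                # query clause already witnessed true by rho
        elif kb_rho is None:
            successes += 1                # restricted KB is contradictory, query follows trivially
        else:
            query_rho = frozenset(l for l in query_clause if rho.get(abs(l)) is None)
            if proves_restricted(kb_rho, query_rho):
                successes += 1
    return successes >= (1 - epsilon) * len(masked_examples)

Under this reading, standard concentration bounds would relate the empirical success rate over the sample to validity with high probability under the example distribution, and the restricted oracle is the only place the proof system enters; the sketch makes no claim about matching the paper's guarantees.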