Interactive Learning of Acyclic Conditional Preference Networks
Learning user preferences, as represented for example by Conditional Preference Networks (CP-nets), has become a core issue in AI research. Recent studies investigate the learning of CP-nets from randomly chosen examples or from membership and equivalence queries. To assess the optimality of learning algorithms, and to better understand the combinatorial structure of classes of CP-nets, it is helpful to calculate certain learning-theoretic information complexity parameters. This paper determines bounds on, or exact values of, some of the most central information complexity parameters, namely the VC dimension, the (recursive) teaching dimension, the self-directed learning complexity, and the optimal mistake bound, for classes of acyclic CP-nets. We further provide an algorithm that learns tree-structured CP-nets from membership queries. Using our results on complexity parameters, we assess the optimality of our algorithm as well as that of another query learning algorithm for acyclic CP-nets presented in the literature. Our algorithm is near-optimal and can, under certain assumptions, be adapted to the case in which the membership oracle is faulty.
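To make the setting concrete, the sketch below shows a minimal tree-structured CP-net over two binary variables and how a single "swap" membership query (comparing two outcomes that differ in exactly one variable) might be answered. This is an illustrative sketch only, not the paper's learning algorithm; the names `CPNet` and `prefers` are hypothetical.

```python
# Illustrative sketch (not the paper's algorithm): a tiny acyclic CP-net
# over binary variables, answering "swap" membership queries, i.e. whether
# one outcome is preferred to another when they differ in a single variable.

class CPNet:
    def __init__(self, parents, cpt):
        # parents: variable -> tuple of parent variables (acyclic graph)
        # cpt: variable -> {parent assignment tuple: preferred value}
        self.parents = parents
        self.cpt = cpt

    def prefers(self, o1, o2):
        """Swap membership query: True iff o1 is preferred to o2.

        o1 and o2 are dicts mapping variables to values; they must
        differ in exactly one variable, whose CPT entry decides.
        """
        diff = [v for v in o1 if o1[v] != o2[v]]
        if len(diff) != 1:
            raise ValueError("swap pairs must differ in exactly one variable")
        v = diff[0]
        context = tuple(o1[p] for p in self.parents[v])
        return self.cpt[v][context] == o1[v]

# A tree-structured CP-net: B's preference depends on its parent A.
net = CPNet(
    parents={"A": (), "B": ("A",)},
    cpt={"A": {(): 0},                # unconditionally prefer A=0
         "B": {(0,): 0, (1,): 1}},    # prefer B's value to match A's
)

print(net.prefers({"A": 0, "B": 0}, {"A": 0, "B": 1}))  # True: given A=0, B=0 > B=1
print(net.prefers({"A": 1, "B": 0}, {"A": 1, "B": 1}))  # False: given A=1, B=1 > B=0
```

A query learner in this setting poses such comparisons to an oracle (here simulated by `prefers`) and reconstructs the conditional preference tables from the answers.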