
Learning Useful Horn Approximations

Full Text: horn.ps (PS)

While the task of answering queries from an arbitrary propositional theory is intractable in general, it can typically be performed efficiently if the theory is Horn. This suggests that it may be more efficient to answer queries using a "Horn approximation", i.e., a Horn theory that is semantically similar to the original theory. The utility of any such approximation depends on how often it produces answers to the queries that the system actually encounters; we therefore seek an approximation whose expected "coverage" is maximal. Unfortunately, there are several obstacles to achieving this goal in practice: (i) the optimal approximation depends on the query distribution, which is typically not known a priori; (ii) identifying the optimal approximation is intractable, even given the query distribution; and (iii) the optimal approximation might be too large to guarantee tractable inference. This paper presents an approach that overcomes (or side-steps) each of these obstacles. We define a learning process, AdComp, that uses observed queries to estimate the query distribution "online", and then uses these estimates to hill-climb, efficiently, in the space of size-bounded Horn approximations, until reaching one that is, with provably high probability, effectively at a local optimum.
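The core idea of the abstract can be illustrated with a toy sketch (this is not the paper's AdComp algorithm, and all names below are illustrative assumptions): represent Horn clauses as (body, head) rules, score a size-bounded set of clauses by its empirical coverage on a sample of observed queries, and hill-climb by single-clause swaps until no swap improves coverage.

```python
import itertools
import random

# A Horn clause is (body, head): body is a frozenset of atoms, head a single atom.
# forward_chain computes the closure of `facts` under `clauses` (tractable for Horn).
def forward_chain(clauses, facts):
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for body, head in clauses:
            if head not in derived and body <= derived:
                derived.add(head)
                changed = True
    return derived

# A query (facts, goal) is "covered" if the approximation derives the goal.
# Empirical coverage over a sample of queries stands in for the (unknown)
# expected coverage under the true query distribution.
def coverage(clauses, queries):
    return sum(goal in forward_chain(clauses, facts)
               for facts, goal in queries) / len(queries)

# Hill-climb in the space of size-bounded subsets of candidate Horn clauses:
# repeatedly swap one clause out and one clause in if that raises coverage.
def hill_climb(candidates, queries, size_bound, rng):
    current = set(rng.sample(candidates, size_bound))
    best = coverage(current, queries)
    improved = True
    while improved:
        improved = False
        for out_c, in_c in itertools.product(list(current), candidates):
            if in_c in current:
                continue
            trial = (current - {out_c}) | {in_c}
            score = coverage(trial, queries)
            if score > best:
                current, best, improved = trial, score, True
                break
    return current, best

# Example: with candidates a->b, b->c, a->d and a sampled query "given a, is c
# entailed?", the climb settles on the two clauses that chain a to c.
cands = [(frozenset({'a'}), 'b'), (frozenset({'b'}), 'c'), (frozenset({'a'}), 'd')]
qs = [(frozenset({'a'}), 'c')]
sol, best = hill_climb(cands, qs, 2, random.Random(0))
```

The sketch stops at a local optimum of empirical coverage; the paper's contribution is doing this efficiently while guaranteeing, with high probability, that the result is effectively locally optimal under the true distribution.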

Citation

R. Greiner, D. Schuurmans. "Learning Useful Horn Approximations". Knowledge Representation and Reasoning (KR), Cambridge, United States, October 1992.

Keywords: machine learning, Horn approximation, PALO, PAC
Category: In Conference
Web Links: KR

BibTeX

@inproceedings{Greiner+Schuurmans:KR92,
  author = {Russ Greiner and Dale Schuurmans},
  title = {Learning Useful Horn Approximations},
  booktitle = {Knowledge Representation and Reasoning (KR)},
  address = {Cambridge, United States},
  month = oct,
  year = 1992,
}

Last Updated: November 19, 2019
Submitted by Sabina P
