
A Generalized Loop Correction Method for Approximate Inference in Graphical Models

Full Text: 304.pdf

Belief Propagation (BP) is one of the most popular methods for inference in probabilistic graphical models. BP is guaranteed to return the correct answer for tree structures, but can be incorrect or non-convergent for loopy graphical models. Recently, several new approximate inference algorithms based on cavity distribution have been proposed. These methods can account for the effect of loops by incorporating the dependency between BP messages. Alternatively, region-based approximations (that lead to methods such as Generalized Belief Propagation) improve upon BP by considering interactions within small clusters of variables, thus taking small loops within these clusters into account. This paper introduces an approach, Generalized Loop Correction (GLC), that benefits from both of these types of loop correction. We show how GLC relates to these two families of inference methods, then provide empirical evidence that GLC works effectively in general, and can be significantly more accurate than both correction schemes.
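To make the baseline in the abstract concrete, here is a minimal sum-product loopy BP sketch on a toy three-variable binary cycle. This illustrates plain BP only, not the GLC algorithm from the paper; the toy potentials, the fixed number of sweeps, and all names are assumptions made for this example. Because the graph has a loop, the resulting beliefs are only approximate.

```python
# Illustrative only: minimal sum-product loopy BP on a toy pairwise model.
# This is NOT the GLC method from the paper; potentials and names are
# assumptions for this sketch.
import numpy as np

# Toy 3-variable binary cycle: unary potentials and pairwise couplings.
unary = {0: np.array([1.0, 2.0]),
         1: np.array([2.0, 1.0]),
         2: np.array([1.0, 1.0])}
pairwise = {(0, 1): np.array([[2.0, 1.0], [1.0, 2.0]]),
            (1, 2): np.array([[2.0, 1.0], [1.0, 2.0]]),
            (2, 0): np.array([[2.0, 1.0], [1.0, 2.0]])}
edges = [(i, j) for (i, j) in pairwise] + [(j, i) for (i, j) in pairwise]

# messages[(i, j)] is the message from variable i to variable j.
messages = {e: np.ones(2) for e in edges}

def potential(i, j):
    """Pairwise potential oriented so rows index x_i and columns index x_j."""
    if (i, j) in pairwise:
        return pairwise[(i, j)]
    return pairwise[(j, i)].T

for _ in range(50):  # fixed number of sweeps; no convergence check here
    new_messages = {}
    for (i, j) in edges:
        # Product of the unary potential and all incoming messages except j's.
        incoming = unary[i].copy()
        for (k, l) in edges:
            if l == i and k != j:
                incoming *= messages[(k, i)]
        # Sum-product update, normalized for numerical stability.
        msg = potential(i, j).T @ incoming
        new_messages[(i, j)] = msg / msg.sum()
    messages = new_messages

# Approximate marginals (beliefs); these would be exact only on a tree.
for i in unary:
    belief = unary[i].copy()
    for (k, l) in edges:
        if l == i:
            belief *= messages[(k, i)]
    belief /= belief.sum()
    print(f"P(x_{i}): {belief}")
```

On a tree these beliefs would equal the exact marginals; on this cycle they differ, which is exactly the loop effect that cavity-based corrections and region-based methods, and GLC in particular, aim to account for.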

Citation

S. Ravanbakhsh, C. Yu, R. Greiner. "A Generalized Loop Correction Method for Approximate Inference in Graphical Models". International Conference on Machine Learning (ICML), (ed: John Langford, Joelle Pineau), pp 543-550, July 2012.

Keywords: probabilistic graphical models, inference, approximate inference
Category: In Conference
Web Links: ICML

BibTeX

@incollection{Ravanbakhsh+al:ICML12,
  author    = {Siamak Ravanbakhsh and Chun-Nam Yu and Russ Greiner},
  title     = {A Generalized Loop Correction Method for Approximate Inference in
    Graphical Models},
  editor    = {John Langford and Joelle Pineau},
  pages     = {543--550},
  booktitle = {International Conference on Machine Learning (ICML)},
  year      = {2012},
}

