
Bayesian Error-Bars for Belief Net Inference

Full Text: ErrBar.ps
Other Attachments: ErrBar.pdf

A Bayesian belief network (BN) is a model of a joint distribution over a finite set of variables, with a DAG structure representing the immediate dependencies between the variables, and a set of parameters (the CPtables) representing the local conditional probabilities of each node given each assignment to its parents. In many situations, the parameters are themselves treated as random variables, reflecting the uncertainty that remains after drawing on the knowledge of domain experts and/or observing data generated by the network. A distribution over the CPtable parameters induces a distribution over the response the BN will return to any "What is P(H | E)?" query. This paper investigates the distribution of this response, shows that it is asymptotically normal, and derives closed-form expressions for its mean and asymptotic variance. We show that this computation has the same complexity as simply computing the (mean value of the) response, i.e., O(n * exp(w)), where n is the number of variables and w is the effective tree width. We also provide empirical evidence that the error bars computed from our estimates are fairly accurate in practice, over a wide range of belief net structures and queries.
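To illustrate the idea of a parameter distribution inducing a distribution over a query response, the following sketch uses a simple Monte Carlo simulation rather than the paper's closed-form mean and variance. The two-node network A -> B, the Dirichlet counts, and the query P(A=1 | B=1) are hypothetical choices made only for this example.

# Illustrative sketch (not the paper's method): sample CPtables from Dirichlet
# distributions and observe the induced distribution of the query response.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical Dirichlet counts for P(A) and P(B | A) in a two-node net A -> B.
alpha_A = np.array([4.0, 6.0])               # counts for A=0, A=1
alpha_B_given_A = np.array([[7.0, 3.0],      # counts for B given A=0
                            [2.0, 8.0]])     # counts for B given A=1

def query_p_a1_given_b1(theta_A, theta_B_given_A):
    """Exact inference in the two-node net: P(A=1 | B=1) by Bayes' rule."""
    joint = theta_A[:, None] * theta_B_given_A   # joint[a, b] = P(A=a, B=b)
    return joint[1, 1] / joint[:, 1].sum()

# Draw CPtable parameters and evaluate the query under each draw.
samples = []
for _ in range(20000):
    theta_A = rng.dirichlet(alpha_A)
    theta_B = np.vstack([rng.dirichlet(row) for row in alpha_B_given_A])
    samples.append(query_p_a1_given_b1(theta_A, theta_B))
samples = np.asarray(samples)

# Error bars: mean +/- 2 standard deviations of the induced query distribution,
# whose shape the paper shows is asymptotically normal.
print(f"P(A=1 | B=1) ~= {samples.mean():.3f} +/- {2 * samples.std():.3f}")

The Monte Carlo loop here stands in for the paper's O(n * exp(w)) closed-form computation; its only purpose is to make the notion of "error bars on a BN query" concrete.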

Citation

T. Van Allen, R. Greiner, P. Hooper. "Bayesian Error-Bars for Belief Net Inference". Conference on Uncertainty in Artificial Intelligence (UAI), Seattle, Washington, USA, August 2001.

Keywords: error bars, belief nets, inference, probabilistic graphical models, theoretical, empirical, machine learning
Category: In Conference

BibTeX

@inproceedings{VanAllen+al:UAI01,
  author    = {Tim Van Allen and Russ Greiner and Peter Hooper},
  title     = {Bayesian Error-Bars for Belief Net Inference},
  booktitle = {Conference on Uncertainty in Artificial Intelligence (UAI)},
  address   = {Seattle, Washington, USA},
  month     = aug,
  year      = 2001
}

