Budgeted Distribution Learning of Belief Net Parameters
Full Text: BDL_ICML2010.pdf
Abstract
Most learning algorithms assume that a training dataset is given initially. We address the common situation where data is not available initially, but can be obtained at a cost. We focus on learning Bayesian belief networks (BNs) over discrete variables. As such BNs are models of probability distributions, we consider the "generative" challenge of learning, for a fixed structure, the parameters that best match the true distribution. We focus on the budgeted learning setting, where there is a known fixed cost c_i for acquiring the value of the i-th feature for any specified instance, and a known total budget to spend acquiring all information. After formally defining this problem from a Bayesian perspective, we first consider non-sequential algorithms, which must decide, before seeing any results, which features of which instances to probe. We show that this problem is NP-hard even if all variables are independent, then prove that the greedy allocation algorithm IGA is optimal here when the costs are uniform, but can otherwise be sub-optimal. We then show that general (sequential) policies perform better than non-sequential ones, and explore the challenges of learning the parameters of general belief networks in this sequential setting, describing conditions under which the obvious round-robin algorithm will, versus will not, work optimally. We also explore the effectiveness of this and various other heuristic algorithms.
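The non-sequential greedy allocation mentioned in the abstract can be illustrated with a small sketch. This is not the paper's implementation: it assumes independent Bernoulli features with Beta priors, unit probe costs, and uses expected posterior-variance reduction as the per-probe gain, whereas the paper's objective is a Bayesian distribution-matching criterion. It shows only the shape of the greedy allocation.

```python
from functools import lru_cache

# Hypothetical sketch: each feature is Bernoulli(theta) with a Beta(a, b)
# prior, every probe has unit cost, and the gain of one more probe is the
# expected reduction in posterior variance. This is an assumed objective
# for illustration, not the paper's actual criterion.

@lru_cache(maxsize=None)
def expected_var(a, b, m):
    """Expected posterior variance of theta after m further observations,
    averaging over outcomes under the Beta(a, b) predictive distribution."""
    if m == 0:
        return a * b / ((a + b) ** 2 * (a + b + 1))
    p = a / (a + b)  # predictive probability of observing a 1
    return p * expected_var(a + 1, b, m - 1) + (1 - p) * expected_var(a, b + 1, m - 1)

def greedy_allocate(priors, budget):
    """Assign `budget` unit-cost probes across features, one at a time,
    each to the feature with the largest expected variance reduction."""
    alloc = [0] * len(priors)
    for _ in range(budget):
        gains = [expected_var(a, b, m) - expected_var(a, b, m + 1)
                 for (a, b), m in zip(priors, alloc)]
        alloc[gains.index(max(gains))] += 1
    return alloc

# A near-certain feature (Beta(1, 9)) attracts fewer probes than an
# uncertain one (Beta(1, 1)).
print(greedy_allocate([(1.0, 1.0), (1.0, 9.0), (5.0, 5.0)], budget=12))
```

Note that the whole allocation is fixed before any outcome is observed, which is what distinguishes this non-sequential setting from the sequential policies discussed next.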
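The round-robin baseline from the sequential setting admits an equally simple sketch, again under assumed simplifications (independent binary features with Beta(1, 1) priors and per-feature costs); the function name and toy model are illustrative, not from the paper.

```python
import random

def round_robin_probe(true_theta, cost, budget, seed=0):
    """Spend `budget` probing features in round-robin order, updating the
    Beta posterior counts (alpha, beta) for each feature as results arrive."""
    rng = random.Random(seed)
    n = len(true_theta)
    alpha = [1.0] * n  # prior pseudo-counts of observed 1s
    beta = [1.0] * n   # prior pseudo-counts of observed 0s
    i = 0
    while budget >= min(cost):          # stop when nothing is affordable
        if cost[i] <= budget:           # probe feature i if we can pay for it
            budget -= cost[i]
            x = 1 if rng.random() < true_theta[i] else 0
            alpha[i] += x
            beta[i] += 1 - x
        i = (i + 1) % n                 # move to the next feature in turn
    return alpha, beta

theta = [0.2, 0.5, 0.9]
a, b = round_robin_probe(theta, cost=[1.0, 1.0, 2.0], budget=30.0)
print([ai / (ai + bi) for ai, bi in zip(a, b)])  # posterior means near theta
```

Round-robin is technically a sequential policy (probes are issued one at a time), but it never uses the observed outcomes to redirect its spending, which is why, as the abstract notes, it is optimal only under certain conditions.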
Citation
L. Li, B. Poczos, C. Szepesvari, R. Greiner. "Budgeted Distribution Learning of Belief Net Parameters". International Conference on Machine Learning (ICML), June 2010.
Keywords: machine learning, budgeted learning, parameters, Bayesian networks
Category: In Conference
Web Links: Information page
BibTeX
@inproceedings{Li+al:ICML10,
  author    = {Liuyang Spike Li and Barnabas Poczos and Csaba Szepesvari and Russ Greiner},
  title     = {Budgeted Distribution Learning of Belief Net Parameters},
  booktitle = {International Conference on Machine Learning (ICML)},
  year      = {2010},
}
Last Updated: June 15, 2012
Submitted by Russ Greiner