Maximum Margin Bayesian Networks
We consider the problem of learning Bayesian
network classifiers that maximize the margin
over a set of classification variables. We find
that this problem is harder for Bayesian networks
than for undirected graphical models like maximum
margin Markov networks. The main difficulty
is that the parameters in a Bayesian network
must satisfy additional normalization constraints
that an undirected graphical model need
not respect. These additional constraints complicate
the optimization task. Nevertheless, we derive
an effective training algorithm that solves the
maximum margin training problem for a range
of Bayesian network topologies, and converges
to an approximate solution for arbitrary network
topologies. Experimental results show that the
method can yield improved generalization
performance over maximum margin Markov networks when
the directed graphical structure encodes relevant
knowledge. In practice, the training technique allows
one to combine prior knowledge expressed
as a directed (causal) model with state-of-the-art
discriminative learning methods.
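The margin criterion described above can be illustrated on a toy example. The sketch below (an illustrative assumption, not the paper's training algorithm) shows the multiclass log-likelihood margin for a tiny naive Bayes network, along with the normalization constraints on the conditional probability tables that distinguish Bayesian networks from undirected models; the function names `log_joint` and `margin` are hypothetical.

```python
import numpy as np

def log_joint(log_prior, log_cpt, x, y):
    # log P(y, x) = log P(y) + sum_j log P(x_j | y)  (naive Bayes factorization)
    return log_prior[y] + sum(log_cpt[j][y, xj] for j, xj in enumerate(x))

def margin(log_prior, log_cpt, x, y):
    # Multiclass margin: log P(y, x) - max_{y' != y} log P(y', x).
    # Positive margin means the true class wins the MAP decision.
    K = len(log_prior)
    scores = [log_joint(log_prior, log_cpt, x, k) for k in range(K)]
    return scores[y] - max(s for k, s in enumerate(scores) if k != y)

# Toy model: 2 classes, one binary feature (parameters chosen for illustration).
log_prior = np.log(np.array([0.5, 0.5]))
log_cpt = [np.log(np.array([[0.8, 0.2],    # P(x_1 | y=0)
                            [0.3, 0.7]]))]  # P(x_1 | y=1)

# The normalization constraints: each conditional distribution must sum to 1,
# i.e. the log-parameters cannot be scaled freely as in a Markov network.
for table in log_cpt:
    assert np.allclose(np.exp(table).sum(axis=1), 1.0)

m = margin(log_prior, log_cpt, x=(0,), y=0)
print(round(m, 4))  # log(0.8/0.3) ≈ 0.9808
```

Maximum margin training would adjust the log-parameters to maximize the smallest such margin over the training set, subject to the per-row normalization constraints checked above.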
Citation
Y. Guo, D. Wilkinson, D. Schuurmans. "Maximum Margin Bayesian Networks". Conference on Uncertainty in Artificial Intelligence (UAI), Edinburgh, Scotland, July 2005.
Keywords: Bayesian networks, machine learning
Category: In Conference
BibTeX
@inproceedings{Guo+al:UAI05,
author = {Yuhong Guo and Dana Wilkinson and Dale Schuurmans},
title = {Maximum Margin Bayesian Networks},
booktitle = {Conference on Uncertainty in Artificial Intelligence (UAI)},
year = 2005,
}
Last Updated: April 24, 2007
Submitted by William Thorne