
Data Perturbation for Escaping Local Maxima in Learning

Full Text: ENFS1.pdf

Almost all machine learning algorithms---be they for regression, classification or density estimation---seek hypotheses that optimize a score on training data. In most interesting cases, however, full global optimization is not feasible and local search techniques are used to discover reasonable solutions. Unfortunately, the quality of the local maxima reached depends on initialization and is often weaker than the global maximum. In this paper, we present a simple approach for combining global search with local optimization to discover improved hypotheses in general machine learning problems. The main idea is to escape local maxima by perturbing the training data to create plausible new ascent directions, rather than perturbing hypotheses directly. Specifically, we consider example-reweighting strategies that are reminiscent of boosting and other ensemble learning methods, but applied in a different way with a different goal: to produce a single hypothesis that achieves a good score on training and test data. To evaluate the performance of our algorithms we consider a number of problems in learning Bayesian networks from data, including discrete training problems (structure search), continuous training problems (parametric EM, non-linear logistic regression), and mixed training problems (Structural EM)---on both synthetic and real-world data. In each case, we obtain state of the art performance on both training and test data.
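For illustration only, the following Python sketch shows one generic way a data-perturbation loop of this flavour could wrap a local optimizer: examples are randomly reweighted with a decaying temperature, local search is restarted from the current hypothesis on the reweighted data, and the best hypothesis under the original (unweighted) objective is kept. The functions local_search and score are hypothetical placeholders for a learner's local optimizer and its training objective, and the specific reweighting scheme here is an assumption, not necessarily the one used in the paper.

# Illustrative sketch of an example-reweighting perturbation loop (not the
# authors' exact algorithm). `local_search(data, weights, init=None)` and
# `score(hypothesis, data, weights)` are hypothetical placeholders supplied
# by the learner (e.g. structure search, EM, logistic regression).
import numpy as np

def perturbed_search(data, local_search, score, n_rounds=20,
                     temp=1.0, decay=0.9, seed=0):
    rng = np.random.default_rng(seed)
    n = len(data)
    uniform = np.ones(n)

    # Start from a local optimum on the unperturbed data.
    hypothesis = local_search(data, uniform)
    best, best_score = hypothesis, score(hypothesis, data, uniform)

    for _ in range(n_rounds):
        # Perturb the training data by reweighting examples instead of
        # perturbing the hypothesis itself; the perturbation magnitude
        # shrinks as the temperature decays.
        weights = np.exp(temp * rng.standard_normal(n))
        weights *= n / weights.sum()          # keep total weight comparable

        # Re-run local search on the reweighted data, warm-started from the
        # current hypothesis, then evaluate on the original objective.
        hypothesis = local_search(data, weights, init=hypothesis)
        s = score(hypothesis, data, uniform)
        if s > best_score:
            best, best_score = hypothesis, s

        temp *= decay                          # anneal toward the original data

    return best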

Citation

G. Elidan, M. Ninio, N. Friedman, D. Schuurmans. "Data Perturbation for Escaping Local Maxima in Learning". National Conference on Artificial Intelligence (AAAI), Edmonton, Alberta, July 2002.

Keywords: perturbation, machine learning
Category: In Conference

BibTeX

@inproceedings{Elidan+al:AAAI02,
  author = {Gal Elidan and Matan Ninio and Nir Friedman and Dale Schuurmans},
  title = {Data Perturbation for Escaping Local Maxima in Learning},
  booktitle = {National Conference on Artificial Intelligence (AAAI)},
  address = {Edmonton, Alberta},
  month = jul,
  year = 2002,
}

Last Updated: June 01, 2007
Submitted by Stuart H. Johnson
