
Regularized Greedy Importance Sampling

Full Text: AA34.pdf

Greedy importance sampling is an unbiased estimation technique that reduces the variance of standard importance sampling by explicitly searching for modes in the estimation objective. Previous work has demonstrated the feasibility of implementing this method and proved that the technique is unbiased in both discrete and continuous domains. In this paper we present a reformulation of greedy importance sampling that eliminates the free parameters from the original estimator and introduces a new regularization strategy that further reduces variance without compromising unbiasedness. The resulting estimator is shown to be effective for difficult estimation problems arising in Markov random field inference. In particular, improvements are achieved over standard MCMC estimators when the distribution has multiple peaked modes.
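For context, the sketch below illustrates the general idea the abstract builds on: standard self-normalized importance sampling on a peaked bimodal target, plus a simple greedy ascent toward a mode. It is a minimal illustration only, not the paper's algorithm; the function names (log_p, importance_estimate, greedy_search), the Gaussian-mixture target, and the wide Gaussian proposal are assumptions, and the paper's parameter-free block weighting and regularization that preserve unbiasedness are not reproduced here.

import numpy as np

# Minimal sketch, NOT the paper's algorithm: self-normalized importance
# sampling on a peaked bimodal target, plus an illustrative greedy ascent
# toward a mode. The paper's contribution (parameter-free block weights and
# a regularization strategy that keep the estimator unbiased) is omitted.

rng = np.random.default_rng(0)

def log_p(x):
    # Unnormalized log target: an equal mixture of two narrow Gaussians
    # centered at -3 and +3 (hypothetical example of a multi-modal target).
    return np.logaddexp(-0.5 * ((x + 3.0) / 0.3) ** 2,
                        -0.5 * ((x - 3.0) / 0.3) ** 2)

def importance_estimate(f, n=10_000, proposal_scale=5.0):
    # Estimate E_p[f(X)] with a wide Gaussian proposal q = N(0, scale^2).
    x = rng.normal(0.0, proposal_scale, size=n)
    log_q = -0.5 * (x / proposal_scale) ** 2 - np.log(proposal_scale)
    log_w = log_p(x) - log_q                  # importance weights (log scale)
    w = np.exp(log_w - log_w.max())           # stabilize before normalizing
    return np.sum(w * f(x)) / np.sum(w)

def greedy_search(x0, step=0.1, max_steps=50):
    # Illustrative hill-climbing on log_p from a sampled starting point.
    x = x0
    for _ in range(max_steps):
        candidates = np.array([x - step, x, x + step])
        x = candidates[np.argmax(log_p(candidates))]
    return x

if __name__ == "__main__":
    # For this symmetric target, E_p[X^2] is close to 3.0**2 + 0.3**2.
    print("plain IS estimate of E[X^2]:", importance_estimate(lambda x: x * x))
    print("greedy ascent from 0.5 reaches the mode near:", greedy_search(0.5))

The point of the greedy step is that samples drawn from a broad proposal rarely land on the narrow peaks, so deterministically climbing toward a mode concentrates the estimator's effort where the mass is; the paper's estimator then reweights the points visited along each search path so that the overall estimate remains unbiased.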

Citation

F. Southey, D. Schuurmans, A. Ghodsi. "Regularized Greedy Importance Sampling". Neural Information Processing Systems (NIPS), Vancouver, British Columbia, Canada, January 2002.

Keywords: greedy, importance, machine learning
Category: In Conference

BibTeX

@incollection{Southey+al:NIPS02,
  author = {Finnegan Southey and Dale Schuurmans and Ali Ghodsi},
  title = {Regularized Greedy Importance Sampling},
  booktitle = {Neural Information Processing Systems (NIPS)},
  year = 2002,
}

Last Updated: June 01, 2007
Submitted by Stuart H. Johnson
