
Adaptive Monte Carlo via bandit allocation

Full Text: neufeld14.pdf

We consider the problem of sequentially choosing between a set of unbiased Monte Carlo estimators to minimize the mean squared error (MSE) of a final combined estimate. By reducing this task to a stochastic multi-armed bandit problem, we show that well-developed allocation strategies can be used to achieve an MSE that approaches that of the best estimator chosen in retrospect. We then extend these developments to a scenario where alternative estimators have different, possibly stochastic, costs. The outcome is a new set of adaptive Monte Carlo strategies that provide stronger guarantees than previous approaches while offering practical advantages.
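
As a rough illustration of the allocation idea only (not the exact algorithms analysed in the paper), the Python sketch below treats each unbiased estimator as a bandit arm and spends a fixed sampling budget on the arm with the smallest lower confidence bound on its empirical variance, since for unbiased estimators low variance means low MSE per sample. The function name adaptive_mc, the confidence-bonus constant, and the final combination rule are illustrative assumptions, not taken from the paper.

```python
import numpy as np


def adaptive_mc(estimators, budget, rng=None):
    """Allocate a sampling budget across unbiased Monte Carlo estimators
    with a bandit-style rule on empirical variances, then return a
    combined estimate.  Illustrative sketch only."""
    rng = rng if rng is not None else np.random.default_rng()
    k = len(estimators)
    counts = np.zeros(k, dtype=int)
    means = np.zeros(k)
    m2 = np.zeros(k)  # running sums of squared deviations (Welford)

    def draw(i):
        x = estimators[i](rng)
        counts[i] += 1
        delta = x - means[i]
        means[i] += delta / counts[i]
        m2[i] += delta * (x - means[i])

    # Initialisation: a couple of samples from every estimator.
    for i in range(k):
        for _ in range(2):
            draw(i)

    # Spend the rest of the budget on the estimator whose variance lower
    # confidence bound is smallest: all estimators are unbiased, so the
    # lowest-variance one is the best arm in the MSE sense.
    while counts.sum() < budget:
        var = m2 / (counts - 1)
        bonus = np.sqrt(2.0 * np.log(counts.sum()) / counts)
        draw(int(np.argmin(var - bonus)))

    # Simple combination rule for this sketch: report the sample mean of
    # the most frequently pulled estimator.
    best = int(np.argmax(counts))
    return means[best], counts


if __name__ == "__main__":
    # Two unbiased estimators of E[exp(U)], U ~ Uniform(0, 1): crude
    # sampling versus an antithetic variant with lower variance.
    def crude(rng):
        return np.exp(rng.random())

    def antithetic(rng):
        u = rng.random()
        return 0.5 * (np.exp(u) + np.exp(1.0 - u))

    estimate, counts = adaptive_mc([crude, antithetic], budget=5000)
    print(f"estimate={estimate:.4f}  truth={np.e - 1:.4f}  pulls={counts}")
```

In this toy run the allocation concentrates most of the budget on the antithetic estimator, mirroring the goal of matching the best estimator chosen in retrospect; handling per-sample costs, as in the paper's extension, would require folding cost estimates into the index.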

Citation

J. Neufeld, A. Gyorgy, C. Szepesvari, D. Schuurmans. "Adaptive Monte Carlo via bandit allocation". International Conference on Machine Learning (ICML), (eds.: Eric P. Xing, Tony Jebara), pp. 1944-1952, June 2014.

Keywords:  
Category: In Conference
Web Links: PMLR

BibTeX

@inproceedings{Neufeld+al:ICML14,
  author = {James Neufeld and Andras Gyorgy and Csaba Szepesvari and Dale
    Schuurmans},
  title = {Adaptive Monte Carlo via bandit allocation},
  editor = {Eric P. Xing and Tony Jebara},
  pages = {1944--1952},
  booktitle = {International Conference on Machine Learning (ICML)},
  year = 2014,
}

Last Updated: February 19, 2020
Submitted by Sabina P
