
Strategy Evaluation in Extensive Games with Importance Sampling

Full Text: paper.pdf

Typically, agent evaluation is done through Monte Carlo estimation. However, stochastic agent decisions and stochastic outcomes can make this approach inefficient, requiring many samples for an accurate estimate. We present a new technique that can be used to simultaneously evaluate many strategies while playing a single strategy in the context of an extensive game. This technique is based on importance sampling, but utilizes two new mechanisms for significantly reducing variance in the estimates. We demonstrate its effectiveness in the domain of poker, where stochasticity makes traditional evaluation problematic.
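To illustrate the underlying idea, the following is a minimal sketch of off-policy evaluation with importance sampling: the value of a target strategy is estimated from episodes generated by a different played strategy, with each observed payoff reweighted by the ratio of the two strategies' action probabilities. This toy example uses a one-shot, two-action game with noisy payoffs; the names (`sample_episode`, `evaluate_off_policy`) and the payoff setup are hypothetical, and the sketch does not include the paper's variance-reduction mechanisms or the full extensive-game treatment.

```python
import random

def sample_episode(strategy, rng):
    """Play one episode with `strategy` (a dict action -> probability)
    and return the action taken together with a noisy payoff."""
    action = rng.choices(list(strategy), weights=list(strategy.values()))[0]
    base = {"a": 1.0, "b": 0.0}[action]          # true expected payoff per action
    return action, base + rng.gauss(0.0, 1.0)    # stochastic outcome

def evaluate_off_policy(play, target, n, seed=0):
    """Estimate the value of `target` while actually playing `play`,
    weighting each observed payoff by target(a) / play(a)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        action, payoff = sample_episode(play, rng)
        weight = target[action] / play[action]   # importance weight
        total += weight * payoff
    return total / n

play   = {"a": 0.5, "b": 0.5}    # strategy actually played
target = {"a": 0.9, "b": 0.1}    # strategy we want to evaluate
print(evaluate_off_policy(play, target, n=100_000))  # converges to ~0.9
```

The same weighted average can be computed for many target strategies from a single batch of episodes, which is the "evaluate many strategies while playing one" setting the abstract describes; in an extensive game the weight becomes a product of action-probability ratios along the observed history, and the high variance of such products is what the paper's two mechanisms address.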

Citation

M. Bowling, M. Johanson, N. Burch, and D. Szafron. "Strategy Evaluation in Extensive Games with Importance Sampling". In Proceedings of the International Conference on Machine Learning (ICML), Andrew McCallum and Sam Roweis (eds.), pp. 72–79, July 2008.

Category: In Conference

BibTeX

@inproceedings{Bowling+al:ICML08,
  author = {Michael Bowling and Michael Johanson and Neil Burch and Duane
    Szafron},
  title = {Strategy Evaluation in Extensive Games with Importance Sampling},
  editor = {Andrew McCallum and Sam Roweis},
  pages = {72--79},
  booktitle = {International Conference on Machine Learning (ICML)},
  year = {2008}
}

Last Updated: August 19, 2009
Submitted by Michael Johanson
