
Eligibility Traces for Off-Policy Policy Evaluation

Full Text: PSS-00.pdf

Eligibility traces have been shown to speed reinforcement learning, to make it more robust to hidden states, and to provide a link between Monte Carlo and temporal-difference methods. Here we generalize eligibility traces to off-policy learning, in which one learns about a policy different from the policy that generates the data. Off-policy methods can greatly multiply learning, as many policies can be learned about from the same data stream, and have been identified as particularly useful for learning about subgoals and temporally extended macro-actions. In this paper we consider the off-policy version of the policy evaluation problem, for which only one eligibility trace algorithm is known, a Monte Carlo method. We analyze and compare this and four new eligibility trace algorithms, emphasizing their relationships to the classical statistical technique known as importance sampling. Our main results are 1) to establish the consistency and bias properties of the new methods and 2) to empirically rank the new methods, showing improvement over one-step and Monte Carlo methods. Our results are restricted to model-free, table-lookup methods and to offline updating (at the end of each episode) although several of the algorithms could be applied more generally.
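As an illustration of the importance-sampling idea the paper builds on, the sketch below implements a minimal table-lookup, offline (end-of-episode) Monte Carlo off-policy evaluator with ordinary importance sampling. This is only an assumed, simplified baseline in the spirit of the Monte Carlo method mentioned in the abstract, not the paper's new eligibility trace algorithms; the function name, trajectory format, and policy representation are hypothetical choices made for the example.

```python
import numpy as np

def off_policy_mc_evaluation(episodes, pi, b, gamma=0.99, n_states=10):
    """Ordinary importance-sampling Monte Carlo evaluation (illustrative sketch).

    episodes: list of trajectories, each a list of (state, action, reward) tuples,
              where reward is the reward received after taking action in state.
    pi:       target-policy probabilities, pi[s][a]
    b:        behavior-policy probabilities, b[s][a] (must be > 0 wherever pi is)
    Returns an estimate of the target policy's state values v_pi.
    """
    returns_sum = np.zeros(n_states)
    returns_cnt = np.zeros(n_states)

    for episode in episodes:
        G = 0.0      # discounted return from time t to the end of the episode
        rho = 1.0    # importance-sampling ratio over the remainder of the episode
        # Process the episode backwards so G and rho can be built incrementally.
        for (s, a, r) in reversed(episode):
            G = r + gamma * G                 # return starting at time t
            rho *= pi[s][a] / b[s][a]         # product of pi/b from t onward
            returns_sum[s] += rho * G         # weighted return credited to s
            returns_cnt[s] += 1.0

    # Ordinary importance sampling: average the weighted returns per state
    # (states never visited keep an estimate of 0).
    return np.divide(returns_sum, returns_cnt,
                     out=np.zeros_like(returns_sum), where=returns_cnt > 0)
```

Because updates are applied only after each episode ends and values are stored per state in a table, this matches the model-free, table-lookup, offline setting described above; the paper's contribution is to do such corrections incrementally with eligibility traces rather than whole-episode products.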

Citation

D. Precup, R. S. Sutton, and S. Singh. "Eligibility Traces for Off-Policy Policy Evaluation." In Proceedings of the Seventeenth International Conference on Machine Learning (ICML 2000), Stanford University, pp. 759–766, 2000.

Keywords: eligibility traces, off-policy learning, policy evaluation, importance sampling, reinforcement learning
Category: In Conference

BibTeX

@inproceedings{Precup+al:ICML00,
  author    = {Doina Precup and Richard S. Sutton and Satinder Singh},
  title     = {Eligibility Traces for Off-Policy Policy Evaluation},
  booktitle = {Proceedings of the Seventeenth International Conference on Machine Learning (ICML 2000)},
  pages     = {759--766},
  year      = {2000},
}

Last Updated: May 31, 2007
Submitted by Stuart H. Johnson
