
Reinforcement Learning With Replacing Eligibility Traces

The eligibility trace is one of the basic mechanisms used in reinforcement learning to handle delayed reward. In this paper we introduce a new kind of eligibility trace, the replacing trace, analyze it theoretically, and show that it results in faster, more reliable learning than the conventional trace. Both kinds of trace assign credit to prior events according to how recently they occurred, but only the conventional trace gives greater credit to repeated events. Our analysis is for conventional and replace-trace versions of the offline TD(1) algorithm applied to undiscounted absorbing Markov chains. First, we show that these methods converge under repeated presentations of the training set to the same predictions as two well-known Monte Carlo methods. We then analyze the relative efficiency of the two Monte Carlo methods. We show that the method corresponding to conventional TD is biased, whereas the method corresponding to replace-trace TD is unbiased. In addition, we show that the method corresponding to replacing traces is closely related to the maximum likelihood solution for these tasks, and that its mean squared error is always lower in the long run. Computational results confirm these analyses and show that they are applicable more generally. In particular, we show that replacing traces significantly improve performance and reduce parameter sensitivity on the Mountain-Car task, a full reinforcement-learning problem with a continuous state space, when using a feature-based function approximator.
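The distinction between the two traces described above can be made concrete with a small sketch. The following is not code from the paper (which analyzes the offline TD(1) case); it is a minimal, illustrative tabular TD(lambda) update in Python in which only the treatment of the visited state's trace differs. All names (V, alpha, gamma, lam, transitions) are illustrative assumptions.

import numpy as np

def td_lambda_episode(transitions, V, alpha=0.1, gamma=1.0, lam=0.9,
                      replacing=True):
    """Update value estimates V in place over one episode.

    transitions: list of (state, reward, next_state) tuples, where
    next_state is None at the absorbing terminal state.
    """
    e = np.zeros_like(V)                      # eligibility traces, one per state
    for s, r, s_next in transitions:
        v_next = 0.0 if s_next is None else V[s_next]
        delta = r + gamma * v_next - V[s]     # TD error
        e *= gamma * lam                      # decay all traces by recency
        if replacing:
            e[s] = 1.0                        # replacing trace: reset to 1 on revisit
        else:
            e[s] += 1.0                       # conventional trace: accumulate on revisit
        V += alpha * delta * e                # credit prior states through their traces
    return V

With replacing=False, a state visited repeatedly builds up a trace larger than 1 and so receives extra credit; with replacing=True, its trace is simply reset to 1, which is the difference the abstract refers to.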

Citation

S. Singh and R. S. Sutton. "Reinforcement Learning With Replacing Eligibility Traces". Machine Learning Journal (MLJ), 22, pp. 123-158, January 1996.

Keywords: squared error, Mountain-Car, machine learning
Category: In Journal

BibTeX

@article{Singh+Sutton:MLJ96,
  author  = {Satinder Singh and Richard S. Sutton},
  title   = {Reinforcement Learning With Replacing Eligibility Traces},
  journal = {Machine Learning Journal (MLJ)},
  volume  = {22},
  pages   = {123--158},
  year    = {1996}
}

Last Updated: April 24, 2007
Submitted by Nelson Loyola
