
A Greedy Approach to Adapting the Trace Parameter for Temporal Difference Learning

One of the main obstacles to the successful application of reinforcement learning methods is the parameter sensitivity of our core learning algorithms. In many large-scale applications, online computation and the use of function approximation are key strategies for scaling up reinforcement learning algorithms. In this setting, we have effective and reasonably well understood algorithms for adapting the learning-rate parameter online during learning. For the temporal-difference learning algorithms we study here, there is another parameter, λ, that similarly impacts learning speed and stability in practice. Unfortunately, unlike the learning-rate parameter, λ parametrizes the objective function that temporal-difference methods optimize. Different choices of λ produce different fixed-point solutions, so adapting λ online and characterizing the resulting optimization is substantially more complex than adapting the learning-rate parameter. No existing method achieves (1) incremental updating, (2) compatibility with function approximation, and (3) stability of learning under both on- and off-policy sampling. In this paper we contribute a novel objective function for optimizing λ as a function of state rather than time. We derive a new incremental, linear-complexity λ-adaptation algorithm that requires neither offline batch updating nor access to a model of the world, and we present a suite of experiments illustrating the practicality of the new algorithm in three different settings. Taken together, our contributions represent a concrete step towards black-box application of temporal-difference learning methods in real-world problems.
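For context on the role of λ, the sketch below shows standard linear TD(λ) with accumulating eligibility traces where the trace parameter is allowed to vary with state, i.e. λ(s). This is a generic illustration of the setting the paper addresses, not the adaptation algorithm contributed in the paper; the environment interface env_step, the feature map phi, the placeholder λ function lam, and the learning rate alpha are assumptions for the example, and the convention of evaluating λ at the current state is one of several used in the literature.

    import numpy as np

    def td_lambda_state_dependent(env_step, phi, lam, alpha, gamma,
                                  num_features, num_steps):
        """Generic linear TD(lambda) with a state-dependent trace parameter.

        env_step() is assumed to return (s, r, s_next, done) transitions,
        phi(s) maps a state to a feature vector, and lam(s) gives the trace
        parameter used at state s. Illustrative sketch only.
        """
        w = np.zeros(num_features)   # linear value-function weights
        z = np.zeros(num_features)   # eligibility trace vector

        for _ in range(num_steps):
            s, r, s_next, done = env_step()
            x, x_next = phi(s), phi(s_next)

            # TD error; bootstrap only if the episode has not terminated
            v_next = 0.0 if done else w @ x_next
            delta = r + gamma * v_next - w @ x

            # Accumulating trace, decayed by the state-dependent lambda
            z = gamma * lam(s) * z + x

            # TD(lambda) weight update
            w += alpha * delta * z

            if done:
                z[:] = 0.0           # reset the trace at episode boundaries

        return w

With a constant lam(s) this reduces to ordinary TD(λ); the paper's contribution concerns how to choose λ(s) online rather than fixing it in advance.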

Citation

M. White, A. White. "A Greedy Approach to Adapting the Trace Parameter for Temporal Difference Learning". Joint Conference on Autonomous Agents and Multi-Agent Systems (AAMAS), (ed: Catholijn M. Jonker, Stacy Marsella, John Thangarajah, Karl Tuyls), pp 557-565, May 2016.

Keywords: Reinforcement learning, temporal difference learning, off-policy learning
Category: In Conference
Web Links: ACM Digital Library

BibTeX

@inproceedings{White+White:AAMAS16,
  author    = {Martha White and Adam White},
  title     = {A Greedy Approach to Adapting the Trace Parameter for Temporal
    Difference Learning},
  editor    = {Catholijn M. Jonker and Stacy Marsella and John Thangarajah and Karl Tuyls},
  pages     = {557--565},
  booktitle = {Joint Conference on Autonomous Agents and Multi-Agent Systems
    (AAMAS)},
  year      = {2016},
}
