
Investigating Practical Linear Temporal Difference Learning

Off-policy reinforcement learning has many applications, including learning from demonstration, learning multiple goal-seeking policies in parallel, and representing predictive knowledge. Recently there has been a proliferation of new policy-evaluation algorithms that fill a longstanding algorithmic void in reinforcement learning: combining robustness to off-policy sampling, function approximation, linear complexity, and temporal difference (TD) updates. This paper contains two main contributions. First, we derive two new hybrid TD policy-evaluation algorithms, which fill a gap in this collection of algorithms. Second, we perform an empirical comparison to elicit which of these new linear TD methods should be preferred in different situations, and make concrete suggestions about practical use.
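To illustrate the class of methods the paper studies, the sketch below shows a generic linear off-policy TD(lambda) policy-evaluation update with per-decision importance sampling. This is not the paper's hybrid algorithms, just a minimal illustration of a linear-complexity TD update; all names and hyperparameter values (alpha, gamma, lam, n_features) are placeholders.

```python
# Minimal sketch of a linear off-policy TD(lambda) update (illustrative only;
# not reproduced from the paper's hybrid TD algorithms).
import numpy as np

def td_lambda_step(w, z, x, x_next, reward, rho,
                   alpha=0.01, gamma=0.99, lam=0.9):
    """One off-policy TD(lambda) update with linear value estimates.

    w      : weight vector (value estimate is w @ x)
    z      : eligibility trace vector
    x      : feature vector for the current state
    x_next : feature vector for the next state
    reward : observed reward
    rho    : importance-sampling ratio pi(a|s) / mu(a|s)
    """
    delta = reward + gamma * (w @ x_next) - (w @ x)  # TD error
    z = rho * (gamma * lam * z + x)                  # accumulating trace
    w = w + alpha * delta * z                        # O(n) linear update
    return w, z

# Toy usage with random features, just to show the shapes involved.
n_features = 8
w = np.zeros(n_features)
z = np.zeros(n_features)
for _ in range(100):
    x, x_next = np.random.rand(n_features), np.random.rand(n_features)
    w, z = td_lambda_step(w, z, x, x_next,
                          reward=np.random.randn(), rho=1.0)
```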

Citation

A. White, M. White. "Investigating Practical Linear Temporal Difference Learning". Joint Conference on Autonomous Agents and Multi-Agent Systems (AAMAS), (ed: Catholijn M. Jonker, Stacy Marsella, John Thangarajah, Karl Tuyls), pp 494-502, May 2016.

Keywords: Reinforcement learning, temporal difference learning, off-policy learning
Category: In Conference
Web Links: ACM Digital Library

BibTeX

@inproceedings{White+White:AAMAS16,
  author = {Adam White and Martha White},
  title = {Investigating Practical Linear Temporal Difference Learning},
  editor = {Catholijn M. Jonker and Stacy Marsella and John Thangarajah and Karl Tuyls},
  pages = {494-502},
  booktitle = {Joint Conference on Autonomous Agents and Multi-Agent Systems
    (AAMAS)},
  year = {2016},
}

