
Accelerated Gradient Temporal Difference Learning

Full Text: 14460-66919-1-PB.pdf

The family of temporal difference (TD) methods spans a spectrum from computationally frugal linear methods, such as TD(λ), to data-efficient least-squares methods. Least-squares methods make the best use of available data by directly computing the TD solution, and thus avoid tuning a typically highly sensitive learning-rate parameter, but they require quadratic computation and storage. Recent algorithmic developments have yielded several sub-quadratic methods that approximate the least-squares TD solution, but incur bias. In this paper, we propose a new family of accelerated gradient TD (ATD) methods that (1) provide data-efficiency benefits similar to least-squares methods at a fraction of the computation and storage, (2) significantly reduce parameter sensitivity compared to linear TD methods, and (3) are asymptotically unbiased. We illustrate these claims with a proof of convergence in expectation and experiments on several benchmark domains and a large-scale industrial energy-allocation domain.
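For intuition, the idea the abstract describes can be sketched in a few lines of Python: precondition the ordinary TD update with a rank-restricted pseudo-inverse of a running estimate of the least-squares TD matrix, plus a small identity term. This is a minimal illustrative sketch, not the paper's algorithm: the rank k, the regularizer eta, the step size alpha, and the per-step full SVD are assumptions made for clarity (a sub-quadratic method would maintain the low-rank approximation incrementally rather than recomputing it).

import numpy as np

def atd_sketch(transitions, d, k=10, eta=0.01, gamma=0.99, alpha=0.1):
    """Rank-k preconditioned TD(0) over (phi, r, phi_next) feature tuples.

    Illustrative sketch only; k, eta, alpha, and the full SVD below are
    assumptions, not the paper's exact construction.
    """
    w = np.zeros(d)
    A_hat = np.zeros((d, d))  # running estimate of E[phi (phi - gamma phi')^T]
    for t, (phi, r, phi_next) in enumerate(transitions, start=1):
        delta = r + gamma * phi_next @ w - phi @ w          # TD error
        A_hat += (np.outer(phi, phi - gamma * phi_next) - A_hat) / t
        # Rank-k pseudo-inverse of A_hat (full SVD here for clarity; an
        # incremental low-rank update would keep this sub-quadratic).
        U, s, Vt = np.linalg.svd(A_hat)
        s_inv = np.where(s[:k] > 1e-8, 1.0 / s[:k], 0.0)
        A_pinv = Vt[:k].T @ np.diag(s_inv) @ U[:, :k].T
        # Preconditioned update with a small identity term for stability.
        w += (alpha * A_pinv + eta * np.eye(d)) @ phi * delta
    return w

Intuitively, the pseudo-inverse term gives least-squares-like data efficiency in the directions the rank-k approximation captures, while the eta * I term lets the update fall back to an ordinary gradient TD step in the remaining directions, which is consistent with the abstract's claim of reduced bias relative to purely low-rank approximations.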

Citation

Y. Pan, A. White, M. White. "Accelerated Gradient Temporal Difference Learning". National Conference on Artificial Intelligence (AAAI), San Francisco, USA (eds: Satinder P. Singh and Shaul Markovitch), pp. 2464-2470, February 2017.

Keywords: Reinforcement learning, temporal difference learning, least squares, prediction
Category: In Conference
Web Links: AAAI

BibTeX

@inproceedings{Pan+al:AAAI17,
  author    = {Yangchen Pan and Adam White and Martha White},
  title     = {Accelerated Gradient Temporal Difference Learning},
  editor    = {Satinder P. Singh and Shaul Markovitch},
  pages     = {2464--2470},
  booktitle = {National Conference on Artificial Intelligence (AAAI)},
  year      = {2017},
}

