
Using Predictive Representations to Improve Generalization in Reinforcement Learning

The predictive representations hypothesis holds that particularly good generalization will result from representing the state of the world in terms of predictions about possible future experience. This hypothesis has been a central motivation behind recent research in, for example, predictive state representations (PSRs) and temporal-difference (TD) networks. In this paper we present the first explicit investigation of this hypothesis. We show in a reinforcement-learning example (a grid-world navigation task) that a predictive representation in tabular form can learn much faster than either a tabular explicit-state representation or a tabular history-based method.
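The sketch below (Python) is a minimal illustration of the representational idea described in the abstract, not the paper's actual experiment: the agent's state is identified by the answers to a small set of predictive tests (here, one-step "would this action hit a wall?" predictions), and ordinary tabular Q-learning is run over that identification, so cells with identical local predictions share a table entry. The grid layout, the choice of tests, and all learning parameters are illustrative assumptions; the paper itself uses richer, multi-step predictive tests.

# Minimal sketch of a tabular predictive representation (illustrative only;
# grid, tests, and parameters are assumptions, not taken from the paper).
import random
from collections import defaultdict

GRID = [
    "#########",
    "#.......#",
    "#.###.#.#",
    "#.#...#.#",
    "#...#...#",
    "#########",
]
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right
GOAL = (1, 7)

def is_wall(r, c):
    return GRID[r][c] == "#"

def predictive_state(cell):
    """Identify a cell by predictions of what each action would observe:
    1 if moving that way hits a wall, else 0. Cells with the same local
    predictions are aliased to the same representation (the source of
    generalization in this toy version)."""
    r, c = cell
    return tuple(int(is_wall(r + dr, c + dc)) for dr, dc in ACTIONS)

def step(cell, a):
    dr, dc = ACTIONS[a]
    nr, nc = cell[0] + dr, cell[1] + dc
    if is_wall(nr, nc):
        nr, nc = cell  # bumping into a wall leaves the agent in place
    reward = 1.0 if (nr, nc) == GOAL else 0.0
    return (nr, nc), reward, (nr, nc) == GOAL

# Tabular Q-learning over the predictive representation.
Q = defaultdict(float)
alpha, gamma, epsilon = 0.1, 0.95, 0.1
for episode in range(500):
    cell = (4, 1)
    for t in range(200):
        s = predictive_state(cell)
        if random.random() < epsilon:
            a = random.randrange(4)
        else:
            a = max(range(4), key=lambda b: Q[(s, b)])
        cell, r, done = step(cell, a)
        s2 = predictive_state(cell)
        target = r + (0.0 if done else gamma * max(Q[(s2, b)] for b in range(4)))
        Q[(s, a)] += alpha * (target - Q[(s, a)])
        if done:
            break

# Show how many underlying cells collapse onto shared predictive states.
open_cells = [(r, c) for r in range(len(GRID)) for c in range(len(GRID[0]))
              if not is_wall(r, c)]
print(len(open_cells), "cells map to",
      len({predictive_state(c) for c in open_cells}), "predictive states")

Because many cells share a representation, experience in one cell updates the value estimates used in all cells that make the same predictions; this is the kind of generalization the hypothesis attributes to predictive representations, though with only one-step tests the aliasing is far coarser than in the paper's setup.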

Citation

E. Rafols, M. Ring, R. Sutton, B. Tanner. "Using Predictive Representations to Improve Generalization in Reinforcement Learning". International Joint Conference on Artificial Intelligence (IJCAI), Edinburgh, Scotland, August 2005.

Keywords: PSRs, hypothesis, tabular, machine learning
Category: In Conference

BibTeX

@inproceedings{Rafols+al:IJCAI05,
  author = {Eddie J. Rafols and Mark B. Ring and Richard S. Sutton and Brian
    Tanner},
  title = {Using Predictive Representations to Improve Generalization in
    Reinforcement Learning},
  booktitle = {International Joint Conference on Artificial Intelligence
    (IJCAI)},
  address = {Edinburgh, Scotland},
  month = {August},
  year = 2005,
}

