
Model-Based Reinforcement Learning With An Approximate, Learned Model

Full Text: kuvayev-sutton.pdf

Model-based reinforcement learning, in which a model of the environment's dynamics is learned and used to supplement direct learning from experience, has been proposed as a general approach to learning and planning. We present the first experiments with this idea in which the model of the environment's dynamics is both approximate and learned online. These experiments involve the Mountain Car task, which requires approximation of both the value function and the model because it has continuous state variables. We used models of the simplest possible form, state-aggregation or grid models, and CMACs to represent the value function. We find that model-based methods do indeed perform better than model-free reinforcement learning on this task, but only slightly.
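For illustration, the following is a minimal Dyna-style sketch of the setup the abstract describes: a grid (state-aggregation) model of Mountain Car learned online and used for planning alongside direct reinforcement learning. It is a sketch under stated assumptions, not the paper's implementation. The discretization bounds are the standard Mountain Car state ranges; a plain table over grid cells stands in for the CMAC value function the paper uses; and names such as DynaAgent, N_BINS, and planning_steps are illustrative, not from the paper.

    import random
    from collections import defaultdict

    # Hypothetical grid discretization for Mountain Car's two continuous
    # state variables (position in [-1.2, 0.6], velocity in [-0.07, 0.07]).
    N_BINS = 8
    ACTIONS = (0, 1, 2)  # reverse thrust, coast, forward thrust

    def aggregate(position, velocity):
        """Map a continuous state to a grid cell (state aggregation)."""
        p = max(0, min(N_BINS - 1, int((position + 1.2) / 1.8 * N_BINS)))
        v = max(0, min(N_BINS - 1, int((velocity + 0.07) / 0.14 * N_BINS)))
        return (p, v)

    class DynaAgent:
        """Dyna-style agent: Q-learning from real experience plus planning
        from a learned, approximate grid model (one predicted reward and
        next cell per state-action pair). A table over grid cells is
        substituted here for the paper's CMAC value function."""

        def __init__(self, alpha=0.5, gamma=1.0, epsilon=0.1, planning_steps=10):
            self.q = defaultdict(float)   # (cell, action) -> value estimate
            self.model = {}               # (cell, action) -> (reward, next cell)
            self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
            self.planning_steps = planning_steps

        def act(self, cell):
            # Epsilon-greedy action selection over the aggregated state.
            if random.random() < self.epsilon:
                return random.choice(ACTIONS)
            return max(ACTIONS, key=lambda a: self.q[(cell, a)])

        def _backup(self, cell, action, reward, next_cell):
            # One-step Q-learning backup, used for both real and simulated steps.
            target = reward + self.gamma * max(self.q[(next_cell, a)] for a in ACTIONS)
            self.q[(cell, action)] += self.alpha * (target - self.q[(cell, action)])

        def learn(self, cell, action, reward, next_cell):
            # Direct RL update from real experience.
            self._backup(cell, action, reward, next_cell)
            # Update the approximate model online, then plan with
            # simulated experience drawn from it.
            self.model[(cell, action)] = (reward, next_cell)
            for _ in range(self.planning_steps):
                (c, a), (r, nc) = random.choice(list(self.model.items()))
                self._backup(c, a, r, nc)

The planning_steps parameter controls how much extra updating the learned model supplies per real step; setting it to zero reduces the agent to model-free learning, which is the comparison the abstract reports.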

Citation

L. Kuvayev and R. S. Sutton. "Model-Based Reinforcement Learning With An Approximate, Learned Model". Yale Workshop on Adaptive and Learning Systems, pp. 101-105, January 1996.

Keywords: CMAC, state aggregation, grid models, machine learning
Category: In Workshop

BibTeX

@inproceedings{Kuvayev+Sutton:YaleWorkshoponAdaptiveandLearningSystems96,
  author = {Leonid Kuvayev and Richard S. Sutton},
  title = {Model-Based Reinforcement Learning With An Approximate, Learned
    Model},
  booktitle = {Yale Workshop on Adaptive and Learning Systems},
  pages = {101--105},
  year = {1996},
}

Last Updated: May 31, 2007
Submitted by Stuart H. Johnson
