Experiments With Reinforcement Learning in Problems With Continuous State and Action Spaces

Full Text: SSR-98.pdf

A key element in the solution of reinforcement learning problems is the value function. The purpose of this function is to measure the long-term utility or value of any given state, and it is important because an agent can use it to decide what to do next. A common problem when reinforcement learning is applied to systems with continuous state and action spaces is that the value function must operate over a domain of real-valued variables, which means that it must be able to represent the value of infinitely many state-action pairs. For this reason, function approximators are used to represent the value function when a closed-form solution of the optimal policy is not available. In this paper, we extend a previously proposed reinforcement learning algorithm so that it can be used with function approximators that generalize the value of individual experiences across both state and action spaces. In particular, we discuss the benefits of using sparse coarse-coded function approximators to represent value functions and describe in detail three implementations: CMAC, instance-based, and case-based. Additionally, we discuss how function approximators with different degrees of resolution in different regions of the state and action spaces may influence the performance and learning efficiency of the agent. We propose a simple and modular technique for implementing function approximators with non-uniform degrees of resolution, so that they can represent the value function with higher accuracy in important regions of the state and action spaces. We performed extensive experiments on the double-integrator and pendulum swing-up systems to demonstrate the proposed ideas.
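The sparse coarse-coding idea at the center of the abstract is concrete enough to sketch. Below is a minimal CMAC (tile coding) value-function approximator, written in Python, that takes the concatenated state-action vector as its input; the class, its parameters, and the double-integrator-style dimensions in the usage example are illustrative assumptions, not code from the paper.

import numpy as np

class CMAC:
    """Sparse coarse-coded (tile coding) approximator over the joint
    state-action space. Hypothetical sketch; names and parameters are
    illustrative, not taken from the paper."""

    def __init__(self, n_tilings, tiles_per_dim, lows, highs, alpha=0.1, seed=0):
        self.n_tilings = n_tilings
        self.tiles_per_dim = np.asarray(tiles_per_dim)   # tiles along each input dimension
        self.lows = np.asarray(lows, dtype=float)
        self.highs = np.asarray(highs, dtype=float)
        self.alpha = alpha / n_tilings                   # step size split across tilings
        rng = np.random.default_rng(seed)
        # Each tiling is shifted by a random fraction of one tile width.
        self.offsets = rng.uniform(0.0, 1.0, size=(n_tilings, len(self.lows)))
        self.weights = np.zeros((n_tilings, *self.tiles_per_dim))

    def _active_tiles(self, x):
        """Index of the single active tile in each tiling for input x = (s, a)."""
        scaled = (np.asarray(x, dtype=float) - self.lows) / (self.highs - self.lows)
        tiles = []
        for t in range(self.n_tilings):
            cells = np.floor(scaled * self.tiles_per_dim + self.offsets[t]).astype(int)
            cells = np.clip(cells, 0, self.tiles_per_dim - 1)
            tiles.append((t, *cells))
        return tiles

    def value(self, x):
        """Q(s, a): the sum of the weights of the active tiles."""
        return sum(self.weights[i] for i in self._active_tiles(x))

    def update(self, x, target):
        """Move the prediction toward a target, e.g. a TD target."""
        error = target - self.value(x)
        for i in self._active_tiles(x):
            self.weights[i] += self.alpha * error

# Usage: a 2-D state (position, velocity) plus a 1-D action, loosely in the
# style of the double-integrator task; the ranges here are arbitrary.
q = CMAC(n_tilings=8, tiles_per_dim=[8, 8, 4],
         lows=[-1.0, -1.0, -1.0], highs=[1.0, 1.0, 1.0])
q.update([0.2, -0.5, 0.3], target=1.0)
print(q.value([0.2, -0.5, 0.3]))

Because each of the overlapping, randomly offset tilings activates exactly one tile per input, nearby state-action pairs share tiles and therefore generalize to one another. The non-uniform-resolution technique proposed in the paper would correspond to using finer tiles in important regions of the input space, which this uniform sketch does not attempt.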

Citation

J. Santamaria, R. Sutton, A. Ram. "Experiments With Reinforcement Learning in Problems With Continuous State and Action Spaces". Adaptive Behavior, 2, pp. 163-218, January 1998.

Keywords: approximators, resolution, double-integrator, machine learning
Category: In Journal

BibTeX

@article{Santamaria+al:AdaptiveBehavior98,
  author  = {Juan C. Santamaria and Richard S. Sutton and Ashwin Ram},
  title   = {Experiments With Reinforcement Learning in Problems With
    Continuous State and Action Spaces},
  journal = {Adaptive Behavior},
  volume  = {2},
  pages   = {163--218},
  year    = {1998}
}
