
Theoretical Results on Reinforcement Learning With Temporally Abstract Options

Full Text: precup98theoretical.pdf

We present new theoretical results on planning within the framework of temporally abstract reinforcement learning (Precup & Sutton, 1997; Sutton, 1995). Temporal abstraction is a key step in any decision-making system that involves planning and prediction. In temporally abstract reinforcement learning, the agent is allowed to choose among 'options', whole courses of action that may be temporally extended, stochastic, and contingent on previous events. Examples of options include closed-loop policies such as picking up an object, as well as primitive actions such as joint torques. Knowledge about the consequences of options is represented by special structures called multi-time models. In this paper we focus on the theory of planning with multi-time models. We define new Bellman equations that are satisfied by sets of multi-time models. As a consequence, multi-time models can be used interchangeably with models of primitive actions in a variety of well-known planning methods, including value iteration, policy improvement, and policy iteration.
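As a rough illustration of the interchangeability result, the following Python sketch runs value iteration in which a multi-time model of a temporally extended option is used side by side with primitive-action models. This is not code from the paper; the toy chain domain, the hand-built option model, and all variable names are illustrative assumptions.

import numpy as np

n_states, gamma = 4, 0.9

# Primitive actions on a 4-state chain: "stay" and "step right"; reward 1 for
# the transition that enters the rightmost state. (Toy domain, assumed.)
P_stay  = np.eye(n_states)
P_right = np.eye(n_states, k=1)
P_right[-1, -1] = 1.0
r_stay  = np.zeros(n_states)
r_right = np.array([0.0, 0.0, 1.0, 0.0])

# A primitive action's multi-time model is simply
# (expected one-step reward, gamma * transition matrix).
models = [(r_stay,  gamma * P_stay),
          (r_right, gamma * P_right)]

# A temporally extended option ("run right until the last state"): its
# multi-time model already folds the discounting gamma^k over its duration k
# from each start state into the reward and state-prediction entries.
k     = np.array([3, 2, 1, 1])                    # option duration from each state
R_opt = np.array([gamma**2, gamma, 1.0, 0.0])     # discounted reward accumulated en route
P_opt = np.zeros((n_states, n_states))
P_opt[:, -1] = gamma ** k
models.append((R_opt, P_opt))

# Value iteration with the option-model Bellman backup:
#   V(s) = max over models o of  r_o(s) + sum_{s'} P_o(s, s') V(s')
V = np.zeros(n_states)
for _ in range(100):
    V = np.max([r + P @ V for r, P in models], axis=0)
print(np.round(V, 3))   # in this toy domain the option leaves the optimal values unchanged

Because the option's model already incorporates the per-step discounting into its reward and state-prediction terms, it drops into the same Bellman backup as the primitive actions, which is the sense in which the two kinds of models are interchangeable.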

Citation

D. Precup, R. S. Sutton, S. Singh. "Theoretical Results on Reinforcement Learning With Temporally Abstract Options". European Conference on Machine Learning (ECML), Chemnitz, Germany, pp. 382–393, January 1998.

Keywords: reinforcement learning, temporal abstraction, options, multi-time models, planning
Category: In Conference

BibTeX

@inproceedings{Precup+al:ECML98,
  author = {Doina Precup and Richard S. Sutton and Satinder Singh},
  title = {Theoretical Results on Reinforcement Learning With Temporally
    Abstract Options},
  booktitle = {European Conference on Machine Learning (ECML)},
  pages = {382--393},
  year = {1998}
}

Last Updated: May 31, 2007
Submitted by Stuart H. Johnson
