
Bounding the Suboptimality of Reusing Subproblems

Full Text: nips98.pdf

We are interested in the problem of determining a course of action to achieve a desired objective in a nondeterministic environment. Markov decision processes (MDPs) provide a framework for representing this action-selection problem, and a number of algorithms learn optimal policies within this formulation. The framework has also been used to study state-space abstraction, problem decomposition, and policy reuse, techniques that sacrifice solution optimality for improved learning speed. In this paper we examine the suboptimality of reusing policies that are solutions to subproblems. This is done within a restricted class of MDPs, namely those in which non-zero reward is received only upon reaching a goal state. We define a subproblem within this class and motivate how reuse of subproblem solutions can speed up learning. The contribution of this paper is the derivation of a tight bound on the loss in optimality incurred by this reuse. We first examine a bound based on Bellman error, which applies to all MDPs but is not tight, and then contribute our own theoretical result, which gives an empirically tight bound on this suboptimality.
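For context, the Bellman-error-based bound referred to above is typically of the following form (as in Williams and Baird's analysis of Bellman residuals); this is stated here only as background, and the exact statement examined in the paper may differ. If a value function $V$ has Bellman residual $\varepsilon = \| \mathcal{T}V - V \|_\infty$ under the optimal backup operator $\mathcal{T}$, and $\pi_V$ is the policy that acts greedily with respect to $V$, then

$$\| V^* - V^{\pi_V} \|_\infty \;\le\; \frac{2\gamma\varepsilon}{1-\gamma},$$

where $\gamma$ is the discount factor. Because the factor $2\gamma/(1-\gamma)$ blows up as $\gamma \to 1$, such a bound is loose for the goal-reward problems considered in the paper, which motivates the tighter, reuse-specific bound contributed there.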

Citation

M. Bowling, M. Veloso. "Bounding the Suboptimality of Reusing Subproblems". International Joint Conference on Artificial Intelligence (IJCAI), pp. 1340-1345, August 1999.

Keywords:  
Category: In Conference

BibTeX

@inproceedings{Bowling+Veloso:IJCAI99,
  author = {Michael Bowling and Manuela Veloso},
  title = {Bounding the Suboptimality of Reusing Subproblems},
  pages = {1340--1345},
  booktitle = {International Joint Conference on Artificial Intelligence
    (IJCAI)},
  year = 1999,
}

