
Continuous-Time Hierarchical Reinforcement Learning

Full Text: icml01.pdf

Hierarchical reinforcement learning (RL) is a general framework which studies how to exploit the structure of actions and tasks to accelerate policy learning in large domains. Prior work in hierarchical RL, such as the MAXQ method, has been limited to the discrete-time discounted reward semi-Markov decision process (SMDP) model. This paper generalizes the MAXQ method to continuous-time discounted and average reward SMDP models. We describe two hierarchical reinforcement learning algorithms, one for each of these SMDP models.
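
For context only (this is not taken from the paper and is not its hierarchical MAXQ decomposition), the continuous-time discounted SMDP setting mentioned in the abstract is commonly written with a continuous-time discount rate beta and a random sojourn time tau. A minimal sketch of the flat Q-learning update in that setting, under the usual SMDP notation, is:

% Sketch of the flat continuous-time discounted SMDP Q-learning update
% (standard SMDP form; the paper's contribution is a hierarchical MAXQ
% generalization, which this does not show). Assumed notation:
%   k(s,a)  lump-sum reward on taking action a in state s
%   c(s,a)  reward rate accrued over the sojourn time \tau
%   \beta   continuous-time discount rate, \alpha learning rate
\[
Q(s,a) \;\leftarrow\; (1-\alpha)\, Q(s,a)
  \;+\; \alpha \left[ k(s,a)
  \;+\; \frac{1 - e^{-\beta\tau}}{\beta}\, c(s,a)
  \;+\; e^{-\beta\tau} \max_{a'} Q(s',a') \right]
\]

In the average reward case, the discounting terms are replaced by the accumulated reward over the sojourn minus an estimated reward rate times tau, as in average reward SMDP Q-learning.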

Citation

M. Ghavamzadeh, S. Mahadevan. "Continuous-Time Hierarchical Reinforcement Learning". International Conference on Machine Learning (ICML), Williams College, pp. 186-193, July 2001.

Keywords:  
Category: In Conference

BibTeX

@inproceedings{Ghavamzadeh+Mahadevan:ICML01,
  author    = {Mohammad Ghavamzadeh and Sridhar Mahadevan},
  title     = {Continuous-Time Hierarchical Reinforcement Learning},
  booktitle = {International Conference on Machine Learning (ICML)},
  pages     = {186--193},
  year      = {2001},
}

Last Updated: June 11, 2007
Submitted by Stuart H. Johnson
