
Macro-Actions in Reinforcement Learning: An Empirical Analysis

Full Text: mcgovern98macroactions.pdf

Several researchers have proposed reinforcement learning methods that obtain advantages in learning by using temporally extended actions, or macro-actions, but none has carefully analyzed what these advantages are. In this paper, we separate and analyze two advantages of using macro-actions in reinforcement learning: the effect on exploratory behavior, independent of learning, and the effect on the speed with which the learning process propagates accurate value information. We empirically measure the separate contributions of these two effects in gridworld and simulated robotic environments. In these environments, both effects were significant, but the effect of value propagation was larger. We also compare the accelerations of value propagation due to macro-actions and eligibility traces in the gridworld environment. Although eligibility traces increased the rate of convergence to the optimal value function compared to learning with macro-actions but without eligibility traces, eligibility traces did not permit the optimal policy to be learned as quickly as it was using macro-actions.
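As a rough illustration of the kind of learning the abstract describes, below is a minimal Python sketch of Q-learning with macro-actions in a toy one-dimensional corridor gridworld. The environment, the macro set, and all names here are hypothetical stand-ins, not the paper's experimental setup; only the backup rule (update Q(s, m) toward the accumulated discounted reward plus gamma^n times the best value at the state where the n-step macro terminates) follows the macro-action Q-learning scheme this line of work uses.

import random

GAMMA, ALPHA, EPSILON = 0.9, 0.1, 0.2
N = 10                                    # corridor cells 0..N-1, goal at N-1
MACROS = [[-1], [+1], [+1, +1, +1]]       # two primitive moves plus one 3-step macro

def step(s, a):
    # One primitive transition: reward 1.0 on reaching the goal, else 0.0.
    s2 = max(0, min(N - 1, s + a))
    return s2, (1.0 if s2 == N - 1 else 0.0), s2 == N - 1

def run_macro(s, macro):
    # Execute the whole action sequence open-loop, accumulating discounted reward.
    total, disc, done = 0.0, 1.0, False
    for a in macro:
        s, r, done = step(s, a)
        total += disc * r
        disc *= GAMMA                     # disc ends up as GAMMA ** duration
        if done:
            break
    return s, total, disc, done

Q = [[0.0] * len(MACROS) for _ in range(N)]

for episode in range(200):
    s, done = 0, False
    while not done:
        # Epsilon-greedy choice among macro-actions.
        m = (random.randrange(len(MACROS)) if random.random() < EPSILON
             else max(range(len(MACROS)), key=lambda k: Q[s][k]))
        s2, r_acc, disc, done = run_macro(s, MACROS[m])
        target = r_acc if done else r_acc + disc * max(Q[s2])
        Q[s][m] += ALPHA * (target - Q[s][m])   # the macro-action Q backup
        s = s2

Because the backup for the three-step macro moves value information three cells at a time, reward propagates back from the goal faster than it would under one-step primitive backups alone; that acceleration is the value-propagation effect the paper isolates and measures.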

Citation

A. McGovern and R. S. Sutton. "Macro-Actions in Reinforcement Learning: An Empirical Analysis". Technical Report, University of Massachusetts, Amherst, January 1998.

Keywords: macro-actions, eligibility traces, robotic environments, machine learning
Category: Technical Report

BibTeX

@techreport{McGovern+Sutton:98,
  author      = {Amy McGovern and Richard S. Sutton},
  title       = {Macro-Actions in Reinforcement Learning: An Empirical Analysis},
  institution = {University of Massachusetts, Amherst},
  year        = 1998,
}

Last Updated: May 31, 2007
Submitted by Stuart H. Johnson
