
Focus of Attention in Reinforcement Learning

Full Text: FocusAttn-LiBulitkoGreiner.2007.pdf

Classification-based reinforcement learning (RL) methods have recently been proposed as an alternative to traditional value-function-based methods. These methods use a classifier to represent a policy: the input (features) to the classifier is the state, and the output (class label) for that state is the desired action. It is well known in the reinforcement-learning community that focusing on more important states can lead to improved performance. In this paper, we investigate the idea of focused learning in the context of classification-based RL. Specifically, we define a useful notion of state importance, which we use to prove rigorous bounds on policy loss. Furthermore, we show that a classification-based RL agent may behave arbitrarily poorly if it treats all states as equally important.
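The abstract's core idea of representing a policy with a classifier can be sketched as follows. This is an illustrative toy, not the paper's method: the nearest-centroid classifier, the feature vectors, and all names (`ClassifierPolicy`, `fit`, `act`) are assumptions made for the example.

```python
# Sketch of a classification-based policy: a supervised classifier maps
# state features (input) to an action (class label). Illustrative only;
# the paper's actual classifier and features may differ.
import numpy as np

class ClassifierPolicy:
    """Policy represented by a nearest-centroid classifier over states."""

    def __init__(self):
        self.centroids = {}  # action -> mean feature vector of its labeled states

    def fit(self, states, actions):
        # Supervised training data: states labeled with their desired actions.
        states = np.asarray(states, dtype=float)
        for a in set(actions):
            mask = [ai == a for ai in actions]
            self.centroids[a] = states[mask].mean(axis=0)

    def act(self, state):
        # "Classify" the state: the predicted class label is the action.
        state = np.asarray(state, dtype=float)
        return min(self.centroids,
                   key=lambda a: np.linalg.norm(state - self.centroids[a]))

# Two labeled training states; a new state is classified to pick an action.
policy = ClassifierPolicy()
policy.fit([[0.0, 0.0], [1.0, 1.0]], [0, 1])
print(policy.act([0.9, 0.8]))  # nearest centroid belongs to action 1
```

The paper's contribution concerns how such a classifier is trained: weighting training states by an importance measure rather than treating them uniformly.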

Citation

L. Li, V. Bulitko, R. Greiner. "Focus of Attention in Reinforcement Learning". Journal of Universal Computer Science, 13(9), pp 24, October 2007.

Keywords: machine learning, reinforcement learning, AICML, focus of attention
Category: In Journal

BibTeX

@article{Li+al:J.UCS07,
  author  = {Lihong Li and Vadim Bulitko and Russ Greiner},
  title   = {Focus of Attention in Reinforcement Learning},
  journal = {Journal of Universal Computer Science},
  volume  = {13},
  number  = {9},
  pages   = {24},
  year    = {2007}
}

Last Updated: November 13, 2007
Submitted by Russ Greiner
