
Planning with Expectation Models

Full Text: 0506.pdf

Distribution and sample models are two popular model choices in model-based reinforcement learning (MBRL). However, learning these models can be intractable, particularly when the state and action spaces are large. Expectation models, on the other hand, are easier to learn due to their compactness and have also been widely used for deterministic environments. For stochastic environments, it is not obvious how expectation models can be used for planning, because they only partially characterize the next-state distribution. In this paper, we propose a sound way of using approximate expectation models for MBRL. In particular, we 1) show that planning with an expectation model is equivalent to planning with a distribution model if the state value function is linear in state features, 2) analyze two common parametrization choices for approximating the expectation: linear and non-linear expectation models, 3) propose a sound model-based policy evaluation algorithm and present its convergence results, and 4) empirically demonstrate the effectiveness of the proposed planning algorithm.
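The equivalence in point 1) follows from linearity of expectation. A short derivation (the notation here is illustrative, not necessarily the paper's): with a linear value function v-hat(s) = w^T phi(s), the one-step planning target satisfies

    \mathbb{E}\big[R + \gamma \hat{v}(S') \mid S = s\big]
        = \mathbb{E}[R \mid S = s] + \gamma\, \mathbf{w}^\top \mathbb{E}\big[\boldsymbol{\phi}(S') \mid S = s\big],

so the expected reward and the expected next feature vector, which are exactly what an expectation model predicts, fully determine the target; no higher moments of the next-state distribution are needed.

As a concrete illustration, below is a minimal Python sketch of a TD(0)-style planning update driven only by a linear expectation model. All names (F, b, planning_step) and the stand-in model values are assumptions for illustration, not the authors' exact algorithm.

    import numpy as np

    rng = np.random.default_rng(0)
    d, gamma, alpha = 8, 0.9, 0.1     # feature dimension, discount, step size (assumed)

    # Linear expectation model (stand-in values; in practice F and b are learned):
    #   expected next features:  E[phi(S') | s] ~= F @ phi(s)
    #   expected reward:         E[R | s]       ~= b @ phi(s)
    F = 0.9 * np.eye(d)
    b = rng.normal(scale=0.1, size=d)
    w = np.zeros(d)                   # value weights: v_hat(s) = w @ phi(s)

    def planning_step(phi, w):
        """One TD(0)-style planning update using the expectation model.
        Because v_hat is linear, w @ (F @ phi) equals the expected value of
        the next state, so the full next-state distribution is never needed."""
        delta = b @ phi + gamma * w @ (F @ phi) - w @ phi
        return w + alpha * delta * phi

    # Planning sweep over feature vectors of hypothetical sampled states.
    for _ in range(1000):
        w = planning_step(rng.normal(size=d), w)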

Citation

Y. Wan, M. Zaheer, R. Sutton, A. White, M. White. "Planning with Expectation Models". International Joint Conference on Artificial Intelligence (IJCAI), (ed: Sarit Kraus), pp. 3649–3655, August 2019.

Keywords: Machine Learning, Reinforcement Learning, Planning and Scheduling, Planning Algorithms
Category: In Conference
Web Links: IJCAI

BibTeX

@inproceedings{Wan+al:IJCAI19,
  author    = {Yi Wan and Muhammad Zaheer and Richard S. Sutton and Adam White and
    Martha White},
  title     = {Planning with Expectation Models},
  editor    = {Sarit Kraus},
  pages     = {3649--3655},
  booktitle = {International Joint Conference on Artificial Intelligence
    (IJCAI)},
  year      = {2019},
}

Last Updated: February 24, 2020
Submitted by Sabina P
