Learning near-optimal policies with Bellman-residual minimization based fitted policy iteration and a single sample path

We consider batch reinforcement learning problems in continuous-space, expected total discounted-reward Markovian Decision Problems. In contrast to previous theoretical work, we consider the case when the training data consist of a single sample path (trajectory) of some behaviour policy. In particular, we do not assume access to a generative model of the environment. The algorithm studied is fitted policy iteration, where in successive iterations the $Q$-functions of the intermediate policies are obtained by minimizing a novel Bellman-residual type error. PAC-style polynomial bounds are derived on the number of samples needed to guarantee near-optimal performance, where the bound depends on the mixing rate of the trajectory, the smoothness properties of the underlying Markovian Decision Problem, and the approximation power and capacity of the function set used.
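
To make the setting concrete, below is a minimal, hypothetical sketch of fitted policy iteration with a linear $Q$-function, fit by minimizing the plain squared empirical Bellman residual over a single trajectory of (x, a, r, x') transitions. The feature map, action set, and optimization loop are illustrative assumptions, not taken from the paper; the paper's actual criterion is a modified Bellman-residual error (with an auxiliary function that corrects the bias of this naive residual estimate), which the sketch omits.

    import numpy as np

    def features(x, a):
        # Hypothetical feature map for a scalar state and scalar action
        # (an assumption for illustration, not from the paper).
        return np.array([1.0, x, a, x * a])

    def greedy(theta, x, actions):
        # Greedy action of the linear Q-function Q(x, a) = features(x, a) . theta.
        return max(actions, key=lambda a: features(x, a) @ theta)

    def fitted_policy_iteration(trajectory, actions, gamma=0.95,
                                n_iters=20, epochs=200, lr=1e-2):
        # trajectory: list of (x, a, r, x_next) tuples from one behaviour-policy run.
        dim = len(features(0.0, actions[0]))
        theta = np.zeros(dim)                          # parameters of Q_k
        for _ in range(n_iters):
            # pi_{k+1} is greedy with respect to the current Q-function.
            pi = lambda x, th=theta.copy(): greedy(th, x, actions)
            new_theta = theta.copy()                   # parameters of Q_{k+1}
            for _ in range(epochs):
                # Gradient descent on the squared empirical Bellman residual of pi_{k+1}.
                grad = np.zeros(dim)
                for x, a, r, x_next in trajectory:
                    phi = features(x, a)
                    phi_next = features(x_next, pi(x_next))
                    residual = phi @ new_theta - (r + gamma * phi_next @ new_theta)
                    grad += residual * (phi - gamma * phi_next)
                new_theta -= lr * grad / len(trajectory)
            theta = new_theta
        return theta                                   # act greedily w.r.t. the returned theta

A toy call might look like fitted_policy_iteration(data, actions=[-1.0, 1.0]), where data is the list of transitions collected along a single behaviour-policy trajectory; no generative model or restarts are used, matching the single-sample-path setting of the paper.
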

Citation

A. Antos, C. Szepesvari, R. Munos. "Learning near-optimal policies with Bellman-residual minimization based fitted policy iteration and a single sample path". Machine Learning Journal (MLJ), June 2007.

Keywords: machine learning
Category: In Journal

BibTeX

@article{Antos+al:MLJ07,
  author = {Andras Antos and Csaba Szepesvari and Remi Munos},
  title = {Learning near-optimal policies with Bellman-residual minimization
    based fitted policy iteration and a single sample path},
  journal = {Machine Learning Journal (MLJ)},
  year = 2007,
}

Last Updated: June 03, 2007
Submitted by Stuart H. Johnson
