
Maxmin Q-learning: Controlling the Estimation Bias of Q-learning

Full Text: maxmin_q_learning_controlling_the_estimation_bias_of_q_learning.pdf

Q-learning suffers from overestimation bias, because it approximates the maximum action value using the maximum estimated action value. Algorithms have been proposed to reduce overestimation bias, but we lack an understanding of how bias interacts with performance, and the extent to which existing algorithms mitigate bias. In this paper, we 1) highlight that the effect of overestimation bias on learning efficiency is environment-dependent; 2) propose a generalization of Q-learning, called Maxmin Q-learning, which provides a parameter to flexibly control bias; 3) show theoretically that there exists a parameter choice for Maxmin Q-learning that leads to unbiased estimation with a lower approximation variance than Q-learning; and 4) prove the convergence of our algorithm in the tabular case, as well as convergence of several previous Q-learning variants, using a novel Generalized Q-learning framework. We empirically verify that our algorithm better controls estimation bias in toy environments, and that it achieves superior performance on several benchmark problems.
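To make the idea in the abstract concrete, here is a minimal tabular sketch of Maxmin Q-learning: maintain N action-value estimates, form the bootstrap target from the elementwise minimum over the estimates, and update one randomly chosen estimate; N is the parameter that controls the estimation bias. This is an illustrative sketch, not the authors' reference implementation; it assumes a Gymnasium-style discrete environment API, and the function name and hyperparameters are placeholders.

```python
import numpy as np

def maxmin_q_learning(env, N=2, episodes=500, alpha=0.1, gamma=0.99, epsilon=0.1):
    """Sketch of tabular Maxmin Q-learning (assumed Gymnasium-style env API)."""
    nS, nA = env.observation_space.n, env.action_space.n
    Q = [np.zeros((nS, nA)) for _ in range(N)]  # N independent estimates

    for _ in range(episodes):
        s, _ = env.reset()
        done = False
        while not done:
            # Behave epsilon-greedily w.r.t. the minimum over all N estimates
            Q_min = np.min(Q, axis=0)  # elementwise min, shape (nS, nA)
            if np.random.rand() < epsilon:
                a = np.random.randint(nA)
            else:
                a = int(np.argmax(Q_min[s]))

            s_next, r, terminated, truncated, _ = env.step(a)
            done = terminated or truncated

            # Bootstrap target uses the minimum over the N estimates
            target = r + (0.0 if terminated else gamma * np.max(Q_min[s_next]))

            # Update one randomly selected estimate toward the shared target
            i = np.random.randint(N)
            Q[i][s, a] += alpha * (target - Q[i][s, a])
            s = s_next
    return Q
```

With N = 1 this reduces to standard Q-learning; increasing N pushes the target from over- toward under-estimation, which is the bias-control knob the abstract refers to.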

Citation

Q. Lan, Y. Pan, A. Fyshe, M. White. "Maxmin Q-learning: Controlling the Estimation Bias of Q-learning". International Conference on Learning Representations, April 2020.

Keywords: reinforcement learning, bias and variance reduction
Category: In Conference
Web Links: OpenReview

BibTeX

@inproceedings{Lan+al:ICLR20,
  author = {Qingfeng Lan and Yangchen Pan and Alona Fyshe and Martha White},
  title = {Maxmin Q-learning: Controlling the Estimation Bias of Q-learning},
  booktitle = {International Conference on Learning Representations},
  year = {2020},
}

Last Updated: September 10, 2020
