
On Principled Entropy Exploration in Policy Optimization

Full Text: 0434.pdf (PDF)

In this paper, we investigate Exploratory Conservative Policy Optimization (ECPO), a policy optimization strategy that improves exploration behavior while ensuring monotonic progress on a principled objective. ECPO conducts maximum entropy exploration within a mirror descent framework, but updates policies using a reversed KL projection. This formulation bypasses undesirable mode-seeking behavior and avoids premature convergence to sub-optimal policies, while still supporting strong theoretical properties such as guaranteed policy improvement. Experimental evaluations demonstrate that the proposed method significantly improves practical exploration and surpasses the empirical performance of state-of-the-art policy optimization methods on a set of benchmark tasks.
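To make the role of the projection direction concrete, the sketch below writes out a generic entropy-regularized mirror descent step. This is a minimal sketch assuming a standard formulation: the softmax target form, the action-value function Q^{pi_t}, and the temperature tau are assumptions drawn from common practice rather than from this abstract, so the paper's exact objective may differ.

\documentclass{article}
\usepackage{amsmath}
\begin{document}
% A minimal sketch, assuming a standard entropy-regularized mirror descent
% target; the symbols $Q^{\pi_t}$ and $\tau$ are not taken from the abstract.
At iteration $t$, a softened target is formed from the current policy $\pi_t$:
\[
  \pi^{*}_{t}(a \mid s) \;\propto\; \pi_{t}(a \mid s)\,
  \exp\!\bigl(Q^{\pi_t}(s,a)/\tau\bigr),
\]
and the next parametric policy $\pi_\theta$ is obtained by a KL projection of
$\pi^{*}_{t}$ onto the policy class. The two possible projection directions
behave very differently:
\[
  \min_{\theta}\ \mathrm{KL}\bigl(\pi_\theta \,\big\|\, \pi^{*}_{t}\bigr)
  \quad \text{(zero-forcing, tends to be mode-seeking)},
\]
\[
  \min_{\theta}\ \mathrm{KL}\bigl(\pi^{*}_{t} \,\big\|\, \pi_\theta\bigr)
  \quad \text{(mass-covering)}.
\]
The abstract's ``reversed KL projection'' refers to swapping the projection
direction used in the standard formulation so that mode-seeking behavior is
avoided.
\end{document}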

Citation

J. Mei, C. Xiao, R. Huang, D. Schuurmans, M. Müller. "On Principled Entropy Exploration in Policy Optimization". International Joint Conference on Artificial Intelligence (IJCAI), (ed: Sarit Kraus), pp 3130-3136, August 2019.

Keywords: Machine Learning: Reinforcement Learning; Machine Learning Applications: Applications of Reinforcement Learning
Category: In Conference
Web Links: DOI
  IJCAI

BibTeX

@inproceedings{Mei+al:IJCAI19,
  author    = {Jincheng Mei and Chenjun Xiao and Ruitong Huang and Dale Schuurmans
               and Martin Müller},
  title     = {On Principled Entropy Exploration in Policy Optimization},
  editor    = {Sarit Kraus},
  pages     = {3130--3136},
  booktitle = {International Joint Conference on Artificial Intelligence
               (IJCAI)},
  year      = {2019},
}

Last Updated: June 29, 2020
Submitted by Sabina P
