
Team learning from human demonstration with coordination confidence

Among an array of techniques proposed to speed up reinforcement learning (RL), learning from human demonstration has a proven record of success. A related technique, called Human-Agent Transfer, and its confidence-based derivatives have been successfully applied to single-agent RL. This article investigates their application to collaborative multi-agent RL problems. We show that a first-cut extension may leave room for improvement in some domains, and propose a new algorithm called coordination confidence (CC). CC analyzes the difference in perspectives between a human demonstrator (global view) and the learning agents (local views) and informs the agents' action choices when this difference is critical, i.e., when simply following the human demonstration can lead to miscoordination. We conduct experiments in three domains to compare the performance of CC with relevant baselines.
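To make the general idea concrete, below is a minimal, hypothetical Python sketch of confidence-gated reuse of demonstrations in a multi-agent setting: an agent imitates the demonstrated action only when the demonstrations recorded for its local observation are consistent, and otherwise falls back to its own policy. The class and function names, the frequency-based confidence estimate, and the threshold are all illustrative assumptions; this is not the CC algorithm from the paper.

```python
# Hypothetical sketch: gate imitation of human demonstrations by a
# confidence measure over the agent's *local* observation. All names,
# the confidence estimator, and the threshold are illustrative assumptions.

import random
from collections import defaultdict


class DemonstrationModel:
    """Counts how often each action was demonstrated for a local observation."""

    def __init__(self):
        self.counts = defaultdict(lambda: defaultdict(int))

    def record(self, local_obs, action):
        self.counts[local_obs][action] += 1

    def suggest(self, local_obs):
        """Return (majority_action, confidence) for a local observation.

        Confidence is the fraction of demonstrations at this local observation
        that agree on the majority action. It is low when the demonstrator's
        global view led to different actions for the same local view, i.e.
        when blind imitation could cause miscoordination.
        """
        actions = self.counts.get(local_obs)
        if not actions:
            return None, 0.0
        best_action = max(actions, key=actions.get)
        total = sum(actions.values())
        return best_action, actions[best_action] / total


def choose_action(agent_policy, demo_model, local_obs, actions, threshold=0.8):
    """Follow the demonstration only when its confidence is high enough;
    otherwise fall back to the agent's own (learning) policy."""
    suggested, confidence = demo_model.suggest(local_obs)
    if suggested is not None and confidence >= threshold:
        return suggested
    return agent_policy(local_obs, actions)


if __name__ == "__main__":
    demo = DemonstrationModel()
    # The demonstrator (global view) chose different actions for the same local view:
    demo.record(local_obs="corridor", action="wait")
    demo.record(local_obs="corridor", action="go")
    demo.record(local_obs="doorway", action="go")

    fallback = lambda obs, acts: random.choice(acts)
    print(choose_action(fallback, demo, "corridor", ["wait", "go"]))  # low confidence -> fallback policy
    print(choose_action(fallback, demo, "doorway", ["wait", "go"]))   # high confidence -> "go"
```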

Citation

B. Banerjee, S. Vittanala, and M. E. Taylor. "Team learning from human demonstration with coordination confidence." The Knowledge Engineering Review, vol. 34, e12, November 2019.

Keywords:  
Category: In Journal
Web Links: Cambridge

BibTeX

@article{Banerjee+al:19,
  author  = {Bikramjit Banerjee and Syamala Vittanala and Matthew E. Taylor},
  title   = {Team learning from human demonstration with coordination confidence},
  journal = {The Knowledge Engineering Review},
  volume  = {34},
  pages   = {e12},
  year    = {2019}
}

Last Updated: February 08, 2021
Submitted by Sabina P
