
Pre-training with non-expert human demonstration for deep reinforcement learning

Deep reinforcement learning (deep RL) has achieved superior performance in complex sequential tasks by using deep neural networks as function approximators to learn directly from raw input images. However, learning directly from raw images is data inefficient: the agent must learn a feature representation of complex states in addition to learning a policy. As a result, deep RL typically suffers from slow learning and often requires a prohibitively large amount of training time and data to reach reasonable performance, making it inapplicable to real-world settings where data are expensive. In this work, we improve data efficiency in deep RL by addressing one of these two learning goals, feature learning. We leverage supervised learning to pre-train on a small set of non-expert human demonstrations and empirically evaluate our approach using the asynchronous advantage actor-critic algorithm in the Atari domain. Our results show significant improvements in learning speed, even when the provided demonstrations are noisy and of low quality.
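To make the idea concrete, below is a minimal sketch of the pre-training step the abstract describes, not the authors' implementation: a small convolutional policy network is first trained with supervised learning on demonstrated (state, action) pairs, and its weights then initialize the RL learner (e.g., A3C). PyTorch is used for concreteness; the network shape, the demo_states/demo_actions tensors, and all hyperparameters are illustrative assumptions.

import torch
import torch.nn as nn

class PolicyNet(nn.Module):
    """Small conv net over stacked raw frames (assumed 4 x 84 x 84);
    its conv layers learn the feature representation that
    pre-training is meant to supply."""
    def __init__(self, n_actions: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(4, 16, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=4, stride=2), nn.ReLU(),
            nn.Flatten(),
        )
        self.head = nn.Sequential(
            nn.Linear(32 * 9 * 9, 256), nn.ReLU(),
            nn.Linear(256, n_actions),
        )

    def forward(self, x):
        return self.head(self.features(x))

def pretrain(net, demo_states, demo_actions, epochs=5, lr=1e-4):
    """Supervised pre-training: treat each demonstrated (state, action)
    pair as a classification example, even when the demonstrator is
    non-expert. The returned weights initialize the RL agent."""
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        logits = net(demo_states)             # (N, n_actions)
        loss = loss_fn(logits, demo_actions)  # actions as class labels
        opt.zero_grad()
        loss.backward()
        opt.step()
    return net

Because only feature learning is addressed, even noisy labels can help: the conv layers acquire useful state features regardless of whether every demonstrated action is optimal.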

Citation

G. V. de la Cruz Jr., Y. Du, and M. E. Taylor. "Pre-training with non-expert human demonstration for deep reinforcement learning". The Knowledge Engineering Review, 34, e10, November 2019.

Keywords:  
Category: In Journal
Web Links: Cambridge

BibTeX

@article{Jr.+al:19,
  author  = {Gabriel V. de la Cruz Jr. and Yunshu Du and Matthew E. Taylor},
  title   = {Pre-training with non-expert human demonstration for deep
    reinforcement learning},
  journal = {The Knowledge Engineering Review},
  volume  = {34},
  pages   = {e10},
  year    = {2019},
}

