Providing Uncertainty-Based Advice for Deep Reinforcement Learning
Full Text: 7229-Article Text-10458-1-10-20200526.pdf

The sample complexity of Reinforcement Learning (RL) techniques still represents a challenge for scaling RL up to unsolved domains. One way to alleviate this problem is to leverage samples from a demonstrator's policy to learn faster. However, advice is normally limited, so it should ideally be directed to states where the agent is uncertain about which action to apply. In this work, we propose Requesting Confidence-Moderated Policy advice (RCMP), an action-advising framework in which the agent asks for advice when its uncertainty is high. We describe a technique to estimate the agent's uncertainty with minor modifications to standard value-based RL methods. RCMP is shown to perform better than several baselines in the Atari Pong domain.
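The abstract's core idea, requesting advice only when the agent's uncertainty is high, can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes the value-based learner exposes several Q-value output heads (a common way to estimate epistemic uncertainty with minor architectural changes), and uses the variance of the heads' Q-values, averaged over actions, as the uncertainty signal. The function name and threshold are illustrative.

```python
import numpy as np

def should_request_advice(head_q_values, threshold):
    """Decide whether to ask the demonstrator for advice.

    head_q_values: array-like of shape (n_heads, n_actions), the Q-value
    estimates produced by each output head for the current state (an
    assumed multi-head critic; the paper's exact architecture may differ).
    Disagreement between heads serves as a proxy for epistemic
    uncertainty: advice is requested only when it exceeds `threshold`.
    """
    q = np.asarray(head_q_values, dtype=float)
    # Variance across heads for each action, averaged over actions.
    uncertainty = q.var(axis=0).mean()
    return bool(uncertainty > threshold)

# Heads agree -> low uncertainty: the agent acts on its own policy.
agreeing = [[1.0, 0.2], [1.0, 0.2], [1.0, 0.2]]
# Heads disagree -> high uncertainty: the agent requests advice.
disagreeing = [[1.0, 0.2], [-0.5, 1.3], [0.4, -0.9]]
```

Since the advice budget is limited, the threshold trades off advice usage against acting under uncertainty: a higher threshold conserves demonstrator samples for the most ambiguous states.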
Citation
F. Silva, P. Hernandez-Leal, B. Kartal, M. Taylor. "Providing Uncertainty-Based Advice for Deep Reinforcement Learning". National Conference on Artificial Intelligence (AAAI), pp. 13913-13914, February 2020.
Category: In Conference
Web Links: doi | AAAI
BibTeX
@incollection{Silva+al:AAAI20,
  author    = {Felipe Leno Da Silva and Pablo Hernandez-Leal and Bilal Kartal and Matthew E. Taylor},
  title     = {Providing Uncertainty-Based Advice for Deep Reinforcement Learning},
  pages     = {13913-13914},
  booktitle = {National Conference on Artificial Intelligence (AAAI)},
  year      = 2020,
}

Last Updated: February 05, 2021