
Learning purposeful behaviour in the absence of rewards

Full Text: 1605.07700.pdf

Artificial intelligence is commonly defined as the ability to achieve goals in the world. In the reinforcement learning framework, goals are encoded as reward functions that guide agent behaviour, and the sum of observed rewards provides a notion of progress. However, some domains have no such reward signal, or have a reward signal so sparse as to appear absent. Without reward feedback, agent behaviour is typically random, often dithering aimlessly and lacking intentionality. In this paper we present an algorithm capable of learning purposeful behaviour in the absence of rewards. The algorithm proceeds by constructing temporally extended actions (options) through the identification of purposes that are "just out of reach" of the agent's current behaviour. These purposes establish intrinsic goals for the agent to learn, ultimately resulting in a suite of behaviours that encourage the agent to visit different parts of the state space. Moreover, the approach is particularly suited for settings where rewards are very sparse, and such behaviours can help the agent explore the environment until reward is observed.
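The abstract describes the approach only at a high level. As a rough illustration of the general idea (turning intrinsic goals into temporally extended options learned by ordinary reinforcement learning), the sketch below builds option-like behaviours in a toy grid world. It is not the authors' construction: the grid world, the least-visited-state heuristic for choosing purposes, and every name in the code are illustrative assumptions.

```python
# Minimal sketch, NOT the paper's algorithm: one way intrinsic goals could be
# turned into options when no external reward exists. All names illustrative.
import random
from collections import defaultdict

N = 5  # grid size
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right

def step(state, action):
    """Deterministic grid transition; walls clip movement."""
    r, c = state
    dr, dc = action
    return (min(max(r + dr, 0), N - 1), min(max(c + dc, 0), N - 1))

def learn_option(goal, episodes=300, eps=0.2, alpha=0.5, gamma=0.95):
    """Tabular Q-learning against an intrinsic reward: +1 for reaching `goal`."""
    Q = defaultdict(float)
    for _ in range(episodes):
        s = (0, 0)
        for _ in range(50):
            if random.random() < eps:
                a = random.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda a: Q[(s, a)])
            s2 = step(s, a)
            r = 1.0 if s2 == goal else 0.0
            best_next = max(Q[(s2, a2)] for a2 in ACTIONS)
            Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
            s = s2
            if s == goal:
                break
    # The greedy policy w.r.t. Q is the option's policy.
    return {s: max(ACTIONS, key=lambda a: Q[(s, a)])
            for s in [(i, j) for i in range(N) for j in range(N)]}

# Pick purposes heuristically: states seen under random behaviour but rarely
# reached, a crude stand-in for "just out of reach" of current behaviour.
visits = defaultdict(int)
s = (0, 0)
for _ in range(2000):
    s = step(s, random.choice(ACTIONS))
    visits[s] += 1

rarest_first = sorted(visits, key=visits.get)
options = {goal: learn_option(goal) for goal in rarest_first[:3]}
print("learned options for purposes:", list(options))
```

Once learned, each option's policy can be added to the agent's action set, giving it reusable behaviours that push it toward under-visited parts of the state space even before any external reward is seen.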

Citation

M. C. Machado and M. Bowling. "Learning purposeful behaviour in the absence of rewards". Workshop on Abstraction in Reinforcement Learning, June 2016.

Category: In Workshop

BibTeX

@inproceedings{Machado+Bowling:16,
  author = {Marlos C. Machado and Michael Bowling},
  title = {Learning purposeful behaviour in the absence of rewards},
  booktitle = {Workshop on Abstraction in Reinforcement Learning},
  month = jun,
  year = {2016}
}

