Learning Sparse Representations in Reinforcement Learning with Sparse Coding

Full Text: 0287.pdf (PDF)

A variety of representation learning approaches have been investigated for reinforcement learning; much less attention, however, has been given to investigating the utility of sparse coding. Outside of reinforcement learning, sparse coding representations have been widely used, with non-convex objectives that result in discriminative representations. In this work, we develop a supervised sparse coding objective for policy evaluation. Despite the non-convexity of this objective, we prove that all local minima are global minima, making the approach amenable to simple optimization strategies. We empirically show that it is key to use a supervised objective, rather than the more straightforward unsupervised sparse coding approach. We then compare the learned representations to a canonical fixed sparse representation, called tile-coding, demonstrating that the sparse coding representation outperforms a wide variety of tile-coding representations.
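For a concrete sense of what a supervised sparse coding objective for policy evaluation can look like, the sketch below (in NumPy) combines the usual dictionary-reconstruction loss with a squared TD (Bellman) error term and an L1 sparsity penalty. This is an illustrative assumption, not the paper's exact objective: the function name, the variable names, and the precise form of the supervised term are invented here for exposition; see the PDF for the actual formulation.

import numpy as np

def supervised_sparse_coding_loss(B, H, H_next, w, X, R,
                                  gamma=0.99, beta=1.0, lam=0.1):
    """Illustrative supervised sparse-coding loss (not the paper's exact
    objective). Shapes: B (d, k) dictionary; H, H_next (n, k) sparse codes
    for states and successor states; w (k,) value weights, so V(s) ~ h(s) @ w;
    X (n, d) raw state features; R (n,) one-step rewards."""
    recon = np.sum((X - H @ B.T) ** 2)       # unsupervised reconstruction term
    td = R + gamma * (H_next @ w) - H @ w    # bootstrapped TD errors
    bellman = np.sum(td ** 2)                # supervised coupling to value estimation
    sparsity = lam * np.sum(np.abs(H))       # L1 penalty induces sparse codes
    return recon + beta * bellman + sparsity

# Example call on random data, just to show the shapes line up.
rng = np.random.default_rng(0)
n, d, k = 50, 8, 16
H = rng.normal(size=(n, k)) * (rng.random((n, k)) < 0.2)       # mostly-zero codes
H_next = rng.normal(size=(n, k)) * (rng.random((n, k)) < 0.2)
loss = supervised_sparse_coding_loss(B=rng.normal(size=(d, k)), H=H,
                                     H_next=H_next, w=rng.normal(size=k),
                                     X=rng.normal(size=(n, d)), R=rng.normal(size=n))

The beta weight trades off the reconstruction and Bellman terms. The key point from the abstract is that coupling the representation to the value-estimation target (the supervised part) matters empirically, and that despite non-convexity the objective has no spurious local minima.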

Citation

L. Le, R. Kumaraswamy, M. White. "Learning Sparse Representations in Reinforcement Learning with Sparse Coding." International Joint Conference on Artificial Intelligence (IJCAI), ed. Carles Sierra, pp. 2067-2073, August 2017.

Keywords: Machine Learning, Feature Selection/Construction, Reinforcement Learning
Category: In Conference
Web Links: DOI, IJCAI

BibTeX

@inproceedings{Le+al:IJCAI17,
  author = {Lei Le and Raksha Kumaraswamy and Martha White},
  title = {Learning Sparse Representations in Reinforcement Learning with
    Sparse Coding},
  editor = {Carles Sierra},
  pages = {2067--2073},
  booktitle = {International Joint Conference on Artificial Intelligence
    (IJCAI)},
  year = {2017}
}

