
Scalable and sound low rank tensor learning

Many real-world data arise naturally as tensors. Equipped with a low-rank prior, learning algorithms can benefit from exploiting the rich dependency encoded in a tensor. Despite its prevalence in low-rank matrix learning, the trace norm ceases to be tractable for tensors, and therefore most existing works resort to matrix unfolding. Although some theoretical guarantees are available, these approaches may lose valuable structural information and are not scalable in general. To address this problem, we propose directly optimizing the tensor trace norm by approximating its dual spectral norm, and we show that the approximation bounds can be efficiently converted to the original problem via the generalized conditional gradient algorithm. The resulting approach is scalable to large datasets and matches state-of-the-art recovery guarantees. Experimental results on tensor completion and multitask learning confirm the superiority of the proposed method.
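To illustrate the idea in the abstract, below is a minimal Python sketch (not the authors' implementation; the function names, the norm-ball radius tau, and the step-size schedule are hypothetical choices for exposition). It runs a conditional-gradient loop for 3-way tensor completion in which the linear minimization step calls a rank-one oracle that only approximates the tensor spectral norm, since computing that norm exactly is intractable; the paper's point is that such approximation bounds can be converted into guarantees for the original problem.

import numpy as np

def rank1_oracle(G, n_iters=50, seed=0):
    # Approximately maximize <G, u x v x w> over unit vectors by
    # alternating power iterations (the exact tensor spectral norm
    # is intractable, so an approximate oracle is used instead).
    rng = np.random.default_rng(seed)
    u, v, w = [rng.standard_normal(n) for n in G.shape]
    for _ in range(n_iters):
        u = np.einsum('ijk,j,k->i', G, v, w); u /= np.linalg.norm(u)
        v = np.einsum('ijk,i,k->j', G, u, w); v /= np.linalg.norm(v)
        w = np.einsum('ijk,i,j->k', G, u, v); w /= np.linalg.norm(w)
    return u, v, w

def gcg_tensor_completion(Y, mask, tau=10.0, n_steps=200):
    # Conditional gradient for least-squares completion of a 3-way
    # tensor Y observed on `mask`, constrained to a trace-norm ball
    # of radius tau. Each step blends in one scaled rank-one atom.
    X = np.zeros_like(Y)
    for t in range(n_steps):
        G = mask * (Y - X)           # negative gradient of 0.5*||mask*(X - Y)||^2
        u, v, w = rank1_oracle(G)    # approximate dual-spectral-norm step
        S = tau * np.einsum('i,j,k->ijk', u, v, w)
        gamma = 2.0 / (t + 2.0)      # standard open-loop step size
        X = (1.0 - gamma) * X + gamma * S
    return X

For simplicity the sketch uses the norm-ball-constrained form with a fixed step-size schedule, whereas the paper optimizes the tensor trace norm within the generalized conditional gradient framework; the shared mechanism is that the per-step oracle need only be approximate, and the conditional-gradient analysis converts that oracle accuracy into a bound on the overall objective.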

Citation

H. Cheng, Y. Yu, X. Zhang, E. Xing, D. Schuurmans. "Scalable and sound low rank tensor learning". Artificial Intelligence and Statistics (eds: Arthur Gretton, Christian C. Robert), pp. 1114–1123, May 2016.

Keywords:  
Category: In Conference
Web Links: PMLR

BibTeX

@inproceedings{Cheng+al:AISTATS16,
  author    = {Hao Cheng and Yaoliang Yu and Xinhua Zhang and Eric Xing and Dale Schuurmans},
  title     = {Scalable and sound low rank tensor learning},
  booktitle = {Artificial Intelligence and Statistics},
  editor    = {Arthur Gretton and Christian C. Robert},
  pages     = {1114--1123},
  year      = {2016}
}

