
Convex Multi-view Subspace Learning

Full Text: 4632-convex-multi-view-subspace-learning.pdf

Subspace learning seeks a low-dimensional representation of data that enables accurate reconstruction. However, in many applications, data is obtained from multiple sources rather than a single source (e.g. an object might be viewed by cameras at different angles, or a document might consist of text and images). The conditional independence of separate sources imposes constraints on their shared latent representation, which, if respected, can improve the quality of the learned low-dimensional representation. In this paper, we present a convex formulation of multi-view subspace learning that enforces conditional independence while reducing dimensionality. For this formulation, we develop an efficient algorithm that first recovers an optimal data reconstruction by exploiting an implicit convex regularizer, then jointly and optimally recovers the corresponding latent representation and reconstruction model. Experiments illustrate that the proposed method produces high-quality results.
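The two-stage recipe sketched in the abstract (first recover an optimal joint reconstruction under a convex regularizer, then factor it into a latent representation and reconstruction model) can be illustrated with a deliberately simplified stand-in. In the sketch below, the paper's multi-view regularizer is replaced by a plain nuclear norm, whose proximal operator has a closed-form singular-value-thresholding solution; the data, variable names, and the choice of `lam` are all illustrative assumptions, not taken from the paper.

```python
import numpy as np

def svt(X, lam):
    """Singular value thresholding: the closed-form minimizer of
    0.5 * ||X - Z||_F^2 + lam * ||Z||_*  (prox of the nuclear norm)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s_shrunk = np.maximum(s - lam, 0.0)   # shrink singular values toward zero
    return (U * s_shrunk) @ Vt

# Two hypothetical views of the same 50 objects, generated from a
# 3-dimensional shared latent factor (rows = examples).
rng = np.random.default_rng(0)
H = rng.standard_normal((50, 3))                 # shared latent factors
X1 = H @ rng.standard_normal((3, 20))            # view 1 features
X2 = H @ rng.standard_normal((3, 15))            # view 2 features
X = np.hstack([X1, X2]) + 0.01 * rng.standard_normal((50, 35))

# Stage 1: recover a low-rank joint reconstruction of both views.
Z = svt(X, lam=1.0)

# Stage 2: factor the reconstruction into a latent representation.
U, s, Vt = np.linalg.svd(Z, full_matrices=False)
k = int((s > 1e-8).sum())                        # effective shared dimensionality
Phi = U[:, :k] * s[:k]                           # recovered latent representation
```

Because the regularized problem is convex, the reconstruction `Z` is a global optimum regardless of initialization; the subsequent factorization step then reads the shared subspace off its SVD.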

Citation

M. White, Y. Yu, X. Zhang, D. Schuurmans. "Convex Multi-view Subspace Learning". Advances in Neural Information Processing Systems 25 (NIPS), (ed: Peter L. Bartlett, Fernando C. N. Pereira, Christopher J. C. Burges, Leon Bottou, Kilian Q. Weinberger), pp. 1682-1690, December 2012.

Category: In Conference
Web Links: NeurIPS

BibTeX

@incollection{White+al:NIPS12,
  author    = {Martha White and Yaoliang Yu and Xinhua Zhang and Dale Schuurmans},
  title     = {Convex Multi-view Subspace Learning},
  editor    = {Peter L. Bartlett and Fernando C. N. Pereira and Christopher J. C.
    Burges and Leon Bottou and Kilian Q. Weinberger},
  pages     = {1682--1690},
  booktitle = {Advances in Neural Information Processing Systems 25 (NIPS)},
  year      = {2012},
}

Last Updated: February 25, 2020
Submitted by Sabina P
