
Robust Learning Under Uncertain Test Distributions: Relating Covariate Shift to Model Misspecification

Full Text: jwen_icml2014.pdf
Other Attachments: jwen_icml2014_paper.pdf

Many learning situations involve learning the conditional distribution p(y|x) when the training instances are drawn from the training distribution p_tr(x), even though the learned model will later be used to predict for instances drawn from a different test distribution p_te(x). Most current approaches focus on learning how to reweight the training examples to make them resemble the test distribution. However, reweighting does not always help, because (as we show) the test error also depends on the correctness of the underlying model class. This paper analyses this situation by viewing the problem of learning under changing distributions as a game between a learner and an adversary. We characterize when such reweighting is needed, and also provide an algorithm, robust covariate shift adjustment (RCSA), that produces the relevant weights. Our empirical studies, on UCI datasets and a real-world cancer prognostic prediction dataset, show that our analysis applies and that RCSA is effective in practice.
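The following is an illustrative Python sketch of the standard importance-weighting baseline the abstract refers to, not the paper's RCSA algorithm: training examples are reweighted by w(x) = p_te(x)/p_tr(x) before fitting. The toy Gaussian densities, the nonlinear target function, and the deliberately misspecified linear model class are all assumptions chosen here to make the covariate-shift effect visible.

# Illustrative sketch only: importance-weighted least squares under covariate
# shift. This is NOT the paper's RCSA algorithm; both densities are assumed
# known here purely for demonstration.
import numpy as np

rng = np.random.default_rng(0)

# Training and test covariates come from different distributions (covariate shift).
x_tr = rng.normal(loc=0.0, scale=1.0, size=200)
x_te = rng.normal(loc=1.5, scale=1.0, size=200)

# True relationship is nonlinear, so a linear model class is misspecified.
def f(x):
    return np.sin(x)

y_tr = f(x_tr) + 0.1 * rng.normal(size=x_tr.shape)

# Importance weights w(x) = p_te(x) / p_tr(x), computable here because both
# densities are the Gaussians chosen above.
def gaussian_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

w = gaussian_pdf(x_tr, 1.5, 1.0) / gaussian_pdf(x_tr, 0.0, 1.0)

# Weighted vs. unweighted least squares for the linear model y = a*x + b.
X = np.column_stack([x_tr, np.ones_like(x_tr)])
W = np.diag(w)
theta_weighted = np.linalg.solve(X.T @ W @ X, X.T @ W @ y_tr)
theta_unweighted = np.linalg.solve(X.T @ X, X.T @ y_tr)

# Compare test error of the two fits; reweighting matters when the model class
# is misspecified, which is the regime the paper characterizes.
X_te = np.column_stack([x_te, np.ones_like(x_te)])
for name, theta in [("unweighted", theta_unweighted), ("weighted", theta_weighted)]:
    mse = np.mean((f(x_te) - X_te @ theta) ** 2)
    print(f"{name} test MSE: {mse:.4f}")

When the model class is well specified, the weighted and unweighted fits converge to essentially the same predictor, which is the case the paper identifies as one where reweighting is unnecessary.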

Citation

J. Wen, C. Yu, R. Greiner. "Robust Learning Under Uncertain Test Distributions: Relating Covariate Shift to Model Misspecification". International Conference on Machine Learning (ICML), pp 631-639, June 2014.

Keywords: machine learning, model misspecification, covariate shift
Category: In Conference
Web Links: JMLR

BibTeX

@inproceedings{Wen+al:ICML14,
  author = {Junfeng Wen and Chun-Nam Yu and Russ Greiner},
  title = {Robust Learning Under Uncertain Test Distributions: Relating
    Covariate Shift to Model Misspecification},
  booktitle = {International Conference on Machine Learning (ICML)},
  pages = {631--639},
  year = {2014},
}

Last Updated: February 12, 2020
Submitted by Sabina P
