
Learning mixture models with the regularized maximum entropy principle

Full Text: ieeenn04.ps.ps (PS)

We present a new approach to estimating mixture models based on a recent inference principle we have proposed: the latent maximum entropy (LME) principle, which differs from Jaynes' maximum entropy principle and from standard maximum likelihood and maximum a posteriori probability estimation. We demonstrate the LME principle by deriving new algorithms for mixture model estimation, and show how robust new variants of the EM algorithm can be developed. We show that a regularized version of LME, namely RLME, is effective at estimating mixture models. It generally yields better results than plain LME, which in turn is often better than maximum likelihood and maximum a posteriori estimation, particularly when inferring latent variable models from small amounts of data.
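For reference, the sketch below shows plain maximum-likelihood EM for a one-dimensional Gaussian mixture, i.e. the baseline estimator that the abstract says LME/RLME improve upon; it does not implement the paper's regularized updates. The component count, random initialization, fixed iteration count, and the name em_gaussian_mixture are illustrative assumptions, not details taken from the paper.

    # Illustrative sketch only: standard maximum-likelihood EM for a 1-D
    # Gaussian mixture (the baseline compared against LME/RLME in the paper).
    import numpy as np

    def em_gaussian_mixture(x, k=2, n_iter=100, seed=0):
        x = np.asarray(x, dtype=float)
        rng = np.random.default_rng(seed)
        n = len(x)
        w = np.full(k, 1.0 / k)                  # mixing weights
        mu = rng.choice(x, size=k, replace=False)  # initial means drawn from data
        var = np.full(k, np.var(x))              # initial variances
        for _ in range(n_iter):
            # E-step: responsibility of each component for each data point.
            dens = np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
            resp = w * dens
            resp /= resp.sum(axis=1, keepdims=True)
            # M-step: maximum-likelihood updates of weights, means, variances.
            nk = resp.sum(axis=0)
            w = nk / n
            mu = (resp * x[:, None]).sum(axis=0) / nk
            var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk
        return w, mu, var

    # Example: recover two well-separated components from synthetic data.
    x = np.concatenate([np.random.normal(-2, 1, 200), np.random.normal(3, 1, 300)])
    print(em_gaussian_mixture(x))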

Citation

S. Wang, D. Schuurmans, F. Peng, and Y. Zhao. "Learning mixture models with the regularized maximum entropy principle". IEEE Transactions on Neural Networks, 15(4), pp. 903-916, January 2005.

Keywords: maximum entropy, mixture models, latent variables, iterative scaling, machine learning
Category: In Journal

BibTeX

@article{Wang+al:IEEETransactionsonNeuralNetworks05,
  author  = {Shaojun Wang and Dale Schuurmans and Fuchun Peng and Yunxin Zhao},
  title   = {Learning mixture models with the regularized maximum entropy principle},
  journal = {IEEE Transactions on Neural Networks},
  volume  = {15},
  number  = {4},
  pages   = {903--916},
  year    = {2005}
}

