
Fast Distribution-Specific Learning

Other Attachments: fast-disp-spec.pdf [PDF]

PAC-learning results are often criticized for demanding impractically large training samples. The common wisdom is that these large samples follow from the worst-case nature of the analysis, and that PAC-learning, though desirable, therefore cannot be a practical goal. We, however, consider an alternative view: perhaps these large sample sizes are due to the learning strategies used, which make inefficient use of the available training data. To demonstrate this, we consider sequential learning strategies that observe training examples one at a time and autonomously decide when to stop training. We present several such algorithms for distribution-specific learning, and prove that they can require far fewer training examples (on average) than existing fixed-sample-size approaches, and moreover are able to learn with certainty, not just with high probability. In fact, we show a simple sequential strategy that is optimally efficient in many cases.
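To illustrate the general idea (this is a sketch of the sequential-stopping paradigm the abstract describes, not the paper's actual algorithms), consider a finite hypothesis class and a known example distribution. A sequential learner can draw labeled examples one at a time, prune inconsistent hypotheses, and stop as soon as the probability mass of the region where surviving hypotheses still disagree falls below epsilon; because the distribution is known, that mass is computable, and any surviving hypothesis is then guaranteed epsilon-accurate (learning with certainty). All names below are illustrative.

```python
import random

def sequential_learn(hypotheses, label, domain, mass, eps):
    """Sketch of a sequential, distribution-specific learner.

    hypotheses: finite list of candidate classifiers (the target among them)
    label:      oracle giving the true label of an example
    domain:     the (finite, for illustration) instance space
    mass:       known probability mass of each instance
    eps:        accuracy goal
    """
    version_space = list(hypotheses)
    seen = 0
    while True:
        # Region where surviving hypotheses still disagree.
        disagree = [x for x in domain
                    if len({h(x) for h in version_space}) > 1]
        # Stopping rule: its mass is computable because the distribution
        # is known; once it is at most eps, every survivor is eps-good.
        if sum(mass(x) for x in disagree) <= eps:
            break
        # Otherwise draw one more example from the known distribution.
        x = random.choices(domain, weights=[mass(z) for z in domain])[0]
        version_space = [h for h in version_space if h(x) == label(x)]
        seen += 1
    return version_space[0], seen
```

For example, with threshold concepts h_t(x) = (x >= t) on {0, ..., 9} under the uniform distribution and eps = 0.05, the learner stops only when the version space agrees everywhere, so it returns a hypothesis that exactly matches the target; the number of examples consumed is random, which is the point of the average-case savings.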

Citation

R. Greiner, D. Schuurmans. "Fast Distribution-Specific Learning". Computational Learning Theory and Natural Learning Systems, MIT Press, 4, pp 155-167, August 1997.

Keywords: distribution specific, machine learning, PAC
Category: In Book
Web Links: SeqPAC page

BibTeX

@inbook{Greiner+Schuurmans:CLNL97,
  author    = {Russ Greiner and Dale Schuurmans},
  title     = {Fast Distribution-Specific Learning},
  booktitle = {Computational Learning Theory and Natural Learning Systems},
  publisher = {MIT Press},
  volume    = {4},
  chapter   = {6},
  pages     = {155--167},
  year      = {1997}
}

Last Updated: May 04, 2013
Submitted by Russ Greiner
