Practical PAC Learning

Full Text: pracpac-ijcai95.pdf

We present new strategies for "probably approximately correct" (pac) learning that use fewer training examples than previous pac-learning techniques, while maintaining the same worst-case guarantees. We achieve this improvement by using *sequential* learning procedures that observe training examples one at a time and decide "on-line" whether to halt and return a hypothesis or to continue training. We first provide a theoretical worst-case upper bound on the average number of training examples these procedures use, and show that it is often better than the fixed number required by previous "batch" learning algorithms. However, the efficiency of our approach is primarily established by a series of experiments showing that a sequential learning algorithm uses many times fewer training examples in practice than previous techniques. These results are robust to changes in the target concept, domain distribution, etc., and moreover, the reductions are obtained with little (additional) computational overhead. Overall, these results demonstrate that pac-learning can be achieved far more efficiently in practice than previously thought, even in the worst case, countering the claim that pac-learning can never be feasibly achieved in real applications.
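The abstract does not spell out the stopping rule itself, but the following Python sketch illustrates one standard way a sequential procedure of this kind can be certified: the learner halts only after the current hypothesis correctly classifies a long enough run of fresh examples, with run lengths chosen so that a union bound keeps the total failure probability below delta. The interfaces draw_example, initial_hypothesis, and update are hypothetical placeholders; this is an illustration of the general idea, not the specific algorithm analyzed in the paper.

    import math

    def sequential_pac_learn(draw_example, initial_hypothesis, update,
                             epsilon, delta):
        # Sequential PAC sketch: keep a current hypothesis and halt once it
        # survives a long run of consecutive, correctly classified examples.
        #
        # draw_example() -> (x, y), an i.i.d. labelled example  [assumed API]
        # initial_hypothesis : callable h with h(x) -> predicted label
        # update(h, x, y) -> revised hypothesis consistent with (x, y)
        h = initial_hypothesis
        attempt = 0
        while True:
            attempt += 1
            # Spread the confidence budget over certification attempts:
            # sum over i of delta/(i*(i+1)) equals delta, so a union bound
            # keeps the total failure probability at most delta.
            delta_i = delta / (attempt * (attempt + 1))
            # A hypothesis with true error > epsilon survives t consecutive
            # examples with probability at most (1-epsilon)^t <= exp(-epsilon*t),
            # which falls below delta_i at the run length chosen here.
            run_length = math.ceil(math.log(1.0 / delta_i) / epsilon)
            survived = True
            for _ in range(run_length):
                x, y = draw_example()
                if h(x) != y:          # mistake: revise and start a new run
                    h = update(h, x, y)
                    survived = False
                    break
            if survived:
                # With probability >= 1 - delta, the returned hypothesis
                # has true error at most epsilon.
                return h

The appeal of such an on-line rule, and the point the abstract makes experimentally, is that on easy targets and distributions the certification runs succeed early, so the number of examples actually consumed can fall well below the fixed sample size a batch pac bound would demand.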

Citation

D. Schuurmans, R. Greiner. "Practical PAC Learning". International Joint Conference on Artificial Intelligence (IJCAI), August 1995.

Keywords: PAC, machine learning
Category: In Conference
Web Links: SeqPAC page

BibTeX

@inproceedings{Schuurmans+Greiner:IJCAI95,
  author    = {Dale Schuurmans and Russ Greiner},
  title     = {Practical {PAC} Learning},
  booktitle = {International Joint Conference on Artificial Intelligence
    (IJCAI)},
  month     = aug,
  year      = 1995,
}

Last Updated: May 04, 2013
Submitted by Russ Greiner