Reinforcement Learning for Active Model Selection
- Aloak Kapoor, Dept of Computing Science, University of Alberta
- Russ Greiner, Dept of Computing Science; PI of AICML
In many practical machine learning tasks, there are costs associated with acquiring the feature values of the training instances, and often a hard learning budget that limits the number of feature values that can be purchased. Here, it is important to use an effective "data acquisition policy", which specifies how to spend the budget acquiring the training data that produce an accurate classifier. This paper considers a simplified version of this problem, "active model selection" [10]. As this is a Markov decision problem, we consider applying reinforcement learning (RL) techniques to learn an effective spending policy. Despite extensive training, our experiments on various versions of the problem show that RL techniques exhibit lower performance than standard, simpler spending policies.
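The active model selection setting described above is often illustrated with a coin-flipping version: each "model" is a coin of unknown bias, each flip costs one unit of budget, and the goal is to spend the budget so as to identify the best coin. The following sketch shows one of the simple baseline spending policies the abstract alludes to (round-robin); the function names and the exact setup are illustrative assumptions, not the paper's implementation.

```python
import random

def round_robin_selection(true_probs, budget, seed=0):
    """Illustrative sketch of the coin-flipping version of active model
    selection. A round-robin spending policy cycles through the coins,
    spending one flip per step, then selects the coin with the highest
    posterior mean under a Beta(1, 1) prior. (Hypothetical baseline,
    not the paper's exact algorithm.)"""
    rng = random.Random(seed)
    k = len(true_probs)
    heads = [1] * k  # Beta(1, 1) uniform prior: one pseudo-head...
    tails = [1] * k  # ...and one pseudo-tail per coin
    for t in range(budget):
        i = t % k  # round-robin: purchase a flip of the next coin in the cycle
        if rng.random() < true_probs[i]:
            heads[i] += 1
        else:
            tails[i] += 1
    # Posterior mean of each coin's bias after spending the budget
    means = [h / (h + t) for h, t in zip(heads, tails)]
    return max(range(k), key=lambda i: means[i])
```

An RL approach would instead treat the vector of Beta posteriors as the state of a Markov decision problem and learn which coin to flip next; the paper's finding is that such learned policies did not beat simple baselines like the one above.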
Citation
A. Kapoor, R. Greiner. "Reinforcement Learning for Active Model Selection". Utility-Based Data Mining (UBDM), ACM, pp 17-23, August 2005.

Keywords: budgeted learning, active learning, machine learning, reinforcement learning

Category: In Workshop

Web Links: ACM Digital Library
BibTeX
@inproceedings{Kapoor+Greiner:UBDM05,
  author    = {Aloak Kapoor and Russ Greiner},
  title     = {Reinforcement Learning for Active Model Selection},
  booktitle = {UBDM '05: Proceedings of the 1st International Workshop on Utility-Based Data Mining},
  publisher = {ACM},
  pages     = {17--23},
  year      = {2005},
}

Last Updated: November 21, 2019