Automatic Basis Selection for RBF Networks
This paper proposes a generic criterion for determining the optimum number of basis functions in radial basis function (RBF) neural networks. The generalization performance of an RBF
network relates to its prediction capability on independent test 
data. This performance gives a measure of the quality of the 
chosen model. An RBF network with an overly restricted basis 
gives poor predictions on new data, since the model has too 
little flexibility (yielding high bias and low variance). By contrast, 
an RBF network with too many basis functions also gives poor 
generalization performance since it is too flexible and fits too 
much of the noise on the training data (yielding low bias but high 
variance). Bias and variance are complementary quantities, and 
it is necessary to assign the number of basis functions optimally
in order to achieve the best compromise between them. In this 
paper we use Stein's unbiased risk estimator (SURE) to derive an 
analytical criterion for assigning the appropriate number of basis 
functions. Both the known-noise and unknown-noise cases are considered, and the efficacy of the criterion in each situation is illustrated experimentally. The paper also presents an empirical comparison between this method and two well-known classical methods: cross-validation and the Bayesian information criterion (BIC).
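For concreteness, here is a minimal Python sketch of the known-noise case (an illustration only, not the paper's implementation: the toy data, the evenly spaced centers, and the fixed kernel width are all hypothetical choices). An RBF fit with fixed centers is linear in its weights, and a full-rank design with k columns has k effective degrees of freedom, so a SURE-style risk estimate reduces to RSS/n + 2*sigma^2*k/n - sigma^2; the basis size is chosen to minimize it.

import numpy as np

def rbf_design(x, centers, width):
    # Gaussian RBF design matrix: H[i, j] = exp(-(x_i - c_j)^2 / (2 * width^2)).
    d = x[:, None] - centers[None, :]
    return np.exp(-d ** 2 / (2.0 * width ** 2))

def sure_score(y, H, sigma2):
    # SURE estimate of prediction risk for the least-squares fit y_hat = H w.
    # With a full-rank design of k columns, the effective degrees of freedom
    # equal k, so R_hat = RSS/n + 2*sigma2*k/n - sigma2.
    n, k = H.shape
    w, *_ = np.linalg.lstsq(H, y, rcond=None)
    rss = float(np.sum((y - H @ w) ** 2))
    return rss / n + 2.0 * sigma2 * k / n - sigma2

# Toy 1-D regression problem (synthetic data, for illustration only).
rng = np.random.default_rng(0)
n, sigma = 100, 0.2
x = np.sort(rng.uniform(-3.0, 3.0, n))
y = np.sin(2.0 * x) + sigma * rng.normal(size=n)

# Score each candidate basis size with SURE (known-noise case).
candidates = list(range(2, 31))
scores = [sure_score(y, rbf_design(x, np.linspace(x.min(), x.max(), k), 0.5),
                     sigma ** 2)
          for k in candidates]
best_k = candidates[int(np.argmin(scores))]
print("SURE-selected number of basis functions:", best_k)

In the unknown-noise case a variance estimate must be plugged in; one common choice (again an assumption, not necessarily the paper's estimator) is sigma_hat^2 = RSS_max / (n - k_max), the residual variance of the most flexible candidate model.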
Citation
A. Ghodsi, 
D. Schuurmans. 
"Automatic Basis Selectsion for RBF Networks". IJCNN, July 2003.
	
		| Keywords: | RBF networks, machine learning | 
	
		| Category: | In Conference | 
BibTeX
@inproceedings{Ghodsi+Schuurmans:IJCNN03,
  author = {Ali Ghodsi and Dale Schuurmans},
  title = {Automatic Basis Selection for RBF Networks},
  booktitle = {International Joint Conference on Neural Networks (IJCNN)},
  year = 2003,
  month = jul,
}

Last Updated: March 21, 2007
Submitted by Nelson Loyola