Department of Chemical Engineering, Ferdowsi University of Mashhad
Data acquisition in chemical engineering processes is expensive, and the collected data are inevitably contaminated with measurement errors. Efficient algorithms are therefore required to filter out the noise and capture the true underlying trend hidden in the training data sets. Regularization networks, which are the exact solution of the multivariate linear regularization problem, provide an appropriate framework for this demanding task. Such a network can be represented as a single-hidden-layer neural network with one neuron for each distinct exemplar. Efficient training of a Regularization network requires calculation of the linear synaptic weights, selection of the isotropic spread (σ), and computation of the optimum level of regularization (λ). The latter two parameters (σ and λ) are highly correlated with each other. A novel method is presented in this article for de-correlating these parameters and selecting the optimal values of σ and λ. The plot of λ versus σ suggests a threshold that can be regarded as the optimal isotropic spread for which the Regularization network provides an appropriate model for the training data set. It is also shown that the effective degrees of freedom of a Regularization network are a function of both the regularization level and the isotropic spread. A readily calculable measure of the approximate degrees of freedom of a Regularization network is also introduced, which may be used to de-couple σ and λ.
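The structure described above can be illustrated with a minimal sketch. It assumes a Gaussian radial basis kernel with isotropic spread σ and Tikhonov-style regularization λ, with the synaptic weights obtained by solving (K + λI)w = y and the effective degrees of freedom taken as the trace of the smoother matrix K(K + λI)⁻¹; function names and the toy data are illustrative, not from the article itself.

```python
import numpy as np

def gaussian_kernel(X, Z, sigma):
    """Pairwise Gaussian RBF kernel with isotropic spread sigma."""
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def fit_regularization_network(X, y, sigma, lam):
    """Solve (K + lam*I) w = y for the linear synaptic weights of a
    single-hidden-layer network with one Gaussian neuron per exemplar.

    Also returns the effective degrees of freedom, computed as the
    trace of the smoother matrix S = K (K + lam*I)^{-1}, which depends
    on both sigma and lam (as the abstract notes).
    """
    K = gaussian_kernel(X, X, sigma)
    n = len(y)
    A = K + lam * np.eye(n)
    w = np.linalg.solve(A, y)
    df = np.trace(K @ np.linalg.inv(A))
    return w, df

def predict(X_train, w, X_new, sigma):
    """Evaluate the trained network at new input points."""
    return gaussian_kernel(X_new, X_train, sigma) @ w
```

In this sketch, sweeping λ for each candidate σ and tracking the effective degrees of freedom mimics the kind of diagnostic plot the article uses to locate the threshold spread.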