RKHS based Functional Analysis for Exact Incremental Learning
Sethu Vijayakumar and Hidemitsu Ogawa
Abstract of paper published in Neurocomputing: Special issue on theoretical analysis of real-valued function classes
We investigate the problem of incremental learning in artificial neural
networks by viewing it as a sequential function approximation problem.
We introduce a framework for discussing the generalization ability of a
trained network in the original function space, using tools of functional
analysis based on Reproducing Kernel Hilbert Spaces (RKHS).
Using this framework, we devise a method of carrying out optimal incremental
learning with respect to the entire set of training data, reusing the results
of the previous stage of learning while effectively incorporating the newly
available training data.
Most importantly, the incrementally learned function has the same (optimal)
generalization ability as would have been achieved by batch learning on
the entire set of training data; hence, we refer to this as exact learning.
This ensures that both the learning operator and the learned function can
be computed using an online incremental scheme.
Finally, we also provide a simplified closed-form relationship
between the learned functions before and after the incorporation of new data
for various optimization criteria, opening avenues for work on optimal
training set selection.
We also show that learning within this framework
is inherently well suited to applying novel model selection strategies and
to introducing bias and a priori knowledge in a more systematic way.
Moreover, it provides useful insight into performing kernel-based
approximations, of which regularization networks and SVMs are special
cases, in an online setting.
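As a concrete illustration of the exact-learning idea, the following sketch shows the simplest linear (ordinary least-squares) instance rather than the paper's full RKHS formulation: the solution is updated one sample at a time via the Sherman-Morrison identity so that, after each update, it exactly equals the batch solution over all data seen so far. All names and the initialization scheme here are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

def batch_fit(X, y):
    # Batch least-squares solution over the entire data set.
    return np.linalg.solve(X.T @ X, X.T @ y)

class IncrementalLS:
    """Exact incremental least squares (illustrative sketch).

    Maintains P = (X^T X)^{-1} and the weight vector w so that, after
    each rank-one Sherman-Morrison update, w matches the batch solution
    over all samples seen so far (assuming X0.T @ X0 is invertible).
    """
    def __init__(self, X0, y0):
        self.P = np.linalg.inv(X0.T @ X0)
        self.w = self.P @ (X0.T @ y0)

    def update(self, x, y):
        # Incorporate one new sample (x, y) without revisiting old data.
        x = x.reshape(-1, 1)
        Px = self.P @ x
        k = Px / (1.0 + x.T @ Px)              # gain vector
        self.w = self.w + (k * (y - x.T @ self.w)).ravel()
        self.P = self.P - k @ Px.T             # rank-one inverse update
        return self.w

# Incremental and batch solutions coincide on the same data:
rng = np.random.default_rng(0)
X = rng.standard_normal((20, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.01 * rng.standard_normal(20)
model = IncrementalLS(X[:5], y[:5])
for i in range(5, 20):
    model.update(X[i], y[i])
print(np.allclose(model.w, batch_fit(X, y)))
```

The same recursion carries over to regularized and kernel settings by working with the relevant Gram or feature-space matrices, which is the regime the paper's RKHS analysis addresses.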
Click here to download a gzip-ed version of the paper (22 pages). Click here for a PDF version.