
Introduction


Wavelet networks were first introduced by [Zhang and Benveniste, 1992] in the context of non-parametric regression of functions in $ {\Bbb{L}}^2({\Bbb{R}}^2)$. In wavelet networks, the radial basis functions of RBF networks are replaced by wavelets. During the training phase, the network weights as well as the degrees of freedom (position, scale, orientation) of the wavelet functions are optimized. Zhang and Benveniste observed that wavelet networks inherit the properties of the wavelet decomposition; they especially mention the universal approximation property, the availability of convergence rates, and the explicit link between the network coefficients and the wavelet transform.
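To make this concrete, the output of a wavelet network can be written as a weighted superposition $ f({\bf x}) = \sum_i w_i \psi_{{\bf n}_i}({\bf x})$, where each parameter vector $ {\bf n}_i$ collects the position, scale and orientation of the $i$-th wavelet. The following is only a minimal sketch of such a network; the choice of an odd Gabor mother wavelet and all function names (odd_gabor, wavelet_response, wavelet_network) are illustrative assumptions, not taken from this paper.

import numpy as np

def odd_gabor(x, y):
    # One possible mother wavelet (an odd Gabor function); the paper does
    # not commit to a particular mother wavelet in this section.
    return np.sin(2.5 * x) * np.exp(-0.5 * (x**2 + y**2))

def wavelet_response(img_x, img_y, cx, cy, theta, sx, sy):
    # Translate, rotate and scale the mother wavelet: these are the
    # degrees of freedom (position, scale, orientation) optimized in training.
    dx, dy = img_x - cx, img_y - cy
    u = ( np.cos(theta) * dx + np.sin(theta) * dy) / sx
    v = (-np.sin(theta) * dx + np.cos(theta) * dy) / sy
    return odd_gabor(u, v)

def wavelet_network(img_x, img_y, params, weights):
    # f(x) = sum_i w_i * psi_i(x): a weighted superposition of N wavelets.
    # params: sequence of (cx, cy, theta, sx, sy); weights: sequence of w_i.
    out = np.zeros_like(img_x, dtype=float)
    for p, w in zip(params, weights):
        out += w * wavelet_response(img_x, img_y, *p)
    return out

# In the training phase, both `weights` and `params` would be optimized,
# e.g. by gradient descent, to minimize the reconstruction error ||image - f||^2.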

However, since their introduction in 1992, wavelet networks (WNs) have received little attention in recent publications. [Szu et al., 1992,Szu et al., 1996] used WNs for signal representation and classification. They explained how a WN template, a superwavelet, can be generated and presented original ideas for how it can be used for pattern matching. In addition, they mention the large data compression achieved by such a WN representation. [Zhang, 1997] showed that WNs are able to handle nonlinear regression of moderately large input dimension with sparse training data. [Holmes and Mallick, 2000] analyzed WNs in the context of a Bayesian framework. More recently, [Reyneri, 1999] discussed the relations between artificial neural networks (ANNs), fuzzy systems, and WNs.

It appears that in the cited works, WNs have only been applied to specific problems, while their properties have never been investigated systematically. Starting from a wavelet representation as described by [Zhang, 1997], we have analyzed the properties of such a representation. [Zhang and Benveniste, 1992] mentioned, e.g., that there is an explicit link between the weights (wavelet coefficients) and an appropriate transform. This link is established through wavelet theory. We have further investigated the following properties of wavelet networks:

We will exploit the above properties for object representation. In particular, we will show that tracking and recognition are facilitated by these properties: both can be carried out efficiently in the low-dimensional wavelet subspace, while the mapping of an input image into the wavelet subspace can be established with a small number of local image filtrations (projections); a sketch of such a projection follows below. We have carried out a small set of experiments, affine face tracking and face recognition, in order to support our claims. Some of the presented ideas, such as the use of WN templates (superwavelets), extend the ideas of [Szu et al., 1992,Szu et al., 1996]. In addition, as mentioned above, WNs can be used to optimize image filters. We have used the optimized wavelets as filters in a face-pose estimation experiment: while non-optimized filters yielded an estimation error of $ 0.65^{\circ}$, the error decreased to $ 0.21^{\circ}$ with the optimized wavelets.
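As an illustration of how the mapping into the wavelet subspace could be computed, the sketch below (reusing numpy and the hypothetical wavelet_response helper from the previous listing) obtains the weights of a new image with respect to a fixed set of N wavelets from N local filtrations (inner products) plus one small N x N linear system that accounts for the wavelets being non-orthogonal. This is a standard least-squares projection given under these assumptions, not necessarily the exact procedure used in the paper.

def project_to_wavelet_subspace(image, img_x, img_y, params):
    # Map `image` into the span of the N fixed wavelets given by `params`.
    # Each entry of b is one local filtration <image, psi_i>; the Gram
    # matrix G compensates for the wavelets not being orthogonal, so the
    # returned weights are the least-squares coefficients of the image
    # in the wavelet subspace.
    N = len(params)
    Psi = np.stack([wavelet_response(img_x, img_y, *p) for p in params]).reshape(N, -1)
    b = Psi @ image.ravel()        # N local filtrations (projections)
    G = Psi @ Psi.T                # Gram matrix <psi_i, psi_j>
    return np.linalg.solve(G, b)   # low-dimensional representation

Tracking and recognition can then operate on the resulting N-dimensional weight vectors rather than on the full images.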

The content of this paper is an excerpt of the dissertation of [Krueger, 2001].


