next up previous
Next: Bibliography Up: Zoom-lens Camera Calibration Previous: Experimental Results

Conclusions and Outlook

We have presented a neural framework for zoom-lens camera calibration that builds on our recently introduced neurocalibration approach [1]. Our experimental results have demonstrated better performance than Wilson's approach, a standard reference in this domain. To improve the accuracy of our approach, more sampling positions are needed during data collection, along with removal of lens distortion [9],[6] from the images before calibration. These two goals define our future directions for this work. We believe this approach has the following key features compared with other techniques (e.g., [7],[13],[14],[8]):

  1. It is general: it can accommodate, in a straightforward manner, any number or combination of lens control parameters, e.g., zoom, focus and/or aperture.
  2. It is flexible: since no a priori knowledge of how lens settings affect the model parameters can be assumed, the framework is designed to capture complex variations in the model parameters across continuous ranges of the control space.
  3. It integrates parameter formulation with the minimization of the overall calibration error: the formulations for all model parameters are refined simultaneously, whereas other approaches fit one parameter at a time, so the final level of error generally depends on the sequence in which the models are fitted to the data.
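To make the second feature concrete, the mapping each MLFN must learn can be sketched as a small feed-forward network that takes normalized lens control settings and produces one camera model parameter. The layer sizes, initialization, and the choice of focal length as the output are illustrative assumptions, not the implementation used in this work.

```python
import numpy as np

def mlfn_forward(controls, W1, b1, W2, b2):
    """Illustrative MLFN: maps normalized lens controls (zoom, focus)
    to a single camera model parameter, e.g., the focal length.
    One hidden tanh layer, linear output; sizes are assumptions."""
    h = np.tanh(controls @ W1 + b1)   # hidden layer
    return h @ W2 + b2                # linear output: predicted parameter

rng = np.random.default_rng(0)
W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)

controls = np.array([[0.25, 0.75]])   # (zoom, focus), each scaled to [0, 1]
param = mlfn_forward(controls, W1, b1, W2, b2)
print(param.shape)  # (1, 1): one scalar model parameter per control setting
```

Because the hidden layer is nonlinear, such a network can represent smooth but complex parameter variations over the continuous control space without any assumed functional form.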

In the context of neural networks, this work has a number of novel aspects. First, the neurocalibration network is an atypical multi-layered network structure in which the slopes of the output neurons' activation functions vary as learning proceeds; typically, the activation functions of the network neurons are chosen beforehand and kept fixed during training. Second, each weight of the neurocalibration network has its own physical meaning, since it represents a particular camera model parameter. Accordingly, each network weight may play a different role during training. Neural network applications usually treat the network as a 'black box', and training algorithms typically view the network weights as a single vector of interchangeable parameters. Last, the combination of the neurocalibration network and the other MLFNs used in the global optimization stage is itself a novel, atypical network structure. We have developed an extended variant of the well-known backpropagation algorithm to train these networks, minimizing the overall calibration error over all the collected calibration data while also minimizing the fitting error of each MLFN.
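The first novel aspect, a trainable activation slope, can be sketched with a single output neuron whose slope s is updated by gradient descent alongside its weight w. This is a one-neuron illustration under our own assumptions (squared-error loss, scalar input), not the actual neurocalibration network.

```python
import numpy as np

def neuron(x, w, s):
    """Output neuron with a trainable activation slope s:
    y = tanh(s * w * x). Usually s is fixed; here it is learned."""
    return np.tanh(s * w * x)

# Gradient descent on E = (y - target)^2 / 2 for a single training pair.
x, target = 0.5, 0.8
w, s, lr = 1.0, 1.0, 0.1
for _ in range(1000):
    y = neuron(x, w, s)
    err = y - target
    dtanh = 1.0 - y ** 2          # tanh'(u) = 1 - tanh(u)^2
    grad_w = err * dtanh * s * x  # dE/dw
    grad_s = err * dtanh * w * x  # dE/ds
    w -= lr * grad_w
    s -= lr * grad_s

print(neuron(x, w, s))  # converges toward the target
```

Note that both gradients are computed from the same forward pass before either parameter is updated, exactly as ordinary backpropagation requires.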

One ongoing research direction is to use this framework to model the effect of the direction of lens adjustment on the calibrated camera parameters. According to an earlier study [14] using the autocollimated laser approach, the direction in which the zoom or focus is adjusted affects the trajectory of the image center; that is, the variation of the image center with a lens control variable (focus or zoom) differs when the variable is increased rather than decreased. This is attributed mainly to mechanical hysteresis in the lens system, and it suggests that the setting-varying parameters may behave differently depending on the direction of lens adjustment. To avoid this phenomenon, which would complicate the calibration process, any lens setting could consistently be approached from one direction; however, this would introduce delays in the on-line operation of the system. Although modeling the direction effect will considerably increase the volume of calibration data to be collected, we believe that accommodating it within the same calibration framework will improve the overall accuracy and usefulness of the adjustable zoom-lens camera model.
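One way the direction effect could be accommodated within the same framework is to augment each MLFN's input with direction flags, so that the network can learn distinct parameter behavior for increasing and decreasing settings. The encoding below is purely an illustrative assumption of ours, not the design adopted in this work.

```python
import numpy as np

def make_input(zoom, focus, zoom_dir, focus_dir):
    """Augment normalized lens controls with direction flags
    (+1 = setting approached by increasing it, -1 = by decreasing it).
    Distinct inputs for the two directions let an MLFN model
    hysteresis; this encoding is a hypothetical sketch."""
    return np.array([zoom, focus, zoom_dir, focus_dir], dtype=float)

x_up = make_input(0.4, 0.6, +1, +1)
x_down = make_input(0.4, 0.6, -1, -1)
# Same lens settings, different approach direction -> distinct inputs,
# so the predicted calibration parameters may differ between the two.
print(not np.array_equal(x_up, x_down))  # True
```

The cost of this scheme is the one noted above: the calibration data must now sample each setting from both directions, roughly doubling the collection effort.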


Moumen T. Ahmed 2001-06-27