

Introduction

Knowledge of illuminant directions is necessary both in computer vision, for shape reconstruction, and in image-based computer graphics, for realistic manipulation of existing images.

An early attempt to recover a more general illumination description [4] modeled multiple light sources as a polynomial distribution. A discussion of the various types of light sources can be found in [5].

In general, there are two approaches to multiple illuminant estimation: one recovers environment radiance maps, and the other estimates the significant light sources. Most methods require a calibration object of fixed shape, typically a sphere. In [3] a specular sphere is used as a light probe to measure the incident illumination at the location where synthetic objects will be placed in the scene. Such a sphere, however, may have strong inter-reflections with other objects in the scene, especially those close to it.

Using the Lambertian shading model, Yang and Yuille [10] observed that multiple light sources can be deduced from boundary conditions, i.e., the image intensity along the occluding boundaries and at singular points. Based on this idea, Zhang and Yang [11] showed that the illuminant directions are closely related to critical points on a Lambertian sphere and that, by identifying most of those critical points, the illuminant directions can be recovered if certain conditions are satisfied. Conceptually, a critical point is a point on the surface whose neighbors are not all illuminated by the same set of light sources. More specifically, a point in the image is called a critical point if the surface normal at the corresponding point on the object's surface is perpendicular to one of the light source directions (illustrated in Figure 1). However, because the detection of critical points is sensitive to noise, the estimated light directions are not very robust to noisy data. In [6] a calibration object that comprises diffuse and specular parts is proposed.

Figure 1: The surface normal at a critical point is perpendicular to the direction of one of the light sources.
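To make the definition concrete (the notation below is ours, not that of [11]): under the Lambertian model, the image of a convex surface of albedo $\rho$ illuminated by $m$ distant point sources with unit directions $\mathbf{L}_1,\dots,\mathbf{L}_m$ and intensities $s_1,\dots,s_m$ is

\begin{displaymath}
I(x) \;=\; \rho \sum_{i=1}^{m} s_i \, \max\!\left(0,\; \mathbf{n}(x)\cdot\mathbf{L}_i\right),
\end{displaymath}

where $\mathbf{n}(x)$ is the unit surface normal at the point projecting to pixel $x$. A critical point is then a pixel at which $\mathbf{n}(x)\cdot\mathbf{L}_i = 0$ for some source $i$: crossing it changes the set of sources whose $\max(\cdot)$ term is active, which is why the intensity behavior around critical points constrains the individual illuminant directions.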

The idea of using an arbitrary known shape can also be found in the approach of Sato et al. [8], which exploits information about the radiance distribution inside shadows cast by an object of known shape in the scene. In [8] the illumination distribution of a scene is approximated by discrete sampling of an extended light source, and the whole distribution is represented as a set of point sources distributed equally over the scene at the node directions of a geodesic dome. More recently, a signal-processing approach [7,1] described a comprehensive mathematical framework for evaluating illumination parameters through convolution. Unfortunately, this framework does not provide a method to estimate high-frequency illumination, such as directional light sources, when the BRDF is smooth, as in the Lambertian case. Convolution is a local operation, and the problem is ill-posed when only local information is considered [2]. However, this problem can be overcome by using global information, as proposed by Wang and Samaras [9] (described in Section 2).
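As an illustration of this sampling scheme (a minimal sketch, not the implementation of [8]; the icosahedron-based construction, the subdivision frequency n, and the function names are assumptions made here), the node directions of a geodesic dome can be generated by subdividing the faces of an icosahedron and projecting the nodes onto the unit sphere; each resulting direction then carries one unknown point-source radiance:

import numpy as np

# 12 vertices of a unit icosahedron
PHI = (1.0 + np.sqrt(5.0)) / 2.0
ICO_VERTS = np.array(
    [[-1,  PHI, 0], [ 1,  PHI, 0], [-1, -PHI, 0], [ 1, -PHI, 0],
     [ 0, -1,  PHI], [ 0,  1,  PHI], [ 0, -1, -PHI], [ 0,  1, -PHI],
     [ PHI, 0, -1], [ PHI, 0,  1], [-PHI, 0, -1], [-PHI, 0,  1]], float)
ICO_VERTS /= np.linalg.norm(ICO_VERTS, axis=1, keepdims=True)
ICO_FACES = [(0, 11, 5), (0, 5, 1), (0, 1, 7), (0, 7, 10), (0, 10, 11),
             (1, 5, 9), (5, 11, 4), (11, 10, 2), (10, 7, 6), (7, 1, 8),
             (3, 9, 4), (3, 4, 2), (3, 2, 6), (3, 6, 8), (3, 8, 9),
             (4, 9, 5), (2, 4, 11), (6, 2, 10), (8, 6, 7), (9, 8, 1)]

def geodesic_node_directions(n=4):
    """Sketch: unit node directions of a geodesic dome obtained by
    splitting each icosahedron face into n^2 triangles and projecting
    the nodes onto the unit sphere.  Nodes shared along face edges are
    merged by rounding their coordinates."""
    nodes = set()
    for (ia, ib, ic) in ICO_FACES:
        a, b, c = ICO_VERTS[ia], ICO_VERTS[ib], ICO_VERTS[ic]
        for i in range(n + 1):
            for j in range(n + 1 - i):
                k = n - i - j
                p = (i * a + j * b + k * c) / n   # barycentric node
                p = p / np.linalg.norm(p)         # project to sphere
                nodes.add(tuple(np.round(p, 6)))
    # each direction would be paired with one unknown source radiance
    return np.array(sorted(nodes))

if __name__ == "__main__":
    dirs = geodesic_node_directions(n=4)
    print(len(dirs), "sampling directions")  # 10*n^2 + 2 = 162 for n = 4

The point of such a discretization is that estimating the illumination distribution reduces to estimating a finite vector of nonnegative source radiances, one per node direction.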

