

An Estimation Algorithm

In [9] Wang and Samaras proposed an illuminant direction detection method that minimizes a global error, using a single image of an object with known geometry and Lambertian reflectance. In general, each point of a surface is illuminated by a subset of all the directional light sources in the scene. The method first segments the surface into regions (virtual ``light patches''), each illuminated by a different set of sources. The regions are separated by boundaries consisting of critical points, where one illuminant is perpendicular to the surface normal [11]. Within a patch the surface behaves as if lit by a single ``virtual light'' equal to the sum of the sources illuminating it, so the real lights are extracted from the segmented light patches rather than directly from the critical points, which are relatively sensitive to noise. Since a region contains many more points than its boundary, the method's accuracy does not depend on extracting the boundary exactly, and it tolerates noisy and missing data better. It can also adjust and merge light patches to minimize the least-squares error. The number of critical boundaries detected is sensitive to the threshold used in the Hough transform, especially when the data are noisy or incomplete; in this method, however, spurious lights are eliminated during the merging stage that follows the Hough transform.

The ability to perform well on imperfect data is crucial for extending the method from spheres, for which it was initially developed, to arbitrary shapes. When the observed shape is not spherical, its normals are mapped to a sphere, leaving many normals missing; the method still works on such incomplete spheres as long as each light patch contains enough points for the least-squares fit to work correctly. It detects multiple illuminant directions through the following six steps (illustrative sketches of the main computations are given after the list):

  1. Detect critical points.
  2. Find initial critical boundaries by Hough transform.
  3. Adjust critical boundaries. Move every critical boundary by a small step; a reduction in the least-squares error indicates a better solution. Update the boundaries with a ``greedy'' algorithm to minimize the total error.
  4. Merge spurious critical boundaries. If two critical boundaries are closer than a threshold angle (e.g., 5 degrees), they are replaced by their average.
  5. Remove spurious critical boundaries. Test every critical boundary and remove it if the least-squares error does not increase. Boundaries are tested in increasing order of Hough transform votes, so the less trustworthy ones are tested first.
  6. Calculate the real lights along a boundary by subtracting neighboring virtual lights.
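The quantity that drives steps 3-6 is the per-patch ``virtual light'': within a patch, a Lambertian surface lit by a fixed subset of sources behaves as if lit by their sum, so each patch's virtual light can be recovered by an ordinary least-squares fit over the patch's normals and intensities, and a real light is the difference of the virtual lights on the two sides of its critical boundary. The following is a minimal sketch of these two computations, not the authors' implementation; it assumes the albedo is folded into the light vectors, there is no ambient term, and the normals, intensities and patch labels are given as numpy arrays.

    # A minimal sketch, assuming `normals` is an (N, 3) array of unit surface
    # normals mapped onto the sphere, `intensities` the corresponding (N,)
    # vector of image intensities, and `labels` an (N,) array assigning each
    # point to a light patch.  Not the authors' code.
    import numpy as np

    def fit_virtual_lights(normals, intensities, labels):
        """Least-squares fit of one virtual light vector per patch.

        Within a patch lit by a fixed subset of sources, a Lambertian surface
        obeys I(n) ~= n . v, where v is the (albedo-scaled) sum of the active
        real lights.
        """
        virtual = {}
        for patch in np.unique(labels):
            mask = labels == patch
            # Solve min_v || normals[mask] @ v - intensities[mask] ||^2.
            v, *_ = np.linalg.lstsq(normals[mask], intensities[mask], rcond=None)
            virtual[patch] = v
        return virtual

    def real_light_across_boundary(v_lit, v_unlit):
        """Step 6: a real light is the difference between the virtual lights of
        the two patches adjacent to its critical boundary (the side where the
        light is on minus the side where it is off)."""
        d = v_lit - v_unlit
        return d / np.linalg.norm(d), float(np.linalg.norm(d))  # direction, intensity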

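The Hough transform in step 2 can be pictured as voting on a discretized sphere of candidate light directions: a critical point with normal n lies on the boundary of some light L with L . n = 0, so it votes for every candidate direction roughly perpendicular to its normal, and peaks in the accumulator give the initial critical boundaries. The sketch below illustrates this accumulation under assumed, illustrative choices of binning and tolerance; it is not the parameterization used in [9].

    # A rough sketch of the Hough accumulation, assuming `critical_normals` is
    # an (M, 3) array of unit normals at the detected critical points.  The
    # binning (n_theta x n_phi) and the tolerance `tol` are illustrative.
    import numpy as np

    def hough_candidate_lights(critical_normals, n_theta=90, n_phi=180, tol=0.03):
        thetas = np.linspace(0.0, np.pi, n_theta)                   # polar bins
        phis = np.linspace(0.0, 2.0 * np.pi, n_phi, endpoint=False)

        # Candidate light directions on the discretized unit sphere.
        dirs = np.stack([
            np.outer(np.sin(thetas), np.cos(phis)),
            np.outer(np.sin(thetas), np.sin(phis)),
            np.outer(np.cos(thetas), np.ones_like(phis)),
        ], axis=-1)                                                 # (n_theta, n_phi, 3)

        votes = np.zeros((n_theta, n_phi), dtype=int)
        for n in critical_normals:
            # A critical point of light L satisfies L . n = 0, so it votes for
            # every candidate direction (nearly) perpendicular to its normal.
            votes += (np.abs(dirs @ n) < tol).astype(int)

        return dirs, votes   # peaks in `votes` give the initial critical boundaries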

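Steps 4 and 5 can be read as greedy operations on the candidate boundary set, always scored by the total per-patch least-squares error. The sketch below is one plausible reading under stated assumptions: points are assigned to patches by the sign pattern of their normals against the candidate boundaries, near-parallel candidates are averaged, and a boundary is dropped whenever removing it does not increase the total error. The segmentation scheme and the exact acceptance tests are assumptions for illustration, not the paper's procedure.

    # An illustrative sketch, not the paper's exact procedure.  `normals` and
    # `intensities` are as in the previous sketch; boundary candidates are unit
    # 3-vectors (the light directions found by the Hough step).
    import numpy as np

    def segment_by_boundaries(normals, boundaries):
        """Label each point by which side of every candidate boundary it lies on."""
        if not boundaries:
            return np.zeros(len(normals), dtype=int)
        signs = (normals @ np.stack(boundaries).T) > 0.0            # (N, B)
        return (signs @ (1 << np.arange(signs.shape[1]))).astype(int)

    def total_error(normals, intensities, boundaries):
        """Sum of per-patch least-squares residuals for a candidate boundary set."""
        labels = segment_by_boundaries(normals, boundaries)
        err = 0.0
        for p in np.unique(labels):
            m = labels == p
            v, *_ = np.linalg.lstsq(normals[m], intensities[m], rcond=None)
            r = normals[m] @ v - intensities[m]
            err += float(r @ r)
        return err

    def merge_close(directions, angle_deg=5.0):
        """Step 4: replace boundaries closer than `angle_deg` by their average."""
        merged, cos_thr = [], np.cos(np.radians(angle_deg))
        for d in directions:
            for i, m in enumerate(merged):
                if abs(m @ d) > cos_thr:            # nearly parallel boundaries
                    avg = m + np.sign(m @ d) * d    # handle antipodal candidates
                    merged[i] = avg / np.linalg.norm(avg)
                    break
            else:
                merged.append(d / np.linalg.norm(d))
        return merged

    def prune(normals, intensities, boundaries_low_votes_first):
        """Step 5: drop a boundary whenever its removal does not increase the
        total error; pass the least-voted (least trustworthy) boundaries first."""
        kept = list(boundaries_low_votes_first)
        err = total_error(normals, intensities, kept)
        for b in list(kept):
            trial = [x for x in kept if x is not b]
            trial_err = total_error(normals, intensities, trial)
            if trial_err <= err:
                kept, err = trial, trial_err
        return kept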