In [9], Wang and Samaras proposed an illuminant-direction detection process that minimizes global error, using a single image of any object with known geometry and Lambertian reflectance. In general, each point of a surface is illuminated by a subset of all the directional light sources in the scene. The method first segments the surface into regions (virtual ``light patches''), each illuminated by a different set of sources. The regions are separated by boundaries consisting of critical points, where one illuminant is perpendicular to the surface normal [11]. Real lights are then extracted from the segmented virtual ``light patches'' rather than from the critical points, which are relatively sensitive to noise. Since a region contains more points than its boundary, the method's accuracy does not depend on the exact extraction of the boundary, and it tolerates noisy and missing data better. Furthermore, the method can adjust and merge light patches to minimize the least-squares error. The number of critical boundaries detected is sensitive to the threshold used in the Hough transform, especially for noisy or incomplete data; in this method, however, spurious lights are eliminated during the merging stage that follows the Hough transform. The ability to perform well on imperfect data is crucial for extending the method from spheres (for which it was initially developed) to arbitrary shapes: when the observed shape is not spherical, its normals are mapped to a sphere, leaving many normals missing. The method still works for such incomplete spheres, as long as each light patch contains enough points for the least-squares estimation to work correctly. It detects multiple illuminant directions through the following six steps (results and further details in [9]):
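The core per-patch computation can be sketched as follows: under the Lambertian model, the observed intensity at a surface point is I = rho * (N . L), so within a single light patch the scaled light vector can be recovered by ordinary least squares over the patch's normals. This is a minimal illustrative sketch (function name and the synthetic-patch setup are ours, not from [9]); the full method additionally performs the Hough-based boundary detection and the patch adjusting/merging described above.

```python
import numpy as np

def estimate_light_direction(normals, intensities):
    """Least-squares estimate of one directional light inside a 'light patch'.

    Solves I = N @ (rho * L) for the scaled light vector, then splits it
    into a unit direction and a strength (rho * |L|). `normals` is (n, 3),
    `intensities` is (n,). Assumes all points in the patch are lit by the
    same single source (the defining property of a virtual light patch).
    """
    v, _, _, _ = np.linalg.lstsq(normals, intensities, rcond=None)
    strength = np.linalg.norm(v)   # combined albedo * light intensity
    direction = v / strength       # unit illuminant direction
    return direction, strength

# Synthetic usage: sample normals on a sphere, light them from a known
# direction, keep only the illuminated points (one patch), and recover it.
rng = np.random.default_rng(0)
N = rng.normal(size=(500, 3))
N /= np.linalg.norm(N, axis=1, keepdims=True)
L_true = np.array([0.0, 0.6, 0.8])          # unit light direction (assumed)
I = np.clip(N @ L_true, 0.0, None)           # Lambertian shading, rho = 1
lit = I > 0                                  # points inside the light patch
direction, strength = estimate_light_direction(N[lit], I[lit])
```

Because a patch contributes many equations (one per point) to three unknowns, the estimate degrades gracefully with noise and with missing normals, which is exactly why patch-based extraction tolerates incomplete spheres better than boundary-based extraction.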