next up previous
Next: References Up: An Introduction to Previous: The Plenoptic Function

Generation of Novel Illumination

A small number of researchers have considered the problem of rendering images under changing illumination while the viewpoint remains fixed. The simplest approach, suggested by Haeberli [10], uses the principle of superposition: the illumination of a scene under several light sources is the sum of the illumination due to each source alone. An image of the scene is captured for each light source, and weighted sums of these images yield a range of lighting conditions. The technique is extremely simple but is restricted to the fixed set of light source positions used during capture.
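The superposition idea can be sketched in a few lines. This is a minimal illustration with synthetic data, not Haeberli's implementation: each array stands for an image of the scene taken with exactly one light source switched on, and relighting is a weighted sum of those basis images.

```python
import numpy as np

# Hypothetical "basis" images of one scene, each captured with a single
# light source on (tiny synthetic 2x2 images for illustration).
img_light_a = np.array([[0.2, 0.1], [0.0, 0.3]])
img_light_b = np.array([[0.1, 0.4], [0.2, 0.0]])
img_light_c = np.array([[0.0, 0.2], [0.5, 0.1]])

def relight(basis_images, weights):
    """Superposition: the scene under several sources is the weighted
    sum of the images taken under each source alone."""
    return sum(w * img for w, img in zip(weights, basis_images))

# Dim source A to half power, keep B at full power, switch C off.
result = relight([img_light_a, img_light_b, img_light_c], [0.5, 1.0, 0.0])
```

Changing the weights changes only the apparent source intensities; no weight choice can simulate a source at a position that was not photographed, which is the restriction noted above.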

Given three example images of an object illuminated by a single point light source in three different but known positions, the set of all possible appearances of the object can be determined [2]. Once this set, known as the illumination cone, has been constructed, the object's appearance under any number of light sources in any configuration can be synthesized. The main drawback of this approach is that the relationship between the positions of the light sources and the appearance of the image is not explicit.
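The key property of the illumination cone is that new images are non-negative linear combinations of the example images. The sketch below assumes three example images flattened to vectors (synthetic data, not from [2]); a non-negative weight vector then generates one member of the cone.

```python
import numpy as np

# Three example images (flattened to pixel vectors) of the same object,
# each under a single distant point source in a different known position.
rng = np.random.default_rng(0)
basis = rng.random((3, 16))   # rows: the three example images, 16 pixels each

def cone_sample(coeffs, basis):
    """A non-negative linear combination of the example images is itself a
    valid image of the object, i.e. a point in the illumination cone."""
    coeffs = np.asarray(coeffs, dtype=float)
    assert np.all(coeffs >= 0), "cone membership requires non-negative weights"
    return coeffs @ basis

# E.g. first source at double intensity, second at half, third off.
novel = cone_sample([2.0, 0.5, 0.0], basis)
```

Note that the weights are coefficients on images, not light source positions, which is exactly why the method gives no explicit handle on where the synthesized sources are.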

If the surface structure of an image-based object is known, for example the surface normal direction at each point, then local lighting models can be employed to control the illumination of the object. This is the principle used in bump mapping [4], which has been used to enhance the rendering of textures in conventional computer graphics. A variety of techniques, including binocular stereo, photometric stereo [11] and structured light range finding, can be used to estimate the surface structure of real objects. Volumetric information, which can be estimated from a number of images taken from different viewpoints, can also be used to predict the change in appearance as the illumination changes [14].
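A minimal sketch of normal-based relighting, the idea underlying bump mapping: given a per-pixel normal map and albedo (synthetic values below, for illustration only), each pixel is shaded with a local Lambertian model under a chosen light direction.

```python
import numpy as np

# A flat 2x2 surface facing the camera, with one perturbed ("bumped") normal.
normals = np.zeros((2, 2, 3))
normals[..., 2] = 1.0
normals[0, 0] = [0.0, 0.7071, 0.7071]
albedo = np.full((2, 2), 0.8)   # assumed uniform reflectance

def shade_lambertian(normals, albedo, light_dir):
    """Local Lambertian shading: intensity = albedo * max(0, n . l)."""
    l = np.asarray(light_dir, dtype=float)
    l = l / np.linalg.norm(l)
    n_dot_l = np.clip(normals @ l, 0.0, None)   # clamp back-facing to zero
    return albedo * n_dot_l

# Relight with the source along the viewing axis; moving light_dir
# around re-renders the surface under a new illumination.
image = shade_lambertian(normals, albedo, [0.0, 0.0, 1.0])
```

The perturbed normal darkens its pixel relative to the flat ones, which is exactly the shading cue bump mapping exploits.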

One of the problems of using surface structure to control surface illumination is that a lighting model, such as the Lambertian model, must be employed, and such models fail to capture much of the subtlety of real lighting effects. To generate authentic illumination of real objects and scenes, some researchers have attempted to directly measure the appearance of a surface as a function of viewing direction and light source direction. This function is commonly known as the bi-directional reflectance distribution function (BRDF) [16]. Although the BRDF is able to capture these subtleties, it is difficult to measure for real objects and requires a large amount of memory to store.
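The storage cost follows directly from the BRDF being a four-dimensional function of incoming and outgoing directions. A back-of-the-envelope sketch, with angular resolutions and sample sizes chosen purely for illustration (they are assumptions, not figures from the text):

```python
# Tabulating a BRDF at 1-degree angular resolution in all four direction
# parameters, with 3 colour channels and 32-bit float samples.
theta_in, phi_in = 90, 360      # incoming direction (elevation, azimuth)
theta_out, phi_out = 90, 360    # outgoing direction (elevation, azimuth)
channels = 3                    # RGB samples rather than full spectra
bytes_per_sample = 4            # 32-bit float

n_samples = theta_in * phi_in * theta_out * phi_out * channels
size_gib = n_samples * bytes_per_sample / 2**30   # roughly 11.7 GiB
```

Even this modest resolution yields billions of samples per material, which is why dense BRDF tables are impractical without heavy subsampling or compression.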






Bob Fisher
Mon Mar 29 14:58:18 BST 1999