Approximation of the Plenoptic Function

People perceive the world as an interaction of light with the surfaces of the objects surrounding them. A set of light rays passing through the eye and focusing on the retina results in an image of the environment. In this sense, everything that can be seen is contained in the dense array of light rays that fills space.

Adelson et al. [1] first introduced the plenoptic function to describe the structure of light. Measuring this function would involve placing a camera at every possible position in three-dimensional space and recording the light intensity passing through the lens at every possible angle, for every possible wavelength, at every time. Once this 7D function has been captured, generating images of the scene from arbitrary positions reduces to a trivial ray-indexing process.
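The ray-indexing idea can be illustrated with a toy discretisation. The array shape and indexing scheme below are illustrative assumptions, not from [1]; a real capture at useful resolution would be astronomically large.

```python
import numpy as np

# Hypothetical discretised plenoptic sample: radiance indexed by
# position (x, y, z), viewing direction (theta, phi), wavelength and time.
# Tiny placeholder shapes stand in for a dense capture.
rng = np.random.default_rng(0)
plenoptic = rng.random((4, 4, 4, 8, 16, 3, 2))  # x, y, z, theta, phi, lambda, t

def sample_ray(x, y, z, theta, phi, wavelength, t):
    """Rendering reduces to indexing: look up the radiance of one light ray."""
    return plenoptic[x, y, z, theta, phi, wavelength, t]

# An image from a fixed viewpoint is just a 2D slice over viewing angles.
image = plenoptic[1, 2, 3, :, :, 0, 0]   # all (theta, phi) at one position
print(image.shape)  # (8, 16)
```

Once the function is stored, no geometry or lighting computation is needed at render time; view synthesis is pure table lookup.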

The plenoptic function is in practice impossible to capture in full. However, under certain assumptions its dimensionality can be reduced. Two techniques that utilised a 4D subset of the plenoptic function were the Lumigraph [21] and the Light Field [33]. Both consider only the set of light rays leaving (when observed from outside) a convex region bounding the examined scene, so that the radiance along each ray remains constant. They also take a snapshot of the function to eliminate time, and treat the function as monochromatic to avoid sampling over wavelength.

Both approaches parameterise each light ray by its intersections with two a priori known oriented planes. Synthesising an image from a novel view then involves computing the four line parameters for each image ray and resampling the radiance at those parameters. Gortler et al. [21] approximated the resampling process as a linear sum of the products between a quadrilinear basis function and the value at each grid point on the two planes. They also used prior knowledge of the object's geometry to adaptively shape this basis function. Levoy et al. [33] interpolated the 4D function from the nearest points on the grid.
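A minimal sketch of the two steps just described: mapping an image ray to its four line parameters, and resampling a discretised light field with a uniform (non-adaptive) quadrilinear basis. The plane placement at z = 0 and z = 1 and the grid layout are assumptions for illustration, not the parameterisation of either paper.

```python
import numpy as np

def ray_to_uvst(origin, direction):
    """Intersect a ray with the two parameterisation planes z=0 (uv plane)
    and z=1 (st plane). Assumes the ray is not parallel to the planes."""
    o, d = np.asarray(origin, float), np.asarray(direction, float)
    t0 = (0.0 - o[2]) / d[2]          # ray parameter at the uv plane
    t1 = (1.0 - o[2]) / d[2]          # ray parameter at the st plane
    u, v = (o + t0 * d)[:2]
    s, t = (o + t1 * d)[:2]
    return u, v, s, t

def quadrilinear(grid, u, v, s, t):
    """Resample a discretised 4D light field at fractional (u, v, s, t):
    a weighted sum over the 16 surrounding grid points, i.e. a uniform
    quadrilinear basis (the Lumigraph additionally shapes this basis
    adaptively using approximate geometry)."""
    coords = [u, v, s, t]
    lo = [int(np.floor(c)) for c in coords]
    frac = [c - l for c, l in zip(coords, lo)]
    value = 0.0
    for corner in range(16):                   # 2^4 corners of the 4D cell
        idx, weight = [], 1.0
        for axis in range(4):
            bit = (corner >> axis) & 1
            idx.append(lo[axis] + bit)
            weight *= frac[axis] if bit else 1.0 - frac[axis]
        value += weight * grid[tuple(idx)]
    return value
```

Nearest-neighbour lookup, as in the simplest Light Field renderer, would instead round each of the four coordinates and index the grid directly; the quadrilinear version trades sixteen lookups per ray for smoother reconstruction.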

The simplicity of generating arbitrary new images, once the approximation of the plenoptic function has been computed, is the principal advantage of these methods. The preprocessing required to approximate this function, however, is highly expensive in both computation and storage, as the sampling density must be high enough to avoid excessive blurring. Despite the compression improvements proposed, this approach is not feasible for scenes with a wide range of views, such as buildings, owing to the prohibitive storage cost.
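A back-of-envelope calculation makes the storage pressure concrete. The sampling figures below are illustrative assumptions, not numbers from the cited papers.

```python
# Uncompressed storage for one two-plane slab of a light field,
# under assumed (hypothetical) sampling densities.
uv_samples = 32 * 32           # camera positions on the (u, v) plane
st_samples = 256 * 256         # image resolution on the (s, t) plane
bytes_per_sample = 3           # 8-bit RGB radiance

total_bytes = uv_samples * st_samples * bytes_per_sample
print(f"{total_bytes / 2**20:.0f} MiB")  # 192 MiB for a single slab
```

Even at this modest density a single slab runs to hundreds of mebibytes, several slabs are typically needed to surround an object, and a scene as large as a building would multiply the viewpoint count by orders of magnitude.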

Bob Fisher
Wed Jan 23 15:38:40 GMT 2002