The plenoptic function is the 5-dimensional function representing the intensity or chromaticity of the light observed from every position and direction in 3-dimensional space. In image-based modelling the aim is to reconstruct the plenoptic function from a set of example images. Once the plenoptic function has been reconstructed, it is straightforward to generate images by indexing the appropriate light rays.
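As a minimal sketch of this "index the light rays" idea, the snippet below stores a discretised 5-dimensional plenoptic function as an array and looks up the nearest stored ray for a given viewpoint and direction. All resolutions, ranges, and the random placeholder data are illustrative assumptions, not values from any real system.

```python
import numpy as np

# Assumed discretisation of the 5-D plenoptic function:
# P(x, y, z, theta, phi) -> RGB radiance.
NX = NY = NZ = 8        # spatial samples per axis (illustrative)
NTHETA, NPHI = 16, 32   # directional samples (illustrative)

# Random placeholder radiance; in practice this would be
# reconstructed from captured example images.
rng = np.random.default_rng(0)
plenoptic = rng.random((NX, NY, NZ, NTHETA, NPHI, 3), dtype=np.float32)

def sample_ray(x, y, z, theta, phi):
    """Return the nearest stored RGB sample for a viewpoint and direction.

    (x, y, z) lie in [0, 1)^3; theta in [0, pi); phi in [0, 2*pi).
    """
    ix = min(int(x * NX), NX - 1)
    iy = min(int(y * NY), NY - 1)
    iz = min(int(z * NZ), NZ - 1)
    it = min(int(theta / np.pi * NTHETA), NTHETA - 1)
    ip = min(int(phi / (2 * np.pi) * NPHI), NPHI - 1)
    return plenoptic[ix, iy, iz, it, ip]

rgb = sample_ray(0.5, 0.5, 0.5, 1.0, 3.0)  # one reconstructed light ray
```

Rendering a novel view then amounts to evaluating `sample_ray` once per pixel, which is why reconstruction of the function makes image generation straightforward.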
Sampling and storing a 5-dimensional function for any useful region of space is impractical, so researchers have used both constraints on the viewpoint and the coherence inherent in the function to reduce the problem's complexity.
If the plenoptic function is only constructed for a single point in space then its dimensionality is reduced from 5 to 2. This is the principle used in reflection mapping (also known as environment mapping), where the view of the environment from a fixed position is represented by a 2-dimensional texture map. This reduction in the dimensionality of the plenoptic function by constraining the viewpoint is analogous to the observation that certain sequences of images can be mapped onto an image plane, as in the construction of panoramas.
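The 2-dimensional case can be sketched concretely: with the viewpoint fixed, a light ray is identified by direction alone, so an environment-map lookup reduces to mapping a unit direction vector onto 2-D texture coordinates. The latitude-longitude parameterisation below is one common choice among several; the axis convention (y up) is an assumption.

```python
import math

def latlong_coords(dx, dy, dz):
    """Map a view direction to (u, v) texture coordinates in a
    latitude-longitude environment map, with y as the up axis.
    u and v both lie in [0, 1]."""
    n = math.sqrt(dx * dx + dy * dy + dz * dz)
    dx, dy, dz = dx / n, dy / n, dz / n
    u = (math.atan2(dz, dx) + math.pi) / (2 * math.pi)  # longitude
    v = math.acos(dy) / math.pi                         # latitude
    return u, v

# Looking along +x lands in the middle of the map; straight up
# lands on the top edge.
u_fwd, v_fwd = latlong_coords(1.0, 0.0, 0.0)
u_up, v_up = latlong_coords(0.0, 1.0, 0.0)
```

Indexing the texture map at `(u, v)` then returns the stored radiance for that direction, which is exactly the 2-dimensional restriction of the plenoptic function described above.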
When viewing an environment from inside its convex hull the 5-dimensional plenoptic function is reduced to a 4-dimensional function. This is because any particular ray through the convex space always intersects the same surface. Two similar methods for representing this 4-dimensional function, and for constructing it from example images, have been proposed [12,9]. Both of these methods allow scenes and objects to be rendered very efficiently from novel viewpoints, but even the 4-dimensional function requires very large amounts of storage: a typical scene or simple object will require many hundreds of megabytes.
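A back-of-the-envelope calculation makes the storage cost concrete. For an uncompressed two-plane 4-dimensional representation, the total size is the product of the viewpoint-plane resolution, the image-plane resolution, and the bytes per sample. The resolutions below are illustrative assumptions in the spirit of published examples, not figures from [12] or [9].

```python
# Assumed sampling of a two-plane 4-D light ray database.
uv = 32 * 32          # viewpoint-plane samples (illustrative)
st = 256 * 256        # image-plane samples per viewpoint (illustrative)
bytes_per_sample = 3  # 24-bit RGB

total_bytes = uv * st * bytes_per_sample
total_mb = total_bytes / (1024 ** 2)
print(f"{total_mb:.0f} MB")  # prints "192 MB"
```

Even this modest sampling needs 192 MB, and denser viewpoint or image sampling quickly pushes a single object into the hundreds of megabytes cited above, which is why compression of the 4-dimensional function is a central concern.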