Given one or more example images of a scene, it is possible to generate novel views by re-sampling the images with appropriate warping functions.
Certain sequences of images, such as those formed by a camera rotating about its optical centre, can be mapped onto a single image plane. Any image in the sequence, or views between example frames, can then be recovered using the inverse mapping. This is the principle behind QuickTime VR, which allows an environment to be viewed in any direction from fixed positions . The image resulting from a pure camera rotation is commonly known as a panorama.
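The mapping onto a single plane follows from the fact that two images related by a pure rotation R about the optical centre are linked by the planar homography K R K^{-1}, where K holds the camera intrinsics. A minimal sketch of this mapping (the intrinsics and rotation below are illustrative values, not taken from any particular camera):

```python
import numpy as np

def rotation_homography(K, R):
    # Pure-rotation homography: a pixel x1 in the first image maps to
    # x2 ~ K @ R @ inv(K) @ x1 in the rotated image (homogeneous coords).
    return K @ R @ np.linalg.inv(K)

# Illustrative intrinsics (focal length 800 px, principal point (320, 240))
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

# A 10-degree rotation about the vertical (y) axis
theta = np.radians(10.0)
R = np.array([[ np.cos(theta), 0.0, np.sin(theta)],
              [ 0.0,           1.0, 0.0          ],
              [-np.sin(theta), 0.0, np.cos(theta)]])

H = rotation_homography(K, R)
x1 = np.array([320.0, 240.0, 1.0])  # the principal point, homogeneous
x2 = H @ x1
x2 /= x2[2]                         # its position on the panorama plane
```

Because H is invertible, any intermediate view of the panorama is recovered by applying the inverse homography for the desired viewing direction, which is the re-sampling step used by panorama viewers.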
Any two images of the same scene or object taken from similar viewpoints will generally have a similar appearance. If the correspondence between image pixels is known, then intermediate viewpoints can be approximated by interpolating pixel positions from one frame to the next . A whole set of images can be represented by a connected graph in which nodes represent the images and arcs represent pixel correspondences. The user can then move continuously around the space represented by the images by interpolating from one node to the next. The advantage of this approach is its simplicity, but to be effective it requires a very large number of closely spaced images. Seitz  improved this basic idea to produce physically valid intermediate views by first rectifying the images so that their epipolar lines were aligned.
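The interpolation step itself is simple: given matched pixel positions in two frames, an intermediate view places each point on the straight line between its two observed positions. A minimal sketch (the coordinates below are invented for illustration; in practice the pixel colours are blended in the same way, and the result is only physically valid after the rectification step noted above):

```python
import numpy as np

def interpolate_view(pts_a, pts_b, t):
    # Linearly interpolate corresponding pixel positions between two frames.
    # pts_a, pts_b: (N, 2) arrays of matched pixel coordinates.
    # t in [0, 1]: t = 0 reproduces frame A, t = 1 reproduces frame B.
    return (1.0 - t) * pts_a + t * pts_b

# Two matched points observed in frames A and B (illustrative values)
pts_a = np.array([[100.0, 50.0], [200.0, 80.0]])
pts_b = np.array([[120.0, 60.0], [210.0, 90.0]])

mid = interpolate_view(pts_a, pts_b, 0.5)  # pixel positions halfway between
```

In the graph representation described above, t parameterises the motion along one arc: traversing the arc from one node to the next sweeps t from 0 to 1.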
Methods for correctly reconstructing novel views from multiple example images are based on the observation that certain relationships exist between the positions of pixels representing the same points in space observed from different viewpoints. In general, given two views, the cameras' internal and external parameters, and the correspondence between image pixels, any third view can be reconstructed . Under more specific circumstances, however, some of these requirements can be relaxed. For example, for orthographic images only the pixel correspondences are required . For perspective images it can be shown that any third view can be reconstructed given only the pixel correspondences and the epipolar geometry of the two example views, which can be estimated from a small number of point correspondences [7,1]. Laveau et al.  exploit this to represent a scene as a collection of images and fundamental matrices.
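The transfer underlying this representation can be sketched as follows: a point seen in the two example views constrains its position in the third view to lie on two epipolar lines, one per example view, and the predicted pixel is their intersection. The fundamental matrices below are toy values chosen so that the epipolar lines meet at a known point; they are not derived from real camera geometry (and note that epipolar transfer degenerates when the two lines coincide, i.e. for points on the trifocal plane):

```python
import numpy as np

def transfer_point(x1, x2, F31, F32):
    # Predict a point's position in a third view as the intersection of
    # its two epipolar lines:  l1 = F31 @ x1,  l2 = F32 @ x2.
    # x1, x2: homogeneous pixel coordinates in example views 1 and 2.
    # F31, F32: fundamental matrices mapping points in views 1 and 2
    # to epipolar lines in view 3.
    l1 = F31 @ x1
    l2 = F32 @ x2
    x3 = np.cross(l1, l2)  # intersection of two lines (homogeneous coords)
    return x3 / x3[2]

# Toy fundamental matrices: they send the points below to the epipolar
# lines x = 100 and y = 50 in the third view.
F31 = np.array([[1.0, 0.0,    0.0],
                [0.0, 0.0,    0.0],
                [0.0, 0.0, -100.0]])
F32 = np.array([[0.0, 0.0,   0.0],
                [0.0, 1.0,   0.0],
                [0.0, 0.0, -50.0]])
x1 = np.array([1.0, 0.0, 1.0])
x2 = np.array([0.0, 1.0, 1.0])

x3 = transfer_point(x1, x2, F31, F32)  # intersection of the two lines
```

Representing the scene as images plus fundamental matrices then suffices for rendering: each pixel of the novel view is transferred from its correspondences in the example views, with no explicit 3D reconstruction required.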