
Overview of related work on Scene Modelling

Anastasios Manessis

There is no doubt that over the last decade everybody has experienced the rapid spread of computerised technology into different areas of our day-to-day lives. Applications range from cinema and TV to entertainment games, art and medicine. The digital world is exciting, yet its synthetic nature is still apparent.

Several research disciplines, such as photogrammetry, computer graphics and computer vision, have endeavoured to create realistic scene representations. Photogrammetrists were the first to study problems such as camera calibration, image registration and bundle adjustment. Building on this basis, 3D computer vision has addressed the problem of automatically reconstructing geometric descriptions of real-world (or even synthetic) objects from data captured by several different types of sensor.

Computer graphics, on the other hand, has focused on the inverse problem of synthesising images from geometric models such as those produced by the 3D vision field. Using additional information about surface reflectance properties and scene illumination conditions, realistic images can be rendered. Recent approaches, however, address the creation of such views not from an underlying geometric model but from a collection of images.

Work on generating 3D representations of real objects in general is a topic that cannot be covered within the limited length of a single chapter. The main focus here is therefore on techniques proposed in the literature for the reconstruction of realistic large-scale scenes. These techniques are categorised as image-based or geometric-based, the primary criterion being the underlying structure used to build the reconstructed representation: either a set of images or a set of geometric primitives.




