In the context of image formation, a sensor registers information about radiation that has interacted with physical objects. For example, an electronic camera converts reflected light of varying intensity and hue into a two-dimensional matrix of luminance and chrominance values, while a laser rangefinder, whose transmitter is scanned across the scene, converts the received reflected laser radiation into a ``depth map'' constructed from the receiver's viewpoint. A model of the imaging process has several different components:
An image can be represented by a function f(x, y), where x and y are the two spatial coordinates. The value f(x, y) might be intensity in a range from 0 (black) to 255 (white); colour, where f(x, y) is a vector of components such as (r, g, b); or depth, where f(x, y) refers to the z coordinate, i.e. the distance to an imaged point from the sensor.
In Figure 3 (intensity and depth images), the depth data is encoded by intensity: the brighter the point, the nearer it is to the viewer.
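A minimal sketch of this depth-to-intensity encoding, assuming NumPy and a hypothetical `depth_to_intensity` helper of our own naming (not from the text):

```python
import numpy as np

def depth_to_intensity(depth_map):
    """Encode a depth map as an 8-bit intensity image in which nearer
    points appear brighter: 255 = nearest point, 0 = farthest."""
    depth = np.asarray(depth_map, dtype=float)
    near, far = depth.min(), depth.max()
    if near == far:  # flat scene: map everything to mid-grey
        return np.full(depth.shape, 128, dtype=np.uint8)
    # Invert so that small distances (near) map to high intensities (bright).
    scaled = (far - depth) / (far - near)
    return (scaled * 255).round().astype(np.uint8)
```

The linear rescaling is one choice among many; a real system might clip to a working range or use a logarithmic mapping instead.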
Colour spaces are a way of organising the colours perceived by humans, in the range from approximately 400 nm (blue) to 700 nm (red). The colour signal recorded by an electronic system may be a weighted combination of three signal channels, i.e. red, green and blue, but this does not give any direct correspondence to the human capability to see things in black-and-white, effectively deriving intensity information from colour receptors. There are various three-variable colour representations in use throughout the world, e.g. IHS (intensity-hue-saturation) and YIQ, whose components correspond approximately to the opponent axes white-black (Y), red-cyan (I) and magenta-green (Q).
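The standard NTSC weights illustrate how an intensity (black-white) signal is derived from the three colour channels; the sketch below, using NumPy, converts one RGB triple to YIQ (the function name is our own):

```python
import numpy as np

# Standard NTSC RGB-to-YIQ weights: Y is the black-white (luminance)
# axis, I and Q are the two chrominance (opponent-colour) axes.
RGB_TO_YIQ = np.array([
    [0.299,  0.587,  0.114],   # Y: intensity
    [0.596, -0.274, -0.322],   # I: roughly the red-cyan axis
    [0.211, -0.523,  0.312],   # Q: roughly the magenta-green axis
])

def rgb_to_yiq(r, g, b):
    """Convert one RGB triple (components in [0, 1]) to YIQ."""
    return RGB_TO_YIQ @ np.array([r, g, b])
```

Note the uneven Y weights: green contributes most to perceived intensity, which is why an equal-weight average of R, G and B is a poor substitute for luminance.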
The recorded intensity depends on the following factors.
First, there is the radiant intensity of the source, i.e. the power per unit area emitted into a unit solid angle. Second, there is the reflectance of the objects in the scene, in terms of the proportion, spatial distribution and spectral variation of the light reflected. The reflectance of a surface generally lies somewhere between specular, i.e. mirror-like, and Lambertian, i.e. reflecting light ``evenly'' in all directions according to a cosine distribution.
Of course, a distinction between sources and objects is over-simplistic: objects may radiate light, and there will in general be multiple reflections.
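The Lambertian cosine distribution mentioned above can be sketched directly; this is an illustrative fragment (the function name and albedo parameter are our own, not from the text):

```python
import numpy as np

def lambertian(normal, light_dir, albedo=1.0):
    """Reflected intensity from an ideal Lambertian surface:
    proportional to the cosine of the angle between the surface
    normal and the direction to the light source."""
    n = np.asarray(normal, dtype=float)
    n /= np.linalg.norm(n)
    l = np.asarray(light_dir, dtype=float)
    l /= np.linalg.norm(l)
    # Clamp at zero: a surface facing away from the light reflects nothing.
    return albedo * max(0.0, float(n @ l))
```

A purely specular surface, by contrast, would reflect the incident ray into a single mirror direction rather than spreading it over the hemisphere.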
It is also worth noting that most electronic light transducers do not have a linear intensity response and, more markedly, have a very non-uniform spectral response (like humans!).
The familiar Fourier analysis of time-varying signals carries over to images, with the spatial coordinates x and y in place of t. This type of analysis is fundamental to image processing, and since computer vision usually involves low-level image processing operations such as convolutions, it is dangerous to be unaware of the implications of the recording mechanism, and of the effects of this and subsequent processing on the space-spatial frequency duality.
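As a concrete instance of such a low-level operation, here is a naive 2-D convolution sketch in Python/NumPy (the function name and the choice of a box kernel are our own). A 3x3 averaging kernel acts as a low-pass filter: it attenuates high spatial frequencies (fine detail, noise) while passing low ones.

```python
import numpy as np

def convolve2d(image, kernel):
    """Naive 2-D convolution, valid region only: each output pixel is
    a weighted sum over a kernel-sized neighbourhood of the input."""
    image = np.asarray(image, dtype=float)
    kernel = np.asarray(kernel, dtype=float)[::-1, ::-1]  # flip for true convolution
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A 3x3 box (averaging) kernel: all weights equal, summing to one.
box = np.full((3, 3), 1.0 / 9.0)
```

Production code would use an FFT-based or library routine, but the nested-loop form makes the space-domain weighted sum, and hence the frequency-domain consequences, explicit.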
There are many different media for image acquisition, including ultrasound, visible light, infra-red light, X-rays etc. Of these, visible light is the most widely used, and there are many acquisition systems based on TV cameras, spot rangers, laser scanners and solid-state devices. To scan the image in two dimensions a raster scan system is commonly used, in which the scanning mechanism may be electrical, mechanical or a combination of both. Generally, access to the stored image data is on a random basis.
