Edges are detected in areas of the image where the intensity level fluctuates sharply; the more rapid the intensity change, the stronger the edge. A good edge detection stage makes the formation of extended boundaries and object recognition easier; errors due to a poor edge detector are soon magnified as more processing is performed, so care must be taken in choosing the right edge detector (or operator) for the job. Intensity changes may arise from several physical causes, illustrated in Figure 1.
In the depicted scene, these may be due to occluding (blade) boundaries, curvature (fold) boundaries, extremal boundaries, and those dependent on object reflectivity and illumination, i.e. marks, shadows and specularities. Boundaries can also be detected between regions of different texture, although the detection of such boundaries depends heavily on the scale of the image and on the detection mechanism. If appropriate, boundaries can also be formed between regions of different colour. For example, satellite images may exhibit a patchwork of different textures and colours due to the different usage of fields within a highly cultivated area. Experiments with the human visual system have shown that boundary information is very important in recognising objects; it is common for a human to recognise an object from a crude outline drawing. This is one principal motivation for the use of boundary representations within computer vision systems; it is also easy to integrate a boundary representation into a wide range of recognition algorithms across many applications.
It is difficult to extract meaningful boundaries directly from grey level image data, particularly if the shapes are complex, but much greater success has been achieved by first transforming the image into an intermediate representation of local discontinuities in intensity, then grouping these ``edges'' into more elaborate boundaries. Thus, boundaries which are highly dependent on the particular object models are broken down into context-independent local edges. Because the process of edge detection is so widespread, a lot of research and development effort has been expended on different forms of local operator for edge extraction.
In this section, we shall consider two different approaches to edge detection, one simple and one more complex, each of which implements a transformation from a 2D intensity array to a 2D edge discontinuity array. These edge operators compute three fundamental properties of an edge: first, a position (x,y) denoted by its location in the 2D array; second, a magnitude describing the severity of the intensity change; and third, a direction aligned with the maximum change in intensity.
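The three properties above can be illustrated with a small sketch. This is not either of the two operators discussed in this section, just a minimal illustration assuming the common 3x3 Sobel kernels and NumPy; the function name `sobel_edges` and the replicate-border handling are choices made here for the example, and no smoothing or thresholding is applied.

```python
import numpy as np

def sobel_edges(image):
    """Compute edge magnitude and direction maps with 3x3 Sobel kernels.

    `image` is a 2D float array. For each pixel position (x,y) in the
    2D array we obtain a magnitude (severity of the intensity change)
    and a direction (orientation of the maximum change in intensity).
    """
    kx = np.array([[-1, 0, 1],
                   [-2, 0, 2],
                   [-1, 0, 1]], dtype=float)  # responds to horizontal change
    ky = kx.T                                 # responds to vertical change

    # Replicate the border so every pixel has a full 3x3 neighbourhood.
    padded = np.pad(image, 1, mode="edge")
    gx = np.zeros(image.shape)
    gy = np.zeros(image.shape)
    rows, cols = image.shape
    for r in range(rows):
        for c in range(cols):
            window = padded[r:r + 3, c:c + 3]
            gx[r, c] = np.sum(window * kx)
            gy[r, c] = np.sum(window * ky)

    magnitude = np.hypot(gx, gy)    # edge strength at each (x,y)
    direction = np.arctan2(gy, gx)  # angle of maximum intensity change
    return magnitude, direction
```

A vertical step edge gives a strong magnitude response along the step and zero response in the uniform regions on either side, with the direction pointing across the step.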
Edge detectors initially compute gradient values at all pixel locations. It is then commonly necessary to determine which gradient values correspond to ``edges'' and can be used for subsequent processing. Edge strength is directly proportional to the rate of change of the intensity value. Weak edges produce low gradient values, which may be confused with uniform regions where noise exists in the image. Thresholding sets a minimum value below which all candidate edges are considered the product of noise and are discarded.
[ Gradient Edge Detection ]
Comments to: Sarah Price at ICBL.
(Last update: 4th July 1996)