Final-year Geometric Modelling Course

Rendering Algorithms

The Depth Buffer


The depth buffer (sometimes called the Z buffer) is a very fast hardware solution to the problem of rendering pictures of three-dimensional objects.

First, consider an ordinary computer display.

Computer displays have a framestore (shown diagrammatically on the left). This is the part of memory that holds the image placed on the display (the screen on the right). Typically each pixel or dot on the screen is represented by 24 bits: 8 bits each for red, green, and blue. The display electronics scans the memory at the same speed as the electron beams scan across the screen, and the numbers representing the red, green, and blue control the beam currents, thus making a picture. When the computer puts new numbers in the memory, the picture changes.
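The 24-bit pixel layout can be sketched in a few lines of Python. This is only an illustration of the idea, not real display hardware; the names (pack_rgb, set_pixel, the 640 by 480 resolution) are made up for the example:

```python
# A toy framestore: one 24-bit word per pixel, 8 bits each for
# red, green and blue.  All names here are illustrative.
WIDTH, HEIGHT = 640, 480
framestore = [0] * (WIDTH * HEIGHT)

def pack_rgb(r, g, b):
    """Pack three 8-bit channel values into one 24-bit pixel word."""
    return (r << 16) | (g << 8) | b

def set_pixel(x, y, r, g, b):
    """Store a colour at screen position (x, y)."""
    framestore[y * WIDTH + x] = pack_rgb(r, g, b)

set_pixel(10, 20, 255, 128, 0)   # an orange pixel
```

The display electronics would then read these words back out, one per pixel, as the beams sweep the screen.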

A depth buffer is effectively an extra framestore. For each pixel on the screen, it records how far away the object that the pixel represents is. To plot objects, they are sent to the depth buffer in any order. The colour of a pixel is only changed when the buffer detects that it is being asked to plot something in front of what is already there. This is all done in the electronics of the display, and is very fast: a good system can deal with a million triangles every second.
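The depth test at the heart of this can be sketched in Python. This is a software caricature of what the display electronics does per pixel, with made-up names; the buffer starts out holding "infinitely far away" everywhere:

```python
import math

# Illustrative names and resolution.  Every depth starts at infinity,
# i.e. nothing has been plotted yet.
WIDTH, HEIGHT = 640, 480
depth = [[math.inf] * WIDTH for _ in range(HEIGHT)]       # the depth buffer
colour = [[(0, 0, 0)] * WIDTH for _ in range(HEIGHT)]     # the framestore

def plot(x, y, z, rgb):
    """Write a pixel only if it is nearer than what is already there.
    Smaller z means closer to the viewer in this sketch."""
    if z < depth[y][x]:
        depth[y][x] = z
        colour[y][x] = rgb
```

Because each pixel keeps only the nearest thing seen so far, triangles can be sent in any order and the final picture is still correct.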

Here is a typical depth-buffer picture. Though in principle depth buffers can deal with any shapes, in practice it's simpler if they are set up just to deal with triangles, so the object to be depicted has to be faceted first to approximate its surface with polygons, which are then further split into triangles.
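The final polygon-to-triangle step is simple for convex polygons: fan out from one vertex. A minimal sketch (the function name is made up for the example, and it assumes the polygon is convex with its vertices listed in order):

```python
def fan_triangulate(polygon):
    """Split a convex polygon, given as an ordered list of vertices,
    into triangles all sharing the first vertex."""
    v0 = polygon[0]
    return [(v0, polygon[i], polygon[i + 1])
            for i in range(1, len(polygon) - 1)]
```

A quadrilateral yields two triangles, a pentagon three, and so on: an n-sided convex polygon always splits into n - 2 triangles.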

Here are the polygons from the same picture.

Note that the curved surfaces in the shaded picture appear to be smooth, even though they are made up from flat facets. This is done by interpolating surface normals:

When each triangle is made, the surface it comes from (like the red cylinder or the green sphere) is interrogated to find its surface normals at the corners of the triangle. Then, when the depth buffer is plotting the triangle, it takes a weighted average of the three normal vectors to find the normal at the point on the triangle which corresponds to the pixel being plotted. This means that the pixel can be shaded as if the light were falling on the curved surface, and not on a flat one.
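The weighted average itself is straightforward. A sketch in Python, assuming the three weights come from the pixel's position inside the triangle (its barycentric coordinates, which sum to one); the function name is invented for the example:

```python
import math

def interpolate_normal(n0, n1, n2, w0, w1, w2):
    """Weighted average of the three corner normals, renormalised to
    unit length: the apparent normal at a pixel inside the triangle.
    n0, n1, n2 are (x, y, z) tuples; w0 + w1 + w2 should be 1."""
    nx = w0 * n0[0] + w1 * n1[0] + w2 * n2[0]
    ny = w0 * n0[1] + w1 * n1[1] + w2 * n2[1]
    nz = w0 * n0[2] + w1 * n1[2] + w2 * n2[2]
    length = math.sqrt(nx * nx + ny * ny + nz * nz)
    return (nx / length, ny / length, nz / length)
```

The renormalisation matters: the average of three unit vectors is generally shorter than unit length, and shading calculations expect a unit normal.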


Back to: Chapter 2.

© Adrian Bowyer 1996