
Projecting the Workspace

In the digital desk, a computer screen is projected onto a physical desk using a video projector, such as a liquid-crystal ``data-show'' panel working with a standard overhead projector. A video camera is set up to watch the work area such that the surface of the projected image and the surface of the imaged area coincide. This coincidence cannot match ``pixel to pixel'' unless the camera and projector occupy the same physical position and use the same optics. Since this is impossible, it is necessary to master the transformation between the real workspace, the camera image and the virtual workspace.

The projection of one plane onto another plane can be treated as an affine transformation. The video projector can therefore be used to project a reference frame onto the physical desk in the form of a set of four points. The camera image of these four points permits the calibration of the 6 coefficients which transform image coordinates (i, j) into workspace coordinates (x, y). The visual processes required for the digital desk are relatively simple. The basic operation is tracking a pointing device, such as a finger, a pencil or an eraser. Such tracking should be supported by methods to determine which device to track and to detect when tracking has failed. A method is also required to detect the equivalent of a ``mouse-down'' event for selection.
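As a rough sketch of this calibration step, the 6 affine coefficients can be estimated by least squares from the correspondences between the projected reference points (at known workspace coordinates) and their observed image positions. The function names and the use of numpy below are illustrative assumptions, not the system's actual implementation:

import numpy as np

def calibrate_affine(image_pts, workspace_pts):
    # Estimate the 6 affine coefficients mapping image (i, j) to
    # workspace (x, y):  x = a*i + b*j + c,  y = d*i + e*j + f
    image_pts = np.asarray(image_pts, dtype=float)
    workspace_pts = np.asarray(workspace_pts, dtype=float)
    ones = np.ones((len(image_pts), 1))
    A = np.hstack([image_pts, ones])              # N x 3 design matrix [i, j, 1]
    # Solve separately for the x-row and the y-row of the transform.
    row_x, *_ = np.linalg.lstsq(A, workspace_pts[:, 0], rcond=None)
    row_y, *_ = np.linalg.lstsq(A, workspace_pts[:, 1], rcond=None)
    return np.vstack([row_x, row_y])              # 2 x 3 affine matrix

def image_to_workspace(T, i, j):
    # Map an image point into desk (workspace) coordinates.
    return T @ np.array([i, j, 1.0])

With the four projected points at known workspace coordinates, the calibrated 2 x 3 matrix converts any tracked image position, such as a fingertip, into desk coordinates.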

Figure 1: Drawing and placing with ``FingerPaint''.

The tracking problem can be expressed as: ``Given an observation of an object at time t, determine the most likely position of the same object at time t+Δt''. If different objects can be used as a pointing device, then the system must include some form of ``trigger'', such as presenting the pointing device to the system. The observation of the pointing device gives a small neighbourhood, w(n, m), of an image p(i, j). This neighbourhood serves as a ``reference template''. The tracking problem can then be expressed as: given the position of the pointing device in the current image, determine its most likely position in the next image. The size of the tracked neighbourhood must be chosen so that it includes a sufficiently large portion of the object to be tracked with a minimum of the background.
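A minimal sketch of extracting such a reference template, assuming a greyscale image stored as a numpy array; the helper name and the parameterisation by a half-size are assumptions made for illustration:

import numpy as np

def grab_template(image, centre, half_size):
    # Cut the neighbourhood w(n, m) around the presented pointing device;
    # it serves as the reference template for subsequent tracking.
    # Assumes the neighbourhood lies entirely inside the image.
    ci, cj = centre
    return image[ci - half_size:ci + half_size + 1,
                 cj - half_size:cj + half_size + 1].copy()

The half-size parameter controls the trade-off described above: large enough to cover the pointing device, small enough to exclude most of the background.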

We have experimented with a number of different approaches to tracking pointing devices, including colour, correlation tracking, principal components and active contours (snakes) [Ber94,Mar95]. The active contour model [KWT87] presented problems which we believe can be resolved, but which will require additional experiments. Our current demonstration uses cross-correlation and principal components analysis.
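A cross-correlation tracker of this kind can be sketched as a normalised cross-correlation search over a window around the last known position. This is an illustrative sketch, not the demonstration's actual code, and the names and parameters are assumptions:

import numpy as np

def track_ncc(image, template, last_pos, search_radius):
    # Locate the reference template in the new image by normalised
    # cross-correlation, searching only a window around the last
    # known position (given as the template's top-left corner).
    th, tw = template.shape
    t = template - template.mean()
    t_norm = np.sqrt((t ** 2).sum()) + 1e-9
    best_score, best_pos = -np.inf, last_pos
    ci, cj = last_pos
    for di in range(-search_radius, search_radius + 1):
        for dj in range(-search_radius, search_radius + 1):
            i0, j0 = ci + di, cj + dj
            patch = image[i0:i0 + th, j0:j0 + tw]
            if patch.shape != template.shape:
                continue                      # window fell off the image
            p = patch - patch.mean()
            score = (p * t).sum() / (np.sqrt((p ** 2).sum()) * t_norm + 1e-9)
            if score > best_score:
                best_score, best_pos = score, (i0, j0)
    return best_pos, best_score

A best score falling below a threshold can serve as the failure test mentioned earlier, signalling that the tracker has lost the pointing device and that the trigger step should be repeated.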








Patrick Reignier
Fri Jul 21 18:22:45 MET DST 1995