
Translate


Common Names: Translate

Brief Description

The translate operator performs a geometric transformation which maps the position of each picture element in an input image into a new position in an output image, where the dimensionality of the two images often is, but need not necessarily be, the same. Under translation, an image element located at (x1, y1) in the original is shifted to a new position (x2, y2) in the corresponding output image by displacing it through a user-specified translation (tx, ty). The treatment of elements near image edges varies with implementation. Translation is used to improve visualization of an image, but also has a role as a preprocessor in applications where registration of two or more images is required. Translation is a special case of affine transformation.

How It Works

The translation operator performs a transformation of the form:

x2 = x1 + tx
y2 = y1 + ty

Since the dimensions of the input image are well defined, the output image is also a discrete space of finite dimension. If the new coordinates (x2, y2) fall outside the image, the translate operator will normally ignore them, although some implementations wrap the result around, so that pixels shifted past one edge re-enter at the opposite edge of the image. Most implementations fill the image areas out of which the image has been shifted with black pixels.
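
The following is a minimal sketch of this behavior in Python with NumPy; the function name, argument order and the choice of zero (black) as the fill value are our own assumptions rather than part of any particular implementation:

import numpy as np

def translate(image, tx, ty, wrap=False):
    # Translate a 2-D greyscale image by (tx, ty) pixels: tx shifts
    # columns (the x direction), ty shifts rows (the y direction).
    # If wrap is True, pixels pushed over one edge re-enter at the
    # opposite edge; otherwise they are discarded and the vacated
    # area is filled with black (zero).
    if wrap:
        return np.roll(np.roll(image, ty, axis=0), tx, axis=1)

    h, w = image.shape
    out = np.zeros_like(image)
    # Destination region that remains inside the image bounds.
    x0, x1 = max(0, tx), min(w, w + tx)
    y0, y1 = max(0, ty), min(h, h + ty)
    if x1 > x0 and y1 > y0:
        out[y0:y1, x0:x1] = image[y0 - ty:y1 - ty, x0 - tx:x1 - tx]
    return out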

Guidelines for Use

The translate operator takes two arguments, tx and ty, which specify the desired horizontal and vertical pixel displacements, respectively. For example, consider the artificial image

art4ctr1

in which the subject's center lies in the center of the 300×300 pixel image. We can naively translate the subject into the lower right corner of the image by defining a mapping (i.e. a set of values for (tx, ty)) which takes the subject's center from its present position at (150, 150) to an output position of (300, 300), as shown in

art4trn1

In this case, information is lost because pixels which were mapped to points outside the boundaries defined by the input image were ignored. If we perform the same translation, but wrap the result, all the intensity information is retained, giving image

art4trn2

Both of the mappings shown above disturb the original geometric structure of the scene. It is often the case that we perform translation merely to change the position of a scene object, not its geometric structure. In the above example, we could achieve this by translating the circle center to a position located at the lower right corner of the image less the circle radius, as shown in

art4trn3
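
Using the translate sketch from the previous section, the three results above could be reproduced along the following lines; the test image is generated synthetically here, and the circle radius of 50 pixels is an assumed value for illustration:

import numpy as np

# A 300x300 image with a bright circle centred at (150, 150); the
# radius of 50 pixels is an assumption for illustration.
radius = 50
yy, xx = np.mgrid[0:300, 0:300]
mask = (xx - 150) ** 2 + (yy - 150) ** 2 <= radius ** 2
circle = np.where(mask, 255, 0).astype(np.uint8)

# Naive translation into the lower right corner: much of the subject
# is mapped outside the image and lost.
clipped = translate(circle, 150, 150)

# The same translation with wrap-around: all intensity information is
# retained, but the subject is split across the image corners.
wrapped = translate(circle, 150, 150, wrap=True)

# Translation by the corner position less the circle radius keeps the
# subject whole while moving it into the lower right corner.
intact = translate(circle, 150 - radius, 150 - radius)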

At this point, we might build a collage by adding another image (or images), whose subjects have been appropriately translated, such as in

art7trn1

to the previous result. This simple collage is shown in

art7add1
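
Continuing the snippet above, a collage of this kind can be sketched as translation followed by pixel addition; the second subject and its displacement are assumptions for illustration, and the sum is clipped to the 8-bit range to avoid overflow:

import numpy as np

# A second, smaller subject (a bright square) standing in for the
# second image; its content and the displacement are assumptions.
other = np.zeros((300, 300), dtype=np.uint8)
other[130:170, 130:170] = 255

# Translate the second subject towards the upper left corner and then
# pixel-add it onto the previous result.
other_shifted = translate(other, -100, -100)
collage = np.clip(intact.astype(np.int32) + other_shifted.astype(np.int32),
                  0, 255).astype(np.uint8)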

Translation has many applications of the cosmetic sort illustrated above. However, it is also very commonly used as a preprocessor in application domains where registration of two or more images is required. For example, feature detection and spatial filtering algorithms may calculate gradients in such a way as to introduce an offset in the positions of the pixels in the output image with respect to the corresponding pixels from the input image. In the case of the Laplacian of Gaussian spatial sharpening filter, some implementations require that the filtered image be translated by half the width of the Gaussian kernel with which it was convolved in order to bring it into alignment with the original. Likewise, the unsharp filter requires translation to re-register the images. The result of subtracting the smoothed version of the image

wdg1

away from the original image (after first translating the smoothed image by the offset induced by the filter, so that the two images are re-aligned before the subtraction) yields the edge image

wdg1usp2
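
As a sketch of this re-registration step, the following assumes a k x k mean filter whose output is anchored at the top-left corner of the kernel, so that the smoothed image is displaced by half the kernel width; the smoothed image is translated back by k // 2 pixels before the subtraction. The kernel width and the synthetic test image are assumptions, and the mean filter here merely stands in for whatever smoothing a particular unsharp implementation uses:

import numpy as np

# Synthetic stand-in for the input image.
img = np.zeros((128, 128), dtype=np.float32)
img[32:96, 32:96] = 255.0

k = 5  # assumed kernel width

# Mean filter anchored at the kernel's top-left corner: pixel (x, y)
# of the result holds the mean of the k x k block whose top-left
# corner is (x, y), so the smoothed image ends up displaced by k // 2
# pixels up and to the left relative to the original.
# (np.roll wraps at the borders, which is good enough for a sketch.)
smoothed = np.zeros_like(img)
for dy in range(k):
    for dx in range(k):
        smoothed += np.roll(np.roll(img, -dy, axis=0), -dx, axis=1)
smoothed /= k * k

# Translate the smoothed image by half the kernel width to re-align
# it with the original, then subtract to obtain the edge image.
realigned = translate(smoothed, k // 2, k // 2)
edges = np.clip(img - realigned, 0, 255).astype(np.uint8)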

We can see the effects of mis-alignment again if we translate

art2

by one pixel in both the x and y directions and then subtract the result from the original. The resulting image, shown in

art2sub1

contains a record of all the places (along the direction of translation) where the intensity is changing; i.e. it highlights edges (and noise). The image

cln1

was used in examples of edge detection using the Roberts Cross, Sobel and Canny operators. Compare this result to the translation-based edge-detector illustrated here

cln1trn1

Note that if we increase the translation parameter too much, e.g., to 6 pixels in each direction, as in

cln1trn2

edges become severely mis-aligned and blurred.
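
A minimal sketch of this shift-and-subtract edge detector, again reusing the translate function from the How It Works section: the input here is a synthetic stand-in rather than the cln1 image, and taking the absolute value of the difference (so that both signs of intensity change show up) is our own choice:

import numpy as np

# Synthetic stand-in for the scene: a bright square on a dark ground.
img = np.zeros((128, 128), dtype=np.int32)
img[32:96, 32:96] = 255

# Translate by one pixel in x and y and subtract from the original;
# the magnitude of the difference is non-zero wherever the intensity
# changes along the direction of translation, i.e. at edges.
shifted = translate(img, 1, 1)
edges = np.abs(img - shifted).astype(np.uint8)

# Larger shifts thicken and blur the detected edges, as noted above.
edges_blurred = np.abs(img - translate(img, 6, 6)).astype(np.uint8)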


Exercises

  1. Investigate which arguments to the translate operator would produce the following translations:
    a)
    art2

    into

    art2trn1

    b)

    art3

    into

    art3trn1

  2. We can create more interesting artificial images by combining the translate operation with other operators. For example,
    art7

    has been translated and then pixel added back onto itself to produce

    art7add2

    a) Produce an artificial image of this sort using

    art6

    b) Combine

    art5

    and

    art7

    using translation and pixel addition into a collage.

  3. Describe how you might derive a simple isotropic edge detector using a series of translation and subtraction operations.

  4. Would it be possible to make a simple sharpening filter based on translation and pixel addition or subtraction? On what types of images might such a filter work?

  5. How could one use translation to implement convolution with the kernel shown in Figure 1?

    Figure 1: Convolution kernel.

    Can one implement every convolution using this approach?



---


©2003 R. Fisher, S. Perkins, A. Walker and E. Wolfart.
