Given the rotations, the translations are estimated. Fisher estimated these directly from the boundary data: depth was estimated by comparing model and data areas and cross-section widths, and the three-dimensional translation was obtained by finding the two-dimensional translation that best fitted the data and then inverting the projection relationship using the estimated depth.
Here, depth estimates are directly available, so the translation is estimated by relating the rotated model SURFACE centroid to the two-dimensional image centroid and inverting the projection relationship.
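The centroid-based estimate above can be sketched as follows. This is a minimal illustration, not the original implementation: it assumes a simple pinhole projection with focal length `f`, and all function and parameter names are hypothetical.

```python
import numpy as np

def estimate_translation(R, model_centroid, image_centroid, depth, f):
    """Estimate the 3D translation t given an already-estimated rotation R.

    The rotated model centroid, translated by t, must project to the
    observed 2D image centroid. With a measured depth Z, the pinhole
    projection (u, v) = (f*X/Z, f*Y/Z) inverts directly:
        (X, Y, Z) = Z * (u/f, v/f, 1)
        t = (X, Y, Z) - R @ c_model
    """
    u, v = image_centroid
    # Back-project the image centroid to a 3D point using the measured depth
    p = depth * np.array([u / f, v / f, 1.0])
    # The translation moves the rotated model centroid onto that point
    return p - R @ np.asarray(model_centroid, dtype=float)
```

For example, with the identity rotation, a model centroid at the origin, an image centroid at the optical center, and a depth of 5 m, the estimate is simply a 5 m translation along the optical axis.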
Typical estimated and nominal translation values for the modeled SURFACEs
successfully invoked in the test image are given in Table 9.4.
| Image | Measured (cm) | Estimated (cm) |
The translation estimates are reasonable, though not as accurate as the rotation estimates; this holds even though the SURFACEs lsideb, ledgea and uside were substantially obscured. For the unobscured SURFACEs, the average translation error for the test image is (-6.0, 1.6, -1.6) cm, and is believed to arise from errors in estimating the camera coordinate system. Other sources of error include measurement error (estimated as 1.0 cm and 0.1 radian), image quantization (estimated as 0.6 cm at 5 m and 0.002 radian) and errors arising from the approximate nature of the parameter estimation. In any case, the error is about 1.5% of the distance to the object, so the relative position error is small.
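As a rough sanity check on the quoted relative error, the magnitude of the average error vector can be compared against the viewing distance. The 5 m distance below is an assumption taken from the quantization estimate in the text; the exact object distance is not stated.

```python
import math

# Magnitude of the average translation error (-6.0, 1.6, -1.6) cm
err_cm = math.sqrt(6.0**2 + 1.6**2 + 1.6**2)  # about 6.4 cm

# Relative error assuming a viewing distance of roughly 5 m (500 cm);
# this comes out near 1.3%, on the order of the quoted "about 1.5%"
rel_err = err_cm / 500.0
```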
To better illustrate the position results for the test image, several pictures are shown with the SURFACEs drawn in their estimated positions on top of the original scene. Figure 9.4 shows the robot body side SURFACE (robbodyside), the robot upper arm side SURFACE (uside) and the trash can outer SURFACE (tcanoutf). Figure 9.5 shows the robot shoulder end SURFACE (robshldend), the robot lower arm side SURFACE (lsideb) and the robot upper arm small end SURFACE (uends). The lower arm side SURFACE translation estimate is high because of occlusion.