next up previous
Next: Direct Evidence Collection Up: Feature Visibility Analysis Previous: Deducing Self-Obscured SURFACEs

Detecting External Occlusion

In coincidental scene arrangements, structure obscured by unrelated objects cannot be anticipated unless the closer objects can themselves be identified. What remains possible is to show that the absence of a feature is consistent with the assumption of occlusion: that is, that closer, unrelated surfaces completely cover the portion of the image where the feature is expected. This unrelatedness can be verified by detecting front-surface-obscuring or concave boundaries that completely surround the closer surfaces, as in Figure 9.17.

The other case considered occurs when non-self-obscured SURFACEs are observed as partially obscured. These must meet all shape and adjacency constraints required by the model, and the invisible portions must lie entirely behind other unrelated surfaces (as before). The boundary between the partially visible object and the obscuring surfaces must be an obscuring boundary.

Verifying fully obscured structure is the simplest case: every portion of the predicted model SURFACE must lie behind an unrelated data surface. Minor errors in absolute distance prediction make it difficult to verify directly that an object surface pixel is further away than the corresponding observed pixel, as when a piece of paper lies on a table surface. Fortunately, relative surface depth differences have already been accounted for in the labeling of obscuring boundaries and the formation of depth-ordered surface clusters (Chapter 5). The ordering test can therefore be reformulated to verify that the entire missing SURFACE lies within the image region belonging to an unrelated, closer surface cluster. In practice, the test can be performed using a raycasting technique:

  1. Find the set of closer, unrelated surfaces.
  2. Predict the image locations for the missing SURFACE.
  3. For each predicted pixel, verify that the observed image region at that location has been assigned to one of the closer surfaces.

Again, this ideal algorithm was altered to tolerate parameter misestimation:

  Let:
    P = set of predicted image positions for the SURFACE
    I = subset of P lying on identified object surfaces (should be empty)
    O = subset of P lying on closer, unrelated obscuring surfaces (should be P)
    E = subset of P lying elsewhere (should be empty)
  If:
    size(I) / size(P) $< \tau_1$ and size(E) / size(O) $< \tau_2$ ( $\tau_1 = 0.2, \tau_2 = 0.2$)
  Then:
    declare the SURFACE to be externally obscured
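The thresholded test above can be sketched as follows. This is a minimal illustration assuming the pixel sets have already been extracted from the labeled surface clusters; the function and variable names are illustrative, not those of the thesis's implementation.

```python
# Sketch of the threshold-based external-occlusion test. The thesis operates
# on labeled surface clusters; here the inputs are plain pixel sets.

TAU_1 = 0.2  # tolerated fraction of predicted pixels falling on the object itself
TAU_2 = 0.2  # tolerated ratio of unexplained pixels to obscuring pixels

def externally_obscured(predicted_pixels, object_pixels, obscuring_pixels):
    """predicted_pixels: (row, col) positions where the missing SURFACE is expected.
    object_pixels: pixels already assigned to identified object surfaces.
    obscuring_pixels: pixels of closer, unrelated obscuring surfaces."""
    P = set(predicted_pixels)
    I = P & set(object_pixels)       # should be empty
    O = P & set(obscuring_pixels)    # should be all of P
    E = P - I - O                    # everything else; should be empty
    if not P or not O:
        return False                 # nothing predicted, or nothing obscuring it
    return len(I) / len(P) < TAU_1 and len(E) / len(O) < TAU_2
```

For example, if 90 of 100 predicted pixels fall on closer obscuring surfaces, 5 on the object itself, and 5 elsewhere, both ratios (0.05 and 5/90) fall below the thresholds and the SURFACE is declared externally obscured.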

Figure 9.18 illustrates the test.

Figure 9.18: Predicted Boundary of Externally Obscured Surface

Because of the depth ordering ambiguities of concave surface boundaries (i.e. which surface, if either, is in front of the other), this approach will fail to detect some cases of external occlusion. Difficulties also arise with surfaces that lie both in front of and behind other objects. In the absence of more accurate depth predictions, the only reliable test may be to observe an obscuring boundary between the visible and missing portions of the object.

The only fully externally obscured structure was the robot hand, which was correctly detected. Because the reference frame estimate for the lowerarm had a slightly larger rotation angle, part of the hand was predicted not to be obscured by the trash can. This motivated the threshold-based test described above.

Figure 9.19 shows the predicted position of the robot hand on top of the scene.

Figure 9.19: Predicted Gripper Position

Determining the visibility status of the model features was computationally expensive, particularly the raycasting image generation used for self-occlusion analysis; about one-third of the total processing time was spent in this process. In response, the SMS modeling approach [70] was developed. These models explicitly record the visibility of all model features, for the SMS equivalent of each ASSEMBLY, at the key viewpoints. A model does not specify the visibility relationships of all recursively accessible features, merely those represented at the current level of the hierarchy, which considerably reduces the number of occlusion relationships considered for each object. Feature visibility is associated with a partitioning of the view sphere, and the relative position of the viewer and the object determines which partition applies at a given time. From this index, the visibility is accessed directly.
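The view-sphere indexing idea can be sketched as follows. This is a hypothetical illustration of the lookup mechanism only: the partition geometry, feature names, and angular parameterization are assumptions for the example, not the SMS models' actual representation.

```python
import math

# Each partition of the view sphere: a band of viewer directions
# (azimuth/elevation ranges in radians, in the object's reference frame)
# paired with the set of features recorded as visible from that band.
PARTITIONS = [
    ((0.0, math.pi / 2, 0.0, math.pi / 2),          {"top", "front"}),
    ((math.pi / 2, math.pi, 0.0, math.pi / 2),      {"top", "back"}),
    ((0.0, math.pi / 2, -math.pi / 2, 0.0),         {"bottom", "front"}),
    ((math.pi / 2, math.pi, -math.pi / 2, 0.0),     {"bottom", "back"}),
]

def visible_features(azimuth, elevation):
    """Return the feature set recorded for the view-sphere partition
    containing the given viewer direction; no raycasting is needed."""
    for (az_lo, az_hi, el_lo, el_hi), features in PARTITIONS:
        if az_lo <= azimuth < az_hi and el_lo <= elevation < el_hi:
            return features
    return set()
```

The point of the precomputed index is that visibility becomes a constant-cost table lookup at recognition time, replacing the raycasting image generation that dominated the processing described above.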

Bob Fisher 2004-02-26