There is an "idealism" embedded in the matching assumptions, whose goal is to account for all model features. This most stringent criterion is ultimately impractical: position estimate errors make locating smaller features difficult, and segmentation may not isolate the desired structures, or may isolate them at a different level of analytic scale. Other phenomena that cause loss of data include occlusion, faulty objects, sensor noise and generic object variation. The result is that bad or unexpected evidence will cause failure, such as when a surface is too fragmented.
In general, numerical techniques (e.g. least-squares error minimization) could probably improve the methods used here, provided the problems could be reformulated to allow the degrees-of-freedom needed for partially constrained relationships, such as joint angles. This seems like a suitable extension for a final geometric reasoning refinement phase, after all evidence has been accumulated.
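As a minimal sketch of what such a refinement phase might do, the single degree-of-freedom case (one joint angle about a fixed axis, reduced to 2D) admits a closed-form least-squares solution. The function name, point representation, and example values below are all hypothetical, not drawn from the system described here:

```python
import math

def fit_joint_angle(model_pts, data_pts):
    """Least-squares estimate of one rotational degree of freedom
    (a joint angle about a fixed axis, reduced here to 2D) that best
    aligns model points with observed data points.
    Closed form for: minimize over theta of sum |R(theta) p_i - d_i|^2."""
    num = sum(mx * dy - my * dx for (mx, my), (dx, dy) in zip(model_pts, data_pts))
    den = sum(mx * dx + my * dy for (mx, my), (dx, dy) in zip(model_pts, data_pts))
    return math.atan2(num, den)

# Illustrative data: model points on a limb, observed after the joint
# has rotated by 30 degrees.
theta = math.radians(30)
model = [(1.0, 0.0), (0.0, 1.0)]
data = [(math.cos(theta), math.sin(theta)), (-math.sin(theta), math.cos(theta))]
print(round(math.degrees(fit_joint_angle(model, data)), 1))  # -> 30.0
```

With noisy surface positions the same formula returns the angle minimizing the summed squared error, which is the property that would let such a step refine a partially constrained hypothesis after evidence accumulation.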
The programs account for several expected difficulties, such as when two surfaces are not properly segmented (as in the upper arm edge surfaces), or when thin cylindrical features (e.g. chair legs) are too distant to be considered cylinders. Further, variation in segmentation is allowed by not examining boundary placement when matching surfaces.
Some special-case reasoning seems acceptable, but incompleteness of evidence should also be tolerated. Unfortunately, this leads to heuristic match evaluation criteria, or to explicit designation of required versus auxiliary evidence. More generally, a full model of an object should also have descriptions at several scales, and the construction process should match the data across the levels.
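The required-versus-auxiliary distinction could be realized as a scoring function like the following sketch; the dictionary layout and the uniform weighting are assumptions for illustration, not the evaluation criteria actually used by the programs:

```python
def evaluate_match(found, model):
    """Heuristic match score distinguishing required from auxiliary
    evidence: every required feature must be present, while auxiliary
    features add graded support (the weighting scheme is hypothetical)."""
    if not model["required"] <= found:
        return 0.0  # any missing required feature rejects the match
    aux_hits = len(model["auxiliary"] & found)
    return (len(model["required"]) + aux_hits) / (
        len(model["required"]) + len(model["auxiliary"]))

chair = {"required": {"seat", "back"},
         "auxiliary": {"leg1", "leg2", "leg3", "leg4"}}
# Two of the four legs are occluded, yet the match still scores well.
print(evaluate_match({"seat", "back", "leg1", "leg2"}, chair))
```

A scheme like this tolerates incomplete evidence (occluded auxiliary features lower the score gracefully) while still rejecting hypotheses missing essential structure.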
Another major criticism is that the recognition process only uses surfaces. The traditional "edge" is still useful, especially as surface data does not represent reflectance variations (e.g. surface markings). Volumetric evidence could also be included. Relationships between structures, such as line parallelisms and perpendicularities, can provide strong evidence about orientation, particularly when occlusion leaves little visible evidence.
Object knowledge could help the recognition of subcomponents. Each subcomponent is currently recognized independently and then aggregated in a strictly bottom-up process. However, one subcomponent may invoke the object, which could partially constrain the identity and location of the other subcomponents. Since these objects often obscure each other in unpredictable ways, there may not be enough evidence to invoke and identify a subcomponent independently, whereas additional active top-down object knowledge might overcome this.
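One way to picture this top-down extension: once a single subcomponent invokes the object, the object model's part layout constrains where the remaining subcomponents should appear, so weaker evidence can be sought at the predicted locations. The 2D-offset representation and names below are hypothetical simplifications:

```python
def predict_siblings(part_layout, found_part, found_pos):
    """Top-down hypothesis extension: given one recognized subcomponent
    and the object's part layout (hypothetical 2D offsets from the
    object origin), predict where the other subcomponents should lie."""
    ox = found_pos[0] - part_layout[found_part][0]
    oy = found_pos[1] - part_layout[found_part][1]
    return {part: (ox + dx, oy + dy)
            for part, (dx, dy) in part_layout.items() if part != found_part}

chair_layout = {"seat": (0.0, 0.5), "back": (0.0, 1.0), "legs": (0.0, 0.0)}
# Only the seat was recognized bottom-up, at image position (4.0, 3.0);
# the object model then predicts where to look for the back and legs.
print(predict_siblings(chair_layout, "seat", (4.0, 3.0)))
```

A partially occluded back or leg that could not invoke its model independently might still be verified at the predicted position, which is the advantage of the active top-down knowledge suggested above.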
The level of detail in a model affects the quantity of evidence required. Hierarchical models that represent finer details at lower levels lead to hypothesis construction processes that add the details once the coarser description is satisfied (if the details are needed). This symbolic coarse-to-fine recognition approach has not been well explored yet, but some modeling systems (e.g. ACRONYM, SMS) have experimented with scale-dependent models.
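A minimal sketch of such a coarse-to-fine process, assuming a hypothetical tree-structured model and per-feature evidence scores (neither taken from ACRONYM or SMS):

```python
def match_coarse_to_fine(node, evidence, threshold=0.5):
    """Symbolic coarse-to-fine matching over a hierarchical model:
    a node's coarse description must be satisfied before any of its
    finer-scale children are examined (scores are hypothetical)."""
    score = evidence.get(node["name"], 0.0)
    if score < threshold:
        return None  # coarse level unsatisfied: never descend to details
    details = []
    for child in node.get("children", []):
        m = match_coarse_to_fine(child, evidence, threshold)
        if m is not None:
            details.append(m)
    return {"name": node["name"], "score": score, "details": details}

chair = {"name": "chair",
         "children": [{"name": "seat"}, {"name": "back"},
                      {"name": "leg-assembly",
                       "children": [{"name": "leg"}]}]}
evidence = {"chair": 0.9, "seat": 0.8, "leg-assembly": 0.3, "leg": 0.9}
result = match_coarse_to_fine(chair, evidence)
# "leg-assembly" fails at the coarse level, so the "leg" detail is never tried
print([d["name"] for d in result["details"]])  # -> ['seat']
```

The key property is that fine-scale evidence is only requested where the coarse description already holds, limiting the quantity of evidence required, as argued above.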
This chapter has investigated model matching mechanisms that use surfaces as the primary recognition evidence. Previous work has demonstrated how to use surfaces, but those approaches, while using real data, did not use all available data (including surface curvature), reason about the visibility of model features, or richly exploit hierarchical models. This chapter showed how to use models, surfaces and associated positional information to: