One important and difficult task for a general model-based vision system is selecting the correct model. Model-based vision is computationally intractable unless the large set of objects that potentially explain a set of data is reduced to a few serious candidates that merit more detailed analysis. Since a competent general vision system may know 1,000–100,000 distinct objects, and even a modest industrial vision system may have 100 distinct objects in its repertoire, the problem is too large to undertake model-directed comparisons of every known object from every viewpoint. The problem remains even if the potentially massive parallelism of the brain or of VLSI is considered.
Visual understanding must also include a non-attentive element, because all models need to be accessible for interpreting all image data. The solution must therefore consider both efficiency and completeness of access.
There is also a more crucial competence aspect to the problem. A vision system needs to be capable of (loosely) identifying previously unseen objects based on their similarity to known objects. This is required for non-rigid objects seen in new configurations, incompletely visible objects (e.g. from occlusion), or object variants (e.g. flaws, generics, new exemplars). Hence, "similar" models must be invoked to help start identification, where "similar" means sharing some features or having an identically arranged substructure.
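The idea of invoking models that share features with the observed evidence can be sketched in miniature. The following Python fragment is a hypothetical illustration, not the mechanism developed in this chapter: the feature labels, model names, and the fractional-overlap score are all invented for the example, and real invocation must also weigh structural arrangement, which this sketch ignores.

```python
# Hypothetical sketch: rank stored models by shared-feature overlap with
# the observed evidence, so "similar" models can be invoked even when the
# object is unfamiliar or only partially visible.

def invocation_scores(observed, models):
    """Score each model by the fraction of its expected features
    that appear in the observed data.

    observed: set of feature labels extracted from the image
    models:   dict mapping model name -> set of expected feature labels
    """
    scores = {}
    for name, feats in models.items():
        scores[name] = len(observed & feats) / len(feats) if feats else 0.0
    return scores

# Toy model base (invented for illustration).
models = {
    "mug":    {"cylinder", "handle", "rim"},
    "teapot": {"cylinder", "handle", "spout", "lid"},
    "ball":   {"sphere"},
}

# A partial view, e.g. an occluded object showing only two features.
observed = {"cylinder", "handle"}
ranked = sorted(invocation_scores(observed, models).items(),
                key=lambda kv: kv[1], reverse=True)
```

Even with only two features visible, such a score would propose the mug and teapot models as candidates for detailed analysis while leaving the ball uninvoked.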
In view of all these requirements, model invocation is clearly a complex problem. This chapter presents a solution that embodies ideas on association networks, object description and representation, and parallel implementations. In the first section, the relevant aspects of the problem are discussed. The second presents a theoretical formulation of the proposed solution, the third shows examples of the process, the fourth discusses related work, and the last evaluates the theory.
The work described here builds on the original work by Fisher and many significant improvements by Paechter.