Image understanding is a diverse and active field of research; even an exhaustive review of a single subtopic, such as two-dimensional segmentation, could fill several volumes. Hence, there are many different approaches one could use to solve any given image understanding task, and even after trading off complexity, generality, cost, and a host of other factors, many of them would still be worth pursuing. In this chapter, I have described several successful approaches to problems in segmentation, shape representation, and image understanding. I have shown how computational operators such as thresholding, edge detection, and ridge detection are used extensively in low-level image processing tasks. Snakes and active contour models have the potential to handle more difficult image understanding tasks that require a greater degree of autonomy. Still, higher-level models such as those proposed by Binford and Levitt are required to perform complex three-dimensional object recognition.

In summary, effective image understanding requires tight coupling between high-level, hierarchical, task-dependent models and low-level computational operators. The low-level operators are responsible for generating context-dependent data under the guidance of the high-level models, and the high-level agents synthesize the low-level data in a structured manner to make decisions about how to track boundaries, how to extract regions, how to aggregate information, and how to determine when an object has been recognized correctly.
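To make the role of the low-level operators concrete, the following minimal sketch (in Python with NumPy; the function names, the synthetic image, and the threshold value are illustrative assumptions rather than part of the systems described in this chapter) shows thresholding applied to a gradient-magnitude edge operator, with the threshold parameter standing in for the context supplied by a higher-level model.

```python
import numpy as np

def gradient_magnitude(image):
    """Approximate the gradient magnitude with central differences."""
    gy, gx = np.gradient(image.astype(float))
    return np.hypot(gx, gy)

def threshold(image, value):
    """Binary threshold: 1 where the image exceeds the given value."""
    return (image > value).astype(np.uint8)

def extract_edges(image, edge_threshold):
    """Low-level operator: an edge map from the gradient magnitude and a
    context-dependent threshold chosen by a higher-level model."""
    return threshold(gradient_magnitude(image), edge_threshold)

if __name__ == "__main__":
    # Synthetic test image: a bright square on a dark background.
    img = np.zeros((64, 64))
    img[16:48, 16:48] = 1.0

    # The threshold here stands in for guidance from a high-level model;
    # in practice it would be chosen from task-dependent context.
    edges = extract_edges(img, edge_threshold=0.25)
    print("edge pixels:", int(edges.sum()))
```

In a full system, the output of such an operator would be only an intermediate result: the high-level model would decide where to apply it, how to set its parameters, and how to aggregate the resulting edge or region evidence into a recognition decision.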