Computer vision is the process of extracting useful information from images in order to perform a specific task. This practical emphasis is often overlooked in algorithmic research, but it is an important part of the definition: an algorithm that delivers information of no practical use will never be used. The first rule of algorithmic research is therefore to specify the information we wish to obtain from the image. We can generally expect that, once obtained, this information will serve as the basis for subsequent action, resulting from a decision based on the data. The decision process may well require complex evaluation of several sources of data, and for this reason the practical use of computer vision is closely related to artificial intelligence.
Given that we wish to determine information for a particular purpose, we now need to know whether there is an optimal way of presenting these data. Clearly, a decision-making process based on delivered information crucially requires an estimate of the expected success of a particular outcome given the data. A successful outcome can be compromised in two ways: by a failure in the action itself, or by an error in the data. As a consequence, an algorithm must deliver not only an estimate of the required data but also an estimate of its reliability. Anything less will make subsequent decision making unreliable, and the algorithm could then never form part of a practical system. The most common form of such reliability information is the error covariance. Computation and manipulation of these quantities is thus fundamental to much computational research.
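As a minimal sketch of what manipulating error covariances involves, the following shows first-order (linear) error propagation: if an estimate x has covariance Sigma, then a derived quantity Ax has covariance A Sigma A^T. The particular matrices here are illustrative assumptions, not taken from the text.

```python
import numpy as np

def propagate_covariance(A, sigma):
    """First-order error propagation: Cov(A x) = A Sigma A^T.

    A     : linear transform applied to the estimated quantity x
    sigma : covariance matrix of the estimate x
    """
    return A @ sigma @ A.T

# Illustrative example (assumed values): two independent measurements
# with variances 1 and 2, combined as their sum and difference.
A = np.array([[1.0, 1.0],
              [1.0, -1.0]])
sigma = np.diag([1.0, 2.0])
cov_out = propagate_covariance(A, sigma)
# The off-diagonal terms show that the derived quantities are now correlated.
```

A decision stage downstream would consume `cov_out` alongside the estimate itself, exactly as the paragraph above argues: the estimate alone is not enough.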
The most direct information regarding the successful outcome of a particular decision is the posterior (Bayes) probability: the probability that a particular event is true given a particular observation. Knowledge of the Bayes probabilities of outcomes, given a set of alternative states, allows a direct assessment of the merits of attempting alternative actions. Probability theory is regarded as the only self-consistent computational framework for data analysis and decision making, so it is not surprising that it forms the basis of all statistical analysis processes. Further, an algorithm which makes correct use of all available data must deliver an optimal result.
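The posterior probabilities described above can be computed directly from Bayes' rule. The sketch below assumes a set of mutually exclusive states with known priors and known likelihoods of the observation under each state; the numerical values are illustrative, not from the text.

```python
def posterior(priors, likelihoods):
    """Bayes' rule over mutually exclusive, exhaustive states.

    priors      : P(state_i) for each state
    likelihoods : P(observation | state_i) for each state
    Returns P(state_i | observation) for each state.
    """
    joint = [p * l for p, l in zip(priors, likelihoods)]
    evidence = sum(joint)  # P(observation), the normalising constant
    return [j / evidence for j in joint]

# Two states ("event", "no event") with an assumed prior of 0.3,
# and an observation that is 4x more likely under the event.
post = posterior([0.3, 0.7], [0.8, 0.2])
```

Here the observation raises the probability of the event from the prior 0.3 to roughly 0.63, which is exactly the kind of quantity a decision stage needs in order to weigh alternative actions.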