Automated methods for parsing images have been a holy grail of computer vision for many years, and the techniques used have developed significantly over time. Many researchers have pushed forward probabilistic approaches to image understanding. Such methods are valuable because the probabilistic machinery provides consistent ways to represent prior knowledge and to combine information from a multitude of sources. Work in this area can be either high level (object classification and connectivity) or low level (distributions over local regions or features in an image). Some of the work I am or have been involved in is outlined below.
We give details of a few of these approaches here; for more information, see the related publications.
JPEG compression utilises quantisation of the discrete cosine transform (DCT) for lossy encoding. However, standard decompression simply treats the quantised values as if they were the true coefficients. In fact we can obtain prior information about the values the DCT coefficients take, and combine that with constraints across JPEG blocks to reduce the blocky artefacts that JPEG compression creates.
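The key observation can be sketched in a toy example. Quantisation stores round(c/Q) for each DCT coefficient c with step Q, so the true coefficient is known to lie within ±Q/2 of the dequantised value; a probabilistic decoder can pick any value in that interval favoured by a prior, rather than the interval midpoint. The sketch below (my own illustration, not the paper's implementation; the constant quantisation table and the Laplacian-style shrink-to-zero prior are simplifying assumptions) demonstrates the quantisation constraint on an 8x8 block:

```python
import numpy as np

def dct_matrix(n=8):
    # Orthonormal DCT-II basis matrix, so D @ D.T = I.
    k = np.arange(n)
    M = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    M[0] *= 1 / np.sqrt(n)
    M[1:] *= np.sqrt(2 / n)
    return M

D = dct_matrix()

def dct2(block):
    return D @ block @ D.T

def idct2(coef):
    return D.T @ coef @ D

# Hypothetical constant quantisation table (real JPEG tables vary by frequency).
Q = np.full((8, 8), 16.0)

rng = np.random.default_rng(0)
block = rng.normal(0.0, 30.0, (8, 8))

coef = dct2(block)
quantised = np.round(coef / Q)   # what the JPEG file stores
dequantised = quantised * Q      # what a standard decoder uses directly

# Quantisation constraint: the true coefficient lies in [deq - Q/2, deq + Q/2].
lo, hi = dequantised - Q / 2, dequantised + Q / 2
assert np.all(coef >= lo - 1e-9) and np.all(coef <= hi + 1e-9)

# Toy MAP estimate under a sharply peaked zero-mean prior: shrink each
# coefficient towards zero, but only as far as its constraint interval allows.
map_coef = np.clip(0.0, lo, hi)
recon = idct2(map_coef)
```

A full method would replace the shrink-to-zero step with per-frequency coefficient priors and add smoothness constraints across neighbouring blocks, which is what suppresses the block-boundary artefacts.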