All the work I undertake can be understood as probabilistic modelling. In this section the focus is on the machine learning advances generated by the work I am involved in, rather than the specific modelling benefits to particular domains. This does not cover everything; rather, it is a selection of a few things that may be of interest.
Structural equation models can be seen as a generalisation of Gaussian belief networks. They do not have the acyclicity constraints that belief networks have, and are a more natural representation for modelling the long-term activity of a dynamic, interacting process. In particular, they have become a standard tool for connectivity analysis in fMRI. However, sometimes we need to learn structural equation models from data rather than specifying them by hand.
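As a rough illustration (not any specific model from this work), the sketch below sets up a linear-Gaussian structural equation model whose connectivity matrix B contains a feedback loop, something the acyclicity constraint of a belief network would rule out, and shows how the implied covariance and samples follow from x = Bx + e. The matrix values and noise scales are made up for the example.

```python
import numpy as np

# Minimal sketch of a linear structural equation model (SEM):
#   x = B x + e,   e ~ N(0, diag(s^2)).
# Unlike a Gaussian belief network, B need not be acyclic; provided
# (I - B) is invertible, the implied distribution is
#   x ~ N(0, (I - B)^{-1} diag(s^2) (I - B)^{-T}).

rng = np.random.default_rng(0)
d = 3

# Connectivity matrix with a feedback loop (x0 -> x1 -> x2 -> x0),
# which a belief network's acyclicity constraint would forbid.
B = np.array([[0.0, 0.0, 0.3],
              [0.5, 0.0, 0.0],
              [0.0, 0.4, 0.0]])
noise_std = np.array([1.0, 0.5, 0.8])

# Implied covariance of the observed variables.
A = np.linalg.inv(np.eye(d) - B)
Sigma = A @ np.diag(noise_std**2) @ A.T

# Sampling: draw the noise, then solve the simultaneous equations.
e = rng.normal(scale=noise_std, size=(1000, d))
x = e @ A.T  # each row satisfies x = B x + e

print("model covariance:\n", Sigma)
print("sample covariance:\n", np.cov(x, rowvar=False))
```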
There is a question regarding the use and appropriate application of generalised belief propagation. The FFT is a deterministic map from data to Fourier space; with missing data, however, the FFT becomes an inference problem. I ask the question: "Is it still possible to utilise the distributed structure of the FFT network in the noisy or missing data scenario?" This project investigates whether propagation techniques might provide a useful answer.
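As a toy illustration of the point (a brute-force linear-Gaussian calculation, not a propagation algorithm on the FFT network), the sketch below treats Fourier-style coefficients as latent variables with a Gaussian prior, observes only a noisy subset of the time samples, and recovers the coefficients by posterior inference. The signal, noise level and prior variance are invented for the example; the open question is whether message passing on the FFT's butterfly structure could compute this in a distributed way.

```python
import numpy as np

# With complete, noise-free data the Fourier coefficients are a
# deterministic map of the signal (the FFT).  With noisy, partially
# observed data they become latent variables, and we can only compute
# a posterior over them.  Here that posterior is found by direct
# linear-Gaussian inference.

rng = np.random.default_rng(1)
N = 64
t = np.arange(N)

# Real Fourier-style design matrix (cosine and sine features).
freqs = np.arange(1, N // 2)
Phi = np.column_stack(
    [np.ones(N)]
    + [np.cos(2 * np.pi * k * t / N) for k in freqs]
    + [np.sin(2 * np.pi * k * t / N) for k in freqs]
)

# Ground-truth signal: two tones; roughly 40% of samples go missing.
x_true = np.cos(2 * np.pi * 3 * t / N) + 0.5 * np.sin(2 * np.pi * 7 * t / N)
noise_std = 0.1
observed = rng.random(N) < 0.6
y = x_true[observed] + noise_std * rng.normal(size=observed.sum())

# Gaussian prior on coefficients, Gaussian likelihood on observed samples.
prior_var = 1.0
Phi_obs = Phi[observed]
A = Phi_obs.T @ Phi_obs / noise_std**2 + np.eye(Phi.shape[1]) / prior_var
b = Phi_obs.T @ y / noise_std**2
coef_mean = np.linalg.solve(A, b)   # posterior mean over the coefficients

x_reconstructed = Phi @ coef_mean
print("reconstruction RMSE:", np.sqrt(np.mean((x_reconstructed - x_true) ** 2)))
```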
In the context of dynamic tree models, we needed to develop variational techniques that captured many of the inherent posterior dependencies that mean-field methods do not provide. The method has since come to be understood as a member of the conjugate-exponential class of variational methods, but it takes a form that is particularly useful when there are multiple network structures that are possible explanations for a given data record.
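The following toy example (not the dynamic-tree algorithm itself) shows the limitation that motivated this: for a correlated Gaussian posterior, the fully factorised mean-field solution matches the means but has zero correlation and underestimates the marginal variances, so the posterior dependencies are lost.

```python
import numpy as np

# Why a factorised q(x) = q(x1) q(x2) is not enough: for a Gaussian
# posterior with precision matrix Lambda, the mean-field fixed point
# gives q_i variance 1 / Lambda_ii, which carries no correlation and
# is smaller than the true marginal variance.

# A correlated bivariate Gaussian "posterior" (values invented).
Sigma = np.array([[1.0, 0.9],
                  [0.9, 1.0]])
Lambda = np.linalg.inv(Sigma)            # precision matrix

mf_var = 1.0 / np.diag(Lambda)           # mean-field marginal variances

print("true marginal variances:      ", np.diag(Sigma))
print("mean-field marginal variances:", mf_var)
print("true posterior correlation:   ", Sigma[0, 1] / np.sqrt(Sigma[0, 0] * Sigma[1, 1]))
print("mean-field correlation:        0.0 (by construction)")
```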
Dynamic trees are useful structures not just for modelling images, but also for hierarchical topic models and other dynamic hierarchical scenarios.
This is really an introduction rather than an area of research, but knowing the workings of belief networks is a prerequisite for much of what I do.
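For readers who want the bare mechanics, the sketch below (with made-up conditional probability tables) shows the two basic workings of a discrete belief network: the factorisation of the joint distribution over a directed acyclic graph, and ancestral sampling from it.

```python
import numpy as np

# A three-node belief network a -> b -> c with joint
#   p(a, b, c) = p(a) p(b | a) p(c | b).

rng = np.random.default_rng(2)

p_a = np.array([0.6, 0.4])                   # p(a)
p_b_given_a = np.array([[0.7, 0.3],          # p(b | a=0)
                        [0.2, 0.8]])         # p(b | a=1)
p_c_given_b = np.array([[0.9, 0.1],          # p(c | b=0)
                        [0.4, 0.6]])         # p(c | b=1)

def sample_joint(n):
    """Draw n samples (a, b, c) by ancestral sampling: each variable
    is drawn given the sampled values of its parents."""
    samples = []
    for _ in range(n):
        a = rng.choice(2, p=p_a)
        b = rng.choice(2, p=p_b_given_a[a])
        c = rng.choice(2, p=p_c_given_b[b])
        samples.append((a, b, c))
    return samples

def joint_prob(a, b, c):
    """Probability of a configuration, read off the factorisation."""
    return p_a[a] * p_b_given_a[a, b] * p_c_given_b[b, c]

print(sample_joint(5))
print("p(a=1, b=1, c=0) =", joint_prob(1, 1, 0))
```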