My work focuses on probabilistic modelling as the appropriate approach to handling uncertainty in inferential problems. The Bayesian approach to modelling stems from two recognitions: that prior knowledge or belief must be included before anything at all can be said about data, and that probability theory is both necessary and sufficient for reasoning about uncertainty.
In one sense, problems in Bayesian inference can be split into two different endeavours. The first is the matter of writing down "good" priors, where "good" is measured by how well the prior probability distributions represent the degrees of belief we bring to a problem. This is non-trivial: understanding what beliefs to bring to a problem is hard enough, and understanding the implications of a particular probabilistic assumption (especially in high dimensions) involves the hard work of probability theorists. Mathematics may be tautological, but it is not trivial. The second area is, as John Skilling put it, "the business of doing the sums". Bayesian inference involves definite integrals which often have no analytic solution, and the industry of obtaining good approximate solutions to Bayesian integrals is a whole subject in itself.
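To make "the business of doing the sums" concrete, here is a minimal sketch of the simplest such approximation: estimating the evidence integral p(D) = ∫ p(D|θ) p(θ) dθ by simple Monte Carlo over prior draws. The data, prior, and likelihood below are entirely illustrative toy choices, not taken from any particular problem.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (all numbers illustrative): three observations assumed iid
# N(theta, 1), with a N(0, 2^2) prior on the mean parameter theta.
data = np.array([0.9, 1.1, 1.3])

def log_likelihood(theta):
    # log p(D | theta) for a vector of theta samples at once
    sq = np.sum((data[None, :] - theta[:, None]) ** 2, axis=1)
    return -0.5 * sq - 0.5 * len(data) * np.log(2 * np.pi)

# Simple Monte Carlo: draw theta from the prior and average the likelihood.
theta_samples = rng.normal(0.0, 2.0, size=200_000)
evidence = np.mean(np.exp(log_likelihood(theta_samples)))
```

Even this crude estimator makes the point: the integral itself is the object of study, and more sophisticated schemes (importance sampling, MCMC, variational bounds) exist because naive averaging breaks down as the dimension grows.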
Practical Bayesian modelling for large-scale structured problems (such as those more traditionally seen as the remit of machine learners rather than statisticians, but which are really the business of all parties) involves a mixture of the above endeavours. We are rarely at liberty to write down the prior distributions we wish to use without considering the computational difficulties we may be signing up for. On the other hand, the choice of approximate inference technique can be tailored and furthered by reference to specific problems. It is this mixture which directs my work and the work of my group.
One area of application is astronomy. Astronomy is becoming increasingly data-driven, and there are ever more areas within it where machine learning skills are useful. My current project involves setting up a collaboration between astronomers and the machine learning community within Edinburgh. A number of us are exploring collaborations of mutual benefit, and it has become clear that there are many areas in which known probabilistic modelling techniques can be applied and for which new techniques can be developed.
One simple problem involves the identification of artefacts in astronomical data due to satellite tracks, diffraction spikes and other related phenomena. Satellite tracks, for example, appear as elliptical sections which object-recognition programs generally interpret as a large number of separate objects lying along the ellipse. Recognising which objects are generated by a satellite track is important so that they are not included in subsequent calculations or assessments. Such artefacts number only a few among the million or more objects typically extracted from a single plate.
Initial approaches have proved successful, and a demonstration of satellite track identification can be seen on the demonstrations page.
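The actual methods used in that work are not detailed here, but the flavour of the problem can be sketched with a simple RANSAC-style search: treat a track locally as a straight line (a rough approximation to a short elliptical section) and flag detections lying close to the best-supported line. Everything below, including the synthetic "plate", is hypothetical illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def flag_track_candidates(xy, n_trials=500, tol=1.0, min_inliers=20):
    # RANSAC-style search: repeatedly draw two detections, form the line
    # through them, and keep the line supported by the most detections.
    best = np.zeros(len(xy), dtype=bool)
    for _ in range(n_trials):
        i, j = rng.choice(len(xy), size=2, replace=False)
        p, q = xy[i], xy[j]
        d = q - p
        norm = np.hypot(d[0], d[1])
        if norm == 0:
            continue
        # perpendicular distance of every detection to the line through p, q
        dist = np.abs((xy[:, 0] - p[0]) * d[1]
                      - (xy[:, 1] - p[1]) * d[0]) / norm
        inliers = dist < tol
        if inliers.sum() > best.sum():
            best = inliers
    return best if best.sum() >= min_inliers else np.zeros(len(xy), dtype=bool)

# Synthetic plate: 200 background objects plus 40 detections along a track.
background = rng.uniform(0, 1000, size=(200, 2))
t = np.linspace(100, 900, 40)
track = np.stack([t, 0.5 * t + 50 + rng.normal(0, 0.3, 40)], axis=1)
xy = np.vstack([background, track])
flags = flag_track_candidates(xy)
```

A probabilistic treatment would go further, e.g. placing a model over track parameters and object memberships, but the geometric intuition is the same: track-generated objects are distinguished by their alignment, not by their individual appearance.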
Prior to this work I was involved in studying hierarchical models and belief networks as tools for image segmentation and tracking. Variational methods were developed for inference in Dynamic Trees, and variants of dynamic tree methods have been proposed, developed and tested. Before that I was developing Gaussian process models. Gaussian process methods suffer from two significant problems: poor scaling with the number of data points, and the necessity of a joint Gaussian distribution over the sample points.
I have tackled the problem of using Gaussian process methods for signals generated from switching sources. This involves a latent variable model in which the configuration of the latent variables affects the covariance matrix of the Gaussian process. Sampling methods were used to approximate the integration over the latent variable space.
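The following is a minimal sketch of that idea, not the model actually used: a binary latent label assigns each time point to one of two sources, the covariance matrix is built from the labels (points covary only within a source, each source with its own lengthscale), and the latent space is integrated out by naive Monte Carlo over label configurations. All settings are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

def rbf(x, lengthscale):
    # squared-exponential covariance on 1-D inputs
    d = x[:, None] - x[None, :]
    return np.exp(-0.5 * (d / lengthscale) ** 2)

def switching_cov(x, z, lengthscales, noise=1e-2):
    # Points covary only when their latent source labels agree; each
    # source gets its own lengthscale. One simple illustration of how a
    # latent configuration can shape the GP covariance matrix.
    K = np.zeros((len(x), len(x)))
    for s, ell in enumerate(lengthscales):
        mask = z == s
        K[np.ix_(mask, mask)] = rbf(x[mask], ell)
    return K + noise * np.eye(len(x))

def log_marginal(y, K):
    # log N(y | 0, K) via a Cholesky factorisation
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return (-0.5 * y @ alpha - np.log(np.diag(L)).sum()
            - 0.5 * len(y) * np.log(2 * np.pi))

# Monte Carlo over latent configurations: sample z from a uniform prior
# over two sources and average p(y | z) in log space.
x = np.linspace(0.0, 1.0, 30)
y = np.sin(6 * x) + rng.normal(0.0, 0.1, 30)
logs = np.array([log_marginal(y, switching_cov(x, rng.integers(0, 2, 30),
                                               [0.2, 0.05]))
                 for _ in range(200)])
m = logs.max()
log_evidence = m + np.log(np.mean(np.exp(logs - m)))
```

Averaging in log space with a max shift avoids underflow; in practice one would sample the labels with MCMC rather than uniformly, since most random configurations explain the data poorly.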
It is possible to improve Gaussian process scaling in circumstances where the sampling points can be chosen, even in high-dimensional systems. These form a special but important class of problems for which the GP covariance matrix is Toeplitz, and can therefore be inverted much faster, significantly increasing the speed. To obtain this form, `truncated' covariance functions need to be used.
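A small sketch of the Toeplitz point (the `truncated' covariance construction itself is not reproduced here): for evenly spaced 1-D sample points and a stationary covariance, K[i, j] depends only on |i − j|, so the whole matrix is determined by its first column and the system K α = y can be solved by Levinson recursion in O(n²) rather than the O(n³) of a general solve.

```python
import numpy as np
from scipy.linalg import solve_toeplitz

rng = np.random.default_rng(3)
n = 500
x = np.linspace(0.0, 10.0, n)           # evenly spaced sample points
y = np.sin(x) + 0.1 * rng.normal(size=n)

# Stationary (squared-exponential) covariance: the first column defines
# the whole symmetric Toeplitz covariance matrix.
first_col = np.exp(-0.5 * ((x - x[0]) / 0.5) ** 2)
first_col[0] += 1e-2  # observation-noise jitter on the diagonal

# Levinson-based solve; equivalent to np.linalg.solve(K, y) but O(n^2).
alpha = solve_toeplitz(first_col, y)
```

For gridded inputs in higher dimensions the covariance matrix becomes block-Toeplitz, and related fast solvers apply.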
Before this I worked on the development and analysis of a new learning algorithm for Hopfield networks. The algorithm retains important flexibilities of other learning rules but allows greater storage capacity. Furthermore, introducing correlated patterns does not severely degrade performance in the way it does for the Hebb rule. We also find that the basins of attraction are more regular in shape for the new rule, and that their sizes are more evenly spread.
A variant of this algorithm can be shown to have palimpsest or forgetful properties. This technique has much higher capacity than other palimpsest learning rules. The characteristics of the learning rule are demonstrated by showing that it acts as an iterated function sequence on the space of weight matrices.
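The new rule itself is not reproduced here; for concreteness, this is a minimal sketch of the classical Hebb-rule baseline against which such comparisons are made: store a few random patterns, corrupt one, and let the network relax back to it. Network size, pattern count and corruption level are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(4)

# Classical Hebb-rule Hopfield network (the baseline, not the new rule).
n_units, n_patterns = 100, 5
patterns = rng.choice([-1, 1], size=(n_patterns, n_units))

# Hebbian weights: W = (1/N) * sum_mu xi^mu (xi^mu)^T, zero diagonal.
W = patterns.T @ patterns / n_units
np.fill_diagonal(W, 0.0)

def recall(state, steps=20):
    # Synchronous updates until a fixed point or the step limit.
    for _ in range(steps):
        new = np.sign(W @ state)
        new[new == 0] = 1
        if np.array_equal(new, state):
            break
        state = new
    return state

# Corrupt a stored pattern in 10 positions and let the network clean it up.
probe = patterns[0].copy()
flip = rng.choice(n_units, size=10, replace=False)
probe[flip] *= -1
recovered = recall(probe)
```

At this low loading (5 patterns in 100 units, well below the Hebb rule's ~0.14N capacity) recall is reliable; the degradation as patterns become numerous or correlated is exactly where alternative learning rules earn their keep.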
I am interested in neural networks, Bayesian methods, probabilistic graphical models, forms of structural priors for images, Bayesian approaches to data mining, Gaussian process modelling, sampling methods, Hopfield networks, probability and combinatorics, fractal methods, dynamical systems models, bifurcation methods, analytical methods for neural networks, inference methods, time series analysis, interpretational developments in neurobiological computation, statistical learning theory, traffic flow models, transport models, network assignment models, and the theoretical relationships between different network types and dynamics.