Chris Williams: Research Interests
I am interested in a wide range of theoretical and practical issues in
machine learning, statistical pattern recognition, probabilistic
graphical models and computer vision. This includes theoretical
foundations, the development of new models and
algorithms, and applications. At a high level my interests can be
summarized as "finding structure in data".
My main areas of interest are described below.
Further information on
past research grants/projects is available.
- Image Interpretation.
Object recognition can be cast in a statistical framework. This
approach argues for image understanding using generative models,
i.e. explaining an image by instantiated objects.
Recent work in this direction is the paper with Ali Eslami on
Factored Shapes and Appearances for Parts-based Object Understanding.
Other recent work looks at lower-level edge-based and region-based models
of images. The paper with Jyri Kivinen on
Transformation Equivariant Boltzmann Machines considers learning
to group edges while imposing rotation equivariance (i.e. the system
will detect a specified configuration of edges at any rotation). The paper
with Nicolas Heess on
Learning generative texture models with extended
Fields-of-Experts learns higher-order random field models of
visual texture. These contour- and region-based models can be combined
to form models of textured regions.
I am also one of the organizers of the PASCAL
Visual Object Classes challenges concerning the
recognition of object classes (e.g. cars, cats, etc) in images.
In work with Michalis
Titsias we learned models of multiple objects that occur in many
images; click here for further information, movies, etc.
In earlier work (with Nick Adams, Steve Felderhof, Xiaojuan Feng, and
Amos Storkey) we studied
the use of tree-structured belief networks (TSBNs) and Dynamic Trees
(DTs) as models of images. DTs are
TSBNs that reconfigure their structure to fit a given input image.
Click here and
here for further information.
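The tree-structured models above admit exact inference by message passing. Below is a minimal sketch of the upward (sum-product) pass in a toy TSBN, not the actual DT code; the tree shape, binary states, and all probabilities are illustrative assumptions:

```python
import numpy as np

# Toy tree-structured belief network (TSBN) with binary states:
# leaves stand for small image blocks (0 = background, 1 = object),
# and each node tends to inherit its parent's state.  The tree shape
# and all numbers here are invented for illustration.
CPT = np.array([[0.9, 0.1],      # P(child state | parent state)
                [0.1, 0.9]])
root_prior = np.array([0.5, 0.5])

def subtree_likelihood(node):
    """Upward sum-product pass: returns L with
    L[s] = P(observed leaves in this subtree | node state = s)."""
    if isinstance(node, int):                 # an observed leaf label
        L = np.zeros(2)
        L[node] = 1.0
        return L
    messages = []
    for child in node:                        # node is a tuple of subtrees
        Lc = subtree_likelihood(child)
        messages.append(CPT @ Lc)             # sum out the child's state
    return np.prod(messages, axis=0)

def evidence_probability(tree):
    """P(observed leaf labelling) under the TSBN."""
    return float(root_prior @ subtree_likelihood(tree))
```

A coherent labelling (e.g. all four leaves "object") receives higher probability than an incoherent one, reflecting the grouping the tree imposes; Dynamic Trees go further by also adapting the tree's connectivity to the input image.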
- Models for Understanding
Time-Series/Condition Monitoring: Data that comes
from a set of sensors recording through time can have rich
structure. For example, premature babies in intensive care
are monitored by many sensors. Our goal is to carry out
condition monitoring, to identify
different types of artifact and pathology in real time based on
characteristic patterns in the data. See
condition monitoring of premature babies for more details (work carried out with
Prof Neil McIntosh, Edinburgh Royal Infirmary).
This approach is being extended to
monitoring in an adult neuro ICU
at the Southern General Hospital, Glasgow.
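The condition-monitoring idea, inferring a hidden regime (such as normal physiology vs. sensor artifact) from a stream of readings as they arrive, can be sketched with the forward recursion of a simple two-state switching model. This is a toy illustration, not the actual monitoring system, and all probabilities and emission parameters are invented:

```python
import numpy as np

# Two hidden regimes: 0 = normal physiology, 1 = sensor artifact.
# All numbers below are illustrative assumptions.
A = np.array([[0.95, 0.05],      # regime transition probabilities
              [0.10, 0.90]])
means = np.array([0.0, 5.0])     # emission mean of a reading in each regime
std = 1.0                        # shared emission standard deviation

def gaussian_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

def filter_regimes(readings):
    """Forward (filtering) recursion: p(regime_t | readings_1..t).

    Each new reading updates the belief immediately, so a regime
    change can be flagged in real time as the stream arrives."""
    belief = np.array([0.99, 0.01])                    # initial regime beliefs
    history = []
    for x in readings:
        belief = belief @ A                            # predict the next regime
        belief = belief * gaussian_pdf(x, means, std)  # weight by the reading
        belief = belief / belief.sum()                 # renormalise
        history.append(belief.copy())
    return np.array(history)
```

Running this on a stream that jumps from values near 0 to values near 5 shows the filtered probability of the artifact regime rising sharply at the changepoint.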
- Gaussian Processes. Since 1995 I have been working on the
use of Gaussian processes (GPs) for supervised learning problems with a
number of collaborators including Carl
Rasmussen, David Barber, Francesco Vivarelli and Matthias
Seeger. GP prediction works by placing a Gaussian
process prior over functions and conditioning this on observations in
order to make predictions. There are close similarities
between GPs, Support Vector Machines (SVMs) and other
kernel machines, see, e.g. http://www.kernel-machines.org/. An overview of this work can be obtained from the book
Gaussian Processes for
Machine Learning (C. E. Rasmussen and C. K. I. Williams,
MIT Press, 2006). Recent work with Edwin Bonilla
and Kian Ming Adam Chai
concerns multi-task learning in a Gaussian process framework.
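The prediction scheme just described (place a GP prior over functions, then condition on the observations) can be sketched in a few lines of NumPy. This is a minimal illustration assuming a squared-exponential kernel with made-up hyperparameters, not code from the book:

```python
import numpy as np

def rbf_kernel(a, b, lengthscale=1.0, variance=1.0):
    # Squared-exponential covariance between 1-D input sets a and b.
    d = a[:, None] - b[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

def gp_predict(X, y, Xstar, noise=1e-4):
    # Condition the GP prior on observations (X, y) and return the
    # posterior mean and variance at the test inputs Xstar.
    K = rbf_kernel(X, X) + noise * np.eye(len(X))
    Ks = rbf_kernel(X, Xstar)
    Kss = rbf_kernel(Xstar, Xstar)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mean = Ks.T @ alpha                         # posterior mean
    v = np.linalg.solve(L, Ks)
    var = np.diag(Kss) - np.sum(v * v, axis=0)  # posterior variance
    return mean, var
```

The Cholesky-based solves are the numerically stable way to apply K^{-1}; note that far from the data the posterior variance reverts to the prior variance.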
- Unsupervised Learning: In addition to the work on
learning multiple objects from images, I also worked on GTM, the Generative Topographic Mapping (along with Chris
Bishop and Markus
Svensen). I have also worked on hierarchical mixture
models, and probabilistic minor components analysis/extreme components
analysis (with Felix Agakov).
- Other: I am also interested in trying to understand the
processing and representations used in animal visual systems.