I'm interested in understanding human communication using machine learning and statistical models, and constructing systems that can recognize and interpret communication scenes. My research career is grounded in speech processing, and our approaches start from the signals.

Speech Recognition and Synthesis

How can we improve conversational speech recognition? How can we make speech synthesis more natural? We are developing speech recognition systems that adapt or normalise to new domains and speakers, that can be ported across languages, and that are robust to different acoustic environments. We are particularly interested in models based on deep neural networks, for both acoustic modelling and language modelling. Current research students in speech recognition and synthesis include Pawel Swietojanski, Siva Reddy Gangireddy, and Joachim Fainberg; I'm also working with Ben Krause on recurrent neural networks. Researchers working with me on speech recognition include Peter Bell and Liang Lu. In speech synthesis, I try to keep up with the great work of Simon King, Junichi Yamagishi, and their colleagues, as well as working with Korin Richmond on articulatory modelling.
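To make the acoustic-modelling idea concrete, here is a minimal sketch (not code from any of our projects) of the framewise computation in a hybrid DNN/HMM acoustic model: a feed-forward network maps an acoustic feature vector for one frame to a posterior distribution over phone classes. The dimensions and weights below are illustrative placeholders, not trained values.

```python
import math

def relu(x):
    return [max(0.0, v) for v in x]

def affine(weights, bias, x):
    # weights: one row per output unit
    return [sum(w_i * x_i for w_i, x_i in zip(row, x)) + b
            for row, b in zip(weights, bias)]

def softmax(x):
    # subtract the max for numerical stability
    m = max(x)
    exps = [math.exp(v - m) for v in x]
    s = sum(exps)
    return [e / s for e in exps]

def phone_posteriors(frame, w1, b1, w2, b2):
    """One hidden layer with ReLU, then a softmax over phone classes."""
    hidden = relu(affine(w1, b1, frame))
    return softmax(affine(w2, b2, hidden))

# Toy dimensions: a 3-d feature frame, 4 hidden units, 2 phone classes.
# Placeholder weights for illustration only.
frame = [0.5, -1.2, 0.3]
w1 = [[0.1, 0.2, -0.1], [0.0, 0.3, 0.5],
      [-0.2, 0.1, 0.4], [0.3, -0.1, 0.2]]
b1 = [0.0, 0.1, -0.1, 0.05]
w2 = [[0.2, -0.3, 0.1, 0.4], [-0.1, 0.2, 0.3, -0.2]]
b2 = [0.0, 0.0]

posteriors = phone_posteriors(frame, w1, b1, w2, b2)
```

In a real system the network is much deeper, the features are stacked filterbank or cepstral frames, and the posteriors are converted to scaled likelihoods for HMM decoding; this sketch only shows the per-frame forward pass.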

Multimodal Interaction

Human communication is factored across more than one modality. The analysis and interpretation of multimodal interaction presents a number of challenges, ranging from ways to model multiple asynchronous streams of data to the construction of systems that can interpret aspects of multiparty human communication. Much of this work concerns augmenting communication in meetings: in the AMI and AMIDA Integrated Projects, and in the InEvent project. I work with Catherine Lai, Jonathan Kilgour, and Jean Carletta in these areas, and I try to keep up with Hiroshi Shimodaira's work on synthesising conversational agents and social signals.

Projects

I'm principal investigator of the Natural Speech Technology, SUMMA, and uDialogue projects. Previous projects include the AMI and AMIDA Integrated Projects.

PhD Students

We are always looking for excellent research students: see the page about PhD opportunities at CSTR. I am not looking for visiting interns for the foreseeable future.

Teaching

This year I am teaching the Machine Learning Practical and Automatic Speech Recognition (ASR), the latter jointly with Hiroshi Shimodaira.