Jon Oberlander has been Professor of Epistemics at the University of Edinburgh since 2005. He works on getting computers to talk (or write) like individual people, so his research involves not only studying how people express themselves - face to face or online - but also building machines that can adapt themselves to people. He collaborates with linguists, psychologists, computer scientists and social scientists, and has long-standing interests in the uses of technology in the cultural heritage and creative industries. He was founding Director of the Scottish Informatics and Computer Science Alliance, and is now Director of the University's new Data Technology Institute, Director of the Institute for Language, Cognition and Computation, and Co-Director of the Centre for Design Informatics.
- Professor of Epistemics in the University of Edinburgh's School of Informatics.
- Honorary Professor in Heriot-Watt University's School of Mathematical and Computer Sciences.
- Fellow of the Royal Society of Edinburgh.
- Fellow of the British Computer Society.
- Member of the Science and Technology Advisory Panel of National Museums Scotland.
- Member of the Recognition Committee of Museums Galleries Scotland.
- Member of the Board of New Media Scotland.
- Within the University, affiliated to:
My main interests lie at the intersection of computational linguistics and cognitive science. The primary aim is to develop cognitively motivated computational and formal models of the ways in which different people produce fluent discourse. Such models underpin the proper design of systems that present data and teledata to users, tailoring them to users' individual needs and interests.
There are three main strands to my current research: discourse generation, individual differences, and multimodality.
- Intelligent Labelling. With Chris Mellish, Colin Matheson, Amy Isard and others, I have worked on 'intelligent labelling': the automatic tailoring of personalised text in electronic catalogues, through two EPSRC-funded projects. By combining techniques from natural language generation, hypertext, and user modelling, ILEX provides a core system for dynamic hypertext generation, and demonstrates its utility both for exploring museum collections and for browsing home shopping catalogues. The successful EU-funded MPIRO project investigated a generalisation of the approach to support multilingual generation, and the subsequent EU-funded INDIGO project explored multilingual, multimodal human-robot interaction in museums. We're currently working with Bitwink to port this approach, as part of the Talisman project.
- Affect in Communication. Recently, I have been focussing on modelling personality-based differences in discourse generation; this has led to publications with Alastair Gill, interests in personality and blogging, pursued with Scott Nowson, a patent application, and project funding through the Stanford Link. The CrAg project on Critical Agent Dialogues investigated dialogue agents with personality, exploring how personality affects the ways the agents adjust their behaviour to each other, and whether human observers like watching particular kinds of dialogue. Work on affect has led into sentiment analysis, and now text mining in both traditional archives and current social media.
- Multimodal reasoning and communication. With Keith Stenning and colleagues, I have investigated the relationships between graphicality and expressiveness. Combining diverse research methods, we have shown how differing multimodal presentations of the same material affect the ways that people with differing cognitive styles learn new formal systems. The MAGIC project, in collaboration with Pat Healey in London, Simon Garrod in Glasgow, and John Lee in Edinburgh, explored how conventions in the use of graphical notations arise from sequences of individual interactions. With Mary Ellen Foster and others, the EU-funded COMIC project (COnversational Multimodal Interaction with Computers) let us put intelligent labelling ideas from generation and synthesis (see above) together with those on multimodal interaction and problem solving. The EU-funded JAST project (Joint Action Science and Technology) took a deeper look at psycholinguistic processes during multimodal dialogue, and let us develop a new multimodal human-robot dialogue engine. The current EU-funded JAMES project builds on this, studying the acquisition of social rules for human-robot interaction.