I am interested in all aspects of intelligent behaviour that involve interaction among humans and/or machines. More specifically, I am interested in normative and descriptive models of communication, coordination, collaboration, and competition among artificial and human agents with different capabilities and objectives.

While my enthusiasm for these issues is motivated by fundamental questions regarding cognition and society, and is driven by the long-term vision of making AI more human-friendly, my research mostly emphasises the development of concrete architectures and algorithms that can support such reasoning about interaction.

Currently, I am particularly interested in human-machine collective intelligence (see the SmartSociety project), the design of fair and transparent data-driven algorithms (see the UnBias project), how machines can negotiate and evolve meaning (which we investigate in the ESSENCE network), and the ethics of AI, in particular how we can develop concrete architectures and algorithms for "safe" agents.

Over the past 15 years, I have worked on a variety of issues that all relate to these themes, including:

  • fair and diversity-aware task allocation mechanisms,
  • collaborative and strategic multiagent planning,
  • task-oriented ontology alignment,
  • automated synthesis of norms,
  • argumentation in plan-based environments,
  • planning and learning in agent dialogue,
  • collaborative agent-based machine learning,
  • multiagent reinforcement learning,
  • agent communication language semantics,
  • trust and reputation mechanisms, and
  • opponent modelling in games.

You can find papers on each of these topics on my publications page.

In Edinburgh, I lead the Agents Group. If you are interested in getting involved, please email me or any of the members of the group.