I am a Chair in Robot Systems in the School of Informatics at the University of Edinburgh (UK). Prior to my current role, I held academic appointments in the School of Computer Science at the University of Birmingham (UK), the Department of Electrical and Computer Engineering at The University of Auckland (NZ), and at Texas Tech University (USA). I received my Ph.D. in Electrical and Computer Engineering from The University of Texas at Austin. My primary research interests include knowledge representation and reasoning, cognitive systems, and interactive learning in the context of human-robot/human-agent collaboration. Specific research thrusts are described below.
My Curriculum Vitae (CV): (pdf) (ps)
My Research Statement: (pdf) (ps)
My Teaching Statement: (pdf) (ps)
My Publications.
My Research Group is part of the Institute of Perception, Action, and Behavior (IPAB) at the University of Edinburgh.
For students: if you are interested in working with me, please read this before you contact me.
Some recent work:
- We have developed a hybrid framework for ad hoc teamwork that combines the principles of step-wise iterative refinement and ecological rationality. It supports non-monotonic logical reasoning with prior commonsense knowledge, probabilistic reasoning, and incremental (and rapid) learning of domain knowledge and of models of other agents' behavior, enabling an agent to collaborate with others without prior coordination. This work has been published in Theory and Practice of Logic Programming (TPLP 2023, pdf) and at the AAAI Conference on Artificial Intelligence (AAAI 2023, pdf).
- We have developed frameworks for robot grasping and manipulation. This includes a framework that jointly considers safety and task-specific constraints in a three-level representation for each object class; this was presented at CASE 2022 (pdf). We have also developed a hybrid framework for smooth, variable impedance force-motion control of robot manipulators performing changing-contact manipulation tasks, i.e., tasks that involve making and breaking contact with objects and surfaces; this was presented at IROS 2021 (pdf). More recently, we have co-developed a benchmark for robot manipulation in industrial assembly tasks; this was published in Robotics and Automation Letters (RAL 2024, pdf).
- We have developed architectures that combine the principles of non-monotonic logical reasoning (with commonsense knowledge) and inductive learning to guide deep learning for visual scene understanding, visual question answering (VQA), and planning, and for providing relational descriptions of decisions in the context of assistive agents and robots. This work has been published in Autonomous Agents and Multi-Agent Systems (JAAMAS 2023, pdf). Prior work over the last five years has been published in journal articles (SNCS 2021 pdf, Front. in Robotics and AI 2019 pdf) and conference/workshop papers (RSS 2019 pdf, NMR 2022 pdf).
- We have contributed to the development of frameworks that combine inference based on commonsense knowledge with data-driven learning methods for tasks such as object-goal navigation and visual scene rearrangement, which require the robot to find objects and move them to desired places and configurations. This work has been published and presented at different venues (ROMAN 2023 pdf, ICRA 2023 pdf, CASE 2022 pdf1, pdf2). More recent work combines high-level task anticipation, based on generic knowledge encoded in LLMs, with fine-grained action planning and execution using a classical planner (project page).
- Prior work on theories of explanation (and explainable agency), intention, and affordance for human-robot collaboration is described in related papers (AMAI 2021 pdf, KI 2019 pdf), and implemented in our refinement-based architecture for knowledge representation and reasoning (JAIR 2019 pdf).
Robot Platforms
Some images of robots I have used recently in my research and education projects:
Next, images of some other robot platforms that I have worked with in the past: the AUV ENDURANCE for autonomous underwater navigation, a robot wheelchair for the physically challenged, the SONY ERS-7 Aibo robot playing soccer, the ERRATIC wheeled robot and the Nao humanoid robot for indoor exploration, and an unmanned aerial robot.
Currently, my students and I evaluate our algorithms on wheeled, humanoid, and aerial robot platforms. These algorithms are designed with the long-term goal of enabling robots to socially engage with humans; the corresponding capabilities can support, for instance, robots in assistive roles in elder care homes. The individual research thrusts are described below.
Knowledge Representation and Reasoning
Mobile robots often have to reason with different descriptions of incomplete commonsense domain knowledge and with uncertainty. We develop architectures that represent and reason with tightly coupled transition diagrams of the domain at different resolutions, with the fine-resolution transition diagram defined as a refinement of the coarse-resolution diagram. For any given goal, non-monotonic logical reasoning with commonsense knowledge at the coarse resolution provides a sequence of intentional abstract actions. Each abstract action is implemented as a sequence of concrete actions by identifying and reasoning probabilistically with the relevant part of the fine-resolution transition diagram, with the corresponding outcomes added to the coarse-resolution description for subsequent reasoning. These architectures encode theories of intention, affordance, and observation inspired by human cognition, and provide explanatory descriptions of the robot's decisions, the underlying knowledge and beliefs, and the experiences that informed these beliefs. This research thrust builds on my prior work that won a Distinguished Paper Award (at ICAPS 2008) and a Paper of Excellence Award (at ICDL-EpiRob 2012). For more details, please look at my publications.
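To make the coarse-to-fine loop concrete, the following is a minimal, runnable toy sketch. The domain, action names, and success probabilities are hypothetical placeholders, and the toy functions merely stand in for the non-monotonic logical reasoning and probabilistic reasoning components of the actual architecture.

```python
# A minimal, runnable toy sketch of the coarse-to-fine refinement loop described
# above. All names and values here are hypothetical placeholders, not the
# published architecture.
import random

COARSE_PLANS = {   # abstract plan per goal, as if produced by coarse-resolution reasoning
    "deliver(cup, office)": ["move(kitchen)", "pickup(cup)", "move(office)", "putdown(cup)"],
}
REFINEMENTS = {    # concrete actions implementing each abstract action
    "pickup(cup)": ["locate(cup)", "approach(cup)", "grasp(cup)", "lift(cup)"],
}

def coarse_plan(goal):
    """Stand-in for non-monotonic logical reasoning with commonsense knowledge."""
    return COARSE_PLANS[goal]

def refine(abstract_action):
    """Stand-in for zooming to the relevant part of the fine-resolution diagram."""
    return REFINEMENTS.get(abstract_action, [abstract_action])

def execute(concrete_action):
    """Stand-in for probabilistic execution and observation of outcomes."""
    return concrete_action, random.random() < 0.9   # succeed with probability 0.9

coarse_history = []   # outcomes added back to the coarse-resolution description
for abstract_action in coarse_plan("deliver(cup, office)"):
    for concrete_action in refine(abstract_action):
        coarse_history.append(execute(concrete_action))
print(coarse_history)
```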
Interactive Learning
In complex application domains, it is challenging to equip robots with comprehensive knowledge of domain dynamics. We develop algorithms and architectures that enable robots to interactively and cumulatively learn previously unknown action capabilities, actions, and their preconditions and effects, from sensor inputs and human feedback. These algorithms and architectures build on the principles of commonsense (logical) reasoning, relational reinforcement learning, multiple instance learning, and inductive learning, based on observations obtained through active exploration and reactive action execution. This research thrust is related to my prior work on learning multimodal associative models of domain objects (based on visual and verbal cues), which won the Best Paper Award (at FLAIRS 2014), my prior work on learning models of domain objects from appearance-based and contextual visual cues, and my doctoral dissertation on autonomously learning representations of color distributions and illumination changes on mobile robots. For more details, please look at my publications.
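As a toy illustration of the inductive-learning component, the sketch below induces a simple model of an action's preconditions and effects from observed state transitions. The action, literals, and learning rule are illustrative assumptions, not the published algorithms.

```python
# Toy sketch (illustrative assumptions, not the published algorithms): induce a
# simple model of an action's preconditions and effects from observed transitions.

def learn_action_model(transitions):
    """transitions: list of (state_before, state_after) pairs for one action,
    where each state is a set of ground literals (strings)."""
    preconditions = set.intersection(*(pre for pre, _ in transitions))
    add_effects = set.intersection(*((post - pre) for pre, post in transitions))
    del_effects = set.intersection(*((pre - post) for pre, post in transitions))
    return preconditions, add_effects, del_effects

# Two observed executions of a hypothetical "open(door)" action.
observations = [
    ({"closed(door)", "at(robot, door)", "holding(nothing)"},
     {"open(door)", "at(robot, door)", "holding(nothing)"}),
    ({"closed(door)", "at(robot, door)", "light(on)"},
     {"open(door)", "at(robot, door)", "light(on)"}),
]
pre, add, delete = learn_action_model(observations)
print(pre)     # {'closed(door)', 'at(robot, door)'}
print(add)     # {'open(door)'}
print(delete)  # {'closed(door)'}
```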
Dexterous Manipulation
Dexterous manipulation remains an open problem limiting the use of robots in complex real-world domains. Inspired by results in human/animal motor control, we develop algorithms and architectures that seek to address this problem by building on principles of machine learning, control theory, psychology, and cognitive science. One recent architecture enabled a robot manipulator to incrementally and interactively learn forward models in the task space instead of the joint space. The measured error in the predictions of these models is used to revise the models and to vary the impedance (i.e., gain/stiffness) parameters that govern the manipulator's ability to follow a desired motion pattern. The architecture also includes a hybrid force-motion controller to provide compliance in certain direction(s); it has been used to perform challenging continuous-contact tasks. We are also developing architectures for tasks that involve making and breaking contacts, integrated perception and manipulation, and human-robot collaborative control of prosthetic devices. For more information, please look at my publications.
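The numeric sketch below illustrates the core idea of using forward-model prediction error to adapt impedance. The linear model, the update rule, and the direction and scale of the stiffness adaptation are illustrative assumptions rather than the controller reported in our papers.

```python
# Numeric sketch (illustrative assumptions, not the reported controller) of
# using the prediction error of a task-space forward model to (i) revise the
# model and (ii) adapt the impedance (stiffness) gain used for tracking.
import numpy as np

class TaskSpaceForwardModel:
    """Toy linear forward model: x_next = x + W u, revised online from error."""
    def __init__(self, dim, lr=0.1):
        self.W = np.zeros((dim, dim))
        self.lr = lr

    def predict(self, x, u):
        return x + self.W @ u

    def update(self, x, u, x_observed):
        error = x_observed - self.predict(x, u)
        self.W += self.lr * np.outer(error, u)   # simple gradient-style revision
        return float(np.linalg.norm(error))      # prediction error magnitude

def adapt_stiffness(k, prediction_error, k_min=50.0, k_max=500.0, rate=200.0, tol=0.05):
    """Assumed rule: stiffen when the model is inaccurate (to track the desired
    motion more tightly) and relax toward compliance as predictions improve."""
    return float(np.clip(k + rate * (prediction_error - tol), k_min, k_max))

# One illustrative control step with made-up measurements.
model, k = TaskSpaceForwardModel(dim=3), 100.0
x, u = np.zeros(3), np.array([0.1, 0.0, 0.0])
x_observed = np.array([0.12, 0.0, 0.0])
k = adapt_stiffness(k, model.update(x, u, x_observed))
print(k)
```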
Applications of Machine Learning
Research challenges in robotics are highly interdisciplinary in nature, drawing upon developments in fields such as machine learning, control theory, psychology, and cognitive science. We design and adapt machine learning algorithms to address estimation and prediction challenges in domains such as agricultural irrigation management, climate informatics, and short-term traffic prediction. For instance, I have adapted non-parametric Bayesian algorithms to estimate crop reference evapotranspiration and facilitate accurate irrigation management. I have also designed frameworks for estimating extreme weather events such as ice storms, and for downscaling global climate models to provide accurate regional climate projections. In the past, I have also contributed to the adaptation of stochastic sampling algorithms to address software testing challenges; also see the slides corresponding to a tutorial on Bayesian methods for software engineering. For more information, please look at my publications.
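For a flavour of the non-parametric Bayesian estimation mentioned above, the snippet below uses Gaussian process regression as a stand-in. The features, data values, and kernel choice are made up for illustration and are not the published irrigation-management models.

```python
# Illustrative stand-in (not the published models): Gaussian process regression
# for estimating reference evapotranspiration (ET0) from daily weather features.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Hypothetical features: [temperature (C), relative humidity (%), wind speed (m/s), solar radiation (MJ/m^2)]
X = np.array([[28.0, 40.0, 2.1, 22.5],
              [31.5, 35.0, 3.0, 25.0],
              [24.0, 55.0, 1.5, 18.0],
              [35.0, 25.0, 4.2, 27.5]])
y = np.array([5.1, 6.3, 3.8, 7.4])   # made-up ET0 targets (mm/day)

gp = GaussianProcessRegressor(kernel=RBF(length_scale=10.0) + WhiteKernel(), normalize_y=True)
gp.fit(X, y)

mean, std = gp.predict(np.array([[30.0, 38.0, 2.5, 24.0]]), return_std=True)
print(f"estimated ET0: {mean[0]:.2f} +/- {std[0]:.2f} mm/day")
```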