Ron Petrick's Webpages


Representing an Agent's Incomplete Knowledge for Planning, R. Petrick, Master's Thesis, Department of Computer Science, University of Waterloo, 1998.

[ bib | pdf | slides ]


In many domains, an agent may not have complete knowledge of its environment. Before acting to achieve a goal in such a domain, an agent may have to update its model of the world by sensing to gather more information. Manipulating the world through action may also affect the agent's knowledge. Therefore, being able to represent an agent's knowledge, and to update that knowledge as it changes, is essential for both generating and executing plans.

During the process of planning, an agent must reason about the effects of the actions needed to achieve a desired goal. However, an agent must distinguish between reasoning about a sequence of actions at plan time and actually executing that sequence. The complication is that the effects of actions at plan time are often quite different from the effects of actions at execution time. Moreover, at plan time the agent must know that the plan will achieve its goals, while at execution time it must have sufficient knowledge at each step of the plan to execute it.
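The plan time versus execution time distinction can be illustrated with a small sketch. The action name, database layout, and `phase` parameter below are illustrative assumptions, not the thesis's actual action representation: a sensing action's plan-time effect records only that the agent *will* know a truth value, while its execution-time effect records the value actually observed.

```python
# Hypothetical sketch: one sensing action with different plan-time
# and execution-time effects. All names here are illustrative.

def apply_sense_door(knowledge, phase, world=None):
    """Apply a 'sense whether the door is open' action to a knowledge state."""
    if phase == "plan":
        # Plan-time effect: the agent knows it will learn the truth value
        # of door_open, but not which value that will be.
        knowledge["know_whether"].add("door_open")
    elif phase == "execute":
        # Execution-time effect: the actual sensor reading becomes known.
        knowledge["known"]["door_open"] = world["door_open"]
    return knowledge

# At plan time, only "know-whether" information is gained.
k = {"known": {}, "know_whether": set()}
apply_sense_door(k, "plan")
print("door_open" in k["know_whether"])  # True: will be known at run time
print("door_open" in k["known"])         # False: value unknown at plan time

# At execution time, the real value is observed and stored.
apply_sense_door(k, "execute", world={"door_open": True})
print(k["known"]["door_open"])           # True: actual sensed value
```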

This thesis describes a formalism for modelling an agent's incomplete knowledge. The standard STRIPS representation is extended to allow an agent's knowledge to be represented by a collection of databases, with each database storing a specific type of knowledge. This extension is necessary in order to distinguish between the types of knowledge that an agent may gain during the process of planning and the knowledge that it may obtain during execution. The agent's knowledge state is formally defined by providing a translation of the database contents to formulas of a modal logic. Algorithms are given for updating an agent's knowledge while preserving the conditions necessary for maintaining a consistent knowledge state. A sound but incomplete inference procedure is described that allows queries of the agent's knowledge to be made, so that knowledge conditions, such as those required to select appropriate actions during planning, can be satisfied.

This research also addresses how an agent's actions can be represented, making explicit the separation between plan time and execution time effects. The result is that an agent's knowledge can be projected over a sequence of either planned or executed actions. The notion of an exception is developed as a means of managing the interaction between knowledge of different types. Examples are given that illustrate reasoning about sequences of actions both at plan time and at execution time. The examples also demonstrate how the knowledge representation, action representation, inference procedure, and database update rules interact.

In developing a STRIPS-like representation for incomplete knowledge that clearly separates plan time and execution time considerations, this thesis provides the foundation for designing planners that can model knowledge-producing actions and build contingent plans based on information that will only become known during execution.
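The flavour of the multi-database knowledge state and its sound-but-incomplete query procedure can be sketched as follows. The database names, consistency check, and query methods are assumptions made for illustration only; the thesis's actual databases, modal-logic translation, and inference procedure are defined formally in the text.

```python
# Hypothetical sketch: a knowledge state split across typed databases,
# in the spirit of a multi-database STRIPS extension. The two databases
# and the query methods below are illustrative assumptions.

class KnowledgeState:
    def __init__(self):
        self.known_facts = set()   # literals the agent knows to hold
        self.know_whether = set()  # atoms whose truth value will be known at run time

    def add_fact(self, literal):
        # Update rule preserving consistency: a literal and its negation
        # cannot both be stored in the known-facts database.
        neg = literal[1:] if literal.startswith("-") else "-" + literal
        if neg in self.known_facts:
            raise ValueError(f"inconsistent knowledge: {literal} vs {neg}")
        self.known_facts.add(literal)

    def knows(self, literal):
        # Sound but incomplete query: answers True only for explicitly
        # stored literals, so it never asserts false knowledge but may
        # miss knowledge that is merely entailed.
        return literal in self.known_facts

    def knows_whether(self, atom):
        # True if the truth value is already known, or will become known
        # through sensing during execution.
        return (atom in self.known_facts
                or "-" + atom in self.known_facts
                or atom in self.know_whether)

ks = KnowledgeState()
ks.add_fact("door_locked")
ks.know_whether.add("light_on")
print(ks.knows("door_locked"))       # True: explicitly known
print(ks.knows("light_on"))          # False: value unknown at plan time
print(ks.knows_whether("light_on"))  # True: sensing will reveal it
```

Keeping the databases separate is what lets a query distinguish "the agent knows P" from "the agent will know whether P", which is exactly the distinction a contingent planner needs when branching on run-time information.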