The Ontology of Tasks and Methods

B. Chandrasekaran1, J. R. Josephson1 and V. Richard Benjamins2

1Laboratory for AI Research, The Ohio State University, Columbus, OH 43210

2Dept. of Social Science Informatics (SWI), University of Amsterdam, Roetersstraat 15, 1018 WB Amsterdam, The Netherlands

Much of the work on ontologies in AI has focused on describing some aspect of reality: objects, relations, states of affairs, events, and processes in the world. A goal is to make knowledge sharable, by encoding domain knowledge using a standard vocabulary based on the ontology. A parallel attempt at identifying the ontology of problem-solving knowledge would make it possible to share problem-solving methods. For example, when one is dealing with a type of problem known as abductive inference, the following are some of the terms that recur in the representation of problem-solving methods: hypotheses, explanatory coverage, evidence, degree of confidence, plausibility, composite hypothesis, etc. Method ontology, in good part, is goal- and method-specific. ``Generic Tasks,'' ``Heuristic Classification,'' ``Task-specific Architectures,'' ``Task-method Structures,'' ``Inference Structures'' and ``Task Structures'' are representative bodies of work in the knowledge-systems area that have focused on problem-solving methods. However, connections have not been made to work that is explicitly concerned with domain ontologies. Making such connections is the goal of this paper.

1 Ontologies as Content Theories

In philosophy, ontology is the study of the kinds of things that exist. In AI, the term has largely come to mean one of two related things: first, a representation vocabulary, often specialized to some domain or subject matter --more precisely, the conceptualizations that the terms in the vocabulary are intended to capture-- and second, a body of knowledge describing some domain using such a vocabulary.

In this paper, we use the term ontology in the first sense, except that we broaden the notion of knowledge to include knowledge about problem solving.

The current interest in ontologies is really the latest version of our field's alternation of focus between content theories and mechanism theories. Sometimes everyone gets excited by some mechanisms, be they rule systems, frame languages, connectionist systems, fuzzy logic, etc. The mechanisms are proposed as the secret of making intelligent machines. At other times, there is a realization that, however great the mechanism, it cannot do much without a good content theory of the domain on which to set the mechanism to work. Moreover, it is often realized that once a good content theory is available, many different mechanisms might be used to implement effective systems, all using essentially the same content (Chandrasekaran, 1994).

In AI, there have been several attempts to characterize the essence of what it means to have a content theory. McCarthy and Hayes' Epistemic versus Heuristic distinction (McCarthy & Hayes, 1969), Marr's three levels (Marr, 1982) --the information-processing-strategy level, the algorithms-and-data-structures level, and the physical-mechanisms level-- and Newell's Knowledge Level versus Symbol Level (Newell, 1982) all grapple in their own ways with characterizing content. Ontologies are quintessentially content theories.

1.1 Why Are Ontologies Important?

Ontological analysis clarifies the structure of knowledge. The first reason ontologies are important is that they form the heart of any system of knowledge representation. If we do not have the conceptualizations that underlie knowledge, then we do not have a vocabulary for representing knowledge. Thus the first step in knowledge representation is performing a correct ontological analysis of some field of knowledge; incorrect analyses lead to incoherent knowledge bases. A good example of the need for correct analysis comes from the field of databases (Wieringa & de Jonge, 1995). Consider a domain in which there are people, some of whom are students, some professors, some other types of employees, some females and some males. For quite some time, a simple ontology was used in which the classes of students, employees, professors, males and females were represented as ``types-of'' humans. Soon this caused problems, because students can also be employees at times, and can also stop being students. Databases built using the simple ontology could not make simple inferences that one would expect to be able to make given the knowledge base. Further ontological analysis showed that ``students,'' ``employees,'' etc. are not ``types-of'' humans, but rather ``roles'' that humans can play, unlike a term such as ``female,'' which is in fact a ``type-of'' humans. Clarifying the ontology of this data domain made it possible to avoid a number of difficulties in reasoning about the data. Analysis of this sort, which reveals the subtle connections between terms, can often be quite challenging to perform.
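The contrast between the two analyses can be sketched in code. The following is a minimal illustration (all class and attribute names are ours, not from the database literature cited above): the ``type-of'' modeling freezes a person's category at creation time, while the ``role'' modeling lets one person acquire, hold simultaneously, and shed roles.

```python
# "Type-of" modeling: being a student is baked into the object's type,
# so a student-employee, or a student who stops being one, cannot be expressed.
class Human:
    pass

class Student(Human):
    pass

# "Role" modeling: sex really is a type-of property, while student/employee
# are mutable, possibly overlapping roles the same person can play.
class Person:
    def __init__(self, name, sex):
        self.name = name
        self.sex = sex
        self.roles = set()

    def take_role(self, role):
        self.roles.add(role)

    def drop_role(self, role):
        self.roles.discard(role)

p = Person("Alice", "female")
p.take_role("student")
p.take_role("employee")   # a student can simultaneously be an employee
p.drop_role("student")    # ...and can stop being a student
```

The inferences that failed under the first modeling (a student who is also an employee) fall out trivially under the second.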

Ontologies enable knowledge-sharing. The second reason why ontologies are important is that they provide a means for sharing knowledge. We have just described how demanding it can be to come up with the appropriate conceptualizations for representing some area of knowledge. Suppose we do such an analysis and arrive at a satisfactory set of conceptualizations, and terms standing for them, for some area of knowledge, say, the domain of ``electronic devices.'' The resulting ontology would likely include terms such as ``transistors'' and ``diodes,'' more general terms such as ``functions,'' ``causal processes,'' and ``modes,'' and also terms in the electrical domain that would be necessary to represent the behavior of these devices. It is important to note that we are not talking about terms in one natural language or another: the ontology --the basic concepts involved and their relations-- is intrinsic to the domain. If we can come up with a set of terms to stand for the conceptualizations, and a syntax for encoding the conceptualizations and the relations among them, then the efforts that went into analysis can be encoded into an ontology. This ontology can be shared with others who have similar needs for knowledge representation in that domain, and a significant amount of labor in knowledge analysis won't have to be replicated. These ontologies can form the basis for domain-specific knowledge representation languages. In contrast to the previous generation of knowledge-representation languages, such as KL-One, these domain-specific languages may be termed ``content-rich.'' That is, they have a large number of terms that reflect a complex content theory of the domain.

Given such an ontology, specific knowledge bases describing specific situations can be built. For example, each manufacturer of electronic devices can build catalogs that describe its products. With the shared vocabulary and syntax, such catalogs can be shared easily, and used in automated design systems. This kind of sharing vastly increases the potential for reuse of knowledge.

We will now briefly review the basics of work on ontology to set the stage for discussing a specific type of ontology, that of problem solving knowledge.

1.2 Ontology: Terms for Describing the World

An ontology helps us to describe facts, beliefs, hypotheses, and predictions about the world in general, or in a limited domain, if that is what is needed. Constructing ontologies for representing factual knowledge is still an ongoing research enterprise. Ontologies range in abstraction from very general terms that lie at the heart of our understanding and descriptive capabilities in all domains, to terms that are restricted to specific domains of knowledge. Basic phenomena of space, time, parts and subparts apply to all domains, while the concept of malfunction applies to engineered or biological domains, and, even more specifically, the concept of hepatitis applies to the medical domain. The example also suggests that there is no sharp line of abstraction that separates the general from the domain-specific. Domains come in differing degrees of specificity. Ontologies required to describe knowledge of some domain may require, in addition to domain-specific terms, terms from higher levels of abstraction. Terms at very general levels of description are often said to be part of the so-called ``upper ontology,'' denoting the relative level of description of these terms. There are many open research issues about the correct ways of analyzing knowledge at this level, and disagreements and open problems abound. To give some idea of the issues involved, here is a quote from a recent call for papers: (1)

``On the one hand there are entities, such as processes and events, which have temporal parts... On the other hand there are entities, such as material objects, which are always present in their entirety at any time at which they exist at all. The categorial distinction between entities which do, and entities which do not have temporal parts is grounded in common sense. Yet various philosophers have been inclined to oppose it. Some ... have defended an ontology consisting exclusively of things with no temporal parts. Whiteheadians have favored ontologies including only temporally extended processes. Quine has endorsed a four-dimensional ontology in which the distinction between objects and processes vanishes and every entity comprises simply the content of some arbitrarily demarcated portion of space-time. One further option, embraced by philosophers such as David Lewis, accepts the opposition between objects and processes, while still finding a way to allow that all entities have both spatial and temporal parts.''

Sowa (Sowa, 1997), CYC (Lenat & Guha, 1990) and Guarino (Guarino, 1995) represent AI research efforts that have proposed alternative upper ontologies. As a practical matter, there is agreement that there are objects in the world, that they have properties that can take values, that the objects may exist in various relations with each other, that the properties and relations may change over time, that there are events that occur at different time instants, that there are processes in which objects participate and that occur over time, that the world and its objects can be in different states, that events may cause other events as effects, that objects may have parts, and so on. Further, perhaps not as basic facts of the world but as ways of organizing them, there are the notions of classes, members, and subclasses, where ``classhood'' arises out of shared properties. Thus, Is-A relations indicating subclass relations are fundamental for ontology representations.

The representational repertoire of objects, relations, states, events and processes does not say anything about what classes of these entities exist. These are left as commitments to be made by the person modeling the domain of interest. Even at very general levels such commitments already appear. Many ontologies agree on the root class ``thing'' or ``entity,'' but already at the next, more specific level they start to diverge; this is nicely illustrated by the slightly different top-level taxonomies of existing ontology projects such as CYC, Wordnet, Generalized Upper Model, Gensim, etc. (see Fridman-Noy & Hafner, 1997 for an overview). The more specific the domain one wants to model, the more ontological commitments have to be made. For example, someone, faced with expressing his knowledge of a certain part of the world, might assert that there are certain categories of things called animals, minerals and plants; that Has-Life(x) and Contains-carbon(x) are relevant properties for the objects; and that Larger-than(x,y) and Can-eat(x,y) are two of the relations that may be true or false between any two objects. These commitments are not arbitrary --any old declaration of classes and relations won't do. For them to be useful, such commitments should reflect some underlying reality, i.e., should reflect real existence, hence the term ``ontology'' for such commitments.
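The example commitments above can be made concrete in a small sketch (the encoding and all identifiers are ours, purely illustrative): classes, properties, and relations are declared up front, and any fact asserted about the domain is checked against those declarations.

```python
# Declared ontological commitments for the toy domain in the text.
classes = {"animal", "mineral", "plant"}
properties = {"has_life", "contains_carbon"}
relations = {"larger_than", "can_eat"}

# Facts are tuples whose first element names a declared property, relation,
# or the special "class" assertion placing an object in a declared class.
facts = {
    ("class", "lion", "animal"),
    ("class", "fern", "plant"),
    ("has_life", "lion", True),
    ("larger_than", "lion", "fern"),
}

def well_formed(fact):
    """A fact may use only the declared classes, properties, and relations."""
    kind = fact[0]
    if kind == "class":
        return fact[2] in classes
    return kind in properties or kind in relations

ok = all(well_formed(f) for f in facts)
```

A fact using an undeclared predicate, such as `("color", "lion", "tawny")`, is rejected: the modeler made no commitment to a color vocabulary.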

As mentioned, there is no sharp division between domain-independent and domain-specific ontologies in representing knowledge. For example, the terms object, physical object, device, engine, and diesel engine, all describe objects, but in an order of increasing domain-specificity. Similarly, terms for relations between objects can span a range as well: e.g., connected(component1, component2) relation can be specialized as electrically-connected, physically-attached, magnetically-connected and so on. Ontologies are terms that are needed to describe the world, but an ontology for representing domain facts can of course be used to represent non-facts, hopes, expectations, etc. as well.

Two Levels of Ontology. Research on ontologies generally proceeds by asking the question, ``What is the ontology of P?'' where P is some type of entity, process or phenomenon. P may be something very general and ubiquitous such as a ``causal process,'' or ``liquids.'' Or P can be of narrower scope such as ``device,'' or ``diseases of biological organisms.'' Such an analysis is usually conducted at two levels, and correspondingly two levels of ontology for P can be distinguished (this distinction is reminiscent of the distinction between core and peripheral ontologies in van Heijst et al., 1997).

At the first level, one identifies the basic conceptualizations needed to talk about all instances of P. For example, the first-level ontology of ``causal process'' would include terms such as ``time instants,'' ``system,'' ``system properties,'' ``system states,'' ``causes that change states,'' ``effects (also states),'' and ``causal relations.'' All these terms and the corresponding conceptualizations would constitute a first-level ontology of ``causal processes.'' We can't talk about causal processes without this vocabulary. At the second level, one would identify and name different types of P, and relate the typology to additional constraints on, or types of, the concepts in the first-level ontology. For the causal process example, we may identify two types of causal processes, ``discrete causal processes'' and ``continuous causal processes,'' and define them as processes in which the time instants are discrete or continuous, respectively. These terms, and the corresponding conceptualizations, are also parts of the ontology of the phenomenon being analyzed. Second-level ontology is essentially open-ended: that is, new types may be identified at any time.
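The relationship between the two levels can be sketched as data (the encoding is ours, for illustration only): the first level supplies the vocabulary every causal process needs, and each second-level type is defined by a constraint on one of those first-level concepts.

```python
# First-level ontology of "causal process": the concepts without which
# one cannot talk about causal processes at all.
first_level = {"time instant", "system", "system property",
               "system state", "cause", "effect", "causal relation"}

# Second-level ontology: named types of causal process, each defined by
# a constraint on a first-level concept (here, on "time instant").
second_level = {
    "discrete causal process":   {"time instant": "discrete"},
    "continuous causal process": {"time instant": "continuous"},
}

def grounded(process_type):
    """Every second-level constraint must mention a first-level concept."""
    return set(second_level[process_type]) <= first_level

all_grounded = all(grounded(t) for t in second_level)
```

The open-endedness of the second level corresponds to the fact that new entries can be added to `second_level` at any time, while the first-level vocabulary stays fixed.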

How task-dependent are ontologies? What kinds of things actually exist should not depend on what we want to do with that knowledge. In that sense, ontologies cannot be task-dependent. On the other hand, exactly what aspects of reality in some domain get identified and written down in a particular ontology depends on what tasks the ontology is being built for. An ontology of the domain of fruits would focus on some aspects of reality if it is being written for selecting pesticides, and on different aspects if it is being written to help chefs select fruits for cooking (cf. the interaction problem, Bylander & Chandrasekaran, 1988). As we will see, assumptions or requirements of problem-solving methods capture explicitly the way in which ontologies are task-dependent (Fensel & Benjamins, 1996). Assumptions are therefore a key factor in practical sharing of ontologies.

Technology for Ontology Sharing. There have been several recent attempts at creating engineering frameworks in which to construct ontologies. Neches et al. (Neches et al., 1991) describe an enabling technology called KIF, intended to facilitate the expression of domain factual knowledge in a Predicate Calculus-like formalism. A language called Ontolingua (Gruber, 1993) has been proposed to aid in the construction of portable ontologies. In Europe, the CommonKADS project has taken a similar approach to modeling domain knowledge (Schreiber et al., 1994). These languages use various versions of Predicate Calculus as the basic formalism. Predicate Calculus supports the ontology of objects, properties, relations, and classes. Variations such as Situational Calculus can be used to introduce time so that states, events and processes can be represented. If the idea of knowledge is extended to include images and other sense modalities, radically different languages may be needed. For now, Predicate Calculus provides a good starting point for ontology-sharing technologies.

Using a logical notation for writing and sharing ontologies does not imply any commitment to implementing the knowledge system in that or a related logic. One is simply taking a Knowledge Level stance in describing the knowledge system, whatever the means of implementation. In this view, we can ask of any intelligent system, even one implemented in, say, neural networks, ``What does the system know?''

We are now ready to move to discussing the ontology of a specific phenomenon, that of problem-solving. We think that almost all of the work on ontologies, until recently (Fensel et al., 1997; Mizoguchi et al., 1995), has been focused on one dimension of knowledge content. In order to explain this claim, we will need to identify different dimensions to the study of ontologies. We turn to this task next.

1.3 Dimensions for Ontology Specification in Knowledge Systems

In building a problem-solver, we need two types of knowledge:

  1. Domain factual knowledge: Knowledge about the objective realities in the domain of interest (Objects, relations, events, states, causal relations, etc. that obtain in some domain)
  2. Problem-solving knowledge: Knowledge about how to use the domain knowledge to achieve various goals. This knowledge is often in the form of a problem-solving method (PSM) that can help achieve a given type of problem-solving goal in different domains.

Early research in KBS mixed together both factual and problem-solving knowledge into highly domain-specific rules and called all of it ``domain knowledge.'' As research progressed, however, it became clear that there were systematic regularities in how domain knowledge --we'll use ``domain knowledge'' as a shorthand for ``domain factual knowledge''-- was used for different goals, and that this knowledge could be abstracted and reused.

The domain-knowledge dimension has so far been the focus of most AI investigations of ontologies. In our view, the historical reasons for this are the following. In AI, there have been two distinct sets of practical applications of knowledge representation. One of them is obviously the area that started out as Expert Systems, but has evolved into a somewhat broader area called Knowledge-Based Problem-Solving (KBPS). Another area within AI that has also needed knowledge representation is natural language understanding (NLU). Ontological analysis is typically required to identify appropriate semantic structures for an understanding program to map utterances to. Knowledge also plays a crucial role in disambiguation. Database and information systems communities have also recently begun to take interest in ontology issues. In practice, much of the work on ontologies, and in knowledge representation in general, has been driven largely by the needs of these communities, and these needs have mostly to do with the structure of what we have called factual knowledge. NLU and database systems typically do not do much problem-solving of the sort KBPS systems do.

KBPS systems, on the other hand, clearly need to be concerned with uses of knowledge for complex chains of inference-making. Thus the field of KBPS quickly came to the realization that in addition to factual knowledge there is also knowledge about how to use the knowledge, i.e., how to make inferences to solve the problem. In fact, the so-called second-generation research in knowledge systems was fueled by this emphasis on methods appropriate for different types of problems. Quite a bit of the work on knowledge representation that has gone on within KBPS is not even known to the general knowledge representation community.

The dimension of representing problem-solving knowledge will be the focus of this paper. We will start by analyzing the ontology of a problem-solver, and note the role of problem-solving knowledge in it. We will ask: What is problem-solving knowledge made of? What are methods? What specific methods are there for what kinds of problems? What is the relationship between method ontologies and factual knowledge ontologies? We will also discuss sharing of and using method knowledge, since sharing is one of the major motivations of this line of research.

2 Ontology of a Problem Solver

The major elements of a first-level ontology of a problem solver are the following:

  1. A problem-solving goal
  2. Domain data describing the problem-instance
  3. Problem-solving state
  4. Problem-solving knowledge (PSK)
  5. Domain factual knowledge (DFK)

The underlying picture is that the problem-solver's state changes as a result of applying problem-solving knowledge to domain knowledge and problem data. Eventually, a part of its state description might contain a solution to the goal, at which point the problem-solving process stops. It might also stop when, given its knowledge and data, no state change is possible. This description most likely does not capture all forms of what one might consider problem-solving. For example, distributed problem solving, where a number of independent agents collectively solve problems, would require changes to our description. However, for the points we wish to make, this characterization is a good starting point.
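The state-change picture just described can be sketched as a control loop (a minimal sketch under our own assumptions; the function names, the dictionary encoding of state, and the toy diagnostic content are all illustrative): units of problem-solving knowledge are applied to the state, domain knowledge, and data until a solution appears or no unit can change the state.

```python
# A PSK unit maps (state, domain knowledge, data) to a state change, or None.
def solve(state, psk_units, domain_knowledge, data):
    while "solution" not in state:
        for unit in psk_units:
            change = unit(state, domain_knowledge, data)
            if change:
                state.update(change)
                break
        else:
            return state  # quiescence: no unit applies, so stop
    return state

# A toy PSK unit: if the observation linked to the goal hypothesis is
# abnormal in the data, record that hypothesis as the solution.
def confirm_if_observed(state, dk, data):
    hyp = state.get("goal")
    if hyp and data.get(dk[hyp]) == "abnormal":
        return {"solution": hyp}
    return None

result = solve({"goal": "pump-failure"},
               [confirm_if_observed],
               {"pump-failure": "pressure"},   # hypothesis -> observation
               {"pressure": "abnormal"})       # problem-instance data
```

The two stopping conditions in the text correspond to the two exits of the loop: a solution entering the state, or quiescence when no unit fires.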

2.1 Problem-Solving Goals

Goal descriptions use ``attitude'' terms along with (external) world descriptions. By attitude terms, we mean terms such as ``desired,'' ``to be avoided,'' ``explain,'' ``assign likelihood to.'' These terms take as arguments world state, event, object configuration, or process descriptions. Goal descriptions also include either explicitly or implicitly a description of the form of the solution. For example, a diagnostic goal might be stated as: Explain (attitude ontology) abnormal observations of a system (domain knowledge ontology); solution form: Set of Malfunctions caused Observations. The first-level ontology for goals is simply attitude terms plus the DFK ontology.

Examples of the first-level ontology for goals also help us introduce second-level ontology terms for goals. As mentioned, diagnostic systems have the goal of explaining certain world states. Planning systems have goals to generate actions needed to achieve certain desired world states or avoid undesired ones. Design systems have the goal to synthesize object configurations in the world that would have desired behaviors. Prediction or simulation systems have the goal of predicting future world states, and so on. Thus explain, diagnose, design, plan, predict, etc., are some of the common terms in the second-level ontology for problem-solving goals. Classification goals are very common in knowledge systems as well. Many of these second-level elements of the goal ontology were identified during the research on task-oriented approaches. In fact, the term ``task'' was coined to refer to goal types of certain generality (2). Tasks come in a range of generality: ``diagnose medical problems'' is more specific than ``diagnose systems,'' but is more general than ``diagnose liver illnesses.'' ``Explain observations'' is more general than ``diagnose.'' ``Explain observations'' can cover many other task types in addition to diagnosis.

2.2 Data Describing the Problem Instance

A problem instance is also described in terms of the domain factual ontology. In diagnosis, it is a set of observations. In prediction, it is a description of actions and conditions that obtain in some world. In design, it is a set of constraints and specifications. A problem instance for a logistics planner might be a set of specific supply items to be delivered to specific locations under specific weather and equipment-availability conditions.

The ontology of problem instance data is the same as that of domain factual knowledge. The second-level ontology for problem instance data parallels that of the goal. Data for diagnostic goals are variously called normal and abnormal observations, and symptoms; for design goals, they are specifications, constraints, and functions; for prediction goals, they are initial conditions, and actions, and so on.

2.3 Problem States

The problem-solver creates and changes a number of internal objects during the process of problem-solving. The problem state is the set of state variables representing these internal objects. Problem state includes information about current goals and subgoals. It would also include all knowledge inferred during problem-solving: e.g., elements of candidate solutions, their plausibility values, rejected solutions and reasons for them. In the case of diagnosis, problem state would contain information such as: current diagnostic hypotheses, observations explained by hypothesis H, the best hypotheses so far. In the case of design, problem state would contain information such as: partial design, design candidate, specifications satisfied by the design candidate, best candidate so far. Thus task types determine types of problem state variables. Active problem-solving goals and subgoals are a distinguished part of the problem state description. As problem solving proceeds, some of the subgoals are solved, new subgoals created, some goals abandoned, and so on.
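The problem-state variables listed above for the diagnostic case can be gathered into a record. The following is a hypothetical sketch (all field names are ours, chosen to mirror the description in the text, not taken from any cited system):

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class DiagnosticProblemState:
    """Internal objects a diagnostic problem-solver creates and changes."""
    goals: list = field(default_factory=list)         # active goals/subgoals
    hypotheses: list = field(default_factory=list)    # current diagnostic candidates
    explained: dict = field(default_factory=dict)     # hypothesis -> observations it explains
    plausibility: dict = field(default_factory=dict)  # hypothesis -> confidence value
    rejected: dict = field(default_factory=dict)      # hypothesis -> reason for rejection
    best_so_far: Optional[str] = None

state = DiagnosticProblemState(goals=["explain abnormal pressure"])
state.hypotheses.append("pump-failure")
state.explained["pump-failure"] = ["abnormal pressure"]
state.plausibility["pump-failure"] = 0.8
state.best_so_far = "pump-failure"
```

A design task would determine a different set of fields (partial design, design candidate, specifications satisfied, best candidate so far), illustrating the point that task types determine the types of problem-state variables.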

2.4 Problem-Solving Knowledge

The basic unit of problem solving knowledge (PSK) is a mapping of the following form:

<conditions on the problem state (including goals)> and
<conditions on domain knowledge> and
<conditions on data describing the problem instance>
  --> changes to <problem state (including goal components)>
      and/or requests for <data>

The above is not intended to be seen as a rule (which is an implementation formalism in KBS), but as a Knowledge Level description of a basic unit of problem-solving. It describes what the first-level ontology of an inference is. It says that problem-solving behavior is responsive to the current state of problem solving (including goals), and uses domain knowledge and problem data to make changes to the problem state, achieve goals or set up subgoals, and obtain additional data. For example, a piece of diagnostic problem-solving knowledge might be (3):

``If the problem state includes the goal Evaluate hypothesis H, and if domain knowledge indicates that H can be evaluated as confirmed if the observations O_1, ..., O_n have the values v_1, ..., v_n respectively, and if O_1, ..., O_n do have values v_1, ..., v_n in the data describing the problem instance, then evaluate H as confirmed.''

Problem-solving knowledge may be indexed by the goal it serves (in the example, Evaluate hypothesis). This facilitates sharing of this type of knowledge. It can be applied to any domain in which we need to assess hypotheses --engine or medical diagnosis, for example-- as long as we have the domain knowledge corresponding to the ``observations --> hypothesis'' part of the inference. This is a key part of our analysis: the above is not a rule such as the rules of Mycin, in which domain factual knowledge and inference knowledge were combined. It has an abstract character as problem-solving knowledge.
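The abstract character of the quoted PSK unit can be made concrete in a short sketch (the encoding and all domain content are our own illustrations): the unit itself is domain-neutral, and only the observations-to-hypothesis table is domain knowledge, so the same unit serves both medical and engine diagnosis.

```python
# Domain-neutral PSK unit for the goal "Evaluate hypothesis H".
def evaluate_hypothesis(hypothesis, domain_knowledge, data):
    """Confirm H if every observation the domain knowledge names for H
    has the required value in the problem-instance data."""
    required = domain_knowledge[hypothesis]  # {observation: required value}
    if all(data.get(obs) == val for obs, val in required.items()):
        return "confirmed"
    return "not confirmed"

# Two domains' factual knowledge in the required "observations -> hypothesis" form
# (the specific associations are invented for the example).
medical_dk = {"hepatitis": {"jaundice": True, "liver-enzymes": "elevated"}}
engine_dk = {"diode-failure": {"v1": 0}}

m = evaluate_hypothesis("hepatitis", medical_dk,
                        {"jaundice": True, "liver-enzymes": "elevated"})
e = evaluate_hypothesis("diode-failure", engine_dk, {"v1": 5})
```

Contrast this with a Mycin-style rule, where the jaundice test and the confirmation logic would be fused into one domain-specific production; here the inference knowledge is shareable and only the tables change.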

PSK units may come at different degrees of abstraction, based on how abstract the indexing goals are. The goal may range in abstraction: for example, from ``Establish hypothesis,'' through ``Establish diagnostic hypothesis,'' to ``Establish device malfunction,'' to ``Establish diode failure.'' Actual implementations in specific domains may combine problem-solving knowledge and domain knowledge, the way early rule-based systems did. For example, a system may have a rule of the form:

``If the goal is to establish diode malfunction, then if voltage v_1 = 0, then confirm diode malfunction.''

This unit of problem-solving knowledge is harder to share across domains, even though for this specific domain it may have exactly the same result as the application of the more abstract PSK unit.

2.5 Domain Knowledge

We have already discussed the issues in describing factual knowledge of the world: objects, properties, relations, classes and subclasses, states, processes, events, parts, etc. are some of the elements in that ontology. We also indicated that the ontology for domain knowledge is determined by the needs of the goal and the problem-solving knowledge. A second-level ontology for diagnostic systems would contain terms such as device, event, component, component connection, function, malfunction, symptom, and normal/abnormal behavior, and relational knowledge of the sort Can-cause(malfunction, {observation | malfunction}). The domain knowledge might also have process knowledge in the form of causal processes that realize the function in the device. Similar analyses can be made for other tasks. The second-level ontologies of domain knowledge for design and diagnosis have much in common: the language of devices, components and causal relations is common to both tasks.
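The Can-cause relational knowledge mentioned above can be sketched as follows (a minimal illustration; the encoding and all malfunction and observation names are invented for the example). Note that, per the schema, a malfunction may cause either an observation or another malfunction, so causes can be traced through chains.

```python
# Can-cause(malfunction, {observation | malfunction}) as an adjacency table.
can_cause = {
    "pump-failure": ["low-pressure", "valve-stress"],
    "valve-stress": ["leak"],
    "sensor-drift": ["low-pressure"],
}

def possible_causes(finding):
    """Malfunctions that can cause a finding, directly or through a chain."""
    direct = [m for m, effects in can_cause.items() if finding in effects]
    return set(direct) | {c for m in direct for c in possible_causes(m)}
```

A diagnostic method would consult exactly this kind of relational knowledge when generating hypotheses for an abnormal observation.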

2.6 Relations Between Different Elements of the Problem-Solving Ontology

Elements of a problem-solver use domain knowledge of specific types, and place mutual ontological constraints on each other. McDermott coined the term knowledge roles for the ways in which problem-solving knowledge requires domain knowledge of certain types. CommonKADS work on inference structures --and subsequent work built on it (Aben, 1995; Coelho & Lapalme, 1996)-- tries to make explicit the relations between the different elements of problem-solving knowledge.

Let us take an example that Coelho and Lapalme use: Select_Parameter task (which, to look ahead somewhat, is a subtask in the Propose-and-Revise method). Select is an attitude term. Parameters are properties of some objects in the domain. Using our notation, this unit of problem-solving knowledge may be described as:

(Sub)Goal: Select value for parameter ?p1
Condition on Problem Instance Data: the value of parameter ?p2 is available
Condition on Problem State: the problem state includes a parameter ?p1 which has not been assigned a value
Condition on Domain Knowledge: domain knowledge has a constraint relating the values of ?p1 and ?p2
Changes to Problem State: change the problem state so that ?p1 has the value allowed by the constraint, and remove the subgoal

It is this close relationship between goal types (tasks), problem-solving knowledge, and domain knowledge that is at the basis of sharing of problem-solving knowledge. Problem-solving knowledge can be reused in a different domain and task by simply applying it to knowledge stated in the appropriate domain ontology. We'll discuss this in some detail later in the context of methods.

3 Problem-Solving Methods

A problem solving method is an organized package of PSK units, indexed by the problem solving goal to which it is applicable. Why would one need to organize inferences in the form of methods? Recall that a PSK unit may set up a subgoal, instead of achieving the goal for which it is invoked. Suppose there is a PSK indexed with goal G, and this PSK sets up a subgoal, G1. Suppose there is another PSK for the goal type G1. One may then use the combination of the two PSK's as a packaged unit, index it by G and invoke it whenever G is encountered. A method may consist of just one PSK unit. In general, however, methods derive their value by their larger granularity, since a complex reasoning strategy may thus be reused.

In the example we just gave, the situation was quite simple: the PSK unit for G set up G1 and the two PSK units then helped achieve G. However, more complex situations will normally arise. G may set up more than one subgoal; subgoal G1 may have more than one PSK unit available to solve it, depending on conditions on data and DFK. Thus a method may include alternate ways of accomplishing some of the subgoals. It would also need to include control knowledge to organize invocation of subgoals, and knowledge about exchanging and integrating information between subgoals. Note that, just like PSK units, a method will either achieve the main goal, or set up one or more subgoals that would need to be solved by additional methods.
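The simple G/G1 packaging just described can be sketched as follows (an illustrative skeleton, not from the paper: the goal names, the dictionary state, and the sequential control are all our own assumptions):

```python
# Two PSK units: the unit for G posts subgoal G1; the unit for G1 achieves it.
def unit_for_G(state):
    state["subgoals"].append("G1")

def unit_for_G1(state):
    state["subgoals"].remove("G1")
    state["achieved"].add("G1")

# The method packages the two units with sequential control knowledge,
# and is indexed by the goal G it serves.
def method_for_G(state):
    unit_for_G(state)    # sets up subgoal G1
    unit_for_G1(state)   # achieves G1
    state["achieved"].add("G")
    return state

methods = {"G": method_for_G}  # method indexed by its goal

state = methods["G"]({"subgoals": [], "achieved": set()})
```

In a realistic method, the control knowledge would also select among alternative units for a subgoal (conditioned on data and domain knowledge) and route information between subgoals, rather than simply sequencing two fixed units.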

How general is the characterization of methods as compositions of problem-solving knowledge units, related to each other by subgoal relations? Again, similar to our point about our model of the problem solver itself, the definition does not cover everything that would intuitively appear to be a method. Our definition of a method is intended to capture the range of methods that a universal subgoaling system such as SOAR Laird et al., 1987 can accomplish. But our goal here is not to account for all methods, but to indicate what a method ontology looks like.

First-Level Ontology for Methods. The first-level ontology of a PSM then is simply the ontology of PSK units plus control knowledge. The method ontology thus includes the goal (also referred to as competence), the goal-subgoal tree it induces, the forms in which it requires data and domain knowledge (i.e. assumptions and requirements of the method), and control knowledge for organizing the invocation of the subgoals (see also Fensel et al., 1997). We have discussed all the above elements earlier, except for control knowledge.

Control Ontology. Control may be explicitly specified using the standard vocabulary of control: sequential control, conditional branching, iteration, recursion, etc. There appears to be no task-specific vocabulary for control. Control may also emerge from an interaction of domain knowledge and the problem state. For example, in hierarchical classification, navigation over a hierarchy may be explicitly programmed. Or, one may simply have the knowledge, ``consider the successor of current concept,'' as in Johnson, 1991. If the domain knowledge has information about successors for the concepts, then the resulting behavior will be hierarchical navigation, without this strategy being explicitly stated. Complex control behaviors may emerge as a result of the interaction between the architecture and the contents of the knowledge base.
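The emergent-control point can be illustrated with a toy knowledge base (the successor relation and the establish test below are invented stand-ins for real domain knowledge): the only strategic knowledge is "consider the successors of an established concept," yet top-down hierarchical navigation results.

```python
# Invented successor knowledge for a small malfunction hierarchy:
successors = {
    "malfunction": ["electrical", "mechanical"],
    "electrical": ["battery", "wiring"],
}

def establishes(concept, data):
    # Stand-in for real establish/reject knowledge about a concept.
    return concept in data["evidence"]

def classify(current, data):
    """No traversal is explicitly programmed; hierarchical navigation emerges
    from repeatedly applying 'consider the successors of the current concept'."""
    if not establishes(current, data):
        return []
    established = [current]
    for child in successors.get(current, []):
        established += classify(child, data)
    return established
```

Given evidence establishing "malfunction," "electrical," and "battery," the behavior is exactly top-down refinement through the hierarchy, without that strategy appearing anywhere in the code.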

Indexing of Methods. The method is indexed by its goal. Just as in the case of an individual PSK unit, the goals and subgoals of methods can occur at different degrees of abstraction. If the goals are very abstract and general --say ``Explain,''-- then the user of the method has the task of mapping it to his needs. If the goal is very specific --say ``diagnose TV tuner circuit,''-- then the method's reusability is limited. This has come to be called the usability-reusability trade-off Klinker et al., 1991. There have been proposals in the literature to distinguish a class of methods called task-neutral methods Beys et al., 1996, i.e., methods that are not intended for a specific goal type. The intent is to capture the idea that some methods have extremely general applicability, while others seem to be more narrowly tailored to tasks of great specificity. We think that there is no binary separation between task-specific and task-neutral methods. Instead, there is a spectrum within which we can talk about more general and more specific methods Fensel, 1997. The point remains that the more general the task index of a method, the more work will be needed to recognize how to apply it to a specific task. We'll revisit this issue later when we discuss the issues surrounding sharing and use of methods.

Operationalization of Methods. A method of the sort we have been discussing is operationalized. That is, the description of a method can be used directly to implement a problem solver. As long as domain knowledge and problem data of the types required by the method definition are available, the method should ``run.''

However, the term ``method'' has often been used in the literature to refer, not to operationalized methods, but to general approaches and strategies to a problem class. For example, one often sees references to the ``divide and conquer'' method: this method is so general, and potentially so ubiquitously applicable, that it is virtually impossible a priori to operationalize it for all cases to which it is applicable. On the other hand, that method has indeed been successfully operationalized for specific domains, such as for composition of a certain class of algorithms Smith & Douglas, 1991.

Then there are methods, such as ``Propose-and-Revise'' and ``Heuristic Classification,'' that are also quite general and may be applied to almost any problem in principle. These methods have been operationalized: if we can find domain knowledge to suit their requirements, they can be applied. Almost any problem in the world can in theory be solved by proposing an initial solution of the right sort and then critiquing and revising it. The difficulty comes in identifying the right sort of initial solution, and the right sorts of criticisms and modifications, for arbitrary problems. Similarly, any problem can in theory be solved by categorizing its solution space into classes and mapping from the problem statement into solution categories. If this were easy to do in practice, then we would not need any other methods at all. The rub comes when one tries to identify, for an arbitrary problem, knowledge corresponding to good initial solutions, criticism knowledge, etc. (for the Propose-and-Revise method), or a categorization of the solution space and a mapping from the problem statement to the solution category space (for the Classification method). Thus, in practice, these methods, though operationalizable in general, can only be usefully applied to a narrow range of problems. For example, Propose-and-Revise has been used to solve parametric design problems in domains where there is often clear and straightforward knowledge about initial candidates and about how to assess and modify the parameters Schreiber & Birmingham, 1996. Similarly, classification methods can be operationalized and applied to problems where knowledge about solution hierarchies and mappings from problem data to solution hierarchy elements is not hard to get. Simple types of selection problems, and diagnostic problems where malfunction hierarchies are available, are examples of problems for which the Classification method has been readily applicable.
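The point that the shell of such a method is generic, while all the difficulty lives in the knowledge roles, can be made concrete. The sketch below is our own illustration, not any published system's code: the Propose-and-Revise loop is a few lines, and the propose/critique/revise parameters are the domain-specific knowledge whose availability decides practical applicability.

```python
def propose_and_revise(propose, critique, revise, max_iters=100):
    # Generic shell: the domain must supply propose, critique, and revise
    # knowledge of the right sort for this to be useful in practice.
    candidate = propose()
    for _ in range(max_iters):
        violations = critique(candidate)
        if not violations:
            return candidate
        candidate = revise(candidate, violations)
    raise RuntimeError("no acceptable candidate within the iteration limit")

# Toy parametric-design use: one parameter, one constraint.
result = propose_and_revise(
    propose=lambda: 0,
    critique=lambda x: ["x below minimum"] if x < 5 else [],
    revise=lambda x, violations: x + 1,
)
```

In the toy run, the initial proposal 0 is criticized and bumped until the constraint is satisfied; for a real parametric design task each lambda would be replaced by substantial domain knowledge.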

3.1 Second-Level Method Ontology

A second-level ontology for methods would identify, characterize, and name methods based on how they achieve their goals. How a method works is captured by the goal-subgoal structure induced by all the PSK units in the method. This goal-subgoal structure can be modified in many ways: by replacing one of the PSK units with a new one for the same subgoal, or by adding a new one for it, we get a different overall goal-subgoal structure. Each variation counts as a new method. Because of this, even for simple goal types, the distinct methods may be too numerous to list and name.

In spite of the proliferation problem, one might still identify especially useful combinations of goal-subgoal structures for various goal types, and make these methods available for sharing. This was the approach adopted in much of the work on task-oriented approaches. Thus Clancey's Heuristic Classification Clancey, 1985, Chandrasekaran's Generic Tasks Chandrasekaran, 1986 and KADS' Interpretation Models Wielinga & Breuker, 1986 identified a number of such generically useful problem-solving tasks and particularly appropriate problem-solving methods for them. Heuristic Classification was a method with the subtasks of data abstraction, heuristic match, and class refinement. The Generic Task paradigm identified hierarchical classification, abductive assembly, hypothesis assessment, design-plan selection and refinement, and data abstraction as some of the most ubiquitous tasks in knowledge systems. This framework also proposed how complex problems might be solved by the composition of several different generic tasks. For example, a diagnostic system might be built out of the methods for abductive assembly, classification, hypothesis assessment and data abstraction. This architecture is really for the generic problem of best-explanation finding, a task discussed in detail in Josephson & Josephson, 1994. This task is very important, since perception, natural language understanding, diagnostic problem-solving and scientific discovery can all be viewed as instances of best-explanation finding.
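The three subtasks of Heuristic Classification can be sketched as a pipeline of supplied knowledge roles. All of the domain content below (the abstraction, match, and refinement lambdas) is an invented toy illustration, not Clancey's actual formulation:

```python
def heuristic_classification(data, abstract, match, refine):
    # The three inferences, with the knowledge supplied as parameters:
    abstracted = abstract(data)             # data abstraction
    broad_class = match(abstracted)         # heuristic match
    return refine(broad_class, abstracted)  # refinement within the class

# Toy knowledge roles with a medical flavor:
solution = heuristic_classification(
    {"temperature": 39.5},
    abstract=lambda d: {"fever"} if d["temperature"] > 38.0 else set(),
    match=lambda features: "infection" if "fever" in features else "healthy",
    refine=lambda cls, features: cls,  # no refinement knowledge in this toy
)
```

The raw reading 39.5 is abstracted to the qualitative feature "fever," which is heuristically matched to a broad solution class; a real system would then refine within that class.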

In later work, instead of identifying a unique preferred method with a task, Chandrasekaran developed the notion of a task structure Chandrasekaran, 1990. The task structure identifies a number of alternative methods for a task. Each of the methods sets up subtasks in its turn. The methods are typically shallow in the nesting of subgoals, increasing the chances that a user would use them as a unit without much modification. This kind of task-method-subtask analysis can be carried on to a level of detail until the tasks are primitive with respect to the knowledge in the knowledge base. The advantage of developing a task structure for a task, as opposed to a specific method, is that there is greater flexibility in putting together a method that meets the needs of the specific situation. Methods with shallow goal-subgoal trees have a relatively small number of PSK units. Of course, these methods would leave a number of subgoals unsolved. The user is free to seek other methods for these subgoals, thus providing flexibility. For example, the method of logarithms for multiplying two numbers might simply say, ``Find logarithms of the two numbers, add them, and then find the antilogarithm of the sum,'' without specifying how to achieve the subgoals ``Find logarithm of number,'' and ``Find antilogarithm of number.'' The task structure would identify alternate methods for each of these subtasks. The logarithm method is highly reusable, while giving the user freedom to use appropriate methods for the subgoals.
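The logarithm example can be written down directly, with the two subgoals left as parameters for whatever methods the user chooses. Here, purely for illustration, both subgoals are discharged by the standard math library; a user could equally plug in table lookup or series expansion.

```python
import math

def multiply_by_logarithms(a, b, find_log, find_antilog):
    # The method states only the top-level strategy; how the subgoals
    # "find logarithm" and "find antilogarithm" are achieved is left open.
    return find_antilog(find_log(a) + find_log(b))

# One choice of methods for the two open subgoals:
product = multiply_by_logarithms(8, 16, math.log, math.exp)
```

The top-level method is maximally reusable precisely because it commits to nothing about how its subgoals are achieved.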

Task structures have been developed for the tasks of design Chandrasekaran, 1990, diagnosis Benjamins, 1993, planning Barros, L. Nunes de et al., 1996 and abductive inference Josephson & Josephson, 1994. This is not the place to give the details of these task analyses. The main points to be made here are the following. As a result of the GT and Task Structure work --and in general, of work on task analysis-- we now have a good repertoire of tasks and methods. The descriptions of the tasks and methods are a rich source of ontologies for problem solving. The examples we gave in the earlier section for diagnosis and design are but a small subset of the ontologies that can be constructed for problem-solving knowledge from the work on task-oriented approaches. The fact that this work focuses on tasks of certain generality makes the ontologies that arise from them of potential general interest as well.

The earlier generation of Generic Task languages can be viewed in the light of knowledge reuse. To take a simple example, a Generic Task language called CSRL Bylander & Mittal, 1990 was widely used to build classification problem-solving systems. CSRL can be viewed as giving the user an ability to:

  1. synthesize a classification method using a method-specific ontology consisting of terms such as ``establish concept'' and ``refine concept,'' within a control vocabulary that allowed variations on top-down navigation of the classification hierarchy, and
  2. represent domain factual knowledge for classification in the chosen domain.

Thus, the method ontology for classification directly resulted in a number of system builders reusing the problem-solving knowledge for classification embedded in CSRL. The Protégé family of tools of Musen and his associates has a similar connection to the method-ontology idea we have been discussing in this paper.

Although the notion of problem-solving ontology as we have introduced it in this paper is relatively new compared to the work on domain ontologies, some work in the field of knowledge engineering has provided important input. Already at the beginning of the nineties, Mizoguchi's group in Japan started to talk about task ontologies as valuable instruments for linking the vocabulary and view of a user to those of a problem solver Tijerino & Mizoguchi, 1993. These task ontologies comprise the vocabulary and the reasoning steps of specific tasks and, in that sense, they relate to our notions of problem-solving goal and problem-solving knowledge.

Work carried out at Stanford on Protégé is also relevant for problem-solving ontologies. A method ontology, in their view, defines the concepts and relationships that are used by the method to achieve its goal Gennari et al., 1994. Thus, a method ontology refers to a domain ontology from a method point of view. In this sense, Protégé-II is closely related to what are called ontological assumptions or requirements of problem-solving methods Benjamins et al., 1996, Benjamins & Pierret-Golbreich, 1996. Such assumptions define ontological commitments of PSMs in a domain-independent way. However, assumptions refer not only to domain knowledge needed by the PSM, but also to the task a method is supposed to realize, in which case they are called teleological assumptions. Such assumptions weaken the goal to be achieved by introducing assumptions about the precise problem type: the goal of the method is thus reduced. For example, in diagnosis a particular PSM might only be able to find single faults. If the original goal is to find any kind of fault (including multiple faults), then the PSM can achieve this goal under the single-fault assumption. Thus in fact the PSM achieves a weaker goal.

Different research communities are nowadays working on ontologies. In planning, a large effort is being made to come up with an ontology for planning. For instance, the SPAR initiative (Shared Planning and Activity Representation Tate, 1998) aims at developing a standard language for describing plans and the domain knowledge used in planning. SPAR thus corresponds to a second-level ontology for planning; it does not, however, include an ontology of planning problem-solving steps.

4 Method Knowledge Sharing

We argued that problem-solving method knowledge is a somewhat neglected dimension of ontology, and that a motivation for studying its ontology is knowledge sharing. Consider a knowledge-system builder who wishes to build a medical decision-making system. He wants a system that would accept a description of a patient's complaints and produce an answer describing what is wrong with the patient. We have deliberately not used the term diagnosis at this point. In order to build the system, he is told that he would have to find knowledge libraries containing knowledge about his medical domain and also about problem-solving methods. He should first search the method libraries for a method that relates to the goal type for his problem-solver. If successful in finding an appropriate method, he should use its operationalized specifications to implement it. Then he could search knowledge libraries for medical knowledge of the type the method needs. In theory he should be able to put the knowledge system together.

Suppose in a library of problem-solving methods there is a method for the goal type of classification. We know that this method is a good candidate --simple diagnostic systems may be built by viewing diagnosis as classification of symptoms into a disease/malfunction hierarchy. Let us assume that his diagnostic problem is actually simple enough for this approach to work. Our system builder sends out the goal, ``discover what is wrong with a patient'', hoping that this would match a method in the library. The classification method in the library is indexed with the goal: ``classify a situation into a situation classification hierarchy''. There is clearly a gap here. Before the system builder can realize that this method is applicable, he would have to figure out the relation between the level of abstraction at which his goal is stated and that of the method index. Note that we are not simply talking about the vocabulary differences --we are assuming that this problem is handled by the standardization of ontologies. We are talking about the deeper problem of realizing that ``what is wrong with a patient,'' is an instance of a ``diagnosis,'' which under some conditions is solvable by ``classification.''
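The abstraction gap can be made concrete with a hypothetical goal taxonomy; the is-a links below are invented for illustration, and in practice establishing them is exactly the hard recognition work the text describes.

```python
# Invented goal-abstraction links:
is_a = {
    "discover what is wrong with a patient": "diagnose",
    "diagnose": "classify a situation into a situation classification hierarchy",
}

def goal_matches(user_goal, method_index):
    """Walk up the abstraction links until the method's index is reached."""
    goal = user_goal
    while goal is not None:
        if goal == method_index:
            return True
        goal = is_a.get(goal)
    return False
```

Only once such links exist can the builder's concrete goal be recognized as an instance of the abstract goal the classification method is indexed by; a flat string match between the two goals would fail.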

Assuming that our system builder is able to recognize that his problem can be solved by a classification method, suppose he retrieves that method and implements it. Now he is going to need domain knowledge. The classification method in the library would describe the needed knowledge in terms of classification hierarchy, observations, and causal relations between observations and the classes. In order to map this ontology to something closer to his domain, he will need to map ``classification hierarchy'' to ``disease hierarchy,'' and ``observations'' to ``symptoms,'' ``complaints,'' etc.

We have thus seen three possible gaps that have to be bridged before we can achieve effective method sharing.

  1. The task and the method may use different underlying ontologies, and thus different terminology.
  2. The goal of the method may not be strong enough for achieving the task.
  3. The method requirements are stated in different terminology than the domain knowledge that the method has to reason about.

Fensel Fensel, 1997 proposes adapters as an instrument to bridge these gaps. An adapter can establish a mapping between the task and method ontologies, can introduce teleological assumptions to weaken the goal to be achieved, and can establish a mapping between the method and the domain ontology. In addition, adapters can introduce ontological assumptions on domain knowledge in order to satisfy the requirements of the method. A significant part of the work of adapters thus amounts to relating different ontologies to each other. How to do this is still an open issue. Earlier we spoke about task-neutral methods. Clearly, the more task-neutral a method is, the more work needs to be done before it can be applied to a task and a domain, or, in other words, the more adapters need to be used. Adapters thus have the capability to turn task-neutral PSMs into task-dependent ones.
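One of the adapter roles, mapping between the method and domain ontologies, can be sketched as a simple term-renaming table. All the terms and knowledge below are toy stand-ins for the medical example, and a real adapter would do far more than rename terms:

```python
# What the classification method asks for, in its own ontology:
method_requirements = {"classification hierarchy", "observations"}

# What the domain library provides, in the medical ontology:
domain_knowledge = {
    "disease hierarchy": {"infection": ["flu", "pneumonia"]},
    "symptoms": ["fever", "cough"],
}

# The adapter's mapping from domain terms to method terms:
term_map = {
    "disease hierarchy": "classification hierarchy",
    "symptoms": "observations",
}

def adapt(knowledge, mapping):
    # Rename domain terms into the terms the method expects.
    return {mapping.get(term, term): value for term, value in knowledge.items()}

adapted = adapt(domain_knowledge, term_map)
```

After adaptation, the domain knowledge satisfies the method's stated requirements without either the method or the domain library having been modified.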

The simple example of our system builder who wishes to build a system that would decide what is wrong with a patient illustrates the problems in sharing and using method knowledge in a seamless way. Nevertheless, method sharing has been practical to varying degrees of generality for more than a decade. The notion of problem-solving methods as separate items of knowledge that can be instantiated with domain knowledge and reused originated with Chandrasekaran, 1985, and Clancey, 1985, and was extended by Musen, 1989, Steels, 1990, and, in the CommonKADS project, by Benjamins et al., 1992. Typically, in this line of research it was required that the user of a method and the method indexing share the same terms in the description of the goal. For example, once one realizes that what he has is a classification problem, the GT framework provided the CSRL language to help build a classification problem-solver. The language guided the system-builder in inputting the domain knowledge that was required by the method. Similarly, DSPL helped users build simple planning systems. However, neither the knowledge requirements nor the methods were stated in an abstract way, making it difficult to search for knowledge or import methods from a different source.

5 Concluding Remarks

We have argued for ontology engineering efforts to move in two parallel tracks, one with the current focus on representing domain knowledge, and the other a new focus on representing problem-solving knowledge. One of the major benefits of identifying and standardizing ontologies is the potential for knowledge sharing. We should be able to share knowledge of problem solving just as easily as factual knowledge about specific domains. We have provided in this paper what we believe is a clearer analysis of the components of problem-solving knowledge, and we have related it to the ontology of domain factual knowledge. It turns out that much research on task analysis forms a good basis for developing such problem-solving ontologies. We thus build on previous work in this area, some of it our own.

The topic that we have treated in this paper is important because a knowledge system consists of both domain knowledge and problem-solving knowledge. Therefore, it is necessary to study domain and problem-solving ontologies together and pay attention to their integration. Moreover, for practical knowledge sharing and reuse, merely identifying the ontologies for domain and problem-solving knowledge is not enough. We also need to establish mechanisms for relating them to each other, such as mapping terms or making assumptions. And it is even harder, but necessary, to automate these mechanisms as much as possible. Several new projects, such as DARPA's HPKB project and Esprit's IBROW3 project, include these goals among their objectives, but there is still a long way to go.

An important consideration in reusability of ontologies, whether of domain factual knowledge or of problem solving knowledge, is the challenge to reusability made by the Situated Cognition (SC) movement Winograd & Flores, 1987, Menzies, 1996. The relevance of SC for us is the idea that the way in which knowledge is needed is highly dependent on the situation. For one situation, ``A causes B'' might be the relevant knowledge, while for another, ``A causes B after a delay of 1 sec'' is the relevant knowledge, while for yet a third situation, ``A causes B as long as D and E are not true'' becomes the appropriate form in which the knowledge is needed. Note that we are talking about the same knowledge, but represented in different ways for different situations. SC proponents hold that these kinds of additional conditions may be added indefinitely, thus making the prospect of representing the essence of that item of knowledge, once and for all at the Knowledge Level, rather slim. Similar comments may be made about the representation of problem-solving methods as well.

In our view, this issue needs to be approached empirically. For one thing, in the above examples, what changes radically from situation to situation is not the ontology itself, but the assertions made using the ontology. Thus, the changes needed in ontologies from situation to situation are likely to be much smaller than the changes needed in a specific knowledge base. We also think that assumptions (requirements) are interesting candidates for capturing the situation to some extent. We feel that reuse issues should be investigated by varying the situations and task specifications around a starting situation in such a way that we can track the needed changes to the ontology and the knowledge base. This also implies that, in order truly to investigate reuse, we need systems that support the changes needed in knowledge and method ontologies as situations change. This appears to be an important new direction of research.


This material is based upon work supported by the Office of Naval Research, grant number N00014-96-1-0701, DARPA order number D594. Richard Benjamins is supported by the Netherlands Computer Science Research Foundation with financial support from the Netherlands Organisation for Scientific Research (NWO).

Bibliography

Aben, 1995
Aben, M. (1995). Formal Methods in Knowledge Engineering. PhD thesis, University of Amsterdam, Amsterdam.
Barros, L. Nunes de et al., 1996
Barros, L. Nunes de , Valente, A., & Benjamins, V. R. (1996). Modeling planning tasks. In Third International Conference on Artificial Intelligence Planning Systems, AIPS-96, pp. 11--18. American Association of Artificial Intelligence (AAAI).
Benjamins, 1993
Benjamins, V. R. (1993). Problem Solving Methods for Diagnosis. PhD thesis, University of Amsterdam, Amsterdam, The Netherlands.
Benjamins et al., 1996
Benjamins, V. R., Fensel, D., & Straatman, R. (1996). Assumptions of problem-solving methods and their role in knowledge engineering. In Wahlster, W., (Ed.) , Proc. ECAI--96, pp. 408--412. J. Wiley & Sons, Ltd.
Benjamins et al., 1992
Benjamins, V. R., Jansweijer, W. N. H., & Abu-Hanna, A. (1992). Integrating problem solving methods into KADS. In Bauer, C. & Karbach, W., (Eds.) , Interpretation Models for KADS -- Proceedings 2nd KADS User Meeting (KUM'92), Munich. GMD Studie 212, Sankt Augustin: Gesellschaft für Mathematik und Datenverarbeitung (GMD).
Benjamins & Pierret-Golbreich, 1996
Benjamins, V. R. & Pierret-Golbreich, C. (1996). Assumptions of problem-solving methods. In Shadbolt, N., O'Hara, K., & Schreiber, G., (Eds.) , Lecture Notes in Artificial Intelligence, 1076, 9th European Knowledge Acquisition Workshop, EKAW-96, pp. 1--16, Berlin. Springer-Verlag.
Beys et al., 1996
Beys, P., Benjamins, V. R., & van Heijst, G. (1996). Remedying the reusability-usability tradeoff for problem-solving methods. In Gaines, B. R. & Musen, M. A., (Eds.) , Proceedings of the 10th Banff Knowledge Acquisition for Knowledge-Based Systems Workshop, pp. 2.1--2.20, Alberta, Canada. SRDG Publications, University of Calgary.
Bylander & Chandrasekaran, 1988
Bylander, T. & Chandrasekaran, B. (1988). Generic tasks in knowledge-based reasoning: The right level of abstraction for knowledge acquisition. In Gaines, B. & Boose, J., (Eds.) , Knowledge Acquisition for Knowledge Based Systems, volume 1, pp. 65--77. London, Academic Press.
Bylander & Mittal, 1990
Bylander, T. & Mittal, S. (1990). CSRL: A language for classificatory problem solving and uncertainty handling. AI Magazine, 7:66--77.
Chandrasekaran, 1985
Chandrasekaran, B. (1985). Generic tasks in expert system design and their role in explanation of problem solving. In Proceedings of the National Academy of Science/Office of Naval Research Workshop on AI and Distributed Problem Solving, Washington, DC. National Academy of Sciences.
Chandrasekaran, 1986
Chandrasekaran, B. (1986). Generic tasks in knowledge based reasoning: High level building blocks for expert system design. IEEE Expert, 1(3):23--30.
Chandrasekaran, 1990
Chandrasekaran, B. (1990). Design problem solving: A task analysis. AI Magazine, 11:59--71.
Chandrasekaran, 1994
Chandrasekaran, B. (1994). AI, knowledge and the quest for smart systems. IEEE Expert, 9(6):2--6.
Clancey, 1985
Clancey, W. J. (1985). Heuristic classification. Artificial Intelligence, 27:289--350.
Coelho & Lapalme, 1996
Coelho, E. & Lapalme, G. (1996). Describing reusable problem-solving methods with a method ontology. In Gaines, B. R. & Musen, M. A., (Eds.) , Proceedings of the 10th Banff Knowledge Acquisition for Knowledge-Based Systems Workshop, pp. 3.1--3.20, Alberta, Canada. SRDG Publications, University of Calgary.
Fensel, 1997
Fensel, D. (1997). The tower-of-adapters method for developing and reusing problem-solving methods. In Plaza, E. & Benjamins, V. R., (Eds.) , Knowledge Acquisition, Modeling and Management, pp. 97--112. Springer-Verlag.
Fensel & Benjamins, 1996
Fensel, D. & Benjamins, V. R. (1996). Assumptions in model-based diagnosis. In Gaines, B. R. & Musen, M. A., (Eds.) , Proceedings of the 10th Banff Knowledge Acquisition for Knowledge-Based Systems Workshop, pp. 5.1--5.18, Alberta, Canada. SRDG Publications, University of Calgary.
Fensel et al., 1997
Fensel, D., Motta, E., Decker, S., & Zdrahal, Z. (1997). Using ontologies for defining tasks, problem-solving methods and their mappings. In Plaza, E. & Benjamins, V. R., (Eds.) , Knowledge Acquisition, Modeling and Management, pp. 113--128. Springer-Verlag.
Fridman-Noy & Hafner, 1997
Fridman-Noy, N. & Hafner, C. D. (1997). The state of the art in ontology design. AI Magazine, 18(3):53--74.
Gennari et al., 1994
Gennari, J. H., Tu, S. W., Rotenfluh, T. E., & Musen, M. A. (1994). Mapping domains to methods in support of reuse. International Journal of Human-Computer Studies, 41:399--424.
Gruber, 1993
Gruber, T. R. (1993). A translation approach to portable ontology specifications. Knowledge Acquisition, 5:199--220.
Guarino, 1995
Guarino, N. (1995). Formal ontology, conceptual analysis and knowledge representation. International Journal of Human-Computer Studies, 43(5/6):625--640. Special issue on The Role of Formal Ontology in the Information Technology.
Johnson, 1991
Johnson, T. (1991). Generic Tasks in the Problem-Space Paradigm: Building Flexible Knowledge Systems while Using Task-Level Constraints. PhD thesis, The Ohio State University, Ohio.
Josephson & Josephson, 1994
Josephson, J. & Josephson, S., (Eds.) (1994). Abductive Inference: Computation, Philosophy, Technology. Cambridge, Cambridge University Press.
Klinker et al., 1991
Klinker, G., Bhola, C., Dallemagne, G., Marques, D., & McDermott, J. (1991). Usable and reusable programming constructs. Knowledge Acquisition, 3:117--136.
Laird et al., 1987
Laird, J. E., Newell, A., & Rosenbloom, P. S. (1987). SOAR: an architecture for general intelligence. Artificial Intelligence, 33:1--64.
Lenat & Guha, 1990
Lenat, D. B. & Guha, R. V. (1990). Building large knowledge-based systems. Representation and inference in the Cyc project. Reading, Massachusetts, Addison-Wesley.
Marr, 1982
Marr, D. (1982). Vision: A computational investigation into the human representation and processing of visual information. San Francisco, W.H. Freeman.
McCarthy & Hayes, 1969
McCarthy, J. & Hayes, P. J. (1969). Some philosophical problems from the standpoint of artificial intelligence. Machine Intelligence, 4:463--502.
Menzies, 1996
Menzies, T. (1996). Assessing responses to situated cognition. In Gaines, B. R. & Musen, M. A., (Eds.) , Proceedings of the 10th Banff Knowledge Acquisition for Knowledge-Based Systems Workshop, Alberta, Canada. SRDG Publications, University of Calgary.
Mizoguchi et al., 1995
Mizoguchi, R., van Welkenhuysen , J., & Ikeda, M. (1995). Task ontology for reuse of problem solving knowledge. In Mars, N. J. I., (Ed.) , Towards very large knowledge bases. IOS Press.
Musen, 1989
Musen, M. A. (1989). Automated Generation of Model-Based Knowledge-Acquisition Tools. London, Pitman. Research Notes in Artificial Intelligence.
Neches et al., 1991
Neches, R., Fikes, R. E., Finin, T., Gruber, T. R., Senator, T., & Swartout, W. R. (1991). Enabling technology for knowledge sharing. AI Magazine, 12(3):36--56.
Newell, 1982
Newell, A. (1982). The knowledge level. Artificial Intelligence, 18:87--127.
Schreiber & Birmingham, 1996
Schreiber, A. T. & Birmingham, W. P. (1996). The Sisyphus-VT initiative. International Journal of Human-Computer Studies, 44. Editorial special issue.
Schreiber et al., 1994
Schreiber, A. T., Wielinga, B. J., de Hoog, R., Akkermans, J. M., & Van de Velde , W. (1994). CommonKADS: A comprehensive methodology for KBS development. IEEE Expert, 9(6):28--37.
Smith & Douglas, 1991
Smith, D. R. (1991). Structure and design of problem reduction generators. In Möller, B., (Ed.), Constructing Programs from Specifications. North Holland.
Sowa, 1997
Sowa, J. F. (1997). Knowledge Representation: Logical, Philosophical, and Computational Foundations. Book draft.
Steels, 1990
Steels, L. (1990). Components of expertise. AI Magazine, 11(2):28--49.
Tate, 1998
Tate, A. (1998). Roots of SPAR - shared planning and activity representation. The Knowledge Engineering Review. To appear. Special Issue on Ontologies.
Tijerino & Mizoguchi, 1993
Tijerino, Y. A. & Mizoguchi, R. (1993). MULTIS II: enabling end-users to design problem-solving engines via two-level task ontologies. In et al., A., (Ed.), EKAW'93 Knowledge Acquisition for Knowledge-Based Systems. Lecture Notes in Artificial Intelligence, LNCS 723, pp. 340--359, Berlin, Germany. Springer-Verlag.
van Heijst et al., 1997
van Heijst , G., Schreiber, A. T., & Wielinga, B. J. (1997). Using explicit ontologies in KBS development. International Journal of Human-Computer Studies, 46(2/3):183--292.
Wielinga & Breuker, 1986
Wielinga, B. J. & Breuker, J. A. (1986). Models of expertise. In Proceedings ECAI--86, pp. 306--318.
Wieringa & de Jonge , 1995
Wieringa, R. & de Jonge, W. (1995). Object identifiers, keys and surrogates -- object identifiers revisited. Theory and Practice of Object Systems (TaPOS), 1.
Winograd & Flores, 1987
Winograd, T. & Flores, F. (1987). On understanding computers and cognition: A new foundation for design: A response to the reviews. Artificial Intelligence, 31:250--261.


Call for papers for a special issue on ``Temporal Parts'' for The Monist, An International Quarterly Journal of General Philosophical Inquiry.
The term ``task'' has been used in two related, but still distinct, ways, causing some confusion. In one usage, task is a set or sequence of things to do: that is, task is something you do. Another usage has task as a term for a goal type, leaving open what the steps are in achieving it. We use it in the latter sense; that is, tasks are goal types, what needs to be accomplished.
Our examples throughout are chosen mainly for clarity in making the conceptual points rather than for accuracy or completeness in describing the knowledge. In particular, in this example, the knowledge for establishing hypotheses is usually substantially more complex than the example.