
Ontology Critiquing

 

An ontology is an abstract description of the world (Neches et al., 1991). In principle, several modeling approaches can be used to define ontologies. In PROTÉGÉ-II and in many other tools, however, an ontology is an object-oriented model of concepts and their instances. Examples of such concepts are vehicles, automobiles, trucks, and bicycles.

PROTÉGÉ-II distinguishes among domain ontologies that model concepts relevant for problem domains (e.g., symptoms, diseases, and drugs), method ontologies that model concepts relevant for problem-solving methods (e.g., hypotheses, constraints, and operations), and application ontologies that are an amalgamation of the domain, problem-solving, and other concepts relevant for an application (Gennari et al., 1994; Eriksson et al., 1995). These ontology types help developers reuse domain models and problem-solving methods. Moreover, developers can use application ontologies to define application-specific models. These application ontologies can be based on concepts from reused domain and method ontologies.

Critiquing an ontology can be difficult, because many of the potential deficiencies are modeling decisions. For instance, what constitutes a subclass (in contrast to an instance with a certain property) is a modeling decision that the developer must make. We distinguish between inappropriate modeling decisions (which neither a program nor a person can detect and correct without domain knowledge) and logical inconsistencies (which a program can detect, and a program or a person can correct). In PROTÉGÉ-II, this problem is complicated further by the different natures of the domain, method, and application ontologies. A program, however, can sometimes use various metrics (e.g., the depth of a class hierarchy) to alert the user to potentially inappropriate modeling strategies. Although this high-level critiquing approach cannot pinpoint the problem precisely, it can provide valuable feedback to the user. Note that this approach is analogous to the way grammar checkers report abnormal language metrics, such as readability indices.
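As an illustration, a metric-based check of class-hierarchy depth might look like the following sketch. The OntClass structure and the threshold of six levels are assumptions made for illustration, not part of PROTÉGÉ-II:

```python
from dataclasses import dataclass, field

# Hypothetical class representation; PROTEGE-II's actual ontology
# structures are not shown in the text and are assumed here.
@dataclass
class OntClass:
    name: str
    subclasses: list = field(default_factory=list)

MAX_DEPTH = 6  # assumed threshold; analogous to a readability index

def hierarchy_depth(cls):
    """Length of the longest subclass chain rooted at cls."""
    if not cls.subclasses:
        return 1
    return 1 + max(hierarchy_depth(sub) for sub in cls.subclasses)

def critique_depth(root):
    """Return a warning string if the hierarchy is suspiciously deep."""
    depth = hierarchy_depth(root)
    if depth > MAX_DEPTH:
        return (f"Class hierarchy under '{root.name}' is {depth} levels "
                f"deep; consider whether the deeper levels are needed "
                f"for problem solving.")
    return None
```

Like a readability index, the metric cannot prove that the design is wrong; it only flags hierarchies that deserve a second look.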

Some modeling mistakes, such as depth-first modeling without purpose, can be approached with automated critiquing. In a somewhat misguided attempt to model the real world, developers sometimes approach physical objects in a depth-first manner. For instance, creating an object-oriented model of a chair (or of any other physical object) and continuing to model every component and subcomponent of the chair is meaningless unless the developer has a clear idea of how the model will be used in the problem-solving process. One possible approach to identifying such excessive depth-first modeling is to monitor the growth of the ontology in the ontology editor.
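A minimal sketch of such growth monitoring, assuming the editor reports each class addition as a (new class, parent class) event; the event format and the threshold are illustrative assumptions:

```python
DEPTH_FIRST_LIMIT = 4  # assumed: warn after four additions in one chain

def detect_depth_first(additions):
    """additions: ordered list of (new_class, parent_class) edit events.
    Warn when consecutive additions form a single descending chain,
    i.e., each new class is a child of the class added just before it."""
    run = 1
    longest = 1
    for i in range(1, len(additions)):
        if additions[i][1] == additions[i - 1][0]:  # extends previous class
            run += 1
            longest = max(longest, run)
        else:
            run = 1  # developer moved elsewhere in the hierarchy
    if longest >= DEPTH_FIRST_LIMIT:
        return ("Several classes were added in a single descending chain; "
                "is this level of component detail needed by the "
                "problem-solving method?")
    return None
```

For example, the edit sequence chair, seat, cushion, cover (each a child of the previous class) would trigger the warning, whereas adding several sibling classes under one parent would not.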

A critiquing system can detect logical inconsistencies in ontologies relatively easily. Let us examine four typical cases of logical inconsistencies that automated systems can approach.

  1. Disconnected class references. In PROTÉGÉ-II, a common problem stems from disconnected references among classes. A slot of type instance (i.e., an instance pointer) should have a list of allowed classes for the referenced instance. Because these class references are textual tokens in the ontology language, renaming or deleting a class can leave slots that refer to classes that no longer exist.

  2. Missing information. In PROTÉGÉ-II, the developer must specify which slot of a class should be used as the key in browsers of instances of this class (e.g., the name slot for an object). Currently, the developer provides this information to the ontology editor through the browser-key slot facet. Without the browser-key information, the run-time system for the knowledge-acquisition tools cannot display the correct information in the browser.

  3. Confused relationship semantics. A common class-modeling problem is the beginner's mistake of mixing is-a and is-part relationships in ontologies. Even well-trained developers mix these relationships occasionally. Although developers rarely make trivial mistakes, such as using the class wheels as a subclass of the class automobile, more intricate situations may occur. Unfortunately, it is difficult to detect these mistakes automatically, because most ontologies do not contain redundant common-sense knowledge about the classes defined.

  4. Systematic faults. Ontologies can contain widespread systematic errors. Programs can detect some of these problems automatically. For instance, a critiquing system can point out a general lack of documentation (comments) in the ontology, and can request more documentation for classes and slots in the ontology. Another example of a general recommendation approach is to warn the developer about missing type information for slots.
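Checks for cases 1, 2, and 4 above amount to straightforward traversals of the class definitions. The dictionary representation and field names below (allowed-classes, browser-key, documentation) are illustrative assumptions, not PROTÉGÉ-II's actual ontology-language syntax; case 3 is omitted because, as noted, it requires common-sense knowledge:

```python
def critique_classes(classes):
    """classes: {name: {"slots": {slot: facets}, "browser-key": str|None,
                        "documentation": str|None}} -- assumed layout."""
    problems = []
    for cname, cdef in classes.items():
        for sname, sdef in cdef.get("slots", {}).items():
            # Case 1: instance slots whose allowed classes name no
            # class defined in the ontology (disconnected references).
            if sdef.get("type") == "instance":
                for ref in sdef.get("allowed-classes", []):
                    if ref not in classes:
                        problems.append(f"{cname}.{sname}: reference to "
                                        f"undefined class '{ref}'")
            # Case 4: missing type information for a slot.
            if "type" not in sdef:
                problems.append(f"{cname}.{sname}: no type specified")
        # Case 2: no browser-key facet for instance browsers.
        if not cdef.get("browser-key"):
            problems.append(f"{cname}: no browser-key slot specified")
        # Case 4: missing documentation for a class.
        if not cdef.get("documentation"):
            problems.append(f"{cname}: class is undocumented")
    return problems
```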

In addition to analyzing class definitions in ontologies, a critiquing system could check sets of instances of the ontology's classes. In this case, the system should verify that the instances are consistent with the constraints defined in the ontology.
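A minimal sketch of such an instance check, assuming a simple dictionary representation in which slots declare a type and optional numeric bounds; the representation and facet names are assumptions, not PROTÉGÉ-II's:

```python
def check_instance(instance, class_def):
    """instance: {"class": name, "values": {slot: value}};
    class_def: {"slots": {slot: {"type": ..., "min": ..., "max": ...}}}.
    Returns a list of constraint violations (empty if consistent)."""
    violations = []
    for sname, sdef in class_def["slots"].items():
        value = instance["values"].get(sname)
        if value is None:
            violations.append(f"missing value for slot '{sname}'")
            continue
        # Type constraint: declared slot type must match the value.
        if sdef["type"] == "integer" and not isinstance(value, int):
            violations.append(f"slot '{sname}': expected integer, "
                              f"got {type(value).__name__}")
        # Range constraints for numeric slots.
        if "min" in sdef and isinstance(value, (int, float)) \
                and value < sdef["min"]:
            violations.append(f"slot '{sname}': value {value} "
                              f"below minimum {sdef['min']}")
        if "max" in sdef and isinstance(value, (int, float)) \
                and value > sdef["max"]:
            violations.append(f"slot '{sname}': value {value} "
                              f"above maximum {sdef['max']}")
    return violations
```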

Currently, the prototype CT implementation does not critique ontologies by analyzing them directly; CT analyzes only definitions of knowledge-acquisition tools. These definitions, however, contain sufficient information to detect a few common ontology problems, such as disconnected class references and missing browser keys.






Henrik Eriksson