An ontology is an abstract description of the world (Neches et al., 1991). In principle, several modeling approaches can be used to define ontologies. In PROTÉGÉ-II and in many other tools, however, an ontology is an object-oriented model of concepts and their instances. Examples of such concepts are vehicles, automobiles, trucks, and bicycles.
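For illustration, such a concept hierarchy and its instances could be represented as follows (a minimal sketch in Python; the representation and the names are ours, chosen for the example, and are not those of PROTÉGÉ-II):

    # Minimal sketch of an object-oriented ontology: concepts (classes),
    # a subclass hierarchy, and instances. All names are illustrative.
    from dataclasses import dataclass, field

    @dataclass
    class Concept:
        name: str
        superclass: "Concept | None" = None        # None for root concepts
        instances: list = field(default_factory=list)

    vehicle = Concept("Vehicle")
    automobile = Concept("Automobile", superclass=vehicle)
    truck = Concept("Truck", superclass=vehicle)
    bicycle = Concept("Bicycle", superclass=vehicle)

    automobile.instances.append("my-automobile-1")  # an instance of Automobile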
PROTÉGÉ-II distinguishes among domain ontologies that model concepts relevant for problem domains (e.g., symptoms, diseases, and drugs), method ontologies that model concepts relevant for problem-solving methods (e.g., hypotheses, constraints, and operations), and application ontologies that are an amalgamation of the domain, problem-solving, and other concepts relevant for an application (Gennari et al., 1994; Eriksson et al., 1995). These ontology types help developers reuse domain models and problem-solving methods. Moreover, developers can use application ontologies to define application-specific models. These application ontologies can be based on concepts from reused domain and method ontologies.
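The following sketch merely illustrates the relationship among the three ontology types; the concept names and the selection are hypothetical, not an actual PROTÉGÉ-II application ontology:

    # Illustrative sketch only: the three ontology types as sets of concept
    # names. An application ontology amalgamates reused domain and method
    # concepts with concepts specific to the application.
    domain_ontology = {"symptom", "disease", "drug"}
    method_ontology = {"hypothesis", "constraint", "operation"}

    def build_application_ontology(domain_concepts, method_concepts,
                                   application_specific):
        return domain_concepts | method_concepts | application_specific

    application_ontology = build_application_ontology(
        {"disease", "drug"}, {"hypothesis"}, {"treatment-plan"})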
Critiquing an ontology can be difficult, because many of the potential deficiencies are modeling decisions. For instance, what constitutes a subclass (in contrast to an instance with a certain property) is a modeling decision that must be made by the developer. We make a distinction between inappropriate modeling decisions (which a program or a person cannot detect and correct without domain knowledge) and logical inconsistencies (which a program can detect, and a program or a person can correct). Inappropriate modeling decisions are difficult to approach without domain knowledge. In PROTÉGÉ-II, this problem is complicated further by the different natures of the domain, method, and application ontologies. A program, however, can sometimes use various metrics (e.g., the depth of a class hierarchy) to alert the user to potentially inappropriate modeling strategies. Although this high-level critiquing approach cannot pinpoint the problem precisely, it can provide valuable feedback to the user. Note that this approach is analogous to the way grammar checkers report abnormal language metrics, such as readability indices.
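The following sketch illustrates such a metric-based check; the child-to-parent representation and the threshold value are assumptions made for the example, not part of PROTÉGÉ-II:

    # Sketch of a metric-based critique: flag classes whose hierarchy depth
    # exceeds a threshold. The representation and threshold are illustrative.
    from typing import Dict, List, Optional

    def hierarchy_depth(parent_of: Dict[str, Optional[str]], cls: str) -> int:
        depth = 0
        while parent_of.get(cls) is not None:
            cls = parent_of[cls]
            depth += 1
        return depth

    def critique_depth(parent_of: Dict[str, Optional[str]],
                       max_depth: int = 7) -> List[str]:
        warnings = []
        for cls in parent_of:
            depth = hierarchy_depth(parent_of, cls)
            if depth > max_depth:
                warnings.append("Class '%s' is %d levels deep; consider whether "
                                "this detail serves the problem-solving method."
                                % (cls, depth))
        return warnings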
There are a few examples of modeling mistakes that can be approached with automated critiquing, such as depth-first modeling without purpose. In a somewhat misguided attempt to model the real world, developers sometimes approach physical objects in a depth-first manner. For instance, creating an object-oriented model of a chair (or any other physical object) and continuing to model every component and subcomponent of the chair is meaningless unless the developer has a clear idea of how the model will be used in the problem-solving process. One possible approach to identifying such excessive depth-first modeling is to monitor the growth of the ontology in the ontology editor.
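The following sketch illustrates one way such monitoring could work, assuming that the editor reports each class addition; the threshold and the interface are assumptions made for the example:

    # Sketch: warn when many consecutive class additions in the editor extend
    # a single chain of subclasses (possible purposeless depth-first modeling).
    from typing import Optional

    class GrowthMonitor:
        def __init__(self, run_limit: int = 5):
            self.last_added: Optional[str] = None
            self.chain_length = 1
            self.run_limit = run_limit

        def add_class(self, cls: str, parent: Optional[str]) -> Optional[str]:
            # Count consecutive additions in which each new class is a
            # subclass of the class added immediately before it.
            if parent is not None and parent == self.last_added:
                self.chain_length += 1
            else:
                self.chain_length = 1
            self.last_added = cls
            if self.chain_length >= self.run_limit:
                return ("The last %d classes form a single subclass chain "
                        "ending at '%s'; is this level of detail needed for "
                        "problem solving?" % (self.chain_length, cls))
            return None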
A critiquing system can detect logical inconsistencies in ontologies relatively easily. Let us examine four typical cases of logical inconsistencies that automated systems can approach.
Currently, the prototype CT implementation does not critique ontologies in the sense of analyzing them directly; CT analyzes only the definitions of knowledge-acquisition tools. However, these definitions contain sufficient information to detect a few common ontology problems, such as disconnected class references and missing browser keys.
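The following sketch illustrates checks of this kind; the structure assumed for the tool definition is ours, chosen for the example, and is not CT's actual representation:

    # Sketch: detect two common problems in a knowledge-acquisition-tool
    # definition: class references that do not resolve to a defined class, and
    # browser components without a browser key. The format is an assumption.
    from typing import Dict, List, Set

    def check_tool_definition(tool_def: Dict,
                              defined_classes: Set[str]) -> List[str]:
        problems = []
        for component in tool_def.get("components", []):
            referenced = component.get("class")
            if referenced is not None and referenced not in defined_classes:
                problems.append("Component '%s' references undefined class '%s'."
                                % (component.get("name"), referenced))
            if component.get("type") == "browser" and not component.get("key"):
                problems.append("Browser '%s' has no browser key."
                                % component.get("name"))
        return problems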