
Knowledge-Acquisition-Tool Critiquing

 

In the PROTÉGÉ-II approach, a knowledge-acquisition tool is defined by a tool specification, which PROTÉGÉ-II generates automatically. Critiquing automatically generated knowledge-acquisition tools may seem unnecessary, because a correct metatool should produce flawless tools. In practice, however, there are several reasons to take advantage of a critiquing system. Let us examine three types of problems that can occur in automatically generated tools.

  1. Incorrect metatool input. The input to the metatool must conform to the assumptions on which the metatool is based. As discussed in Section 3, domain ontologies used as the basis for tool generation can have flaws that result in problematic tools. A critiquing system can check the metatool input and output to ensure that the generation process is working correctly (one such check is sketched after this list).

  2. Incorrect custom adjustments. Developers can make mistakes when custom tailoring knowledge-acquisition tools. For example, in the PROTÉGÉ-II approach, developers sometimes create inappropriate windows when adjusting window layouts. Critiquing systems can examine the custom adjustments or, alternatively, the generated tools to detect such problematic cases.

  3. Incorrect metatool implementation. Naturally, there can be bugs in the metatool implementation that result in incorrect knowledge-acquisition tools. Critiquing systems can detect some of these problems, because metatool bugs often result in incomplete target tools.
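As a concrete illustration of the first category, the sketch below shows one way a critiquing system could validate metatool input. The representation of an ontology as a mapping from class names to referenced class names is a hypothetical stand-in for the actual PROTÉGÉ-II ontology format, and find_dangling_references is an illustrative name, not a PROTÉGÉ-II function.

    # Hedged sketch: validating metatool input by checking that every
    # class reference in the ontology resolves to a defined class. The
    # dictionary representation is assumed, not PROTEGE-II's format.
    def find_dangling_references(ontology):
        """Return (class, reference) pairs whose target class is undefined."""
        defined = set(ontology)
        return [(cls, ref)
                for cls, refs in ontology.items()
                for ref in refs
                if ref not in defined]

    # Hypothetical example: "Dose" refers to "Drug", which is never defined.
    sample = {"Therapy": ["Dose"], "Dose": ["Drug"]}
    print(find_dangling_references(sample))    # prints [('Dose', 'Drug')]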
We have seen examples of problems from all three categories in PROTÉGÉ-II. Developers get confused by incorrect metatool input that manifests itself in the target tool, and they occasionally make inappropriate custom adjustments. Naturally, bugs in the tool generator, which occurred at various prototyping stages during the development of PROTÉGÉ-II, frustrate developers. Before we can discuss how a critiquing system can approach these problem categories, we must first examine how PROTÉGÉ-II generates and runs knowledge-acquisition tools.

PROTÉGÉ-II uses the metatool DASH to generate knowledge-acquisition tools from domain ontologies (Eriksson et al., 1994; Eriksson et al., 1995). A tool generated by PROTÉGÉ-II consists of a textual definition (EO file) of the tool and its properties. A run-time system then uses this definition to produce the tool user interface and to provide the appropriate tool behavior. A critiquing system can examine the textual definition and critique the tool design on the basis of this definition.

The developer can run CT once DASH has generated the knowledge-acquisition tool. CT reads the resulting EO file and builds an internal structure representing the tool definition. CT then analyzes this structure by applying critiquing rules to it. To detect the problem classes discussed earlier, CT uses three major classes of critiquing rules.

  1. Inconsistent input ontologies. This rule set checks for symptoms of problematic input ontologies (e.g., disconnected class references).

  2. Window layout problems. This rule set checks for geometrical problems in window layouts (a geometric sketch follows this list). Figure 2 shows a sample inappropriate layout with overlapping widgets and widgets partially outside the window.

     
    Figure 2: Inappropriate window layout. This window contains overlapping widgets and widgets partially outside the window boundaries.

  3. Bugs in DASH. This rule set checks for symptoms of bugs in the metatool. Examples of such symptoms include disconnected references in the tool definition and partial tool definitions.
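The geometrical checks in the second rule set reduce to simple rectangle arithmetic. The sketch below detects overlapping widgets under the assumption that widgets are axis-aligned (x, y, width, height) rectangles; this representation is assumed, not CT's actual layout model. The complementary outside-the-window check appears with the sample rule below.

    # Hedged sketch of rule class 2: detecting overlapping widgets.
    # Rectangles are (x, y, width, height) tuples -- an assumption,
    # not CT's actual layout representation.
    from itertools import combinations

    def overlaps(a, b):
        """True if two (x, y, w, h) rectangles intersect."""
        ax, ay, aw, ah = a
        bx, by, bw, bh = b
        return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

    def overlapping_pairs(widgets):
        """Return all pairs of widgets whose areas overlap."""
        return [(a, b) for a, b in combinations(widgets, 2) if overlaps(a, b)]

    # The first two widgets overlap, as in the layout of Figure 2.
    print(overlapping_pairs([(0, 0, 50, 20), (40, 10, 50, 20), (0, 100, 10, 10)]))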

Generally, the critiquing of window layouts is similar to the critiquing of user-interface designs (Löwgren & Nordqvist, 1992). Although we do not check the windows with respect to an official user-interface style guide (as in KRI, Löwgren & Nordqvist, 1992), we use several critiquing rules that are consistent with such style guides and with established design principles. Appendix A describes the critiquing rules in further detail.
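Schematically, the analysis that these rule sets perform can be summarized as follows. This is a minimal sketch under the assumption of a simple rule interface; parse_eo_file is a hypothetical placeholder, since the EO syntax is not reproduced here.

    # Hedged sketch of CT's analysis step: read the EO file, build an
    # internal structure, and apply each critiquing rule to it.
    def parse_eo_file(path):
        """Placeholder parser; the real EO syntax is not shown here."""
        return {"windows": [], "widgets": [], "references": []}

    def analyze(path, rules):
        """Apply each critiquing rule and collect the critique points."""
        tool = parse_eo_file(path)
        points = []
        for rule in rules:
            points.extend(rule(tool))
        return points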

Many critiquing systems, including CT, use a two-step approach to critique generation: (1) analysis of the input data, and (2) report generation. CT uses critiquing rules for the first step; a report module performs the second step. Let us examine the rule format that CT uses through a sample rule. Figure 3 shows a sample rule that checks for widgets outside the window. The parameter lostWidgets is the number of widgets outside the window boundaries. CT computes this value by using utility functions that check for widgets outside a window and count such widgets. Figure 4 shows the definition of the widgetOutside function, a utility function that determines whether a widget is outside the window area.

 
Figure 3: Sample critique rule. This rule checks for widgets outside the window area and records critique points for the critique report to the user.

 
Figure 4: Utility function for determining whether a widget is located outside its window.
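Because Figures 3 and 4 are written in CT's own rule format, the following Python approximation is only a sketch of their structure: widget_outside mirrors the role of the widgetOutside utility, and check_lost_widgets mirrors the rule that computes lostWidgets. The rectangle representation is assumed.

    # Hedged approximation of Figures 3 and 4; CT's actual rule language
    # differs. Windows and widgets are (x, y, w, h) rectangles (assumed).
    def widget_outside(widget, window):
        """Cf. Figure 4: is any part of the widget outside the window area?"""
        wx, wy, ww, wh = widget
        x, y, w, h = window
        return wx < x or wy < y or wx + ww > x + w or wy + wh > y + h

    def check_lost_widgets(window, widgets, points):
        """Cf. Figure 3: count widgets outside the window and, when at
        least one is found, log a critique point for the report."""
        lost_widgets = sum(widget_outside(wd, window) for wd in widgets)
        if lost_widgets > 0:
            # Kernel text abbreviated exactly as in the running text.
            points.append(("geometry", 8, "The window contains..."))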

When lostWidgets is greater than zero, the rule in Figure 3 adds a critique point to the list of problems to report to the user. CT uses this list to prepare the critique report (see Section 5). In this case, the rule adds a geometry point of priority 8. (This priority factor allows the reporting module to sort the critique points.) The string "The window contains..." is the kernel text, which is the most basic problem report. The string "Widgets outside this area..." is the explanation text, which provides the rationale for the critique. Finally, the string "This problem can be avoided by..." is the advice string, which suggests an action that can correct the problem. In the next section, we shall examine how CT uses the list of critique points collected from the activated rules to report the critique to the user.
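The critique-point structure that this paragraph describes might be modeled as below. This is a sketch; the field names are assumptions rather than CT's actual definitions, and the strings are abbreviated exactly as in the text above.

    # Hedged sketch of a critique point and priority-based sorting;
    # field names are assumptions, not CT's actual definitions.
    from typing import NamedTuple

    class CritiquePoint(NamedTuple):
        category: str      # e.g., "geometry"
        priority: int      # lets the reporting module sort the points
        kernel: str        # the most basic problem report
        explanation: str   # the rationale for the critique
        advice: str        # an action that can correct the problem

    point = CritiquePoint("geometry", 8,
                          "The window contains...",        # abbreviated
                          "Widgets outside this area...",  # abbreviated
                          "This problem can be avoided by...")

    # The reporting module can sort points with the highest priority first.
    report = sorted([point], key=lambda p: p.priority, reverse=True)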


