
5.2 Verify

 

The verify step checks whether the proposed method satisfies the constraints and the user-requirements (goals). It is divided into two sub-steps: knowledge-verification and simulation-verification (these names are taken from [Chandrasekaran, 1990]). In the context of problem solving methods we might better call them static-verification (verifying before execution of the method) and dynamic-verification (verifying after execution of the method), respectively. We discuss both verification steps in turn.
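A minimal sketch of this two-step structure in Python (the function names and interfaces here are our own illustration, not taken from the paper):

    from typing import Any, Callable, Optional

    def verify(method: Any,
               knowledge_verify: Callable[[Any], bool],
               simulation_verify: Callable[[Any], Any]) -> Optional[Any]:
        # Static verification: check constraints before executing the method.
        if not knowledge_verify(method):
            return None  # verification failed: a new propose step is needed
        # Dynamic verification: execute the method and test the dynamic goals.
        return simulation_verify(method)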

Knowledge-verification In flexible problem solving, knowledge-verification relies on two types of problem-type-specific knowledge (e.g. diagnosis-specific): (1) constraints between components, and (2) constraints following from assumptions. The knowledge-verification step (see Figure 4) uses these component-constraints and assumption-constraints to test whether a method is valid. Both types of constraints may depend on the given assumptions and the input problem. For example, the compatibility of some diagnostic components depends on the kind of behaviour model (which is part of the diagnostic input problem).

An example of an assumption-conflict is the following. Suppose we are given the assumption that the causes in our behaviour model are not necessarily independent, but possibly correlated. This would cause an assumption-conflict if we were ever to use number-minimality as a Selection component. Number-minimality selects the explanation with the lowest number of causes (since a small number of faults is more likely than a high number of faults). This minimality criterion only makes sense if the causes are assumed to be uncorrelated. After all, if the causes are correlated, a single unmodelled cause might underlie a large number of correlated causes in our explanation, and we would incorrectly rule out such an explanation with our selection criterion.
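For illustration, this conflict could be encoded as a simple rule; a hedged sketch, with hypothetical labels for the assumption and the Selection component:

    def number_minimality_conflict(assumptions: set, selection_component: str) -> bool:
        # Number-minimality is only sound when causes are assumed to be
        # uncorrelated, so it conflicts with a correlated-causes assumption.
        return (selection_component == "number-minimality"
                and "correlated-causes" in assumptions)

    # Correlated causes rule out number-minimality as Selection component:
    assert number_minimality_conflict({"correlated-causes"}, "number-minimality")
    assert not number_minimality_conflict({"uncorrelated-causes"}, "number-minimality")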

In the configuration literature the term valid configuration is used. A method is valid if it is both component-valid and assumption-valid. A method is component-valid if and only if all the component constraints hold, and assumption-valid if and only if no assumption conflict occurs.
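These definitions translate directly into predicates; a sketch, under the assumption that component constraints and assumption conflicts are represented as boolean tests:

    from typing import Any, Callable, Iterable

    def component_valid(method: Any,
                        component_constraints: Iterable[Callable[[Any], bool]]) -> bool:
        # Component-valid: all constraints between components hold.
        return all(holds(method) for holds in component_constraints)

    def assumption_valid(method: Any, assumptions: set,
                         assumption_conflicts: Iterable[Callable[[Any, set], bool]]) -> bool:
        # Assumption-valid: no assumption conflict occurs.
        return not any(conflict(method, assumptions) for conflict in assumption_conflicts)

    def valid(method, assumptions, component_constraints, assumption_conflicts) -> bool:
        # A method is valid iff it is both component-valid and assumption-valid.
        return (component_valid(method, component_constraints)
                and assumption_valid(method, assumptions, assumption_conflicts))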

If verification fails, a new propose step is performed. However, the distinction between the propose step and the verify step is relative. We can make the propose step gradually more knowledge-intensive by moving knowledge from the knowledge-verification step into the propose step: we can propose only component-valid methods, only assumption-valid methods, or even only valid methods. Conversely, we can make the propose step less knowledge-intensive by generating arbitrary methods, without using the static goals to guide the proposal of a method. The knowledge about the particular problem type (in our case diagnosis) determines which type of knowledge (static goals, assumption conflicts or component constraints) must be part of the propose or knowledge-verification steps. In our case the knowledge about diagnostic methods enables us to guide the propose step using the static goals. This makes the propose step a kind of nested generate-&-test, which generates proposals that are tested against the static goals. This saves us from generating proposals that can easily be determined to be inappropriate.
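Such a nested generate-&-test could look as follows; a sketch, assuming static goals are boolean tests over a candidate method:

    from typing import Any, Callable, Iterable, Iterator

    def propose(generate_candidates: Callable[[], Iterable[Any]],
                static_goals: Iterable[Callable[[Any], bool]]) -> Iterator[Any]:
        goals = list(static_goals)
        for candidate in generate_candidates():
            # Nested test: prune candidates the static goals already rule out,
            # so they never reach the (more expensive) verify step.
            if all(goal(candidate) for goal in goals):
                yield candidate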

  
Figure 4: Knowledge-verify step: a Valid Method is a Possible Method which causes no assumption conflicts and no component-constraint conflicts. If verification fails, a new propose step is performed.

Simulation-verification Simulation-verification consists of performing diagnosis, followed by testing whether the dynamic goals are met. Diagnosis is performed using the valid method from the knowledge-verification step. The computed diagnoses are used for testing the dynamic goals (see Figure 5).
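A sketch of this step, assuming the valid method exposes a diagnose operation and that dynamic goals are boolean tests over the computed diagnoses (both interfaces are our own assumption):

    def simulation_verify(valid_method, problem, dynamic_goals):
        # Perform diagnosis with the valid method (the expensive step).
        diagnoses = valid_method.diagnose(problem)
        # Test each dynamic goal against the computed diagnoses.
        violated = [goal for goal in dynamic_goals if not goal(diagnoses)]
        if not violated:
            return "success"  # the method is a Suitable Method
        # Otherwise report the violated goals together with the used method.
        return {"violated-goals": violated, "method": valid_method}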

  
Figure 5: Simulation-verify step: if the Verification-Result contains ``success'' then the method is a Suitable Method. The Verification-Result is ``success'' if the Valid Method meets all the Dynamic Goals; otherwise the result consists of the violated goals, as well as the used method.

The verification of the dynamic goals requires the computed diagnoses. Computing these diagnoses is expensive, and therefore the simulation-verification is expensive. An example of a dynamic goal is a requirement on the size of the diagnoses. Sometimes such a dynamic goal can be guaranteed by a particular choice of component (i.e. statically determined), but if this component is not appropriate for other reasons (e.g. an assumption conflict), we might choose another component. In such a case we have to verify the goal dynamically. If not all goals are met, the verification results contain the failing goals, as well as the method that failed to meet these goals.
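For instance, a size requirement on the diagnoses could be expressed as the following dynamic goal (a hypothetical encoding, with diagnoses represented as sets of causes):

    def max_diagnosis_size(limit: int):
        # Dynamic goal: every computed diagnosis contains at most `limit` causes.
        def goal(diagnoses) -> bool:
            return all(len(diagnosis) <= limit for diagnosis in diagnoses)
        return goal

    at_most_two = max_diagnosis_size(2)
    assert at_most_two([{"c1"}, {"c2", "c3"}])    # goal met
    assert not at_most_two([{"c1", "c2", "c3"}])  # goal violated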





