Problem-solving Method Reuse and Assembly:
From Clinical Monitoring to Traffic Control

Martin Molina 1 and Yuval Shahar 2

1 Department of Artificial Intelligence, Technical University of Madrid,
Campus de Montegancedo S/N, Boadilla del Monte 28660, Madrid, SPAIN
mmolina@dia.fi.upm.es
2 Section on Medical Informatics, School of Medicine,
Stanford University, Stanford, CA 94305, USA
shahar@camis.stanford.edu

Abstract. In the knowledge engineering field, models of expertise present interesting properties for reuse. In particular, Problem-Solving Methods (PSMs), which identify generic lines of reasoning with specific uses of knowledge types, are good reusable components for knowledge modeling. In this paper we present a case study of PSM reuse, where we focus on two particular issues: (1) how to reuse a PSM by generalizing some of its task assumptions, and (2) the formulation of a new PSM by assembling simpler PSMs. The case study starts with a PSM called the Knowledge-Based Temporal Abstraction method, which was originally inspired by several clinical domains. We show how to reuse that method for a new task in the traffic domain: first, the temporal dimension, which the method assumed, is generalized into a linear dimension in order to create abstractions along both the temporal and spatial dimensions; second, a new method for spatiotemporal abstraction is defined by assembling two versions of the generalized PSM. In summary, the paper underlines and illustrates some of the significant steps of PSM reuse, and proposes a modular organization of ontologies (using is-a and part-of relations) to facilitate this activity.

1. INTRODUCTION

Software reuse for the development of different applications is one of the main interests of software engineering. Software reuse improves productivity and maintenance and, consequently, decreases development costs. In knowledge engineering, models of expertise present particularly interesting properties for reuse. In fact, this issue has been one of the main lines of interest in this community during the last decade, where it has been considered mainly from the point of view of knowledge modeling, following the knowledge level concept proposed by Newell [Newell, 82]. Expertise reuse may drastically improve the knowledge acquisition and implementation activities of knowledge system development.

Basically, two different approaches for reuse may be identified in this field. On the one hand, the domain-oriented approach aims at reusing declarative descriptions of certain fields to solve different kinds of problems. For instance, it could be possible to formulate a task-independent description of the electrical network domain that could later be used to carry out different tasks such as network malfunction diagnosis or electrical network design. This approach uses the notion of ontology to describe explicit specifications of the domain elements [Gruber, 93]. On the other hand, the task-oriented approach has the goal of reusing problem-solving knowledge. The idea is to identify general problem-solving methods that may be reused to solve problems in different domains. For instance, it is possible to make a domain-independent formulation of a method to solve the problem of diagnosis, for example with the establish-and-refine strategy. This generic method may be used to diagnose problems in the electrical network domain or in the urban traffic domain. This approach uses the notion of problem-solving method (PSM), which is present in different knowledge engineering frameworks such as the generic task [Chandrasekaran et al., 92], the role-limiting method [Marcus, 88] and KADS [Schreiber et al., 93].

In this paper we present a case study illustrating some key issues about reusability following the second approach. The work presented here analyzes the reusability of an existing problem-solving method, called Knowledge-Based Temporal Abstraction, which was originally defined for clinical domains. The paper shows how to reuse this method to carry out a new task in a traffic domain (in which a network of highways is supervised to analyze the impact of certain control actions). The case study focuses on two particular issues: first, how to reuse a PSM by generalizing some of its task assumptions, which are later specialized for different tasks, and, second, the formulation of a new PSM by assembling simpler PSMs.

The paper is organized as follows. Section 2 introduces some of the knowledge modeling topics about reuse that will be analyzed later with the case study. Then, section 3 describes the PSM used in the case study together with one of the original tasks where it was applied. Section 4 describes the task in the traffic domain where the method will be reused. Section 5 describes a first step to reuse the PSM, through a kind of generalize-and-specialize process. Section 6 describes how to assemble the resulting components to formulate the final global PSM. Finally, section 7 describes and compares some of the existing knowledge modeling platforms, discussing to what extent they support the process described in the paper.

2. DESIGN OF KNOWLEDGE MODELS AS A PROCESS OF REUSE AND ASSEMBLY

Knowledge modeling is the process of building a model of an intelligent agent's expertise that simulates its behaviour in solving problems (typically, an intelligent agent in this context is a human expert or a team of specialized professionals such as a department in a company). The knowledge modeling activity may benefit greatly from expertise reuse, as has been shown by different authors [Steels, 92] [Gennari et al., 94]. In particular, from a task-oriented approach, there are at least two facilities. One is the management of libraries of generic reusable problem-solving methods. When a developer tries to formulate a new model, he/she may use the library to select the best PSM for a particular task, according to the particular assumptions of the problem. The use of these abstract patterns may drastically reduce the effort required to formalize the model because they suggest possible knowledge structures to be considered during the knowledge acquisition process. In the knowledge engineering community several proposals have been made following this approach, for example, the CommonKADS library [Breuker, Van de Velde, 94], or the work of McDermott [McDermott, 88]. On the other hand, a second facility for knowledge modeling is the use of a library of reusable preprogrammed components that helps to operationalize an existing formalized knowledge model. The developer reuses existing high-level adaptable constructs to build an operational version of the problem solver. There are several proposals for this kind of component, such as the mechanisms [Puerta et al., 93], the primitives of representation [Cuena, Molina, 94], and the application kit with solution methods [Macintyre, 93].

This second facility is an interesting solution given that it provides a way to operationalize models from a different point of view than conventional programming. Instead of writing imperative lines of code to program systems, the developer manages a library of high-level components. Adapting these components operates at a higher level than conventional programming, given that they use declarative representations (rules, frames, constraints, etc.) closer to professional fields than conventional procedural programming languages (C, Fortran, etc.). This advantage also benefits the end-user of the final application, who can consult and modify not only the knowledge bases but also the knowledge structure without programming. Therefore, the use of these high-level reusable components reduces the gap between formalizing and operationalizing knowledge models.

2.1 Problem-solving Method Reuse

Ideally, a PSM defines a space of tasks that it is able to carry out. This space may be formulated as a generic task (or task type) that abstractly defines the potential set of application tasks that the method can perform. Thus, for example, the propose-and-revise PSM is associated with the generic task called parametric design. This task is defined by a set of assumptions characterizing the space of tasks that it defines, such as: the system to be designed must be defined in terms of components with parameters and values for these parameters, there must be a set of internal constraints that the resulting design has to satisfy, there must be a set of criteria for fixing constraint violations, there must be preference criteria for selecting fixes, the solution is an assignment of values to the parameters that satisfies the internal constraints, etc. These assumptions include functional specifications and domain knowledge requirements of the PSM. Therefore, an approach for reusing PSMs is to consider the activity as a specialization process of the generic task of the method. In general terms, this specialization consists of two consecutive steps: (1) method application to the domain, and (2) a knowledge acquisition process to create the particular knowledge bases.

In more detail, the first step instantiates the abstract task on a particular domain in such a way that it refers to domain terms instead of method-oriented concepts. This process assumes the existence of a method ontology, a declarative description of the concepts, attributes, relations, etc. used by the method during its reasoning, which are abstractly defined according to their role in the reasoning strategy. In addition, the application ontology includes a description of the domain where the final task will operate. The instantiation process is done by mapping method ontology terms into application ontology terms. We consider this instantiation as the definition of the domain role that the method plays (as opposed to the task role that a component of the domain plays in a certain problem-solving action). Thus, the same method ontology may be instantiated on different application ontologies, which leads to the organization shown in figure 1, where application ontologies have an is-a relation with the method ontology when mapping relations can be established between them. According to this, the problem-solving method associated with the method ontology is used with different application ontologies through a kind of inheritance along the is-a relation.

Figure 1: The reuse process of a PSM may be considered as an instantiating process of the method ontology on particular application ontologies. Each application ontology is viewed as a specialization of the method ontology (using an is-a relation).

This process is described in detail in the Protege-II framework (where mapping relations may be defined between ontologies) and it is also described in KADS (using lifts connecting the domain layer and the inference layer). The second step, knowledge acquisition, builds the domain knowledge that describes the particular expertise of the problem (formulated as rules, constraints or another declarative representation), constructing the particular knowledge bases of the PSM. As a result of this activity, the final application task is ready to be executed for solving problems.
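To make the mapping step concrete, the following sketch (in Python, with purely illustrative names such as kbta_to_diabetes; it does not reproduce the API of any of the cited tools) shows how method-ontology terms could be bound to application-ontology terms, so that a generic PSM can ask which domain elements play each of its roles.

    # Minimal sketch of instantiating a method ontology on an application ontology.
    # All names (METHOD_ONTOLOGY, kbta_to_diabetes, map_role) are illustrative.

    METHOD_ONTOLOGY = {"parameter", "event", "context", "abstraction"}

    # Mapping relations: method-ontology term -> application-ontology terms.
    kbta_to_diabetes = {
        "parameter": ["glucose_value", "glucose_state"],
        "event": ["regular_insulin_administration", "meal"],
        "context": ["preprandial", "postprandial"],
    }

    def map_role(method_term, mapping):
        """Return the domain terms that play the given method-ontology role."""
        if method_term not in METHOD_ONTOLOGY:
            raise ValueError(f"{method_term} is not a method-ontology term")
        return mapping.get(method_term, [])

    # The generic PSM asks for its "event" role; the mapping answers with domain terms.
    print(map_role("event", kbta_to_diabetes))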

Therefore, the possibilities of reuse of a PSM are established by the assumptions defined by the corresponding generic task. All the application tasks where the PSM may be used are potentially present in this component. A particular final application task must exactly fit the assumptions in order to use the PSM. However, sometimes it is even possible to reuse the method for other tasks that do not exactly fit the set of assumptions, by removing or weakening some of the assumptions. Of course, these changes will imply changes in the original PSM specification and implementation, but sometimes this may be advantageous compared to the effort required to build another PSM from scratch. For instance, a typical change (which tends to be easy to implement) is to generalize some of the assumptions. The case study described in this paper shows an example of this process.

2.2 Problem-solving Method Assembly

In this paper, we understand assembly as the combination of PSMs following a general inference structure with a particular strategy of reasoning to create a new PSM specialized in solving a new (more complex) type of problem. Basically, in order to define the complex PSM it is necessary to describe: (1) the set of subtasks performed by the method (with their corresponding submethods), (2) the inference structure, i.e., how the outputs of some subtasks become inputs of other subtasks (inputs may be case data or knowledge bases), (3) the strategy of reasoning that determines the sequence of steps to solve the problem, and (4) the method ontology of the new PSM. This last element may be viewed as the aggregation of the simpler PSMs' ontologies.

Figure 2: The assembly of PSM-1 and PSM-2 for building a new PSM defines a new method ontology as the aggregation of simpler ontologies.
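As an illustration of these four elements, the following sketch (Python, with hypothetical names such as CompositePSM; it is not a real framework API) outlines how two simpler methods could be combined: the subtask list, the data flow between them, and a simple fixed control regime.

    # Illustrative sketch of PSM assembly.

    class PSM:
        """A simple problem-solving method: a callable plus its method ontology."""
        def __init__(self, name, ontology, solve):
            self.name, self.ontology, self.solve = name, ontology, solve

    class CompositePSM:
        """Assembly of simpler PSMs: subtasks, inference structure, control regime."""
        def __init__(self, name, subtasks):
            self.name = name
            self.subtasks = subtasks       # (1) subtasks with their submethods
            # (4) the new method ontology as the aggregation of the simpler ones
            self.ontology = {term for psm in subtasks for term in psm.ontology}

        def solve(self, case_data):
            # (2) inference structure: each subtask's output feeds the next one;
            # (3) control regime: here, a fixed sequential strategy.
            result = case_data
            for psm in self.subtasks:
                result = psm.solve(result)
            return result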

The ontologies associated with the simpler PSMs can be more specific than their respective method ontologies but, at the same time, they must remain generic descriptions in order to maintain a certain domain independence of the complex PSM. This leads to an organization of ontologies at different levels of abstraction, to which PSMs are associated. A possible organization could be established by is-a and part-of relations, as presented in figure 1 and figure 2. There are generic ontologies to which PSMs are associated (method ontologies). As subclasses, there are more specific ontologies and, at the bottom level, there are instances that correspond to application ontologies. In subclasses and instances, PSMs are applied by inheritance (this means that the operations that a PSM performs with the generic concepts defined in its method ontology are carried out following the same procedures but with the more specific concepts of the more specific ontologies). In addition to this organization, the part-of relation shows an organization where some ontologies are viewed as the aggregation of other, simpler ontologies. Complex ontologies are associated with complex PSMs. In summary, the usual organization of PSMs in trees of tasks and subtasks can be complemented by organizations of ontologies with different levels of abstraction and aggregation. An example of this organization will be presented later with the case study of this paper.
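A compact way to picture this organization is sketched below (Python; the Ontology class and its fields are assumptions made for illustration, not part of any cited framework): a node with an is-a link to a more general ontology, from which the associated PSM is inherited, and part-of links to the simpler ontologies it aggregates. The example at the end anticipates the case study of sections 5 and 6.

    # Sketch of the ontology organization discussed above (illustrative only).

    class Ontology:
        def __init__(self, name, is_a=None, part_of=None):
            self.name = name
            self.is_a = is_a              # more general ontology (its PSM is inherited)
            self.part_of = part_of or []  # simpler ontologies aggregated into this one
            self.psm = None               # PSM directly associated with this ontology

        def applicable_psms(self):
            """PSMs usable here: the local one plus those inherited through is-a."""
            psms = [self.psm] if self.psm else []
            if self.is_a:
                psms += self.is_a.applicable_psms()
            return psms

    # Example mirroring figures 1, 2 and 13: a method ontology (linear), two
    # specializations (spatial, temporal) and a composite ontology aggregating them.
    linear = Ontology("linear")
    linear.psm = "KBLA"
    spatial = Ontology("spatial", is_a=linear)
    temporal = Ontology("temporal", is_a=linear)
    spatiotemporal = Ontology("spatiotemporal", part_of=[spatial, temporal])
    spatiotemporal.psm = "KBSTA"
    print(spatial.applicable_psms())   # ['KBLA'], inherited through the is-a relation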

3. THE KNOWLEDGE BASED TEMPORAL ABSTRACTION METHOD IN THE CLINICAL DOMAIN

This section describes the PSM to be reused. It is called knowledge-based temporal abstraction, and it was originally developed to be applied in clinical domains. The method considers a domain that requires the collection of substantial amounts of data over time and the abstraction of those data into higher-level concepts. For instance, most clinical tasks require measurement and capture of numerous time-oriented patient data. It is highly desirable for an automated knowledge-based decision-support tool that assists physicians who monitor patients over significant periods to provide short, informative, context-sensitive summaries of time-oriented clinical data stored on electronic media. Such a tool should be able to answer queries at various levels of abstraction about abstract concepts that summarize the data. Data summaries are valuable to the physician, support diagnostic or therapeutic recommendations by an automated system, and help monitor plans suggested by the physician or by the decision-support system. A meaningful summary cannot use only time points, such as data-collection dates; it must be able to characterize significant features over periods of time, such as "2 weeks of grade-II bone-marrow toxicity in the context of therapy for potential complications of a bone-marrow transplantation event" (Figure 3), and more complex patterns. The temporal abstraction task (TA task) is thus an interpretation task: given time-stamped data and external events, produce context-specific, interval-based, relevant abstractions of the data (see [Shahar, 94] [Shahar, in press] for a formal definition).

Figure 3: Abstraction of platelet and granulocyte values during administration of the PAZ clinical protocol for treating patients who have chronic graft-versus-host disease (CGVHD). The time line starts with a bone-marrow transplantation (BMT) event. The platelet and granulocyte count primitive (raw-data) parameters and the PAZ event are typical inputs to the temporal-abstraction task. The abstraction intervals and context intervals are typically part of the output. The figure's legend distinguishes platelet counts (*), events, open context intervals and closed abstraction intervals; M[n] = myelotoxicity (bone-marrow-toxicity) grade n.

A method solving the TA task encounters several conceptual and computational problems: (1) both the input data and the required output abstractions might include several data types (e.g., symbolic, numeric) and can exist at various abstraction levels; (2) input data might arrive out of temporal order, and existing interpretations must be revised nonmonotonically; (3) several alternate interpretations might need to be maintained and followed over time; (4) parameters have context-specific temporal properties, such as expected persistence of measured values and classification functions (e.g., the meaning of the value LOW of the hemoglobin-state abstraction depends on the context); (5) acquisition of knowledge from domain experts and maintenance of that knowledge should be facilitated. The method should enable reusing its domain-independent knowledge for solving the TA task in other domains, and enable sharing of domain-specific knowledge with other tasks in the same domain.

The framework that we are using for solving the TA task is based on our work on temporal-abstraction mechanisms [Shahar et al., 92], [Shahar, Musen, 93], [Shahar, 94]. We have defined a general PSM [Eriksson et al., 95] for interpreting data in time-oriented domains, with clear semantics for both the method and its domain-specific knowledge requirements: the knowledge-based temporal-abstraction (KBTA) method. The KBTA method comprises a knowledge-level representation of the TA task and of the knowledge required to solve that task. The KBTA method has a formal model of input and output entities, their relations, and their properties--the KBTA ontology [Shahar, 94], [Shahar, in press].

Figure 4: The knowledge-based temporal-abstraction method and its mechanisms. The figure's legend distinguishes tasks, methods or mechanisms, knowledge types, and the DECOMPOSED-INTO, SOLVED-BY and USED-BY relations.

The KBTA method decomposes the TA task into five subtasks (Figure 4): (1) temporal-context restriction: creation of contexts relevant for data interpretation (e.g., effect of a drug), to focus and limit the scope of the inference; (2) vertical temporal inference: inference from values of contemporaneous input data or abstractions (e.g., results of several blood tests conducted during the same day) into values of higher-level concepts (e.g., classification into bone-marrow toxicity Grade II); (3) horizontal temporal inference: inference from similar-type propositions that hold over different time intervals (e.g., joining different-value abstractions of the same parameter that hold over two meeting time intervals and computing the new abstraction's value); (4) temporal interpolation: bridging of gaps between similar-type but temporally disjoint point- or interval-based propositions to create longer intervals (e.g., joining two disjoint episodes of anemia, occurring during different days, into a longer episode); and (5) temporal-pattern matching: creation of intervals by matching patterns over disjoint intervals over which propositions of various types hold. The five subtasks of the KBTA method are solved by five temporal-abstraction mechanisms (nondecomposable computational modules) that we have defined (see Figure 4). The TA mechanisms depend on four well-defined domain-specific knowledge types: structural knowledge (e.g., IS-A, PART-OF and ABSTRACTED-INTO relations), classification (functional) knowledge (e.g., mapping of hemoglobin values into hemoglobin states), temporal-semantic (logical) knowledge (e.g., the CONCATENABLE property [Shoham, 87]), and temporal-dynamic (probabilistic) knowledge (e.g., temporal persistence functions that bridge gaps between temporally disjoint intervals [Shahar, 94]). Values for the four knowledge types are specified as the domain's temporal-abstraction ontology when developing a TA system for a particular domain and task. The TA ontology is a domain-independent, but task-specific, theory of the domain-specific entities, properties, and relations relevant for the TA task in each domain.

In the TA ontology, input entities (see Figure 3) include external event propositions (e.g., administration of medications) interpreted over time intervals (i.e., event intervals), and measured (primitive) parameter values (e.g., hemoglobin values), also interpreted over intervals, possibly zero-length intervals (i.e., time points). Parameters are not under direct control, and are modifiable only through external events that have an effect on them. Events can be of several types; each type can have a list of arguments that can be instantiated with values (e.g., dose). Output (and sometimes, input) entities also include abstractions of parameters, which may be of type state, gradient, rate or the more general pattern. An abstraction of a parameter is a parameter (e.g., the state of the hemoglobin value). Abstractions must be defined within an interpretation context. Parameter propositions include a parameter (e.g., hemoglobin level), an abstraction type (e.g., state), a value (e.g., grade_II_toxicity) and an interpretation context (e.g., being treated by the CCTG-522 AIDS-therapy protocol). Parameter propositions are interpreted over some time interval (an ordered pair of time stamps), thus forming a parameter interval. Interpretation contexts are induced by external events, by certain parameter propositions, by the abstraction process's goals, and by certain combinations of these entities, but are not necessarily contemporaneous with the inducing entities [Shahar, 94].
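These entities can be pictured with a small data-structure sketch (Python dataclasses; the field names are illustrative and do not reproduce the actual KBTA/RÉSUMÉ representation).

    # Minimal data-structure sketch of the TA-ontology entities described above.

    from dataclasses import dataclass
    from typing import Any

    @dataclass
    class Interval:
        start: float                  # time stamps; start == end yields a time point
        end: float

    @dataclass
    class EventInterval:
        event_type: str               # e.g., "medication_administration"
        arguments: dict               # e.g., {"dose": 10}
        interval: Interval

    @dataclass
    class ParameterInterval:
        parameter: str                # e.g., "hemoglobin_level"
        abstraction_type: str         # "state", "gradient", "rate" or "pattern"
        value: Any                    # e.g., "grade_II_toxicity"
        context: str                  # e.g., "CCTG-522_protocol"
        interval: Interval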

Figure 5 shows an example of such an ontology for the domain of insulin-dependent diabetes. It includes a parameter ontology, an event ontology (e.g., insulin therapy, meals, physical exercise), and a context ontology (e.g., preprandial [before a meal] and postprandial [after a meal] contexts). In this domain, administrations of regular insulin and isophane insulin suspension (NPH) are events, inducing different insulin-action interpretation contexts. Meals are events that induce preprandial and postprandial contexts. Thus, values for the Glucose_state_DM_prebreakfast parameter (the state of glucose in the context of DM and measurement before breakfast) can be created, when relevant, regardless of absolute time. The Glucose_state parameter is a new parameter with six values defined from corresponding ranges used by the domain expert (HYPOGLYCEMIA, LOW, NORMAL, HIGH, VERY HIGH, EXTREMELY HIGH). These values are sensitive to the context in which they are generated; for instance, postprandial values allow for a higher range of the NORMAL value.

Figure 5: Part of the diabetes ontology. Each name represents a class, a normal line is an is-a relation and a dashed line is an ABSTRACTED-INTO relation.

Glucose_state propositions (for all allowed values) have the value TRUE for the temporal-semantic property concatenable in the same meal-phase context. The Glucose_state_state parameter is a higher-level abstraction of the Glucose_state parameter, which maps its six values into three (LOW, NORMAL, HIGH, or L, N, H for short). It has different semantic properties, and allows creation of daily horizontal-inference patterns within a nonconvex preprandial context representing abstraction over several meal phases, such as LLH (LOW, LOW and HIGH Glucose_state_state values over breakfast, lunch, and supper, respectively). Patterns such as LLH values for the Glucose_state_state parameter, especially in the preprandial subcontext, are extremely useful when a physician must decide how to modify a patient's insulin regimen. Glucose_state_state values that are measured within different phases (e.g., prelunch and presupper), but within the same day, can be joined by interpolation within the same generalizing interpretation context (e.g., a nonconvex version of the more general PREPRANDIAL context interval), creating an abstraction comprising several preprandial abstractions up to 6 to 8 hours apart; the maximal gap is defined by an interphase Δ (maximal-gap) function. Diurnal state abstractions that are measured in the same phase but over different (usually consecutive) days, such as several values of the Glucose_state_DM_prebreakfast parameter, can be joined by interpolation within the same interpretation context (e.g., a nonconvex PREBREAKFAST context interval, which comprises all breakfasts within a given interval), up to 24 to 28 hours apart, using another interphase Δ function.
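The interpolation step can be sketched as follows (Python; it assumes the ParameterInterval structure of the previous sketch, and the Δ table below merely echoes the 8-hour and 28-hour figures mentioned above, not the actual persistence knowledge of the diabetes ontology).

    # Sketch of temporal interpolation driven by a maximal-gap (Delta) function.

    MAX_GAP_HOURS = {
        ("Glucose_state_state", "PREPRANDIAL"): 8,               # within the same day
        ("Glucose_state_DM_prebreakfast", "PREBREAKFAST"): 28,   # across days
    }

    def maximal_gap_hours(parameter, context):
        """A toy interphase Delta function: the largest bridgeable gap, in hours."""
        return MAX_GAP_HOURS.get((parameter, context), 0)

    def can_interpolate(p1, p2):
        """Two same-type, same-value, same-context disjoint parameter intervals can
        be joined when the gap between them does not exceed the Delta value."""
        same = (p1.parameter == p2.parameter and p1.value == p2.value
                and p1.context == p2.context)
        gap = p2.interval.start - p1.interval.end
        return same and 0 <= gap <= maximal_gap_hours(p1.parameter, p1.context)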

Thus, knowledge is organized in an is-a hierarchy of classes in such a way that lower-level classes may redefine general knowledge, which provides non-monotonic behaviour during the reasoning process. As a summary of the previous description, figure 6 shows the list of assumptions for applying the KBTA method.

Functional specification:

  1. The method receives as input a set of values of parameters at different levels of abstraction with associated time intervals. Data can arrive out of temporal order.
  2. The method receives as input a set of events with associated time intervals.
  3. The method may receive as input queries including conditions about the presence of values of parameters and intervals.
  4. The method generates as output a new set of values for abstract parameters associated to new time intervals. A given abstract parameter can have different values for the same interval corresponding to interpretations under different contexts.
  5. The method generates yes/no answers about queries.

Domain knowledge requirements:

  6. The domain includes basic elements that may be considered as parameters, events, goals and contexts.
  7. There is a continuous temporal dimension where time intervals (and time points as a particular case of intervals) can be defined and associated with parameters, events, goals and contexts.
  8. There is knowledge for abstracting parameter values into higher-level parameter values. This can be established at different levels of abstraction.
  9. There is knowledge for creating intervals, extending intervals and interpolating values (along the continuous dimension).
  10. There is knowledge expressing how contexts are activated by events.
  11. Knowledge types (8) and (9) are conditioned by contexts in such a way that they can be organized following a hierarchy of contexts.
  12. Global patterns can be defined to formulate conditions about values of parameters and intervals.

Figure 6: List of assumptions for applying the KB Temporal Abstraction method

We have implemented the KBTA method as the RÉSUMÉ system [Shahar, Musen, 93] and evaluated it with encouraging results in several different medical domains, such as oncology, therapy of AIDS patients, monitoring of children's growth, and monitoring of diabetes patients [Shahar, Musen, 96]. Recently, a generic knowledge-acquisition tool for the KBTA method has been implemented to help developers acquire, maintain, share and reuse the knowledge required by this method [Stein et al., 96].

4. THE TRAFFIC CONTROL TASK

This section describes a particular task in the traffic domain where the KBTA method will be reused. This domain has recently experienced a significant demand for advanced information technology. Control centers for traffic management are connected on-line to devices such as detectors on roads, cameras, traffic lights, etc. Thus, operators can supervise the state of the road by consulting databases with recent information from detectors, and can affect the state of control devices. The use of these traffic monitoring and management facilities requires sophisticated support tools for on-line operators, to help them deal with the complexity and diversity of information from sensors and control devices. In particular, expert systems for decision support have recently been successfully introduced in this field. Existing systems include TRYS [Cuena et al., 95] [Cuena et al., 96], KITS [Cuena et al., 92] and ARTIST [Deeter, Ritchie, 93]. The goal of a real-time traffic decision-support system is to propose, to traffic management center operators, control actions that eliminate or reduce problems according to the global state of the traffic.

The decision-support system receives as input the following information. There are sensors on major roads recording several traffic magnitudes such as speed (km/h), flow (vehicles/h) and occupancy (percentage of time the sensor is occupied). The distance between successive sensors on a freeway is around 500 m. Information arrives periodically (e.g., every minute). The system also receives information about the current state of control devices. Control devices include traffic signals at intersections, traffic signals at sideways on-ramps, changeable message signs (CMS) that present different messages to motorists (e.g., warning about existing congestions or alternative path advice), radio advisory systems to broadcast messages to motorists, and reversible lanes (i.e., freeway lanes whose direction can be selected according to the current and expected traffic demand). The system interprets sensor data and detects the presence of a problem and a possible cause. Problems are congestions at certain locations caused by lack of capacity due to accidents, excess demand (as during rush hours), etc. In addition, the system proposes recommendations about how to solve or reduce the problem. For instance, it may recommend increasing the duration of a phase (e.g., green time) at a traffic signal, or it may suggest showing certain messages on some CMSs to divert traffic. The system also gives explanations about why it recommends those control actions.

Typically, a city network is divided into two different but related subnetworks, which are supervised by different control centers: the surface streets and the freeways. In this paper the focus will be on freeway management. The expert system supervising the whole freeway network analyzes each direction of each freeway separately, in such a way that the whole knowledge model is divided into submodels. Each submodel is specialized in detecting and solving problems in the particular area under control. Thus, the global expert system is composed of different specialists, called traffic-control agents, coordinated by an additional agent, called the coordinator, that integrates the local proposals.

One of the required tasks is to monitor current control actions (e.g., warnings or path recommendations) to be sure that they are performing as expected and that they are consistent with the traffic state. The reason is that, when a problem occurs, the system proposes solutions making heuristic assumptions about the effect of the proposed control actions, but once a control action is implemented, its real effect may be different. For example, consider the recommendation of alternative paths. In congested locations, the system might propose presenting messages to drivers recommending an alternative path. The system assumes that a certain percentage of traffic (e.g., 10% to 20%) will be diverted to the new path. However, once the messages are displayed, different situations may arise due to unexpected conditions. For instance, if only a few drivers follow the recommendation, it is better to remove the messages and come up with another solution. Alternatively, if the number of drivers following the recommendation is larger than expected, a new congestion may appear as a consequence of the massive diversion. In this case it is very important to remove the messages immediately.

This task of monitoring the effects of traffic control actions is called control-action monitoring. It receives as input the recent evolution of different road parameters (speed, flow and occupancy at every sensor location) and the recent control actions, and it returns answers about the adequacy of particular control actions. Output answers include: (1) the effect of the control action is appropriate and the action must be maintained on the road because it is working properly, (2) the control action is useless because, since the control action was implemented, traffic has not changed its behavior, (3) the control action has a negative side effect because a new problem has appeared as a consequence of the action, (4) the effect of the control action is still unknown because the action was recently implemented and it needs some more time to act, (5) the control action has already solved the problem and must be removed, and (6) the control action has some effect but it is insufficient to solve the problem.
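An illustrative signature for this task (Python; the names Adequacy and control_action_monitoring are assumptions introduced only for this sketch) makes its inputs and its six possible answers explicit; these answers reappear in section 6 as temporal patterns.

    # Illustrative interface for the control-action monitoring task.

    from enum import Enum

    class Adequacy(Enum):
        APPROPRIATE = 1   # the action works and should be maintained
        USELESS = 2       # traffic behaviour has not changed since the action started
        NEGATIVE = 3      # a new problem appeared as a side effect of the action
        UNKNOWN = 4       # the action is too recent to be evaluated
        SOLVED = 5        # the problem is gone and the action should be removed
        INSUFFICIENT = 6  # the action has some effect but does not solve the problem

    def control_action_monitoring(sensor_evolution, recent_control_actions):
        """Given the recent evolution of speed/flow/occupancy at every sensor location
        and the recent control actions, return an Adequacy value per control action."""
        ...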

Typical reasoning includes whether the number of drivers taking a certain exit is decreasing, whether the length of an existing queue is increasing or decreasing, whether at a nearby location (such as a related surface street) the flow is close to a critical value and a new problem may be expected, etc. Solving this task requires temporal reasoning (e.g., about durations, rates and trends) as well as spatial reasoning (e.g., about queue lengths). Thus, control-action monitoring requires both spatial and temporal reasoning.

5. REUSE OF KNOWLEDGE BASED TEMPORAL ABSTRACTION METHOD FOR TRAFFIC CONTROL

The goal is to transport the KBTA method, which was originally used in the domain of clinical monitoring, to the domain of traffic control, to perform a different task. As we will see, this process cannot directly reuse the original method. It requires first changing some of the original assumptions, in such a way that the reuse for space is carried out by first generalizing the temporal dimension to a linear dimension. Then, a new method is defined (the spatiotemporal abstraction method) integrating the two versions of the generalized method (on the spatial and temporal dimensions). Finally, different versions of this new method are assembled to create the required final method that carries out the traffic control task.

5.1 Generalization of Assumptions: The Knowledge Based Linear Abstraction Method

As previously described, the traffic control task requires reasoning about both time and space. The assumptions for using the KB Temporal Abstraction (KBTA) method were presented in the description of this method (Figure 6), one of which was that there must be a continuous temporal dimension (assumption 7), besides others that make reference to time intervals (assumptions 1 and 2). A solution for reusing the method for traffic control is, first, to generalize these assumptions to a generic linear dimension (which may be implemented by a simple adaptation of the original implementation of the method). Thus, a more general method is defined that will later be specialized twice (on space and on time) to model the traffic control task.

The KBTA method makes only a few assumptions regarding the structure of time, along which it creates interval-based abstractions. For instance, it assumes for each time line a set of totally ordered time stamps, one of which must be a zero point, and a time measure with predefined granularity units (e.g., HOUR); adding or subtracting a time measure to or from a time stamp returns a time stamp. Thus, the KBTA method can be applied to a domain with different linear-distance stamps and measures, as long as they comply with the algebraic constraints on time lines. This suggests that a more general method could easily be constructed by generalizing the assumptions about time to a more abstract linear dimension. We will call this new method KB Linear Abstraction (KBLA). In order to define the new method, the specific changes in the method ontology are the following (assuming a concept-attribute-facet-value representation):

  1. Names of concepts. Some concepts of the method ontology lose the reference to time in their names. For instance, concepts such as temporal persistence and temporal properties have to be called, respectively, linear persistence and linear properties, and the set called valid time units has to be renamed valid linear units. In total, 7 concepts change their names.
  2. Names of attributes. Likewise, some attributes of concepts have to refer to a generic linear dimension instead of the temporal dimension. This is the case of the attributes called start_time_unit and end_time_unit of the interval concept, or the attribute called delta_function_time_unit of the abstract function concept, which must be called delta_function_unit. In total, 16 attributes must change their names.
  3. Values of attributes. Some values of attributes have to be removed to generalize the method. For instance, the set valid time units has the original value {second, minute, hour, day, week, year}. In the linear dimension this set (with the new name valid linear units) does not have any predefined value. Likewise, the time unit conversion table includes three attributes: rows, columns and values; each attribute is a list including time-dependent values. In the new version for a generic dimension, these three attributes will not have any value. In total, 5 attributes lose their values to generalize temporal reasoning to linear reasoning.
  4. Facets of attributes. Some facets of attributes refer to time units. For instance, the facet valid domain for the start_time_unit attribute of the interval concept has the value {second, minute, hour, day, week, year}. In the linear dimension this set must not be written directly but through an indirect reference. This change affects 16 attributes.
  5. Names of constants. There is one constant that changes its name. The constant is called PRESENT and it represents the last instant in time. To generalize the method, it must be called END, representing the last point in the linear dimension.

In general, almost all the changes are syntactic, considering that the symbol time is replaced by the symbol linear. The most significant change concerns knowledge about units, where slots referring to units (through facets) must be changed; furthermore, the time unit conversion table has to be substituted by the corresponding version for space. In addition, although they are not strictly necessary, some internal changes should be made in the original implementation of the method in order to avoid references to time. Those changes include: mapping names of functions (such as +time to +linear, or the function transform_time_units to transform_units), and mapping names of some rules (e.g., the rule named change_interval_time_units must be called change_interval_units). In total, 7 functions and 4 rules change their names.
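The flavor of these changes can be conveyed with a small sketch (Python; any renaming target not explicitly listed in the text, and the generalize helper itself, are assumptions made for illustration): a renaming table applied to ontology symbols, plus a valid-units set that is emptied in KBLA and filled in again by each specialization.

    # Illustrative sketch of the temporal -> linear generalization.

    RENAMES = {
        "temporal_persistence": "linear_persistence",
        "temporal_properties": "linear_properties",
        "valid_time_units": "valid_linear_units",
        "delta_function_time_unit": "delta_function_unit",
        # start_time_unit, end_time_unit, etc. are renamed analogously (targets assumed)
    }

    # In KBTA the valid time units were predefined; in KBLA the set is left empty and
    # each specialization (temporal, spatial) supplies its own units.
    KBTA_VALID_TIME_UNITS = {"second", "minute", "hour", "day", "week", "year"}
    KBLA_VALID_LINEAR_UNITS = set()

    def generalize(symbol):
        """Map a time-oriented ontology symbol to its linear counterpart."""
        return RENAMES.get(symbol, symbol.replace("time", "linear"))

    print(generalize("valid_time_units"))   # valid_linear_units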

In summary, a new, more general method, the KBLA, has been defined by making the described changes in the original KBTA method. This process illustrates how the level of reuse of the original method is increased by generalizing some of the initial assumptions. The actual implementation of these changes was performed in a few days on the original CLIPS implementation of the KBTA method.

5.2 Specialization on Space

Given the KBLA method, we can now consider a highway as a linear space. Primitive parameters (sensor measurements) along the freeway are abstracted over spatial intervals into values of abstract parameters. The KBLA method applied to the spatial dimension creates intervals of abstractions along this dimension. In the following paragraphs we provide details on how to apply spatial abstraction to the traffic domain.

The primitive parameters and respective units in the traffic domain are the three basic magnitudes recorded by sensors, which are the inputs of the system: speed (km/h), flow (vehicles/h) and occupancy (percentage). The abstract parameters are high-level qualitative variables representing the state of the traffic, and several intermediate variables in the abstraction process: saturation degree (percentage), circulation regime (one of FLUID, UNSTABLE, CONGESTED) and saturation level (one of FREE, CRITICAL). Interpretation contexts include the number of lanes of the freeway. These contexts are induced by events such as the presence of an accident blocking one or more lanes, a road construction reducing the capacity of a freeway section, or the state of a reversible lane. Interpretation contexts determine how parameters such as the saturation degree should be abstracted (e.g., the saturation degree in the one-lane context is 100 x Flow / 1600, but is 100 x Flow / 3500 in the two-lane context). The saturation level is abstracted from the saturation degree (e.g., a saturation degree greater than 70% means that the saturation level is CRITICAL), and the circulation regime is abstracted from speed and occupancy.
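This context-dependent classification knowledge can be sketched as follows (Python; the capacities and the 70% threshold follow the examples just given, while the function names and the example values are assumptions).

    # Sketch of vertical-classification knowledge for the spatial abstraction.

    CAPACITY_BY_CONTEXT = {"one_lane": 1600, "two_lanes": 3500}   # vehicles/hour

    def saturation_degree(flow_veh_h, context):
        """Abstract raw flow into a saturation degree (percentage), per lane context."""
        return 100.0 * flow_veh_h / CAPACITY_BY_CONTEXT[context]

    def saturation_level(degree):
        """Abstract the saturation degree into a qualitative level."""
        return "CRITICAL" if degree > 70.0 else "FREE"

    degree = saturation_degree(flow_veh_h=1300, context="one_lane")
    print(degree, saturation_level(degree))   # 81.25 CRITICAL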

Figure 7: Part of the spatial ontology for the traffic domain. Each name represents a class, a normal line represents an IS-A relation and a dashed line represents an ABSTRACTED-INTO relation.

Other types of knowledge, besides vertical-classification knowledge, are represented in the traffic domain's spatial abstraction ontology. One is the Δ (maximal-gap) persistence function, which expresses the maximal distance between successive disjoint parameter intervals that still allows joining them into a new parameter interval through interpolation. Thus, in the case of the circulation regime parameter and the CONGESTED value, this distance could be established as 3 km (i.e., two circulation regime parameter intervals with the CONGESTED value would be joined into a longer interval when the distance between the endpoint locations was less than 3 km; if the distance was larger, they would be interpreted as two different problems). This particular feature of the KBLA method is especially useful in the traffic domain since sometimes sensors do not work, certain data are missing, and the system must be able to interpolate using other sensors together with heuristics.
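A sketch of this spatial interpolation, following the 3 km example (Python; representing intervals as simple (start_km, end_km) pairs is an assumption of the sketch):

    # Sketch of spatial interpolation with a maximal-gap (Delta) function.

    MAX_GAP_KM = {("circulation_regime", "CONGESTED"): 3.0}

    def join_spatial_intervals(i1, i2, parameter, value):
        """Join two disjoint same-value spatial intervals (i1 located before i2)
        when the gap between them is within the Delta value; otherwise keep them
        apart, i.e., interpret them as two different problems."""
        gap = i2[0] - i1[1]
        if 0 <= gap <= MAX_GAP_KM.get((parameter, value), 0.0):
            return (i1[0], i2[1])
        return None

    print(join_spatial_intervals((2.0, 4.5), (6.0, 7.0),
                                 "circulation_regime", "CONGESTED"))   # (2.0, 7.0)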

Values for the knowledge types depend on the particular highway. In order to manage different highways with different abstraction knowledge we can consider two approaches. One approach is to consider each highway as a different interpretation context. The other approach involves defining a different instance of the spatial ontology for each freeway. In this paper we follow the second solution in order to illustrate the way different reusable components are instantiated and assembled. Using an appropriate spatial abstraction ontology, the KBLA method was used to create spatial abstractions, using a spatial version of RÉSUMÉ and simulated highway sensor data (Figure 8).

Figure 8: Spatial abstraction for a highway section. At the bottom, a scheme of the highway is presented showing different densities of traffic. Above that, the respective values are presented for the different magnitudes recorded by sensors at consecutive locations (speed, occupancy and flow). At the top, different intervals are shown as a result of the spatial abstraction. The figure shows only top-level intervals, although different intermediate intervals are also created during the inference process.

In summary, this model shows a first use of the KBLA method with a spatial ontology. Here, the method uses a spatial description of the traffic on the highway, in order to abstract the current state along spatial intervals.

5.3 Specialization on Time

In addition to the spatial abstraction task, the control-action monitoring task also requires a solution for a temporal abstraction task to determine conclusions about the adequacy of control actions. This subtask receives as input a set of qualitative instantaneous views of the highway, corresponding to consecutive time instants, which are the output of the KBLA method with the spatial ontology, and determines the adequacy of the current control action by abstracting these views over time.

In this case, the primitive parameters include values provided by the output of the spatial abstraction task and values provided by sensors at critical points outside the highway, such as ramps or intersections: queue length (meters) and flow at point Pi (vehicles/h) (the number of these points is usually less than 5 per highway). Note that the value of the queue length parameter is an output of the previous spatial abstraction, i.e., it is the length of the spatial interval where the circulation regime is CONGESTED. For the sake of clarity, we assume that a highway can have at most one problem at a time. In fact, this is normally true. However, reasoning about multiple problems is not difficult; several zones must be defined, with each zone corresponding to a spatial interval between two consecutive message sign devices. A traffic queue usually has a fixed starting point where there is a lack of capacity (an accident, a bottleneck, etc.) and the end of the congested area evolves according to the demand. This means that if there were several problems on the same freeway, each one could be identified by the zone of its starting point. The abstract parameters of the temporal abstraction ontology for the control-action monitoring task in the traffic domain include: queue length gradient (one of INCREASING, DECREASING, CONSTANT), flow gradient at point Pi (one of INCREASING, DECREASING, CONSTANT), and saturation level at point Pi (one of FREE, CRITICAL). The queue length gradient is necessary to decide whether the control action is having some effect on the existing problem. Flow gradients monitor whether control actions such as diversion are followed by drivers. The saturation level at critical points is useful to determine whether a new problem may appear as a consequence of the control action. Vertical-classification tables for the saturation level parameter are specialized for each point Pi. Interpretation contexts are induced by events corresponding to the execution of traffic-control actions, such as turning on a congestion warning at a certain location or creating a path diversion. The horizontal-inference knowledge for gradient interpolation includes the variation considered significant for each abstracted parameter (e.g., 1000 m for the queue length, 500 vehicles/h for the flow parameter).
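The gradient abstraction can be sketched as follows (Python; the thresholds are those mentioned above, while the function and variable names are assumptions of the sketch).

    # Sketch of gradient abstraction over time using significant-variation thresholds.

    SIGNIFICANT_CHANGE = {"queue_length_m": 1000.0, "flow_veh_h": 500.0}

    def gradient(parameter, earlier, later):
        """Abstract two consecutive values of a parameter into a qualitative gradient."""
        delta = later - earlier
        if abs(delta) < SIGNIFICANT_CHANGE[parameter]:
            return "CONSTANT"
        return "INCREASING" if delta > 0 else "DECREASING"

    print(gradient("queue_length_m", 1500.0, 3000.0))   # INCREASING
    print(gradient("flow_veh_h", 2200.0, 2000.0))       # CONSTANT (below threshold)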

Figure 9: Part of the temporal ontology for the traffic domain. Each name represents a class, a normal line represents an IS-A relation and a dashed line represents an ABSTRACTED-INTO relation.

Figure 10: Temporal abstraction for a highway section. At the bottom is the evolution of the highway state over time; a queue first increases and then decreases. Above that, values are presented for the queue length and for the values measured by sensors at critical points. At the top are the inferred temporal abstraction intervals.

Finally, to determine the adequacy of a control action it is necessary to define temporal patterns. Using the terminology introduced in section 4, for each control action we defined the following set of patterns representing its adequacy: APPROPRIATE, USELESS, NEGATIVE, UNKNOWN, SOLVED and INSUFFICIENT. Each pattern is expressed as a set of parameter intervals and temporal and value constraints among them. The values of these patterns (one of TRUE, FALSE) supply the final answer to the control-action monitoring task.
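As a deliberately simplified reading of one of these patterns (the USELESS answer from section 4), the following sketch (Python; the interval representation, the observation threshold and the simplification itself are assumptions) checks that the queue-length gradient has stayed CONSTANT since the action started, i.e., traffic has not changed its behavior.

    # Simplified sketch of the USELESS adequacy pattern.

    def useless_pattern(queue_gradient_intervals, action_start, now,
                        min_observation=10):
        """queue_gradient_intervals: list of dicts with 'start', 'end', 'value'.
        The action is USELESS when, for long enough after it started, the queue
        length gradient has remained CONSTANT (no observable change in traffic)."""
        if now - action_start < min_observation:
            return False                      # too early: the UNKNOWN pattern applies
        since_action = [iv for iv in queue_gradient_intervals
                        if iv["end"] > action_start]
        return bool(since_action) and all(iv["value"] == "CONSTANT"
                                          for iv in since_action)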

In summary, this model defines a second use of the KBLA method, on the traffic domain from the temporal point of view. Here, the model uses a temporal description of the traffic on the highway in order to determine if the implemented control actions are adequate and consistent with the traffic behavior.

6. DEFINING NEW METHODS BY ASSEMBLY

The previous section showed how to reuse the KBTA method to formulate a new method called KBLA, which is used twice to perform two different subtasks in the traffic domain: spatial abstraction and temporal abstraction. This section shows how these subtasks are assembled to obtain the global task, control-action monitoring. In order to do so, first, a new method is defined, the KB Spatiotemporal Abstraction (KBSTA) method, for reasoning about a particular highway. Second, another new method is used, the Multiple Region Abstraction (MRA) method, for reasoning about the whole network with several highways. Figure 11 shows the structure of task-method-subtasks for the final traffic control task. The main task, control-action monitoring, is performed by the MRA method, which uses two subtasks: decompose (into regions) and spatiotemporal abstraction. The first subtask is necessary to successively decompose the global network into regions (each region will be a particular section of a highway). The second task is carried out by the KBSTA method. The next subsection shows in more detail how this method is defined by assembling two versions of the KBLA method.

Figure 11: Structure of task-method-subtasks for the control action monitoring task.

6.1 The Knowledge Based Spatiotemporal Abstraction Method

The KBSTA method is used for control-action monitoring on a particular highway. In general, to formulate a composite method the following elements must be defined: (1) the subtasks of the method with their respective submethods, (2) the inference structure, which defines how outputs of subtasks are inputs of others, (3) the control regime, which defines the order in which subtasks must be executed, and (4) the method ontology associated with the new method. Figure 11 shows the first part (1), where the KBSTA method is carried out by two different tasks: spatial abstraction and temporal abstraction. It shows that both tasks are carried out by the same method: the KBLA method.

Figure 12: Inference structure and control regime of the KB Spatiotemporal abstraction method.

The inference structure (2) defines the data flow between PSM components. Figure 12 shows this structure. Global inputs are descriptions associated with time stamps Ti (e.g., descriptions of the highway state for every minute). In turn, each individual description is a collection of parameter values associated with spatial stamps (e.g., speed, flow and occupancy at different spatial points along the highway). This information is input to the KBLA method, which uses a spatial ontology. This method produces as output spatial abstractions for each time stamp (e.g., traffic-queue length for every minute). Finally, this information is input to the KBLA method, which uses the temporal ontology and produces as output the final spatiotemporal abstractions (e.g., adequacy of traffic control actions). Note that this inference structure illustrates a situation where the same method is used several times in a model with different knowledge bases: the KBLA method with the spatial and temporal ontologies. Figure 12 also shows part (3), the control regime, defining a strategy that, first, performs spatial abstractions for each description corresponding to an instant, and, second, performs a final temporal abstraction.
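The data flow and control regime of figure 12 can be summarized in a short sketch (Python; the kbla placeholder and the dictionary-based representation of the per-instant descriptions are assumptions of the sketch).

    # Sketch of the KBSTA inference structure and control regime.

    def kbla(data, ontology):
        """Placeholder for the KBLA method applied with a given ontology."""
        ...

    def kbsta(descriptions_by_time, spatial_ontology, temporal_ontology):
        """First abstract each per-instant spatial description of the highway, then
        abstract the resulting series over time to evaluate the control actions."""
        spatial_abstractions = {
            t: kbla(description, spatial_ontology)    # e.g., queue length at time t
            for t, description in descriptions_by_time.items()
        }
        return kbla(spatial_abstractions, temporal_ontology)  # e.g., action adequacy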

Finally, part (4) defines a method ontology for the KBSTA method. This ontology may be considered as the aggregation of the spatial and temporal ontologies. However, the spatial and temporal ontologies are not direct method ontologies (nor application ontologies); they are two specializations of a more general ontology, the one associated with the KBLA method. Thus, the definition of the ontology for the KBSTA method may be considered as shown in figure 13. This figure presents an organization of different ontologies with is-a and part-of relations. The spatiotemporal ontology is considered the final method ontology of KBSTA. It is the aggregation of the spatial and temporal ontologies (through part-of relations). In turn, the spatial and temporal ontologies are related to a more general ontology, the linear ontology, through an is-a relation (this relation means that mapping relations between the more general and the more specific ontology may be established).

Figure 13: Ontologies and their relationships for the KB Spatiotemporal Abstraction method.

In summary, the model includes the following ontologies: the spatiotemporal ontology where KBSTA is defined, the linear ontology where KBLA is defined, the spatial ontology and the temporal ontology. Note that, when an application of this method is defined for a particular domain (e.g., to monitor a particular highway), specific application ontologies have to be defined following the generic organization. In our case, there must be spatial and temporal application ontologies for each highway Hi, SOi and TOi, and there must be a spatiotemporal application ontology for the same highway, STOi, that is related to SOi and TOi with part-of relations.

7. PLATFORMS FOR SUPPORTING COMPONENT REUSE AND ASSEMBLY

This section briefly describes some of the existing software platforms which may be used to support PSM reuse as described in previous sections. One of these approaches is Protege-II [Puerta et al., 93], in which domain-independent problem-solving methods may be reused to solve different domain-dependent tasks. With Protege-II, a developer applies an existing generic PSM (e.g., propose-and-revise) to solving a particular task (e.g., the VT task) by defining explicit mappings between the method ontology and the application ontology. Protege-II assists the developer in defining the application ontology and in building the user interface of a knowledge acquisition tool to elicit the domain knowledge. Once the knowledge bases are created using this knowledge acquisition tool, the explicit mapping relations are used to apply the generic method to the particular domain knowledge. On the other hand, although the current version of the Protege-II environment does not provide a technique for assembling methods, the paradigm on which it is based offers an appropriate context to consider this process. In particular, subtasks are assembled by higher-level methods which include, as control knowledge, the way in which the subtasks must be executed. The detailed process of assembly in Protege-II is still an open question that for the moment is solved by ad hoc procedures. The example we use here illustrates some interesting requirements of assembly that may be used in the future to characterize this activity in detail.

Another approach is the KSM environment (Knowledge Structure Manager) [Cuena, Molina, 95], [Cuena, Molina, 96], which follows the knowledge-unit paradigm to define models at the knowledge level. With KSM the developer may create generic abstract knowledge structures that can be applied to different specific domains by duplicating and configuring their components. Although KSM shares many modeling elements with Protege-II (such as methods and tasks), there are some differences. For instance, KSM uses the knowledge-area view, which can be seen as a high-level structure with blocks integrating ontologies and the sets of tasks that may be applied to those ontologies. A direct advantage of the use of these components is that it provides a higher-level view of the organization of the ontologies (similar to the is-a and part-of relations between ontologies described in this paper). In KSM, reuse is viewed as a process of duplication and specialization of knowledge units, which is useful to present an intuitive image of the model development. The reuse process in KSM presents similarities to the Protege-II one. However, in Protege-II it is more explicit, since the mapping relations are stored, and more flexible, because larger differences between the structures of the method ontology and the domain ontology may be accommodated. On the other hand, KSM provides a technique to assemble PSMs. This is done by using composite knowledge units which include local control knowledge formulating the assembly model. In order to do so, KSM provides a particular language, Link, to define the block connections (using data flows) and the execution order (using control rules).

Finally, KREST (Knowledge Reuse Tool) is another software environment, based on the components-of-expertise approach [Steels, 90]. KREST presents a knowledge-level description of an application and assists non-programmer users in reusing parts of it to develop different applications. The KREST tool [Macintyre, 93] uses the concept of application kit with solution methods as elementary reusable components.

8. CONCLUSIONS

In summary, the paper shows how to reuse the KB Temporal Abstraction method. Originally, it was created to be applied in clinical domains, where it was used to build several applications (diabetes, AIDS, children's growth monitoring, etc.). In this paper we have presented how it may be applied to a different domain, traffic control.

In order to reuse the method, a first step involves the generalization of some assumptions of the original PSM: the temporal dimension is abstracted to a linear dimension. This led us to propose a new PSM, called KB Linear Abstraction (KBLA), which is used twice in the traffic domain to carry out two different tasks: spatial and temporal abstraction. Then, another new method is defined, called KB Spatiotemporal Abstraction (KBSTA), as the assembly of two versions of the previous KBLA method.

Hence, this case study illustrates how reuse and assembly are powerful techniques to build large and complex knowledge models. Two main original conclusions are illustrated here: (1) about reuse: method assumptions may be generalized to increase method reusability (this may require specification and implementation changes, but sometimes it may be better than building the method from scratch); (2) about assembly: an explicit organization of modular ontologies provides a complementary structured view of the knowledge model and is useful to define the method ontologies of complex PSMs.

ACKNOWLEDGEMENTS

This work has been partially supported by grants LM05305, LM5708 and LM06245 from the National Library of Medicine, and award IRI-9528444 from the National Science Foundation. We thank Jose Cuena, Mark Musen, Samson Tu and John Gennari for useful discussions.

REFERENCES

[Breuker, Van de Velde, 94] Breuker J., Van de Velde W.: "COMMONKADS Library for Expertise Modelling, Reusable Problem Solving Components". IOS Press, (Amsterdam, Oxford, Washington DC), 1994.

[Chandrasekaran et al., 92] Chandrasekaran B., Johnson T.R, Smith J.W.: "Task Structure Analysis for Knowledge Modeling", Communications of the ACM, 35 (9), 124-137. 1992.

[Cuena, Molina, 94] Cuena J., Molina M.: "KSM: An Environment for Knowledge Oriented Design of Applications Using Structured Knowledge Architectures" in "Applications and Impacts. Information Processing'94", Volume 2. K. Brunnstein y E. Raubold (eds.) Elsevier Science B.V. (North-Holland), IFIP. 1994.

[Cuena, Molina, 96] Cuena J., Molina M.: "KSM: An Environment for Design of Structured Knowledge Models". To be published as a chapter of the book "Knowledge-based Systems: Advanced Concepts, Techniques and Applications", S. G. Tzafestas (ed.), World Scientific Publishing Company. 1996.

[Cuena et al., 92] Cuena J., Ambrosino G., Boero M.:"A General Knowledge-Based Architecture for Traffic Control: The KITS Approach". Proc. International Conference on Artificial Intelligence Applications in Transportation Engineering. San Buenaventura, California. June 1992.

[Cuena et al., 95] Cuena J., Hernández J., Molina M.: "Knowledge-based Models for Adaptive Traffic Management Systems". Transportation Research (Elsevier Science). Part-C. Vol 3. No 5. pp 311-337. 1995.

[Cuena et al., 96] Cuena J., Hernández J., Molina M.: "Knowledge Oriented Design of an Application for Real Time Management: The TRYS System". Proc. European Conference on Artificial Intelligence (ECAI'96). Budapest 1996.

[Deeter, Ritchie, 93] Deeter D.L., Ritchie S.G.:"A Prototype Real-Time Expert System for Surface Street Traffic Management and Control". ASCE, Third International Conference on Applications of Advanced Technologies in Transportation Engineering. Seattle, Washington, USA. 1993.

[Eriksson et al., 95] Eriksson, H., Shahar, Y., Tu, S.W., Puerta, A.R., and Musen, M.A.: "Task modeling with reusable problem-solving methods". Artificial Intelligence 79 (2):293-326. 1995.

[Gennari et al., 94] Gennari J.H., Tu S.W., Rothenfluh T.E., Musen M.: "Mapping Domains to Methods in Support of Reuse". International Journal of Human-Computer Studies. 41, 399-424. 1994.

[Gruber, 93] Gruber T.R.: "A Translation Approach to Portable Ontology Specifications". Knowledge Acquisition, 5. 1993.

[McDermott, 88] McDermott J.: "Preliminary Steps Toward a Taxonomy of Problem Solving Methods". In "Automating Knowledge Acquisition for Expert Systems", S. Marcus (ed.), Kluwer Academic, Boston. 1988.

[Macintyre, 93] Macintyre A.: "KREST User Manual 2.5". Vrije Universiteit Brussel, AI-lab. Brussels. 1993.

[Marcus, 88] Marcus S. (ed): "Automatic Knowledge Acquisition for Expert Systems". Kluwer, Boston. 1988.

[Newell, 82] Newell A.: "The Knowledge Level". Artificial Intelligence, 18:87-127. 1982.

[Puerta et al., 93] Puerta A.R., Tu S.W., Musen M.A.: "Modeling Tasks with Mechanisms". International Journal of Intelligent Systems, Vol. 8, 1993.

[Schreiber et al., 93] Schreiber A. Th., Wielinga B.J., Breuker J.A. (eds): "KADS: A Principled Approach to Knowledge Based System Development". Volume 11 of "Knowledge-Based Systems Book Series", Academic Press, London. 1993.

[Shahar, 94] Shahar Y.:"A Knowledge-based Method for Temporal Abstraction of Clinical Data". PhD Dissertation. Program in Medical Information Sciences. Department of Computer Science. Report No. STAN-CS-TR-94-1529. Stanford University. October, 1994.

[Shahar, Molina, 96] Shahar, Y. and Molina, M.: "Knowledge-based spatiotemporal abstraction" Notes of the AAAI-96 Workshop on Spatial and Temporal reasoning, Portland, Oregon, 1996.

[Shahar, Musen, 93] Shahar, Y. and Musen, M.A.: "RÉSUMÉ: A temporal-abstraction system for patient monitoring", Computers and Biomedical Research 26(3):255-273, 1993. Reprinted in: J.H. van Bemmel and T. McRay, eds., 1994, Yearbook of Medical Informatics 1994, 443-461. Stuttgart: F.K. Schattauer and The International Medical Informatics Association.

[Shahar, Musen, 96] Shahar, Y. and Musen, M.A.: "Knowledge-based temporal abstraction in clinical domains". Artificial Intelligence in Medicine 8 (3) 267-298. 1996.

[Shahar et al., 92] Shahar, Y., Tu, S.W., and Musen, M.A: "Knowledge acquisition for temporal-abstraction mechanisms". Knowledge Acquisition 4(2):217-236, 1992.

[Shahar, in press] Shahar, Y. : "A framework for knowledge-based temporal abstraction". Artificial Intelligence. In press. 1997.

[Shoham, 87] Shoham Y.: "Temporal logics in AI: Semantical and ontological considerations". Artificial Intelligence 33(1):89-104. 1987.

[Steels, 90] Steels L.: "Components of Expertise". AI Magazine, 11(2):29-49. 1990.

[Steels, 92] Steels L.: "Reusability and Knowledge Sharing". In "Enhancing the Knowledge Engineering Process: Contributions from ESPRIT". Steels L. and Lepape B. (eds). Amsterdam: Elsevier. 1992.

[Stein et al., 96] Stein A., Shahar Y., Musen M.A.: "A knowledge-acquisition tool for temporal abstraction". Section on Medical Informatics Report No SMI-96-614. Stanford University, CA. 1996.