Compositional Modelling of Reflective Agents

Frances Brazier and Jan Treur

Vrije Universiteit Amsterdam
Department of Mathematics and Computer Science
De Boelelaan 1081a, 1081 HV Amsterdam, The Netherlands.
Email: {frances,treur}@cs.vu.nl

Abstract

A formal approach to the design of (meta-level) compositional architectures for multi-agent systems is presented. A structure for reflective agents is proposed within which reasoning about observation and communication, an agent's own information state and reasoning processes, other agents' information states and reasoning processes, and combinations of these types of reflective reasoning are explicitly modelled. To illustrate the approach, the wise men's puzzle has been modelled using different types of reflection.

1 INTRODUCTION

Autonomous agents often exhibit a rich variety of reflective behaviour: they reason not only about various aspects of their own behaviour, but also about other agents' behaviour and about the interaction between agents. More specifically, an agent reasons about its own information state and reasoning processes, about other agents' information states and reasoning processes, about its interaction with other agents (communication), about the external world, and about its interaction with the external world (observation and action).

The importance of the various reflective capabilities of agents has often been stressed in the literature from the cognitive psychology perspective (e.g., Castelfranchi, 1995; Skemp, 1971, 1979). A modelling framework for multi-agent systems should include constructs to express these types of reflective reasoning, in particular constructs to model the nesting of different types of reflection. For example, nontrivial nesting is required to model an agent B's reasoning about a decision to communicate to an agent A about its (i.e., agent B's) lack of information about whether A is able to draw a conclusion about the world based on its (i.e., agent A's) observations of the world. To cope with such combinations of reflection, a modelling framework is required in which an arbitrary number of meta-levels can be specified.

In the literature on reflection, such as (Weyhrauch, 1980; Davis, 1980; Maes and Nardi, 1988; Attardi and Simi, 1994; Clancey and Bock, 1988), only a restricted number of the types of reflective reasoning distinguished above are modelled. Non-trivial combinations of different types of reflective reasoning, however, have not been studied extensively. In the literature on multi-agent systems, such as (Fisher and Wooldridge, 1993; Wooldridge and Jennings, 1995; Cimatti and Serafini, 1995; Wagner, 1996; Dieng, Corby and Labidi, 1994), the types of reflective reasoning agents are capable of performing are most often limited. For example, in the literature mentioned no explicit reflective reasoning about observation and communication is modelled.

Within DESIRE (framework for DEsign and Specification of Interacting REasoning components; cf. Langevelde, Philipsen and Treur, 1992; Brazier, Treur, Wijngaards and Willems, 1995), a compositional framework for the design and specification of compositional meta-level architectures, such constructs are provided. Complex multi-agent systems are modelled and specified as interacting (hierarchically structured) components. Strategic reasoning required, for example, to guide reasoning, observation, communication and execution of actions, is explicitly modelled and specified. Formal semantics of such compositional meta-level reasoning systems are defined on the basis of temporal logic (see Engelfriet and Treur, 1994 for an overview; Gavrila and Treur, 1994 and Brazier, Treur, Wijngaards and Willems, 1996 for semantics of compositional reasoning systems; Hoek, Meyer and Treur, 1994 for semantics of meta-level architectures for dynamic generation and rejection of assumptions; and Treur, 1994 for semantics of meta-level architectures for dynamic control of reasoning). More details on the formal semantics of the multi-agent case can be found in (Brazier, Eck and Treur, 1996). As implementation generators exist to automatically generate prototype implementations from formal specifications, system designers can focus on specification of the conceptual design of a system: on both the static and the dynamic aspects of the required functionality. A number of different types of multi-agent applications have been modelled and analysed using DESIRE (cf. Brazier, Dunin-Keplicz, Jennings and Treur, 1995, 1997; Brazier, Eck and Treur, 1996; Brazier and Treur, 1994; Dunin-Keplicz and Treur, 1995).

In this paper an agent model is introduced that models nontrivial combinations of reflective reasoning. This model has been used to model distributed air traffic control. The paper has the following structure. In Section 2 the formal compositional framework DESIRE is introduced. A generic structure for agents is presented in Section 3. This structure is refined in Section 5 for a reflective agent capable of performing the types of reasoning listed above, illustrated for the specification of the wise men's puzzle (introduced in Section 4). A discussion of the formal framework and reflective agents, together with a brief description of current research, is presented in Section 6.

2 A SPECIFICATION FRAMEWORK

Task models define the structure of compositional architectures: components in a compositional architecture are directly related to tasks in a task model. The hierarchical structures of tasks, interaction and knowledge are fully preserved within compositional architectures. In the formal compositional framework DESIRE for modelling multi-agent tasks, (1) task (de)composition, (2) information exchange, (3) sequencing of tasks, (4) task delegation, and (5) knowledge structures are explicitly modelled and specified. Each of these types of knowledge is discussed below.

2.1 Task composition

To model and specify tasks as a composition of more specific tasks, knowledge is required of: a task hierarchy, the information a task requires as input, the information a task produces as a result of task performance, and meta-object relations between (sub)tasks (which (sub)tasks reason about which other (sub)tasks).

Within a task hierarchy composed and primitive tasks are distinguished: in contrast to primitive tasks, composed tasks are tasks for which more specific tasks are identified. More specific tasks, in turn, can be either composed or primitive. Tasks are directly related to components: composed tasks are specified as composed components, and primitive tasks as primitive components, respectively.
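
To make the notion of a task hierarchy more concrete, the following Python sketch is added as an illustration (it is not DESIRE syntax): a task is either primitive or composed of more specific subtasks, and the fragment shown is the decomposition of determine_hat_colour described later in Section 5.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Task:
    # A task is primitive if no more specific subtasks are identified for it.
    name: str
    subtasks: List["Task"] = field(default_factory=list)

    @property
    def is_primitive(self) -> bool:
        return not self.subtasks

# Fragment of the hierarchy used in Section 5 (shown here only as an example).
determine_hat_colour = Task("determine_hat_colour", [
    Task("determine_method_of_information_acquisition"),
    Task("evaluate_process_state"),
    Task("determine_assumptions", [
        Task("generate_assumptions"),
        Task("validate_assumptions"),
    ]),
])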

Information required/produced by a task is defined as input and output signatures of a component. The signatures used to name the information are defined in a predicate logic with a hierarchically ordered sort structure (order-sorted predicate logic). Units of information are represented by the (ground; i.e., instantiated) atoms defined in the signature.

The different roles information can play within reasoning can be distinguished by indicating different (meta)levels, specified by the level of an atom within a signature. In a two level situation the lowest level is termed object-level information, and the second level meta-level information. Meta-level information contains information about object-level information and reasoning processes; for example, for which atoms the values are still unknown (epistemic information), or for which the values are a goal for the reasoning process (target information). Accordingly tasks that include reasoning about other tasks are modelled as meta-level tasks with respect to object-level tasks. Often more than two levels of information and reasoning are involved, resulting in meta-meta-level information and reasoning, et cetera.
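
As an added illustration (plain Python, not DESIRE notation), the following sketch shows the two-level situation: the object level is a partial assignment of truth values to atoms, while epistemic and target information are meta-level statements about that assignment; the atom names are only examples.

# Object-level information: a partial truth assignment (None means unknown).
object_level = {"hat_colour(A, black)": True, "hat_colour(I, white)": None}
targets = {"hat_colour(I, white)"}     # hypothetical target information

def epistemic_info(state):
    # Meta-level statements about the object level: which atoms are unknown.
    return {f"unknown({atom})" for atom, value in state.items() if value is None}

print(epistemic_info(object_level))        # {'unknown(hat_colour(I, white))'}
print({f"target({t})" for t in targets})   # {'target(hat_colour(I, white))'}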

2.2 Information exchange between tasks

Information exchange between tasks is specified as information links between components. Each information link relates output of one component to input of another, by specifying which truth value of a specific output atom is linked with which truth value of a specific input atom. Atoms can be renamed: each component can be specified in its own language, independent of other components. For example, if A is an agent, within A the name "I" can be used to refer to the agent itself, whereas another agent uses the name A. In a communication between the agents renaming takes place. The conditions for activation of information links are explicitly specified as task control information: knowledge of sequencing of tasks.
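
The following Python sketch (added for illustration; the actual DESIRE link specification syntax is not reproduced here) indicates how an information link can be understood: it maps truth values of specific output atoms of one component to truth values of renamed input atoms of another, here with the renaming of "I" to "A" for a communicated conclusion.

def apply_link(source_output, atom_renaming, value_mapping):
    # Transfer only the atoms and truth values covered by the link.
    target_input = {}
    for atom, value in source_output.items():
        if atom in atom_renaming and value in value_mapping:
            target_input[atom_renaming[atom]] = value_mapping[value]
    return target_input

# Hypothetical renaming: inside agent A the agent calls itself "I"; the
# receiving agent B refers to it as "A".
renaming = {"concludes(I, hat_colour(I, white))":
            "communicated(A, concludes(A, hat_colour(A, white)))"}
identity = {True: True, False: False}

output_of_A = {"concludes(I, hat_colour(I, white))": True}
print(apply_link(output_of_A, renaming, identity))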

2.3 Sequencing of tasks

Task sequencing is explicitly modelled within components as task control knowledge. Task control knowledge includes not only knowledge of which sub-tasks should be activated when and how, but also knowledge of the goals associated with task activation and the amount of effort which can be afforded to achieve a goal to a given extent. These aspects are specified as component and link activation together with sets of targets and requests, exhaustiveness and effort to define the component's goals. Components are, in principle, black boxes to the task control of an encompassing component: task control is based purely on information about the success and/or failure of component activation. Activation of a component is considered to have been successful, for example, with respect to one of its target sets if it has reached the goals specified by this target set (and specifications of the number of goals to be reached (e.g., any or every) and the effort to be afforded).
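
As a rough illustration of how success of a component activation can be evaluated against a target set (with "any" or "every" as the number of goals to be reached, as mentioned above), consider the following Python sketch; it is not DESIRE task control syntax, and the atom names are taken from the example in Section 5.

def activation_successful(derived_atoms, target_set, extent="any"):
    # "any": at least one target reached; "every": all targets reached.
    reached = [t for t in target_set if t in derived_atoms]
    return len(reached) == len(target_set) if extent == "every" else bool(reached)

derived = {"performed(obs)", "colour_known"}
print(activation_successful(derived, {"colour_known"}, extent="any"))     # True
print(activation_successful(derived, {"colour_known",
                                       "performed(agent_interaction)"},
                            extent="every"))                              # False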

2.4 Delegation of tasks

During knowledge acquisition a task as a whole is modelled. In the course of the modelling process decisions are made as to which tasks are (to be) performed by which agent. This process, which may also be performed at run-time, results in the delegation of tasks to the parties involved in task execution. In addition to these specific tasks, often generic agent tasks, such as interaction with the world (observation) and other agents (communication and cooperation) are assigned.

2.5 Knowledge structures

During knowledge acquisition an appropriate structure for domain knowledge must be devised. The meaning of the concepts used to describe a domain, and the relations between concepts and groups of concepts, are determined. Concepts are required to identify objects distinguished in a domain (domain-oriented ontology), but also to express the methods and strategies employed to perform a task (task-oriented ontology). Concepts and relations between concepts are defined in hierarchies and rules (based on order-sorted predicate logic). In a specification document references to appropriate knowledge structures (specified elsewhere) suffice; compositional knowledge structures are composed by reference to other knowledge structures.

3 GENERIC STRUCTURE OF AN AGENT

To design a generic structure for autonomous agents capable of reflective reasoning, the types of reasoning such agents can be expected to perform must be distinguished.

Autonomous intelligent agents are capable of reasoning about their own processes of reasoning and execution of actions. Agents can reason about their own characteristics, capabilities and goals, about their success or failure in achieving these goals, about assumptions which need to be or have been made and when, about information which has been sought and not yet found, about information which has not yet been explored, about strategic preferences, about control, et cetera.

Agents are also capable of reasoning about other agents' reasoning processes and action executions. Agents can reason about the information available to other agents, about their (reasoning) capabilities, their goals and success (or lack thereof), their strategic preferences, their assumptions, et cetera.

To interact with other agents, agents must be capable of reasoning about interaction between agents. Agents not only reason about which information can be obtained from which other agents, but also about how and when this information can be acquired.

Figure 1 Generic model of an agent

Reasoning about the external world is another type of reasoning agents are assumed to be capable of performing. An agent can reason about a specific situation, extending its own knowledge, confirming or rejecting assumptions made in its previous reasoning, et cetera.

As autonomous agents are capable of interacting with the external world, agents must also be capable of reasoning about interaction with the external world: about, for example, the types of information that can be observed in the external material world, when and how.

Last, but not least, agent specific tasks require reasoning, but also often include reasoning about the tasks an agent is to perform: about the way in which a task is to be approached, about assumptions which can be made, et cetera.

The six types of reasoning distinguished above correspond to the six generic tasks depicted below in Figure 1. These tasks are generic in the sense that all autonomous agents are assumed to be capable of performing them. These generic tasks are most often composed tasks: tasks composed of more specific tasks. The number of levels of reasoning involved depends on the complexity of the subtasks.
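
For reference, the six generic components corresponding to these tasks, as they recur in Section 5, can be listed as follows. This is an illustrative Python listing only; the correspondence of each component name to a box in Figure 1 is an inference from the component names used later in the paper.

generic_agent_components = {
    "own_process_control": "reasoning about the agent's own processes",
    "maintain_agent_information": "reasoning about other agents",
    "manage_agent_interaction": "reasoning about interaction with other agents",
    "maintain_world_information": "reasoning about the external world",
    "manage_world_interaction": "reasoning about interaction with the external world",
    "agent_specific_task": "e.g. determine_hat_colour (Section 5.1)",
}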

4 LEVELS OF REFLECTION WITHIN AN AGENT: AN EXAMPLE

To illustrate the different levels involved in a relatively simple example of reflective reasoning (and the relations between the levels), a simple version of the wise men's puzzle is used. This puzzle requires two wise men (A and B) and two hats. Each wise man has a hat on his head, the colour of which is unknown to him. Both wise men know that: each of the two hats is either white or black, at least one of the two hats is white, and each wise man can see the other's hat but not his own.

Assume for example that both men have a white hat and that A is asked whether he knows the colour of his hat. A, seeing only B's white hat, must answer that he is incapable of drawing a conclusion about the colour of his own hat. On the basis of this answer B can then reason that his own hat is white: had B's hat been black, A would have observed a black hat and would have concluded that his own hat was white.

Agent B, in fact, not only reasons about his own state but also about A's reasoning processes. B reasons about observations A could have made and the conclusions A would have drawn on the basis of these observations.
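
The following Python sketch (an added illustration, not part of the paper's specifications) makes this line of reasoning explicit: B compares what A would have been able to conclude under each possible colour of B's own hat with what A actually communicated.

def a_would_conclude(b_hat_colour):
    # A sees B's hat; A can only conclude his own colour if B's hat is black
    # (since at least one of the two hats is white).
    return "white" if b_hat_colour == "black" else None

def b_infers_own_colour(a_statement):
    # Keep only the colours of B's own hat that are consistent with A's answer.
    consistent = [colour for colour in ("black", "white")
                  if (a_would_conclude(colour) is not None) ==
                     (a_statement == "concludes_white")]
    return consistent[0] if len(consistent) == 1 else None

# A announces he cannot reach a conclusion, so B's hat cannot be black.
print(b_infers_own_colour("cannot_reach_a_conclusion"))   # white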

The generic structure of an agent proposed in Section 3 will be used to model an agent that is able to perform the (reflective) reasoning needed to solve this puzzle. Agent B will be used to illustrate the concepts and specifications involved. Reflective elements in B's reasoning include reasoning about his own observations and epistemic state, about the observations A could have made, about the conclusions A would have drawn on the basis of these observations, and about the information communicated by A.

Note that for convenience quotes to denote an object-meta naming relation have been omitted.

5 A SPECIFIC MODEL OF A REFLECTIVE AGENT

In this section the generic agent model described in Section 3 is refined. For three of the generic agent components more specific decompositions are introduced (see Figure 2). The most illustrative generic component of an autonomous agent in this example is the agent specific task of determining the colour of his own hat. The refinement of the model of the agent specific task (determine_hat_colour) is briefly described below in Section 5.1. In Section 5.2 the refinement of the generic component maintain_agent_information is described; in this component the subcomponent interpret_agent_info is used to reason about the reasoning of the other agent. In Section 5.3 the refinement of the generic component maintain_world_information is described; in this component the subcomponent interpret_world_info, an object-level component, is used to reason about the world.

Figure 2 Task hierarchy of a reflective agent

5.1 Refinement of the component determine_hat_colour

The component determine_hat_colour is described in terms of the types of knowledge distinguished in Section 2: task composition (Section 5.1.1), knowledge structures and information exchange (Section 5.1.2), and task control (Section 5.1.3).

5.1.1 Task composition

The task of determine_hat_colour can be divided into three subtasks: determine_method_of_information_acquisition, evaluate_process_state and determine_assumptions. The task determine_method_of_information_acquisition is responsible for the choice of one of three options: (1) observe the colour of agent A's hat, hoping this will provide the information required, (2) communicate with agent A on A's conclusions concerning the colour of agent A's own hat, and (3) make assumptions on the colour of his own hat and reason about the conclusions A should have drawn. The subtask evaluate_process_state determines the results of having tried one of the methods: whether the method provided the colour of agent B's hat. The subtask determine_assumptions determines which assumptions to make during reasoning. To this end determine_assumptions first generates a possible assumption (the task of determine_assumptions' subtask generate_assumptions). It then evaluates this assumption by reasoning about the consequences of the assumption (derived by the component interpret_agent_info described in Section 5.2): the task of determine_assumptions' second subtask, validate_assumptions.

5.1.2 Knowledge structures and information exchange

For each of the subtasks distinguished above, the knowledge structures within the component responsible for task execution are specified, together with the (meta-)level of each input and output atom.

The component evaluate_process_state receives three types of information: (1) information on the agent's own observations (from the component update_epistemic_info_based_on_obs, a subcomponent of own_process_control), (2) information on conclusions agent A has reached and communicated (from the component manage_agent_interaction), and (3) information on the best assumption (if the component generate_assumptions has been able to make a best assumption). On the basis of this information the component evaluate_process_state determines the state of the problem solving process (e.g., whether observations have been made, whether a definite conclusion on the colour of the hat can be drawn). The knowledge with which the state of the process is determined includes both knowledge on which positive conclusions can be based, and knowledge on which negative conclusions can be based. Positive conclusions on the state of the process can be drawn, given that information has been acquired from observation, communication and/or assumption determination. Negative conclusions are based on the lack of positive conclusions; they are drawn by a closed world assumption on the output atoms, explicitly specified at a higher (third) meta-level in the component cwa_evaluate_process_state.

The information on the state of the reasoning process is transferred to the component determine_method_of_information_acquisition. The specifications for the knowledge structures of the components evaluate_process_state and cwa_evaluate_process_state are shown below.

  component evaluate_process_state
  input atoms:
  known_to_me_based_on_own_obs(hat_colour(A, C:Colour))  (** meta-level 1 **)
  communicated(A, concludes(A, hat_colour(A, C:Colour)))  (** meta-level 2 **)
  communicated(A, cannot_reach_a_conclusion(A)) (** meta-level 2 **)
  best_assumption(observed(A, hat_colour(I, C:Colour))) (** meta-level 2 **)
  output atoms:
  performed(obs)  (** meta-level 2 **)
  performed(agent_interaction) (** meta-level 2 **)
  performed(assumption_determination) (** meta-level 2 **)
  colour_known (** meta-level 2 **)
  knowledge base:
  if known_to_me_based_on_own_obs(hat_colour(A, C:Colour))
  	then performed(obs)
  if communicated(A, X:Comms)
  	then performed(agent_interaction)
  if best_assumption(observed(A, hat_colour(I, C:Colour)))
  	then performed(assumption_determination)
  if best_assumption(observed(A, hat_colour(I, C:Colour)))
  	then known_to_me_based_on_comm(hat_colour(I, C:Colour))
  if known_to_me_based_on_own_obs(hat_colour(I, C:Colour)) 
  	then colour_known
  if known_to_me_based_on_comm(hat_colour(I, C:Colour)) 
  	then colour_known

  component cwa_evaluate_process_state
  input atoms:
  true(X:OA) (** meta-level 3 **)
  output atoms:
  to_assume(X:OA, false) (** meta-level 3 **)
  knowledge base:
  if not true(X:OA)
  	then to_assume(X:OA, false)
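
As an illustration of the closed world assumption made by cwa_evaluate_process_state, the following Python sketch (not DESIRE syntax) derives to_assume(X, false) for every output atom of evaluate_process_state that has not been derived true.

output_atoms = ["performed(obs)", "performed(agent_interaction)",
                "performed(assumption_determination)", "colour_known"]

def closed_world_assumption(derived_true):
    # Atoms not derived true are assumed false.
    return {f"to_assume({atom}, false)" for atom in output_atoms
            if atom not in derived_true}

# Hypothetical situation: only an observation has been performed so far.
print(closed_world_assumption({"performed(obs)"}))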

Based on the status information provided by evaluate_process_state the component determine_method_of_information_acquisition determines which method to follow: observation, communication or assumption determination. The conclusions of this component are transferred to the output interface of determine_hat_colour.

  component determine_method_of_information_acquisition
  input atoms:
  performed(obs)  (** meta-level 2 **)
  performed(agent_interaction) (** meta-level 2 **)
  performed(assumption_determination) (** meta-level 2 **)
  output atoms: 
  method_of_acquisition(obs)  (** meta-level 2 **)
  method_of_acquisition(agent_interaction)  (** meta-level 2 **)
  method_of_acquisition(assumption_determination)  (** meta-level 2 **)
  knowledge base:
  if not performed(obs)
  	then method_of_acquisition(obs)
  if performed(obs) and not performed(agent_interaction)
  	then method_of_acquisition(agent_interaction)
  if performed(obs) and performed(agent_interaction)
  	and not performed(assumption_determination)
  	then method_of_acquisition(assumption_determination)
The component generate_assumptions receives explicit information on the agent's lack of knowledge of A's observations (the truth value false for the input atom known_to_me(observed(A, hat_colour(I, C:Colour))) is received from the component update_epistemic_information_on_other_agents). In addition, generate_assumptions receives information on A's conclusions on its own hat colour (received from the component manage_agent_interaction), and information that the assumed observations of A of agent B's own hat colour are contradictory. Based on this input information the component generate_assumptions generates both possible assumptions (which are transferred to the components validate_assumptions and interpret_agent_info (see Section 5.2)) and best assumptions (which are transferred to the component evaluate_process_state).
  component generate_assumptions
  input atoms:
  communicated(A, concludes(A, hat_colour(A, C: Colour))) (** meta-level 2 **)
  known_to_me(observed(A, hat_colour(I, C:Colour))) (** meta-level 2 **)
  contradictory(observed(A, hat_colour(I, C:Colour)))  (** meta-level 2 **)
  output atoms:
  possible_assumption(observed(A, hat_colour(I, C:Colour))) (** meta-level 2 **)
  best_assumption(observed(A, hat_colour(I, C:Colour))) (** meta-level 2 **)
  knowledge base:
  if communicated(A, concludes(A, hat_colour(A, white)))
  	and not known_to_me(observed(A, hat_colour(I, white)))
  	then possible_assumption(observed(A, hat_colour(I, white)))
  if communicated(A, cannot_reach_a_conclusion(A))
  	and not known_to_me(observed(A, hat_colour(I, black)))
  	then possible_assumption(observed(A, hat_colour(I, black)))
  if contradictory(observed(A, hat_colour(I, black)))
  	then best_assumption(observed(A, hat_colour(I, white)))
  if contradictory(observed(A, hat_colour(I, white)))
  	then best_assumption(observed(A, hat_colour(I, black)))
The component validate_assumptions receives information on A's conclusions with respect to A's own hat colour from the component manage_agent_interaction. In addition, validate_assumptions receives information about a possible assumption (generated by generate_assumptions), together with the conclusions which A would be expected to draw given this assumption (received from the component interpret_agent_info, described in Section 5.2). The component validate_assumptions determines whether the expected conclusions of A contradict the conclusions A has actually drawn (and communicated) on the colour of A's own hat. This information on the existence of a contradiction is transferred to the component generate_assumptions.
  component validate_assumptions
  input atoms:
  communicated(A, cannot_reach_a_conclusion(A)) (** meta-level 2 **)
  communicated(A, concludes(A, hat_colour(A, C:Colour))) (** meta-level 2 **)
  expected(concludes(A, hat_colour(A, C:Colour))) (** meta-level 2 **)
  expected(cannot_reach_a_conclusion(A)) (** meta-level 2 **)
  possible_assumption(observed(A, hat_colour(I, C:Colour))) (** meta-level 2 **)
  output atom:
  contradictory(observed(A, hat_colour(I, C:Colour))) (** meta-level 2 **)
  knowledge base:
  if communicated(A, cannot_reach_a_conclusion(A))
  	and expected(concludes(A, hat_colour(A, white)))
  	and possible_assumption(observed(A, hat_colour(I, black)))
  	then contradictory(observed(A, hat_colour(I, black)))
  if communicated(A, concludes(A, hat_colour(A, white)))
  	and expected(cannot_reach_a_conclusion(A))
  	and possible_assumption(observed(A, hat_colour(I, white)))
  	then contradictory(observed(A, hat_colour(I, white)))
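
Taken together, generate_assumptions, interpret_agent_info (Section 5.2) and validate_assumptions form an assumption cycle: a possible assumption about A's observation is generated, the conclusions A would draw from it are derived, and a contradiction with A's actual communication makes the opposite assumption the best assumption. The following Python sketch (added for illustration; not DESIRE syntax) traces this cycle for the example.

def expected_statement_of_A(assumed_observation):
    # What A would communicate if he had observed this colour on B's head
    # (at least one of the two hats is white).
    return ("concludes_white" if assumed_observation == "black"
            else "cannot_reach_a_conclusion")

def best_assumption(a_actual_statement):
    for assumed in ("black", "white"):                  # possible assumptions
        if expected_statement_of_A(assumed) != a_actual_statement:
            # Contradiction: the opposite colour becomes the best assumption.
            return "white" if assumed == "black" else "black"
    return None

# A communicated that he cannot reach a conclusion, so assuming that A
# observed a black hat on B's head is contradictory; best assumption: white.
print(best_assumption("cannot_reach_a_conclusion"))     # white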

5.1.3 Task control of determine_hat_colour

Activation of determine_hat_colour, in combination with activation of the links which can provide the information required by determine_hat_colour, is specified by agent B's task control. Task control of determine_hat_colour determines which internal components and links to activate. Activation of evaluate_process_state is combined with activation of its incoming links. If the final evaluation criterion, which denotes that the colour of the hat has been determined, is reached, then the task of determine_hat_colour is fulfilled. If, however, the evaluation criterion that specifies that one or more conclusions concerning previous performance have been reached succeeds, task control specifies that the component determine_method_of_information_acquisition is to be activated, together with the related links. Based on the success or failure of the evaluation criteria, task control determines which component and links to activate next. If, for example, the evaluation criterion observations_required is successful, then determine_hat_colour sends a request to manage_world_interaction to make observations in the external world. If, for example, the evaluation criterion assumptions_required is successful, then another subcomponent of determine_hat_colour, namely determine_assumptions, is activated. The component determine_assumptions, in turn, activates one of its subcomponents, based on its own task control knowledge.
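
A rough Python sketch of this control cycle is given below (added for illustration; the criterion names observations_required and assumptions_required are taken from the text above, the rest of the encoding is hypothetical).

def next_activation(evaluation):
    # Hypothetical encoding of the evaluation criteria of determine_hat_colour.
    if evaluation["colour_known"]:
        return "task fulfilled"
    if not evaluation["performed(obs)"]:
        return "observations_required: request manage_world_interaction"
    if not evaluation["performed(agent_interaction)"]:
        return "request manage_agent_interaction (communicate with A)"
    return "assumptions_required: activate determine_assumptions"

state = {"colour_known": False, "performed(obs)": True,
         "performed(agent_interaction)": True,
         "performed(assumption_determination)": False}
print(next_activation(state))   # assumptions_required: activate determine_assumptions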

5.2 Refinement of the generic component maintain_agent_information

The component maintain_agent_information has two subcomponents: update_current_agent_information, which stores information on other agents, and interpret_agent_info, which interprets the available agent information. The first subcomponent only stores and updates information; it does not reason. To interpret agent information the component interpret_agent_info has knowledge with which it can reason about the other agent. In the wise men example this knowledge specifies how the other agent can reason; it gives an explicit representation of A's deduction system and A's knowledge. For example, part of the knowledge on A is the explicit meta-statement that if a fact X is derivable by A and A has the knowledge that X implies Y, then Y is derivable by A (modus ponens). Agent B uses this knowledge of A to reason about A's reasoning, as shown in the knowledge base of B's component interpret_agent_info specified below. In this knowledge base the meta-statement rule(A, X, Y) denotes that A has the knowledge that X implies Y. The notation [X,Y] is interpreted as the conjunction of the statements X and Y, and derivable(A, X) denotes that A is able to derive statement X. The (meta-)fact observed(A, X) states that fact X is observed in the material world by A. Note that the I in this knowledge base refers to A, because the knowledge base represents A's own knowledge.

  component   interpret_agent_info
  input atoms:
  observed(A, hat_colour(B,C:Colour))  (** meta-level 2 **)
  output atoms: 
  derivable(A, X)  (** meta-level 2 **)
  knowledge base:
  rule(A, hat_colour(B,black), hat_colour(I,white))
  if	observed(A, X)   			
  	then  derivable(A, X)
  if	derivable(A, X)  
  	and  rule(A, X,Y)    
  	then  derivable(A, Y)
  if	derivable(A, X)  
  	and  derivable(A, Y)    
  	then  derivable(A, [X,Y])
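
For illustration, the following Python sketch (not DESIRE syntax) mimics the meta-level simulation of A's deduction performed by interpret_agent_info: forward chaining with A's rules (modus ponens) over A's assumed observations; the conjunction rule of the knowledge base above is omitted here for brevity.

rules_of_A = [("hat_colour(B, black)", "hat_colour(I, white)")]  # rule(A, X, Y)

def derivable_by_A(observations):
    derived = set(observations)      # if observed(A, X) then derivable(A, X)
    changed = True
    while changed:
        changed = False
        for x, y in rules_of_A:
            if x in derived and y not in derived:   # modus ponens
                derived.add(y)
                changed = True
    return derived

# Hypothetical assumed observation: A observed a black hat on B's head.
print(derivable_by_A({"hat_colour(B, black)"}))
# {'hat_colour(B, black)', 'hat_colour(I, white)'}  -- I here refers to A
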
5.3 Refinement of the component maintain_world_information

The component maintain_world_information has two subcomponents: update_current_world_information, which stores information on the world, and interpret_world_info, which interprets the available world information. Similar to the decomposition described in the previous section, the first subcomponent only stores and updates information and does not reason. To interpret world information the component interpret_world_info has knowledge with which it can reason about the world. This knowledge is used by B to draw conclusions from information he has obtained from observation of A's hat colour: if B observes a black hat, then his own hat is white.

  component  interpret_world_info
  input atoms:
  hat_colour(A, C:Colour) (** object level **)
  output atoms:
  hat_colour(I, C:Colour) (** object level **)
  knowledge base:
  if hat_colour(A,black) 
       then hat_colour(I,white)
Note that both A and B can observe part of the material world, but that they observe different parts. This difference is expressed in the specifications by the different information links defined between the material world and the agents. The difference is also mirrored in the input signatures of the two agents.

6 DISCUSSION

A formal approach to the design of (meta-level) compositional architectures for multi-agent systems has been presented. A structure for reflective agents has been proposed within which reasoning about (1) observation and agent interaction, (2) an agent's own information state and reasoning processes, (3) other agents' information states and reasoning processes, and combinations of these types of reflective reasoning are explicitly modelled. To illustrate the transparency of the structure, formal specifications of the wise men's puzzle have been presented within which components at different meta-levels are distinguished. Based on the specification an implementation has been generated using the implementation generator within the DESIRE software environment.

A major difference from other approaches is that in the model presented in this paper the dynamics of the combined pattern of reasoning, observation and communication is specified: the specification expresses the strategy used to approach the problem in an explicit manner. Specification of the problem abstracting from the dynamics would also have been possible. However, in that case, to develop an implementation, either the strategic knowledge to guide the problem solving has to be added at the implementation level, or a theorem prover or other program has to search for the solution through a rather large search space. In the former case no implementation-independent description of the dynamics of the system is available. In the latter case the search can be rather inefficient, and, moreover, the system behaviour differs markedly from the manner in which human agents most often approach the problem: using strategic knowledge to guide the search.

The modelling approach adopted in this paper distinguishes components reasoning at different levels in all cases where semantically distinct meta-levels can be found. An advantage of this approach is that the model shows a rich structure with different constructs for entities that are semantically different. It may be seen as a disadvantage that if the problem is extended with additional meta-levels, the model has to be extended as well. This may be considered the price that has to be paid for the richer structure. An alternative approach is to encode all meta-levels in the highest meta-level. The price paid in this case is that the finer semantic distinctions between the different meta-levels found in practice are not reflected explicitly.

ACKNOWLEDGEMENTS

This research has been partly supported by the ESPRIT III Basic Research project 6156 DRUMS II.

REFERENCES

Attardi, G., Simi, M. (1994), Proofs in Context. In: L. Fribourg and F. Turini (Eds.), Logic Program Synthesis and Transformation - Meta-Programming in Logic, Proceedings of the Fourth International Workshop on Meta-Programming in Logic, META'94. Springer Verlag, Lecture Notes in Computer Science, vol. 883.

Brazier, F.M.T., Dunin-Keplicz, B.M., Jennings, N.R. and Treur, J. (1995), Formal Specification of Multi-Agent Systems: a Real World Case. In: V. Lesser (Ed.), Proc. First Int. Conference on Multi-Agent Systems, ICMAS-95, MIT Press, pp. 25-32.

Brazier, F.M.T., Dunin-Keplicz, B.M., Jennings, N.R. and Treur, J. (1997), DESIRE: modelling multi-agent systems in a compositional formal framework. In: M. Huhns, M. Singh (Eds.), International Journal of Cooperative Information Systems, special issue on Formal Methods in Cooperative Information Systems: Multi-Agent Systems.

Brazier, F.M.T., van Eck, P.A.T., Treur, J. (1996), Modelling Cooperative Behaviour for Resource Access in a Compositional Multi-Agent Environment. In: J.L. Fiadeiro, P.Y. Schobbens (Eds.), Proc. 2nd ModelAge Workshop, University of Lisboa, Department of Computer Science, pp 27-40.

Brazier, F.M.T., van Eck, P.A.T., Treur, J. (1996), Design of a Compositional Modelling Framework for Multi-Agent Systems. In: R. Albrecht, H. Herre (Eds.), Proc. Int. Workshop on New Trends in Theoretical Computer Science, Igls, Oldenburg Verlag, Munich-Wien.

Brazier, F.M.T., Treur, J., Wijngaards, N.J.E. and Willems, M. (1995), Formal specification of hierarchically (de)composed tasks. In: B.R. Gaines, M.A. Musen (Eds.), Proc. of the 9th Banff Knowledge Acquisition for Knowledge-based Systems Workshop, KAW'95, Calgary: SRDG Publications, Department of Computer Science, University of Calgary, 1995, vol. 2, pp. 25/1-25/20.

Brazier, F.M.T., Treur, J., Wijngaards, N.J.E. and Willems, M. (1996), Temporal semantics of complex reasoning tasks. In: B.R. Gaines and M.A. Musen (Eds.) Proceedings of the 10th Banff Knowledge Acquisition for Knowledge-Based Systems Workshop, KAW '96, Calgary: SRDG Publications, Department of Computer Science, University of Calgary.

Brazier, F.M.T. and Treur, J. (1994). User centered knowledge-based system design: a formal modelling approach. In: L. Steels, G. Schreiber and W. Van de Velde (Eds.), A future for knowledge acquisition, Proceedings of the 8th European Knowledge Acquisition Workshop, EKAW'94. Springer-Verlag, Lecture Notes in Artificial Intelligence 867, pp. 283-300.

Castelfranchi, C. (1995), Self-awareness: notes for a computational theory of intrapsychic social interaction. In: G. Trautteur (Ed.), Consciousness: Distinction and Reflection, Bibliopolis, pp. 55-80.

Cimatti, A., Serafini, L. (1995). Multi-agent Reasoning with Belief Contexts II: Elaboration Tolerance, In: V. Lesser (ed.), Proceedings of the First International Conference on Multi-Agent Systems, ICMAS-95, MIT Press, pp. 57-64

Clancey, W.J. and Bock, C., (1988) Representing control knowledge as abstract tasks and metarules, in: Bolc, Coombs (eds.), Expert System Applications.

Davis, R. (1980), Metarules: reasoning about control, Artificial Intelligence, Volume 15.

Dieng, R., Corby, O., and Labidi, S. (1994), Agent-based knowledge acquisition. In: L. Steels, G. Schreiber and W. Van de Velde (Eds.), A future for knowledge acquisition, Proceedings of the 8th European Knowledge Acquisition Workshop, EKAW '94. Springer-Verlag, Lecture Notes in Artificial Intelligence 867, pp. 63-82

Dunin-Keplicz, B.M. and Treur, J. (1994), Compositional formal specification of multi-agent systems. Proceedings ECAI'94 Workshop on Agent Theories, Architectures and Languages. Published in: M. Wooldridge, N. Jennings (Eds.), Intelligent Agents, Lecture Notes in AI, vol. 890, Springer Verlag, 1995, pp. 102-117.

Dunin-Keplicz, B.M. and Treur, J. (1995), Modelling Reasoning and Acting Agents. In: B.R. Gaines, M. Musen (Eds.), Proceedings of the 9th Banff Knowledge Acquisition for Knowledge-Based Systems Workshop, KAW'95, University of Calgary, pp. 22-1 - 22-20.

Engelfriet, J. and Treur, J. (1994), Temporal Theories of Reasoning. In: C. MacNish, D. Pearce, L.M. Pereira (Eds.), Logics in Artificial Intelligence, Proc. of the 4th European Workshop on Logics in Artificial Intelligence, JELIA'94. Springer Verlag, pp. 279-299.

Fisher, M. and Wooldridge, M. (1993), Specifying and Verifying Distributed Intelligent Systems. In: M. Filgueiras, L. Damas (Eds.), Progress in AI, Proc. EPIA'93. Lecture Notes in AI, Vol. 727, Springer Verlag, 1993, pp. 13-28.

Gavrila, I.S. and Treur, J. (1994). A formal model for the dynamics of compositional reasoning systems. In: A.G. Cohn (Ed.), Proc. 11th European Conference on Artificial Intelligence, ECAI'94, Wiley and Sons, pp. 307-311.

Giunchiglia, E., Traverso, P. and Giunchiglia, F. (1993), Multi-context Systems as a Specification Framework for Complex Reasoning Systems. In: J. Treur and T. Wetter (Eds.), Formal Specification of Complex Reasoning Systems, Ellis Horwood, pp. 45-72.

van der Hoek, W., Chr. Meyer, J.-J. and Treur, J. (1994). Formal Semantics of Temporal Epistemic Reflection. In: L. Fribourg and F. Turini (Ed.), Logic Program Synthesis and Transformation-Meta-Programming in Logic, Proceedings of the Fourth International Workshop on Meta-Programming in Logic, META'94. Springer Verlag, Lecture Notes in Computer Science, vol. 883, pp. 332-352.

van Langevelde, I.A., Philipsen, A.W. and Treur, J. (1992), Formal specification of compositional architectures, in B. Neumann (Ed.), Proceedings of the 10th European Conference on Artificial Intelligence, ECAI'92, John Wiley & Sons, Chichester, pp. 272-276.

Maes, P. and Nardi, D. (Eds.) (1988), Meta-level architectures and reflection, Elsevier Science Publishers.

Skemp, R.R., (1971), The psychology of learning mathematics, Bungay, Suffolk.

Skemp, R.R. (1979), Intelligence, learning and action, Chichester.

Treur, J. (1994), Temporal Semantics of Meta-Level Architectures for Dynamic Control of Reasoning. In: L. Fribourg and F. Turini (Eds.), Logic Program Synthesis and Transformation - Meta-Programming in Logic, Proceedings of the Fourth International Workshop on Meta-Programming in Logic, META'94. Springer Verlag, Lecture Notes in Computer Science, vol. 883, pp. 353-376.

Wagner, G. (1996) A Logical and Operational Model of Scalable Knowledge- and Perception-based Agents. In: W. van der Velde, J.W. Perram (Eds.), Agents breaking away, Proc. MAAMAW'96. Lecture Notes in AI, vol. 1038, Springer Verlag.

Weyhrauch, R.W. (1980), Prolegomena to a theory of mechanized formal reasoning, Artificial Intelligence, Volume 13, pp. 133-170.

Wooldridge, M. and Jennings, N.R. (Eds.), (1995) Intelligent Agents. Lecture Notes in AI, Vol. 890, Springer Verlag.