Beliefs, Intentions and DESIRE

Frances Brazier (a), Barbara Dunin-Keplicz (b), Jan Treur (a), Rineke Verbrugge (a)

(a) Vrije Universiteit Amsterdam
Department of Mathematics and Computer Science, Artificial Intelligence Group
De Boelelaan 1081a, 1081 HV Amsterdam, The Netherlands
Emails: {frances,treur,rineke}@cs.vu.nl

(b) Warsaw University
Institute of Informatics, ul. Banacha 2, 02-097 Warsaw, Poland
Email: keplicz@mimuw.edu.pl

ABSTRACT

A generic model for BDI agents, modelled and specified within DESIRE, a declarative compositional modelling framework for multi-agent systems, is described. This model, a refinement of a generic agent model, explicitly specifies motivational attitudes and the static and dynamic relations between them. Desires, goals, intentions, commitments, plans, and the relations between them are modelled explicitly.

1 INTRODUCTION

In the last five years multi-agent systems have been a major focus of research in AI. The concept of agents, in particular the role of agents as participants in multi-agent systems, has been subject to discussion. In (Wooldridge and Jennings, 1995) different notions of strong and weak agency are presented. In other contexts big and small agents have been distinguished (Velde and Perram, 1996). In this paper, a model for a rational agent is proposed: an agent described using cognitive notions such as beliefs, desires and intentions.

Beliefs, intentions, and commitments play a crucial role in determining how rational agents will act. Shoham defines an agent to be "an entity whose state is viewed as consisting of mental components such as beliefs, capabilities, choices, and commitments. (...) What makes any hardware or software component an agent is precisely the fact that one has chosen to analyze and control it in these mental terms" (Shoham, 1993). This definition provides a basis to study, model and specify mental attitudes; see (Rao and Georgeff, 1991; Cohen and Levesque, 1990; Shoham, 1991; Dunin-Keplicz and Verbrugge, 1996).

The goal of this paper is to define a generic BDI agent model in the multi-agent framework DESIRE. To this purpose, a generic agent model is presented and refined to incorporate beliefs, desires and intentions. The result is a more specific BDI agent model.

The main emphasis is on static and dynamic relations between mental attitudes which are of importance for cooperative agents. DESIRE (framework for DEsign and Specification of Interacting REasoning components) is a framework for modelling, specifying and implementing multi-agent systems, see (Brazier, Dunin-Keplicz, Jennings, and Treur, 1995, 1996; Dunin-Keplicz and Treur, 1995). Within the framework, complex processes are designed as compositional architectures consisting of interacting task-based hierarchically structured components. The interaction between components, and between components and the external world, is explicitly specified. Components can be primitive reasoning components using a knowledge base, but may also be subsystems which are capable of performing tasks using methods as diverse as decision theory, neural networks, and genetic algorithms.

As the framework inherently supports interaction between components, multi-agent systems are naturally specified in DESIRE by modelling agents as components. The specification is sufficient to generate an implementation. In contrast to general purpose formal specification languages such as Z and VDM, DESIRE is committed to well-structured compositional architectures. Such architectures can be specified in DESIRE at a higher level of conceptualisation than in Z or VDM. Moreover, DESIRE has a logical basis and is provided with a temporal semantics (Brazier, Treur, Wijngaards, and Willems, 1995).

The paper is structured in the following manner. In Section 2, a generic classification of mental attitudes is presented and a more precise characterization of a few selected motivational attitudes is given. Next, in Section 3, the specification framework DESIRE for multi-agent systems is characterized. In Section 4 a general agent model is described. The framework of modelling motivational attitudes in DESIRE is discussed in Section 5. Finally, Section 6 presents some conclusions and possible directions for further research.

2 CLASSIFICATION OF MENTAL ATTITUDES

Agents are assumed to have the four properties required for the weak notion of agency described in (Wooldridge and Jennings, 1995). Thus, agents must:

- operate autonomously, without direct intervention by humans or other agents;
- interact with other agents (social ability);
- perceive their environment and respond to changes in it (reactivity);
- exhibit goal-directed behaviour by taking the initiative (pro-activeness).

To cover all aspects of an agent's activities, four main categories of mental attitudes are studied in the AI literature: informational, motivational, social and emotional attitudes, see (Shoham and Cousins, 1994). In this paper the focus is on motivational attitudes, although other aspects are marginally considered. In (Shoham and Cousins, 1994), motivational attitudes are partitioned into the following categories: goal, want, desire, preference, wish, choice, intention, commitment, plan. A number of these categories of motivational attitudes, as well as the static and dynamic relations between them and agents' actions, are modelled in this paper, namely desires, goals, plans, intentions and commitments. Individual agents are assumed to have intentions and commitments both with respect to goals and with respect to plans. Joint motivational attitudes and joint actions are not discussed in this paper.

A generic classification of an agent's attitudes is defined as follows:

In the context of the process of plan determination, i.e., planning, the weakest motivational attitude seems to be desire, reflecting longing, wish and want. An agent may harbor desires which are impossible to achieve. Desires may be ordered according to preferences and, as modelled in this paper, they are the only motivational attitudes that are allowed to be mutually inconsistent. At some point an agent must settle on a limited number of intended goals, i.e., chosen desires, for which to aim. Here, only goals whose achievement can be established are considered (and not, for example, maintenance goals). Moreover, agents are assumed to try to ensure consistency of their intentions.

With respect to intentions, the conditions elaborated in (Bratman, 1987; Cohen and Levesque, 1990) are assumed. Moreover, agents can be classified according to their intention strategies. In this paper we restrict ourselves to open-minded agents, cf. (Rao and Georgeff, 1991).

On the basis of intentions, an agent commits to itself both to achieve goals and to execute plans. In addition, an agent may also make commitments to other agents. Such social commitments (Castelfranchi, 1995; Dunin-Keplicz and Verbrugge, 1996) are also explicitly modelled.

3 A SPECIFICATION FRAMEWORK FOR MULTI-AGENT SYSTEMS

The BDI-architectures upon which specifications for compositional multi-agent systems are based are the result of analysis of the tasks performed by individual agents and groups of agents. Task (de)compositions include specifications of interaction between subtasks at each level within a task (de)composition, making it possible to explicitly model tasks which entail interaction between agents. Task models define the structure of compositional architectures: components in a compositional architecture are directly related to (sub)tasks in a task (de)composition. The hierarchical structures of tasks, interaction and knowledge are fully preserved within compositional architectures. Task coordination is of importance both within and between agents. Below, the formal compositional framework DESIRE for modelling multi-agent tasks is introduced, in which the following aspects are modelled and specified:

- task (de)composition;
- information exchange between tasks;
- sequencing of tasks;
- delegation of tasks;
- knowledge structures.

3.1 Task (de)composition

To model and specify (de)composition of tasks, knowledge is required of:

- the task hierarchy;
- the information used and produced by each (sub)task;
- the meta-levels at which information and reasoning are positioned.

Within a task hierarchy composed and primitive tasks are distinguished: in contrast to primitive tasks, composed tasks are tasks for which subtasks are identified. Subtasks, in turn, can be either composed or primitive. Tasks are directly related to components: composed tasks are specified as composed components and primitive tasks as primitive components.

Information required/produced by a (sub)task is defined by input and output signatures of a component. The signatures used to name the information are defined in a predicate logic with a hierarchically ordered sort structure (order-sorted predicate logic). Units of information are represented by the ground atoms defined in the signature.

The role information plays within reasoning is indicated by the level of an atom within a signature: different (meta)levels may be distinguished. In a two-level situation the lowest level is termed object-level information, and the second level meta-level information. Meta-level information contains information about object-level information and reasoning processes; for example, for which atoms the values are still unknown (epistemic information). Similarly, tasks which include reasoning about other tasks are modelled as meta-level tasks with respect to object-level tasks. Often more than two levels of information and reasoning occur, resulting in meta-meta-... information and reasoning.
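
As an illustration, a signature might be sketched along the following lines (the concrete keywords and layout here are illustrative assumptions, not verbatim DESIRE syntax; only the use of sorts, objects and relations follows the description above):

  signature example_signature
    sorts      INFO_ELEMENT, AGENT
    objects    agent_B : AGENT
    relations  trustworthy : AGENT;
               communicated_fact_by : INFO_ELEMENT * AGENT
  end signature

Meta-level atoms such as belief(X) would then be defined in a separate meta-level signature, in which object-level atoms appear as terms of a sort such as INFO_ELEMENT.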

3.2 Information exchange between tasks

Information exchange between tasks is specified as information links between components. Each information link relates output of one component to input of another, by specifying which truth value of a specific output atom is linked with which truth value of a specific input atom. Atoms can be renamed: each component can be specified in its own language, independent of other components. The conditions for activation of information links are explicitly specified as task control information: a kind of knowledge of the sequencing of tasks.
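
For illustration, an information link might be sketched as follows (hypothetical syntax; only the ingredients named above — the source and destination components, the atoms, and the linked truth values — are taken from the text):

  link provide_agent_info
    from agent_management (output)  to own_process_control (input)
    (communicated_fact_by(X, A), true)  is linked to  (communicated_fact_by(X, A), true);
    (communicated_fact_by(X, A), false) is linked to  (communicated_fact_by(X, A), false)
  end link

Here the renaming facility mentioned above would allow the input atom to differ from the output atom when the two components use different languages.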

3.3 Sequencing of tasks

Task sequencing is explicitly modelled within components as task control knowledge. Task control knowledge includes not only knowledge of which subtasks should be activated when and how, but also knowledge of the goals associated with task activation and the amount of effort which can be afforded to achieve a goal to a given extent. These aspects are specified as (sub)component and link activation together with sets of targets and requests, exhaustiveness and effort to define the component's goals. Subcomponents are, in principle, black boxes to the task control of an encompassing component: task control is based purely on information about the success and/or failure of component reasoning. Reasoning of a component is considered to have been successful with respect to one of its target sets if it has reached the goals specified by this target set according to the specifications of the number of goals to be reached (e.g., any or every).
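
For example, task control knowledge within a composed component might be expressed in the style of the rule shown in Subsection 4.3 below (a sketch: the names subcomponent_A, subcomponent_B and transfer_A_to_B, and the predicates next_target_set and evaluation, are illustrative assumptions):

  if start
  then next_component_state(subcomponent_A, active)
  and next_target_set(subcomponent_A, ts_A, every)

  if evaluation(subcomponent_A, ts_A, succeeded)
  then next_component_state(subcomponent_B, active)
  and next_link_state(transfer_A_to_B, awake)

The keyword every expresses the exhaustiveness mentioned above: every goal in the target set ts_A is to be reached before subcomponent_A is considered to have succeeded.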

3.4 Delegation of tasks

During knowledge acquisition a task as a whole is modelled. In the course of the modelling process decisions are made as to which tasks are (to be) performed by which agent. This process, which may also be performed at run-time, results in the delegation of tasks to the parties involved in task execution. In addition to these specific tasks, often generic agent tasks, such as interaction with the world (observation) and other agents (communication and cooperation) are assigned.

3.5 Knowledge structures

During knowledge acquisition an appropriate structure for domain knowledge must be devised. The meaning of the concepts used to describe a domain, and the relations between concepts and groups of concepts, are determined. Concepts are required to identify objects distinguished in a domain (domain-oriented ontology), but also to express the methods and strategies employed to perform a task (task-oriented ontology). Concepts and relations between concepts are defined in hierarchies and rules based on order-sorted predicate logic. In a specification document references to appropriate knowledge structures (specified elsewhere) suffice; compositional knowledge structures are composed by reference to other knowledge structures.

4 GLOBAL STRUCTURE OF A GENERIC AGENT

To model an agent capable of reasoning about its own tasks, processes and plans, its knowledge of other agents, its communication with other agents, its knowledge of the world and its interaction with the world, a generic agent architecture has been devised in which such types of reasoning are transparently allocated to specific components of an agent.

This generic architecture can be applied to different types of agents. In this article this architecture will be refined to model a rational agent with motivational attitudes: other architectures are more applicable for other types of agents. The generic architecture is described in this section, while the refined BDI-architecture is the subject of Section 5.

Four of the five types of knowledge distinguished above in Section 3 are used to describe this generic architecture: task (de)composition, information exchange, sequencing of tasks and knowledge structures. Within an individual agent, task delegation is trivial.

4.1 Task (de)composition

As stated above, an agent needs to be capable of reasoning about its own processes, its own tasks, other agents and the world. In other words, an agent needs to be capable of four tasks:

- control of its own processes (own_process_control);
- management of its interaction with other agents (agent_management);
- management of its interaction with the external world (world_management);
- execution of its agent specific tasks (agent_specific_tasks).

This task hierarchy is depicted below in Figure 1.

Figure 1 Task hierarchy for an agent

4.2 Information exchange

The results of an agent's own_process_control may be of importance for the management of all of its activities. Thus, these results need to be made available to the relevant components. Links are defined for the purpose of such information exchange. The component agent_management receives information from, and sends information to, other agents. The component world_management, on the other hand, exchanges information with the external world. Both components also exchange information with the component own_process_control. Which information is required by an agent specific task depends on the task itself and therefore cannot be predefined. To fully specify the exchange of information, a more specific analysis of the types of information exchange is required. In Table 1, the links defined for information exchange at the top level of the agent are shown, together with the names of the components they connect.

Link name                   From component              To component
import_world_info           agent (input interface)     world_management
export_world_info           world_management            agent (output interface)
transfer_comm_world_info    agent_management            world_management
provide_world_state_info    world_management            own_process_control
import_agent_info           agent (input interface)     agent_management
export_planned_comm         agent_management            agent (output interface)
provide_agent_info          agent_management            own_process_control
transfer_committed_acts&obs own_process_control         world_management
transfer_agent_commitments  own_process_control         agent_management
transfer_planned_comm       own_process_control         agent_management

Table 1 Links for information exchange at the top level of an agent

In Figure 2 a graphical representation of the generic architecture for an agent is shown; in this figure the information links and the components they connect are depicted.

Figure 2 Top level decomposition and information links of a generic agent

DESIRE specifications of the structure of an agent include both the names of the components and the links, as shown below:

  task structure AGENT
  subcomponents own_process_control, world_management, agent_management, agent_specific_tasks
  links import_world_info, export_world_info, transfer_comm_world_info,
        provide_world_state_info, import_agent_info, export_planned_comm, provide_agent_info,
        transfer_agent_commitments, transfer_committed_acts&obs, transfer_planned_comm
  end task structure AGENT

  task structure world_management
  subcomponents   world_information_maintenance, world_interaction_management
  links import_world_info, export_world_info, import_comm_world_info,
        export_world_state_info, transfer_world_info,
        import_committed_acts&obs
  end task structure world_management

  task structure agent_management
  subcomponents agent_information_maintenance, agent_interaction_management
  links import_agent_info, transfer_agent_info,
        import_planned_comm,
        export_agent_info, export_planned_comm,
        export_comm_agent_info,
        import_agent_commitments
  end task structure agent_management

4.3 Task sequencing

Minimal task control has been modelled and specified for the top level of the generic agent. Task control knowledge specifies that all generic components and links are initially awakened. The awake status specifies that as soon as new information arrives, it is processed. This allows for parallel processing of information by different components. The links which connect an agent to other agents are activated by the agents from which they originate. Global task control includes specifications such as the following rule:

  if start
  then next_component_state(own_process_control, awake)
  and next_component_state(world_management, awake)
  and next_component_state(agent_management, awake)
  and next_link_state(import_agent_info, awake)
  and next_link_state(export_planned_comm, awake)
  and next_link_state(import_world_info, awake)
  and next_link_state(export_world_info, awake)
  and next_link_state(transfer_comm_world_info, awake)
  .......

4.4 Knowledge structures

Generic knowledge structures are used within the specification of a generic agent, a number of which have been shown above. In the following section more detailed examples of specifications will be shown for an agent with motivational attitudes.

4.5 Building a real agent

Each of the six components of the generic agent model presented above can be refined in many ways, resulting in models of agents with different characteristics. (Brazier, Jonker and Treur, 1996) describe a model of a generic cooperative agent, based on the generic agent model and Jennings' model of cooperation, see (Jennings, 1995). In (Brazier and Treur, 1996) another refinement of the generic agent model is proposed, for reflective agents capable of reasoning about their own reasoning processes and those of other agents. In the following section a refinement of the component own_process_control is presented, in which beliefs, desires and intentions play an important role.

5 A MODEL FOR A RATIONAL AGENT WITH MOTIVATIONAL ATTITUDES

The generic model and specifications of an agent described above can be refined into a generic model of a rational BDI-agent capable of explicit reasoning about its beliefs, desires, goals and commitments. First, some of the assumptions behind the model are discussed (Subsection 5.1). Next, the specification of the model is presented, at the highest level of abstraction (Subsection 5.2) and at the more specific levels of abstraction (Subsection 5.3).

5.1 Rational agents with motivational attitudes

Before presenting the model, some of the assumptions upon which it is based are described. Agents are assumed to have the additional property of rationality: they must be able to generate goals and to act rationally to achieve them, by planning, replanning and plan execution. Moreover, to fully adhere to the strong notion of agency, an agent's activities are described using mentalistic notions usually applied to humans. This does not imply that computer systems are believed to actually "have" beliefs and intentions, but that these notions are useful in modelling and specifying the behaviour required to build effective multi-agent systems (see, for example, (Dennett, 1987) for a description of the "intentional stance").

A first assumption is that mental attitudes, such as beliefs, desires, intentions and commitments, are defined as reflective statements about the agent itself and about the agent in relation to other agents and the world. These reflective statements are modelled in DESIRE in a meta-language, an order-sorted predicate logic. Functional or logical relations between motivational attitudes, and between motivational attitudes and informational attitudes, are expressed as meta-knowledge, which may be used to perform meta-reasoning resulting in further conclusions about motivational attitudes. For example, in a simple instantiation of the model, beliefs can be inferred from meta-knowledge that any observed fact is a believed fact and that any fact communicated by a trustworthy agent is a believed fact.

A second assumption is that information is classified according to its source: internal information, observation, communication, deduction, assumption making. Information is explicitly labeled with these sources. Both informational attitudes (such as beliefs) and motivational attitudes (such as desires) depend on these sources of information. Explicit representations of the dependencies between attitudes and their sources are used when update or revision is required.

A third assumption is that the dynamics of the processes involved are explicitly modelled. For example, it can be specified that a component is awake from the start, which means that it always processes incoming information immediately. If more than one component is awake, their processes will, in principle, run in parallel; if tasks depend on each other, sequential activation can also be specified. If required, update or revision takes place and is propagated through different components by active information links.

A fourth assumption is that the model presented below is generic, in the sense that the explicit meta-knowledge required to reason about motivational and informational attitudes has been left unspecified. To tune the model to a given application this knowledge has to be added. In this paper only examples of the types of knowledge are given for the purpose of illustration.

A fifth assumption is that intentions and commitments are defined with respect to both goals and plans. An agent accepts commitments towards itself as well as towards others (social commitments). In this paper we assume a model in which an agent determines which goals it intends to fulfill, and commits to a selected subset of these goals. Similarly, an agent can determine which plans it intends to perform, and commits to a selected subset of these plans.

Most reasoning about beliefs, desires, and intentions can be modelled as an essential part of the reasoning an agent needs to perform to control its own processes. A refinement of the generic component own_process_control described in Section 4 is described below.

5.2 A refined model of own process control

Finally, in order to design a BDI-agent, the component own_process_control is refined and decomposed into the following three components, which reason about:

- beliefs (belief_determination);
- desires (desire_determination);
- intentions and commitments (intention_and_commitment_determination).

The extended task hierarchy for a BDI-agent is shown in Figure 3.

Figure 3 Task hierarchy of own process control within BDI-agent

The component belief_determination performs reasoning about relevant beliefs in a given situation. In the component desire_determination an agent determines which desires it has, related to its beliefs. Intended and committed goals and plans are derived by the component intention_and_commitment_determination. The component first determines the goals and/or plans it intends to pursue before committing to the specific selected goals and/or plans. Following the methodology of hierarchical (de)composition, all three components are the subject of further refinement in Subsection 5.3.

In the model, beliefs and desires influence each other reciprocally. Furthermore, beliefs and desires both influence intentions and commitments. This is explicitly modelled by information links between the components and meta-knowledge within each of the components. In Figure 4, the structure of own_process_control is shown, together with the exchange of information. This structure is specified in DESIRE as follows:

  task structure own_process_control
  subcomponents belief_determination, desire_determination,
                intention_and_commitment_determination
  links import_ws_info_for_bd,
        import_agent_info_for_bd,
        transfer_belief_info_for_dd,
        transfer_belief_info_for_id,
        transfer_desire_info_for_id,
        transfer_desire_info_for_bd,
        export_committed_goals,
        export_committed_plan,
        export_beliefs
  end task structure own_process_control

Figure 4 Refinement of own process control within the BDI-agent

Task control knowledge of own_process_control determines that: (1) initially, all links within own_process_control are awakened and the component belief_determination is activated; (2) once belief_determination has succeeded in reaching all possible conclusions (specified in the target set ts_beliefs) with all possible effort, desire_determination is activated and belief_determination is made continually active (awake); (3) once desire_determination has succeeded in reaching all possible conclusions (specified in the target set ts_desires) with all possible effort, intention_and_commitment_determination is activated and desire_determination is made continually active (awake). Task control of intention_and_commitment_determination, in turn, is described in Subsection 5.3.3.
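
Expressed in the style of the task control rule of Subsection 4.3, this knowledge might be sketched as follows (a sketch: the evaluation predicate and the exact state keywords are illustrative assumptions, not verbatim DESIRE syntax):

  if start
  then next_component_state(belief_determination, active)
  and next_link_state(import_ws_info_for_bd, awake)
  .......

  if evaluation(belief_determination, ts_beliefs, succeeded)
  then next_component_state(desire_determination, active)
  and next_component_state(belief_determination, awake)

  if evaluation(desire_determination, ts_desires, succeeded)
  then next_component_state(intention_and_commitment_determination, active)
  and next_component_state(desire_determination, awake)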

5.3 Final refinement of components

In the previous subsection the model for reasoning about motivational attitudes was described in terms of the three tasks within own_process_control and their mutual interaction. In this subsection each of the tasks themselves is described in more detail.

5.3.1 Belief determination

The task of belief determination requires explicit meta-reasoning to generate beliefs. The specific knowledge used for this purpose obviously depends on the domain of application. The adopted model specifies meta-knowledge about beliefs based on five different sources:

(1) internal beliefs of an agent.

Internal beliefs are beliefs which an agent inherently has, with no further indication of their source. They can be expressed as meta-facts of the form internal_belief(a), meaning that a is an internal belief. These meta-facts can be specified as initial facts or be inferred from other internal meta-information (e.g., in the case of wishful thinking, internal beliefs may be implied by generated desires; see the sketch below).
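
For instance, wishful thinking might be expressed by meta-knowledge of the following form (an illustrative instantiation, not knowledge prescribed by the generic model):

  if desire(X) then internal_belief(X)
  if internal_belief(X) then belief(X)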

(2) beliefs based on observations.

Beliefs based on observations are acquired on the basis of observations of the world, either at a particular moment or over time. Simple generic meta-knowledge can be used to derive such beliefs: if observed_fact(X) then belief(X).

(3) beliefs based on communication with other agents.

Communication with other agents may, if agents are considered trustworthy, result in beliefs about the world or about other agents. Generic meta-knowledge that can be used to derive such beliefs is: if communicated_fact_by(X, A) and trustworthy(A) then belief(X).

(4) beliefs deduced from other beliefs.

Deduction from other beliefs can be performed by means of an agent's own (domain-dependent) knowledge of the world, of other agents and of itself.
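
Generic meta-knowledge for this source might take the following form (a sketch; representing the agent's world knowledge by an implies term at the meta-level is an illustrative assumption):

  if belief(X) and belief(implies(X, Y)) then belief(Y)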

(5) beliefs based on assumptions.

Beliefs based on assumptions may be derived from other beliefs (and/or from epistemic information on the lack of information) on the basis of default knowledge, knowledge about likelihood, et cetera. For example, a default rule (a : b) / c can be specified as meta-knowledge (e.g. according to the approach described by (Tan and Treur, 1992)).
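
A minimal reading of such a default rule as meta-knowledge might be the following (a sketch assuming an epistemic consistency test for the justification b; this is not the actual encoding of (Tan and Treur, 1992)):

  if belief(a) and not belief(not(b)) then belief(c)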

A more sophisticated model to generate beliefs can also keep track of the source of a belief. This can be specified in the meta-language by adding labels to beliefs reflecting their source, for example by belief(X, L). Here the label L can denote a single source, such as observed or communicated_by(A), but if beliefs have been combined to generate other beliefs, combined labels can also be generated as more complex term structures, expressing that a belief depends on a number of sources.
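
Label propagation might then be sketched as follows (the combined term and the rule itself are illustrative assumptions):

  if belief(X, L1) and belief(implies(X, Y), L2)
  then belief(Y, combined(L1, L2))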

5.3.2 Desire determination

Desires can refer to a (desired) state of affairs in the world (and the other agents), but also to (desired) actions to be performed. Often, desires are influenced by beliefs. As beliefs can be labelled with their sources, as discussed in Subsection 5.3.1, desires can inherit these sources. In addition, desires can have their own internal source; for example, desires can be inherent to an agent. Knowledge on how desires are generated is left unspecified in the generic model; a domain-specific instantiation is sketched below.
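
As an illustration only (the generic model deliberately leaves this knowledge open), a domain-specific instantiation of desire_determination might contain rules such as the following, in which the domain atoms and the inherent_desire predicate are hypothetical:

  if belief(room_is_cold) then desire(room_is_warm)
  if inherent_desire(X) then desire(X)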

5.3.3 Intention and commitment determination

Intended and committed goals and plans are determined by the component intention_and_commitment_determination; this component is decomposed into goal_determination and plan_determination. Each of these subcomponents first determines the intended goals and/or plans it wishes to pursue before committing to a specific goal and/or plan.

Task control of intention_and_commitment_determination determines that: (1) initially, all links are awakened and goal_determination is activated; (2) once goal_determination has succeeded in reaching all possible conclusions (specified in ts_committed_goals) with all possible effort, plan_determination is activated and goal_determination is made awake; (3) once plan_determination has succeeded in reaching all possible conclusions (specified in ts_committed_plans) with all possible effort, plan_determination is made awake. Within goal_determination, task control knowledge specifies that intended_goal_determination and committed_goal_determination are activated sequentially. The same holds for the activation of intended_plan_determination and committed_plan_determination, specified by task control knowledge in plan_determination.

In the component goal_determination commitments to goals are generated in two stages. In the subcomponent intended_goal_determination, based on beliefs and desires, but also on preferences between goals, specific goals are selected to be intended goals. In the component committed_goal_determination a number of intended goals are selected to become goals to which the agent commits. These committed goals are transferred to the component plan_determination.
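
To illustrate (the predicates achievable, preferred and conflicts_with_committed_goals are hypothetical names, since the generic model leaves this knowledge unspecified), knowledge within these subcomponents might take forms such as:

  if desire(G) and belief(achievable(G)) and preferred(G)
  then intended_goal(G)

  if intended_goal(G) and not conflicts_with_committed_goals(G)
  then committed_goal(G)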

In the component plan_determination, commitments to goals are analysed and commitments to plans are generated in two stages. In the subcomponent intended_plan_determination, plans are generated dynamically, combining primitive actions and predefined plans known to the agent (stored in an implementation, for example, in a library). On the basis of knowledge of the quality of plans, committed goals, beliefs and desires, a number of plans become intended plans. The component committed_plan_determination determines which of these plans should actually be executed, in other words, to which plans the agent commits. If no plan can be devised to reach one or more goals to which an agent has committed, this is made known to the component goal_determination. If a plan has been devised, execution of the plan includes determining, at each point in time, which actions are to be executed. During plan execution, monitoring information can be acquired by the agent through observation and/or communication. Plans can be adapted on the basis of observations and communication, but also on the basis of new information on goals to which an agent has committed. If, for example, the goals for which a certain plan has been devised are no longer relevant, and thus withdrawn from an agent's list of committed goals, it no longer makes sense to execute this plan.
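
Analogously, knowledge within plan_determination might be sketched as follows (achieves, quality_sufficient and conflicts_with_committed_plans are hypothetical names used for illustration):

  if committed_goal(G) and belief(achieves(P, G)) and quality_sufficient(P)
  then intended_plan(P)

  if intended_plan(P) and not conflicts_with_committed_plans(P)
  then committed_plan(P)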

6 DISCUSSION AND CONCLUSIONS

In this paper a generic model for a rational BDI-agent has been presented, modelled and specified in DESIRE. Interaction with other agents (communication) and with the external world (actions and observations) is discussed at a generic level. Communication, action and observation may dynamically influence an agent's beliefs, desires, goals and plans.

The formal specification in DESIRE provides a bridge between the logical theory of BDI-agents, e.g. (Rao and Georgeff, 1991), and their practice. Another kind of bridge is described in (Rao, 1996), which takes as its point of departure a language corresponding to the implemented system dMARS and formalizes its operational semantics. Our model, in contrast, emphasizes the analysis and design of BDI systems, as do the architectures of (Jennings, 1995; Kinny, Georgeff and Rao, 1996). However, there are differences as well: our specification is more formal than the one in (Jennings, 1995). Also, in contrast to the BDI architecture described in (Kinny, Georgeff and Rao, 1996), in our approach dynamic reasoning about beliefs, desires and goals during plan execution may lead to the construction of a (partially) new plan. This is partly a result of the parallel nature of specific reasoning processes in this model, but is also a consequence of the explicit strategic knowledge in the model. Strategic knowledge is used to revise, for example, beliefs during a dynamic process. Revisions are propagated by transfer of updated information on beliefs to the components that need the information: the components that reason about desires, goals and plans.

The continual activation of components and links makes it possible to transfer updated or new beliefs "automatically" to the relevant components. (The compositional revision approach incorporated in DESIRE is discussed in more depth in (Pannekeet, Philipsen and Treur, 1992).) In this paper, the example of new information received from another agent, which may influence the beliefs on which a goal has been chosen, is used to illustrate the effect this may have on the execution of a plan. Retraction of beliefs may lead to retraction of a number of goals that were based on these beliefs, which in turn may lead to retraction of the commitment to these goals. If a belief is the basis for a commitment to a plan, retraction of the belief may result in retraction of the commitment to the plan, and thus of its execution.

The DESIRE framework provides support in distinguishing the types of knowledge required to model rational agents based on mental attitudes. An existing agent architecture provided the basis for the model and the specification language provided a means to express the knowledge involved. By declaratively specifying task control knowledge and information exchange for each subtask, the dynamic process of revision has been explicitly specified.

The model as such provides a basis for further research: within this model more specific patterns of reasoning and interaction can be modelled and specified. Maintenance goals can be considered, joint commitments and joint actions can be modelled, more extensive communication patterns between agents can be analysed and represented, relative importance of intentions can be expressed, et cetera.

7 ACKNOWLEDGMENTS

This work was partially supported by the Polish KBN Grants 3 P406 019 06 and 8T11C 03110.

8 REFERENCES

Bratman, M.A. (1987). Intentions, Plans, and Practical Reason, Harvard University Press, Cambridge, MA.

Brazier, F.M.T., Dunin-Keplicz, B., Jennings, N.R. and Treur, J. (1995). Formal specification of Multi-Agent Systems: a real-world case. In: V. Lesser (Ed.), Proc. of the First International Conference on Multi-Agent Systems, ICMAS-95, MIT Press, Cambridge, MA, pp. 25-32.

Brazier, F.M.T., Dunin-Keplicz, B., Jennings, N.R. and Treur, J. (1997). DESIRE: modelling multi-agent systems in a compositional formal framework, International Journal of Cooperative Information Systems, M. Huhns, M. Singh (Eds.), special issue on Formal Methods in Cooperative Information Systems, vol. 1, to appear.

Brazier, F.M.T., Treur, J. (1996). Compositional modelling of reflective agents. In: B.R. Gaines, M.A. Musen (Eds.), Proc. of the 10th Banff Knowledge Acquisition for Knowledge-based Systems workshop, KAW'96, Calgary: SRDG Publications, Department of Computer Science, University of Calgary.

Brazier, F.M.T., Jonker, C.M. and Treur, J. (1996). Formalisation of a cooperation model based on joint intentions. In: Proc. of the ECAI'96 Workshop on Agent Theories, Architectures and Languages, ATAL'96. To be published in: Intelligent Agents III, Lecture Notes in AI, Springer Verlag, 1997.

Brazier, F.M.T., Treur, J., Wijngaards, N.J.E. and Willems, M. (1995). Temporal semantics of complex reasoning tasks. In: B.R. Gaines, M.A. Musen (Eds.), Proc. of the 10th Banff Knowledge Acquisition for Knowledge-based Systems workshop, KAW'95, Calgary: SRDG Publications, Department of Computer Science, University of Calgary.

Castelfranchi, C. (1995). Commitments: From individual intentions to groups and organizations. In: V. Lesser (Ed.), Proc. of the First International Conference on Multi-Agent Systems, ICMAS-95, MIT Press, Cambridge, MA, pp. 41-48.

Cohen, P.R. and Levesque, H.J. (1990). Intention is choice with commitment, Artificial Intelligence 42, pp. 213-261.

Dennett, D. (1987). The Intentional Stance, MIT Press, Cambridge, MA.

Dunin-Keplicz, B. and Treur, J. (1995). Compositional formal specification of multi-agent systems. In: M. Wooldridge and N.R. Jennings, Intelligent Agents, Lecture Notes in Artificial Intelligence, Vol. 890, Springer Verlag, Berlin, pp. 102-117.

Dunin-Keplicz, B. and Verbrugge, R. (1996). Collective commitments. To appear in: Proceedings of the Second International Conference on Multiagent Systems, ICMAS-96.

Fagin, R., Halpern, J., Moses, Y. and Vardi, M. (1995). Reasoning about Knowledge. Cambridge, MA, MIT Press.

Jennings, N.R. (1995). Controlling cooperative problem solving in industrial multi-agent systems using joint intentions, Artificial Intelligence 74 (2).

Kinny, D., Georgeff, M.P. and Rao, A.S. (1996). A Methodology and Technique for Systems of BDI Agents. In: W. van der Velde, J.W. Perram (Eds.), Agents Breaking Away, Proc. 7th European Workshop on Modelling Autonomous Agents in a Multi-Agent World, MAAMAW'96, Lecture Notes in AI, vol. 1038, Springer Verlag, pp. 56-71.

Pannekeet, J.H.M., Philipsen, A.W. and Treur, J. (1992). Designing compositional assumption revision, Report IR-279, Department of Mathematics and Computer Science, Vrije Universiteit Amsterdam, 1991. Shorter version in: H. de Swaan Arons et al., Proc. Dutch AI-Conference, NAIC-92, 1992, pp. 285-296.

Rao, A.S. (1996). AgentSpeak(L): BDI Agents Speak Out in a Logical Computable Language. In: W. van der Velde, J.W. Perram (eds.), Agents Breaking Away, Proc. 7th European Workshop on Modelling Autonomous Agents in a Multi-Agent World, MAAMAW'96, Lecture Notes in AI, vol. 1038, Springer Verlag, pp. 42-55.

Rao, A.S. and Georgeff, M.P. (1991). Modeling rational agents within a BDI-architecture. In: R. Fikes and E. Sandewall (eds.), Proceedings of the Second Conference on Knowledge Representation and Reasoning, Morgan Kaufmann, pp. 473-484.

Shoham, Y. (1993). Agent-oriented programming, Artificial Intelligence 60, pp. 51-92.

Shoham, Y. (1991). Implementing the intentional stance. In: R. Cummins and J. Pollock (eds.), Philosophy and AI, MIT Press, Cambridge, MA, 1991, pp. 261-277.

Shoham, Y. and Cousins, S.B. (1994). Logics of mental attitudes in AI: a very preliminary survey. In: G. Lakemeyer and B. Nebel (eds.) Foundations of Knowledge Representation and Reasoning, Springer Verlag, pp. 296-309.

Tan, Y.H. and Treur, J. (1992). Constructive default logic and the control of defeasible reasoning, Report IR-280, Vrije Universiteit Amsterdam, Department of Mathematics and Computer Science, 1991. Shorter version in: B. Neumann (ed.), Proc. 10th European Conference on Artificial Intelligence, ECAI'92, Wiley and Sons, 1992, pp. 299-303.

Velde, W. van der and Perram, J.W. (Eds.) (1996). Agents Breaking Away, Proc. 7th European Workshop on Modelling Autonomous Agents in a Multi-Agent World, MAAMAW'96, Lecture Notes in AI, vol. 1038, Springer Verlag.

Wooldridge, M. and Jennings, N.R. (1995). Agent theories, architectures, and languages: a survey. In: M. Wooldridge and N.R. Jennings, Intelligent Agents, Lecture Notes in Artificial Intelligence, Vol. 890, Springer Verlag, Berlin, pp. 1-39.