Mediating Representations for an Agent Design Toolkit

Jeffrey M. Bradshaw1,2,4, Mark Greaves2, Heather Holmback2, Robert Carpenter2, Robert Cranfill2, Renia Jeffers2, Luis Poblete2, Tom Robinson2, Amy Sun2, Yuri Gawdiak3, Alberto Cañas1, Niranjan Suri1, Barry Silverman5, Michael Brooks6, Alex Wong6, Isabelle Bichindaritz4, Keith Sullivan4

1. Institute for Human and Machine Cognition
University of West Florida
Bldg. 79/Rm. 196, Pensacola, FL 32514

2. Applied Research and Technology
Shared Services Group, The Boeing Company
P.O. Box 3707, M/S 7L-44, Seattle, WA 98124

3. NASA Ames Research Center, Code IP
Moffett Field, CA 94035

4. Clinical Research Division
Long-Term Follow-Up, FB-600, Fred Hutchinson Cancer Research Center
1124 Columbia Street, Seattle, WA 98104

5. Staughton Hall (Room 206), George Washington University
707 22nd Street, NW, Washington DC 20052

6. Sun Microsystems
901 San Antonio Road, Palo Alto, CA 94303

Abstract

We have undertaken the development of a powerful, open, and extensible Java toolkit for agent design. In its initial incarnation, the toolkit will consist of three primary components: a Plan Design Tool (PDT), a Conversation Design Tool (CDT), and a Security Design Tool (SDT). PDT will rely on the use of mediating representations in the development and debugging of executable elements of a plan library. CDT and SDT will contain graphical and analytical capabilities to help developers understand the effects that different choices in agent communications and security policy will have on the design of an agent, and how best to craft these choices to fit the capabilities and intended context of application of specific agents. We intend that the toolkit be useful for people whose primary expertise is not in the design of agent plans, conversations, or security policies. To enable wide deployment of agent technology, we want to support domain experts as well as experienced agent architects. Thus it is important that our tools be intuitive, graphical, easy to use and customize, and that they provide automatic plan and policy generation capabilities.

1. INTRODUCTION

1.1. The Need for Better Agent Development Tools

Creating the sophisticated agent-based systems of the future will require research advances on at least two fronts. First, the communication, security, and planning frameworks used by agents must be made powerful enough to support robust high-level coordinated problem-solving activity. Second—and perhaps more importantly—the developers of agents and agent-based systems must be able to efficiently and effectively incorporate these new theoretical developments into their work. From a practical standpoint, this second requirement poses a much more difficult problem. Full appreciation of new developments in agent foundations requires sophisticated knowledge of speech-act theory, formal semantics, linguistic pragmatics, modal logic, security and intrusion detection modeling, planning, and other disciplines that are not likely to be fully present in a typical agent developer’s skill set.

Moreover, it is not realistic to assume that an identical approach will be appropriate for every agent. We expect that the continuing evolution of agent technology will result in the creation of agents of widely varying degrees of reasoning ability, operating under different kinds of resource constraints, and communicating with each other at multiple levels of sophistication (Bradshaw, 1997). In typical agent ensembles, many agents will be small and simple, some will have medium-scale abilities, and a select few will exhibit behavior of extreme sophistication. They will be developed using combinations of agent frameworks, each designed independently by different research groups and commercial vendors. Agents will also vary in their security requirements, from simple trusted agents interacting with fully-owned resources to mobile agents spanning public and private systems on the Internet. This diversity of agent contexts means that, for the working agent developer, it is almost impossible to fully understand the consequences of configuring agents with a particular set of communication, security, and collaboration capabilities, or to accurately predict how agents will behave in the complex environment of modern agent ensembles.

1.2. Objectives

To address these challenges, we have undertaken the development of a powerful, open, and extensible Java toolkit for agent design. In its initial incarnation, the toolkit will consist of three primary components: a Plan Design Tool (PDT), a Conversation Design Tool (CDT), and a Security Design Tool (SDT). PDT will rely on the use of mediating representations in the development and debugging of executable elements of a plan library. CDT and SDT will contain graphical and analytical capabilities to help developers understand the effects that different choices in agent communications and security policy will have on the design of an agent, and how best to craft these choices to fit the capabilities and intended context of application of specific agents. We intend that the toolkit be useful for people whose primary expertise is not in the design of agent plans, conversations, or security policies. To enable wide deployment of agent technology, we want to support domain experts as well as experienced agent architects. Thus it is important that our tools be intuitive, graphical, easy to use and customize, and that they provide automatic plan and policy generation capabilities.

Our overall goal is to make it as easy as possible to produce high-quality agents that function well in large ad hoc agent ensembles. Achieving this goal will require sustained progress on the following technical objectives:


1. Establish the role of pragmatics as a central pillar of agent communication theory. Though current work on the semantics of basic communicative acts and team behavior provides a good starting point for agent designers (Cohen & Levesque, 1997; Smith, Cohen, Bradshaw, Greaves, & Holmback, 1998), researchers have generally neglected the overarching pragmatics of agent dialogues. Agents of the future will not collaborate by simply firing "performatives" at one another, but will treat conversation as a variety of action, on a par with other actions that they perform. Synthesizing results from linguistic pragmatics, conversational analysis, and dialogue planning, we will extend ongoing work on basic communicative acts, teamwork, and conversation policies by investigating agent-based pragmatics and dialogue design.

2. Enable the deployment of fine-grained extensible agent security policies. Though Java is currently the most popular and arguably the most security-conscious mainstream language for agent development, it fails to address many of the unique challenges posed by agent software. Moreover, today’s security policies are typically hard-coded and do not allow the degree of configurability, extensibility, and fine-grained access control required by agent-based systems. In collaboration with colleagues at Sun Microsystems, we will build on the foundation of new advanced Java security models that separate policy from mechanism (Sun Microsystems, 1998).

3. Employ mediating representations in an open extensible Java toolkit to simplify plan, security and conversation policy design while guaranteeing robustness. Combining mediating graphical representations and rigorous logical support in agent design systems is key to our being able to support the wide range of skill sets in agent developers (Ford, Bradshaw, Adams-Webber, & Agnew, 1993; Greaves, 1997). The internal logic and external representations will be modular and replaceable to allow for continued evolution of agent theory and practice. Our approach will result in a toolkit which is rigorous and yet usable without extensive training in logic and verification techniques. In most cases, we expect that designers will be able to simply specialize existing plans, conversation policies, and security policies and protocols in the toolkit’s "starter set."

4. Provide support for diverse levels of agent sophistication, and interoperability between different agent frameworks. Our toolkit will aim to provide a means by which agent applications can simultaneously support multiple levels of agent sophistication in a principled way. As part of our work within the DARPA Agent-Based Systems program, we plan to work with others to demonstrate how the toolkit enables interoperability between agent frameworks from different research groups and commercial vendors.

1.3. KAoS and the Agent Design Toolkit

In order to better understand the agent design toolkit, we will describe its context of use within the KAoS (Knowledgeable Agent-oriented System) agent framework (Bradshaw, Dutfield, Benoit, & Woolley, 1997). KAoS embodies lessons learned from our effort to investigate principles on which industrial-strength open agent frameworks can be developed. It is unique in that it attempts to draw on both the latest research in intelligent systems and the continuing evolution of robust distributed object, middleware, and Internet technology and standards (e.g., CORBA, Java, COM). Basic characteristics of KAoS agents include a consistent structure providing mechanisms for the management of knowledge, commitments, choices, and capabilities. Agent dynamics are managed through a cycle that includes the equivalent of agent birth, life, cryogenic state, and death. Support for agent mobility is also a desirable feature for specific agents in some KAoS applications.

Each KAoS agent contains a component called the generic agent, which implements behavior that is shared and reused across all agents (figure 1). This includes the basic infrastructure for security and transport-level communication. Unlike most agent communication architectures, KAoS explicitly takes into account not only the individual message (e.g., request, promise), but also the various sequences of messages in which it may occur. In our current implementation, shared knowledge about message sequencing conventions (conversation policies) enables agents to coordinate frequently recurring interactions of a routine nature simply and predictably. In a parallel vein, we are working to incorporate explicit declarative security policies to enable communicating agents (and their hosts) to interact according to the safety and confidentiality requirements of their designers.

Figure 1. KAoS agents consist of two parts: a generic agent and an agent extension.
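To make the idea of a conversation policy concrete, the sketch below models one as a table of permitted message-type transitions that an agent can consult before sending or accepting a message. The class and message names are our own illustration, not the KAoS API.

```java
import java.util.*;

// Illustrative sketch of a conversation policy as a table of permitted
// message-type transitions (names are our own, not the KAoS API).
public class ConversationPolicy {
    private final Map<String, Set<String>> transitions = new HashMap<>();

    void allow(String from, String to) {
        transitions.computeIfAbsent(from, k -> new HashSet<>()).add(to);
    }

    /** True if message type 'next' may legally follow 'previous' under this policy. */
    boolean permits(String previous, String next) {
        return transitions.getOrDefault(previous, Collections.emptySet()).contains(next);
    }

    /** A simple policy for conversations that begin with an offer. */
    static ConversationPolicy offerPolicy() {
        ConversationPolicy p = new ConversationPolicy();
        p.allow("start", "offer");
        p.allow("offer", "accept");
        p.allow("offer", "decline");
        p.allow("offer", "withdraw");
        return p;
    }

    public static void main(String[] args) {
        ConversationPolicy p = offerPolicy();
        System.out.println(p.permits("offer", "accept"));  // true: a routine, sanctioned exchange
        System.out.println(p.permits("accept", "offer"));  // false: sequence not sanctioned by the policy
    }
}
```

Because both parties share the same transition table, routine interactions proceed simply and predictably, as described above.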

The CDT and SDT components of the toolkit help agent developers design, verify, and generate conversation and security policies for use by the generic agent (or by some analogous component in other frameworks). Interoperability between frameworks developed by different groups is technically feasible; however, achieving it requires consensus on a wide range of issues. Many of these have been addressed by previous efforts such as the Knowledge Sharing Initiative, which sponsored the initial development of KQML (Finin, Labrou, & Mayfield, 1997; Genesereth, 1997). Consensus on essential theoretical issues and agreement on common conversation and security policies will enable a higher level of interoperability than has previously been possible (figure 2).

Figure 2. Interoperability between agents in different frameworks enabled by common conversation and security policies.

The component of the agent that contains capabilities specific to particular agents is called the agent extension. In the applications that we have built to date, the agents have varied greatly in their capabilities. Typically these applications consist of a "rainbow coalition" with large numbers of "dumb" distributed objects, a smaller number of simple agents, and a yet smaller number of agents with some degree of "intelligence." For agent developers requiring a base level of planning capabilities in particular agents, we are providing KPL (KAoS Planner Lite) as an optional planning component (see section 2 below). PDT is used to develop plans for KPL.

Section 2 discusses PDT, currently the most mature of the toolkit components. Section 3 summarizes our direction for the work we have recently begun on the CDT and SDT. Section 4 concludes the paper.

2. PDT AND KPL: TOOLS FOR PLAN DESIGN AND EXECUTION

2.1. Background and Objectives

KPL and its accompanying design tool, PDT, have been designed in response to requirements for a reusable component containing a basic hierarchical goal-driven planning capability that can be used with KAoS agents and, in principle, with other agent frameworks. We envision that this optional "lite" planning component will typically be encapsulated within an agent; thus, within a set of interacting agents, it will not be obvious which agents, if any, contain an embedded planner and which do not.

KPL is optimized for size, performance, ease of authoring, and comprehensibility by end users. The ultimate goal is that the implementation of the "lite" planner will offer sufficient performance to run several agents’ planners at once on garden-variety computing platforms, and that it will be sufficiently small to move around the network with a mobile agent. To achieve these goals, we cannot use some of the general and powerful approaches to planning that have been developed for more complex planning problems. However, KAoS agents can easily be adapted for use in conjunction with these more powerful planners for applications that demand them.

The design for KPL is based largely on an activity-graph-based planner that was implemented in Smalltalk-80 2.5 as part of the application of Axotl to R&D investment decision making in Boeing’s PIE project (Bradshaw, Covington, Russo, & Boose, 1990; Bradshaw, Holm, Kipersztok, Nguyen, Russo, & Boose, 1991). Activity graphs proved to be a good mediating representation for non-programmers to develop hierarchical goal-driven plan structures. The activity graph representation was complemented by a "heuristic advisor" built from our own implementation of MRS (Russell, 1985), a representation language and inference engine based on predicate logic and a predecessor to KIF (Genesereth & Fikes, 1992).

KPL consists of four major components:

Executive, which coordinates interaction among the various components and handles interaction with other agent components (and end users);

Advisor, which has the responsibility for selecting, modifying, or composing suitable activity graphs for the planner to operate on; for responding to external event triggers that may affect planning and execution; and for monitoring progress and acting in the role of plan critic during plan execution;

Planner, which has the responsibility for constructing agendas from activity graphs or activity graph fragments submitted to it; and for propagating the effects of successful or unsuccessful execution of agenda items; and

Agenda Manager, which has the responsibility for executing agenda items and signaling their success or failure.
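As a rough sketch of this division of labor, the four components might expose interfaces along the following lines. The type and method names here are our own invention for illustration, not actual KPL code:

```java
import java.util.List;

// Hypothetical interfaces for the four KPL components described above.
// ActivityGraph and AgendaItem are placeholder types for illustration only.
public class KplSketch {
    static class ActivityGraph { final String name; ActivityGraph(String n) { name = n; } }
    static class AgendaItem    { final String action; AgendaItem(String a) { action = a; } }

    interface Executive {
        // Coordinate the components below and handle interaction with
        // other agent components and end users.
        void runSession(Advisor advisor, Planner planner, AgendaManager manager);
    }
    interface Advisor {
        // Select, modify, or compose a suitable activity graph; in the full
        // design the advisor also reacts to external event triggers and acts
        // as a plan critic during execution.
        ActivityGraph selectGraph();
    }
    interface Planner {
        // Construct an agenda, in execution order, from an activity graph
        // or activity graph fragment.
        List<AgendaItem> buildAgenda(ActivityGraph graph);
        // Propagate the effects of an item's success or failure into the plan.
        void propagate(AgendaItem item, boolean success);
    }
    interface AgendaManager {
        // Execute one agenda item and report its success or failure.
        boolean execute(AgendaItem item);
    }
}
```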


PDT consists of three editors that can operate on data structures used by KPL:

Activity graph editor, which allows users to create, modify, and execute activity graphs;

Agenda editor, which allows users to view the status of the agenda, set breakpoints, pause and resume execution of items, and preemptively assert success or failure of particular items; and

Status board editor, which is used to view and operate on the advisor’s knowledge base.

We explain selected KPL and PDT components in more detail below.

2.2. Activity Graphs and Agendas

The central knowledge representation of KPL and PDT is the activity graph. An activity graph represents a plan as a hierarchy of goals and activities. The topmost goal in the hierarchy represents the successful completion of a plan; subgoals and activities to support them are added, re-ordered, and pruned from the hierarchy dynamically as the agent does its work. Each goal in the hierarchy has an associated set of conditions that must either be satisfied by the completion of supporting activities or explicitly overridden by the user.

Activity graphs are similar in some respects to the AND/OR graphs familiar to knowledge-based system researchers, but contain features specialized to their use in this application (figure 3). Properties are assigned to goals to indicate which subgoals and activities must succeed, the order in which they must be executed, and whether iteration is required. Connector types (AND, OR) determine whether all or just one subgoal must succeed for the goal to be satisfied. Markers for subgoal precedence indicate whether the subgoals must be executed in fixed order (directed), according to priority weights that have been assigned or calculated dynamically (undirected), or according to some arbitrary selection procedure (CASE-OR). CASE-OR subgoal precedence arcs are necessarily undirected. Activity graph developers may specify whether a given set of child nodes may be executed in parallel. Precedence arcs for AND node subgoals are solid, while arcs for OR and CASE-OR node subgoals are dashed. Iteration of subgoals can continue until the attainment of a fixed number of successes out of a maximum number of trials (k/n iteration) or indefinitely until the satisfaction of some arbitrary condition (conditional iteration).

Figure 3. Activity graph features.
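The connector and satisfaction rules above can be sketched as a small data structure. This is our own illustration, not PDT's internal representation; the node names loosely follow the plant expansion example, and k/n and conditional iteration are omitted for brevity.

```java
import java.util.*;

// Illustrative activity-graph node (not PDT's actual representation).
// AND requires all subgoals to succeed; OR and CASE-OR require one.
public class GoalNode {
    enum Connector { AND, OR, CASE_OR }

    final String name;
    final Connector connector;
    final List<GoalNode> subgoals = new ArrayList<>();
    boolean leafSucceeded;  // for leaf activities: the outcome of the node's script

    GoalNode(String name, Connector connector) {
        this.name = name;
        this.connector = connector;
    }

    boolean satisfied() {
        if (subgoals.isEmpty()) return leafSucceeded;
        switch (connector) {
            case AND: return subgoals.stream().allMatch(GoalNode::satisfied);
            default:  return subgoals.stream().anyMatch(GoalNode::satisfied); // OR, CASE_OR
        }
    }

    /** Build a tiny graph: an AND root over a leaf and an OR disjunction. */
    static boolean demo() {
        GoalNode root   = new GoalNode("formulate model", Connector.AND);
        GoalNode decide = new GoalNode("add decision node", Connector.AND);
        GoalNode chance = new GoalNode("add chance node", Connector.OR);
        GoalNode copy   = new GoalNode("copy existing node", Connector.AND);
        GoalNode assess = new GoalNode("perform probability assessment", Connector.AND);
        chance.subgoals.addAll(List.of(copy, assess));
        root.subgoals.addAll(List.of(decide, chance));
        decide.leafSucceeded = true;
        copy.leafSucceeded = true;   // one satisfied OR branch suffices
        return root.satisfied();
    }

    public static void main(String[] args) {
        System.out.println(demo());  // prints "true"
    }
}
```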

Figure 4 displays a simple activity graph that was used in testing the PIE application to build an influence-diagram-based decision model for a plant expansion problem. The major steps in the process are model formulation, evaluation, and appraisal. Each of these steps is decomposed into subgoals. At runtime, users typically would not see any activity graphs, but the agents working on their behalf would rely on their execution to formulate and carry out tasks.


Figure 4. An activity graph for the plant expansion decision problem.

The activity graph editor allows users to build, modify, and test plans. From the editor an agenda can be generated and executed in a standalone manner separate from the agent.

There are typically several ways to satisfy a given activity graph. The agenda manager shown in figure 5 displays an agenda constructed by the Axotl planner listing a set of activities that will satisfy all the goals of the graph shown in figure 4. In this case, there is only one disjunction in the graph (add chance node), so the agenda consists of each of the leaf activities with the exception of the unselected branch of the disjunction (perform probability assessment).

Leaf-level activity nodes, as well as certain goal nodes (e.g., CASE-OR and conditional iteration nodes), have scripts associated with them that specify the procedures the system carries out when executing that node as an agenda item. If the script executes successfully, it returns true to the agenda manager.


Figure 5. The agenda manager displays the agenda derived by the planner from the graph shown in figure 4.

2.3. Plan Execution and the Executive

In our design, goal definition and control are explicitly separated: the goal and activity components consist of strictly declarative activity graph pieces, while the procedural aspect of session control is allocated to the executive, the advisor, and the agenda manager. Figure 6 is a high-level conceptual view of the flow of control. The advisor is responsible for selecting one or more activity graphs to be executed. The advisor not only selects activity graphs, but also may use information from its knowledge base to constrain the kinds of agendas that can be built from them by preemptively failing activities, modifying their weights, and so forth. The activity graph is sent to the planner, which attempts to construct a valid agenda, one item at a time. As soon as the first item has been placed on the agenda, the agenda manager begins execution and the planner continues its work in parallel. The planner algorithm guarantees that agenda items will be placed on the agenda in the order in which they should be executed.

Agenda items are typically executed one by one by the agenda manager. The activity graph developer may also designate that a set of children from a common parent node may run in parallel. If the agenda is successfully completed, the advisor supplies a new activity graph. If an agenda item fails, the current agenda is declared invalid and is modified or replaced by the planner. If the agenda manager runs out of agenda items to execute while the planner is still working to complete the agenda, it will wait for the planner to supply one or more new or revised agenda items to execute. The agenda manager will also suspend activity in response to preset breakpoints or to a dynamic "suspend" event sent by the user, the agent, or the advisor when some change in conditions necessitates a modification of the agenda. Using current information, the heuristic advisor may also preemptively delay specific agenda items by temporarily asserting their success and placing them on a special stack for later execution.

Recursive plan execution may take place through activity graphs calling other activity graphs, thereby dynamically instantiating sub-instances of KPL components at runtime.

Figure 6. High-level conceptual view of the flow of control in KPL.
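The failure-handling loop described above can be sketched as follows. This is a compact illustration of the control flow, not KPL's actual algorithm; the item names are invented, and the parallel operation of planner and agenda manager is not modeled.

```java
import java.util.*;

// Sketch of the agenda manager's execution loop: items arrive in execution
// order; a failed item invalidates the current agenda, handing control back
// to the planner for modification or replacement.
public class FlowOfControl {
    interface Script { boolean run(String item); }

    // Execute items until one fails; return the items completed successfully.
    static List<String> executeAgenda(Deque<String> agenda, Script script) {
        List<String> done = new ArrayList<>();
        while (!agenda.isEmpty()) {
            String item = agenda.poll();
            if (!script.run(item)) {
                // Failure: the current agenda is declared invalid; in KPL the
                // planner would now modify or replace it.
                agenda.clear();
                break;
            }
            done.add(item);
        }
        return done;
    }

    public static void main(String[] args) {
        Deque<String> agenda = new ArrayDeque<>(List.of("formulate", "evaluate", "appraise"));
        // Simulate a script in which the "evaluate" step fails.
        List<String> done = executeAgenda(agenda, item -> !item.equals("evaluate"));
        System.out.println(done);  // prints "[formulate]"
    }
}
```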

3. CDT AND SDT: CONVERSATION AND SECURITY DESIGN TOOLS

The concept of operation for the Conversation Design Tool (CDT) and the Security Design Tool (SDT) is illustrated in figure 7:


1. Past work by the research community on agent communication is being extended upwards into a theory incorporating the pragmatics of agent dialogue. The CDT will incorporate aspects of this theory along with graphical mediating representations and a "starter set" of conversation policies into Stanford’s Java-based Openproof environment currently under development at CSLI.

2. Agent designers, who are conversant with the requirements of their domain of application but not necessarily with the details of agent conversation theory, will interact with the CDT to tailor existing policies in the starter set to the unique context in which their agents will function. This interaction will be primarily through the intuitive diagrammatic and form-filling facilities provided by the CDT rather than through standard sentential logic. The usability and adequacy of the CDT (and the SDT) will be formally evaluated.

3. When agent designers are satisfied that the models they have built adequately capture the assumptions and intent of the conversational structures they set out to develop, they will use the CDT to generate the actual conversation policies. These policies are then ready for use by specific agents as required by the application.

4. We are collaborating with Sun Microsystems to adapt emerging advanced Java security models that separate policy from mechanism to develop an agent-specific security framework. This framework will comprise a starter set of security policies and protocols addressing authentication, secure communication, mobility, and resource management issues. These policies and protocols will allow the degree of configurability, extensibility, and fine-grained access control required by agent-based systems.

5. Agent designers, who are familiar with the general security requirements and restrictions of their applications, but not necessarily with the complexities of robust security model design, will interact with the SDT to tailor security policies in the starter set to their unique context. As with the CDT, interaction with the SDT will be made intuitive through the use of mediating representations, primarily intelligent forms.


Figure 7. Concept of operation for the conversation and security design tools.


6. As with the CDT, the SDT will generate appropriate security policies for use by specific agents.


In the following subsections, we will describe four elements of our technical approach:


• steps toward a pragmatics of agent dialogue;

• establishment of fine-grained extensible agent security policies;

• development of an open extensible Java toolkit employing mediating representations to simplify security and conversation policy design while guaranteeing robustness; and

• support for diverse levels of agent reasoning and conversational ability.

3.1. Steps toward a pragmatics of agent dialogue

The most salient feature of our vision for future agent technology is the way in which agent coordination would be supported by a powerful underlying language. The foundations of such languages are already being laid in work on agent communication performed by Labrou and Finin (Labrou, 1996), the work of the Foundation for Intelligent Physical Agents (FIPA) on SL (FIPA, 1997), and our ongoing collaborations with OGI on semantics for KAoS agent conversation policies (Smith & Cohen, 1995; Smith, Cohen, Bradshaw, Greaves, & Holmback, 1998). However, these efforts by and large address only the basic semantics of messages, and leave the overarching pragmatics of agent dialogue either entirely implicit or only briefly described. For example, although each of the above approaches describes a protocol for an agent to make or accept an offer of services, none precisely describes the complete conditions of use under which an offer should be made, nor adequately addresses the role that an offer conversation would play within larger dialogue structures. Our research attempts to provide a framework that complements and extends previous approaches to encompass pragmatics and dialogue management, and that supports the types of sophisticated conversation necessary for agents to form teams and act intelligently in ensembles.

The KAoS agent framework helped pioneer the use of conversation policies in agent interaction. We are working to supplement the current KAoS design by specifying semantic and pragmatic conditions relevant to KAoS speech acts and conversation policies, and by extending the types of conversations allowed in the system. When complete, the CDT will allow agent developers to specify and verify various types of agent conversation. This specification and verification will take place at two levels: at the level of individual speech acts and their usage conditions, and at the level of overall conversation structures.

The definition of speech acts for agent conversation and their usage conditions. As do most other agent researchers, we assume a speech-act-based approach to agent communication. However, experience has shown that a precise semantic definition of speech acts and conversation policies may not be sufficient to help the agent designer specify everything that is relevant to how the agents in a system will communicate. We believe that the specification of pragmatic usage conditions on speech acts in the context of conversations is crucial to overall agent communication.

One poorly understood pragmatic issue that is vital to the creation of sophisticated agents is the set of "usage conditions" for particular speech acts. Below are some types of usage conditions that we believe will be important to agent conversation and that an agent designer might want to specify as part of overall agent conversational behavior. Most of the examples below deal with the speech act "offer" and conversation policies that begin with an offer.

In real usage, offers often have an implicit or explicit expiration, sometimes based on time, sometimes on other conditions. It is not part of Searle’s (1969) original description of an offer that it have an expiration condition, i.e., a felicitous offer can be made without explicitly mentioning when it expires. However, this issue arises when we consider how speech acts might be used by agents, or by mixed teams of humans and agents, in a conversational exchange. In those situations, expiration conditions may be very important.

Expiration can be looked upon as a usage condition or, from the point of view of the CDT, a particular parameter governing the behavior of an agent with respect to a particular kind of offer. Such information would be used by an agent receiving the offer in its decision to accept, decline, ignore, or ask for clarification about the offer, and also to let the offering agent off the hook at some clearly defined point so it is free to do other things. For some kinds of offers, if the expiration is not made clear, time and resources might be wasted and miscommunication may occur. In human communication, the expiration conditions of an offer are often implicit in the larger context of utterance or derivable from the culture or common-sense reasoning of the conversational participants. For an agent system, however, it is often better to make them explicit, whether by some default expiration, by specific expiration conditions for a given offer, or by requiring an explicit withdrawal when the offer no longer stands.

It might be claimed that the expiration conditions on an offer could just be written into the propositional content of the offer, but this approach is problematic. Even if this could be done, we (as developers of a tool for agent conversation design) would want to aid users by listing the kinds of important usage conditions that they should consider in using a given speech act, rather than just hoping that they will remember to include everything that might be relevant in the content of the offer in a particular agent ensemble. Also we would like conditions (possibly like expiration) which are found to be common and important to be specified in a uniform way across agents and across systems. Other reasons that the expiration conditions on an offer should not just be written into the propositional content of the offer include:


• the expiration necessarily refers to the offer itself, but many first-order logic-based languages do not have the capability to self-refer in this way.

• many expiration conditions (such as temporal ones) affect the agents on the level of the overall conversational behavior, independently of the content of the offer: a recipient can check the expiration conditions on an offer when it first receives notice of it and may not have to process further to know whether or not to consider the offer.

• expiration is a general property of all offers (at least potentially) so it makes sense to have the agent designer manipulate it independently of the actual content of the offer—the expiration on offers might change independently of the content of the offer. For example, an offer to provide a certain number of cycles of CPU time might expire at different times depending on the day or time and this might differ from week to week. The content of the offer is the same, but the expiration time needs to be modified more frequently. In general it would be more intuitive and efficient to just change certain parameters on a speech act than to have to restate the entire propositional content.
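The points above suggest carrying expiration as an explicit parameter on the speech act, held outside its propositional content. The class below is our own minimal sketch of that idea, not part of KAoS or any standard agent communication language:

```java
import java.time.Duration;
import java.time.Instant;

// Illustrative sketch: expiration carried as a parameter on an offer,
// separate from its propositional content. A recipient can discard an
// expired offer without processing the content at all.
public class Offer {
    final String content;   // propositional content, e.g. "provide 1000 CPU cycles"
    final Instant expires;  // pragmatic usage condition, held outside the content

    Offer(String content, Instant expires) {
        this.content = content;
        this.expires = expires;
    }

    boolean expired(Instant now) {
        return now.isAfter(expires);
    }

    public static void main(String[] args) {
        Instant t0 = Instant.parse("2024-01-01T00:00:00Z");
        Offer o = new Offer("provide 1000 CPU cycles", t0.plus(Duration.ofHours(1)));
        System.out.println(o.expired(t0));                           // prints "false"
        System.out.println(o.expired(t0.plus(Duration.ofHours(2)))); // prints "true"
    }
}
```

Because the expiration lives in a dedicated field, a designer can change it from week to week without restating the offer's content, as the bullet above describes.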


Other types of usage conditions can be important. For example, take the common assumption that agents are "acting sincerely." In an actual system, we would like to be able to specify explicitly what it means for a given agent to act sincerely; in other words, the agent designer may want to state what properties or behavior an agent needs to exhibit to be acting sincerely at run-time. Consider the example of resources. Current definitions of "offer," limited as they are to semantic issues, do not include any specification of how many offers an agent can have outstanding; they only assume (often implicitly) that agents are acting sincerely. Somewhere it must be specified that if an offer involves the use of a large amount of resources, then acting sincerely means that even though an agent may be able to perform the act mentioned in the offer for one agent, it may not be able to do so for more than one. If the offering agent knows it will only be able to accommodate the first acceptance, this should be explicitly stated as a parameter on the offer, so that the recipient agents know they should respond quickly if they need the service. In the case of brokering, this information needs to be available to the broker. In either case, the offering agent is acting insincerely if it makes an offer that is likely to be withdrawn for many of the agents who might accept it. This may be looked at as a kind of expiration condition that needs to be made explicit (e.g., expiration after the first three takers).
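A capacity-limited offer of this kind ("expiration after the first three takers") could be sketched as follows; again, the class and names are purely illustrative:

```java
// Illustrative sketch: an offer that expires after a fixed number of
// acceptances, so a resource-limited agent does not commit beyond its
// capacity and thereby act insincerely.
public class LimitedOffer {
    private int remaining;  // acceptances still available

    LimitedOffer(int maxAcceptances) {
        remaining = maxAcceptances;
    }

    // Attempt to accept the offer; returns true while capacity remains.
    synchronized boolean accept() {
        if (remaining <= 0) return false;  // offer no longer stands
        remaining--;
        return true;
    }

    public static void main(String[] args) {
        LimitedOffer offer = new LimitedOffer(3);
        int accepted = 0;
        for (int i = 0; i < 5; i++) {
            if (offer.accept()) accepted++;
        }
        System.out.println(accepted);  // prints "3": later takers are turned away
    }
}
```

Making the capacity an explicit parameter lets recipients (or a broker) see that they should respond quickly, rather than discovering a withdrawal after the fact.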

Another parameter on the use of an offer might be the conditions under which an offerer should inform the offerees that an offer has been taken and issue an explicit withdrawal. Again, whether or not a withdrawal is required should not be part of the semantic definition of offer; but in a given system, a designer might want the agent to explicitly send a withdrawal to all original offerees, because it may be more efficient to withdraw the offer than to deal with all the "accepts" and "declines" that might continue to be sent. Withdrawal also spares the recipients of the original offer from wasting time attempting to accept an offer that is no longer on the table.

Many other types of usage conditions for offers (and, of course, for other basic speech acts such as request, inform, and promise) could be important to agent designers as they specify the design of an agent and its conversational behavior. A simple system might not need to have all these conditions specified, but they are relevant for more complex systems and for facilitating both agent-agent and human-agent communication. Without the guidance of a tool, assumptions that are important for a given context might be overlooked, with serious consequences. And, assuming future ensembles of agents with mixed abilities, it is important that choices concerning these pragmatic and usage conditions be made in accordance with well-established principles, rather than be dictated by developer convenience.

Overall conversation structures: conversation policies and emergent conversation. Current work on agent communication languages involves messages that are exchanged between agents in the context of conversations, with verbs naming the speech acts represented by the message. Unlike most agent communication frameworks, KAoS explicitly takes into account not only the individual message, but also the conversation policies which determine the various sequences of messages in which it may occur (Bradshaw, Dutfield, Benoit, & Woolley, 1997). We are designing the Communication Design Tool (CDT) to help the agent designer select the conversation policies necessary for the agent's goals and to verify the intended behavior of a chosen conversation policy. We are also expanding the inventory of conversation policies as necessary to handle the different types of conversations agents may need to engage in as more types of applications are considered.

The core concept of a conversation policy is also being extended. The current state-machine-based implementation of KAoS conversation policies deals with sequences of messages in fixed, predetermined orders. Though this is sufficient for many types of communication between simpler agents, it will not suffice for more complex agent-agent and human-agent communication situations that may involve more than a few pre-specified sequences. Rather, such conversation is characterized by "emergence": the number and exact sequencing of exchanges cannot be specified ahead of time. We are using the notion of conversation policies to investigate emergent agent conversations.

An emergent conversation consists of:

 

• a sequence of points or landmarks in the conversation at which a set of specifiable properties can be said to hold of the agents involved (e.g., an offer has been made; an offer has been accepted), including at least the initial speech act and the final state of the conversation.

• a set of zero or more exchanges which can occur between any two sequenced landmark states in the conversation. Such exchanges can include clarifications, questions, counteroffers, denials, etc., depending on the type of conversation policy involved.
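The landmark view can be sketched as a small Java state machine; the class and names below are illustrative, not the KAoS implementation. The policy fixes the ordering of landmarks (an acceptance is illegal before an offer), while any number of free exchanges may occur between them:

```java
// Minimal sketch of emergent conversation as landmarks plus free exchanges.
// Landmark ordering is enforced; the number of intervening exchanges is not.
public class EmergentConversation {
    public enum Landmark { START, OFFERED, ACCEPTED }

    private Landmark state = Landmark.START;
    private int exchanges = 0;  // clarifications, questions, etc. between landmarks

    public boolean offer() {
        if (state != Landmark.START) return false;
        state = Landmark.OFFERED;
        return true;
    }

    // An acceptance is only legal once an offer has been made.
    public boolean accept() {
        if (state != Landmark.OFFERED) return false;
        state = Landmark.ACCEPTED;
        return true;
    }

    // Any number of side exchanges may occur between landmarks.
    public void clarify(String question) {
        exchanges++;
    }

    public Landmark state() {
        return state;
    }

    public int exchangeCount() {
        return exchanges;
    }
}
```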

 

Given this, we can view the conversation policies in KAoS as the simplest form of emergent conversation. The initial versions of the CDT will support designing KAoS-style conversations, and we hope to gradually include more emergent conversation capabilities. While emergent dialogues are not totally specifiable ahead of time as to the number and exact sequence of speech acts involved, they will still be describable as a relative sequence or pattern of speech acts (e.g., an offer cannot be accepted until one is made) and will have some restrictions on their structure. They will likely have some properties of human conversation as it has been studied in the fields of linguistic pragmatics and conversational analysis. Phenomena such as turn-taking and Gricean rules of conversation may be relevant in defining emergent agent conversation.

For example, a segment of KAoS' Offer conversation policy is an "offer" by one agent, followed by an "accept" or "decline" by another. While offering and accepting may be a two-step process, it very well may not be. In between A's offer and B's acceptance, B may ask A what resources the task would require, how long it will take A to do the task, or whether the offer will still stand if B delays acceptance. In any of these cases, there could be any number of subsequent exchanges between A and B until the acceptance (or non-acceptance) of the original offer. This constitutes "emergent" conversation in that the exact content, number, and sequence of exchanges is not known in advance, but the conversations do follow some structure or guiding principles that can be explicitly described and specified. Some of these are characteristic of conversation in general, and some are specific to the type of conversation policy and the state of that policy.

To provide a useful agent design tool, we are assessing the inventory of current conversation policies to see where it needs to be expanded. It is also important to have a specification of the precise semantics of the speech acts involved, since the emergent dialogues are defined around landmark speech acts, where the agents are assumed to have the beliefs, goals, intentions, and commitments entailed by the semantics of those speech acts. We will also need some way for the agent designer to balance the usefulness of emergent dialogues against the overall reliability and efficiency of the system.

3.2. Establishment of fine-grained extensible agent security policies

Though Java is currently the most popular and arguably the most security-conscious mainstream language for agent development, it fails to address many of the unique challenges posed by agent software. Moreover, today's security policies are typically hard-coded and do not allow the degree of configurability, extensibility, and fine-grained access control required by agent-based systems. We are collaborating with researchers at Sun Microsystems on emerging advanced Java security models that separate policy from mechanism. Our research will establish the robust, extensible, industry-wide, agent-specific security policies required for the most demanding settings.

To support our vision, we must provide support in the SDT for at least the following basic security requirements (Neuenhofen & Thompson, 1998). A protocol should be provided whereby two arbitrary agents can reliably verify each other's identity, as well as the authority by which they are acting. Two arbitrary agents should be able to set up a secure communication link, enabling them to safely exchange confidential information within the constraints of pre-defined profiles. The resource usage of mobile agents should be constrained at a fine-grained level and accounted for by the hosting agent system. It should be safe for mobile agents and hosting agent systems to migrate agents from one network node to another. Finally, through the use of secure transparent Java checkpointing, agent mobility should be available "anytime" at the demand of the server or the agent, rather than only at specifically pre-determined entry points.

Advances in the Java security model. The security model in Sun’s Java Development Kit (JDK) is rapidly evolving to provide the increased flexibility required while maintaining the "sandbox" and signed applet capabilities provided in the architecture of current versions of JDK software. The security model in the new JDK software is permission-based. Unlike the "all or nothing" approach typical in previous releases, Java applets and applications can be given varying amounts of access to system resources, based upon security policies created by the developer, system or network administrator, or even the end user. For example, using the SDT a developer will be able to customize a security policy so that certain agents in an application can have greater access to resources than others.
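The permission-based model can be illustrated with the JDK 1.2-style policy file syntax. The signer names, code locations, and permission targets below are illustrative, not a policy the SDT actually ships:

```
// Trusted, signed agent code: broader access to scratch space and the broker.
grant signedBy "agentDeveloper", codeBase "file:/agents/planner.jar" {
    permission java.io.FilePermission "/tmp/agent-scratch/-", "read,write";
    permission java.net.SocketPermission "broker.example.com:9000", "connect";
};

// Untrusted agent code: read a single system property and nothing else.
grant codeBase "file:/agents/untrusted/-" {
    permission java.util.PropertyPermission "java.version", "read";
};
```

Because the policy lives in a text file rather than in a SecurityManager subclass, the administrator or end user can tighten or relax an agent's access without recompiling anything.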

In future iterations of Java's security model, security policies will be distinct from the implementation mechanism. Policies will be expressed in a persistent format such as ASCII text so that they can be modified and displayed by any tool that supports the policy syntax specification. This allows policies to be configurable, flexible, fine-grained, and extensible. Developers of applications (such as agents) will no longer have to subclass SecurityManager and hard-code the application's policies into the subclass. Agents will be able to use the policy file and the extensible Permission object to build an application whose security policy can change without requiring changes to source code. We are specializing and evaluating Sun's extensive research in basic security mechanisms and policies to learn how to handle the special difficulties raised by software agents in the areas of agent authentication, secure communication, resource management, and mobility.
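The extensibility of the Permission object can be sketched by subclassing `java.security.BasicPermission`, which supplies hierarchical names and trailing-wildcard matching for free. The `AgentPermission` class and its permission names are hypothetical:

```java
import java.security.BasicPermission;

// Sketch of an extensible permission type: subclassing BasicPermission yields
// wildcard-capable, agent-specific permissions without hard-coding policy into
// a SecurityManager subclass. Permission names here are illustrative.
public class AgentPermission extends BasicPermission {
    public AgentPermission(String name) {
        super(name);
    }
}
```

A policy file could then grant `AgentPermission "agent.*"` to trusted agent code, and the standard `implies` check would cover `agent.migrate`, `agent.clone`, and so on.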

Agent authentication. Reliable authentication of all involved parties is the basis for nearly every security mechanism that is used in agent-based scenarios, and a good approach needs to satisfy at least two criteria (Neuenhofen & Thompson, 1998). When agent Alice wants to verify agent Bob’s identity, they will engage in an authentication protocol. After completing the necessary steps, Alice should be highly confident that Bob either is or is not who he says he is, meaning the outcome of the protocol should be a clear acceptance or rejection. Additionally, it must be infeasible for anybody but Bob to get approved by the authentication procedure. Different agents will require different levels of authentication, depending on the kinds of risks involved.
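A nonce-based challenge-response protocol of this kind can be sketched with the standard `java.security` APIs: Alice issues fresh random data, Bob signs it with his private key, and Alice verifies the signature with Bob's public key, yielding a clear accept or reject. The class name and protocol framing are illustrative:

```java
import java.security.GeneralSecurityException;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.SecureRandom;
import java.security.Signature;

// Sketch of challenge-response authentication: signing a fresh nonce proves
// possession of the private key and defeats replay of old responses.
public class ChallengeResponse {
    public static boolean authenticate() {
        try {
            KeyPairGenerator kpg = KeyPairGenerator.getInstance("DSA");
            kpg.initialize(1024);
            KeyPair bob = kpg.generateKeyPair();       // Bob's long-term key pair

            byte[] nonce = new byte[20];
            new SecureRandom().nextBytes(nonce);       // Alice's fresh challenge

            Signature signer = Signature.getInstance("SHA1withDSA");
            signer.initSign(bob.getPrivate());         // Bob signs the challenge
            signer.update(nonce);
            byte[] response = signer.sign();

            Signature verifier = Signature.getInstance("SHA1withDSA");
            verifier.initVerify(bob.getPublic());      // Alice checks the response
            verifier.update(nonce);
            return verifier.verify(response);          // clear accept or reject
        } catch (GeneralSecurityException e) {
            return false;
        }
    }
}
```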

Through enhanced core API support for X.509v3 certificates in future versions of Java, the foundations for standard public-key authentication between agents will have been laid. In addition to the established concepts of agent authorization and certificate signing by human-managed certificate management and certificate authority systems, we will evaluate the notion of flexible, adaptive certificate management by chains of security agents. Our work will complement that of Finin and others who are seeking to extend KQML with additional language security primitives: we are seeking to establish a foundational starter set of Java policies for agent security, whereas they are seeking to allow agents to communicate and reason about such policies by elevating the visibility of security mechanisms to the "knowledge level."

Special consideration will be given to mobile agents, which would be at risk from potentially hostile hosts when traveling with private keys. To get around this problem, we will evaluate the general idea of performing a function on dynamic data provided by the authenticating party, since this effectively implements replay detection and at the same time enables a zero-knowledge proof in the form of a transformation on the originally submitted data. Note that the function does not necessarily have to be tied to encryption.

Another issue in agent authentication is presented by the need for different kinds of IDs to authenticate the agent as well as the people who created it and authorized its actions. The latter is important for liability issues that may arise from misbehaving agents, or from users trying to revoke transactions conducted through agents acting on their behalf. We will also implement the concept of sub-IDs for agent clones.

Secure communication. Once two parties desiring a secure communications channel have reliably authenticated each other, they need a way to keep third parties from listening in (Neuenhofen & Thompson, 1998). This is typically achieved through encryption. Although asymmetric encryption offers better security than symmetric approaches such as DES (the Data Encryption Standard), it comes with a significant performance penalty. A practical compromise is to exchange a DES key using asymmetric encryption and then switch to the symmetric algorithm. Keys generated this way are called session keys, since they are generally used only for the encryption and decryption of a single session. This approach offers a higher level of security, since a compromised key affects only the current session.
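The hybrid scheme can be sketched with the `javax.crypto` APIs: a DES session key is wrapped under the receiver's RSA public key, and the cheap symmetric cipher then carries the traffic. The class name and message framing are illustrative (and DES itself is shown only because the text uses it as its example; it is far too weak for modern use):

```java
import java.nio.charset.StandardCharsets;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;

// Sketch of hybrid encryption: RSA carries the session key once; DES carries
// the session traffic cheaply thereafter.
public class SessionKeyExchange {
    public static String roundTrip(String message) {
        try {
            KeyPairGenerator kpg = KeyPairGenerator.getInstance("RSA");
            kpg.initialize(2048);
            KeyPair rsa = kpg.generateKeyPair();       // receiver's key pair

            SecretKey session = KeyGenerator.getInstance("DES").generateKey();

            // Sender wraps the session key under the receiver's public key...
            Cipher wrap = Cipher.getInstance("RSA/ECB/PKCS1Padding");
            wrap.init(Cipher.WRAP_MODE, rsa.getPublic());
            byte[] wrappedKey = wrap.wrap(session);

            // ...and encrypts the traffic under the symmetric session key.
            Cipher enc = Cipher.getInstance("DES/ECB/PKCS5Padding");
            enc.init(Cipher.ENCRYPT_MODE, session);
            byte[] ciphertext = enc.doFinal(message.getBytes(StandardCharsets.UTF_8));

            // Receiver unwraps the session key with its private key, then decrypts.
            Cipher unwrap = Cipher.getInstance("RSA/ECB/PKCS1Padding");
            unwrap.init(Cipher.UNWRAP_MODE, rsa.getPrivate());
            SecretKey recovered =
                (SecretKey) unwrap.unwrap(wrappedKey, "DES", Cipher.SECRET_KEY);

            Cipher dec = Cipher.getInstance("DES/ECB/PKCS5Padding");
            dec.init(Cipher.DECRYPT_MODE, recovered);
            return new String(dec.doFinal(ciphertext), StandardCharsets.UTF_8);
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}
```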

Again, the problem of mobile agents carrying private keys might seem to arise here, but the situation is actually slightly different. The agents engaging in a conversation may in this context be viewed as stationary, since they are not moving while conversing. So asymmetric encryption could be used, provided that the private keys involved are generated as needed and discarded before the migration of a mobile agent actually occurs. Symmetric session keys might again be used on top of the asymmetric scheme, for better performance and security.

Resource management. Monitoring the resource usage of mobile agents is important for several reasons (Neuenhofen & Thompson, 1998). With regard to security, it is mandatory that agents not have unrestrained access to system resources such as the hard disk. Additionally, when resource usage is not tracked on a per-agent basis, it becomes difficult to detect and interrupt denial-of-service conditions, in which poorly programmed or malicious agents clog up one or more resources. Finally, tracking of resource usage serves as the foundation of any accounting and billing mechanisms that the agent system might use to calculate charges for the agents it is hosting.

Although the changes currently contemplated for the Java security model will be a tremendous boon in some areas of resource management, we anticipate that agent developers will require ever greater levels of flexibility and that host systems will need ever greater protection against vulnerabilities that could be exploited by malicious agents. Some of these requirements may ultimately demand changes to the virtual machine. For example, while new iterations of the Java security model are expected to support configurable directory access by supplying the equivalent of access control lists to the Java SecurityManager, there is no way to impose limits on how much disk storage may be consumed, how many I/O operations may be performed, or how many simultaneous print jobs may be run by agents. Nor are there ways of controlling thread and process priorities, memory allocation, or even basic functions such as the number of windows that can be opened. A unique opportunity of this research will be to explore techniques for the dynamic negotiation of resource constraints between agents and the host.
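The per-agent tracking argued for above can be sketched as a simple accountant that charges each agent against a quota and refuses requests beyond it; the same ledger supports both denial-of-service containment and billing. The `ResourceAccountant` class is hypothetical:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of per-agent resource accounting: each agent is charged
// against a quota; a request beyond quota is refused, and the ledger doubles
// as the basis for billing.
public class ResourceAccountant {
    private final long quotaBytes;
    private final Map<String, Long> usedByAgent = new HashMap<>();

    public ResourceAccountant(long quotaBytes) {
        this.quotaBytes = quotaBytes;
    }

    // Returns true and records the charge if the agent stays within quota.
    public synchronized boolean charge(String agentId, long bytes) {
        long used = usedByAgent.getOrDefault(agentId, 0L);
        if (used + bytes > quotaBytes) return false;  // refuse: quota exhausted
        usedByAgent.put(agentId, used + bytes);
        return true;
    }

    public synchronized long used(String agentId) {
        return usedByAgent.getOrDefault(agentId, 0L);
    }
}
```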

Mobility. Until recently, each mobile agent system has defined its own approach to agent mobility. Though new proposals such as the Mobile Agent Facility, currently in the final stages of approval at the OMG, are a step forward, some of the required elements of security cannot be implemented without foundational support in the Java language standard. The ultimate goal is to define a set of standard underlying Java security policies and mechanisms that will make agent mobility as safe as possible for all participating parties. A mobile agent may need to concern itself with being shipped off to the wrong address, or with discovering that the destination is a hostile environment or that expected resources are not available (Neuenhofen & Thompson, 1998). Agent hosts may become unavailable at a moment's notice, and the agent may need to immediately migrate to a safe place or "die." There is also the very real possibility of being inspected, and robbed of information, by unauthorized eavesdroppers while the agent is essentially defenseless in its passive travel state. The hosting systems likewise have concerns of their own, such as whether to admit malicious or poorly programmed mobile agents onto their sites.

To provide the most flexible and robust approach to these problems, we believe that agent mobility must be made fully transparent. Mobility must become an "anytime" concept, meaning that an agent can in principle (and in accordance with its unique policies) move or be moved on demand, at an arbitrary point in its execution.

The standard term for the act of transparently saving the state of a running program so that it may be reconstructed later is checkpointing (Plank, 1997). One of the powerful features of Java as a programming language is that its bytecode format enables checkpointing in a machine-independent form, allowing state in principle to be restored on machines of differing architecture (Puening & Plank, 1997). A similar but somewhat less general approach was originally implemented in General Magic's Telescript language (White, 1997). Our research will require experimental modification of a Java Virtual Machine. We expect to perform our initial work using "Kaffe," a virtual machine for which the complete source code is publicly available.

Providing such a transparent mechanism for agent mobility requires that the method stack of the running agent be securely transported along with the agent from the current host to the new host. The new host needs to be sure that alterations en route will not compromise its integrity. The agent needs to deal with issues of reliably releasing resources on the old host, acquiring them on the new one, and handling the situation gracefully if expected resources are unavailable.

Transparent mobility would be extremely useful where long-running or long-lived agents need, for reasons external to the agents themselves, to be moved from one host to another. Different policies could be specified for different agents, ranging from informed consent, to notification, to complete transparency to the agent being moved. In principle, such a transparent mechanism would allow the agents to continue running without any loss of their ongoing computation. The same mechanism could also be used to replicate agents without their explicit knowledge, allowing the support system to replicate agents and execute them, possibly on different hosts, for safety, redundancy, or other reasons.
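The gap that motivates VM-level support can be made concrete: standard Java serialization already captures an agent's field state in a machine-independent byte stream that can be shipped to another host, but it cannot capture the method stack, which is exactly what "anytime" checkpointing requires. A data-only sketch, with hypothetical class names:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

// Data-only checkpoint sketch: serialization preserves an agent's fields but
// not its method stack; stack capture is what requires VM support.
public class Checkpoint {
    public static class AgentState implements Serializable {
        private static final long serialVersionUID = 1L;
        public final String task;
        public final int progress;
        public AgentState(String task, int progress) {
            this.task = task;
            this.progress = progress;
        }
    }

    public static byte[] save(AgentState state) {
        try {
            ByteArrayOutputStream bytes = new ByteArrayOutputStream();
            ObjectOutputStream out = new ObjectOutputStream(bytes);
            out.writeObject(state);
            out.close();
            return bytes.toByteArray();  // could be shipped to another host
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }

    public static AgentState restore(byte[] data) {
        try {
            ObjectInputStream in =
                new ObjectInputStream(new ByteArrayInputStream(data));
            return (AgentState) in.readObject();
        } catch (IOException | ClassNotFoundException e) {
            throw new RuntimeException(e);
        }
    }
}
```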

3.3. Development of an open extensible Java toolkit employing mediating representations to simplify security and conversation policy design while guaranteeing robustness

Merely constructing theories leaves open the question of how these accomplishments can inform actual agent implementations in non-research environments. How can we ensure that the developers of agent-based systems incorporate these new theoretical structures into their systems? Full appreciation of theoretical developments would require sophisticated knowledge of disciplines not normally part of a developer's skill set. Combining mediating graphical representations with rigorous logical support in the CDT and SDT is a powerful idea. The internal logic and external representations will be modular and replaceable to allow for the continued evolution of agent theory and practice. Our objective is to create a toolkit that is rigorous and yet usable without extensive training in logic and verification techniques. In most cases, we expect that designers will be able to simply specialize the existing conversation and security policies and protocols in the toolkit's "starter set."

Logical design and architecture of the CDT. Formally, the CDT will implement a species of heterogeneous reasoning system (HRS) specialized to support reasoning with the sorts of representations and deductions that are relevant to the semantics and pragmatics of agent dialogues. The basic concept of a heterogeneous reasoning system has been described in several places (Allwein & Barwise, 1996; Barker-Plummer & Greaves, 1994; Barwise & Etchemendy, 1994). In brief, an HRS is a composite reasoning support system which includes multiple component logical subsystems, each with its own syntax, semantics, and proof theory, together with inference rules which operate between the different subsystems. The goal of an HRS is to provide a logically rigorous environment in which reasoning using multiple different representations can proceed simultaneously, and which supports the logically sound transfer of intermediate results among the component reasoning systems.

Stanford’s Openproof system is a state-of-the-art framework that allows the user to build a custom HRS for an identified domain. Openproof includes implementations of several simple types of logical subsystems, both graphically-based (for reasoning involving, e.g., Venn diagrams) and sententially-based (for reasoning using the representations of classical logic). Importantly, though, Openproof also includes a sophisticated framework for linking the various component subsystems together to yield heterogeneous proof trees. It contains an architecture for proof managers for different logics, and a set of pre-defined inference rules which bridge the different deductive subsystems. Openproof is implemented in Java, and is designed around an architecture (based on Java Beans) which allows different user-defined deductive subsystems to be smoothly integrated into the overall HRS framework.

Within the CDT, we will provide at least the following types of deductive systems to the user:

 

1. A Fitch-style introduction/elimination deductive system for standard first-order logic. This will allow the user of the CDT to reason using classical logical techniques. The first-order subsystem will be augmented with an automatic theorem-prover for a full first-order language with equality; one of the personnel (Greaves) has previously written such a theorem-prover, based on the matrix method, and several public-domain theorem-provers, including Otter from the Argonne National Laboratory, can also be used within the Openproof framework. The basic Fitch-style module will be supplied by Stanford as part of the Openproof package.

2. A Petri net deductive system. Petri nets are a common graphical formalism for modeling systems which are driven by discrete events, and whose analysis essentially involves issues in concurrency and synchronization. They have a fairly simple and intuitive semantics, and a well-understood set of update rules. Thus, they are an important tool with which to investigate communication and coordination in agent systems (Chauhan, 1997). We will implement a Petri net subsystem as a plug-in module for Openproof.

3. An Enhanced Dooley Graph (EDG) deductive system. Enhanced Dooley Graphs are a type of graphical formalism developed by Parunak (1996), and are specifically designed to capture the pragmatic and speech-act properties of agent conversation systems. They are similar in many ways to state-transition networks, but explicitly allow for side conversations and high-level dialogue patterns. We will implement an EDG subsystem as a plug-in module for Openproof.

4. A Venn deductive system. This is an enhancement of the classical Venn diagram reasoning system, along the lines of Venn-II (Shin, 1994). It is a very general system for reasoning about groups of objects and their set membership relations, and so will support simple reasoning about the properties of groups of agents. This module will be supplied by Stanford as part of the basic Openproof package.
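The Petri net subsystem in item 2 rests on a simple semantics that can be sketched directly: a marking assigns a token count to each place, a transition is enabled when every input place holds a token, and firing consumes input tokens and produces output tokens. The `PetriNet` class below is an illustrative sketch, not the Openproof plug-in:

```java
// Minimal Petri net sketch: marking[p] is the token count at place p;
// inputs[t] and outputs[t] list the input and output places of transition t.
public class PetriNet {
    private final int[] marking;
    private final int[][] inputs;
    private final int[][] outputs;

    public PetriNet(int[] marking, int[][] inputs, int[][] outputs) {
        this.marking = marking.clone();
        this.inputs = inputs;
        this.outputs = outputs;
    }

    // A transition is enabled when every input place holds at least one token.
    public boolean enabled(int t) {
        for (int p : inputs[t]) {
            if (marking[p] < 1) return false;
        }
        return true;
    }

    // Firing consumes one token per input place, produces one per output place.
    public boolean fire(int t) {
        if (!enabled(t)) return false;
        for (int p : inputs[t]) marking[p]--;
        for (int p : outputs[t]) marking[p]++;
        return true;
    }

    public int tokens(int p) {
        return marking[p];
    }
}
```

Modeling, say, an offer-then-accept sequence as places (offer pending, offer made, offer accepted) and transitions between them makes the ordering constraints of a conversation policy mechanically checkable.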

 

Through the integration of these representational and deductive tools, the CDT will support reasoning about several different types of properties of agent communication languages. For example, one of our tasks is to use the Smith and Cohen (1995; 1998) semantics to verify that the conversation policies currently contemplated for the KAoS system result in actual team formation (in the Cohen/Smith sense). Currently, deriving these results requires familiarity with technical modal logic, plus a fair amount of cleverness at logic proof techniques. However, by importing the same problems into the CDT framework, we expect to see large decreases in proof complexity, coupled with significant increases in proof readability and usability for those not trained in logic. These sorts of results were observed in studies of the Hyperproof logic environment when compared with more traditional methods of logic instruction, and we believe that the Hyperproof results can be extrapolated to the environment we propose with the CDT, as the reasoning techniques will be sufficiently similar. We therefore expect that the use of structured graphics to carry some of the reasoning load will result in a dramatically simpler and more intuitive environment in which to design and reason about the large-scale properties of agent communication languages. This will, in turn, yield an environment which can be used by agent developers to experiment with and explore the formal properties of their proposed conversation policies.

Mediating representations in the SDT. Using the SDT to specify the security parameters and then generate the associated policies will be very straightforward from a user interface point-of-view. We expect the SDT user interface to be mainly of the "intelligent form-filler" type for the initial versions of the tool, though we think that more sophisticated representations would be useful for runtime monitoring of things like security domain properties and flags raised by intrusion detection mechanisms. Once the security policy has been generated for a particular agent, its behavior will be regulated by standard Java security mechanisms.

3.4. Providing support for diverse levels of agent reasoning and conversational ability, and interoperability between different agent frameworks

Our tools are being designed to support different levels of conversational sophistication, from simple to emergent, in a principled way. Interoperability between different kinds of agents is established through the use of common policies that govern agent conversations and security. We expect that the continuing evolution of agent technology will result in the creation of agents of widely varying degrees of sophistication. It is unreasonable to require that all of these agents, from the simplest to the most complex, be able to reason at the advanced levels required by the semantics of many of these protocols. Our characterization above of conversation policies as a set of required states or "landmarks" in a dialogue, with different types of sub-conversations that transition between the states, will provide a communications framework which can support diverse levels of reasoning and communication sophistication. This is an area not directly addressed in a general way by any other effort of which we are aware.

4. CONCLUSIONS

The computer scientist Donald Knuth once claimed that "a really powerful tool can change its user." That, in a nutshell, is what we hope to achieve with our agent design toolkit: we believe our research will change the process of agent development. As computer scientists, our proposal casts us in a familiar role: we create the tools which implement the underlying theory, and these tools will in turn leverage and extend the capabilities of the domain experts who will develop the individual agents. Just as the advent of verification tools led to more dependable digital circuits, we expect that our tools will lead to agent-based systems which are more robust and reliable.

We also expect that this research will feed back into the theoretical debates currently surrounding the foundations of agent communication and behavior. By creating powerful tools which allow users to explore the semantic and pragmatic dimensions of specific decisions about planning, security or communications policy, we will have thereby created a mechanism with which to explore the theories themselves. In this way, we anticipate that our tools will be useful as a testbed for different theories of agent behavior. Our focus on interoperability between different frameworks is designed to encourage this sort of attitude toward our tools, and thereby to stimulate the further development in these theories.

 

Acknowledgements

We extend our appreciation to the faculty (David A. Umphress, William Bricken) and students of the Seattle University software engineering program who are implementing the first version of KPL and PDT: Nazeer Ahmed, Dave Brownell, Paul Carmichael, Justin Cole, Greg Malley, Neil Olsen, Rob Parris, Russ Perkinson, Dan Peters, and Carlo Villongco. Thanks to Michael Brooks of Sun Professional Services for permission to adapt selected passages on security requirements from a paper by Kay Alexander Neuenhofen and Matthew Thompson.

References

 

Allwein, G., & Barwise, J. (Eds.). (1996). Logical Reasoning with Diagrams. New York: Oxford University Press.

Barker-Plummer, D., & Greaves, M. (1994). Architectures for heterogeneous reasoning: On interlinguae. Proceedings of the First Conference on Inference in Multimedia and Multimodal Interfaces (IMMI-1). Edinburgh, Scotland.

Barwise, J., & Etchemendy, J. (1994). Hyperproof. Stanford, CA: CSLI Publications.

Bichindaritz, I., & Sullivan, K. (1998). Reasoning from knowledge supported by more or less evidence in a computerized decision support system for bone-marrow post-transplant care. Proceedings of the AAAI 1998 Spring Symposium on Multimodal Reasoning. Stanford, CA: Stanford University, AAAI Press.

Bradshaw, J. M. (1997). An introduction to software agents. In J. M. Bradshaw (Ed.), Software Agents. (pp. 3-46). Cambridge, MA: AAAI Press/The MIT Press.

Bradshaw, J. M., Carpenter, R., Cranfill, R., Jeffers, R., Poblete, L., Robinson, T., Sun, A., Gawdiak, Y., Bichindaritz, I., & Sullivan, K. (1997). Roles for agent technology in knowledge management: Examples from applications in aerospace and medicine. Proceedings of the AAAI Symposium on Knowledge Management. Stanford, CA: Stanford University, March 24-26.

Bradshaw, J. M., Covington, S. P., Russo, P. J., & Boose, J. H. (1990). Knowledge acquisition for intelligent decision systems: integrating Aquinas and Axotl in DDUCKS. In M. Henrion, R. Shachter, L. N. Kanal, & J. Lemmer (Ed.), Uncertainty in Artificial Intelligence. (pp. 255-270). Amsterdam: Elsevier.

Bradshaw, J. M., Dutfield, S., Benoit, P., & Woolley, J. D. (1997). KAoS: Toward an industrial-strength generic agent architecture. In J. M. Bradshaw (Ed.), Software Agents. (pp. 375-418). Cambridge, MA: AAAI Press/The MIT Press.

Bradshaw, J. M., Gawdiak, Y., Cañas, A., Carpenter, R., Chen, J., Cranfill, R., Gibson, J., Jeffers, R., Mathé, N., Poblete, L., Robinson, T., Sun, A., Suri, N., Wolfe, S., & Bichindaritz, I. (1998). Toward an intelligent aviation extranet. Proceedings of HCI-Aero 98. Montreal, Quebec, Canada, May 27-29.

Bradshaw, J. M., Holm, P., Kipersztok, O., Nguyen, T., Russo, P. J., & Boose, J. H. (1991). Intelligent interoperability in DDUCKS. Working Notes of the AAAI-91 Cooperation Among Heterogeneous Intelligent Systems Workshop. Anaheim, California, July.

Chauhan, D. (1997) JAFMAS: A Java-based Agent Framework for Multiagent Systems Development and Implementation. ECECS Department, University of Cincinnati.

Cohen, P. R., & Levesque, H. (1997). Communicative actions for artificial agents. In J. M. Bradshaw (Ed.), Software Agents. (pp. 419-436). Cambridge, MA: The AAAI Press/The MIT Press.

Finin, T., Labrou, Y., & Mayfield, J. (1997). KQML as an agent communication language. In J. M. Bradshaw (Ed.), Software Agents. (pp. 291-316). Cambridge, MA: The AAAI Press/The MIT Press.

FIPA (1997). FIPA 97 Specification. Reston draft (ver. 1.0), April 1997. Online reference at http://www.cselt.stet.it/fipa/spec/fipa97reston.htm.

Ford, K. M., Bradshaw, J. M., Adams-Webber, J. R., & Agnew, N. M. (1993). Knowledge acquisition as a constructive modeling activity. In K. M. Ford & J. M. Bradshaw (Eds.), Knowledge Acquisition as Modeling. (pp. 9-32). New York: John Wiley.

Genesereth, M. R. (1997). An agent-based framework for interoperability. In J. M. Bradshaw (Ed.), Software Agents. (pp. 317-345). Cambridge, MA: The AAAI Press/The MIT Press.

Genesereth, M. R., & Fikes, R. (1992). Knowledge Interchange Format Version 3.0 Reference Manual. Logic Group Report, Logic-92-1. Stanford University Department of Computer Science, June.

Greaves, M. (1997). The philosophical status of diagrams. Ph.D. Dissertation, Stanford University.

Labrou, Y. (1996). Semantics for an Agent Communication Language. Doctoral Dissertation, University of Maryland, Baltimore County.

Neuenhofen, K. A., & Thompson, M. (1998). Contemplations on a secure marketplace for mobile Java agents. SunLabs Internal Technical Report. Mountain View, CA: Sun Microsystems.

Nwana, H. S., Ndumu, D. T., & Lee, L. C. (1998). ZEUS: An advanced tool-kit for engineering distributed multi-agent systems. Proceedings of PAAM ’98. London, England, March.

Parunak, H. V. D. (1996). Visualizing agent conversations: Using enhanced Dooley graphs for agent design and analysis. Proceedings of ICMAS-96.

Plank, J. S. (1997). An overview of checkpointing in uniprocessor and distributed systems focusing on implementation and performance. Technical report UT-CS-97-372. Department of Computer Science, University of Tennessee, July.

Puening, M., & Plank, J. S. (1997). Checkpointing Java. Online reference at http://www.cs.utk.edu/~plank.

Russell, S. (1985). The compleat guide to MRS. Stanford Knowledge Systems Laboratory, Report No. KSL-85-12. Stanford University Computer Science Department.

Searle, J. R. (1969). Speech Acts: An essay in the philosophy of language. Cambridge, England: Cambridge University Press.

Shin, S.-J. (1994). The Logical Status of Diagrams. New York: Cambridge University Press.

Smith, I. A., & Cohen, P. R. (1995). Toward a semantics for a speech act based agent communications language. In T. Finin & J. Mayfield (Eds.), Proceedings of the CIKM Workshop on Intelligent Information Agents. Baltimore, MD: ACM SIGART/SIGIR.

Smith, I. A., Cohen, P. R., Bradshaw, J. M., Greaves, M., & Holmback, H. (1998). Designing conversation policies using joint intention theory. Proceedings of the Third International Conference on Multi-Agent Systems (ICMAS-98). Paris, France, July 2-8.

Sun Microsystems (1998). Java Security Architecture. Online references at http://java.sun.com/products/jdk/1.2/docs/guide/security/spec/security-specTOC.doc.html and http://java.sun.com/security/handout.html.

White, J. (1997). Mobile agents. In J. M. Bradshaw (Ed.), Software Agents. (pp. 437-472). Cambridge, MA: The AAAI Press/The MIT Press.