• Criteria for Building Interface Agents
  • Approaches to Building Interface Agents
  • Rule-Based Approach
  • Knowledge-Based Approach
  • Machine Learning Approach
  • Machine Learning Example
  • Using Intelligent Agents
  • Interoperation of Agents
  • Group Reasoning of Agents
  • Distributed Communicating Agents
  • Agent Issues

    Criteria for Building Interface Agents

    Competence

    How does an agent acquire the knowledge it needs to decide when to help the user, what to help the user with and how to help the user?

    Trust

    How can we ensure that the user feels comfortable delegating tasks to an agent?

    Approaches to Building Interface Agents

    1. Rule-Based Approach

    The rule-based approach uses a collection of user-programmed rules for processing information related to a particular task; an end-user programming facility lets users write these rules themselves. This approach has several problems. The agent acquires its knowledge about how, when, and how much to help only through extensive programming by the user, which defeats the purpose of an agent as a tool that saves the user effort. The user must also recognize the opportunity for using an agent, program the rules to give it explicit knowledge, and maintain those rules over time.
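
    A minimal sketch of this approach is shown below; the rules, fields, and task (sorting incoming mail) are hypothetical illustrations, not part of any particular system.

    # A minimal sketch of the rule-based approach: the user writes every rule
    # by hand, and the agent simply applies them. Names and fields here are
    # hypothetical, not taken from any particular system.

    def make_rule(condition, action):
        """Bundle a user-written condition with the action to take."""
        return {"condition": condition, "action": action}

    # Rules the end user must program (and maintain) themselves.
    user_rules = [
        make_rule(lambda msg: msg["sender"] == "boss@example.com",
                  lambda msg: print("Flag as urgent:", msg["subject"])),
        make_rule(lambda msg: "newsletter" in msg["subject"].lower(),
                  lambda msg: print("File under 'Newsletters':", msg["subject"])),
    ]

    def rule_based_agent(message, rules):
        """Apply the first matching user-programmed rule; do nothing otherwise."""
        for rule in rules:
            if rule["condition"](message):
                rule["action"](message)
                return
        # No rule matches: the agent has no knowledge of its own to fall back on.

    rule_based_agent({"sender": "boss@example.com", "subject": "Budget review"}, user_rules)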

    2. Knowledge-Based Approach

    In this approach, the interface agent is given domain-specific background knowledge about the application and the user, which it uses to recognize the user's plans and contribute to the user's tasks. This approach has several problems as well. Where the rule-based approach requires a great deal of work from the user, this approach requires a huge amount of work from a knowledge engineer, who must equip the interface with large amounts of knowledge about the application, the domain, and how the agent can help the user. Because the agent's knowledge is fixed and cannot be customized to individual users, its usefulness may be limited: in highly personalized domains, the knowledge engineer cannot anticipate how best to aid each user.

    3. Machine Learning Approach

    This approach addresses the problems encountered by the rule-based and knowledge-engineered approaches: it requires less initial work and adapts over time. The agent acts as a personal assistant that cooperates with the user on a task while still allowing the user to override it. The agent learns by the following means (a minimal sketch of these learning channels follows the list):

    a) observing and imitating the user
    b) adapting based on user feedback
    c) being trained by the user through examples
    d) asking other agents for advice
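
    The skeleton below is a minimal, hypothetical illustration of these four learning channels as methods on an agent class; the method names and the simple list-based memory are assumptions, not a specific system's API.

    # A minimal skeleton of the four learning channels described above.
    # Peer agents are assumed to expose a suggest() method; everything here
    # is illustrative rather than a real agent framework.

    class LearningInterfaceAgent:
        def __init__(self):
            self.memory = []        # observed (situation, action) pairs
            self.feedback = []      # user corrections and approvals

        def observe(self, situation, user_action):
            """(a) Watch the user and remember what was done in each situation."""
            self.memory.append((situation, user_action))

        def incorporate_feedback(self, situation, suggested, accepted):
            """(b) Adjust future behaviour based on whether a suggestion was accepted."""
            self.feedback.append((situation, suggested, accepted))

        def train_on_example(self, situation, desired_action):
            """(c) Learn directly from a hypothetical example supplied by the user."""
            self.memory.append((situation, desired_action))

        def ask_peers(self, situation, peer_agents):
            """(d) Ask other agents what they would do in an unfamiliar situation."""
            return [peer.suggest(situation) for peer in peer_agents]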

    The machine learning approach uses memory-based reasoning combined with rules to model each user's habits, achieving a level of personalization previously unattainable without constant user intervention. However, these agents have problems of their own. A learning agent has a slow learning curve: it needs a sufficient number of examples before it can make accurate predictions, and it struggles with completely new situations. To address these problems, a new agent may learn from existing agents to get up to speed quickly. Over time, agents become selective about whom they learn from, trusting the suggestions of some agents more than others for particular classes of situations.

    An Example of a Machine Learning Agent

    Machine learning agents apply several approaches to acquire knowledge. Learning by demonstration involves continuously looking over the user's shoulder as the user performs actions. By monitoring the user's actions over time, the agent finds recurrent patterns and, depending on its confidence in its assessment, may offer to automate them.
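
    As a hedged illustration, the sketch below shows one simple way an agent might spot recurring patterns in a log of observed user actions; the window length, the threshold, and the example log are arbitrary assumptions.

    from collections import Counter

    # A minimal sketch of "learning by demonstration": scan the logged action
    # stream for short sequences that recur often enough to be worth automating.

    def recurring_patterns(action_log, window=2, min_count=3):
        """Return action sequences of length `window` seen at least `min_count` times."""
        counts = Counter(
            tuple(action_log[i:i + window])
            for i in range(len(action_log) - window + 1)
        )
        return [seq for seq, n in counts.items() if n >= min_count]

    log = ["open mail", "save attachment", "open mail", "save attachment",
           "reply", "open mail", "save attachment"]
    print(recurring_patterns(log))   # -> [('open mail', 'save attachment')]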

    The agent's confidence is assessed by measuring situation proximity: the distance between two situations is a weighted sum of the distances between their corresponding fields, and the closest matching situations in memory are gathered to compute a prediction for the new situation. The agent's confidence in that prediction depends on factors such as how many situations are in its memory and how close they are to the present one. The user can set two confidence thresholds to control the agent's behaviour: the tell-me threshold and the do-it threshold. The agent will make a suggestion if the confidence of its prediction exceeds the tell-me threshold, and will act autonomously if the confidence exceeds the do-it threshold.
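
    The sketch below illustrates this mechanism under assumed representations: a situation is a dict of fields, distance is a weighted sum of field mismatches, and confidence grows with the number and closeness of matching situations in memory. All names, weights, and threshold values are illustrative, not taken from a specific system.

    def distance(s1, s2, weights):
        """Weighted sum of mismatches between corresponding fields."""
        return sum(w * (s1.get(f) != s2.get(f)) for f, w in weights.items())

    def predict(new_situation, memory, weights, k=3):
        """Predict an action from the k closest remembered situations."""
        if not memory:
            return None, 0.0
        nearest = sorted(memory,
                         key=lambda m: distance(new_situation, m["situation"], weights))[:k]
        actions = [m["action"] for m in nearest]
        action = max(set(actions), key=actions.count)
        # More matches, and closer matches, give higher confidence.
        closeness = sum(1.0 / (1.0 + distance(new_situation, m["situation"], weights))
                        for m in nearest if m["action"] == action)
        return action, closeness / k

    TELL_ME, DO_IT = 0.4, 0.8      # user-set thresholds

    memory = [
        {"situation": {"sender": "boss", "topic": "budget"}, "action": "flag urgent"},
        {"situation": {"sender": "boss", "topic": "lunch"},  "action": "flag urgent"},
        {"situation": {"sender": "list", "topic": "news"},   "action": "file"},
    ]
    action, confidence = predict({"sender": "boss", "topic": "budget"}, memory,
                                 weights={"sender": 2.0, "topic": 1.0})
    if confidence >= DO_IT:
        print("Agent acts autonomously:", action)   # above the do-it threshold
    elif confidence >= TELL_ME:
        print("Agent suggests:", action)            # above the tell-me threshold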

    As an agent's experience base grows, so does its confidence in its predictions; the user, in turn, learns to trust the agent as it becomes more 'trustworthy'. Monitoring facilities further reassure the user about the accuracy of the agent's actions. The activity monitor is usually implemented as a caricature whose expressions indicate the agent's state, such as alert, thinking, or working. An explanation facility gives the user plain-English descriptions of why the agent suggested or performed an action, and an interface for browsing and editing the agent's memory lets the user correct misconceptions on the agent's part.

    Using Intelligent Agents

    Interoperation of Agents

    First, independent agents must be implemented. Learning agents use AI planning: they take logical expressions of goals as input and, using action schemata that describe the available resources, synthesize and execute plans, that is, sequences of actions that achieve those goals.
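
    The sketch below illustrates this idea with a toy, depth-limited forward-search planner over hypothetical action schemata; the schema names and the representation of a state as a set of facts are assumptions made for illustration only.

    # Goals are conditions (sets of facts), action schemata list preconditions
    # and add effects, and the planner searches for an action sequence that
    # makes the goal hold. Schemata here are invented examples.

    schemata = {
        "fetch_data":    {"pre": set(),           "add": {"have_data"}},
        "format_report": {"pre": {"have_data"},   "add": {"have_report"}},
        "send_report":   {"pre": {"have_report"}, "add": {"report_sent"}},
    }

    def plan(state, goal, schemata, depth=5):
        """Depth-limited forward search for a sequence of actions achieving `goal`."""
        if goal <= state:
            return []
        if depth == 0:
            return None
        for name, op in schemata.items():
            # Apply an action only if it is enabled and adds something new.
            if op["pre"] <= state and not op["add"] <= state:
                rest = plan(state | op["add"], goal, schemata, depth - 1)
                if rest is not None:
                    return [name] + rest
        return None

    print(plan(set(), {"report_sent"}, schemata))
    # -> ['fetch_data', 'format_report', 'send_report']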

    Second, to take advantage of diverse agents with different abilities, there is a growing demand for interoperation between agents. Integrating agents so that they can exchange information and services with other programs, independently of each agent's internal data structures and algorithms, requires a universal communication language that eliminates inconsistencies and arbitrary notational variations. The procedural approach treats communication as the exchange of procedural directives, which may require information about the recipient that is not available to the sender and cannot share information in both directions between sender and receiver. The declarative approach treats communication as the exchange of declarative statements such as definitions and assumptions.

    An agent communication language (ACL) consists of its vocabulary, an inner language called KIF (Knowledge Interchange Format), and an outer language called KQML (Knowledge Query and Manipulation Language). An ACL message is a KQML expression whose arguments are KIF sentences formed from the ACL's vocabulary.
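
    As a rough illustration, the sketch below assembles a message of this shape: a KQML performative whose :content argument is a KIF sentence. The agent names, the ontology, and the KIF sentence are invented for the example.

    # Build a KQML-style expression carrying a KIF sentence as its content.
    # This shows the general shape of an ACL message, not a real exchange.

    def kqml_message(performative, sender, receiver, kif_content, ontology):
        """Assemble a KQML expression whose :content is a KIF sentence."""
        return (f"({performative} :sender {sender} :receiver {receiver} "
                f":language KIF :ontology {ontology} :content {kif_content})")

    # "Ask agent-b whether machine m1 is idle", expressed as KQML around KIF.
    msg = kqml_message("ask-if", "agent-a", "agent-b", "(status m1 idle)", "factory")
    print(msg)
    # -> (ask-if :sender agent-a :receiver agent-b :language KIF :ontology factory :content (status m1 idle))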

    Group Reasoning in Multi-Agent Systems

    Once agents can interoperate using an ACL, their collective experience can be used to make decisions. Agents in multi-agent systems perform different activities. Using group reasoning, agents deliberate about problems based on their current collective knowledge: possible actions are analyzed by communicating claims and arguing in support of them based on the individual agents' experiences.

    Agreement over an action is subject to dispute, which is settled through feedback control systems, physical control systems, and feedback phenomena. In feedback control systems, conclusions about the protocol are fed back into the process that generated them. Physical control systems use numerical analysis to understand what is going on. Feedback phenomena relate back to human behaviour in deciding to take action. The beliefs held by individual agents dictate the arguments put forth, and the outcomes of disputes dictate what action to take, as well as creating new beliefs.
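
    Under strong simplifying assumptions, the sketch below illustrates only the basic flow of claims and support in a group decision: each agent backs a proposed action with a support score drawn from its own experience, and the best-supported action wins. Real argumentation and dispute-settling protocols are far richer than this.

    from collections import defaultdict

    def group_decision(claims):
        """claims: list of (agent_name, proposed_action, support) tuples."""
        support = defaultdict(float)
        for agent, action, strength in claims:
            support[action] += strength          # each agent argues for its claim
        return max(support, key=support.get)     # the best-supported action wins

    claims = [("agent-a", "reorder stock", 0.7),
              ("agent-b", "reorder stock", 0.4),
              ("agent-c", "wait", 0.6)]
    print(group_decision(claims))                # -> reorder stock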

    Distributed Communicating Agents (DCA)

    DCA allows expert systems to communicate with each other and with humans to solve problems. Agents use models of one another and of the resources available to them in order to cooperate, allowing shared use of databases, knowledge bases, models of the business environment, and process models.

    DCA solves problems using computational agents, each with its own domain-specific knowledge, and a number of users, each assisted by an interface agent or another agent that hands problems to one of these computational agents to solve. The computational agent proceeds by either (a sketch follows the list):

    1) solving the problem itself using local knowledge and reasoning, or
    2) decomposing the problem into subproblems, distributing the
    subproblems to the appropriate agent(s), and integrating the
    subproblem solutions returned by those agents
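
    The sketch below illustrates these two strategies with hypothetical agent and problem names: solve locally when possible, otherwise decompose, distribute, and integrate.

    # A minimal sketch of the two DCA strategies above. The toy decomposition
    # rule, the agent names, and the problems are invented for illustration.

    class ComputationalAgent:
        def __init__(self, name, local_knowledge, peers=None):
            self.name = name
            self.local_knowledge = local_knowledge   # problem -> solution
            self.peers = peers or []

        def solve(self, problem):
            # 1) Solve locally if the agent's own knowledge covers the problem.
            if problem in self.local_knowledge:
                return self.local_knowledge[problem]
            # 2) Otherwise decompose, distribute, and integrate the sub-solutions.
            subproblems = self.decompose(problem)
            parts = [self.delegate(sub) for sub in subproblems]
            return " + ".join(parts)

        def decompose(self, problem):
            return problem.split(" and ")            # toy decomposition rule

        def delegate(self, subproblem):
            for peer in self.peers:
                if subproblem in peer.local_knowledge:
                    return peer.solve(subproblem)
            return f"unsolved({subproblem})"

    pricing = ComputationalAgent("pricing", {"price the order": "quote: $120"})
    shipping = ComputationalAgent("shipping", {"schedule delivery": "ship on Friday"})
    broker = ComputationalAgent("broker", {}, peers=[pricing, shipping])
    print(broker.solve("price the order and schedule delivery"))
    # -> quote: $120 + ship on Friday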

    Agent Issues

    When developing and using agents, the following issues should be considered:

    Should information be filtered by agents?

    News clips delivered to a mailbox are still only accessible through that mailbox. A mailbox with 100 items in it, even if they are the best of millions, is still too much to be useful to someone with only five minutes to scan the news before their next meeting. Sorting the items into folders can be just as bad, because the information remains just as inaccessible.

    Should agents tell the truth?

    If an agent can serve the purposes of its user by lying to less intelligent agents, those agents may be manipulated into serving the interests of a less truthful or more intelligent agent.

    What will you trust agents to do?

    Given your previous habits, you may allow an agent to decide for you what you read, what kind of music you listen to, and who you interact with. This may lead to a detailed although narrow view of the world, instead of keeping you informed of a more general picture.

    Should agents be merely slaves to their users?

    Given that computers have the potential to learn so much more quickly than humans, should we disable the full potential of our agents so that agents will serve only our needs as opposed to their own interests?

    Will we rely too much on agents, assuming they will do what we ask of them?
