By: David Dattner (934265)
CPSC 547 - Intelligent Agents Class Presentation
Partners: Graham Neumann & Chris Marta.
More general information about Personal Software Assistants
Personal software assistants that help users find information in databases,
schedule calendars, or manage workflow will need significant customization for
each individual user. An example from the readings is an assistant that helps
schedule a user's calendar: it will have to know that user's scheduling
preferences.
- CAP (Calendar Apprentice)
An example of this is CAP (Calendar Apprentice), which learns its users'
scheduling preferences from experience. The readings summarize results from
about five years of experience, during which CAP learned several thousand
rules that characterize the scheduling preferences of its different users. It
is not difficult to imagine a future where assistants operate as software
secretaries, providing services at work and at home such as paying bills,
making travel arrangements, and locating information in electronic libraries.
Just as the success of a real secretary depends on his or her knowledge of the
employer's particular habits and goals, the agent's success depends on its
ability to learn and adapt to the user. A perfectly customizable software
assistant that will work for any individual is not a simple thing to create.
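As a rough illustration of the kind of condition-action rule CAP might learn, here is a short Python sketch; the attribute names and rule form are assumptions made for illustration, not CAP's actual rule language:

    # Hypothetical learned scheduling-preference rule (illustrative only).
    def group_meeting_rule(meeting):
        # IF the meeting has more than three attendees and is tagged as a
        # research-group meeting, THEN prefer a 60-minute afternoon slot.
        if len(meeting["attendees"]) > 3 and "research-group" in meeting["tags"]:
            return {"duration_minutes": 60, "preferred_time": "afternoon"}
        return None  # rule does not apply to this meeting

    meeting = {"attendees": ["a", "b", "c", "d"], "tags": ["research-group"]}
    print(group_meeting_rule(meeting))  # -> {'duration_minutes': 60, 'preferred_time': 'afternoon'}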
- explicit customizability vs. auto-learning
Many programs provide simple parameters that allow users to customize
behavior explicitly. We see this quite often with text editors, which let
users set default fonts, directories, and so on. Yet this approach is limited:
continually updating detailed settings is tiresome. Usually, the more
sophisticated an assistant is, the less the user needs to program preferences
explicitly. Instead, the software assistant should learn through experience,
just as humans do.
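To make the contrast concrete, here is a minimal Python sketch (hypothetical, not taken from the readings) of an explicitly set preference versus one inferred from observed behavior:

    from collections import Counter

    # Explicit customization: the user must state every preference up front.
    explicit_prefs = {"font": "Courier", "default_dir": "/home/user/docs"}

    # Learned customization: the assistant infers a preference from what the
    # user has actually been observed to do.
    def infer_default_dir(observed_save_dirs):
        # Guess the preferred directory as the one the user saves to most often.
        counts = Counter(observed_save_dirs)
        return counts.most_common(1)[0][0] if observed_save_dirs else None

    print(infer_default_dir(["/work", "/work", "/home"]))  # -> "/work"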
COACH (A Teaching Agent that Learns)
COACH (Cognitive Adaptive Computer Help) is a system that records user
experience to create personalized help. For example, if a user is learning a
new operating system or application, COACH watches the user's actions to build
an adaptive user model (AUM), which is used to choose appropriate advice. It is
important to note that, just like a coach on a football team, COACH does not
interfere with the user's actions but only supplies advice along the way.
Moreover, it decides how to convey its information, varying the description,
example, syntax, timing, topic, style, and level of help according to the
user's experience and proficiency.
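A minimal sketch of how an adaptive user model might drive the choice of help; the field names and thresholds below are assumptions made for illustration, not COACH's actual representation:

    # Hypothetical adaptive-user-model lookup: pick a style of help for a topic
    # based on recorded proficiency and how recently the topic was used.
    def choose_help(aum, topic):
        profile = aum.get(topic, {"proficiency": 0.0, "days_since_use": None})
        if profile["proficiency"] < 0.3:
            return "description"  # beginner: explain the concept
        if profile["days_since_use"] is not None and profile["days_since_use"] > 30:
            return "example"      # out of practice: show a short reminder example
        return "syntax"           # proficient: just the syntax summary

    aum = {"recursion": {"proficiency": 0.2, "days_since_use": 3}}
    print(choose_help(aum, "recursion"))  # -> "description"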
An Example: The Lisp Implementation of COACH
Developed by Dr. Ted Selker at IBM's Almaden Research
Center in California, this teaching assistant holds programming students by the
hand as they learn the intricacies of Lisp. In a test using IBM programmers
with no Lisp experience, student/Coach teams completed five times more training
exercises involving database function calls than did students who didn't have
the aid of an electronic tutor.
At the end of the training sessions, most Coach pupils said they liked Lisp;
their untutored counterparts weren't so enthusiastic. Skeptics may scoff at
agent hype--and the dearth of commercial programs--but if an agent can actually
make using Lisp an enjoyable experience, maybe anything is possible.
Selker, who is manager of USER (User Systems Ergonomics Research) at the IBM
center, has been evolving Coach along these lines over the past decade. Coach,
which is being converted from Lisp to C++, contains three independent but
related sets of knowledge to do its work. The first is its user model, where
the program builds and maintains information about a user's Lisp-programming
proficiency, his or her mistakes (and the chosen fixes), and what worked and
what didn't work in terms of coaching. Coach also tracks the rate of change in
each user's proficiency (and whether it's getting better or worse), and for
each learnable task, how long it's been since it was last used. If a user has
done something correctly in the past and has trouble with it now, Coach can
present an example from the user's own code.
- User Model
The user model is persistent--it remains associated with a user from session to
session. In a teaching environment, the user model could be made available to a
teacher to help him or her understand just where a student might be having
difficulties. In a commercial implementation, the Coach agent might pass the
user model on to a human customer-service representative if the agent
determines that a customer's problems are beyond the agent's capabilities.
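The description above suggests a per-user record that is saved between sessions; a minimal sketch of such a record (the field names are assumptions, not Coach's actual user model) might look like:

    import json

    # Hypothetical persistent user-model record for one user.
    user_model = {
        "user": "student-42",
        "tasks": {
            "defun": {
                "proficiency": 0.8,
                "last_used": "1995-03-01",
                "mistakes": ["missing argument list"],
                "coaching_that_worked": ["example"],
                "coaching_that_failed": ["syntax"],
            }
        },
    }

    def save_model(model, path):
        # Write the model to disk so it can be reloaded next session.
        with open(path, "w") as f:
            json.dump(model, f)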
- Domain Knowledge
Second, Coach has knowledge about its subject matter. In the case of Lisp
programming, Coach knows Lisp syntax, library functions, and concepts (e.g.,
evaluation, iteration, and recursion). This knowledge base grows over time,
automatically incorporating user-defined functions into its repertoire. Such a
facility would be a welcome addition to any programming environment,
particularly in a team setting in which you're writing code for and using code
from other programmers. Your agent might suggest using an existing function or
object before you reinvent one yourself. Keeping the domain knowledge separate
from knowledge about coaching has made it easy to apply Coach's framework to
new domains. For example, Selker says Coach helped a summer intern
inexperienced in programming create a Unix help system in just 10 weeks.
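A rough sketch of the "suggest an existing function before you reinvent it" idea, assuming a simple purpose-keyword index over known functions (purely illustrative; the summary above does not describe Coach's domain representation in this detail):

    # Hypothetical domain-knowledge index: known functions keyed by purpose.
    domain_knowledge = {
        "reverse a list": "reverse",
        "sort a list": "sort",
    }

    def register_user_function(purpose, name):
        # Grow the knowledge base as the user (or a teammate) defines new code.
        domain_knowledge[purpose] = name

    def suggest_existing(purpose):
        # Before the user writes a new function, check whether one already exists.
        return domain_knowledge.get(purpose)

    register_user_function("average a list of numbers", "my-average")
    print(suggest_existing("average a list of numbers"))  # -> "my-average"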
- Coaching Rules
The third knowledge set consists of coaching rules that tie user knowledge and
the domain together. These rules help Coach gauge a user's level of experience.
Update rules determine when the program should refresh the user model to
indicate that he or she has mastered a problem or when it's appropriate to
present more advanced usage options for a particular feature. Consistency rules
make sure that the user model doesn't contradict itself when the model is
applied to related subjects. Finally, presentation rules determine how help
will be presented to the user. For example, a user who's just starting out
would want basic information, while an out-of-practice user may only need a
reminder.
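The three kinds of rules could be sketched as small checks over the user model; this is a hypothetical illustration of the idea, not Coach's actual rule syntax:

    # Hypothetical coaching-rule sketches tying the user model to presentation.
    def update_rule(profile):
        # Update rule: mark a topic as mastered once the user succeeds repeatedly.
        if profile.get("recent_successes", 0) >= 3:
            profile["mastered"] = True

    def consistency_rule(model):
        # Consistency rule: someone who has mastered recursion should not
        # still be marked a novice at the related topic of iteration.
        if model["recursion"].get("mastered") and model["iteration"]["level"] == "novice":
            model["iteration"]["level"] = "intermediate"

    def presentation_rule(profile):
        # Presentation rule: new users get basics; out-of-practice users get a reminder.
        if profile["level"] == "novice":
            return "basic explanation"
        return "reminder" if profile.get("days_since_use", 0) > 30 else "advanced options"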
Agents (Programs that Simulate a Human Relationship)
The job of the agent is to take the place of another, experienced human user
and teach. This means that the agent should perform tasks that the user does
not yet know how to do, with the emphasis on the fact that the user will then
learn how to perform those tasks. One of the main concerns with agents is the
creation of user dependency on the agent. In other words, the agent should
behave much like a coach: it should offer advice but should not become the
user's sole voice and dependency.
How Might People Interact With Agents
Many people fear the new technology of agents. Many fictitious and extravagant
claims have been made, as in the example of HAL in 2001: A Space Odyssey. This
view of the agent, especially when modified by the term "intelligent", brings
forth images of human-like computers working without supervision on tasks
thought to be for our benefit but not necessarily to our liking. Yet some
actually believe that the main difficulties with the development of intelligent
agents will be social, not technical. The question then becomes: how will
intelligent agents interact with people, and how might people think about
agents?
For people to feel comfortable with automatic, autonomous actions, some
factors that must be considered are:
- ensuring that people feel in control of their computational systems
- the nature of the human-agent interaction
- built-in safeguards to prevent runaway computation
- providing accurate expectations and minimizing false hopes
- privacy concerns (a subset of the feeling-of-control problem)
- hiding complexity while simultaneously revealing the underlying operation
The Role of Emotion in Believable Agents
Many will argue that for a person to put faith in the ability of an electronic
agent, the agent must exhibit human characteristics and qualities, providing
the illusion of life. AI researchers are attempting to find these essential
qualities of humanity. In the article, artists (animators) and AI scientists
were both questioned about what traits they thought believable characters
should have. The qualities the animators judged important are not identical to
those the AI researchers emphasized, though there was some overlap. Emotion was
deemed very important by the animators. They argued that an agent that shows no
emotion gives a feeling of indifference and a lack of caring and motivation.
- Case study with AI scientists and Disney animators
According to Thomas and
Johnston, of Disney, to portray emotion the animator must remember several key
points:
- The emotional state of the character must be clearly defined.
- The thought process reveals the feeling.
- Accentuate the emotion.
- Use time wisely to establish the emotion, to convey it to viewers, and to
let them savor the situation.
Trust
Building and maintaining trust between user and agent requires several
issues to be addressed:
- User Knowledge and Domain Knowledge
Two components of user knowledge are included in our model: domain knowledge
and technical knowledge. Both impact the user's ability to predict the agent's
actions and to assess the agent's technical competence.
- Predictability
Predictability is based on consistency (Rempel et al., 1985). It is reasonable
for the user to expect the agent to perform consistently on frequent tasks.
When it does, the agent is more predictable, which has a positive impact on
trust. Ensuring that users are comfortable with an agent's changing behavior is
important. Designers of intelligent agents need to keep this in mind when
deciding how much of the agent's actions should be visible to the user.
- Dependability
As a relationship grows, people judge the characteristics of the other party
rather than specific actions. One of these characteristics is dependability.
Predictability provides evidence of dependability, but dependability goes
deeper than any specific action. In the context of intelligent agents, users
should ideally be confident that agents will step in when their help is needed.
If they are not, they will spend more time monitoring or performing tasks
themselves, making the user-agent relationship less effective.
- Technical Competence
For a successful user-agent relationship, users must feel certain that their
agents are technically capable of performing the assigned tasks. Otherwise,
trust will suffer, and monitoring will increase or use will decrease.
There are several ways users can judge technical competence. Frequent
interaction with the agent can demonstrate the agent's competence, or lack
thereof. Early in the relationship, extra monitoring may be necessary until the
agent "proves" itself.
- Fiduciary Responsibility
Fiduciary responsibility comes into play when an individual lacks sufficient
knowledge to judge the actions taken on their behalf by others (Barber, 1980).
The usual relationship between an attorney and a client is an example where
fiduciary responsibility applies. In a user-agent relationship, the user (e.g.,
a novice) may not always be in a position to accurately assess the
appropriateness of the agent's actions. Assessments of fiduciary responsibility
will often be made based on the user's overall feelings towards the agent.
- Monitoring Behavior
One of the judgments individuals must make in client-agent relationships is the
proper level of monitoring. In general, proper monitoring helps ensure the
efficiency of the relationship. Trust affects this monitoring behavior.
- Frequency of Use
It seems reasonable that a user's assessment of an intelligent agent's
trustworthiness impacts how frequently the user will employ the agent. This
assumes that the user can choose to use the agent in some circumstances and not
in others.
References
Articles
Bates, Joseph. "The Role of Emotion in Believable Agents." Communications of the ACM 37(7): 122-125, July 1994.
Norman, Donald A. "How Might People Interact With Agents." Communications of the ACM 37(7): 68-71, July 1994.
Selker, Ted. "COACH: A Teaching Agent that Learns." Communications of the ACM 37(7): 92-99, July 1994.
Email Addresses
You can mail any comments or suggestions to
dattner@cpsc.ucalgary.ca
neumanng@cpsc.ucalgary.ca