COMPETENCE IN HUMAN BEINGS AND KNOWLEDGE-BASED SYSTEMS

Renaud Lecoeuche Olivier Catinaud Catherine Gréboval-Barry
PSI-LIRINSA Rouen
B.P. 08
76131 Mont-Saint-Aignan
France
lecoeuch@insa-rouen.fr catinaud@insa-rouen.fr greboval@insa-rouen.fr

Abstract
Second generation expert systems are designed to overcome the deficiencies of first generation expert systems in various domains such as knowledge acquisition or explanation. Following those efforts, we will focus our attention on competence and try to find the causes of brittleness. We will first define competence in individuals and organisations. Then, we will study competence in expert systems and point out some causes of insufficiency. We will particularly stress the fact that systems are ignorant of the way their knowledge is structured. We will conclude by providing some clues for improvement, and we will especially advocate the creation of a competence assessment module.

Keywords: Competence, Meta-knowledge, Evolution, Co-operation.


1. INTRODUCTION

It is well known that expert systems now achieve a variety of tasks with almost human expert performance. In some cases, expert systems are even better than humans. For example, expert systems are able to diagnose diseases, to fix failures in electrical networks, to create geological maps, etc. However, they are still deficient in several respects. For example, they often remain brittle. Nor do they integrate smoothly into industrial organisations; they therefore require specialists to operate them. The evolution towards second generation expert systems has improved this situation. In particular, research has focused on better knowledge acquisition techniques, better explanations and better reusability. However, even second generation expert systems still have deficiencies.
In this paper, we will expose some of the causes that, to our mind, are responsible for this situation. After having defined the most important notions needed to present our ideas (Sect. 2), we will study performance and competence in human beings. We will first focus on individuals (Sect. 3.1). We will stress the point that human beings are endowed with meta-knowledge that enables them to learn a great variety of tasks and to compile their knowledge. We will also stress the point that competence is a relative notion based on reciprocal evaluation. We will then study the competence resulting from co-operation in a team of experts (Sect. 3.2). We will point out the need for experts to model the experts they are acting with. Throughout the study of competence in humans and organisations, we will pay particular attention to the evolution of competence over time. We will try to understand how competence can be maintained and improved at both levels. This will lead us to a more complete definition of competence. Finally, by comparing computer-based and human-based competence, we will show some of the deficiencies of current expert systems (Sect. 4.1). By analysing the knowledge level hypothesis and the knowledge level approaches used to create expert systems, we will particularly stress the fact that systems are ignorant of the way their knowledge is structured (Sect. 4.2). We will then give some clues for improvement (Sect. 4.3), and we will especially advocate the addition of competence assessment knowledge to the systems. This knowledge may be grouped into a dedicated module, in the same way that explanation knowledge is grouped in a particular module (Sect. 4.4).
Many terms used in this paper are employed in the literature with several meanings. To make our discourse as clear as possible, we have added a series of definitions at the end of the paper. We think that those definitions could also form a basic ontology for the competence assessment module described above.

2. SOME DEFINITIONS

In this section, we define terms that have been extensively used in the literature. It seems to us that defining these basic terms will avoid some misunderstandings when they are used in the next sections. We have tried to define them so that their new semantics stays as close as possible to their usual meaning while being more explicit. The definitions are complemented by remarks and examples in an effort to make them as direct and clear as possible. These definitions, as well as the ones presented at the end of the article, are the result of numerous and difficult deliberations within our working group. They have also been used to clarify the discussion during the early stages of modelling an expert system devoted to architectural design.

Definition of knowledge: information that can be used in a decision process.

Knowledge can be distinguished from mere facts by the way it is used in decision processes. A fact cannot be used in a decision process whereas a piece of knowledge, either scientific or from common sense, can. As shown in Figure 1, facts can become knowledge and vice versa depending on the situation at hand. The situation provokes an activation of certain parts of the memory, transforming the facts stored there into knowledge. This selective activation is described in (Kolodner, 1982; Minsky, 1988). Usually, knowledge is situation-independent and is thus more generic. This distinction is not easy to make, and the frontier between facts and knowledge is blurred.

Figure 1: Knowledge can be considered as the usable part of an iceberg of facts. Only knowledge is accessible to be used in decision processes.

If the way to use the knowledge during the decision process is given with the knowledge itself, we say that the knowledge is compiled. If no indication is given on the way to use it, the knowledge is said to be declarative (Pitrat, 1990).
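To illustrate the distinction, here is a minimal sketch in Python (all rule names are hypothetical): the declarative rule is mere data interpreted by a separate mechanism, while the compiled version hard-wires the way the knowledge is used into the code itself.

    # Declarative: the rule is data; a separate interpreter decides
    # when and how to apply it.
    declarative_rule = {
        "if": ["fever", "stiff_neck"],
        "then": "suspect_meningitis",
    }

    def interpret(rules, facts):
        """Generic interpreter: the 'way to use' the knowledge lives here."""
        conclusions = set()
        for rule in rules:
            if all(cond in facts for cond in rule["if"]):
                conclusions.add(rule["then"])
        return conclusions

    # Compiled: the same knowledge with its usage hard-wired; faster to
    # apply, but the control strategy can no longer be inspected or
    # changed by the system itself.
    def compiled_rule(facts):
        if "fever" in facts and "stiff_neck" in facts:
            return "suspect_meningitis"
        return None

The compiled form is the more efficient of the two but, as we will discuss later, the system loses the ability to reason about why and how the rule is applied.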

Definition of meta-knowledge: knowledge on knowledge.

Sentences such as « Let me stress an important point... » are examples of meta-knowledge. In fact, they give knowledge (what I will say is important) about other pieces of knowledge (what I actually say). Such sentences are so common, especially in spoken communication, that we can speak of meta-discourse.
There exists an infinite number of levels of meta-knowledge, because meta-knowledge can characterise knowledge that already characterises knowledge that char... For example: he knows something, I know that he knows something, he knows that I know he knows something, etc. However, human beings are not accustomed to using more than three or four levels of meta-knowledge. This limit is certainly due to the difficulty of handling recursive processes (for example, see (Minsky, 1988), p. 300) and to the intrinsic limitations of human short term memory (Miller, 1956). As far as we are concerned, we will almost never make any distinction between the various meta-levels. In the rare cases where we do, we will make the distinction explicit.
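The recursion can be made concrete with a small Python sketch (a toy encoding of our own, not a claim about how humans represent such levels), where each meta-level simply wraps the statement below it:

    # Nested 'knows' statements: each wrapping adds one meta-level.
    fact  = ("he", "knows", "something")   # level 1
    meta  = ("I", "know", fact)            # level 2: meta-knowledge
    meta2 = ("he", "knows", meta)          # level 3: meta-meta-knowledge

    def depth(statement):
        """Count how many (meta-)levels a nested statement contains."""
        _, _, obj = statement
        return 1 + depth(obj) if isinstance(obj, tuple) else 1

    assert depth(meta2) == 3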
As with knowledge, meta-knowledge should be distinguished from meta-facts, which are statements about knowledge that cannot be used in a decision process.

Definition of competence: capability to decide and act.

This definition is common among computer scientists, who speak for example of the « competence of an expert system » that chooses a medical treatment.
In the next sections, we will mostly focus on competence as a capability to decide. However, it should be noted that the notion of decision cannot be totally separated from the notion of act. Indeed, deciding refers to an act (What will we do? How? When?) and acting requires a decision process (at least to control the act: e.g., when we move one of our arms, feedback about the motion is sent to decision processing centres to control it).

Now that the basic terms we will use are defined, we can try to understand what knowledge or meta-knowledge is required for achieving a competent behaviour.

3. COMPETENCE AND HUMAN BEINGS

In this section, we will defend the hypothesis that competence is a kind of meta-knowledge enabling individuals to learn new knowledge and to compile it (Sect. 3.1). We will then study how several experts can co-operate to achieve tasks they could not perform alone (Sect. 3.2). In both sub-sections, we will stress the importance of communication, either to assess competence or to develop it. To clarify our presentation we will give examples coming from several domains, notably computer science and sociology.

3.1. Individual Competence As Meta-Knowledge

By studying how human beings solve problems, we will point out that competence is due to meta-knowledge (Sect. 3.1.1). Indeed, meta-knowledge enables individuals to structure their solving processes (meta-experts) and then to compile them (domain experts). We will then try to understand how meta-knowledge is acquired and how it is used to compile knowledge. We will also stress the fact that competence is a relative notion depending on external judgements and communication (Sect. 3.1.2). This will lead us to revise our definition of competence.

3.1.1 Novices vs. Experts.

A. Schoenfeld (Schoenfeld, 1985) studied the importance of control in the resolution of mathematical problems among individuals of various levels (from novices to experts). In one experiment, the individuals received the following problem:

Let T be a triangle with base B. Show that it is always possible to construct, with a ruler and a compass, a line parallel to B which divides T into two parts of the same area.
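For reference, the intended solution rests on a standard similarity argument: a line parallel to B cutting the triangle at fraction k of its height h, measured from the apex, cuts off a triangle similar to T whose area scales as the square of the ratio,

    \mathrm{area}(T_k) = k^2 \cdot \mathrm{area}(T), \qquad k^2 = \frac{1}{2} \;\Rightarrow\; k = \frac{1}{\sqrt{2}}.

The required length kh = \sqrt{h \cdot (h/2)} is the geometric mean of two known segments and is therefore constructible with ruler and compass.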

Three major types of behaviour appeared. They are described in Figure 2.
In the first type, the individuals waste their time in wild-goose chases and ignore promising directions. Hypotheses are accepted without discussion and, after time has been wasted making constructions, they are rejected as obviously false. These people do not reach the solution although they have all the necessary knowledge.
In the second category, individuals control their reasoning. They evaluate every new idea in order to eliminate it, set it aside or develop it, according to their evaluation of the odds of success. They avoid the numerous opportunities for wild-goose chases, and the control does not only serve to avoid mistakes: it also ensures a good utilisation of knowledge. The individuals of this category are not experts of the domain, but are meta-experts in control. (The term « meta-expert » stresses the fact that those individuals have a meta-knowledge expertise but no particular knowledge of the problem domain.)
The last category of individuals is composed of experts of the domain. The meta-knowledge of control is no longer needed: all the knowledge is compiled and the individuals go straight to the solution.

Figure 2: The three types of search: novices, meta-experts, experts.

From this example, we see that competence is mostly present in two forms: as control meta-knowledge, which structures the solving process (meta-experts), and as compiled knowledge, which goes straight from the data to the solution (domain experts).

Two important questions arise from this remark:
  1. Can we acquire control meta-knowledge? And, if we can, how?
  2. How is the knowledge compiled?
As far as we know, there are no definite answers to these two questions. Nevertheless, we can give some clues, which partly contradict one another.
Human beings seem to have much innate meta-knowledge. For example, babies are not able to speak but have the acquisition meta-knowledge required to learn languages. In fact, this meta-knowledge even enables us to learn several languages and, beyond languages, the whole culture in which we live. As stated by Ruffié ((Ruffié, 1974), p. 129(2)): « From now on, every baby will benefit from the experience of the human beings that preceded him/her. This experience is not innate (it escapes almost all instincts) but acquired. It is the capability to acquire it that is innate ».
Right now, there does not seem to be any theory of learning that answers the question of acquisition convincingly. Some researchers think that learning results not only in acquiring knowledge but also in losing capabilities, e.g., experts usually have enormous difficulty learning knowledge in another domain (Mehler, 1974). Another approach is proposed in (Papert, 1971). In this paper, Papert defends the hypothesis that human beings increase their knowledge by creating ontologies to speak about their acts: « A fundamental problem for the theory of mathematical education is to identify and name the concepts needed to enable the beginner to discuss his mathematical thinking in a clear articulate way ». Deciding whether a theory is correct or not is a difficult task, since several processes may act together.
The second question has been more extensively researched and some answers have been given. In experiments on how people learn a programming language, researchers have demonstrated that giving examples enables students to learn more quickly. A hypothesis explaining this fact is that examples are compiled knowledge. Therefore, in receiving knowledge that is already compiled, students may have less work to do to integrate it. Other experiments have also shown that experts tend to group a series of acts into a unique operation, in what is called « chunking ». This process, which has already been implemented in some computational architectures (e.g. SOAR (Laird, Newell and Rosenbloom, 1987), or (Tambe, Johnson, Jones, Koss, Laird, Rosenbloom and Schwamb, 1995)), certainly plays a major role in knowledge compiling. Chunking also enables experts to focus on significant aspects of the initial data by taking into account only the important aspects of the problems. Finally, it should be noted that introspection is certainly the most fundamental process underlying knowledge compilation. Unfortunately, human beings have a limited introspection capability (Morin, 1986). Computer programs are better suited to self-examination of both their knowledge and their program (which can be regarded as a special case of knowledge). Therefore, they should have less trouble examining and compiling their knowledge.
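A minimal Python sketch of chunking (operator names hypothetical, in the spirit of SOAR-style chunking rather than a reproduction of it) shows how a recurring sequence of primitive steps can be compiled into a single macro-operator:

    from typing import Callable, List

    def grasp(state): return state + ["grasped"]
    def lift(state):  return state + ["lifted"]
    def move(state):  return state + ["moved"]

    def chunk(steps: List[Callable]) -> Callable:
        """Compile a list of operators into one operator applied as a unit."""
        def macro(state):
            for step in steps:
                state = step(state)
            return state
        macro.__name__ = "+".join(s.__name__ for s in steps)
        return macro

    # The expert's single 'chunk' replaces three deliberate steps.
    pick_and_place = chunk([grasp, lift, move])
    print(pick_and_place([]))   # ['grasped', 'lifted', 'moved']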

3.1.2 Individual Competence in Time.

We have just seen that meta-knowledge is needed by experts to compile their knowledge so that it can be used efficiently. However, it is not clear that the resulting knowledge will remain adapted to the problems over a long period. Yet, maintaining a level of competence over a long time is a major goal of experts, be they human or artificial. Therefore, we think that:
  1. Even experts should try to keep some meta-knowledge to be able to recompile their knowledge if need be. However, since we do not know how to acquire or preserve meta-knowledge (see the previous section), we cannot expand this idea further.
  2. Experts should continuously be judged to determine whether their knowledge has become outdated. This point seems feasible (it could nevertheless disturb cultural habits which place experts in ivory towers).
An anecdote is fairly instructive on this matter: when they are afraid, hedgehogs roll themselves into a ball. This technique, which was rather efficient until the invention of cars, is now the first cause of death for these animals. However, no living hedgehog realises this, even if it sees one of its fellows get run over.
This anecdote partly applies to human beings. Human beings are more aware than hedgehogs that their competence is outdated, because they can better interpret their experience, but also and above all because they benefit from others' comments thanks to the intense communication between them. In other words, human beings remain competent (or else know they are no longer competent) because of external judgements on their decisions. We do not want to play down the role of consciousness here: human beings are often able to decide by themselves that they are no longer competent. However, it seems to us that external judgement has not yet been recognised as the principal and most trustworthy indicator of competence. Therefore we have to revise our definition of competence somewhat.

Definition of competence: capability to decide and act, established by someone else.

This definition stresses that competence is a relative notion. Therefore, experts are not supposed to know everything in their domain or never to make any mistake (Voß, Karbach, Drouven and Lorek, 1990; Voß, Karbach, Coulon, Drouven and Bartsch-Spoerl, 1992); they are recognised as being the most capable of achieving good results. It is surprising to realise that computer programs, and especially expert systems, rarely take into account external judgements of their behaviour. Indeed, they either do not enable users to enter feedback, or, when they do, the mechanisms to change the systems are very limited. Moreover, they are viewed as passive tools (De Greef and Breuker, 1992), i.e., not as competent assistants, and therefore do not receive comments or advice. They are thus doomed to become incompetent and quickly outdated without even being able to realise it. This point will (and should) certainly be developed to endow expert systems with their true value.
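As a minimal sketch (all names and thresholds are hypothetical), a system could at least record external judgements and derive its competence estimate from them rather than from introspection alone:

    class JudgedExpert:
        """Competence established by someone else: users rate decisions."""

        def __init__(self, window=20, threshold=0.7):
            self.verdicts = []          # recent external judgements
            self.window = window        # how much recent evidence to keep
            self.threshold = threshold  # minimal acceptable success rate

        def record_judgement(self, correct: bool):
            """A user (or peer expert) rates one of our decisions."""
            self.verdicts = (self.verdicts + [correct])[-self.window:]

        def still_competent(self) -> bool:
            """The estimate comes from outside judgements, not self-belief."""
            if not self.verdicts:
                return True             # no evidence yet
            return sum(self.verdicts) / len(self.verdicts) >= self.threshold

Unlike the hedgehog, such a system would at least notice when its once-efficient technique no longer works.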

3.2. Competence And Organisations

In this section, we will examine competence in organisations through the concepts of communication and co-operation. Indeed, because of the growing complexity of scientific knowledge, experts can no longer solve problems on their own: experts are more and more specialised and problems are increasingly global. There is a need for co-operation among experts of different fields to tackle the new problems. There is also a need for computer programs to be better integrated in organisations. This integration can only be achieved by better co-operation between human beings and machines. We will describe co-operation among several experts (Sect. 3.2.1) through the concept of solving as modelling. Then we will study the place computer programs can hold in such a co-operative environment and the requirements they should fulfil (Sect. 3.2.2).

3.2.1 Mixing different competences.

We will now focus on co-operation between autonomous and intelligent agents. Other kinds of co-operation exist in which identical non-intelligent agents, such as ants or simple robots, co-operate to create a thriving colony (Ferber, 1995). However, since we are interested in co-operation between human beings or between human beings and computer programs, we will not study this kind of co-operation.
Co-operation between experts of different domains is needed in almost all tasks that are somewhat complex. For example, the study of a car accident requires the co-operation of three experts: an infrastructure engineer, a vehicle engineer and a psychologist (Dieng, 1995). Together, they will be able to determine the causes of the accident by taking into account the configuration of the road, the speed of the car and the temperament of the driver. An expert alone could not do this work with the same accuracy, i.e., with the same competence. Our hypothesis is that effective co-operation between experts of different domains leads to the emergence of a new competence that outpaces their individual competences: they solve more problems, more accurately, more rapidly (Chaudron and Tessier, 1995). However, to be effective, co-operation places some conditions on the problem to be solved and requires some extra knowledge on the part of the experts. Some of these are described below.
Conditions on the problem:
Extra knowledge for the experts:
To our mind, the solving process corresponds to the creation and evolution of a mental model of the problem towards a state where the problem is judged solved (see (Van de Velde, 1993) for a complete description of solving as modelling). Each expert has a personal mental model of the solving process. Thanks to communication and co-operation, those different models have common parts that describe the overall state of the solving process (see Figure 3). This overall state evolves through a process of proposals/critiques involving the experts; a sketch of this loop is given after Figure 3. By respecting the constraints presented above, experts gain a better understanding of the problem (mainly because they work in their own domain) and can thus reach the resolution state more quickly.

Figure 3: Solving as Modelling
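The proposal/critique process can be sketched as follows in Python (the expert interface, with propose, critique and apply methods, is our assumption, not a published protocol): each expert keeps a private model, but proposals are adopted into the shared state only when no co-operating expert objects.

    def cooperative_solve(experts, shared_state, solved, max_rounds=100):
        """Evolve the common model through proposals and critiques."""
        for _ in range(max_rounds):
            if solved(shared_state):
                return shared_state
            for expert in experts:
                proposal = expert.propose(shared_state)
                if proposal is None:
                    continue                      # nothing to add this round
                critiques = [e.critique(proposal, shared_state)
                             for e in experts if e is not expert]
                if not any(critiques):            # no expert objects
                    shared_state = expert.apply(proposal, shared_state)
        return shared_state                       # best effort within budget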

3.2.2 Evolution of Competence in Organisations.

As for individuals, a major problem for organisations is to keep or increase their competence. As we have just explained, experts are much more than simple repositories of knowledge: they interact, co-operate and participate in the organisation. Therefore, knowledge-based systems should demonstrate co-operative behaviour and have a sufficient degree of complexity (Trousse and Vercors, 1992). Let us explain this concept: in order to interact with a robot that has n degrees of freedom, a computer should have at least the same number of degrees of freedom. In the same way, to interact with an individual, a computer should be at least as complex as the individual. This does not at all mean that the program is difficult to use. This degree is a measure of the integration of an artefact with a complex system, or an organisation (Le Moigne, 1990; Morin, 1977). The degree of complexity is mostly based on the usefulness of the artefact and its capacity to develop a symbiotic relation with its users. In spite of the crucial importance of such integration, very few knowledge-based systems have been designed with this aim, resulting in brittle and rejected systems. Even second generation expert systems do not fully take these requirements into account (David, Krivine and Simmons, 1993). We will try, in the next section, to point out some of the requirements for achieving a truly integrated system and present different works heading in this direction.

3.3. Conclusion On Human Competence

We would like to summarise the important ideas that have been stressed in the two previous sections. We have pointed out that individual competence mostly rests on meta-knowledge, which enables individuals to structure their solving processes and then to compile them, and that co-operation between experts of different domains leads to the emergence of a collective competence that outpaces their individual competences.
We have also pointed out that competence is a relative notion that can always be challenged. Therefore even the best experts always need to keep a certain amount of meta-knowledge to evolve.
We will now discuss those points in computer programs.

4. COMPETENCE AND COMPUTER PROGRAMS

In this section we will draw comparisons between human beings and machines in order to point out the requirements for competent systems. We will also try to give some solutions fulfilling those requirements (some of them can be found in the literature) or at least some indications and directions of research. We will not discuss actual symbolic implementations; instead we will take a knowledge-level point of view. However, our idea of the knowledge level is somewhat different from the usual one. We describe our perspective on the following page.

4.1. Critiques Of Present Programs

We will present here the problems that, to our mind, plague current programs. Since we will focus on problems, the image given of state-of-the-art programs will be much bleaker than reality. A more positive assessment of computer programs can be found in (David, Krivine and Simmons, 1993).

We think that pinpointing these problems and their causes is important because they seem to hamper the development of truly co-operative systems, capable of integrating into an organisation and of evolving over time. Our criticisms are the following:

4.2. The Main Cause Of Deficiency

To our mind, the main cause of deficiency is that expert systems use structured knowledge (see The Knowledge Level Hypothesis: Our point of view) without knowing how or why the knowledge was structured and compiled. Therefore, they are unable to realise when their reasoning cannot be applied to a problem. They are also unable to recompile their knowledge to improve their skills. As a result, expert systems do have compiled knowledge, as experts do, but without the underlying meta-knowledge. Therefore, although they can solve some problems efficiently, they cannot carry out tasks that require meta-knowledge. In other words, they are unable to evolve. This incapacity is further exacerbated because they usually do not take full account of the feedback given by users.
Moreover, expert systems rarely have meta-knowledge for co-operation. Because they do not adapt to their users (or only superficially, at the level of the explanation module), expert systems cannot fully collaborate with human experts in a complex organisation. Therefore, they cannot take part in a global competence resulting from co-operation.
As a result, expert systems remain brittle and are difficult to use for non specialists.

4.3. Ideas For Improvement

We will give here some ideas to solve the problems described above. Our main point is that more explicit meta-knowledge should be added to the systems: in particular, meta-knowledge to co-operate with the users (Sect. 4.3.1) and meta-knowledge to evolve (Sect. 4.3.2). Most of these ideas have already been implemented in some research projects. However, it seems to us that a wider use of these ideas is unavoidable if more competent systems are to be achieved.

4.3.1 Co-operation and communication: creating a model of the system and its environment.

In our view, improvements in co-operation and communication can only be achieved by creating a model of the program and its environment within the program itself. This model should at least contain a model of the system's own knowledge and decision mechanisms, a model of the user's knowledge and behaviour, and a model of the organisation. If a system were « conscious » of the span of its knowledge, it would know when it is out of its depth and could let the user take charge of a task. It would also know when it is able to give exact or highly competent answers, and could then take the initiative of helping the users, even if this implies some criticism. Finally, the explanations provided would be more accurate and understandable: the system would know not only what it has done but also why. Co-operation still relies on communication; therefore work in this domain is important too. We will not discuss it here, since an abundant literature is available (e.g., (Gréboval, 1995a; Swartout and Moore, 1993)).
Modelling the system's knowledge is not enough. To interact efficiently with its users, the system must understand them, that is to say, take into account their background, their intentions, their habits, etc.
This modelling would enable better co-operation and better communication (Cohen, Jones, Sanmugasunderam, Spencer and Dent, 1989). Finally, modelling the organisation itself would enable the system to understand the position it holds. Although this model is of little use for improving communication, it is very important for permitting effective co-operation: the system could then understand the motivations, the competence and the responsibility of the different users it interacts with. Almost no work has been done on this topic (a notable exception is (Dieng, Corby and Labidi, 1994), in which a multi-agent modelling is proposed: each agent represents either an expert, a user or the system itself, and expert and user agents include organisational information). Nevertheless, it seems to us that it should seriously be taken into account.
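A minimal sketch (all structures and values are hypothetical) of how the three models could be consulted to choose a stance towards the user:

    # The three models the text calls for, reduced to toy structures.
    self_model = {"competent_on": {"diagnosis", "repair"}}
    user_model = {"background": "novice", "goal": "fix_pump"}
    org_model  = {"user_role": "technician", "may_override": False}

    def choose_stance(task: str) -> str:
        """Defer when out of depth; take initiative when clearly competent."""
        if task not in self_model["competent_on"]:
            return "hand over to user"            # the system knows its limits
        if user_model["background"] == "novice" and not org_model["may_override"]:
            return "take initiative and explain"  # adapt to the user and role
        return "co-operate as a peer"

    print(choose_stance("diagnosis"))   # take initiative and explain
    print(choose_stance("design"))      # hand over to user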

4.3.2 Brittleness and Evolution: meta-knowledge.

As we explained at the beginning of this paper, maintaining a good level of competence requires acquiring knowledge, changing decision processes, compiling knowledge, communicating, etc.: in other words, re-structuring the knowledge. This can only be done if the system possesses meta-knowledge. It could then recognise when, having all the necessary knowledge, it made a mistake, and could therefore change its decision processes if need be. This meta-knowledge is related to the first phase we outlined in knowledge level modelling. By understanding why its knowledge was structured in such and such a way, the system is better able to adapt and transform it to improve its results.
To our mind, computer programs receive a disproportionate amount of « innate » knowledge compared to the meta-knowledge they have. Meta-knowledge can be given either in a compiled or in a declarative way. The second way seems more promising than the first, since it would enable the systems to reason about their own learning and improvement capabilities. Since it is quite difficult to program meta-knowledge into a computer, an iterative approach may be the best solution. An example of the result of such a programming process is Pitrat's Macist.
Other programs use meta-knowledge to modify themselves. An example is Lenat's Eurisko, which was rewritten by Haase (Haase, 1986) as a program called Cyrano. Ideally, programs should be able to modify not only their knowledge but their meta-knowledge too. (This has been the case in Macist and Eurisko, where meta-knowledge was transformed by itself.) To be able to program systems that can deal with themselves, an ontology comprising the notions of system, knowledge, meta-knowledge, etc. should be created. This aspect is further detailed in the next section.
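A toy Python sketch (our own illustration, not Macist's or Eurisko's actual mechanism) of declarative meta-knowledge: the rule-selection strategy is itself data, so a meta-meta-rule can rewrite it.

    # The strategy for choosing rules is data, not hard-wired code.
    meta_rules = {
        "prefer": "most_specific",     # which rule to fire on conflict
        "on_failure": "relax_prefer",  # a meta-rule acting on meta-rules
    }

    def select_rule(candidates):
        """Apply the current, declaratively stored selection strategy."""
        if meta_rules["prefer"] == "most_specific":
            return max(candidates, key=lambda r: len(r["if"]))
        return candidates[0]           # fall back: first applicable rule

    def handle_failure():
        """The meta-meta-level: modify the selection strategy itself."""
        if meta_rules["on_failure"] == "relax_prefer":
            meta_rules["prefer"] = "first_applicable"

Because the strategy is declarative, the system can inspect it, explain it and, as here, change it.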

4.4. A New Module

To our mind, the first two solutions proposed above (modelling and meta-knowledge) can be inserted into a new module that would monitor the system's competence. The same kind of augmentation has already been done with explanation modules. Like competence, explanation requires its own knowledge and processes (e.g., (Paris, Wick and Thompson, 1988; Gréboval, 1995b)). Therefore we propose an architecture in at least three parts (see Figure 4):
  1. the expert system;
  2. the explanation module;
  3. the competence assessment module.
The competence assessment module can be refined into two parts (see Figure 5). The first part would monitor the system and the second would modify it if need be. The first task can be done in two main ways:
  1. internally, by a sort of « B Brain », as Minsky calls it ((Minsky, 1988), p.98), that would ideally monitor the system independently from the application domain;
  2. externally, through feedback from the users.
The second task seems much more difficult, especially if a change in the knowledge level model has to be made.
This module should be able to reason about the system. Therefore, it should possess an appropriate vocabulary. This vocabulary has to be formal to ensure a symbolic exploitation of the concepts consistent with the semantics attached to them (Bouaud, Bachimont, Charlet and Zweigenbaum, 1994). We think that the definitions presented at the end of this paper can form the basis of the ontology required by the new module.
The repercussions of adding the competence assessment module to the expert system are not yet clear. As far as we can see, certain forms of system implementation may render communication between the expert module and the competence module very difficult. In particular, for reasons described in (Clancey, 1983) (notably because rules do not contain the reasons why they were created and why they work), rule-based systems could be difficult to assess. The implementation of the competence assessment module should also be carefully planned. Because of the great adaptability needed to achieve this control, a blackboard-like architecture may be a good approach (Lemaire, 1992). Even for more amenable formalisms, creating a competence assessment module still seems some way off.
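A skeleton of the proposed architecture (the interfaces are our assumptions; the bodies are deliberately left open) could look like this:

    class ExpertSystem:
        def solve(self, problem): ...

    class ExplanationModule:
        def explain(self, trace): ...

    class CompetenceModule:
        """Monitors the system (first part) and modifies it (second part)."""

        def monitor(self, trace, user_feedback):
            """Watch from inside ('B Brain') and outside (user feedback)."""
            anomalies = []
            if user_feedback and not user_feedback.get("satisfied", True):
                anomalies.append("external: user rejected the result")
            if trace.get("dead_ends", 0) > 10:
                anomalies.append("internal: search is thrashing")
            return anomalies

        def modify(self, system, anomalies):
            """The hard part: restructure the system's knowledge if need be."""
            ...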

Figure 4: A three part architecture...

Figure 5: ...and a two part module

5. CONCLUSION

In this paper we have defined competence as a capability to decide and act, established by someone else. This capability is mostly due to meta-knowledge. Meta-knowledge makes it possible to control the use of problem solving methods (meta-experts are more likely to find results in new domains than pure novices are) and to compile knowledge (domain experts no longer use meta-knowledge for simple problems; they go directly from the data to the solution). Coupled with modelling, it enables experts to co-operate and results in an increase of competence. We have stressed the fact that expert systems lack that sort of meta-knowledge. Moreover, when systems do possess meta-knowledge, this meta-knowledge deals with the knowledge they have, not with its structure. Therefore, systems are able to partly compile their knowledge and become more efficient, but they still cannot really evolve and co-operate. This lack of meta-knowledge dealing with knowledge structuring accounts in part for the brittleness of expert systems and for the difficulty non-specialist users have with them. Therefore, we advocate the addition of meta-knowledge for:
  1. modelling the users (their knowledge, preferences, etc.);
  2. modelling the organisation;
  3. assessing the competence of the system.
This meta-knowledge can be grouped into a dedicated module that would monitor the behaviour of the system (and of itself).
To summarise, our view is to endow expert systems with more generic (meta-)knowledge on how to structure knowledge and how to co-operate, and to let them progress. This is a long term view in which immediate competence and performance would be reduced in the hope of obtaining more robust, co-operative and evolvable programs.

Definitions relative to competence

We present here some definitions constituting an ontology in the domain of competence. These definitions are useful for achieving two goals: clarifying the discussion by avoiding misunderstandings, and forming the basic ontology required by the competence assessment module.
We have refined these definitions during our various conversations with experts. We have decided to present the main definitions in alphabetical order.
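As a hint of what such an ontology could look like in a program (the encoding and slot names are our assumptions), a few of the definitions below can be written as a small is-a network that the competence assessment module could query:

    ontology = {
        "Element":    {"is_a": None,      "def": "something that exists or can be thought of"},
        "System":     {"is_a": "Element", "def": "organised unit of interrelations"},
        "Expert":     {"is_a": "Element", "def": "someone with experience in a specific domain"},
        "Knowledge":  {"is_a": None,      "def": "information usable in a decision process"},
        "Competence": {"is_a": None,      "def": "capability to decide and act, established by someone else"},
    }

    def ancestors(concept):
        """Walk the is-a links, e.g. ancestors('Expert') -> ['Element']."""
        chain, parent = [], ontology[concept]["is_a"]
        while parent:
            chain.append(parent)
            parent = ontology[parent]["is_a"]
        return chain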

Action: Material manifestation.
Examples: Blinking of eyelids, the wind blowing, a child playing.

Competence: Recognised capacity to decide and act, established by someone else.
Example: Competence in managing a company.
Note: Competence is assessed through the results of the actions performed (i.e., it is interpreted by an external observer).

Conscious element: An element having knowledge of its own existence and of its environment.
Examples: An ant, a man, a tree.

Efficacious: Whose action leads to the expected results.
Note: In our definition, we do not include the notion of minimum wasted effort.

Efficient: Who/which gets good results.
Note: Our definition includes the two notions of being efficacious while producing sound results.

Element: Something (or someone) that exists or can be thought of.
Examples: A stone, a star, a tree, an ant, a man, a law.
Note: An element can evolve in time, in other words he/she/it can have an inner-mobility. It is important to keep in mind the idea of evolution in the definition of an element.

Environment: Set of elements and events that do not belong to the system being considered.

Event: Perception of an action.
Example: A stone falling.

Experience: Knowledge acquired through a long practice.

Expert: Someone who has got experience in a specific domain.

Goal: Imposed purpose that one tries to reach.
Example: An employee (element who has got the purpose) tries to do a good job for his company (element who sets the goal).
Notes:

  1. The term `one' used in the definition is considered as a conscious element, as he/she/it tries intentionally to reach the imposed purpose. A stone cannot have any goal, since it cannot be considered as a conscious element.
  2. « To give a goal » is a notion that must be used carefully. A goal can be given to an element by a subject (let us preferably use the verb « to impose ») or by an observer (let us use the verb « to attribute »). The observer may associate another goal than the one effectively imposed by the subject (especially if the subject is not the observer). There is a problem of point of view and information.

Information: What is inherent to or represented by particular arrangements, sequences or interpretable sets; it can be recorded and transferred, and an answer to it can be given by inanimate agents.
Examples: (1) A sentence dealing with a subject. (2) A bit sequence, for instance a Morse code.
Note: What is important to keep in mind in this definition are the idea of a support and the idea that information can have any origin (object or living element). Giving information is not a deliberate action. There is thus no purpose in information: a plant, when greening, gives information about its cellular activity; it is probably not dying.

Interpretation: The model that results from giving a meaning to a representation.
Examples: Interpretation of a sketch, interpretation of a painting.
Note: Interpretation is a notion that includes the idea of freedom for the receiver in the conception of his/her/its model.

Knowledge: Information that can be used in a decision process.
Examples:
  fact: the guy over there does not like jazz
  meta-fact: I know that the guy over there does not like jazz
  knowledge: I don't like jazz
  meta-knowledge: my friend knows that I don't like jazz

Model: Mental conception in incessant evolution allowing people to comprehend a system. A model is a partial view of a system that puts forward its emergence; but it is a filtered view, concerned with only a few particular points.
Examples: (1) The model of a car, of which a Ferrari is one representation. (2) An expert's knowledge, an expert system being one of its representations.
Notes:

  1. The system exists before the model, since, according to us, every concept is born from a pre-existent element.
  2. A model as a `definition of a system' makes the `subject' intervene: « So, there is always in the extraction, the selection and the definition of a system something uncertain or arbitrary: there are always decisions and choices, thus implying a notion of subject in the concept of system. The subject appears in the definition of the system(3) through and because of his interests, his selections and his aims. He brings the cultural, social and anthropological over-determination into the concept of system, through his subjective over-determination »(4)(Morin, 1977).
  3. A model can be communicated only through a representation (see Figure 6).

Figure 6: System - Model - Representation. The observation of the same system (by two individuals) leads to the elaboration/development of two different models. They can communicate thanks to their representations. This communication makes their models evolve too.

Organisation: arrangement of relations between elements and events that produces a system showing known and unknown properties at the level of its elements.
Notes:

  1. « The organisation binds various elements, events or individuals through interrelations. They then become the components of a whole. The organisation maintains solidarity and sturdiness in these links. Therefore, it ensures a possibility of duration in spite of unpredictable disturbances. So the organisation transforms, produces, binds, maintains. »(5) (Morin, 1977). We can notice the difference between an organisation (transformation, production, junction, maintenance) and a structure (the elements are already produced, transformed and bound).
  2. The organisation assigns roles to the elements and/or interrelations that give rise to it.

Performance: Quick realisation under certain constraints.
Note: The definition contains the ideas of yield (`quick') and of equity in realisation (`constraints').

Representation:

  1. Mental or graphic image of a model making it communicable. It is then a means to make the model more intelligible.
  2. The process that aims to produce a « representation ».
Examples: (1) A drawing. (2) A spoken or written explanation.
Notes:
  1. In general, the goal of representing is to create an element (which can be considered as a system) that embodies the model (see Figure 6). However, since a system contains an idea of emergence, it is not possible to definitively create a system that represents the model. Unexpected behaviours may appear (emergence is not controllable). In this view, systems cannot be created from elements, even for the purpose of representation; they are too rooted in reality.
  2. In the notion of representation, time is fixed. The system « is » in evolution, the model « evolves » mentally, the representation is « fixed ».
  3. Representations and models are two different notions. A model is personal whereas a representation is a means of communication (note that we can make a representation for ourselves).

State: Limited set of data on a system at a given point in time.
Examples: (1) For the piston system: the temperature, the position are some states. (2) For an individual: the state of tiredness.
Notes:

  1. A state change comes from the modification of a datum. For example, the piston state changes if the piston temperature or position evolves.
  2. The number of data in a state is bounded above by the observer's perception (the data he can know).

System: Organised unit constituted of interrelations between elements and events (see Organisation).
Examples:

  1. The solar system, with its elements: the sun and its satellites (physical organisation).
  2. The respiratory system, whose elements are organs of the human body that establish a system whose emergence is breathing (physical/biological organisation).
  3. The ant-hill whose elements are ants (hierarchical organisation).
Notes (see Figure 7):
  1. Nearly all elements can be considered as systems. Whether this consideration is useful depends on the problem.
  2. A system is considered as a global entity. It is also an element. A company for example can be considered either as a system (organisation of elements) or as an element (if we consider the whole entity).
  3. A system has got its own evolution. It is then necessary to take into account the temporal dimension in the setting-off of a system.
  4. An individual (observer) puts forward a system. We should not say that he creates it.
  5. « The system is then here conceived as the complex basic concept concerning the organisation. It is, if we could say, the simplest complex concept. »(6) (Morin, 1977, p. 149)
  6. A system is open (continuous evolution, permanent contact with the environment), that is to say the system changes (interrelations between the system and its environment cannot be absent). In this view, we can consider a whirlpool constituted of ever-different particles (a flux). However, we will see that a model is closed.

Thinking element: An element having knowledge of its conscience.
Note: Mankind is the only thinking element that we recognise.

Figure 7: Simplification and complexification in a system. Every system has something complicated and identifiable in itself. According to the point of view adopted, a particular character brings it towards chaos (arrow drawn coming out of the « system »), but the system does not evolve into a chaotic state because of the action symbolised by the arrow drawn in the opposite direction. These characters are all useful for considering an element as a system.

References

  1. Blot, E., Durand, S., Jézéquel, L., Malandain, S., and Moulin, C. (1995). Knowledge Level : retour sur l'hypothèse d'Allen Newell, Rapport du L.I.R., available by e-mail (durand@excalibur.univ-rouen.fr).
  2. Bouaud, J., Bachimont, B., Charlet, J., and Zweigenbaum, P. (1994). Acquisition and Structuring of an Ontology within Conceptual Graphs, DIAM Rapport interne RI-91-142.
  3. Chaudron, L., and Tessier, C. (1995). SuprA-Cooperation: When Difference And Disagreement Are Constructive, COOP'95 Proceedings of the International Workshop on the Design of Cooperative Systems, 452-470, Antibes-Juan-les-Pins, France, INRIA (Ed).
  4. Clancey, W. (1983). The epistemology of a rule-based expert system: a framework for explanation, Artificial Intelligence, n° 20.
  5. Clancey, W. (1985). Heuristic Classification, Artificial Intelligence, n° 27, 289-350.
  6. Cohen, R., Jones, M., Sanmugasunderam, A., Spencer, B., and Dent, L. (1989). Providing Responses Specific to a User's Goals and Background, International Journal of Expert Systems, Vol. 2, n° 2.
  7. David, J.M., Krivine, J.P., and Simmons, R. (1993). Second Generation Expert Systems: A step Forward in Knowledge Engineering, in J.M. David, J.P. Krivine, and R. Simmons (Eds), Second Generation Expert Systems, Berlin : Springer Verlag, 3-23.
  8. Dieng, R., Corby, O., and Labidi, S. (1994). Agent-Based Knowledge Acquisition, in L. Steels, G. Schreiber, and W. Van de Velde (Eds), A Future for Knowledge Acquisition, 8th European Knowledge Acquisition Workshop, EKAW'94, Hoegaarden, Belgium, 63-82.
  9. Dieng, R. (1995). Specifying a Cooperative System through Agent-Based Knowledge Acquisition, COOP'95 Proceedings of the International Workshop on the Design of Cooperative Systems, 141-160, Antibes-Juan-les-Pins, France, INRIA (Ed).
  10. Ferber, J. (1995). Les Systèmes Multi-Agents, Paris : InterEditions.
  11. De Greef, H., and Breuker, J. (1992). Analysing system-user cooperation in KADS, Knowledge Acquisition 4.
  12. Gréboval, M.H. (1995a). Vers une représentation du dialogue explicatif, available by e-mail (mhgrebo@hds.univ-compiegne.fr).
  13. Gréboval, M.H. (1995b). Representation and construction of explanations: application to the AIDE project, available by e-mail (mhgrebo@hds.univ-compiegne.fr).
  14. Haase, K. (1986). Discovery Systems, M.I.T., A.I. Laboratory, Memo n° 898.
  15. Kolodner, J. (1982). The Role of Experience in Development of Expertise, Proceedings of the Second National Conference on AI, Pittsburgh, PA.
  16. Laird, J.E., Newell, A., and Rosenbloom, P.S. (1987). SOAR: an architecture for general intelligence, Artificial Intelligence, n° 33.
  17. Larkin, J., McDermott, J., Simon, D., and Simon, H. (1980). Models of Competence in Solving Physics Problems, Cognitive Science 4.
  18. Le Moigne, J.L. (1990). La modélisation des systèmes complexes, AFCET Systèmes, Paris : Dunod.
  19. Lemaire, B. (1992). Hypothetical Reasoning within the Blackboard Model for Constructing Explanations, ECAI'92, 10th European Conference on Artificial Intelligence, B. Neumann (Ed), John Wiley & Sons, Ltd.
  20. Matta, N. (1995). Méthodes de Résolution de Problèmes : leur explication et leur représentation dans MACAO-II, thèse de l'Université Paul Sabatier, Toulouse, France.
  21. Mehler, J. (1974). Connaître par désapprentissage, in E. Morin, and M. Piattelli-Palmarini (Eds), L'Unité de l'Homme 2. Le cerveau humain, Paris : Editions du Seuil, 25-37.
  22. Miller, G. (1956). The Magical Number Seven, Plus or Minus Two: Some Limits on our Capacity for Processing Information, The Psychological Review, Vol. 63, n° 2.
  23. Minsky, M. (1988). La Société de l'Esprit, Paris : InterEditions.
  24. Morin, E. (1977). La Méthode 1. La nature de la nature, Paris : Editions du Seuil.
  25. Morin, E. (1986). La Méthode 3. La connaissance de la connaissance, Paris : Editions du Seuil.
  26. Musen, M. (1993). An overview of Knowledge Acquisition, in J.M. David, J.P. Krivine, and R. Simmons (Eds), Second Generation Expert Systems, Berlin : Springer Verlag, 405-427.
  27. Newell, A. (1982). The Knowledge Level, Artificial Intelligence, n° 18.
  28. Paillet, O. (1993). Multiple Models for Emergency Planning, in J.M. David, J.P. Krivine, and R. Simmons (Eds), Second Generation Expert Systems, Berlin : Springer Verlag, 161-173.
  29. Papert, S. (1971). Teaching Children to be Mathematicians vs. Teaching about Mathematics, M.I.T., A.I. Laboratory, Memo n° 249.
  30. Paris, C., Wick, M., and Thompson, W. (1988). The Line of Reasoning versus the Line of Explanation, Proceedings of the AAAI'88 Workshop on Explanation, 4-7.
  31. Pitrat, J. (1990). Métaconnaissance, Paris : Hermès.
  32. Reynaud, C., and Tort, F. (1994). Connaissances du domaine d'un SBC et ontologies : discussion, Proceedings of « cinquièmes Journées Acquisition des Connaissances, Strasbourg ».
  33. Ruffié, J. (1974). Le mutant humain, in E. Morin, and M. Piattelli-Palmarini (Eds), L'Unité de l'Homme 2. Le cerveau humain, Paris : Editions du Seuil, 107-169.
  34. Schoenfeld, A. (1985). Mathematical Problem Solving, Academic Press.
  35. Schreiber, A., Wielinga, B., and Breuker, J. (1991). The KADS Framework for Modelling Expertise, EKAW'91.
  36. Smyth, M. (1995). Human Computer Co-operative Systems - Empowering Users Through Partnership, COOP'95 Proceedings of the International Workshop on the Design of Cooperative Systems, 37-55, Antibes-Juan-les-Pins, France, INRIA (Ed).
  37. Swartout, W., and Moore, J. (1993). Explanation in Second Generation Expert Systems, in J.M. David, J.P. Krivine, and R. Simmons (Eds), Second Generation Expert Systems, Berlin : Springer Verlag, 543-585.
  38. Tambe, M., Johnson, W.L., Jones, R.M., Laird, J.E., Koss, F., Rosenbloom, P.S., and Schwamb, K. (1995). Intelligent agents for interactive simulation environments, AI Magazine, n° 16, available by ftp.
  39. Trousse, B., and Vercors, A. (1992). Contribution à l'Intelligibilité de l'Activité de Conception, ERG-IA 92, Biarritz, France.
  40. Van de Velde, W. (1993). Issues in Knowledge Level Modelling, in J.M. David, J.P. Krivine, and R. Simmons (Eds), Second Generation Expert Systems, Berlin : Springer Verlag, available on the WWW: http://arti.vub.ac.be/~walter/papers/issues/doc/doc.html.
  41. Voß, A., Karbach, W., Drouven, U., and Lorek, D. (1990). Competence assessment in configuration tasks, ECAI 90, 9th European Conference on Artificial Intelligence, L. Aiello (Ed), London ECAI, Pitman, 676-681.
  42. Voß, A., Karbach, W., Coulon, C.H., Drouven, U., and Bartsch-Spoerl, B. (1992). Generic specialists in competent behaviour, ECAI 92, 10th European Conference on Artificial Intelligence, B. Neumann (Ed), John Wiley & Sons, Ltd., 567-571.
  43. Vreeswijk, G. (1995). Self-government in multi-agent systems: experiments and thought-experiments, Department of Computer Science, University of Limburg, P.O. Box 616, 6200 MD Maastricht, The Netherlands, available by ftp (ftp.cs.rulimburg.nl: /pub/papers/vreeswyk).
  44. Wielinga, B., Van de Velde, W., Schreiber, G., and Akkermans, H. (1993). Towards a unification of knowledge modelling approaches, in J.M. David, J.P. Krivine, and R. Simmons (Eds), Second Generation Expert Systems, Berlin : Springer Verlag, 299-335.
  45. Worden, R., Foote, M., Knight, J., and Andersen, S. (1987). Co-operative expert systems, in B. Du Boulay, D. Hogg and L. Steels (Eds), Advances in Artificial Intelligence - II, North-Holland : Elsevier Science Publishers B.V., 511-526.