Toward Adaptive Information Systems: considering concern and intentionality

Alain Cardon1 Franck Lesage2

1LIP6 Paris VI, UPMC, Case 69, 4 Place Jussieu, 75252 Paris Cedex 05, France (Alain.Cardon@lip6.fr)

2PSI-LirINSA, INSA de Rouen BP 08, 76131 Mont Saint Aignan, France (Franck.Lesage@insa-rouen.fr)

Abstract

Unlike problem solvers inferring with symbolic rules, adaptive systems force us to address the notions of concern and intentionality. The effective representation of these categories relies on a peculiar notion of emergence in MAS: the reification of the agents landscape. We propose an architecture for adaptive systems based on interacting agent organizations. Each of them reifies its form (landscape) as an emergence and then stimulates the other organizations. We apply such adaptive systems to Information and Communication Systems.

Introduction

We can point out three cases of problem solving with computerized systems. The most classical is a single actor - in our case, a decision maker - using an Information System. The case is the same when many actors simultaneously access an Information System without interacting with each other. The second case is two actors using a computerized system for cooperative problem solving. They have to express their perceptions concerning the different knowledge they have on the problem, and to adapt their points of view and some facts concerning the evolution of the problem resolution. Finally, the last case is a generalization to numerous actors who have to solve, simultaneously and cooperatively, a problem concerning themselves, with the aid of a distributed computerized system.

In this last case, the Information System allowing the cooperative solving has all the characteristics of badly structured problems [Winograd and Flores(1986)]. The information or knowledge given by each user may not have the same meaning for the others: the exchanged knowledge is one's own. Each piece of exchanged objective information or knowledge has to be linked with many subjective characteristics. In this way, the system can express the meaning of the exchanged information: one kind of meaning for the sender and another for each actor receiving the information. Each piece of exchanged information therefore has a complex form, made of objective facts or characteristics and augmented with the interpretation of these facts: the subjective qualifications expressed by the sender. Thus, any recipient can appreciate the message and its context - in fact, its pragmatics [Eco(1993)] - together with the sender's intentionality.

The Information System, in order to transmit this extended knowledge, has to adapt the knowledge representation to the actors' various intentionalities and concerns. Thus, it has to represent the highly variable structure of the exchanged knowledge, together with the intentionality and concern expressed by the actors. The right way to express such categories of extended knowledge is to endow communications with fundamental categories like intentionality and concern in a real manner, not just "as if" [Dennett(1987)].

The structure of this extended knowledge must therefore be very plastic, to conform to the actors' changing interpretations. In fact, the Information System must be able to adapt its own knowledge representation to the evolution and growth of the extended knowledge during communications. It must adapt its structure, as actors adapt their perceptions of the exchanged knowledge, with its own concern, using its own interpretation process. It will eventually generate a collectively perceived knowledge synthesizing the global meaning of all messages.

We present the characteristics of such an adaptive Information System, performing an interpretative process on the exchanged knowledge. The model we propose is based on distributed and dynamic multiagent systems. A prototype of this system has been developed in Distributed Smalltalk(tm).

Systems for solving symbolic problems vs adaptive systems

Traditionally, the problems addressed in Artificial Intelligence lead to a dichotomy between knowledge of the domain and the rules operating on it. According to this approach, the system is considered as a general problem solver that provides objective results from clearly identified available data describing a part of reality [Winograd and Flores(1986)]. The problem domain is well known and well structured, and the current data allows us to choose a precise sub-domain that will subsequently be studied by the solver. The systems operating in such cases are finely structured solvers, mainly operating on data. Their structure and organization are stable over time, and their answer to any event occurring in the environment does not depend on the current system organization but only on the analyzed event. The typical case of such systems is the "single actor case of Information System".

The case of a single user: systems solving symbolic problems

When a single user communicates with a system, he uses it as a mere factual knowledge database. This knowledge, whatever its form, is represented in the system by symbolic structures. Inference mechanisms select some of these structures and the user understands - by interpreting - the signs that the system displays on the screen after its computation (Figure 1). We are typically in the case of de Saussure's or Frege's dyadic semiotics, where knowledge is represented in a system by experts, symbolically treated and eventually returned to a human operator who will use it.


  
Figure 1: The case of a single user
\begin{figure}
 \begin{center}
 \includegraphics[height=3.5cm,width=2.5cm,angle=270]{fig3.eps} \end{center}\end{figure}

The symbols refer to the real objects they denote [Evaert-Desmedt(1990)]. For the system, knowledge is nothing but an organized set of symbols; for the user, these symbols denote, without ambiguity, objects of the real world. The statement context is usually not present nor considered necessary. The system is made for solving well-defined problems and operates using inferences and pre-defined heuristics as its solving strategy. But the use of such a system by an isolated decision maker often leads to a one-way adaptation of reality to the model of knowledge used by the system, with some trouble in case of inadequacy.

The case of two communicating actors using one system for cooperation

Difficulties occur when we consider two distant users of the same system who must communicate knowledge to set up common decision making. Indeed, the information given by one user may not have the same meaning for the other. This is the case when pragmatics - the context of the message - is absent, when the concern and expectations of the sender are not known. In other words, if only an objective fact is sent and read by the other actor, it is sometimes hard for the latter to figure out what the message exactly means. The speech act [Searle(1969)] is then reduced to a simple symbolic form and may de facto lead to some misunderstanding between the two actors (Figure 2).


  
Figure 2: The case of two users
\begin{figure}
 \begin{center}
 \includegraphics[height=3.5cm,width=2.5cm,angle=270]{fig4.eps} \end{center}\end{figure}

But with time, the questioning between the two users about their concern and their real intentionality may solve the common problem they raised, adapting their points of view about the problem they have to solve cooperatively. They may enter a dialogic loop and adapt their personal understanding of the problem through the exchange of messages toward a common understanding, using multiple statements. Such statements are not simultaneous but sequential: the users send and read messages.

The case of many cooperating actors using a distributed system: toward adaptive systems

This situation is more complicated in the cooperative resolution of problems involving many decision makers operating via networks. The initial situation is not well structured for cooperation because each actor does not know the opinions of the others. The information about the initial situation is partial, neither clear nor certain, and always strongly interpreted by the actors. For decision makers merging different parts of the observations about the problem, the fact that they belong to different institutions and are geographically widespread leads to a very peculiar structuring of the problem domain - in fact, a partial structuring for each of them. The information and knowledge exchanged on the network about the situation are very dependent on the emission context of each knowledge statement: they are constrained by their necessary pragmatics [Cardon(1997)] [Eco(1993)]. The exchanged knowledge is not composed of well-identifiable objective facts that we can easily associate with well-memorized information, but is a complex set of interpretations around some facts perceived by the actors. These objective facts are bound to many subjective characteristics.

The system merging knowledge cannot be reduced to a centralized symbolic solver because the problem is not a priori well defined. The system representing the exchanged knowledge must adapt itself to the evolution and enhancement of the knowledge provided by the decision makers (Figure 3), taking into account not only the objective facts but also the different interpretations of these perceived facts. It must adapt its structure to the evolution of the actors' perceived knowledge.


  
Figure 3: The case of many users
\begin{figure}
 \begin{center}
 \includegraphics[height=\columnwidth,width=6cm,angle=270]{fig5.eps} \end{center}\end{figure}

In the case of two users, we can imagine achieving some kind of sufficient mutual understanding in a relatively short amount of time. The problem becomes much harder in a session involving a greater number of participants. We cannot afford the amount of time necessary for a dialogic adequacy of each user relative to the others, leaving aside the problem of its feasibility. Two new problems arise about the knowledge representation in the Information System:

1.
how can a user reading a message, from the received symbolic form, recover the real object perceived and designated by the author?
2.
how can a system take this knowledge interpretation into account?

First, it seems necessary to introduce pragmatic knowledge representation in the system that is in charge of the communications among actors. An actor emits data that is a representation of a perceived situation, but charged with his intentions, judgments and opinions, according to his culture. He makes an interpretation of some observed fact from reality, linking the object to its designation using an interpretation process, as in Peirce's triadic semiotics [Evaert-Desmedt(1990)] [Peirce(1958)]. A message is never sent for nothing nor in a strictly objective manner. It is thus necessary, as a beginning, to take into account the characteristics of the message pragmatics: the speech act of each message sent to another actor [Searle(1969)]. It is possible to represent a categorization of statement contexts [Eco(1993)] by asking each actor for a qualification of his message, with appropriate and explicit hints: the actor builds a Communicational Data [Cardon(1997)], which in turn qualitatively improves the knowledge of his message.

Secondly, the system must be able to interpret all the simultaneously extended messages (the communicational data) in order to build qualitative, functional and temporal structures through an inner interpretation. Then it sends, at computer and high-throughput network speeds, the result of this interpretation to the different recipients. The system's communicational layer must necessarily operate an artificial interpretation process. Such a system is adaptive.

Thanks to the interpretation process, the system then generates meaning for itself (expressed by the signs it manipulates) during an adaptation relative to the outer world, from which it only takes extended knowledge: it binds the object (the extended knowledge) to the sign (the internal representation) according to some kind of inner intentionality, thus producing a Communicational Knowledge.

Considering what has been previously exposed, we can propose a definition of what an adaptive system is:

An adaptive system has an organization (a structure which evolves with time) that it modifies according to its own perception (a motivated interpretation and conception) of its environment and by taking into account the current state of its organization (it is actually self-referring) [Le Moigne(1990)].

From ontology to plastic architecture in adaptive information systems

In an adaptive communicating system, a communication among actors is composed of the objective subject of the message (a text with drawings, sounds...) and of qualifications about the statement, specifying semantic traits from the actor's pragmatic context [Jackendoff(1983)]. The objects and relations allowing the characterization of the meaning of exchanged messages belong to different categories. For the knowledge representation, we can consider that every message is wrapped in a set of cognitive entities representing the modalities of meaning that the system takes from the information.

The categories specifying the meaning of the exchanged messages are much like the fundamental categories of cognitive systems. They are considered semantically invariant [Jackendoff(1983)]. These categories define the fundamental cognitive elements of the system, representing the meaning of the whole situation for the actors, and were used to build the software architecture.

We have to study evolving dialogs, where the organizational characteristics of the exchanged knowledge are generated only when required. These organizational characteristics are specified in the communication act: the categorization of action in the actor's communication is a qualified act, like the speech act [Searle(1969)]. It is represented with three fundamental categories, composing the Communicational Data [Cardon and Durand(1997)]:

1.
the object of the communication (its subject, for the emitting actor),
2.
the qualification of the communication - the judgment value linked to the object of the message, expressing the pragmatics of the object,
3.
the intensity of the qualification - the importance given by the actor to the qualification of his communication.
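The three categories above can be sketched as a minimal data structure. This is only an illustrative sketch: the field names, types and the salience test are assumptions, not taken from the original system.

```python
from dataclasses import dataclass

# Hypothetical sketch of the three-part Communicational Data described above.
# Field names, types and the salience test are illustrative assumptions.
@dataclass
class CommunicationalData:
    subject: str        # 1. the object of the communication, for the emitting actor
    qualification: str  # 2. the judgment value linked to the object (its pragmatics)
    intensity: float    # 3. importance given to the qualification, e.g. in [0, 1]

    def is_salient(self, threshold: float = 0.5) -> bool:
        """Illustrative test: is the qualification strongly asserted?"""
        return self.intensity >= threshold

cd = CommunicationalData("delay on sector 3", "worrying", 0.8)
print(cd.is_salient())
```

A strongly qualified message like this one would carry more weight in the interpretation process than a weakly asserted qualification.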

The sending actor of a communication expresses, by his act, some intention. Of course, this intention is not directly accessible, because it resides in the subject. But the extension by qualification of the communication, clarified by the actor, will allow the regular representation of his intention in the Communicational Data: using some predefined characteristics of the communication act, we can qualitatively enrich the exchanged knowledge.

We specify that there is no direct correspondence between the actors' qualifications of a message and the communicational knowledge built by the system wrapping the communication. The structure of the extended message is static and very limited. The communicational knowledge is an extension, situating the Communicational Data in its context. An actor specifies his message and encapsulates the text with the characteristics of his intentionality; over time, the information system itself operates an interpretation process, modifying the organization of its local knowledge, and generates the corresponding communicational knowledge. In this way, the significant current state of the adaptive system emerges from the different qualifications wrapping the communications, at the level of the meaning of the perceived situation rather than at the level of the objective facts. The transmission of information and knowledge is, for the system, a modification of its inner organization through an interpretation process.

The construction of an adaptive information system with dynamic agents

The adaptive communicational system has to interpret communications between actors and must exhibit a global representation of them. This characteristic of adaptability involves a special structure for knowledge: it is represented by an organization of multiagent systems (MAS). Each category, sub-category and semantic trait about knowledge is reified in some communicating agent. To each message corresponds a communicational data, and each element in this communicational data activates a specific agent capable of matching these entities. The agents constitute an organizational wrapping of the symbolic knowledge of the message: the communicational knowledge, part of the knowledge environment of each actor. This set of agents is able to self-modify its organization in order to allow the adaptation of the MAS to the meaning of the communication supplied by each actor. This characteristic of self-modification expressing the adaptation is much like the operational closure in autopoietic systems [Varela(1996)], where the environment is made of communicational data and the organization of the system is made of communicating agents, from which the communicational knowledge will emerge.

For each communication, a first static structure is created, according to the qualifications provided by the actor. This structure is a pattern created from the different categories and sub-categories of the ontology, expressing the details of the message qualifications. This pattern initiates the sender's agent system, with the creation of new agents and the modification of existing ones: the local MAS adapts its organization to the communicational data. Note that we do not have the creation of a cognitive pattern corresponding to the communicational data; instead, we have self-adaptation of the agent system to a perceived external event: the communicational data.

We now have to define a motivated behavior for this set of agents, which not only reacts to an external stimulus but interprets the communicational data according to its own intentionality.

The notion of motivated behavior for agent systems

An interpretation process, like the one supposed to take place in human brains, is based on the representation of the motivation initiating the process, starting from some object of attention and producing the designation of the so-called "sign" in semiotics [Nutin(1991)]. Usually, in the domain of cognitive agents, we introduce a motivational level with a specific structure acting as a function representing the will to fulfill some task [McFarland and Bosser(1993)]. This willpower function is quantitatively measurable, allowing the evaluation of the advancement of the agent's task. We are in strictly rational and symbolic fields, using so-called essential variables, describing unlimited numerical domains but having an attracting point, the null value [Ashby(1960)]. The cognitive agent has a finite set of well-described motivations, stated a priori, and a regulation system makes choices among the set of candidate motivations. These motivations are structured in hierarchies according to the different needs [Tzafestas(1995)]. The fundamental need of an agent is to fulfill its task; secondary needs only enable this function. Agents must have a behavior leading to a minimal need, corresponding to the null value of their essential variables, in which the agent is satisfied and its task completed.

If an agent has only one task to accomplish and its will is to fulfill this task, it behaves as an automaton and no generation of new motivations can be distinguished. But if this agent has to self-assign some non-trivial tasks, according to a perceived and conceived situation, it has to operate a cognitive process much like a human would [Nutin(1991)]:

1.
it has to worry about some expressive sign of its context,
2.
it must conceive this sign as a readable representation, understandable for itself,
3.
it must elaborate some adapted plans,
4.
it must act,
5.
it must evaluate the differences between the last evaluations and the transformations in the context in order to analyze its actions.
In this more complex case of motivation, we focus on a point we consider central: before solving some well-stated problem and defining some goals, the agent interpreting something from reality in a motivated act must generate a concern about its environment and about its own organization. This is not the location of some expected form in the context; rather, the agent must situate itself in its environment in order to conceive the presence of the form, identifying itself as a subject conceiving its situation in its world. In fact, we consider as central, in the behavior of an agent achieving the motivated interpretation process, the generation of its situation, understood as a continuous process of "situatedness" in its environment.

The ability to be concerned with something - some fact, some idea - at any moment is the first action of any motivated agent. From that, the agent can develop its conception, decision, action and evaluation process about any physical or spiritual act. If we evade this question of concern, we suppress any reason for motivation [Stafford(1997)]. In such a case, the agent's action is reduced to some mechanical choice in problem solving, without any action performed by the agent for itself, as a sentient subject would do.

We thus think that the system producing a motivation - concern and intentionality - is not expressed by one single finely well-structured agent, but is the result of the self-organization of many agents. The entity able to express some kind of real motivation has a complex organization made of numerous communicating agents. Such a system can self-reorganize only by using the set of agents as a plastic morphology.

We describe below the architecture of the set of agents of such a system as we have developed it.

The agent based system architecture

The system architecture we propose is based on an organization of four MAS (Figure 4). These MAS are strongly linked to each other, one of them being actually distributed in the others.


  
Figure 4: The system architecture
\begin{figure}
 \begin{center}
 \includegraphics[height=\columnwidth,width=4.5cm,angle=270]{fig6.eps} \end{center}\end{figure}

The aspectual agents

Communicational data qualify every stated message by operators. These are fuzzy, inaccurate and insufficient to characterize a rigorous behavior, reliable advice or a globally accurate judgment. They strongly depend on the actor's mental state. In order to fit these dynamic, fragmentary and incomplete characteristics, they will be represented by cognitive agents: the aspectual agents. Every aspectual agent reifies a cognitive category or a semantic trait relative to the discourse on the situation. Its action consists in expressing a tendency, a virtuality, in order to exhibit the category it reifies. It must do so by taking into account its action context: the state of the other active aspectual agents.

In each emitting actor's environment, a first multiagent system will grasp all communicational data in order to extract their characteristics. This must be done according to the actor's last communications, by taking into account the knowledge of all past qualifications of the messages along with the information provided by the specific knowledge-based systems. Every agent represents a precise meaning category and is linked to the corresponding characteristic of the communicational data that urged it to act. The agent system represents what the interpretation system can dynamically perceive from its context, which is made of communicational data and knowledge bases.

Through every actor's interfaces, all communicational data are semantically interpreted by aspectual agents. These agents represent, by their states and numbers, both the characteristics (semantic traits and their links) and the partial meaning of the exchanged message - in other words, its semantic situation relative to the current knowledge of the Information System. The aspectual agents' actions on knowledge bases (plans, logs, scenarios...) will enhance the knowledge inherent to the communicational data.

Let us examine an example in which a communicational data "generates" a set of aspectual agents.

At this stage of the plan, the interpretation of the situation makes it possible to inform the sender about the low chances of a correct acknowledgement of his message. He will then be attentive to the incoming events. Every recipient will not only have the text of the message but also an engaging request in a hierarchically controlled action. He will be able to answer by emitting doubts on the correctness of the predicted situation.

The action of the aspectual agent system is, at the interpretation process level, a pre-conception process [Nutin(1991)] - in other words, the generation of a partial conception of the situation thanks to dynamic knowledge particular to the communicational data (instantiated [sub-]categories). This local characteristic of message understanding means that the organizational meaning of the whole set of communicational data - the effective meaning of the whole situation - is not yet exhibited, since the notions of need or global goal are not yet taken into account.

The aspectual agent is modeled by a four-state automaton: a macro-automaton [Cardon and Durand(1997)]. The first state represents the agent's initialization. In the second state, the agent tries to gather information on its context in order to know whether its development is viable. If so, it switches to the third state, where it tries to enforce its own point of view by communicating in order to alter its competitors. If it succeeds, it eventually reaches the fourth state: the action state. At this point, the agent expresses a significant and permanent tendency. Every state of the macro-automaton is itself modeled by an automaton, called a "local automaton" in the following. This complex structure allows a refined representation of the subjective opinions generated during the communications.

The macro-automaton has four states (Figure 5): initialization, deliberation, decision and action. They correspond to the linear decision pattern [Searle(1969)]:

1.
In the first state, the agent awakens. It tries to determine whether the quantity and strength of the messages are sufficient for the process to be initialized.
2.
In the second state, the agent tries to determine whether its context is favorable. To do so, it asks the other aspectual agents for information on their state. Depending on the answers, it switches to the next state or goes back to the first state.
3.
In the third state, the agent tries to enforce its own point of view on the other aspectual agents and tries to infect the other actors' MAS. It operates on the graph of the other aspectual agents and sends clones of itself into the neighbouring actors' MAS. The aspectual agent is able to kill some weak agents or to strengthen others in order to reach its goal: switching to the fourth state.
4.
Once an agent has succeeded, it switches to this state. Here, it operates on the other agents' macro-automata, trying to alter them. When an agent has reached this state, we can say that the category it reifies characterizes a significant, permanent and pertinent point of view of the actor.


  
Figure 5: The macro-automaton
\begin{figure}
 \begin{center}
 \includegraphics[height=\columnwidth,width=2.5cm,angle=270]{macro.eps} \end{center}\end{figure}
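The four-state macro-automaton described above can be sketched as a simple state machine. The thresholds, the message-strength model and the boolean context signals are assumptions for illustration, not the system's actual implementation:

```python
from enum import Enum, auto

class State(Enum):
    INITIALIZATION = auto()
    DELIBERATION = auto()
    DECISION = auto()
    ACTION = auto()

class AspectualAgent:
    """Illustrative macro-automaton; thresholds and signals are assumptions."""
    def __init__(self, category: str, wake_threshold: float = 1.0):
        self.category = category
        self.state = State.INITIALIZATION
        self.wake_threshold = wake_threshold

    def step(self, message_strength: float, context_favorable: bool,
             enforced: bool) -> State:
        if self.state is State.INITIALIZATION:
            # 1. awaken only if the incoming messages are strong enough
            if message_strength >= self.wake_threshold:
                self.state = State.DELIBERATION
        elif self.state is State.DELIBERATION:
            # 2. query the other agents; fall back if the context is unfavorable
            self.state = State.DECISION if context_favorable else State.INITIALIZATION
        elif self.state is State.DECISION:
            # 3. try to enforce its point of view (clone itself, weaken competitors)
            if enforced:
                self.state = State.ACTION
        # 4. in ACTION, the reified category is a significant, permanent tendency
        return self.state

agent = AspectualAgent("urgency")
agent.step(1.5, False, False)            # strong messages: awaken
agent.step(0.0, True, False)             # favorable context: deliberation succeeds
print(agent.step(0.0, True, True).name)  # enforcement succeeds
```

The fall-back transition from deliberation to initialization mirrors the paper's description of an agent retreating when its context is not favorable.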

Each state is implemented by an augmented transition network (ATN) which filters the analyzed message's category by lexically and semantically analyzing the communicational data. Each time an agent receives a message or a part of a message, it computes the proximity between the semantic traits of the message and its own category. According to this proximity, and by taking into account the message intensity, a transition may be fired. The semantic proximity of the agent's category is given by a lexicon in matrix form. A special structure in the automaton represents the states' development contexts (whether the switching from the previous state has been hard, fast, ...); this constitutes the agent's velocity.
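The proximity computation driving these transitions might be sketched as follows. The vocabulary, the lexicon weights and the cosine measure are invented for illustration; the paper only states that proximity comes from a lexicon in matrix form:

```python
import math

# Hypothetical lexicon for an "urgency" category: weight of each semantic
# trait for the agent's category (the paper's matrix-shaped lexicon).
lexicon = {"delay": 0.9, "risk": 0.7, "resource": 0.1, "deadline": 0.8}

def proximity(traits: dict) -> float:
    """Cosine similarity between a message's semantic traits and the lexicon."""
    dot = sum(lexicon.get(t, 0.0) * w for t, w in traits.items())
    n_msg = math.sqrt(sum(w * w for w in traits.values()))
    n_lex = math.sqrt(sum(w * w for w in lexicon.values()))
    return dot / (n_msg * n_lex) if n_msg and n_lex else 0.0

def transition_fires(traits: dict, intensity: float,
                     threshold: float = 0.5) -> bool:
    # a transition may fire when proximity, weighted by the message
    # intensity, is high enough
    return proximity(traits) * intensity >= threshold

print(transition_fires({"delay": 1.0, "deadline": 0.5}, intensity=0.9))
```

A message about delays and deadlines, strongly asserted, fires the transition; a weakly asserted message about an unrelated trait would not.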

The notion of agents landscape

An agents landscape is the whole set of communicating agents, which can be understood as follows. It is:

1.
the set of reifications, by agents, of the semantic traits contained in the messages received and sent by actors,
2.
the set of relations between agents, representing the semantic proximities of the reified categories,
3.
all the structures allowing these agents to be sorted according to typical characteristics, determining the general characteristics of the states and relations of the agent set.

The landscape represents the messages' characteristics and qualifications, in other words, the semantic traits characterizing the subjective perceptions and the conception of the situation according to the actors.

Such an agents landscape has, for us, the very same significance and dynamic characteristics as the mental representation that an actor may have of a real landscape. The actor, through his observations, perceives the landscape characteristics and conceives a particular mental representation from them: he extracts some characteristics, skips others, binds, abstracts, aggregates or disconnects them, while remaining ignorant of numerous forms. The structures exhibited in the observed reality have some dynamic correspondence with those of the observer's mental representations. The communicational data made of text, canvases, images and noises, considered as exterior to the system, has a dynamic representation in the agent system.

An agents landscape is graphically represented in the system by specific projections according to characteristics chosen by the actor among all the semantic categories of the ontology. Generally speaking, these projections represent the number and strength of the agents with dual characteristics (increasing seriousness vs. problem being solved, ...) over time, from the beginning to the current state of the communications.

Every time a message is sent or received, the aspectual agents operate for themselves, taking into account close agents that reinforce them or remote ones that weaken them, according to the semantic proximities. They also alter the agents landscape, representing the global state of this landscape: the dynamic memory of all the emitted categories or semantic traits.

We can note that this landscape has, apart from its semantic characteristics, geometrical shapes that represent regularities, ruptures and salient aspects in the evolving agent set. This geometrical aspect, linked with the messages' semantics, is introduced by placing the agents in a metric space. Each agent is represented by a state vector holding its typical characteristics. This set of vectors is represented in $\mathbb{R}^{n}$, with the use of various distance measures. These distances denote both the semantic traits and the distinguished geometrical shapes. We will give enough importance to this geometrical aspect to achieve an analysis of the agents landscape.
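The representation of agents as state vectors in a metric space can be illustrated with a small sketch. The agent names, feature values and the choice of the Euclidean metric are assumptions; the paper only states that various distance measures are used:

```python
import math

# Hypothetical state vectors in R^3 for three reified categories.
agents = {
    "urgency":   [0.9, 0.8, 0.1],
    "deadline":  [0.8, 0.9, 0.2],
    "logistics": [0.1, 0.2, 0.9],
}

def distance(u, v):
    """Euclidean distance between two agent state vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

# semantically close agents reinforce each other; the smallest distance
# points at the most closely related pair of reified categories
names = list(agents)
pairs = sorted(
    (distance(agents[a], agents[b]), a, b)
    for i, a in enumerate(names) for b in names[i + 1:]
)
closest = pairs[0]
print(closest[1], closest[2])
```

Ruptures in the landscape would show up as large gaps in this distance structure, which is what the morphology agents of the next section look for.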

The morphology agents

The aspectual agents landscape constitutes, for the interpretation system, a pre-conception of the meaning of the communicational data. A second agent system will, by operating on this landscape, exhibit typical shapes and traits, taking into account the interpretation system's elementary motivations.

The aspectual agents are represented by vectors indicating their characteristics. The shapes located in the morphology of the agents landscape represent ways of reading the generating characteristics of that landscape. The distinction of specific shapes in the landscape represents the interpretation system's intention to consider some geometrical elements. These geometrical elements semantically correspond to meaning ruptures, diverging points of view and special points of view in the set of communicational data. These shapes are distinguished by morphology agents, whose task is to exhibit what might draw attention in the structure of aspectual agents. Their action is thus a mechanism generating tendencies to "make sense" of the available aspectual agents landscape.

In the previous example, the morphology agents that are activated on the aspectual agents landscape will be:

Morphology agents operate on the vectors representing the aspectual agents through distributed analysis algorithms, representing the distinguished characteristics in spaces of reduced dimension. A cell network, whose input is the set of vectors, produces the input of the morphology agents in order to inform and activate them. These cells are the mediators between the vectors representing the aspectual agents and the morphology agents. The reason for doing so is to take into account only the alterations of the aspectual agents' morphology and to operate only on the agents landscape's modifications.
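A minimal sketch of such a mediator cell, under the assumption that detecting an alteration simply means comparing the current state vector with the last one seen; the class and method names are illustrative, not the authors':

```python
class MediatorCell:
    """Forwards an aspectual agent's state vector only when it has changed,
    so that morphology agents react solely to modifications of the landscape."""

    def __init__(self):
        self._last = None

    def observe(self, vector):
        v = tuple(vector)
        if v != self._last:
            self._last = v
            return v   # changed: activate the morphology agents
        return None    # unchanged: nothing to propagate
```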

Since the main goal of the morphology MAS is to reify the geometrical shapes formed in the aspectual MAS, we have to find a way to plastically follow the evolution of that MAS. There are two kinds of agents in the morphology MAS: the pre-morphology agents and the actual morphology agents.

The pre-morphology agents are the possible precursors of future morphology agents. They tend to aggregate into morphology agents, which will in turn try to aggregate into chreods. The mechanism is rather simple. A pre-morphology agent has three characteristics: its position in the universe, its weight (or strength) and its action range. When a pre-morphology agent appears, it is submitted to the possible attraction of already existing agents (Figure 6). Then, step by step, bigger and bigger "morphology wells" appear and reify the actual geometrical shapes of the aspectual MAS.


  
Figure 6: Agents landscape morphology at a given time
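The attraction mechanism described above can be sketched as follows, assuming (our reading, not the paper's specification) that a newcomer is captured by the nearest existing agent whose action range covers it, and that capture adds the newcomer's weight to the attractor:

```python
import math

class PreMorphologyAgent:
    """A pre-morphology agent: position, weight (strength) and action range."""
    def __init__(self, position, weight=1.0, action_range=1.0):
        self.position = list(position)
        self.weight = weight
        self.action_range = action_range

def absorb(wells, newcomer):
    """Attract the newcomer into the nearest well in range, or seed a new well."""
    best, best_d = None, None
    for agent in wells:
        d = math.dist(agent.position, newcomer.position)
        if d <= agent.action_range and (best_d is None or d < best_d):
            best, best_d = agent, d
    if best is None:
        wells.append(newcomer)      # no attractor in range: a new well seed
        return newcomer
    best.weight += newcomer.weight  # absorbed: the morphology well deepens
    return best
```

Step by step, repeated calls to `absorb` grow a few heavy wells that reify the shapes of the aspectual MAS.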

Aspectual agents exhibit punctual and particular characteristics. The morphology agents must try to exhibit general shapes, a priori pertinent in the aspectual agents landscape, themselves reifying the semantic traits. In order to extract such less punctual characteristics, the morphology agents have to bind and associate themselves so as to generate chreods. A chreod is an emergence in a set of similar and ramified agents, significant of a general geometrical shape, that must drive the generation of meaning. In the interpretation process we develop, it is the expression of a possible motivation, a virtual consideration of some general characteristics of the aspectual agents, that is, of the meaning of the communicated facts.

A chreod is made of morphology agents aggregated in an emerging way. But this is a new kind of emergence, one that applies to adaptive systems:

Emergence is the acknowledgement, at a certain time, of a global stabilization aspect in the organisation's modifying process.

The chosen chreod shape is that of a star, centered on a morphology agent more or less strongly linked to other agents. It is generated through communication between morphology agents, which try to find bindings with the others by reinforcing some relations and destroying others. This mechanism resembles the struggle among aspectual agents trying to bind to each other, but with a more aggregative tendency.
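A hedged sketch of such a star-shaped chreod, assuming (illustratively) that link strengths are plain numbers and that a binding is destroyed once its strength falls to zero:

```python
class Chreod:
    """Star-shaped aggregate: a central morphology agent linked, with varying
    strength, to peripheral ones; links are reinforced or destroyed over time."""

    def __init__(self, center):
        self.center = center
        self.links = {}  # peripheral agent id -> link strength

    def reinforce(self, agent_id, delta=1.0):
        self.links[agent_id] = self.links.get(agent_id, 0.0) + delta

    def weaken(self, agent_id, delta=1.0):
        if agent_id in self.links:
            self.links[agent_id] -= delta
            if self.links[agent_id] <= 0.0:
                del self.links[agent_id]  # the binding is destroyed
```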

In the simple example we present, the chreods are lists of morphology agents, instantiated according to four types: singularities, contradictions, concordances and non-situability. In these emerging groups, the morphology agents reinforce themselves in order to strengthen their action of shape distinction. It works the same way as when human concern drives observation by focusing on a point of view instead of a neutral and objective reading of reality as a whole.

We can note that if the aspectual agents landscape constitutes some kind of pre-conception of the situation expressed by the communicational data, the morphology agents' role is to make some salient traits of this landscape emerge. In other words, it is to exhibit what will constitute the triggering aspects of the meaning of the landscape.

It is thus really an artificial motivation process, prior to any reasoning (part one of the motivation process in §5.2), where the system has to set norms, define thresholds, and exhibit geometrical traits, differences and similarities, so that it can proceed with a well-conceived rational elaboration. This part has been particularly studied in the system we have developed.

The analysis agents

A third agents system will consider the chreods generated by the morphology agents and will produce an analysis from these particularities, in order to generate a global perception of the aspectual agents landscape, in other words, the situation as it is characterized by the communicational data.

These analysis agents will provide the characteristics of the aspectual agents landscape's organisation, that is, the general and significant traits of the communications, the correlations between factual elements, the inadequacies with respect to the plans, and the divergences in the actors' perception of the situation.

They will mainly use the set of chreods to conceive the situation as it is represented by the communications among the actors, by situating it in the phenomenon's time and space: they will generate analyses in order to conceive the situation in its very evolution, by referring to knowledge and reference scenarios.

They will propose probable evolutions of the situation expressed by the set of communicational data, by using knowledge bases made of pre-established plans and historical data, and by making hypotheses. These agents operate according to rational goals, starting from the chreods and analysing the aspectual agents according to what they want and can determine.

They constitute a system of rational analysis agents operating on the communicated situation and are made of reduced knowledge bases. This corresponds to part five of the motivation process (§5.2). They form a part that is easily upgradable since it is made of independent agents. Our system currently has a limited number of such agents.

The analysis agents system is made of reactive agents containing validated information on the global characteristics of the situation. Almost all of these agents, which follow the categories of the ontology, have the structure described in Figure 7.

They are informed by constructing agents that operate like small specific knowledge-based systems. These constructing agents work in a concurrent way to inform the various categories of analysis agents. This is thus like distributed expert systems, in which the rules that operate on facts are replaced by concurrent communicating agents working on upgradable and adaptive structures, the analysis agents.
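A minimal sketch of this informing mechanism; the slot names and rule closures are illustrative assumptions, not taken from the paper, and the plain loop stands in for what the system does concurrently:

```python
class AnalysisAgent:
    """Reactive agent holding validated information about the situation."""
    def __init__(self, category):
        self.category = category
        self.slots = {}  # component name -> informed value

class ConstructingAgent:
    """A small knowledge-based worker informing one slot of an analysis agent."""
    def __init__(self, slot, rule):
        self.slot = slot
        self.rule = rule  # callable: facts -> value, or None if inconclusive

    def inform(self, facts, analysis_agent):
        value = self.rule(facts)
        if value is not None:
            analysis_agent.slots[self.slot] = value

action = AnalysisAgent("action")
workers = [
    ConstructingAgent("context", lambda f: f.get("weather")),
    ConstructingAgent("evolution",
                      lambda f: "worsening" if f.get("risk", 0) > 5 else "stable"),
]
facts = {"weather": "storm", "risk": 7}
for w in workers:  # concurrent in the real system; sequential in this sketch
    w.inform(facts, action)
```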


  
Figure 7: Analysis agent structure

In the same way, the meta-rules operating on the selection of the rules are replaced by the actions of the chreods and of some decisional agents on the aspectual agents.

For instance, an analysis agent of type "action", categorizing an element of the situation corresponding to a physical action conducted on the ground, has the structure shown in Figure 8. It states the categories of the analysis agent and the actions the constructing agents will have to perform to inform it.

In the following example, the analysis agents activated by the chreods will be of four types:


  
Figure 8: The structure of an "action" analysis agent

Interpreting the situation amounts to placing the engagement of actions in its context: holiday rush, celebration day, no similar meteorological case for this period of the year. We must then make a precise summary of the induced risks and produce a possible evolution schema, should some parameter change during the first communications.

The analysis agents then have to determine the general meaning of the situation as it is perceived and conceived by the actors in the communicational data. They do that by using a dynamical representation of these communicational data and the agents extracting the morphological traits. The resulting agents system exhibits a very reduced number of agents, in contrast with the aspectual agents system, which hosts many. These agents represent the typical traits of the situation. Their morphology is thus simple, but relies, through the dynamical links of every analysis agent, on the aspectual agents, the chreods, the various constructing agents and the knowledge-based reactive agents. This morphology thus represents, by emergence, the global meaning of the situation, with its main traits justified by all the necessary elements. It is really the generation of the global meaning of the situation as it is perceived by the actors.

However, the system thus represented would only achieve a linear generation process of the global meaning of the situation, and would preserve the different actors' points of view without operating according to a self-assigned goal. A last, fundamental agents system will endow the interpretation system with a behavioral concern.

The analysis MAS is being worked on: the agents' structures are stabilized and we are currently implementing them.

The decisional agents

The set of the three agents systems (aspectual, morphology and analysis), with their associated structural agents, the situating agents, operates a linear interpretation process on the representation of the situation as proposed by the communicational data. In order to effectively operate an interpretation process, the system must make the interpretation choices on its own. Clearly, the choice mechanism cannot be located in the situating agents (they must neither judge nor be judged), but must be represented by the action of a specific agents system at the level of the situation MAS's morphology: the decisional agents. This corresponds to the representation, in the system, of decisional patterns. These agents make actions at both high and low levels, by operating on some situating agents and even on their morphology. This ability to focus their actions on two representation levels is what makes these agents original.

These decisional agents will be able to operate on chreods in order to achieve a choice on which analysis agents are to be activated. They will create new chreods, inhibiting some, reinforcing others, by taking into account the situating agents landscapes and their states.

They will also operate on the analysis agents, by altering the actions of their constructors and thus selecting the components to inform and to study. They will operate on the aspectual agents by altering their states. They will thus operate in a global manner, making decisions through the inhibition or the reinforcement of some developments, considering the three agents landscapes as a whole.

These decisional agents actually achieve the coherence of action of the three MAS, by closing a loop in the interpretation process. They typically operate according to the habits acquired by the system, taking into account the different knowledge categories conceived and hence linked. One way of doing this is to see the decisional agents as polymorphic agents, that is, agents capable of assuming the guise of any other agent. This process, hard to stabilize, is the system's characteristic allowing the representation, by the system itself and according to its own morphology, of the concern for generating one interpretation process instead of another. The interpretation decision taken by the set of MAS is again an emergence that leads the analysis agents to operate on the chreods via the decisional agents, acting as decisional patterns. The decision making must typically be made of numerous points of view. The stabilization of the interpretation system thanks to the decisional agents is being worked on.

There is only one kind of decisional agent for the whole system, made of a macro-graph with four states. It allows the versatility of the agents, and their action is led according to these four principles:

In our example, a decisional agent operating according to the second principle can lead the analysis agents to consider that the situation is definitely controlled since the Préfet issues orders; should the situation degenerate, the system will consider it the other actors' fault. However, another decisional agent operating according to the third principle might be led to consider the inadequacy between the operations and their executions, and will systematically search for contradictions, thus neglecting the hierarchical management of the situation.

Thus the motivation process of interpretation, produced by the decisional agents in the other agents systems (they might struggle, work together, ...), leads the system to produce a more or less partial analysis that is not strictly rational but driven by preoccupations. We consider such systems realistic, in the sense that the interpretation process operates on abstract components of the system's reality, the morphology agents, and only on predefined entities: the categories of agents. We are then able to study the mutation (as in genetic algorithms) of the decisional agents' behavior, a mutation thus endowing the interpretation system with some characteristics of artificial consciousness.

The decisional agents are still under construction and will heavily depend on the decision makers' wishes concerning the system behavior.

Conclusion

A prototype of such a system is currently being extended for effective use in the industrial area of Le Havre (France). This prototype has successfully passed a first test in which the first stages of an emergency situation were experienced with the aid of the system (a simulation based on the recorded events of an accident). In this prototype, actors exchange subjective opinions about a complex and fluctuating phenomenon in order to make cooperative decisions. By doing so, some useless confrontations, misunderstandings and personal conflicts should be prevented. So should some crises within the crisis, which are among the worst events that can occur in emergency situations. Such a model may be used in any social organization that has a computerized communication network, whenever we want to express human actors' intentions to improve the general management (financial flows, knowledge management, ...).

References


Ashby(1960)
W.R. Ashby.
Design for a brain: the origin of adaptive behavior.
Chapman and Hall, London, 1960.

Cardon(1997)
A. Cardon.
A multi-agent model for co-operative communications in crisis management system: the act of communication.
In Proceedings of the 7th European-Japanese Conference on Information Modelling and Knowledge Bases, pages 111-123, May 1997.

Cardon and Durand(1997)
A. Cardon and S. Durand.
A model of crisis management system including mental representations.
In Proceedings of the AAAI Spring Symposium, March 1997.

Dennett(1987)
D. Dennett.
The intentional stance.
Cambridge Press, MIT, 1987.

Eco(1993)
U. Eco.
Sémiotique et philosophie du langage.
PUF, 1993.

Evaert-Desmedt(1990)
N. Evaert-Desmedt.
Le processus interpretatif.
P. Mardaga, 1990.

Jackendoff(1983)
R. Jackendoff.
Semantics and Cognition.
Cambridge, M.I.T. Press, 1983.

Le Moigne(1990)
J.-L. Le Moigne.
La Modélisation des Systèmes Complexes.
Dunod, Paris, 1990.

McFarland and Bosser(1993)
D. McFarland and T. Bosser.
Intelligent behavior in animals and robots.
MIT Press, Cambridge, 1993.

Nutin(1991)
J. Nutin.
Théorie de la motivation humaine.
PUF, 1991.

Peirce(1958)
Ch.S. Peirce.
Collected Papers, Vol 7, 8.
Cambridge, Massachusetts, Harvard University Press, 1958.

Searle(1969)
J.R. Searle.
Speech Acts.
Cambridge University Press, 1969.

Stafford(1997)
S.P. Stafford.
Caring about knowledge: the importance of the link between knowledge and values.
In Proceedings of the AAA Spring Symposium, May 1997.

Tzafestas(1995)
E. Tzafestas.
Vers une systémique des agents autonomes: des cellules, des motivations et des perturbations.
Thèse de doctorat, Université Paris VI, 1995.

Varela(1996)
F. Varela.
Organism, a meshwork of selfless selves.
In East-West Symposium on the Origins of language, 1996.

Winograd and Flores(1986)
T. Winograd and F. Flores.
Understanding Computers and Cognition: A New Foundation for Design.
Ablex Publishing, Norwood, New Jersey, 1986.
Franck Lesage
2/26/1998