Constructive Interactions

The new paradigm of “knowledge construction using experiential based and collaborative learning approaches” is an outstanding opportunity for interdisciplinary research. This document is an attempt to introduce and exemplify, as far as possible in the lexicon of the “social sciences”, considerations and tools belonging to “artificial intelligence” (e.g. the machine learning tradition). In the paper we first draw a conceptual framework for rational agents in conversational interaction; we then use this framework to describe the processes of co-building ontologies, co-building theories, social interactive learning ... as examples of constructive interactions; finally, we give a brief description of a few AI methods and tools which might help, or may even be indispensable, to ensure the success of the interactive process.


INTRODUCTION
The intertwining of cultures in a society of planetary extension, the progressive fragmentation of everyone's daily life, and the omnipresence of communication networks and computing machines have induced a radical paradigm shift with an impact on every aspect of today's personal, social, cultural and economic processes:
- people and computers meet through the Internet, thereby assuming the role of "agents" in conversational interactions;
- the idea of static knowledge, which might be enclosed in universal encyclopedias before being delivered to the masses, is progressively replaced by the notion of dynamic, interactive, social knowledge construction, based on a consensus reached by means of successive cycles of acceptance, refutation and refinement of shared knowledge inside a group;
- learning is therefore no longer considered as "knowledge transfer" (within a behaviouristic or a cognitive paradigm, at choice), but rather as "knowledge construction using experiential based and collaborative learning approaches in a contextualized, personalized and ubiquitous way", as it appears in the ELeGI project, currently under negotiation.
This paradigm shift is an outstanding opportunity for interdisciplinary research. Notions like "interaction", "collaboration" and "learning", which historically belong both to the "social sciences" and to "artificial intelligence", become central. However, it is still quite unclear whether, and how, the meanings attributed by these and other scientific approaches to the same phenomena, fundamental for the future of our societies, will converge. This document is an attempt to introduce and exemplify, as far as possible in the lexicon of the "social sciences", considerations and tools belonging to "artificial intelligence" (e.g. the machine learning tradition). By doing this, we wish to support the argument, hardly accepted by the general public, that current AI methods and tools, when they respect specific realistic constraints emerging from the observation of human communities engaged in the construction of shared meanings, are indeed of invaluable help to facilitate, if not to enable, the convergence of the process, and therefore the achievement of important results, among which the learning of complex concepts and skills by humans. This approach may be synthesized as a view of human learning stimulated by doing: the actions being those necessary and sufficient for constructing shared meanings from real observations of experienced phenomena.
The position paper is organized in three parts:
i. a conceptual framework for rational agents in interaction: definitions and a scenario;
ii. the processes of co-building ontologies, co-building theories, social interactive learning ... seen as examples of constructive interactions;
iii. a brief description of a few methods and tools which might help, or may even be indispensable, to ensure the success of the interactive process.

AN INITIAL SCENARIO OF RATIONAL AGENTS IN INTERACTION
To start with an elementary scenario, let us consider three "agents" looking at a collection of geometrically shaped coloured objects. Assume the three agents are motivated and have indeed decided to build a language to describe them.
• "gf1" is an agent able to see shapes (and not colours). "gf1" will naturally classify the objects by shape and give a name to the resulting classes. A possible classification by "gf1" is: SQUARE, TRIANGLE.
• "gf2" is another agent, equally able to see shapes and equally unable to see colours. She or he produces the following class names: CARRÉ, TRIANGLE.
• "rf" is a third agent who cannot see shapes, but can see colours. He identifies and names the following classes: RED, GREEN, BLUE.
We wish to build a "framework" where unambiguous, formally defined protocols can be specified, and where the following questions find answers that allow us to validate its predictive power:
a. will "gf1", "gf2" and "rf" be able to build a communication language together?
b. if YES, will they be able to build together a theory concerning the objects (for instance, the expression of a particular relation between colour and shape)?
c. if YES, will they be able to teach this theory to another agent, and how?
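As a purely illustrative aid (ours, not part of any formal framework of the paper), the opening scenario can be sketched in a few lines of Python; the `Obj` class, the agent functions and the sample `world` are invented names:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Obj:
    shape: str   # "square" or "triangle"
    colour: str  # "red", "green" or "blue"

def gf1(o):  # sees shapes only, names them in English
    return {"square": "SQUARE", "triangle": "TRIANGLE"}[o.shape]

def gf2(o):  # sees shapes only, names them in French
    return {"square": "CARRÉ", "triangle": "TRIANGLE"}[o.shape]

def rf(o):   # sees colours only
    return {"red": "RED", "green": "GREEN", "blue": "BLUE"}[o.colour]

world = [Obj("square", "green"), Obj("triangle", "red"), Obj("square", "blue")]

def classes(agent):
    """The partition (by class name) an agent induces on the world."""
    return {agent(o) for o in world}
```

"gf1" and "gf2" induce the same partition of the world under different names, while "rf" induces a different partition; the questions above ask whether such agents can nevertheless come to understand each other.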
In order to start, we assume that:
• there is a "physical world", where real experiences take place. This allows us to say that "gf1", "gf2" and "rf" are looking at the same REAL objects, although they may see, and therefore describe, them in different ways.
• there are "intelligent entities", able to put these experiences in order, to link some experiences to others, and to build "meaning" upon all this. AI researchers usually call them "agents", independently of their human or artificial nature. We are going to adopt this habit, not because we have an anthropomorphic view of software (or a mechanistic view of human thought), but rather for simplicity. Agents and objects are, of course, part of the REAL world.
The very basis for interaction is this "physical" or REAL world: agent "gf1" may select some objects and give them to "rf" without words. Whenever interaction inside a group is shown to be successful (or unsuccessful) through REAL experiences, we shall talk of PRAGMATIC PROTOCOLS.
Interactions of intelligent entities through pragmatic protocols may allow the emergence of SYNTAX, which can be described:
• in the "social sciences", as the "grammatical relationships among signs, independently of their interpretation or meaning";
• in "artificial intelligence", as the level where "well-formed expressions" are built and recognized; when a programme is seen as a set of expressions, the syntax is checked by the interpreter or the compiler.
The syntactic level includes almost the totality of "informatics", roughly described as the discipline of defining, using and processing formal languages as abstract models of reality.
One of the powerful paradigms of Artificial Intelligence is that of "Multi-Agent Systems" [1]. Here we find "messages" between "agents" respecting the (public) rules of "communication languages", so that collaboration may take place; and we find "expressions" inside "agents" respecting the (private) rules of "description languages", so that abstraction may occur. SQUARE and TRIANGLE are concepts in a description language for "gf1". When "gf1" and "gf2" come to the conclusion that "SQUARE = CARRÉ", this correspondence might become a concept belonging to a communication language between them.
In order to talk about "interpretation and meaning", we must evoke the SEMANTIC level, which focuses on the signification of signs or symbols, as opposed to their formal relations. "gf1" interprets the objects as shapes, while "rf" interprets them as colours. One point we wish to underline is that the other agents have NO ACCESS to "gf1's" or "rf's" mind/semantic level; they are only allowed to guess, with the help of protocols relying on PRAGMATICS and SYNTAX. That is to say, agents cannot interact directly at the SEMANTIC level; they need the mediation of both the REAL and the SYNTAX levels.
In the simplified framework we need for the purpose of this paper, a RATIONAL AGENT is:
• able to interact with the REAL world through interfaces (eyes, mouth, keyboard, monitor ...);
• able to describe the objects of the real world, as well as to interact inside groups through messages, using languages at the SYNTAX level;
• able to integrate private experiences, to develop abstractions, and to give a meaning to received messages at the SEMANTIC level.

INTERACTIVE CONSTRUCTION
Before giving examples of constructive interaction, let us come back to our initial scenario and split it into two simplified situations, case 1 and case 2. In case 1, "gf1" is a green frog able to see shapes and giving them English names, while "gf2" is another green frog able to see shapes but who gives them French names.
We imagine a pragmatic protocol where these two agents watch the same objects and stick labels on them, so that each one can simultaneously see the objects and their associated labels. Because they basically classify the objects in the same way, according to their shapes, we may hope that they are going to understand each other. More precisely, we bet that they will be able to build, at the SEMANTIC level, a correspondence between English and French labels, which can be described at the SYNTAX level as the first step towards sharing an ontology. We do not intend to give a full discussion of these two cases, but we wish to focus on a few points:
• "ontologies" belong to the SYNTAX level, and the "checking" as well as the co-building of ontologies implies PROTOCOLS involving a shared communication language; when no communication language pre-exists, the REAL level is the only reference and a PRAGMATIC PROTOCOL is required.
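A minimal sketch of the case-1 pragmatic protocol, under our own simplifying assumptions (the function name and sample labels are illustrative): each agent observes the pairs of labels stuck on the same object and abduces a correspondence wherever the co-occurrence is, so far, unambiguous.

```python
from collections import defaultdict

def abduce_correspondence(labelled_pairs):
    """labelled_pairs: (my_label, other_label) observed on the same object."""
    seen = defaultdict(set)
    for mine, theirs in labelled_pairs:
        seen[mine].add(theirs)
    # keep only the unambiguous correspondences found so far;
    # the hypothesis remains revisable as new experiences arrive
    return {mine: next(iter(others))
            for mine, others in seen.items() if len(others) == 1}

pairs = [("SQUARE", "CARRÉ"), ("TRIANGLE", "TRIANGLE"), ("SQUARE", "CARRÉ")]
mapping = abduce_correspondence(pairs)
```

The resulting mapping is exactly the first step towards a shared ontology described above: a syntactic correspondence built from purely pragmatic evidence.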
GRID Infrastructure to support future technology enhanced Learning
• the definition of an interaction protocol between a group of rational agents relies on the answers given to the following questions:
i. the question of horizon: does each agent have a local horizon (access to only partial information) or a global horizon (access to complete information)?
ii. the question of memory: are all the examples given simultaneously, or are they sequential events which have to be stored by the agent, and for how long?
iii. the question of the starting point: do the agents of the group already share a language, with or without derivation rules? ...
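The three questions above can be read as parameters of an interaction protocol. The following sketch (field names are our own, purely illustrative) records one possible setting:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ProtocolSetting:
    horizon: str                # "local" or "global"
    simultaneous: bool          # all examples given at once, or sequentially
    memory_span: Optional[int]  # how long stored events are kept (None = unbounded)
    shared_language: bool       # do the agents already share a language?

# the case-1 situation of the previous section, in these terms
case1 = ProtocolSetting(horizon="local", simultaneous=True,
                        memory_span=None, shared_language=False)
```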
We are now able to define constructive interactions as situations in which a given protocol allows a group to develop common knowledge, i.e. to build and stabilize a new syntactic corpus through a conversational process involving each agent's semantics. We observe two steps: in Figure 4 (step 1), each agent makes his own abduction; in Figure 5 (step 2), each agent takes into account the other's abduction and proceeds to a revision of his own theory. In this example two agents are co-building a theory, following a protocol defined by:
- local horizons;
- simultaneous examples;
- starting point: "gf1" and "gf2" share an ontology and a formal language with the expressive power of first-order logic.
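The two steps can be sketched as follows; this is our own toy rendering, not the protocol of [4]: each agent abduces "all X are Y" rules from its local examples, then drops the rules the other agent's experience refutes.

```python
def abduce(examples):
    """Step 1: hypothesise rules (size, colour) holding on the local examples."""
    rules = set()
    for size in {s for s, _ in examples}:
        colours = {c for s, c in examples if s == size}
        if len(colours) == 1:        # no local counterexample
            rules.add((size, next(iter(colours))))
    return rules

def revise(my_rules, other_examples):
    """Step 2: keep only the rules the other agent's experience does not refute."""
    return {(s, c) for s, c in my_rules
            if all(c2 == c for s2, c2 in other_examples if s2 == s)}

gf1_sees = [("little", "green"), ("large", "blue")]
gf2_sees = [("little", "green"), ("large", "green")]
step1 = abduce(gf1_sees)             # gf1's own abduction
step2 = revise(step1, gf2_sees)      # revision after hearing gf2
```

Here "gf1" abduces both "all little objects are green" and "all large objects are blue", but only the first survives the confrontation with "gf2's" experience.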
Here again, the aim of this paper is not to give a full discussion of the example, which would lead us to the introduction of epistemic logics [5][6], but rather to introduce the general framework in which AI methods and tools can be compared and combined.
In this framework, social supervised learning basically follows the same protocol as the co-building of ontologies; the particular point is that the group feeds the learner with examples in order to let him make the correct abductions. A more extensive discussion of this kind of learning can be found in [7][8].
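A deliberately naive sketch of this idea, under our own assumptions (the learner simply memorizes the positives it has seen; real learners of [7][8] generalize): the group presents labelled examples until the learner's abduction coincides with the target concept.

```python
def learner_hypothesis(shown):
    """The learner's abduction: here, just the set of positive examples seen."""
    return {x for x, positive in shown if positive}

def teach(target, universe):
    """The group feeds examples until the learner's hypothesis matches the target."""
    shown = []
    for item in universe:
        shown.append((item, item in target))
        if learner_hypothesis(shown) == target:
            break
    return learner_hypothesis(shown)
```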

AI METHODS AND TOOLS
The following diagram is a simplified representation of our framework. Let us follow once again the two steps of our "co-building of a theory" protocol through a mock-up dialogue: "gf1" and "gf2" are looking at the same objects. They share an ontology in which the objects are described by shapes and colours, and a formal language including epistemic logics.
This over-simplified example can apply to more complex scenarios: if we try to build a theory about organic chemistry, instead of geometrically shaped coloured objects, we may find it difficult to find regularities "by hand". That is why the first AI tool we would like to mention here is "machine learning", considered as a help for a human learner. The LIRMM team's research work on this topic relies mostly on structural machine learning with Galois lattices and graphs [3]. This directly addresses the induction/abduction cycle.
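To give a flavour of the structure involved, here is a naive enumeration of the formal concepts of a small context, as in the Galois-lattice setting; it is only our sketch (real lattice-building algorithms are far more efficient), and the example context reuses our shapes and colours:

```python
from itertools import combinations

def concepts(context):
    """Enumerate the formal concepts (extent, intent) of a context.

    context: dict mapping object -> frozenset of attributes."""
    objs = list(context)
    all_attrs = frozenset().union(*context.values())
    found = set()
    for r in range(len(objs) + 1):
        for subset in combinations(objs, r):
            # intent: attributes shared by every object of the subset
            intent = (frozenset.intersection(*(context[o] for o in subset))
                      if subset else all_attrs)
            # extent: every object carrying the whole intent
            extent = frozenset(o for o in objs if intent <= context[o])
            found.add((extent, intent))
    return found

ctx = {"o1": frozenset({"square", "green"}),
       "o2": frozenset({"square", "blue"}),
       "o3": frozenset({"triangle", "green"})}
lattice = concepts(ctx)
```

Each concept is a maximal set of objects together with the maximal set of attributes they share; ordering these concepts by inclusion yields the Galois lattice on which the induction step operates.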
On the other hand, we have shown that pragmatic protocols are essential in the co-building of ontologies and theories, and when dealing with learning in virtual communities, it seems highly probable that "enhanced telepresence" tools are required to allow the sharing of real experiences. For instance, BuddySpace (http://buddyspace.sourceforge.net/) is an instant messenger developed by KMi (Knowledge Media Institute / Open University, GB), with three main characteristics:
- it allows optional maps for geographical and office-plan visualizations, in addition to standard 'buddy lists';
- it is built on open-source technology;
- it is implemented in Java, and is therefore cross-platform.
When virtual communities share experiences in less formal contexts than organic chemistry, besides telepresence we may aim to give them "assistance in elaborating abstractions" through an adequate PROTOCOL. To achieve interactive construction, we have to integrate semantic and syntactic aspects into such a protocol. To synthesize social-science and computer-science concepts, we mainly put to work the transdisciplinary concept of the cognitive object [9], and then allow the dynamic construction of context-sensitive ontologies: each collaborative learner is involved in situations of action (experiences) where his intuition puts forward specific cognitive objects; the group then exchanges visions to construct abstractions: classes with generic properties. Some properties are unknown in certain objects, some are virtual, some are natural (observed in the learner's world); the "object modelling concept" [10] acts as a media model.
Moreover, if we want conversational processes to be effective, they have to generate services that help humans to learn facts, rules and ... languages. And if we use artificial agents, then they must also be able to learn facts, rules and languages dynamically [11]. As a resulting side effect, we will have the opportunity to use artificial agents that "learn by being told" during conversations with other artificial agents, and thus show a dynamic behaviour that adapts to the context. The STROBE model [12] allows artificial agents to modify their interpreters dynamically. In our general diagram, this model directly addresses the SYNTAX level.
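Very loosely in the spirit of "learning by being told" (this toy is our own illustration, not the actual STROBE model), one can picture an agent whose interpreter is itself a mutable mapping from messages to behaviours, extensible by a special "define" message:

```python
class ConversationalAgent:
    """An agent whose interpreter can be modified by the messages it receives."""

    def __init__(self):
        self.interpreter = {"ping": lambda: "pong"}

    def receive(self, msg, *payload):
        if msg == "define":              # "being told" a new behaviour
            name, behaviour = payload
            self.interpreter[name] = behaviour
            return "learned " + name
        handler = self.interpreter.get(msg)
        return handler() if handler else "unknown message"

a = ConversationalAgent()
a.receive("define", "greet", lambda: "hello")  # the agent is told a new rule
```

After the conversation, the agent answers messages it could not interpret before: its syntax has been dynamically extended.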
To end this enumeration, without pretending to be exhaustive, we shall briefly mention a protocol built around the notion of "elementary service". As we have already stated, learning inside a virtual community is shifting towards "interacting with a bunch of services, as well as other people from a virtual community", and this kind of protocol may be of some interest, especially if it allows these "elementary services" to compose "integrated services" dynamically. The automatic composition of services respecting the metaphor of humans asking questions and giving answers in a collaborative conversation is the main purpose of the "e-talk protocol" [13], still to be formalized. The general idea is to facilitate the induction/abduction learning cycle by providing very high-level languages, that is to say, to raise the SYNTAX level as close as possible to the SEMANTIC level of each "agent".
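Since the e-talk protocol itself is still to be formalized, the following is purely our illustration of the idea: elementary services answer the questions they understand, and an integrated service is a composition that routes each question to whichever elementary service can answer it.

```python
def shape_service(question):
    """An elementary service answering shape questions (toy example)."""
    return "SQUARE" if question == "shape of o1?" else None

def colour_service(question):
    """An elementary service answering colour questions (toy example)."""
    return "GREEN" if question == "colour of o1?" else None

def compose(*services):
    """Integrated service: ask each elementary service in turn."""
    def integrated(question):
        for service in services:
            answer = service(question)
            if answer is not None:
                return answer
        return "I don't know"
    return integrated

ask = compose(shape_service, colour_service)
```

The composition mirrors the conversational metaphor: a question circulates in the group until some participant can answer it.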

CONCLUSIONS
The conceptual framework we have drawn for rational agents in interaction has allowed us to represent some of the major ingredients of collaborative learning: the co-building of ontologies, the co-building of theories, and social interactive learning. As we have pointed out, these constructive interactions inside groups rely on protocols allowing a dynamic consensus by means of successive cycles of acceptance, refutation and refinement of shared knowledge.
A few AI methods and tools enabling the interactive process have been briefly presented within this framework, so that theories and methods belonging to the social sciences may be empowered within explicit scenarios.

FIGURE 1: (case 1) "gf1" and "gf2" have identical classifiers

In case 2, "gf1" is a green frog able to see shapes, while "rf" is a red frog able to see colours; then mutual understanding is much more difficult, simply because classification of the couples (object, label given by the other agent) does not work here!

FIGURE 3: classifying and naming

• the basic "operations" in the building of ontologies are CLASSIFYING and NAMING; both imply the SEMANTIC level. CLASSIFYING happens in biological brains through the cross-activation of neural networks [2]; it happens, in the case of symbolic machine learning, through the algorithmic analysis of Galois lattices [3]. While human agents classify real experiences, software agents classify only syntactic descriptions. But in both cases, the input is a set of experiences / examples which constitute the local private memory of the agent who classifies; and the output classes keep on transforming as long as this input is fed with new experiences. Logicians call this operation INDUCTION, because it basically consists in generalizing from particular examples. NAMING makes a classification visible to other agents; otherwise it would remain confined inside each agent's semantic level. And of course, naming is subject to evolution, since the classes keep on evolving; that is why we consider this operation as the emission of a hypothesis which will have to be validated through further experience, and we therefore borrow the word ABDUCTION from logicians. Jean Sallantin and his team, "LIRMM / Apprentissage & Rationalité", have built a protocol [4] in which the basic cycle connecting the private sphere of semantics to the public sphere of syntax is the induction/abduction cycle.

FIGURE 4 (STEP 1): each agent makes his own abduction, according to his local horizon

FIGURE 5 (STEP 2): each agent takes into account the other's abduction and proceeds to a revision of his own theory

FIGURE 6: principal flows

[gf1] My own induction about the objects I can see makes me formulate the following hypothesis (abduction): "all little squares are green" & "all large squares are blue"