A Multifacet Formal Image Model for Information Retrieval

This paper presents an extended model for image representation and retrieval (EMIR 2 ). This model combines different interpretations of an image to build a complete description of it, each interpretation being represented by a particular view. The set of views considered in EMIR 2 includes the physical view and the logical view, the latter being an aggregation of four main views: the structural view, the spatial view, the perceptive view, and the symbolic view. The model concepts are described using a mathematical notation, yielding the EMIR 2 framework. We define a first correspondence function that estimates the similarity between two images, one being the query.


Introduction
Generally speaking, the term image refers in the literature to two different physical image formats: the raster format and the vector format. In the raster format an image is considered as a matrix of pixels, each pixel being represented by its colour. Images of this kind include photographs, paintings, etc., and are produced by digitisation. In the vector format an image is represented as a set of mathematical equations defining n-dimensional objects, with explicit knowledge of their structure and locations. This format is typically used to store line art and CAD information, and is produced by computer drawing tools. In this paper the term image designates only the first category, the raster format representation.
Many image content representations have been defined in the literature. These descriptions fall into three main categories, and most information retrieval systems use a combination of different descriptions to capture all aspects of image content.
The basic image representation considers the image as a physical object (a pixel matrix) without any semantic interpretation of its content. The indexing of this kind of representation is mainly based on the colour distribution in the image, textures, etc. [4]. The second category considers an image as a graphic: all the elements recognised in the image are represented using a spatial description. Two general approaches are used to define a spatial representation, the object oriented and the relation oriented. In the first approach the image content is considered as a set of geometric objects, each defined by a set of points in a Euclidean space. In the second approach the image is represented by a set of objects linked together by a particular set of spatial relationships (topological, metric, ...) [18,21,8,5]. The third category includes all kinds of semantic interpretation of images. A wide range of models have been proposed and used for image retrieval. This category includes the list of external attributes of the image, e.g. the date, the author name, the size, etc. It also includes the classical textual descriptions and their indexing using lists of terms defining the elements considered relevant in the image [12]. Some rich semantic descriptions, developed in AI for knowledge representation, are also used to capture complex image content, e.g. complex objects [15], first order logic [6], Schank's conceptual dependencies [2], etc.

The image model basics
The construction of an image description consists in recognising basic entities relevant to the image content and assigning a particular semantics to these entities. This process produces a set of completely defined objects according to a semantic model, which can be general or specific to a particular application or domain. We present below our proposition for an information retrieval model adapted to images.

The basic notions
In EMIR 2 the basic description of an image is a particular interpretation of it. An interpretation defines the semantics of the image objects considered as representative of the image in a particular context. To build the best description of the image we combine a set of interpretations, each corresponding to a particular view of the image; the image is thus said to be a multi-viewed object. The two principal views are the physical view and the logical view. The logical view groups all aspects of the image content and its general context, and is an aggregation of different basic views: the spatial, the structural, the perceptive, and the symbolic views.
The notion of complexity is inherent to image content, and is handled in EMIR 2 by representing the image content as a set of concrete objects identified as relevant in the image, together with their interrelationships. Here an image is partitioned into a set of sub-images, each corresponding to some relevant object, which is designated by the term image object. An image object corresponds to a real world object of the scene whose projection in a two-dimensional space is the described image. The multi-view aspect of the image description is extended to the sub-images and their representatives, the image objects. Fig. 1 below identifies an image and an image object by their basic descriptive views.
The principal views, mainly the components of the logical view of an image and image object, are described below.

The physical view
The physical view of an image is the corresponding pixel matrix. Four main image types are defined in our model: bitmap images, where pixels can be black or white; grey scale images, where a pixel takes one of 256 grey levels; palette colour images, where a pixel has a colour among a set of 256 possible colours; and true colour images, where a pixel can have a colour among a set of 2^24 different colours. We define two categories of operations to manipulate the physical view of images. The first category includes general image processing functions, like zooming, scaling, edge detection, etc. The second category includes binary operations that produce new images by combining existing ones by means of three operators: AND, OR, NOT.
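As an illustration, the binary operators on bitmap physical views can be sketched as set operations on black-pixel coordinates. This encoding is hypothetical; the paper does not prescribe a representation for the operators.

```python
# Sketch (not from the paper): a bitmap physical view as the set of its
# black-pixel coordinates, with 1-based Cartesian coordinates.

def img_and(a, b):
    """Pixels black in both images."""
    return a & b

def img_or(a, b):
    """Pixels black in either image."""
    return a | b

def img_not(a, width, height):
    """Complement within the image dimensions."""
    grid = {(x, y) for x in range(1, width + 1) for y in range(1, height + 1)}
    return grid - a

a = {(1, 1), (2, 1)}
b = {(2, 1), (2, 2)}
assert img_and(a, b) == {(2, 1)}
assert img_or(a, b) == {(1, 1), (2, 1), (2, 2)}
assert img_not(a, 2, 2) == {(1, 2), (2, 2)}
```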

The logical view
Each particular view describing the image content is partial, and integrating a set of views in the same model leads to a more complete representation of the image. These different views are combined in a global view, the logical view, which integrates all aspects of the image content and can be used as a faithful representation of it. The schema of Fig. 1 below illustrates how we combine these partial views to obtain a global one. We identified four main complementary view types of the image. Each view is generally based on a set of descriptors linked together by relationships specific to the view. In the following we present each view, identifying its basic components and the relationships between them.

The structural view
The structural view of an image defines the set of image objects that the indexer has considered most relevant to the image description. Each image object can be simple or complex, i.e. itself described by a structural view. The structural view is not a complete partition of the image: only relevant image objects are considered in the decomposition.
The structural elements of the image representation form a connected directed graph whose nodes are the structural objects of the model (image and image objects) and whose arcs correspond to the composition relation between these basic objects. Fig. 2 shows an example of the structural description of an image. In this example two image objects have been identified, a tree and a house, and considered in the structural view of the image. These image objects are themselves described using structural views and decomposed into simpler image objects: foliage and trunk for the tree, and façade and roof for the house.
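The composition relation of this example can be sketched as a small adjacency structure; the object names below mirror the Fig. 2 example, and the code is only an illustration of how the composition graph can be traversed.

```python
# Hypothetical encoding of the structural view of Fig. 2: the composition
# relation as a parent -> children adjacency mapping.
CONT = {
    "image": ["tree", "house"],
    "tree": ["foliage", "trunk"],
    "house": ["facade", "roof"],
}

def components(obj, cont=CONT):
    """All direct and transitive components of an image object."""
    out = []
    for child in cont.get(obj, []):
        out.append(child)
        out.extend(components(child, cont))
    return out

assert components("tree") == ["foliage", "trunk"]
assert "roof" in components("image")
```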

The spatial view
The spatial view of an image represents the shapes of the image objects (polygon, segment, ...) and the spatial relationships (far, north, overlap, ...) that indicate their relative positions inside the image. We define the spatial view of an image object as a combination of a set of modelling spaces generally used in the literature.
According to [17] only four modelling spaces, topological, metric, vector and Euclidean, are relevant to spatial reasoning. Topological spaces are the most relevant; they include only concepts of connectedness and continuity. Metric structures involve notions of distance. Vector spaces are well known; coordinates, directions and dimensions are typically vectorial. The most realistic structures, the Euclidean ones, admit notions of scalar product, orthogonality, angle and norm. We consider in EMIR 2 a complete mathematical model that represents all aspects of spatial knowledge about objects in an image.
The four modelling spaces are then considered in our model.
The Euclidean space is used to describe object shapes. Three basic categories of spatial objects are considered: the point, the segment and the polygon. A spatial object is defined by a list of points, and a point is identified by its Cartesian coordinates.
The metric space is reduced to two spatial relationships based on distances between objects: far and close.
In the vector space we consider the four direction relations north, south, east and west.
In the topological space we chose a relevant set of topological relations defined in [9]. This set of five relations has two principal advantages: completeness, i.e. any pair of spatial objects is related by at least one relation of the set, and exclusiveness, i.e. any pair of spatial objects is related by at most one relation of the set. These relations are cross, overlap, disjoint, in, and touch.
One should keep in mind that the spatial relations do not relate image objects to each other, but relate their corresponding spatial views, which correspond to their projections on the two-dimensional plane of the image. This implies that when a spatial view is inside another spatial view, the corresponding real world objects are not necessarily included one in another.
The relations considered in EMIR 2 are computed using the Euclidean space. We provide a set of procedures that determine the topological, metric, and vector space descriptions of an image given the Euclidean space description of the image objects that compose it. In order to compute the vector and metric spaces we substitute each spatial object with its equidistant barycentre.
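A minimal sketch of the barycentre substitution, assuming the "equidistant barycentre" is the mean of an object's defining points; the paper gives no formula, so this reading is an assumption.

```python
def barycentre(points):
    """Mean of the vertex coordinates -- one plausible reading of the
    'equidistant barycentre'; the paper does not give the formula."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return (sum(xs) / len(xs), sum(ys) / len(ys))

# A square polygon reduces to its centre point.
assert barycentre([(0, 0), (4, 0), (4, 4), (0, 4)]) == (2.0, 2.0)
```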
The spatial view of an image object based on the four modelling spaces combines the two classical approaches in image content modelling [7].The object oriented approach is represented by the Euclidean space, and the relation oriented approach is represented by the three other modelling spaces.
The spatial view of an image can be seen as a graph whose nodes represent the spatial objects and whose arcs correspond to the spatial relations linking them. The presence of a spatial view in the description of an image object is interpreted as the visibility of the image object; an image object with no spatial view is considered not visible in the physical view of the image. In EMIR 2 we can thus represent all elements relevant to the image content even if they are not visible (hidden elements).
Combining this set of modelling spaces induces a set of dependencies between the spatial views. For instance, when two objects intersect we can deduce that they are close to each other, and when two objects are far from each other we can deduce that they are disjoint.

The perceptive view
The perceptive view includes all the visual attributes of the image and/or image objects. It describes the appearance of the image components as perceived by an observer. In EMIR 2 we mainly consider three basic visual attributes: colour, brightness, and texture.
The colour attribute captures the colour distribution in the image. The representation considered in EMIR 2 is the colour histogram, in which we keep the set of dominant colours and the ratio of the object surface covered by each colour. A colour value can be represented in different colour spaces; for the moment EMIR 2 uses the RGB colour space.
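A colour histogram of dominant colours with surface ratios could be computed along these lines; the `threshold` parameter deciding which colours count as dominant is an assumption, not part of the model.

```python
from collections import Counter

def dominant_colours(pixels, threshold=0.1):
    """Colour histogram of an object: dominant RGB colours with the ratio
    of the surface they cover (threshold is an assumed parameter)."""
    counts = Counter(pixels)
    n = len(pixels)
    return {c: k / n for c, k in counts.items() if k / n >= threshold}

pix = [(255, 0, 0)] * 7 + [(0, 255, 0)] * 2 + [(0, 0, 255)]
hist = dominant_colours(pix, threshold=0.2)
assert hist == {(255, 0, 0): 0.7, (0, 255, 0): 0.2}
```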
The texture of an object is represented by the regular pattern that fills the surface of the object [22]. For the moment we consider a set of basic textures, and the texture attribute of an object is instantiated by a value from this set.
The brightness attribute is represented by a value corresponding to the average light in an object or surface.
An important advantage of the perceptive view is that the image description according to this view can be computed by automatic procedures. The same procedures can be used for image objects once their spatial views are defined.

The symbolic views
A symbolic view associates a semantic description with an image or an image object. A wide range of possible descriptions can be used as symbolic views; we limit ourselves to the ones generally used in IR. The simplest descriptions are terms, database attributes, and complex terms (compound nouns). More sophisticated representations, like first order logic, terminological logic, conceptual graphs and semantic nets, have been used as well. We think the use of such symbolic views, with rich semantics, is necessary to achieve the best effectiveness in information retrieval, while simple descriptions lead to efficient retrieval systems. Three general types of symbolic views are considered in EMIR 2 : classes, properties, and symbolic relations.
A class defines the semantic category of an image object.For example we can describe an image containing a tree and a house using two image objects each described by a symbolic view of type class.The first object is described as an element of the class Tree and the second is described as an element of the class House.The set of possible classes corresponds to all possible concrete objects and is organised in an ontology by the IS-A (specificity/genericity) relationship, and is part of the image model.
A property corresponds to an attribute defined by a pair of elements representing the property identification and the domain value of the property. For example, we can describe an image by a property called Author whose values are a subset of the string data type. We consider two subsets of properties: those associated with images, e.g. size, date, author, etc., and those associated with image objects, e.g. identifier, name, etc.
A symbolic relation corresponds to different elements of the image content involving the image objects: an action involving one or more objects in the image, the states of the objects, etc. For example, an image showing two persons fighting can be described using two image objects whose symbolic views indicate the persons' names, and a symbolic relation between these two objects corresponding to the fight event. A set of constraint rules has to be defined to control the construction of symbolic views; a rule defines for each relation the symbolic objects that can be linked by it. The RIME semantic model [2] is an example of such a symbolic view.

Inter-View dependencies
The different views of an image object are not only linked by being partial descriptions of the same object. A particular property follows from the definition of the model: the structural relation between two image objects implies that the spatial view of the component object is inside the spatial view of the composite object. Fig. 2 shows a complete example of a description of an image. Here, two image objects are considered relevant to the image content description, the tree and the house. The image object corresponding to the tree is represented using a term symbolic view (tree); a general spatial view is used to state that the tree is visible in the image although its shape is not relevant; and the third component of the object is its structural view, which states that it is composed of two simple entities, corresponding to its foliage and its trunk, represented in the same way. In this example some spatial relations link the spatial views of image objects: the foliage touches and is north of the trunk. No perceptive view is used to describe the image in this example.

Formalisation of the image model
We present in this section the formalisation of the image model described above, using a general mathematical formalism. This formal description covers all the elements presented above, plus some contextual elements necessary for the semantic interpretation of the image.

Definition
A physical view model is defined by the following tuple:

M_ph = (I_ph, POINT, EC, TYPE, h, w, tc, pixels, type)

I_ph is the set of physical view identifiers in EMIR 2.
POINT is the set of natural number pairs representing the Cartesian coordinates of possible points: POINT = ℕ⁺ × ℕ⁺.
EC is the colour set defined in a particular colour space. We consider in EMIR 2 the RGB colour space, defined by: EC = {0, 1, ..., 255} × {0, 1, ..., 255} × {0, 1, ..., 255}.
TYPE is the set of physical view types. For the moment it contains four elements: TYPE = {BW, GS, PC, TC}, with BW = Black & White, GS = Grey Scale, PC = Palette Colour, and TC = True Colour.
h : I_ph → ℕ⁺ is a function that associates with each physical view identifier a positive number corresponding to the image height.
w : I_ph → ℕ⁺ is a function that associates with each physical view identifier a positive number corresponding to the image width.
tc : I_ph → P(EC) is a function that associates with each physical view identifier the set of colours used in the corresponding image; P(s) stands for the set of subsets of s.
pixels : I_ph → P(POINT × EC) associates with each physical view identifier the set of pixels of the image, each pixel being defined as the association of a point and a colour.
type : I_ph → TYPE associates with each physical view identifier the type of the corresponding image.

Constraints on the physical view model
A given physical view model is said to be coherent if the following constraints hold:
- The colour table of a physical view is constrained by its type.
- The height and width of the image give the number of pixels.
- We cannot associate two different colours with the same point.
- The coordinates of the points of the pixels are limited to the image dimensions, and the colours belong to the image colour table: ∀ i ∈ I_ph, ∀ (p, c) ∈ pixels(i), with p = (x, y): 1 ≤ x ≤ w(i), 1 ≤ y ≤ h(i), and c ∈ tc(i).
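These constraints can be checked mechanically. The sketch below (an illustration, not the paper's procedure) verifies the last two for a pixel set: one colour per point, coordinates within the dimensions, colours drawn from the colour table.

```python
def coherent_pixels(pixels, w, h, tc):
    """Check physical-view coherence for a list of ((x, y), colour) pixels."""
    points = [p for p, _ in pixels]
    if len(points) != len(set(points)):
        return False          # two different colours for the same point
    return all(1 <= x <= w and 1 <= y <= h and c in tc
               for (x, y), c in pixels)

tc = {(0, 0, 0), (255, 255, 255)}
ok = [((1, 1), (0, 0, 0)), ((2, 1), (255, 255, 255))]
bad = [((1, 1), (0, 0, 0)), ((1, 1), (255, 255, 255))]
assert coherent_pixels(ok, 2, 1, tc)
assert not coherent_pixels(bad, 2, 1, tc)
```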

Definition
An image structural view model (M_st) is defined by a set of image object identifiers and the composition relation between the objects:

M_st = (I_io, CONT)

I_io is the set of possible image object identifiers in the structural view. CONT ⊆ I_io × I_io is the composition relation between image objects; this relation depends on the semantics associated with the image objects.

Constraints on the structural view model
A structural view model is coherent if it respects the following constraints:
- The relation CONT is antisymmetric and transitive.
- An image object is a component of only one image object: ∀ (io_1, io_2), (io_3, io_4) ∈ CONT, if io_2 = io_4 then io_1 = io_3.
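A sketch of a coherence check for CONT, reading each pair as (composite, component); this encoding and the check are illustrative, not taken from the paper.

```python
def coherent_cont(cont):
    """Check that CONT is antisymmetric and that every image object
    has at most one composite parent."""
    antisym = all((b, a) not in cont for (a, b) in cont if a != b)
    parents = {}
    for parent, child in cont:
        if parents.setdefault(child, parent) != parent:
            return False      # child appears under two different parents
    return antisym

cont = {("image", "tree"), ("image", "house"), ("tree", "trunk")}
assert coherent_cont(cont)
assert not coherent_cont(cont | {("house", "trunk")})  # two parents for trunk
```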

Definition
The perceptive view model is defined by:

M_pe = (I_pe, TX, BR, CL, tx, br, cl)

I_pe is the set of perceptive object identifiers.
TX is the set of possible textures in the model.
BR is the set of possible brightness values.
CL is the set of possible colour values, CL ⊆ EC.
tx : I_pe → TX is a function that associates with a perceptive object identifier a texture from TX.
br : I_pe → BR is a function that associates with a perceptive object identifier a brightness from BR.
cl : I_pe → CL is a function that associates with a perceptive object identifier a colour from CL.
Each basic set (TX, BR, CL) is augmented by a null value (⊥_tx ∈ TX, ⊥_br ∈ BR, ⊥_cl ∈ CL) to be used when the value is unknown or undefined.

Definition
The spatial view model is defined by:

M_sp = (I_sp, POINT, OS, RSPA, shape, R_sp)

I_sp is the set of spatial object identifiers.
POINT is the set of natural number pairs that represent the Cartesian coordinates of all possible points, POINT = ℕ⁺ × ℕ⁺.
OS is the set of basic spatial objects that can be used to represent the shape of an object in an image. Three basic types are used in EMIR 2 for the moment, the point, the segment and the polygon, defined as follows: SEGMENT ⊆ POINT × POINT, the points being the segment extremities; POLYGON ⊆ P(SEGMENT), each segment being a side of the polygon; OS = POINT ∪ SEGMENT ∪ POLYGON.
RSPA is the set of spatial relations defined in EMIR 2 : RSPA = {Far, Close, East, West, North, South, In, Disjoint, Touch, Overlap, Cross}.
shape : I_sp → OS is a function that associates with each spatial object identifier its shape.
R_sp ⊆ RSPA × I_sp × I_sp is the relation that represents all the spatial relations linking the spatial objects of the spatial view.

Constraints on the spatial view model
The extremities of a segment are disjoint. HOLDS(sr, so_1, so_2) is a boolean function that checks whether the spatial relation sr holds between two elements so_1, so_2 ∈ OS. The definition of HOLDS for each spatial relation is given in Annex A of this paper.

The symbolic view
As presented previously, the symbolic view is specific to a particular application and cannot be defined independently of the application's specificities. We define the symbolic view model as the association between an application semantic model and a set of abstractions representing the symbolic view.

The application semantic model
The application semantic model includes the object class ontology, the definition of the composition relation between object classes, the symbolic relation definitions, and the property definitions. (c_1, c_2) ∈ COMP means that objects of the class c_1 can be components of objects of the class c_2. This relation is mainly used to control the validity of the structural view (object decomposition) of an image.

The symbolic view model definition
The symbolic view model is defined relative to an application semantic model. It associates with the set of symbolic objects their semantic interpretation, and is defined by:

M_sy = (M_app, I_sy, cl, RI, PI)

I_sy is the set of symbolic object identifiers.
cl : I_sy → ID_cl is the function that associates with a symbolic object identifier its class.
RI ⊆ ID_rs × I_sy × I_sy is the relation that represents the symbolic relations between the symbolic objects.
PI ⊆ ID_pr × I_sy × VAL_PROP is the relation that represents all the properties associated with symbolic objects.

Constraints on the symbolic view model
The elements of RI are instances of the symbolic relation definitions in the application semantic model.

Example
In an image base of photographs, each image is described by a set of attributes: author name, place, etc. The main subjects of the images are landscapes and houses. We define, for this particular application, two properties of the image (author and place) and only one symbolic relation, MakeShadowTo. The semantic model of the application is then defined accordingly.

Definition
An image model M_im is defined as an aggregation of the basic coherent EMIR 2 view models and a set of relations that represent the inter-view dependencies. M_sy is a coherent symbolic view model. L_sp ⊆ I_io × I_sp is the relation that associates with an image object a spatial object from the spatial view. L_pe ⊆ I_io × I_pe is the relation that associates with an image object a perceptive object from the perceptive view. L_sy ⊆ I_io × I_sy is the relation that associates with an image object a symbolic object from the symbolic view.

Constraints on the image model
An instance of the image model is noted i, and each element e of i is noted i.e; for example i.i_ph is the identifier of the physical view of the image i. An image model M_im is coherent if it respects the following constraints:
- The relation L_sp (resp. L_sy, L_pe) associates at most one spatial (resp. symbolic, perceptive) object with an image object.
- The points of the spatial objects lie within the image dimensions: ∀ e ∈ OS, ∀ (x, y) ∈ pts(e), 1 ≤ x ≤ i.w(i.i_ph) and 1 ≤ y ≤ i.h(i.i_ph), where pts : OS → P(POINT) is a function that gives the points used in the definition of a spatial object.
- The composition relation between image objects in the structural view is an instantiation of the composition relation defined on the object classes in the symbolic view.

The image base
An EMIR 2 image base is defined as a collection of instances of a coherent EMIR 2 image model.

EMIR_base = (M_im, I_im)

M_im is a coherent EMIR 2 image model. I_im is a set of instances of the image model M_im.

The query language and the correspondence function
We present in this section the elements of the correspondence model intended for EMIR 2 : the general guidelines for the query language and the query definition, then the list of selection criteria to be considered in comparing an EMIR 2 query and an EMIR 2 image.

The query language
A query is an instance of the image model with some new possibilities. We can use generic identifiers instead of real identifiers to represent all objects: image objects, symbolic, perceptive and spatial objects, the image identifier and the physical view identifier. For convenience we can use the undefined identifier * for the image and physical view identifiers.
Fuzzy values can be used in a query for the perceptive attributes Colour and Brightness. These fuzzy values correspond to sets of basic values from the domains Colour and Br defined in the image base context.
The colour of an object (image or image object) in a query can be represented by an identifier that corresponds to a subset of the colour space defined in the image base context. This subset is denoted by the function dom. For example the colour Green does not correspond to a single colour, but to a set of colours that can be perceived as green by a human being: dom(Green) = {cc_1, cc_2, ..., cc_n}. We define a set VAL_CL that includes the terms representing the fuzzy colours that can be used in an EMIR 2 query, and with each term we associate a set of colours from Colour.
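A sketch of the `dom` function for fuzzy colour matching; the RGB subsets below are invented for illustration, as the real ones would come from the image base context.

```python
# Hypothetical dom() table for fuzzy colour terms (illustrative values only).
DOM = {
    "Green": {(0, 255, 0), (0, 200, 0), (50, 205, 50)},
    "Red": {(255, 0, 0), (220, 20, 60)},
}

def matches_fuzzy_colour(term, colour):
    """A concrete colour satisfies a fuzzy query colour if it belongs
    to dom(term)."""
    return colour in DOM.get(term, set())

assert matches_fuzzy_colour("Green", (0, 200, 0))
assert not matches_fuzzy_colour("Green", (255, 0, 0))
```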

The correspondence function
We list here the basic selection criteria to be respected by a query q and an image d represented in the EMIR 2 model such that the image can be considered relevant to the query.
The image d and the query q are defined as instances of the image model M_im, with the possible extensions to the query described in the section above.
The image d is considered as answering the query q iff we can find a surjective application, denoted A, from the set d.I_io to the set q.I_io, A ⊆ d.I_io × q.I_io, that respects the following constraints:
(c1) A is surjective: ∀ io_q1 ∈ q.I_io, ∃ io_d1 ∈ d.I_io such that (io_d1, io_q1) ∈ A.
(c2) The antecedent of an element of q.I_io is unique: ∀ (io_d1, io_q1) ∈ A, if ∃ (io_d2, io_q1) ∈ A then io_d1 = io_d2.
Constraint (c2) is introduced so that all the constraints on the views of an object of the query are verified by the views of the same object of the image d.
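Constraints (c1) and (c2) can be checked directly on a candidate application A, represented as a set of (image object, query object) pairs; the encoding is an illustration, not the paper's retrieval engine.

```python
def valid_application(A, d_objects, q_objects):
    """Check that A is a well-formed application from d_objects to q_objects
    satisfying (c1) surjectivity and (c2) unique antecedents."""
    wellformed = all(d in d_objects and q in q_objects for d, q in A)
    targets = {q for _, q in A}
    c1 = all(q in targets for q in q_objects)
    c2 = all(len({d for d, q in A if q == q0}) <= 1 for q0 in q_objects)
    return wellformed and c1 and c2

A = {("d_tree", "q_tree"), ("d_house", "q_house")}
assert valid_application(A, {"d_tree", "d_house"}, {"q_tree", "q_house"})
# A query object with no antecedent violates (c1):
assert not valid_application({("d_tree", "q_tree")}, {"d_tree"},
                             {"q_tree", "q_house"})
```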

Conclusion and future work
We presented in this paper our approach for an extended content based representation and retrieval of images.
EMIR 2 is a formal model that integrates all aspects considered as relevant to image content description for effective information retrieval.In this model we combine different types of image representations to get the most precise and the most exhaustive image content description.These different representations are identified as particular views and an abstraction to combine them is defined.A general mathematical formalism has been used to state the model elements, the query language and the selection criteria to be used for image-query similarity estimation.
An operational model, EMIR 2 -CG, based on Sowa's conceptual graph formalism, has been defined to implement the concepts of the model EMIR 2 and the similarity function used by the retrieval engine. We are currently experimenting with EMIR 2 -CG using a collection of images of the old Paris areas. The retrieval engine, based on a conceptual graph framework, has been developed on top of the object oriented database system O2. Building the test collection involved two main tasks: the indexing of the images, done by specialists using a sophisticated term based symbolic view, and the modelling of domain dependent knowledge, which includes the concept type lattices corresponding to the different views, mainly the class type symbolic view (a thesaurus of the domain), and a set of image properties [20].
EMIR 2 is open to integrating other media descriptions in the same framework. The symbolic view associated with images was inspired by textual data representation; accordingly, a text can easily be represented in EMIR 2 using a particular symbolic view, and a comparison between an image and a text could then be based on this symbolic description.
Future work on EMIR 2 is conducted in three directions. First, we will try to introduce uncertainty and/or relevance measures in the image representation, since the image interpretation process, depending on its nature (manual or automatic), produces descriptions which are far from perfect: they can be partial, ambiguous, uncertain, more or less relevant, etc. The second axis concerns the definition of a complete graphics model to be used as a spatial view, mainly to obtain an effective function for comparing object shapes. The third axis concerns the use of an operational model more suitable for IR, since conceptual graphs do not deal with logical inference. Terminological logic based models seem the most promising for the moment, and we will soon start working on this point.

Annex A. Definition of the spatial relations

Metric modeling space relations
The definition of the metric relations is based on the normalised distance function ndist: ndist(so_1, so_2) = mdist(so_1, so_2) / distmax, where distmax corresponds to the diagonal of the image, and mdist(so_1, so_2) is the minimal distance between the objects so_1 and so_2.
HOLDS(close, so_1, so_2) ⟺ ndist(so_1, so_2) ≤ d_min, with d_min ∈ [0 .. 1].
HOLDS(far, so_1, so_2) ⟺ ndist(so_1, so_2) ≥ d_max, with d_max ∈ [0 .. 1].
d_min and d_max are two parameters of the model that depend on the properties of the relations far and close.
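A sketch of the metric relations, with mdist approximated as the minimal distance between the objects' defining points; the values chosen for d_min and d_max are illustrative, since the paper leaves them as model parameters.

```python
import math

def ndist(so1, so2, diag):
    """Minimal inter-point distance, normalised by the image diagonal."""
    mdist = min(math.dist(p, q) for p in so1 for q in so2)
    return mdist / diag

def holds_close(so1, so2, diag, d_min=0.25):
    return ndist(so1, so2, diag) <= d_min

def holds_far(so1, so2, diag, d_max=0.75):
    return ndist(so1, so2, diag) >= d_max

diag = math.dist((1, 1), (100, 100))
assert holds_close([(10, 10)], [(12, 10)], diag)
assert holds_far([(1, 1)], [(100, 100)], diag)
```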

Vector modeling space relations
Let Bo_1 and Bo_2 be the barycentres of the spatial objects so_1 and so_2, and let θ be the angle between the line through Bo_2 parallel to the Y-axis and the line through the points Bo_1 and Bo_2.
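One possible reading of the direction relations derives north/south/east/west from this angle; the 45-degree sector boundaries and the upward y-axis are assumptions, since the paper does not state them.

```python
import math

def direction(b1, b2):
    """Direction of barycentre b1 relative to b2, from the angle between
    the vertical through b2 and the line (b1, b2). Assumes Cartesian
    coordinates with y growing upward (north); 45-degree sectors are
    an assumed convention."""
    dx, dy = b1[0] - b2[0], b1[1] - b2[1]
    angle = math.degrees(math.atan2(dx, dy)) % 360   # 0 degrees = north
    if angle < 45 or angle >= 315:
        return "north"
    if angle < 135:
        return "east"
    if angle < 225:
        return "south"
    return "west"

assert direction((5, 9), (5, 2)) == "north"
assert direction((9, 5), (2, 5)) == "east"
```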

Topological modeling space relations
The topological modeling space relations are taken from [9]. Their definition is based upon three functions: the boundary (∂so) of an object, its interior (so°), and the dimension Dim.

Basic functions
∂so represents the set of points of the boundary of a spatial object so:
- ∂(point) = Ø
- ∂(segment(p_1, p_2)) = {p_1, p_2}
- ∂(polygon) = {s_i | s_i ∈ segment(so)}
so° represents the interior points of the object so:
- for a point p, p° = p
- for a segment(p_1, p_2), so° = so − {p_1, p_2}
- for a polygon, so° = so − {s_i | s_i ∈ segment(so)}
The object so itself represents all its points, so we have: so = ∂so ∪ so°, and ∂so ∩ so° = Ø.
We define the function Dim as the dimension of a set of points ps:
- dim(ps) = Ø if ps is empty;
- dim(ps) = 0 if ps contains at least a point but no lines nor areas;
- dim(ps) = 1 if ps contains at least a line but no areas;
- dim(ps) = 2 if ps contains at least an area.
The in relationship holds if the first object is included in the second.
The intersect relationship applies to every situation, and represents the union of the relations touch, in, overlap and cross.

Figure 1: Logical view of an image

Figure 2: Example of an image description in EMIR 2
M_app = (ID_cl, IS-A, ID_pr, ID_rs, VAL_PROP, PROP, RSYMB, COMP, domain)

ID_cl is the set of class identifiers. This set is organised as a lattice by the IS-A relation, with a minimal and a maximal element (⊥ and ⊤).
ID_pr is the set of property identifiers.
ID_rs is the set of symbolic relation identifiers.
VAL_PROP is the set of possible values of the properties, VAL_PROP = Real ∪ Integer ∪ String ∪ Boolean.
domain : ID_pr → P(VAL_PROP) is the function that defines for each property the set of its possible values.
PROP is the set of property definitions, PROP ⊆ ID_pr × ID_cl × P(VAL_PROP).
RSYMB is the set of symbolic relation definitions, RSYMB ⊆ ID_rs × ID_cl × ID_cl.
COMP ⊆ ID_cl × ID_cl is the composition relation between classes.
∀ sy_o1, sy_o2 ∈ I_sy, (rs, sy_o1, sy_o2) ∈ RI iff ∃ rs ∈ ID_rs and c_1, c_2 ∈ ID_cl with (rs, c_1, c_2) ∈ RSYMB, such that cl(sy_o1) IS-A c_1 and cl(sy_o2) IS-A c_2.
The elements of PI are instances of the property definitions in the application semantic model.
(io_1, sy_o1) and (io_2, sy_o2) ∈ i.L_sp. The colours used in the perceptive view are included in the colour table of the physical view: ∀ pe_o ∈ i.I_pe, cl(pe_o) ∈ i.tc(i.i_ph) ∪ {⊥_cl}.