Optimal Sensor Placement for Failure Detection and Isolation

An optimized sensor design and placement strategy is extremely beneficial both for ensuring safety and for reducing the structural cost of a system. The objective of this study is to provide a new method to support sensor placement at the model level. The chosen approach is based on a structural analysis of the system. Placing sensors so as to detect and locate failures of the relations of the monitored system is a combinatorial problem. As a contribution, a solution based on combinatorial optimization is proposed for its resolution.


INTRODUCTION
The improvement of process safety relies essentially on fault detection and isolation (FDI) procedures [7,6]. FDI algorithms are based on the same principle: the comparison between the real behaviour of the process and a reference behaviour provided by a model under normal operation.
Given the existence of efficient fault detection procedures, the question that arises is how the sensors should be placed for optimum efficiency. The fundamental problem of fault detection is to infer the existence of a defect in the structure from measurements taken by sensors distributed over it. In practice, it is necessary to optimize the number and location of sensors for minimal cost and maximal reliability of the system. The quality of a process state estimate, and consequently its dependability, is highly conditioned by the number and distribution of measurements on it. The availability of a process can be increased, and it becomes "reconfigurable", if it is able to continue to operate even when some sensors fail. The design of the instrumentation architecture of a system is therefore a very important step.
The requirements for maintenance and diagnosis should be considered in the early stages of design. For this reason, analytical methods for monitoring a system and determining the instrumentation necessary to achieve the desired level of monitoring are greatly appreciated. This analysis can be carried out at the design phase, to determine which sensors are needed.
The monitorability (ability to detect and to isolate faults) of the system depends mainly on the implemented instrumentation architecture [1].
Two sensor placement methodologies have been developed, depending on the kind of knowledge used to describe the process: model-based (the model is given in analytical form) and non-model-based (the knowledge is given as rules, tables, pattern recognition, etc.).
Among the non-model-based approaches, we can cite neural network approaches (NN) [19], genetic algorithms (GA) [7], the simulated annealing algorithm (SA) [18] and the iterative insertion/deletion algorithm (I/D) [18]. The main drawback of these methods is that they need a pattern recognition step, the physical knowledge is omitted, and the sensor placement algorithms are mainly based on heuristics.
The model-based approach uses analytical redundancy relations (ARRs), to which the sensor placement algorithm is applied. The analytical model can be given in structural or state-space form [4]. For the cited methods the sensor location cannot be defined explicitly. Furthermore, in addition to the modelling step, the generation of ARRs is not trivial and requires a complicated elimination theory for the unknown variables.
Analytical redundancy seeks relations between the known variables of the system. These relations are satisfied in the normal mode and violated in the presence of a failure. This document addresses the specific issue of optimal sensor placement for monitoring; the solutions considered stem from structural analysis. The paper is organised as follows: Section 2 gives a survey of sensor placement methods, some concepts of structural analysis and the content of the supervision specifications. Section 3 describes the proposed sensor placement process for monitoring, based on a graphical method, before some concluding remarks.

Classification of sensor placement methods
For the sensor placement problem, we distinguish two types of methods: non-model-based and model-based (Fig. 1). Among the works that use a priori knowledge deduced by training, we can cite: neural network approaches (NN), genetic algorithms (GA), the simulated annealing algorithm (SA) and the iterative insertion/deletion algorithm (I/D). They were the subject of a comparison in [9], [18], [10].
For the second kind of methodology, a mathematical model based on physical laws is used. This model can be in analytical form, structural form or a bond-graph model. The drawback of model-based diagnosis is the need for a reliable model, which implies using the whole knowledge of the system and makes the design procedure more difficult; the accuracy of the model is the major limit of model-based approaches. Analytical models, especially those based on system equations, are not suitable for a systematic approach to sensor placement: they do not give the variables a physical meaning as bond-graph models do. Non-model-based methodologies require knowledge about the system that cannot be obtained without a training phase.
Sensor placement has different objectives. Among them, the observability check, as well as the decomposition of the system into redundant and unobservable parts using incidence matrices, has been the subject of many works. To quantify the redundancy of a variable, two concepts can be used: the degree of redundancy [14], which can be considered as a measure of the quality of monitoring, and the degree of calculability [8].

Optimisation criteria for sensor placement
Based on operational research strategies, some methods treat the sensor placement problem as the optimisation of an objective function under constraints. The term feature selection refers to algorithms that output a subset of the input feature set.
Sequential forward selection (SFS) [24] applies a selection criterion to optimise some objective function. In contrast, sequential backward selection (SBS) starts from the full feature set and, at each iteration, removes the feature X whose removal causes the smallest decrease in the value of the objective function f(X).
Branch and Bound [21] is the most used method in discrete optimisation. The method, developed by Narendra and Fukunaga in 1977, consists of reducing the search space using a depth-first strategy.
Generally, the monitoring conditions do not impose enough constraints to lead to a single solution of the sensor placement problem. To discriminate among the candidate solutions, criteria such as the cost [18] or the sensitivity [20] of the monitoring system, or both, can be used.
Each potential placement is associated with a binary variable; in the optimal solution, a "1" indicates that the placement is effective. The formulation of this problem yields a mixed-integer nonlinear program (MINLP) whose dimensions depend on the values of the integer decision variables.
The methodology developed in [4] is based on the structural analysis of systems using oriented bipartite graphs. In order to represent the existing relations between the different variables of the system, a sensor placement procedure has been elaborated for the detectability and localizability of failures.

Structural analysis
The advantage of the structural analysis approach is that it keeps only the information about which constraints act on which variables. This makes it possible to take into account system non-linearities and many kinds of representations: rules, tables, etc. The first step of the FDI procedure consists in generating a subset of equations called analytical redundancy relations (ARRs), which express the difference between the model behaviour and the actual behaviour given by directly or indirectly measured variables. These relations, whose numerical evaluation leads to residuals that vanish when the behaviour of the system conforms to the model, involve only known variables. Different approaches have been developed to generate residuals, based on graph theory [17], bond-graph theory [2], etc.
The set of ARRs is represented in a binary table whose columns are called failure signatures. A "1" entry in the ith row and jth column of the table indicates that the residual ri is sensitive to the jth fault.
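Such a signature table can be sketched as a small binary matrix. The residuals and faults below are invented for illustration; a fault is isolable when its signature column is non-zero and distinct from every other column:

```python
# Hypothetical fault signature table: rows = residuals r1..r3,
# columns = faults f1..f3. A 1 means residual i is sensitive to fault j.
SIGNATURES = [
    [1, 1, 0],  # r1
    [0, 1, 1],  # r2
    [1, 0, 1],  # r3
]

def fault_signature(table, j):
    """Column j of the table: the signature of fault j."""
    return tuple(row[j] for row in table)

def all_faults_isolable(table):
    """True when every fault has a non-zero signature distinct from all others."""
    n_faults = len(table[0])
    sigs = [fault_signature(table, j) for j in range(n_faults)]
    return all(any(s) for s in sigs) and len(set(sigs)) == n_faults

print(all_faults_isolable(SIGNATURES))  # True: the three signatures all differ
```

With this representation, detectability of a fault reduces to a non-zero column, and isolability to pairwise distinct columns.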

THE SUPERVISION SPECIFICATIONS
The supervision specifications should be written with the help of industrial experts of the process. Their main element is the definition of the subset of system components that we wish to monitor for safety, quality of production, maintenance, etc. The specifications must also define the subset of variables or physical quantities which must always be known for control and command purposes; this information helps to establish the basic set of known relations. Finally, the specifications must indicate the subset of unknown variables that are not physically measurable, so that sensor placement algorithms are not required to add sensors on variables that cannot be measured [3].

PROPOSED SOLUTION
Combinatorial optimization is a discipline combining various techniques of discrete mathematics and computer science to solve optimization problems.
To satisfy specifications containing the desired degrees of redundancy of the system variables, we have to place the sensors in an optimal way to monitor the system. Since several combinations of sensor placements are possible, the question is: which structure or combination of sensors best satisfies the specifications at a minimum system cost? To overcome this combinatorial problem, we propose to represent the system by a tripartite graph. In this representation, placing a new sensor creates a fixed relation between the relations and the known variables of the system (see Fig. 2).
This concept is close to the observability degree, so we choose a place in such a way that one or more new independent cycles are created containing one of the variables and not containing the others. Finally, the so-called fault tolerant control (FTC) problem, which is also based on structural analysis, has to encompass the faulty operating modes. Graphically, some paths in the tripartite graph are no longer available, so we have to find another path to obtain (redundant) information about the variables to be controlled; the problem is solved by reducing this combinatorial search.

Structural analysis with graphs
A general framework for the analysis of diagnostic feasibility is the structural analysis approach [21]. The main principle of this method is to identify the measurement subsystems in the plant that contain redundant information. The advantage of this approach is that the structure of the system is independent of the detailed knowledge of its parameters.
The types of variables in a diagnostic context are: the known variables, corresponding to measurements and controller inputs; the unknown variables, typically internal states and unknown inputs that should not influence the residual; and the faults to be detected. Formally, the structural model of the system is defined as a set of relations linking these variables. Each relation can be dynamic, static, linear or non-linear, which constitutes the strength of the structural approach.
Definition 01: A tripartite graph G = (K, R, X, Ac, Ax) consists of three node parts, where each pair of parts forms a bipartite graph, as shown in Fig. 1: K the known variables, R the relations and X the unknown variables. The set of edges is partitioned into Ac, linking K to R, and Ax, linking R to X. Roughly speaking, we have two bipartite graphs: (K, R, Ac) and (R, X, Ax).
Definition 02: A residual cycle is a loop-free closed path (cycle) starting from K and ending in K in the tripartite graph; all the variables involved in a residual cycle become known by deduction.
Among all cycles in the tripartite graph, only this kind of cycle will be investigated.
Fig. 2 Tripartite graph associated to a system.
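Definition 01 can be sketched in code. The node and edge names below are invented for illustration; the structure simply stores the two bipartite edge sets Ac and Ax:

```python
# A toy tripartite graph: K known variables, R relations, X unknown
# variables. Edges Ac link K to R, edges Ax link R to X.
K = {"u", "y"}
R = {"r1", "r2"}
X = {"x1"}
Ac = {("u", "r1"), ("y", "r2")}
Ax = {("r1", "x1"), ("r2", "x1")}

def neighbours(node):
    """All nodes sharing an edge with `node`, in either bipartite part."""
    out = set()
    for a, b in Ac | Ax:
        if a == node:
            out.add(b)
        if b == node:
            out.add(a)
    return out

print(sorted(neighbours("x1")))  # ['r1', 'r2']
```

Here x1 is reachable through two relations, so u - r1 - x1 - r2 - y is a residual cycle in the sense of Definition 02.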

Representation of the sensor placement problem
Initially, when configuring the system for monitoring, no sensor is placed (some sensors may exist under physical constraints). Sensor placement can be represented by a binary vector V, each component of which corresponds to a potential direct measurement of a variable.
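As a sketch, assuming a hypothetical list of process variables, the placement vector V can be built as follows:

```python
# Hypothetical unknown variables of the monitored process.
VARIABLES = ["x1", "x2", "x3", "x4", "x5"]

def placement_vector(measured):
    """Binary vector V: V[i] = 1 iff a sensor directly measures VARIABLES[i]."""
    return [1 if v in measured else 0 for v in VARIABLES]

V = placement_vector({"x2", "x5"})
print(V)  # [0, 1, 0, 0, 1]
```

The initial configuration is the all-zero vector; each sensor added by the placement algorithm flips one component to 1.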

Observability, Redundancy and Degree of Redundancy
The classification based on observability highlights two categories of variables: observable variables, whose values can be known (by direct measurement or by inference), and unobservable variables [20].

Minimal observability of a variable
A variable is redundant of degree 0 (minimal observability) if it is observable but there is at least one configuration in which the failure of a single sensor of the process makes this variable inaccessible.

Any degree of redundancy:
The previous concept can be extended: a variable is redundant of degree k if it is observable and its value remains deducible under the simultaneous failure of any k sensors of the process.
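Under the simplifying assumption that each way of deducing a variable can be listed as the set of sensors it relies on, the degree of redundancy can be checked by brute force. This is an illustrative sketch, not the placement algorithm of this paper:

```python
from itertools import combinations

def redundancy_degree(ways, sensors):
    """ways: list of sensor subsets, each sufficient on its own to deduce
    the variable. Returns the largest k such that the variable stays
    deducible after ANY simultaneous failure of k sensors, or -1 if the
    variable is not observable at all."""
    if not ways:
        return -1  # unobservable: no way to deduce the variable
    k = 0
    while k + 1 <= len(sensors):
        survives = all(
            any(w.isdisjoint(failed) for w in ways)
            for failed in combinations(sensors, k + 1)
        )
        if survives:
            k += 1
        else:
            break
    return k

# x is deducible from sensor a alone or from sensor b alone:
print(redundancy_degree([{"a"}, {"b"}], ["a", "b", "c"]))  # 1
```

A degree-0 variable (minimal observability) is one for which some single sensor failure removes every deduction path.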

Positioning of sensors under Constraints of Redundancy:
The previous definitions allow any variable to be characterized by its degree of redundancy, which reflects its availability in the face of sensor failures.

DESIGN OF INSTRUMENTATION SYSTEM
We now present a method for designing an instrumentation architecture that respects constraints on the degrees of redundancy of variables. To place ourselves in a real industrial context, we first specify the list of variables essential to operation (list L1), which must be at least minimally observable, and then the lists of variables for which a given degree of redundancy must be guaranteed (list LDk of the variables that must be redundant of degree k).
The objective of the design is to determine the variables to measure under constraints on their redundancy degrees.
Building on a previous result which proved that enumerating residual cycles in the tripartite graph is more relevant than enumerating matchings in the bipartite graph [22], we develop our sensor placement concept. Our proposed design methodology for determining the optimal set of sensors and their placement is as follows.
Step 01: Definition of the basic structure of the system.
This first phase consists in defining the basic structure of the system. Thanks to the structural analysis of the behavioural models of the various system components, it is possible to determine the set of unknown variables X and the set of relations that represent the normal behaviour of the components. In a second step, we complete the set of relations with the knowledge relations imposed by the specifications for control and command purposes. We thus define the set of relations R and the set of known variables K.
Step 02: Definition of the supervision specifications, containing the set L1 of minimally observable variables and the list LDk of the variables that must be redundant of degree k.
Step 03: Representation of the system with tripartite graph.
Step 04: Construction of residual cycles.
For each element of K we generate all the cycles starting from this element, by building an n-ary tree rooted at each known variable; for this purpose we created an algorithm that generates all the residual cycles of a tripartite graph.
Step 4.3: Fill an initially empty vector with the redundancy degrees of the variables deduced from the generated cycles (the specifications vector).
Step 05: Consult the specifications. Among the potential placements that close circuits (paths in the tripartite graph that can no longer be completed into a cycle), choose the one that concerns the largest number of variables appearing in the specifications. Add a sensor, regenerate the redundancy degrees, and return to Step 4.3 if the specifications are not completely verified.
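Steps 03 and 04 can be sketched as follows. The graph below is an invented toy system (node names k*, r*, x* mark the part each node belongs to), and the depth-first expansion mirrors the n-ary tree described above:

```python
# Toy tripartite graph stored as one adjacency map. k1, k2 are known
# variables, r1..r3 relations, x1, x2 unknown variables.
GRAPH = {
    "k1": ["r1"], "k2": ["r3"],
    "r1": ["k1", "x1"], "r2": ["x1", "x2"], "r3": ["k2", "x2"],
    "x1": ["r1", "r2"], "x2": ["r2", "r3"],
}
KNOWN = {"k1", "k2"}

def residual_cycles(start):
    """Depth-first expansion of the n-ary tree rooted at `start`: every
    loop-free path that ends back in a known variable is a residual cycle."""
    cycles, stack = [], [[start]]
    while stack:
        path = stack.pop()
        prev = path[-2] if len(path) > 1 else None
        for nxt in GRAPH[path[-1]]:
            if nxt == prev:
                continue  # never reuse the edge we just traversed
            if nxt in KNOWN:
                cycles.append(path + [nxt])  # a leaf: the cycle closes in K
            elif nxt not in path:
                stack.append(path + [nxt])  # grow the tree, loop-free
    return cycles

for c in residual_cycles("k1"):
    print(" -> ".join(c))  # k1 -> r1 -> x1 -> r2 -> x2 -> r3 -> k2
```

In this toy system the single residual cycle makes both x1 and x2 deducible from the known variables k1 and k2.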

EXAMPLE
Here are the results (residual cycles generated from each known variable) for the following system:

Neighborhood table
For each known variable we generate an n-ary tree which computes all possible residual cycles; all the variables involved in a residual cycle become known. After generating all the residual cycles from all the inputs, we can read, in the following table, the number of possible ways to reach each unknown variable and the number of cycles in which each variable occurs. This approach enabled us to extract all possible paths leading to each piece of information (variable) from the inputs, together with the number of cycles in which it appears. Roughly speaking, this strategy extracts all the information about the system as originally designed and gives a clear view of the system in order to control the process of adding sensors.
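A minimal sketch of the occurrence count behind this table, assuming the residual cycles have already been generated (the cycle data below is invented):

```python
from collections import Counter

# Hypothetical output of the cycle-generation step: each residual cycle
# is listed as the sequence of nodes it visits.
CYCLES = [
    ["k1", "r1", "x1", "r2", "x2", "r3", "k2"],
    ["k1", "r1", "x1", "r4", "k3"],
]

# For each unknown variable (names starting with 'x'), count the cycles
# in which it occurs -- the raw material of the neighbourhood table.
occurrences = Counter(
    v for cycle in CYCLES for v in sorted(set(cycle)) if v.startswith("x")
)
print(dict(occurrences))  # {'x1': 2, 'x2': 1}
```

A variable occurring in many cycles has many alternative deduction paths, hence a higher potential degree of redundancy.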

Adding sensors
Placing a new sensor creates a new relation node in R and a new edge between the sets K and R.
The addition of sensors increases the number of knowledge relations and hence the over-determination of the system. It is thus possible to generate more residual cycles and to improve the monitoring performance of the system in terms of detectability and diagnosis.

Place of new sensor
To obtain redundancy of the variables that we must control, we have to complete the paths in the tripartite graph that can no longer be closed into a cycle. To place a sensor, we examine all the paths in the trees which do not lead to a cycle: the paths that do not lead to cycles in the tree are exactly the missing cycles in the tripartite graph. From the tree structure, we choose a place in such a way that a new independent cycle is created, by returning to a variable and adding a sensor there (this sensor creates a new relation node). Since we are limited to deploying a small number of sensors, we must carefully choose where to place them: we place a sensor on the candidate variable whose deduction paths involve the largest number of variables appearing in the specifications. In this way a single placement yields the most redundancy, which minimizes the installation of new sensors.
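The greedy choice described above can be sketched as follows, under the assumption that the specification variables helped by each candidate placement have already been identified from the trees (all names are invented):

```python
# Each hypothetical candidate placement maps to the specification
# variables whose redundancy it would raise by closing new cycles.
CANDIDATES = {
    "sensor_on_x1": {"x1", "x2"},
    "sensor_on_x3": {"x3"},
    "sensor_on_x4": {"x2", "x3"},
}

def place_sensors(required):
    """Repeatedly add the candidate covering the most still-unsatisfied
    specification variables, until the specification is met."""
    placed, unmet = [], set(required)
    while unmet:
        best = max(CANDIDATES, key=lambda c: len(CANDIDATES[c] & unmet))
        if not CANDIDATES[best] & unmet:
            break  # no candidate helps: the specification is unreachable
        placed.append(best)
        unmet -= CANDIDATES[best]
    return placed

print(place_sensors({"x1", "x2", "x3"}))  # ['sensor_on_x1', 'sensor_on_x3']
```

The first pick covers two specification variables at once, illustrating why a placement serving several variables is preferred over one serving a single variable.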

CONCLUSION
We presented in this article the problem of optimal sensor placement for FDI. Based on the structural analysis of systems and on the definition of structural observability and monitorability criteria, a sensor placement method has been proposed that uses graphical means to handle all possible combinations and solve the problem efficiently, in polynomial time. This method relies on the generation of residual cycles through a representation of the system as a tripartite graph; the cycle generation algorithm is based on the development of an n-ary tree and the extraction of all paths leading from the root node to the leaves. Thanks to the information generated (degrees of redundancy), we can see where sensors should be added to better optimize the system and define an instrumentation configuration.
As seen in the pedagogical example, this approach can be very beneficial to industry with more complex models.
The possibility of introducing cost economy and other kinds of industrial criteria is left to future work.

Fig. 3. Tripartite graph of the example system

Fig. 8. How to get redundancy from the tree.
Representation of the system by a neighborhood table and an output table.

Table 1. Table of initial results