Brain-like Approximate Reasoning

Humans can easily recognize objects as complex as faces even if they have not seen them in such conditions before. We would like to find the computational basis of this ability. As an example of our approach we use neurophysiological data from the visual system. In the retina and thalamus simple light spots are classified, in V1 oriented lines, and in V4 simple shapes. The feedforward (FF) pathways form hypotheses by extracting the above attributes from the object. The feedback (FB) pathways play a different role: they form predictions. In each area, structure-related predictions are tested against hypotheses. We formulate a theory in which different visual stimuli are described through their condition attributes. Responses of LGN, V1, and V4 neurons to different stimuli are divided into several ranges and are treated as decision attributes. Applying rough set theory (Pawlak, 1991 [1]), we have divided our stimuli into equivalence classes in different brain areas. We propose that relationships between decision rules in each area are determined by the different logic of the FF and FB pathways: FF pathways gather a huge number of possible object attributes together using logical "AND" (drivers), and FB pathways choose the right one mainly by logical "OR" (modulators).


INTRODUCTION
An important feature of the primate brain is its insensitivity to the exact properties of an object's parts, as well as recognition based on only partial and variable information about those parts. We outperform any AI system in such difficult tasks as the recognition of complex objects (like faces), even if we have never seen them in a particular context before. Object classification by its parts is similar to computing with words [2], where words, in contrast to numbers, are not well-defined (crisp) objects but rather fuzzy granules. Zadeh [2] suggested using words in computer science, which makes the computational process similar to perception. In this paper we will analyze neurological data using a granulation theory similar to that in [2] but not limited to fuzzy systems, because our knowledge about parts is often so limited that we cannot even characterize it by its "fuzziness".
Our eyes constantly perceive changes in light, colors and intensities. From these sensations, our brains extract features related to different objects. So-called "basic features" were identified in psychophysical experiments as elementary features that can be extracted in parallel. Evidence of parallel feature extraction comes from the fact that the extraction time is independent of the number of objects. Other features need serial searches, so that the time needed to extract them is proportional to the number of objects. High-level serial processing is associated with integration and consolidation of items combined with conscious awareness. Other low-level parallel processes are rapid, global, related to high-efficiency categorization of items, and largely unconscious [3]. Treisman [3] showed that instances of a disjunctive set of at least four basic features could be detected through parallel processing. Other researchers have provided evidence for parallel detection of more complex features, such as shape from shading, or experience-based learning of features of intermediate complexity. However, recent experiments by Thorpe's team [4] found that human and non-human primates are capable of rapid and accurate categorization of briefly flashed natural images. Human and monkey observers are very good at deciding whether or not a novel image contains an animal, even when more than one image is presented simultaneously [4]. The underlying visual processing reflecting the decision that a target was present takes under 150 ms.
These findings appear to contradict the classical view that only simple "basic features", likely related to early visual areas like V1 and V2, are processed in parallel [3]. Certainly, natural scenes contain more complex stimuli than "simple" geometric shapes. It seems that the conventional two-stage perception-processing model needs correction, because to the "basic features" we must add a set of unknown intermediate features. We propose that at least some intermediate features are related to receptive field properties in area V4. Area V4 has been associated with shape processing because its neurons respond to shapes and because lesions in this area disrupt shape discrimination, complex-grouping discriminations, multiple-viewpoint shape discriminations and rotated-shape discriminations.
In this work, we have explored the responses of area V4 cells stimulated by pairs of bars placed in variable parts of the receptive field (RF). Rough set analysis led us to propose decision rules related to the neurophysiological basis of the interactions between parts.

METHOD
After Pawlak [1], we define an information system as S = (U, A), where U is a set of objects and A is a set of attributes. If a ∈ A and u ∈ U, the value a(u) is a unique element of V (a value set).
The indiscernibility relation of any subset B of A, or IND(B), is defined [1] as the equivalence relation whose elements are the sets {u: b(u) = v} as v varies in V, and [u]_B, the equivalence class of u, forms a B-elementary granule. For a concept X ⊆ U, the set B_*X = {u ∈ U: [u]_B ⊆ X} is the lower approximation of X, and the set B^*X = {u ∈ U: [u]_B ∩ X ≠ ∅} is the upper approximation of X. The set BN_B(X) = B^*X - B_*X will be referred to as the B-boundary region of X. If the boundary region of X is the empty set then X is exact. In this paper the universe U is a set of simple visual patterns that were used in neurophysiological experiments [3], which can be divided into equivalence (indiscernibility) classes, or B-elementary granules, where B ⊆ A. The purpose of our research is to find how these objects are classified in the brain. Therefore we will modify the definition of the information system to S = (U, C, D), where C and D are condition and decision attributes. Decision attributes will classify elementary granules in agreement with neurological responses from the specific visual brain area. In the directly experimentally related part of this paper we are looking at single-cell responses in only one area, V4, which will divide all patterns into equivalence (indiscernibility) classes of V4-elementary granules. Neurons in V4 are sensitive only to certain attributes of the stimulus, for example spatial localization (the pattern must be in the receptive field), and most of them are insensitive to contrast changes. Different V4 cells have different receptive field properties, which means that one B-elementary granule can be classified in many ways by different V4-elementary granules.
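The operators above can be sketched in a few lines of Python. This is only an illustration of Pawlak's definitions, not the paper's analysis pipeline; the miniature table, its objects u1-u4, and the attribute values are hypothetical.

```python
# Minimal sketch of Pawlak's rough-set operators, assuming an information
# system stored as a dict: object -> {attribute: value}. All names are
# hypothetical illustrative choices, not data from the experiments.

def ind_classes(universe, attrs, value):
    """Partition the universe into B-elementary granules, i.e. IND(B) classes."""
    classes = {}
    for u in universe:
        key = tuple(value(u, a) for a in attrs)
        classes.setdefault(key, set()).add(u)
    return list(classes.values())

def lower_approx(universe, attrs, value, X):
    """B_*X: union of the granules fully contained in X."""
    return set().union(*([g for g in ind_classes(universe, attrs, value) if g <= X] or [set()]))

def upper_approx(universe, attrs, value, X):
    """B^*X: union of the granules that intersect X."""
    return set().union(*([g for g in ind_classes(universe, attrs, value) if g & X] or [set()]))

# Hypothetical stimuli described by orientation o (deg) and size xs (deg).
table = {
    "u1": {"o": 90, "xs": 0.5},
    "u2": {"o": 90, "xs": 0.5},   # indiscernible from u1 under B
    "u3": {"o": 0,  "xs": 0.5},
    "u4": {"o": 0,  "xs": 2.0},
}
val = lambda u, a: table[u][a]
U = set(table)
B = ["o", "xs"]
X = {"u1", "u3"}                  # a concept, e.g. stimuli evoking response r1

lo, up = lower_approx(U, B, val, X), upper_approx(U, B, val, X)
boundary = up - lo                # BN_B(X); non-empty, so X is rough here
```

Here the granule {u1, u2} straddles X, so X is roughly B-definable, which is exactly the situation analyzed for the V4 responses later in the paper.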
We will represent the experimental data ([5]) in the following table. In the first column are neural measurements. Neurons are identified using numbers related to a collection of figures from [5]. Different measurements of the same cell are denoted by additional letters (a, b, …). For example, 11a denotes the first measurement of the neuron numbered 1 in Fig. 1 of [5], 11b the second measurement, etc. Stimuli typically used in neuroscience have the following properties:
1. orientation in degrees appears in the column labelled o, and orientation bandwidth is labelled ob;
2. spatial frequency is denoted sf, and spatial frequency bandwidth is sfb;
3. x-axis position is denoted xp and the range of x-positions is xpr;
4. y-axis position is denoted yp and the range of y-positions is ypr;
5. x-axis stimulus size is denoted xs;
6. y-axis stimulus size is denoted ys.
Thus the full set of stimulus attributes is expressed as B = {o, ob, sf, sfb, xp, xpr, yp, ypr, xs, ys, s}.

Decision Rules for a single neuron
Each neuron in the central nervous system sums up its synaptic inputs as postsynaptic excitatory (EPSP) and inhibitory (IPSP) potentials that may cause its membrane potential to exceed the threshold and generate an action potential. In other words, a single neuron approximates collective input information (thousands of interacting synapses with different weights) into a unique decision at its single output. In principle a single spike (action potential) can be seen as a decision of the neuron, but in this work we will not take into account the internal dynamics of the system, and therefore we will estimate neuronal activity as the mean spike frequency (as described above). In sensory (here only visual) systems, this complex synaptic potential summation process is related to the so-called receptive field properties of each neuron. Below we will show how neurons in different parts of the brain change the visual information in their receptive fields into decisions.
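The summation-to-decision step described above can be sketched as a weighted sum against a threshold. The weights, threshold, and input values below are hypothetical illustrative numbers, not fitted parameters.

```python
# Sketch of collapsing many weighted synaptic inputs into one output
# decision: EPSPs carry positive weights, IPSPs negative ones.
# All numeric values are hypothetical.

def membrane_potential(inputs, weights):
    """Net summed potential over all synapses."""
    return sum(w * x for w, x in zip(weights, inputs))

def fires(inputs, weights, threshold=1.0):
    """The neuron's 'decision': spike iff the summed potential exceeds threshold."""
    return membrane_potential(inputs, weights) > threshold

inputs  = [1.0, 1.0, 1.0, 1.0]      # three active excitatory inputs, one inhibitory
weights = [0.5, 0.5, 0.5, -0.4]     # three EPSP weights, one IPSP weight
decision = fires(inputs, weights)   # 1.5 - 0.4 = 1.1 > 1.0, so the neuron spikes
```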

Logic of the anatomical connections
As mentioned above, our model consists of three interconnected visual areas. Their connections can be divided into feedforward (FF) and feedback (FB) pathways. We have proposed [6] that FF connections are related to hypotheses about stimulus attributes, and FB connections are related to predictions about the stimulus and its configuration. Below, we suggest that the different anatomical properties of the FB and FF pathways may determine their different logical rules.
We define LGN_i as the attributes of LGN cell i, for i = 1, …, n; V1_j as the attributes of primary visual cortex cell j, for j = 1, …, m; and V4_k as the attributes of area V4 cell k, for k = 1, …, l.
The specific stimulus attributes for a single cell can be found in a neurophysiological experiment by recording cell responses to a set of various test stimuli. As we have mentioned above, cell responses are divided into several (here 3) ranges, which means that the responses of each cell may perform several classifications (granules). This differs from the classical receptive field definition, which assumes that the cell responds (logical value 1) or does not respond (logical value 0) to a stimulus with certain attributes. In other words, in the classical electrophysiological approach all receptive field granules are crisp. In our approach, cell responses below the threshold, r0, have logical value 0, and the maximum cell responses, r2, have logical value 1, but we also introduce cell responses between r0 and r2 (in this paper only one value, r1). The physiological interpretation of cell responses between the threshold and the maximum response may be related to the influence of the feedback or horizontal pathways. We assume that the tuning of each structure is different, and we will look for decision rules at each level that give responses r1 and r2. For example, we may interpret r1 as meaning that the local structure is tuned to the attributes of the stimulus; such a granule for cell j of area V1 will be defined as [u]_1V1j.
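The three-range coding of responses can be sketched as a simple classifier from mean firing rate to the labels r0, r1, r2. The two rate thresholds are hypothetical values chosen for illustration.

```python
# Sketch of the three-range response categorization described above:
# below threshold -> r0 (logical 0), intermediate -> r1, near maximum -> r2.
# The thresholds (in spikes/s) are hypothetical.

def response_class(rate, r0_thr=5.0, r2_thr=40.0):
    """Map a mean firing rate onto the granule labels r0, r1, r2."""
    if rate < r0_thr:
        return "r0"   # below threshold: logical value 0
    if rate >= r2_thr:
        return "r2"   # maximum response: logical value 1
    return "r1"       # intermediate: local structure tuned to the stimulus

labels = [response_class(r) for r in (2.0, 20.0, 55.0)]
```

Note that the classical crisp RF corresponds to collapsing r1 and r2 into a single "responds" value; keeping r1 separate is what lets feedback later promote a response from r1 to r2.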

Decision Rules for Thalamus
Each LGN cell is sensitive to luminance changes in a small part of the visual field called the receptive field (RF). The cells in the LGN have concentric center-surround RF shapes, similar to those of the retinal ganglion cells. We will consider only on- and off-type RFs. The on- (off-) type cells increase (decrease) their activity with an increase of the light luminance in their receptive field center and/or a decrease of the light luminance in the RF surround (Fig. 1).
Below are examples of the decision rules for on- and off-type LGN cells with RF position xp0, yp0. We assume that there is no positive feedback from higher areas, therefore the maximum response is r1:

s4 ∧ xp0 ∧ yp0 → r1,  (1)
s5 ∧ xp0 ∧ yp0 → r1,  (2)

which we interpret as meaning that changes in the luminance of the light spot s4 that covers the RF center (the first rule) or of the annulus s5 that covers the RF surround (the second rule) give neuronal response r1. We assume that other stimulus parameters, like contrast, speed and frequency of luminance changes, etc., are constant and optimal, and that the cell is linear; therefore we measure the response as the cell activity synchronized with the stimulus changes (the first harmonic). Depending on the cell type, the phase shift between the stimulus and the response is near 0 or 180 deg, if we do not take into account the phase shift related to the response delay. Instead of using light spots or annuli, one can use a single circular patch, modulated with a drifting grating, covering the classical RF. By changing the spatial frequency of the drifting grating one can stimulate only the RF center (high spatial frequencies) or center and surround (lower spatial frequencies), which gives the following decision rule:

sf ∧ xp0 ∧ yp0 → r1,  (3)

where, for example, sf = 0.4 c/deg stimulates the RF center and surround, and sf > 1 c/deg stimulates the RF center only.
Notice that, in agreement with the above rules, Eqs. (1)-(3), LGN cells do not differentiate between a light spot, a light annulus, or a patch modulated with a grating. All these different objects represent the same LGN-elementary granule. The LGN RF can be modelled by the difference-of-Gaussians (DOG) model, where one 2D Gaussian function describes the fuzzy (in the x-y plane) RF center properties, whereas another Gaussian describes the fuzzy RF surround properties (Fig. 1). Many papers have already mentioned that the DOG model has a sharp zero-crossing property. In our decision rules we are using logical "AND", which in this and the next three parts determines a subspace of the generally unknown space of an object's attributes.
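The DOG receptive field mentioned above can be sketched directly: a narrow excitatory center Gaussian minus a wider inhibitory surround Gaussian. The sigmas and gains below are hypothetical, chosen only to make the center/surround signs visible.

```python
# Sketch of the difference-of-Gaussians (DOG) RF model: sensitivity is
# positive near the RF center and negative in the surround, with a sharp
# zero crossing between them. All parameters are hypothetical.

import math

def dog(x, y, sigma_c=0.2, sigma_s=0.6, gain_c=1.0, gain_s=0.8):
    """DOG sensitivity at visual-field offset (x, y) from the RF center."""
    r2 = x * x + y * y
    center   = gain_c * math.exp(-r2 / (2 * sigma_c ** 2))
    surround = gain_s * math.exp(-r2 / (2 * sigma_s ** 2))
    return center - surround

on_center  = dog(0.0, 0.0)   # a spot on the RF center excites (positive)
in_surround = dog(0.6, 0.0)  # the same spot in the surround inhibits (negative)
```

The "fuzziness" discussed in the text corresponds to the smooth Gaussian falloff on either side of the zero crossing.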

Decision Rules for V1
In the primary visual cortex, neurons are sensitive to orientation, which is not observed in lower areas (retina or LGN). There are two cell types in V1: simple and complex cells. They can be characterized by the spatial relationships between their increment (on) and decrement (off) subfields. A simple cell has, in principle, separated subfields, whereas a complex cell is characterized by overlapping subfields (Fig. 1). In consequence, simple cells are linear (the first harmonic dominates their responses: F1/F0 > 1) whereas complex cells are nonlinear (F1/F0 < 1). The classical V1 RF properties can be found using small flashing light spots, moving white or dark bars, or gratings. We will give an example of the decision rules for the RF mapped with moving white and dark bars [7].
A moving white bar gives the following decision rule:

o90 ∧ xp_i ∧ yp0 ∧ xs_k ∧ ys1 ∧ s2 → r1.

The decision rule for a moving dark bar is given as:

o90 ∧ xp_j ∧ yp0 ∧ xs_l ∧ ys1 ∧ s2 → r1,

where xp_i is the x-position of the increment subfield, xp_j the x-position of the decrement subfield, yp0 the y-position of both subfields, xs_k, xs_l, ys1 the horizontal and vertical sizes of the RF subfields, and s2 a vertical bar, which means that this cell is tuned to the vertical orientation (for illustration purposes we added orientation o90, which is not necessary because the bar s2 is vertical). We have skipped other stimulus attributes like movement velocity, direction, amplitude, etc.
For simplicity we assume that the cell is not direction sensitive (it gives the same responses for both directions of bar movement and for dark and light bars) and that the cell responses are symmetric around the middle x-position (xp).
An overlap index [8],

OI = (0.5·xs_k + 0.5·xs_l - |xp_i - xp_j|) / (0.5·xs_k + 0.5·xs_l + |xp_i - xp_j|),

compares the sizes of the increment (xs_k) and decrement (xs_l) subfields to their separation (|xp_i - xp_j|). After [9], if OI ≤ 0.3 (non-overlapping subfields) it is a simple cell with a dominating first-harmonic response (linear), and r1 is the amplitude of the first harmonic. If OI ≥ 0.5 (overlapping subfields) it is a complex cell with a dominating F0 response (nonlinear), and r1 is the change in the mean cell activity. Hubel and Wiesel [10] proposed that the complex cell RF is created by the convergence of several simple cells, in a similar way to how V1 RF properties are related to the RFs of LGN cells (Fig. 1). However, there is some recent experimental evidence that the nonlinearity of the complex cell RF may be related to feedback or horizontal connections [11]. This is an important finding, because the subfields of the V1 RF are characterized by Gaussian functions and are therefore fuzzy. If OI is small, as in simple cells, their RFs are also fuzzy; however, when the decrement and increment subfields overlap, their difference determines a sharp edge.
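The OI classification can be sketched as follows. The formula is the reconstruction given above (half-sum of subfield widths versus their separation); the thresholds 0.3 and 0.5 come from the text, and the example subfield geometries are hypothetical.

```python
# Sketch of the overlap index (OI) and the simple/complex classification
# described above. Subfield widths and positions are hypothetical.

def overlap_index(xs_k, xs_l, xp_i, xp_j):
    """OI = (0.5*(w1 + w2) - d) / (0.5*(w1 + w2) + d), d = subfield separation."""
    half_sum = 0.5 * (xs_k + xs_l)
    d = abs(xp_i - xp_j)
    return (half_sum - d) / (half_sum + d)

def classify_v1(oi):
    if oi <= 0.3:
        return "simple"    # separated subfields, linear (F1 dominates)
    if oi >= 0.5:
        return "complex"   # overlapping subfields, nonlinear (F0 dominates)
    return "intermediate"

separated  = overlap_index(1.0, 1.0, 0.0, 1.5)   # wide separation -> simple
overlapped = overlap_index(1.0, 1.0, 0.0, 0.2)   # strong overlap -> complex
```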

Decision Rules for V4
The properties of the RFs in area V4 are more complex than those in V1 or in the LGN, and in most cases they are nonlinear. It is not clear what exactly the optimal stimuli for cells in V4 are, but a popular hypothesis is that V4 cells code simple, robust shapes. Below there is an example from [12] of the decision rules for a narrow (0.

Decision Rules for feedforward (FF) connections from LGN -> V1
Thalamic axons target specific cells in layers 4 and 6 of the primary visual cortex (V1). Generally, we assume that there is a linear summation of LGN cells onto one V1 cell. It was proposed [9] that the LGN cells determine the orientation of the V1 cell in the following way: LGN cells which have a direct synaptic connection to a V1 neuron have their receptive fields arranged along a straight line on the retina (Fig. 1). In this classical Hubel and Wiesel [11] model, the major assumption is that the activity of all (four in Fig. 1) LGN cells is necessary for the V1 cell to be sensitive to the specific stimulus (an oriented light bar). This principle determines the syntax of the LGN-to-V1 decision rule, using a logical "AND", meaning that if one LGN cell does not respond then there is no V1 cell response. After Sherman and Guillery [13] we will call such inputs drivers. Alonso et al. [14] showed that there is a high specificity between the RF properties of the LGN cells having monosynaptic connections to a V1 simple cell. This precision goes beyond simple retinotopy and includes such RF properties as RF sign, timing, subregion strength and size [14]. The decision rules for the feedforward LGN-to-V1 connections are the following:

r1_LGN(x1, y0) ∧ r1_LGN(x2, y0) ∧ … ∧ r1_LGN(xn, y0) → r1_V1,
r1_LGN(x1, y1) ∧ r1_LGN(x2, y2) ∧ … ∧ r1_LGN(xn, yn) → r1_V1,

where the first rule determines the r1 responses of a V1 cell with optimal horizontal orientation, and the second rule says that the optimal orientation is 45 degrees; (x_i, y_i) is the localization of the RF in the x-y Euclidean coordinates of the visual field. The logical "AND" in FF connections is similar to the Łukasiewicz t-norm and can be written as v(r_V1) = max(0, Σ_i v(r_LGN(x_i, y_i)) - (n - 1)), where the value v(.) is between 0 and 1 (multi-valued Łukasiewicz logic), interpreted as meaning that all inputs from LGN must have significant strength in order to obtain a significant V1 response.
FIGURE 1: Schematic modified on the basis of [8]. Four LGN cells with circular receptive fields arranged along a straight line on the retina have direct synaptic connections to a V1 neuron. This V1 neuron is orientation sensitive, as marked by the thick, interrupted lines.
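The driver "AND" can be sketched with the n-ary Łukasiewicz conjunction given above. The graded input values are hypothetical; the point is only the veto property: one silent LGN input zeroes the V1 value.

```python
# Sketch of the driver ("AND") rule for FF connections using the n-ary
# Lukasiewicz t-norm: v(r_V1) = max(0, sum(v_i) - (n - 1)).
# Input values in [0, 1] are hypothetical graded LGN responses.

def driver_and(lgn_values):
    """n-ary Lukasiewicz conjunction over the LGN inputs of one V1 cell."""
    return max(0.0, sum(lgn_values) - (len(lgn_values) - 1))

aligned  = driver_and([1.0, 1.0, 1.0, 1.0])   # all four LGN cells fire: full drive
one_weak = driver_and([1.0, 1.0, 1.0, 0.0])   # one silent cell vetoes the response
```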

Decision Rules for feedback (FB) connections from V1->LGN
There are many papers showing the existence of feedback connections from V1 to the LGN. In [15] the authors quantitatively compared the visuotopic extent of geniculate feedforward afferents to V1 with the size of the RF center and surround of neurons in the V1 input layers, and the visuotopic extent of V1 feedback connections to the LGN with the RF size of cells in V1. V1 feedback connections restrict their influence to an LGN region visuotopically coextensive with the size of the classical RF of V1 layer 6 cells and commensurate with the LGN region from which they receive feedforward connections. In agreement with [13] we will call feedback inputs modulators, with the following decision rule:

r1_V1 ∧ (r1_LGN(x1, y1) ∨ r1_LGN(x2, y2) ∨ … ∨ r1_LGN(xn, yn)) → r2_LGN.  (11)

This rule says that when the activity of a particular V1 cell is in agreement with the activity of some LGN cells, their responses increase from r1 to r2; r1_LGN(x_i, y_i) means the r1 response of the LGN cell with coordinates (x_i, y_i) in the visual field, and r2_LGN means the r2 response of all LGN cells in the decision rule whose activity was coincidental with the feedback excitation, i.e. a pattern of LGN cell activity. We interpret the logical "OR" as choosing the preferred pattern of active LGN cells.
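The modulator rule can be sketched as a promotion step: if the V1 cell is active and at least one LGN cell in its feedback field is at r1, the coincident LGN responses are raised from r1 to r2. The cell names and response pattern are hypothetical.

```python
# Sketch of the modulator ("OR") feedback rule (11): feedback from an
# active V1 cell promotes coincident LGN responses from r1 to r2.
# The LGN cell labels and their states are hypothetical.

def feedback_modulate(v1_active, lgn_responses):
    """Return the LGN response pattern after V1 -> LGN feedback."""
    if not v1_active or not any(r == "r1" for r in lgn_responses.values()):
        return dict(lgn_responses)             # no coincidence: nothing changes
    return {cell: ("r2" if r == "r1" else r)   # coincident cells: r1 -> r2
            for cell, r in lgn_responses.items()}

before = {"lgn1": "r1", "lgn2": "r0", "lgn3": "r1"}
after  = feedback_modulate(True, before)       # the chosen pattern is sharpened
```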

Decision Rules for feedforward connections V1 -> V4
There are relatively small direct connections from V1 to V4 bypassing area V2, but we also take into account the V1-to-V2 [16] and V2-to-V4 feedforward connections, which are highly organized but variable, especially in V4 [17]. We simplify by assuming that V2 has properties similar to V1 but a larger RF size. We assume that, like the connections from the retina to the LGN and from the LGN to V1, direct or indirect connections from V1 to V4 provide driver input and fulfil decision rules of the form:

r1_V1(x1, y1) ∧ r1_V1(x2, y2) ∧ … ∧ r1_V1(xn, yn) → r1_V4.

We assume that the RF in area V4 sums up driver inputs from regions in areas V1 and V2 of cells with highly specific RF properties, not only retinotopically correlated. We interpret the logical "AND" as above for the LGN-to-V1 FF connections.

Decision Rules for feedback connections from V4->V1
Anterograde anatomical tracing [18] has shown axons backprojecting from area V4 directly to area V1, or sometimes with branches in area V2. Axons of V4 cells span large territories in area V1, with most terminations in layer 1, which can be either distinct clusters or a linear array. These axon-specific branches determine decision rules which will have a similar syntax (see below), but the anatomical structure of a particular axon may introduce different semantics. Their anatomical structures may be related to the specific receptive field properties of different V4 cells. Distinct clusters may have terminals on V1 cells near "pinwheel centers" (cells with different orientations arranged radially), whereas a linear array of terminals may be connected to V1 neurons with similar orientation preference. In consequence, some parts of the V4 RF would have a preference for certain orientations, while others may have a preference for certain locations but be more flexible about orientation. This hypothesis is supported by recent intracellular recordings from neurons located near pinwheel centers, which, in contrast to other, narrowly tuned neurons, showed subthreshold responses to all orientations [19]. The V4 input modulates a V1 cell in the following way:

r1_V4 ∧ (r1_V1(x1, y1) ∨ r1_V1(x2, y2) ∨ … ∨ r1_V1(xn, yn)) → r2_V1.

The meanings of r1_V1(x_i, y_i) and r2_V1 are the same as explained above for the V1-to-LGN decision rule.

Decision Rules for feedback connections V4->LGN
Anterograde tracing from area V4 showed axons projecting to different layers of the LGN, and some of them also to the pulvinar [20]. These axons have widespread terminal fields with branches non-uniformly spread over several millimetres (Fig. 2). Like the descending axons in V1, axons from area V4 have terminations in the LGN in distinct clusters or in linear branches (Fig. 2). These clusters and branches are characteristic of different axons and, as mentioned above, their differences may be related to different semantics in the decision rule below:

r1_V4 ∧ (r1_LGN(x1, y1) ∨ r1_LGN(x2, y2) ∨ … ∨ r1_LGN(xn, yn)) → r2_LGN.

The meanings of r1_LGN(x_i, y_i) and r2_LGN are the same as explained above for the V1-to-LGN decision rule.
Notice that the interaction between FF and FB pathways extends the classical view that the brain, as a computer, uses two-valued logic. In psychophysics this effect can be paraphrased as: I see it, but it does not fit my predictions. In neurophysiology we assume that a substructure could be optimally tuned to the stimulus while its activity does not fit the FB predictions. Such an interaction can be interpreted as a third logical value. If there is no stimulus, the response in the local structure should have logical value 0; if the stimulus is optimal for the local structure, it should have logical value ½; and if it is also tuned to the expectations of higher areas (positive feedback), then the response should have logical value 1. Generally it becomes more complicated if we consider many interacting areas, but in this work we use only three-valued logic.
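The three-valued scheme just described can be sketched as a small truth function; exact rationals keep the value ½ exact. The boolean inputs are hypothetical abstractions of "stimulus present", "locally tuned", and "feedback confirms".

```python
# Sketch of the three-valued interpretation described above:
# 0 = no stimulus (or local structure not tuned), 1/2 = locally tuned but
# not confirmed by feedback, 1 = tuned and confirmed by positive feedback.

from fractions import Fraction

def local_value(stimulus_present, locally_tuned, feedback_confirms):
    if not stimulus_present or not locally_tuned:
        return Fraction(0)
    if feedback_confirms:
        return Fraction(1)      # hypothesis agrees with the FB prediction
    return Fraction(1, 2)       # "I see it, but it does not fit my predictions"

states = [
    local_value(False, False, False),  # no stimulus
    local_value(True,  True,  False),  # tuned, no feedback confirmation
    local_value(True,  True,  True),   # tuned and confirmed
]
```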

Experimental Basis
We present our model with an example of the data analysis of two neurons recorded in monkey area V4 [5]. One example of V4 cell responses to a single thin vertical bar in different horizontal (x-axis) positions is shown in the upper left part of Fig. 2 (Fig. 2E). Cell responses are maximal but not symmetric around the middle bar position. Fig. 2F shows the responses of the same cell (cell 61 in Table 1) to two bars: the first bar stays at the 0 position, while the second bar changes its position along the x-axis. Both bars change their luminance between black and white (flashing) in opposite phase to each other. The cell responses show several maxima, dividing the receptive field into four areas. However, this is not always the case, as responses to two bars in another cell (cell 62 in Table 1) show only three maxima (Fig. 2G).
The one-bar decision rules can be interpreted as follows: the narrow vertical bar evokes a strong response in certain positions, medium-size bars evoke medium responses in certain positions, and wide horizontal or vertical bars evoke no responses (Fig. 2E and the two top rows in Tab. 1). We propose the following classes of object Parts Interaction Rules:

Two-bar decision rules
PIR1: facilitation when the stimulus consists of multiple similar thin bars with small distances (about 0.5 deg) between them, and suppression when the distance between bars is larger than 0.5 deg. Suppression/facilitation can be periodic along the receptive field, with dominating periods of about 30, 50, or 70% of the RF width.
PIR2: inhibition when the stimulus consists of multiple similar discs with distances between their edges ranging from 0 deg (touching) to 0.5-3 deg across the RF width.
PIR3: if the bars or patches have different attributes, like polarity or drifting directions, then the suppression is smaller and localized facilitation at small distances between stimuli is present.
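The distance-dependent part of PIR1 can be sketched as a simple predicate on inter-bar separation. The 0.5 deg cut-off comes from the rule above; reducing PIR1 to a single threshold (ignoring the periodic modulation) is a simplification for illustration.

```python
# Sketch of PIR1 as a distance predicate: similar thin bars facilitate at
# separations up to about 0.5 deg and suppress at larger separations.
# The single-threshold form is a deliberate simplification of the rule.

def pir1(distance_deg):
    """Classify the two-bar interaction by inter-bar distance (degrees)."""
    return "facilitation" if distance_deg <= 0.5 else "suppression"

near = pir1(0.3)   # bars close together
far  = pir1(1.5)   # bars well separated
```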
These rules were partly tested on cells from area V4 by using disc or annulus stimuli with drifting gratings of optimal orientation and variable spatial frequency [5].
The measurement of x-positions depends on the stimulus width (every 0.3 or 0.25 deg); therefore we resample our data every 0.5 deg, assume that all bars have the same sizes, and skip the bar size parameters. We also assume that cells give similar responses at the tested y-positions. The stimuli used in our two-bar experiments can be placed in ten categories, denoted Y1, …, Y10 (we skip the bars in the 0 x-position). There are three ranges of responses, denoted r0, r1, r2. Therefore the expert's knowledge involves three corresponding classes, denoted X0, X1, X2.
We want to find out whether the equivalence classes of the relation IND{r}, or V4-granules, form unions of B-elementary granules, i.e. whether B ⇒ {r}. We calculate the lower and upper approximations [1] of the basic concepts in terms of the stimulus basic categories. The concepts related to response classes 0, 1, and 2 are roughly B-definable, which means that with some approximation we have found which stimuli do not evoke a response, or evoke a weak or a strong response, in the area V4 cells. Certain bar positions, such as Y5 or Y9 (-1, 0.5), are always inhibitory, but a bar in the 0 x-position may evoke inhibition or strong excitation. Most bar positions evoke a medium excitatory response (r1), but here as well a bar in the 0 x-position may also evoke a strong excitatory response. In summary, we have performed a rough set theory analysis for two different area V4 cells stimulated by two bars flashed in contra-phase. In this part we have looked at characteristic spots on the x-axis and have omitted the exact size of the bars and their shift along the y-axis. Our assumption that these parameters, and maybe others, like differences between cells, are not important is weak. In consequence our concepts are rough, but we still found invariances: bar positions which always give similar responses.
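The dependency test B ⇒ {r} amounts to checking that no B-elementary granule mixes different response classes. This can be sketched on a miniature decision table; the rows below are hypothetical, not the measured V4 data.

```python
# Sketch of the consistency check behind B => {r}: the condition attributes
# functionally determine the decision iff objects indiscernible on B always
# share one decision value. The miniature stimulus table is hypothetical.

def determines(table, cond_attrs, decision):
    """True iff every B-elementary granule carries a single decision value."""
    seen = {}
    for row in table:
        key = tuple(row[a] for a in cond_attrs)
        if seen.setdefault(key, row[decision]) != row[decision]:
            return False               # same granule, conflicting decisions
    return True

rows = [
    {"xp": 0.0, "yp": 0.0, "r": "r2"},
    {"xp": 0.5, "yp": 0.0, "r": "r1"},
    {"xp": 0.0, "yp": 0.0, "r": "r0"},   # same position, different response
]

consistent   = determines(rows[:2], ["xp", "yp"], "r")   # subtable: B => {r} holds
inconsistent = determines(rows, ["xp", "yp"], "r")       # full table: concepts are rough
```

A `False` result here is exactly the situation reported above for the 0 x-position, where the same bar position may evoke either inhibition or strong excitation.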

DISCUSSION
Zadeh [2], in his computing-with-words concept, suggested that such computations are analogous to perceptions, which are in principle fuzzy. He did not differentiate whether computing with words is related to parallel (pre-attentive) or serial (conscious) processes. Our model concentrates on pre-attentive processes. These so-called early processes extract and integrate basic features of the environment in parallel channels. These processes are related to the human perceptual system's tendency to group together similar objects with unsharp boundaries [21]. These similarities may be related to synchronizations of multi-resolution, parallel computations and are difficult to simulate using a digital computer [22]. The similarity relation is reflexive, symmetric and non-transitive, but we have classified object attributes as B-elementary granules using the indiscernibility relation, which is reflexive, symmetric, and transitive. The main idea of this paper is that each area (LGN, V1, and V4) has a separate and different classification, and finds an area-specific indiscernibility relation: LGN-elementary granules describe light points, V1-elementary granules lines, and V4-elementary granules simple shapes. An unknown object is represented by many equivalence classes of different attributes. The main question is: what algorithm is used by the brain to find similarities between unknown and familiar objects? Our hypothesis is that the brain uses a quasi-similarity relation, which is reflexive, graded-symmetric and graded-transitive, and is related to improper containment, which leads to rough mereological granular logic [23,24]. We claim that the feedforward and feedback pathways play a dominating role in introducing this granular logic.

Decision rules of the Receptive Field
By using a multi-valued categorization of V4 neuron responses, we have differentiated between bottom-up information (hypothesis testing) related to the sensory input, and predictions, some of which can be learned but are generally related to positive feedback from higher areas. If a prediction is in agreement with a hypothesis, the object classification will change from category 1 to category 2. Our research suggests that such decisions can be made very effectively during preattentive, parallel processing in multiple visual areas. In addition, we found that the decision rules of different neurons can be inconsistent. One should take into account that modeling complex phenomena entails the use of local models (captured by local agents, if one would like to use the multi-agent terminology [25]) that should be fused afterwards. This process involves negotiations between agents [25] to resolve contradictions and conflicts in local modeling. One possible approach to developing methods for complex concept approximation is based on layered learning. Inducing concept approximations should proceed hierarchically, starting from concepts that can be directly approximated using sensor measurements, toward complex target concepts related to perception. This general idea can be realized using additional domain knowledge represented in natural language.

Critique of the "winner-takes-all" strategy
These inconsistencies could help process different aspects of the properties of complex objects. The principle is similar to that observed in the orientation-tuning cells of the primary visual cortex. Neurons in V1 with overlapping receptive fields show different preferred orientations. It is assumed that this overlap helps extract local orientations in different parts of an object. However, it is still not clear which cell will dominate if several cells with overlapping receptive fields are tuned to different attributes of a stimulus. Most models assume a "winner-takes-all" strategy, meaning that, using a convergence (synaptic weighted averaging) mechanism, the most dominant cells take control over other cells, and less represented features are lost. This approach is equivalent to a two-valued logic implementation. Our findings from area V4 seem to support a different strategy than the "winner-takes-all" approach. It seems that different features are processed in parallel and then compared with the initial hypothesis in higher visual areas. We think that descending pathways play a major role in this verification process. At first, the activity of a single cell is compared with the feedback modulator by logical conjunction, in order to avoid hallucinations. Next, the global logical disjunction ("modulator") operation allows the brain to choose a preferred pattern from the activities of different cells. This process of choosing the right pattern may have a strong anatomical basis, because individual axons have variable and complex terminal shapes, favouring some regions and features over others (so-called salient features; for example Fig. 2). Learning can probably modify the synaptic weights of the feedback boutons, fine-tuning the modulatory effects of feedback.

Object's parts fusion
As we have previously suggested [12], the brain may use multi-valued logic in order to test learned predictions about object attributes by comparing them with stimulus-related hypotheses. Neurons in area V4 integrate an object's attributes from the properties of its parts in two ways: (1) within the area, via horizontal or intra-laminar local excitatory-inhibitory interactions; (2) between areas, via feedback connections tuned to lower visual areas. Our research puts more emphasis on feedback connections because they are probably faster than horizontal interactions. Different neurons have different Part Interaction Rules (PIRs, as described in the Results section) and perceive objects through multiple "unsharp windows". If an object's attributes fit the unsharp window, a neuron sends positive feedback [26] to lower areas, which, as described above, use "modulator logical rules" to sharpen the attribute-extracting window and therefore change the neuron's response from class 1 to class 2 (Fig. 1 E and F). The above analysis of our experimental data leads us to suggest that the central nervous system chiefly uses at least two different logical rules: a "driver logical rule" and a "modulator logical rule". The first processes data using a large number of possible algorithms (overrepresentation). The second supervises decisions and chooses the right algorithm.
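The class 1 to class 2 change described above can be caricatured in a few lines (the thresholds and the multiplicative feedback gain are assumptions for illustration only):

```python
def response_class(rate, thresholds=(10, 40)):
    """Discretize a firing rate (sp/s) into classes 0..n.
    The threshold values are assumed for illustration."""
    return sum(rate >= t for t in thresholds)

def with_feedback(rate, fits_window, gain=1.5):
    """If the object's attributes fit the cell's 'unsharp window', positive
    feedback sharpens the response (illustrative multiplicative gain)."""
    return rate * gain if fits_window else rate

rate = 30.0
print(response_class(rate))                       # hypothesis alone: class 1
print(response_class(with_feedback(rate, True)))  # after matching feedback: class 2
```

The same firing rate thus lands in a higher response class only when the prediction fits, which is the behavior we interpret as the modulator sharpening the attribute-extracting window.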

Comparison with other models
Below we look at possible cognitive interpretations of our model, using the shape categorization task as an example. The classification of different objects by their different attributes has been regarded as a single process termed "subordinate classification" [27]. Relevant perceptual information is related to "subordinate-level shape classification" by distinctive information about the object, such as its size, surface, curvature of contours, etc. There are two theoretical approaches to shape representation: metric templates and invariant parts models. As mentioned above, both theories assume that an image of the object is represented in terms of cell activation in areas like V1: a spatially arrayed set of multi-scale, multi-oriented detectors ("Gabor jets"). Metric templates map object values directly onto units in an object layer, or onto hidden units that can be trained to differentially activate or inhibit object units in the next layer [27]. Metric templates preserve the metrics of the input without the extraction of edges, viewpoint-invariant properties, parts, or the relations among parts. This model discriminates shape similarities as well as human psychophysical similarities of complex shapes or faces. Matching a new image against those in the database is done by allowing the Gabor jets to independently find their own best fit (change their position). The similarity of two objects is then the sum of the correlations in corresponding jets. Using this method, roughly 95% accuracy can be achieved in distinguishing several hundred faces despite changes in object or face position or changes in facial expression [28].
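The "sum of correlations in corresponding jets" similarity can be sketched as follows (a toy version with invented jet vectors; the real model additionally lets each jet shift to its best-fitting position before correlating [28]):

```python
import math

def correlation(u, v):
    """Normalized dot product between two jet vectors
    (magnitudes of Gabor filter responses at one image location)."""
    nu = math.sqrt(sum(x * x for x in u))
    nv = math.sqrt(sum(x * x for x in v))
    return sum(x * y for x, y in zip(u, v)) / (nu * nv)

def similarity(jets_a, jets_b):
    """Object similarity: sum of correlations over corresponding jets."""
    return sum(correlation(u, v) for u, v in zip(jets_a, jets_b))

# Two toy "faces", each with two jets of three filter magnitudes (invented numbers).
face_a = [[1.0, 0.2, 0.1], [0.3, 0.9, 0.0]]
face_b = [[0.9, 0.3, 0.1], [0.2, 1.0, 0.1]]
print(round(similarity(face_a, face_b), 3))
```

Because each jet correlation is at most 1, the similarity of two identical objects equals the number of jets, and dissimilar objects score lower.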
The main problems with the Lades model described above are that it does not distinguish among the largest effects in object recognition: it is insensitive to contour variations, which are very important psychophysically, and it is insensitive to salient features (nonaccidental properties, NAPs) [27]. The model we propose here suggests that these features are probably related to effects of feedback pathways, which may strengthen differences, signal salient features, and also assemble other features, making it possible to extract contours. A geon structural description (GSD) is a two-dimensional representation of an arrangement of parts, each specified in terms of its nonaccidental characterization and the relations among these parts [27]. Across objects, the parts (geons) can differ in their NAPs. NAPs are properties that do not change with small depth rotations of an object. The presence or absence of the NAPs of some geons, or the different relations between them, may be the basis for subordinate-level discrimination [27]. The advantage of the GSD is that the representation of objects in terms of their parts and the relations between them is accessible to cognition and fundamental for viewpoint-invariant perception. Our neurological model introduces interactions between RF parts as in the geon model; however, our parts are defined differently from the somewhat subjective parts of the GSD model.

Hierarchical PARTS
We propose a hierarchical definition of parts based on neurophysiological recordings from the visual system. First-order primitives have circular shapes, much like receptive fields in the retina or LGN (Fig. 1). Second-order parts might be edges or lines, similar to the RFs in V1 (Fig. 1). Third-order parts resemble the RFs in area V4 and are therefore somewhat similar to geons (Fig. 2). In our model, interactions between parts and NAPs are associated with the role of area V4 in visual discrimination, as described in the lesion experiments above. However, feedback from area V4 to the LGN and area V1 could be responsible for the possible mechanism associated with the properties of the GSD model. The different interactions between parts may be related to the complexity and the individual shapes of different axons descending from V4. Their separate terminal clusters may be responsible for invariance related to small rotations (NAPs). These are the anatomical bases of the GSD model, although we hypothesize that the electrophysiological properties of the descending pathways (FB), defined above as the modulator, are even more important. The modulating role of the FB is related to the neuron's logic: through this logic, multiple patterns of coincidental activity between the LGN or V1 and FB can be extracted. One may imagine that these differently extracted patterns of activity correlate with the multiple viewpoints or shape rotations defined as NAPs in the GSD model.

Stimulus shape is denoted by s, with the following values: grating s = 1, vertical bar s = 2, horizontal bar s = 3, disc s = 4, annulus s = 5. Decision attributes are divided into several classes determined by the strength of the neural responses. Small cell responses are classified as class 0, medium to strong responses as classes 1 to n-1 (min(n) = 2), and the strongest cell responses as class n. Therefore each cell divides stimuli into its own family of equivalence classes. Cell responses (r) are divided into n+1 ranges: class 0, activity below the threshold (e.g. 10 sp/s), labelled r0; class 1, activity above the threshold, labelled r1; ...; class n, maximum response of the cell (e.g. 100-200 sp/s), labelled rn.
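This coding scheme translates directly into a short sketch (the thresholds are the example values from the text; the shape codes follow the definition of s):

```python
# Stimulus shape codes, exactly as defined in the text.
SHAPE_CODE = {"grating": 1, "vertical bar": 2, "horizontal bar": 3,
              "disc": 4, "annulus": 5}

def response_label(rate, thresholds):
    """Map a firing rate (sp/s) to r0 .. rn given n ascending class thresholds.
    With n = 2 and the example values from the text: below 10 sp/s -> r0,
    above it -> r1, near the cell's maximum -> r2."""
    k = sum(rate >= t for t in thresholds)
    return f"r{k}"

thresholds = (10, 100)   # 10 sp/s threshold; ~100-200 sp/s maximal range
s = SHAPE_CODE["disc"]   # s = 4
for rate in (5, 40, 150):
    print(response_label(rate, thresholds))
# prints r0, r1, r2
```

Each cell's threshold set induces its own partition of the stimuli, which is what allows the rough-set machinery to treat the responses as decision attributes.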

FIGURE 2:
FIGURE 2: Modified plots from [5]. Curves represent responses of two cells to small single (E) and double (F, G) vertical bars changing their position along the x-axis (Xpos). Responses are measured in spikes/s. Mean cell responses ± SE are marked in E, F, and G. Thick horizontal lines represent the 95% confidence interval for the response to a single patch in position 0. Cell responses are divided into three ranges by thin horizontal lines. Below each plot are schematics showing bar positions giving r1 (gray) and r2 (black) responses: below (E) for a single bar, below (F and G) for double bars (one bar was always in position 0). (H) This schematic extends the responses to horizontally placed bars (E) to the whole RF, assuming that responses along the other axes (y and the axes tilted 45 and 135 deg) are similar to the x-axis responses: white shows maximal excitatory responses, black shows inhibitory interactions between bars. (I, J) Schematics summarize cell responses (G) when bars are in opposite (I) or the same phase (J; plot not shown) of their black-white flashing period (frequency 4 Hz). Ellipses in different positions and shapes relate to experimental results in which bar stimuli were moved along the y-axis: -1, -0.5, and +0.5.
4 deg) and long (4 deg) horizontal or vertical bars placed in different positions of the V4 RF:

TABLE 1:
Decision table for cells from Fig. 1. Attributes o, ob, sf, and sfb were constant and are not presented in the table. In the two-bar experiment the shape value was s = 22. For cell 62g, only results for the 0.3 by 2 deg bar in y-positions 0 and -1 are included.