Objects and their computational framework

Most notions of an object are embedded into a logical domain, especially in database theory; their properties within a computational domain have not yet been studied properly. The main topic of this paper is to analyze concepts from distinct computational primitive frames, extracting useful object properties and their possible advantages. Several important metaoperators are used to unify the approaches and to establish correspondences between them.


Topics
This note is inspired by the tremendous variety of notions of an object, both known and unknown to database theoreticians [Wan89], [WW94], [CY91]. The traditional approach, which treats an object as a database-theory phenomenon, tends to use logical means to study all of its practically necessary and interesting features [Bee90].
Implementation efforts often violate this object harmony within any prescribed logical domain, especially when additional computational effects are discovered [MB90], [Fon91]. Under these conditions the idea that logic alone gives a sound ground for establishing and investigating objects becomes less attractive.
Intuitively, an object is understood as a relatively self-contained and stable entity which combines both building blocks and toolkit facilities, possibly aimed at developing and implementing some kind of information system [Rou76], [HC89].
Discussions of the minimal mathematical machinery needed to encircle the implied tasks are well known and share the same question: how to start with minimal assumptions and restrictions and yet generate most of the known and already used object features [Coy92]. Thus the notion of object marks the boundary of current theoretical issues, concerning not only the logical part of database theory but also its computational part.
Research efforts are concentrated on the main concepts, vital theories and foundations, and metatheoretical considerations needed to resolve, from an object standpoint, the essence of the database universe of discourse [Bro95], [WWC92].
Advances in the logical study of objects are based on attempts to discover a suitable mathematical representation, and the notion of an individual bridged the gap between intuitive ideas and rigorous ground. The most prominent results were established when individuals were divided into actual, possible, and virtual ones [TM93]. The notion of a possible individual [Sco70] brought forward the schematic nature of an object and added flexibility to purely logical models. Thus the state of an object was clearly represented and studied in more detail, giving rise to data dynamics [Wol93].
The intuitively observed objects were enclosed into some mathematical universe of discourse, and category theory [NR95] is one of the promising candidates for establishing the desired theoretical framework. In a category, an object is evaluated with an assignment which captures the properties of the computational environment. Changeable assignments simulate the dynamic effects that manipulate object states. Possible invariants add stability to the objects, and the triples <state, individual, concept> give a sound basis for using and representing such objects as the basic building blocks of a target information system [Wol93].
This approach possesses both logical and computational properties. From time to time more attention was paid to one of these features than to the other. After a period of research activity a new kind of object was discovered: the variable domain [Sco80]. The hope of dealing with mathematically sound objects was realized in the functor-as-object [Wol96].
In turn, functors were used as ranges of the variables in logical formulae to simulate polymorphic types. At once the initially computational idea became more logical, upsetting the balance between the two counterparts.
This situation calls for backtracking to revise the meaning put into the initial notion of an object. Attention turns back to developing a conceptual framework adequate for characterizing the computational features of an object [Yah87]. The basic task is to assemble the existing computational cases and concepts into a suitable framework. The descriptions thus obtained provide a perspective from which to view the possible advantages and disadvantages of computing with objects as they are [Gil87].

Aims
The descriptions of various kinds of computation constitute part of a framework which can accommodate combinatory logic. In the initial (and pure) theory, combinators-as-objects were used to show that bound variables are unnecessary in systems of logic. When formulae are used to restrict an object's properties, bound variables mark the places in a formula that are affected by some metaoperator, in particular a quantifier.
In their mathematical origin, combinators are objects which express rules for manipulating other objects (combinators among them). The generality of representing 'applied' objects by 'pure' objects can be conditionally restricted to capture the needed features and computational effects. Some of the following questions are of interest to many researchers, the author included:
(1) Non-formal ideas concerning 'object'.
(2) Are the known formal theories of objects really fruitful enough to capture the intuitive content of 'object'?
(3) A data model based on computations with objects: a phantom or a desirable means?
(4) Base of data vs. database: the basis property for computations with objects.
(5) Inductive classes: generating a variety of (possible) objects (which are schematic).
The present paper obviously does not cover all these troubles. Mathematical trends and citations for contemporary research in combinatory logic, λ-calculus, and category theory are omitted. Only a few of the related topics are used, reflecting the current interests of the author.
The paper is divided into four sections. Section 1 gives the needed encircling of the object topics. Section 2 preranges expressions with objects via different general metaoperators, setting up the initial conceptual framework. Section 3 illustrates the particular computational frameworks within the initial one.
The first two sections are independent of combinatory logic (and of the purification of combinator-as-object). The third section deals with particular applicative computational systems, i.e. systems where an application metaoperator is significant. Functional abstraction as a metaoperator is unnecessary: its effects may be assembled into combinators.

Non-Formal Ideas Concerning 'Object'
The natural way to represent the ideas involved is to ask whether there are any atomic, simplest entities. Those entities are used to generate derived entities built from other, less complicated ones. A more suitable way is to propose different modes of establishing and using entities. The first of them is as follows: (1) the researcher starts with the simplest entities and expands them to generate more complicated ones; (2) a relation of expansion is established to link the initial and target objects.
On the contrary, the second approach goes the other way: (1) the researcher takes an entity as it is and attempts to reduce it to less complicated entities; (2) a relation of reduction links the initial and target objects.
When both expansion and reduction are used simultaneously, a relation of conversion is said to be used.
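A minimal sketch may help fix the terminology; the term representation and the single rewrite rule below are illustrative assumptions, not taken from the text. Reduction rewrites an entity toward a less complicated one, expansion is its inverse, and conversion is the symmetric combination of both:

```python
# Hypothetical one-step rewrite relation: a compound entity ('wrap', x)
# reduces to the less complicated entity x.
def reduces_to(a, b):
    """One reduction step: initial object a is linked to target object b."""
    return isinstance(a, tuple) and a[0] == 'wrap' and a[1] == b

def expands_to(a, b):
    """Expansion is the inverse of the reduction relation."""
    return reduces_to(b, a)

def convertible(a, b):
    """Conversion: expansion and reduction used simultaneously."""
    return a == b or reduces_to(a, b) or expands_to(a, b)

assert reduces_to(('wrap', 'e'), 'e')        # complex -> simple
assert expands_to('e', ('wrap', 'e'))        # simple -> complex
assert convertible(('wrap', 'e'), 'e') and convertible('e', ('wrap', 'e'))
```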

Preliminary Remarks
An object in mathematics, as a rule, requires a purely abstract notion to avoid possible ambiguities. The fruitfulness of this notion depends on the pragmatic sense of the corollaries that can be extracted. The distance between the notion of object in mathematics and in computer science is even greater than the gap between pure and applied theory. E.g., in applications some kind of logic may be presupposed and used to fix the useful properties of intuitively observed objects. By contrast, a pure and rigorous consideration does not rely on any presupposed logic, to avoid excessive restrictions. Instead, a metatheoretical framework is selected by default to fix the properties of the mathematical tool under development. Hence the first remark: the initial metatheoretical framework is a kind of pre-logic, with at least (potential) computational properties. First of all, this means it must be possible to build the usual constructs, e.g., variables, constants, sets, functions, functional spaces, etc. Note that the needed truth values are to be generated as specific objects.
On the other hand, the known essential computational property is heavily based on the notion of substitution. Indeed, everyday computer-science practice involves various strategies of replacing some parameters by other parameters or values. Thus the importance of the substitution process is clearly understood and deeply studied, yet the study is not completed even as a research area. The main idea is to promote restricted substitutions in order to generate applied theories of objects. Unrestricted, the substitution process leads directly to higher-order theories, and to interesting and less understood operators.

Ranging the Objects: Metaoperators
Restrictions arise very naturally in different approaches. A typical way to construe a weak restriction is to enable correspondences between objects, e.g., of the form

  operator (object : range),

where the operator acts on an object which is restricted by the range. Operators of this kind are often referred to as intension operators, or metaoperators.
Note that from the very beginning the consideration of entities needs suitable constructs that individualize the sets of properties possessed by the objects. For convenience they are referred to as individuals.
Research activity over the last two decades tends to separate the class of individuals into subclasses, so that actual (existing), potential (possible), and virtual individuals are distinctly extracted and studied separately. Next, the correspondences between the actually existing, potential, and virtual entities must be established.
The sensitivity of this separation-and-correspondence depends on the expressive power of the metamathematical framework. This is the branch point where the initially homogeneous metatheory is separated into syntax and semantics. Thus, constructions containing objects are to be evaluated to compute the values of expressions. Evaluation results in a metaoperator of the form

  [[source-object]] assignment = target-object,

where the assignment marks the context switch under which the source object is evaluated, resulting in its value, the target object.
3 Preserving the Computational Potentiality

Involving an Abstraction Metaoperator
Start with an initial stock of entities: a possibly infinite set of variables and constants. The whole consideration deals mainly with the notion of a function f which assigns an object f(x1, x2, . . ., xn), its value, to each n-tuple of objects x1, . . ., xn, its arguments, which in turn may be functions in the same sense. For convenient reference to the distinct arguments of an n-ary function, the abstraction metaoperator is used as an n-placed multiabstraction. To determine objects in the λ-notation an additional metaoperator of application is needed; it also saves writing effort.
The intuitive reasons for using the abstraction and application notation, before the precise definition of an object is given, are as follows. In general, a λ-expression, or λ-term, is known as a unary function whose values and arguments may in turn be functions. Every variable represents an arbitrary unary function, and (F G) is the result of applying the function F to the argument G. Whenever F contains (free) occurrences of x, the term (λx.F) represents the function whose value for an argument A results from substituting A for x in F. Now the class of objects is generated by induction on complexity, namely: (i) both variables and constants are objects; (ii) for objects F, G their application (F G) is an object; (iii) for an object F and a variable x the abstraction (λx.F) is an object.
The definition above has a 'side effect': the set of variables becomes heterogeneous because of the binding properties of the (λ • .•)-operator. This effect is clearly observed in an attempt to define substitution: for any objects F, G and variable x the effect [G/x]F of replacing every free occurrence of x in F by G is given by induction on the complexity of F:
(i) [G/x]x = G;
(ii) [G/x]a = a for atomic a and a ≠ x;
(iii) [G/x](F1 F2) = ([G/x]F1 [G/x]F2);
(iv) [G/x](λx.F) = λx.F;
(v) [G/x](λy.F) = λz.[G/x][z/y]F for y ≠ x, where z is a new variable included neither in G nor in F.
The last step (v) of the induction gives the distinction between free and bound variables.
The primitive frame of the (• •) + (λ • .•)-metaoperators generates an equational theory of objects, below referred to as Axioms:
(α) λy.F = λv.[v/y]F, if y is not bound in F and v is neither free nor bound in F;
(β) (λx.F)G = [G/x]F.
The question arises: is the (λ • .•)-abstraction metaoperator really necessary in a theory of objects? The answer below is negative.

Avoiding an Abstraction Metaoperator
A formal system without an abstraction metaoperator does exist. But this avoidance leads to some problems with encapsulation. Even more, the direct consideration generates combinatory code with a lot of encapsulated objects.
Start with the same initial stock of entities as above: a possibly infinite set of variables and constants. The set of constants contains the combinators I, K, and S. In addition, the metaoperator (• •) of application is used. An inductive class of objects is generated as follows: (i) both variables and constants are objects; (ii) for objects F, G their application (F G) is an object. A combinator is an object that contains only I, K, and S.
Axioms. For any objects X, Y, Z:

  IX = X,   KXY = X,   SXYZ = XZ(YZ).

Rules. For any objects X, X′, Y, Z: equality is reflexive, symmetric, and transitive, and X = X′ implies XZ = X′Z and ZX = ZX′. A missed parameter can be called an encapsulated object whenever the host object is observed as a kind of context: in KXY = X the argument Y is suppressed by the context KX. The encapsulation of numbers then yields computations in which part of the data stays hidden inside the host object.
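A small interpreter makes the axioms concrete; the encoding of applications as 2-tuples is an assumption of this sketch. Note how K X Y = X literally suppresses, i.e. encapsulates, its second argument:

```python
def step(t):
    """One leftmost-outermost reduction step; returns (term, changed)."""
    if isinstance(t, tuple):
        f, a = t
        if f == 'I':                                    # I X = X
            return a, True
        if isinstance(f, tuple) and f[0] == 'K':        # K X Y = X
            return f[1], True
        if (isinstance(f, tuple) and isinstance(f[0], tuple)
                and f[0][0] == 'S'):                    # S X Y Z = X Z (Y Z)
            x, y, z = f[0][1], f[1], a
            return ((x, z), (y, z)), True
        nf, ch = step(f)                                # otherwise reduce inside
        if ch:
            return (nf, a), True
        na, ch = step(a)
        if ch:
            return (f, na), True
    return t, False

def normalize(t, limit=1000):
    for _ in range(limit):
        t, ch = step(t)
        if not ch:
            break
    return t

# S K K behaves as I, and K encapsulates (discards) its second argument:
assert normalize(((('S', 'K'), 'K'), 'x')) == 'x'
assert normalize((('K', 'a'), 'b')) == 'a'
```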

Equivalence of the Theories
A known result in the theory of applicative computations is the equivalence of the (• •) + (λ • .•)- and (I, K, S) + (• •)-theories. Thus both theories of objects deal with the same task and similar ideas concerning an object. It means that a (• •) + (λ • .•)-object (source object) can be represented by an (I, K, S) + (• •)-object (target object). Even more, the set {I, K, S} is a computational basis because of the following Metatheorem: for every object F there is an abstraction-free object F′, built from I, K, S, variables, and constants only, such that F = F′.
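The metatheorem is constructive: the standard bracket-abstraction algorithm eliminates a bound variable clause by clause. A sketch under the same tuple encoding as above (the clause set is the textbook one, not quoted from this paper):

```python
def occurs(x, t):
    """Does variable x occur anywhere in object t?"""
    return t == x or (isinstance(t, tuple)
                      and (occurs(x, t[0]) or occurs(x, t[1])))

def abstract(x, t):
    """[x]t: an abstraction-free object satisfying ([x]t) x = t."""
    if t == x:
        return 'I'                                      # [x]x      = I
    if not occurs(x, t):
        return ('K', t)                                 # [x]t      = K t
    f, a = t
    return (('S', abstract(x, f)), abstract(x, a))      # [x](F A)  = S ([x]F) ([x]A)

# λx.x compiles to I; λx.(y x) compiles to S (K y) I:
assert abstract('x', 'x') == 'I'
assert abstract('x', ('y', 'x')) == (('S', ('K', 'y')), 'I')
```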

Type Checking
The natural way to generate functional spaces by λ-abstraction reflects the idea of type assignment. Type assignment requires modifying the existing set of objects, which are understood as terms. Before applying the modification, the set of types is to be determined. First of all, some basic types are assumed to exist, each representing some set. For instance, the basic type N represents the set of natural numbers. The set of types is defined by induction on complexity: (i) every basic type is a type; (ii) if α and β are types then (α → β) is a type.
The following properties are presupposed: types (α → β) are distinct from the basic types, and (α → β) = (α′ → β′) implies α = α′ and β = β′. The type (α → β) is read 'the functions from α to β' and represents a set of functions from the set represented by α to the set represented by β. The exact set of denoted functions depends on the context in which typed combinators or λ-terms are used. Once it is determined, every type represents a set of individuals or functions. For simplicity, the terms are identified with the λ-abstractions.
As usual, for every type α an infinite set of variables v : α exists, and α ≠ β implies v : α ≠ v : β. In accordance with the previous consideration, let λx.F be the primitive term-generating operation.
Thus the typed λ-terms are defined as follows: (i) all variables v : α and constants c : δ are typed λ-terms with the types α, δ respectively; (ii) for objects G : (α → β) and H : α their application (GH) has the type β; (iii) for a variable v : α and an object G : β the abstraction (λv.G) is a term of type (α → β).
This definition implies that every typed λ-term has a unique type. Below, (ii) is referred to as the F-rule, or (F), and (iii) as the λ-rule, or (λ).
The next step is based on the (F)-rule: x(yz) : γ2 is separated by (F) into x : δ1 → γ2 and yz : δ1. In turn, by (F) the object yz : δ1 is separated into y : ∆ → δ1 and z : ∆. No compounds remain.
This type-generating procedure can be implemented with more or less difficulty. Note that type checking of applicative forms needs no preliminary transformation of the initial object.
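The derivation for x(yz) can be mechanized directly. In the Python sketch below the encoding of arrow types as tuples and the transliterated type names d1, g2, D are assumptions of this note; the (F)-rule is applied bottom-up:

```python
def app_type(fun_ty, arg_ty):
    """(F)-rule: from G : (a -> b) and H : a conclude (G H) : b."""
    if isinstance(fun_ty, tuple) and fun_ty[0] == '->' and fun_ty[1] == arg_ty:
        return fun_ty[2]
    raise TypeError(f"cannot apply {fun_ty} to {arg_ty}")

def type_of(term, env):
    if isinstance(term, str):            # a variable: look its type up
        return env[term]
    f, a = term                          # an application (F A)
    return app_type(type_of(f, env), type_of(a, env))

# x : d1 -> g2,  y : D -> d1,  z : D, exactly as in the derivation above:
env = {'x': ('->', 'd1', 'g2'), 'y': ('->', 'D', 'd1'), 'z': 'D'}
assert type_of(('x', ('y', 'z')), env) == 'g2'
```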

Computations in a Category
Different ways to construe computation in a category can be observed. Due to [CCM85], a categorical abstract machine became a tool to compile an initial program into machine instructions. An advanced study [Wol96] is based on an object-oriented solution involving the functor-as-object, from which flexible data models are to be extracted.

Combinatory Representation
Traditionally, a set of combinators is fixed to represent the machine instructions by objects. Let {Fst, Snd, ε, Λ, •, Id} be the set of combinators, and let [•, •] and <•, •> be abbreviations with the meaning [x, y] = λr.rxy (pairing combinator) and <f, g> = λt.[f(t), g(t)] = λt.λr.r(f(t))(g(t)) (coupling combinator). This means that pairs are equipped with the first projection Fst and the second projection Snd, where Fst : A × B → A and Snd : A × B → B. For arbitrary mappings h : A × B → C and k : A → (B → C) the following equations are valid:

  ε • <Λ(h) • Fst, Snd> = h,   Λ(ε • <k • Fst, Snd>) = k.

λ-Representation
λ-abstraction leads to direct substitutions for obtaining the meaning of an expression. The elegance of the computations increases further when the de Bruijn encoding is used. The de Bruijn code indicates the depth of binding of the variables within λ-expressions, i.e. a bound variable is replaced by the number of 'λ' symbols between this variable and its binding 'λ', excluding the latter from the count. For instance, the object λx.λy.x is encoded as λλ1: one 'λ' stands between the occurrence of x and its binder.
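A conversion routine shows the counting rule at work; the 'lam'/'app'/'var' tuple tags are an assumption of this sketch:

```python
def to_debruijn(t, binders=()):
    """Replace each bound variable by the number of λs back to its binder."""
    tag = t[0]
    if tag == 'var':
        # position in the binder stack = intervening λs, binder excluded
        return ('idx', binders.index(t[1]))
    if tag == 'app':
        return ('app', to_debruijn(t[1], binders), to_debruijn(t[2], binders))
    return ('lam', to_debruijn(t[2], (t[1],) + binders))   # push the new binder

# λx.λy.x  ->  λλ1   and   λx.x  ->  λ0
assert to_debruijn(('lam', 'x', ('lam', 'y', ('var', 'x')))) == \
    ('lam', ('lam', ('idx', 1)))
assert to_debruijn(('lam', 'x', ('var', 'x'))) == ('lam', ('idx', 0))
```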

Evaluation and Environment
The main question is to determine the meaning of expressions, and this depends on the values associated with identifiers, i.e. on the environment. The usual set of semantic equations reflects the convention that applying a function to its argument is represented by the order of writing: the symbol of the argument follows the symbol of the function.

Semantic Equations
The semantic equations (cf. [CCM85], [Wol96]) illustrate the idea of context-dependent evaluation. Thus ρ below is the desired context, and this context sensitivity controls the flow of computations.
When applicative computations are used, the resulting set of equations becomes extremely transparent:

  [[x]]ρ = ρ(x),
  [[c]]ρ = c,
  [[(M N)]]ρ = ([[M]]ρ)([[N]]ρ),
  [[(λx.M)]]ρ d = [[M]]([d/x]ρ),

where ρ is an environment, ρ(x) is the value of x under the environment ρ, c is a constant denoting a value which is also constant (according to the usual mathematical practice), and [d/x]ρ is the environment in which all the free occurrences of x are replaced by d.
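The four equations translate directly into an environment-passing evaluator; dict-based environments and Python closures are assumptions of this sketch:

```python
def ev(t, rho):
    tag = t[0]
    if tag == 'var':
        return rho[t[1]]                         # [[x]]rho = rho(x)
    if tag == 'const':
        return t[1]                              # [[c]]rho = c
    if tag == 'app':                             # [[(M N)]]rho = ([[M]]rho)([[N]]rho)
        return ev(t[1], rho)(ev(t[2], rho))
    x, body = t[1], t[2]                         # [[(λx.M)]]rho d = [[M]]([d/x]rho)
    return lambda d: ev(body, {**rho, x: d})

# ((λx.x) 42) evaluates to 42 in the empty environment:
assert ev(('app', ('lam', 'x', ('var', 'x')), ('const', 42)), {}) == 42
```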
In general, computation with the de Bruijn notation is analogous to that with the usual combinators when the set of rules is slightly modified. The set of rules and agreements has to accommodate environments of the form (w0, w1, . . .), where the value wi is associated with the de Bruijn code i. This is a strict restriction: the environments in which an expression is evaluated are mathematical structures, not arrays. This choice is due to efficiency conditions. First of all, the restriction leads to a simplified description of computation.

The values are not only of interest in themselves; they are also interesting for the computations they support. From the combinatory view, e.g., the meaning of (M N) is the combination of M and N. Thus three combinators, $ of arity 2, Λ of arity 1, and ′ of arity 1, along with the infinite set of combinators n! accessing the value wn in the environment, are to be established. The resulting equations generate a translation of the semantic equations into purely syntactic ones. These equations are similar to the SK-rules: the first three of them express the property of suppressing an argument (like the properties of K); the fourth rule is the non-curried version of the rule for S; the fifth rule is exactly currying, i.e. the transformation of a function of two arguments into a function of the first argument which in turn is a function of the second argument.

An additional couple combinator brings more harmony into the syntactic equations: [M, N] = <M, N> (this will be shown). This combinator is equipped with the selectors, or projections, Fst and Snd. Also consider the composition '•' and the additional command ε. The objects $[•, •] and n! are abbreviations for 'ε • <•, •>' and 'Snd • Fst^n' respectively, where Fst^(n+1) = Fst • Fst^n. Now everything is prepared for writing down the syntactic equations.

Syntactical Equations
Merging the previously given sets of rules results in the following:

  (dpair)  <x, y>z = [xz, yz],
  (ass)    (x • y)z = x(yz),

where (dpair) connects the pairing and coupling operations and (ass) relates composition and application. An easy consequence, $[x, y]z = ε[xz, yz], may be proved. Hence the manipulations with the combinators Fst, Snd, and ε become homogeneous. Besides that, the equation (′x)y = x is easy to verify, giving rise to the equation (′x)yz = xz. Now everything is prepared to set up an evaluation in a cartesian closed category.
Example 3.7 (computing by closure). To compute by closure means to evaluate F by applying its code F′ to the environment. Initially, for a closed F the environment is empty, so ρ = (). The strategy for evaluating F′ is to select the leftmost innermost expression. The resulting chain of equations yields the same result as in the case of the direct computations given above.

Avoiding Encapsulation: Supercombinators
Now the process of compiling objects is under discussion. The known approaches from applicative computations give different (and distinct) strategies for transforming one object into another. The discussion given above builds useful intuition for marking the specific features. The first approach compiles the source object directly into the target object using a prespecified set of combinators. Non-optimized combinatory code involves the set {I, K, S} as the basis to compile into. Note that the precise definitions are known before the compiling is done. Another idea is to generate the combinators during the compiling; in fact, the target set of resulting combinators becomes known only after the compiling. The last strategy is based on specific objects called supercombinators. A supercombinator $S of arity n is a λ-expression (λ-abstraction) λx1.λx2. . . ..λxn.E where E is not an abstraction, all the leading abstraction symbols 'λ' bind exclusively x1, x2, . . ., xn, and the following restrictions hold: (1) $S does not include any free variable; (2) every abstraction in E is a supercombinator; (3) n ≥ 0, so the symbols 'λ' are not necessary.
It is time to compare the different kinds of objects available and their main computational properties. Replacing the free occurrences of the formal parameters in the body of a supercombinator by the actual arguments is called comprehension and is enforced by β-conversion. Intuitively, a combinator is a λ-abstraction that does not contain any free occurrences of variables. Hence some combinators are supercombinators; similarly, some λ-expressions are combinators.
Example 3.8 The objects 3, 4, [3, 4]+, λx.x are supercombinators. The objects λx.y (free variable y) and λy.+ y x (free variable x) are not supercombinators. The object λf.f(λx.f x 2) is a combinator (all the variables are bound) but not a supercombinator (the inner abstraction contains the free variable f, violating the definition). The combinators I, K, S are supercombinators. Hence the IKS-compiling above is, in particular, compiling into supercombinators.
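The restrictions (1)-(3) can be checked mechanically. The following sketch (the tuple term encoding is the same assumption as in the earlier sketches) reproduces the verdicts of Example 3.8 for λx.x, λx.y, and a simplified form λf.f(λx.f x) of the combinator-but-not-supercombinator:

```python
def free_vars(t, bound=frozenset()):
    tag = t[0]
    if tag == 'const':
        return set()
    if tag == 'var':
        return set() if t[1] in bound else {t[1]}
    if tag == 'app':
        return free_vars(t[1], bound) | free_vars(t[2], bound)
    return free_vars(t[2], bound | {t[1]})       # 'lam' binds its variable

def is_supercombinator(t):
    body = t
    while body[0] == 'lam':                      # strip the leading λx1...λxn
        body = body[2]
    return not free_vars(t) and _inner_ok(body)  # (1): no free variables

def _inner_ok(t):
    tag = t[0]
    if tag in ('var', 'const'):
        return True
    if tag == 'app':
        return _inner_ok(t[1]) and _inner_ok(t[2])
    return is_supercombinator(t)                 # (2): inner abstractions qualify

assert is_supercombinator(('lam', 'x', ('var', 'x')))        # λx.x
assert not is_supercombinator(('lam', 'x', ('var', 'y')))    # λx.y, y free
# λf.f(λx.f x): closed, hence a combinator, but not a supercombinator:
term = ('lam', 'f', ('app', ('var', 'f'),
                     ('lam', 'x', ('app', ('var', 'f'), ('var', 'x')))))
assert not free_vars(term) and not is_supercombinator(term)
```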

Compiling Objects with Supercombinators
Actual programs contain a number of abstractions. Hence a program has to be transformed so that ultimately it includes only supercombinators. Notations for supercombinators start with the symbol '$', e.g., $X = λx.x. To stress the specific features of supercombinators, this definition is rewritten as $X x = x.
The strategy selected is to transform the compiled abstraction into: (i) a set of supercombinator definitions; (ii) an expression to be evaluated. Compiling into supercombinators is straightforward.
(1) Select the innermost abstraction, i.e. an abstraction that does not contain other abstractions: (λx.x). It does not contain any free variable, so it is a supercombinator, named $X.
(2) Replace the selected abstraction by the name of the new supercombinator. (3) Repeat the selection until no abstraction remains. (4) The compiled code is the set of supercombinator definitions together with the transformed expression.
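Since the concrete program compiled in the original steps is not recoverable here, a hypothetical stand-in shows the same strategy on the source object λx.(λy.y) x; the names $Y and $X are invented for this illustration:

```python
# Step (1): the innermost abstraction λy.y has no free variables,
#           so it is a supercombinator, named $Y:   $Y y = y
def Y(y):
    return y

# Step (2): replace it in the body, giving λx.$Y x.
# Step (3): the outer abstraction now contains no inner λ and no free
#           variables, so it too is a supercombinator:   $X x = $Y x
def X(x):
    return Y(x)

# Step (4): the compiled code is the two definitions above plus the
#           expression $X to evaluate; it denotes the same function
#           as the source object:
source = lambda x: (lambda y: y)(x)
assert X(7) == source(7) == 7
```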

Conclusions
A large part of the difficulties with object notions can be traced to the absence of a conceptual framework. A rough tracing of common object techniques reveals distinct branches: the logical, the categorical, and, mostly, the computational ones. The essential ideas of the object calculi are based on a few initial concepts. The most important is a function which is determined not by its domain and range but by the process. Such functions are combinators: pure and elementary objects which are combined by the metaoperator of application.
Another way to understand functions as objects (or objects via functions) is to use functional abstraction, a metaoperator which is added to application.
Computational systems based on application and abstraction are referred to as applicative computational systems (ACS). In contrast to operator, or imperative, computational systems, ACS have some important advantages, among them a clear mathematical foundation. The usefulness of their mathematical properties extends beyond a theory of computations.
A preliminary study of applying ACS to the domain of objects as they are in database theory shows the following: (1) Modeling the objects and the corresponding computations for combinatory logic and λ-calculus involves a domain of objects and a set of (meta-)operations that are to be represented by elements of the domain.
(2) The class of possible operations depends on the particular constructions: the definable operations should be representable.
(3) To some extent, the study of different object notions can be pursued independently of particular data models.