
      Polynomial solvability of NP-complete problems


            Abstract

A polynomial algorithm for solving the “Hamiltonian circuit” problem is presented in the paper. The computational complexity of the algorithm is O(n^8 log_2^2 n), where n is the cardinality of the vertex set of the graph under consideration. Thus, the polynomial solvability of NP-complete problems is proved.


            1. INTRODUCTION

The relationship between the class P of problems polynomially solvable by a deterministic computer and the class NP of problems polynomially solvable by a non-deterministic computer is considered one of the basic unsolved millennium problems.

The foundation of the theory of NP-complete problems was laid by S. A. Cook [1]. He introduced the concepts of the class NP, polynomial reducibility, and NP-complete problems, to which every problem of the class NP is polynomially reducible. At present, NP-completeness has been proved for many problems of the class NP. In particular, the “Hamiltonian cycle” problem is NP-complete [2, 3].

Nowadays a topical question is whether NP-complete problems are intractable. A proof that some NP-complete problem can be solved by a polynomial algorithm is a proof of the equivalence of P and NP.

In this paper, a polynomial algorithm for the NP-complete “Hamiltonian circuit” problem is presented, which means that the polynomial solvability of NP-complete problems is proved.

            2. “HAMILTONIAN CIRCUIT” PROBLEM

The “Hamiltonian circuit” problem is stated as follows [3, pp. 35–37]:

• INSTANCE: A simple graph G = (V, E).

• QUESTION: Does the graph G contain a Hamiltonian circuit, i.e. an ordering ⟨v_0, v_1, …, v_{n−1}⟩ of the vertices of G, where n = |V(G)|, such that {v_i, v_{i+1}} ∈ E(G) for all i, 0 ≤ i ≤ n − 2, and {v_{n−1}, v_0} ∈ E(G)?
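For illustration only, the decision problem above can be checked by exhaustive search over vertex orderings; the exponential cost of such enumeration is exactly what the rest of the paper aims to avoid. This sketch is not the algorithm proposed in the paper, and the function name and graph encoding are ad hoc assumptions.

```python
from itertools import permutations

def has_hamiltonian_circuit(V, E):
    """Brute-force check of the QUESTION above.

    V is a list of vertices, E a set of frozensets {u, v}.
    Exponential time: up to (n-1)! orderings are examined.
    """
    n = len(V)
    if n < 3:
        return False                     # a circuit needs at least three vertices
    v0, rest = V[0], V[1:]
    for perm in permutations(rest):      # fix v0 to factor out rotations
        cycle = (v0,) + perm
        if all(frozenset((cycle[i], cycle[(i + 1) % n])) in E
               for i in range(n)):
            return True
    return False

# Example: the 4-cycle a-b-c-d-a has a Hamiltonian circuit.
C4 = (['a', 'b', 'c', 'd'],
      {frozenset(p) for p in [('a', 'b'), ('b', 'c'), ('c', 'd'), ('d', 'a')]})
print(has_hamiltonian_circuit(*C4))      # True
```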

There have been unsuccessful attempts to solve the traveling salesman problem by a symmetric linear program. Mihalis Yannakakis closed the discussion with his paper [4], proving that expressing the traveling salesman problem by a symmetric linear program requires exponential size.

In this paper we present a polynomial algorithm for the “Hamiltonian circuit” problem, which constitutes a proof of the polynomial solvability of NP-complete problems.

            3. LOCATION OF VERTICES OF CYCLE C ON GRAPH G

Let C be a simple cycle with vertex set V(C) = {c_0, c_1, …, c_{n−1}} and edge set E(C) = {{c_i, c_{(i+1)%n}}: i = 0, 1, …, n − 1}, where a%b denotes the remainder of the integer a modulo b.

The problem of determining the existence of a Hamiltonian circuit in the graph G is equivalent to the problem of recognizing the existence of a bijection

\varphi: V(C) \to V(G)\ \text{such that}\ \{\varphi(c_i), \varphi(c_{(i+1)\%n})\} \in E(G),\quad i = 0, 1, \dots, n-1.

            4. STATEMENT IN THE FORM OF BOOLEAN PROGRAMMING PROBLEM

            Let us consider a Boolean optimization problem:

(4.1) F(x) = \sum_{i=0}^{n-1} \sum_{v,u:\{v,u\}\in\bar{E}(G)} x_{vi}\, x_{u,(i+1)\%n} \to \min_{x\in D},
where \bar{E}(G) = V^2 \setminus E(G) and D = D_1 \cap D_2 \cap \{0,1\}^{n^2},
(4.2) D_1 = \Bigl\{x : \sum_{v\in V(G)} x_{vi} = 1,\ i = 0, 1, 2, \dots, n-1\Bigr\},
(4.3) D_2 = \Bigl\{x : \sum_{i=0}^{n-1} x_{vi} = 1,\ v \in V(G)\Bigr\}.
Let φ: V(C) → V(G) be a map. Let us define x as:
(4.4) \bigl(i = 0, 1, 2, \dots, n-1,\ v \in V(G)\bigr)\ \bigl(x_{vi} = \chi_{\{v\}}(\varphi(c_i))\bigr),
where χ_X(·) is the characteristic function of the set X. Constraints (4.2) represent the requirement that every vertex c_i ∈ V(C) receives an assignment. Constraints (4.3) represent the requirement that each vertex v ∈ V(G) is assigned to some c_i ∈ V(C). The value of the loss function F(x) is equal to the number of edges belonging to \bar{E}(G) in the image φ(C) of the placed cycle C. If the graph G has a Hamiltonian circuit and x* is the optimal solution of (4.1), then F(x*) = 0; otherwise F(x*) ≥ 1.
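To make formulas (4.1)–(4.4) concrete, the sketch below evaluates F at the 0/1 point x induced by a placement φ, given as a list phi with phi[i] the vertex assigned to c_i; the graph encoding is an assumption of this illustration, not part of the paper.

```python
def F_of_placement(phi, E):
    """Value of objective (4.1) at the point x defined by (4.4):
    x_{v,i} = 1 iff phi[i] == v.  Counts the cyclically consecutive positions
    whose pair of assigned vertices is NOT an edge of G, i.e. lies in E-bar(G).
    """
    n = len(phi)
    return sum(1 for i in range(n)
               if frozenset((phi[i], phi[(i + 1) % n])) not in E)

# A bijective phi satisfies constraints (4.2) and (4.3) automatically, and
# F_of_placement(phi, E) == 0 exactly when phi traces a Hamiltonian circuit.
```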

            5. RELAXATION OF THE INTEGRALITY CONDITIONS

The algorithmic analysis of a combinatorial problem is often significantly simplified by considering the corresponding continuous statement. The quadratic assignment problem

(5.1) F(x) \to \min_{x\in\tilde{D}},
which differs from problem (4.1) by replacing the discrete set D with the continuous set \tilde{D} = D_1 \cap D_2 \cap [0,1]^{n^2}, is called the relaxed problem. The feasible region \tilde{D} of the relaxed problem is defined by constraints (4.2), (4.3), and x ≥ 0, i.e. \tilde{D} is the perfect matching polytope of the complete bipartite graph K_{n,n}. Each vertex of \tilde{D} corresponds to a Hamiltonian circuit in the complete graph K_n, and the value of the objective function F(·) at any vertex of the polytope \tilde{D} is equal to the number of edges belonging to \bar{E}(G) for the corresponding Hamiltonian circuit.

Thus, if G has a Hamiltonian circuit then, by the non-negativity of F(x) on \tilde{D}, the optimal value of the relaxed problem is

\tilde{F} = \min\{F(x) : x \in \tilde{D}\} = 0.
But if G does not have a Hamiltonian circuit then
(5.2) \min\{F(x) : x \in \operatorname{vertex}(\tilde{D})\} \ge 1,
where \operatorname{vertex}(\tilde{D}) is the vertex set of the polytope \tilde{D}.

Since any point of the polytope is a convex combination of its vertices, for non-Hamiltonian graphs we have

\tilde{F} = \min\{F(x) : x \in \tilde{D}\} = \min\Bigl\{F\Bigl(\sum_{x\in\operatorname{vertex}(\tilde{D})} x\,\alpha(x)\Bigr) : \sum_{x\in\operatorname{vertex}(\tilde{D})} \alpha(x) = 1,\ \alpha \ge 0\Bigr\} \ge \min\Bigl\{\sum_{x\in\operatorname{vertex}(\tilde{D})} F(x)\,\alpha^2(x) : \sum_{x\in\operatorname{vertex}(\tilde{D})} \alpha(x) = 1,\ \alpha \ge 0\Bigr\} \ge \min\Bigl\{\sum_{x\in\operatorname{vertex}(\tilde{D})} \alpha^2(x) : \sum_{x\in\operatorname{vertex}(\tilde{D})} \alpha(x) = 1,\ \alpha \ge 0\Bigr\} \ge \frac{1}{n!}.
The first inequality of this chain holds because F(·) is a non-zero quadratic form whose matrix has non-negative entries. The second inequality is a consequence of inequality (5.2). Finally, the third inequality follows from
\arg\min\Bigl\{\sum_{x\in\operatorname{vertex}(\tilde{D})} \alpha^2(x) : \sum_{x\in\operatorname{vertex}(\tilde{D})} \alpha(x) = 1,\ \alpha \ge 0\Bigr\} = \Bigl\{\alpha(x) = \frac{1}{|\operatorname{vertex}(\tilde{D})|} = \frac{1}{n!} : x \in \operatorname{vertex}(\tilde{D})\Bigr\}.
The foregoing proves the following assertion.

Proposition 5.1. Let \tilde{F} be the optimal value of relaxed problem (5.1).

• If \tilde{F} = 0, then G = (V, E) has a Hamiltonian circuit.

• If \tilde{F} \ge 1/n!, then G = (V, E) does not have a Hamiltonian circuit.

Corollary 5.2. It is sufficient to calculate the optimal value \tilde{F} with an absolute error of no more than 1/(3·n!). No more than (2 + n log_2 n) bits are required for the representation of \tilde{F}.
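A quick numeric sanity check of Corollary 5.2 (a sketch; the bound 2 + n log_2 n is the paper's, the script is an illustration): storing \tilde{F} to within 1/(3·n!) takes about log_2(3·n!) fractional bits.

```python
import math

# Compare the fractional bits needed for an absolute error of 1/(3*n!)
# with the bound 2 + n*log2(n) from Corollary 5.2.
for n in (5, 10, 20, 40):
    needed = math.log2(3) + math.log2(math.factorial(n))
    bound = 2 + n * math.log2(n)
    print(f"n={n}: about {needed:.1f} bits needed, bound {bound:.1f}")
```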

Relaxed problem (5.1) remains intractable because it has many local minima.

            6. DUAL PROBLEM

The Lagrangian function of problem (4.1) has the form:

(6.1) L(x, \lambda, \nu) = \sum_{i=0}^{n-1} \sum_{v,u:\{v,u\}\in\bar{E}(G)} x_{vi}\, x_{u,(i+1)\%n} + \sum_{v\in V} \lambda_v \Bigl(1 - \sum_{i=0}^{n-1} x_{vi}\Bigr) + \sum_{i=0}^{n-1} \nu_i \Bigl(1 - \sum_{v\in V} x_{vi}\Bigr).
Let us make use of the Lagrangian relaxation of constraints (4.3) to find the optimal value of problem (4.1). In this case the dual function is:
(6.2) \Lambda(\lambda) = \min_{x\in\{0,1\}^{n^2}\cap D_1} L(x, \lambda, \nu) = \min_{x\in[0,1]^{n^2}\cap D_1} \sum_{i=0}^{n-1} \Bigl[\sum_{v\in V} \lambda_v x_{vi} + \sum_{v,u:\{v,u\}\in\bar{E}(G)} x_{vi}\, x_{u,(i+1)\%n}\Bigr] + \sum_{v\in V} \lambda_v.
The function Λ(λ) is concave and bounded above by the optimal value of problem (4.1), which exists and is finite [5]. The problem of finding the maximal value of the dual function Λ(λ) can be solved by known methods of convex non-smooth optimization [6]. Let us consider the possibility of applying the ellipsoid method [7] to the problem
(6.3) \Lambda(\lambda) \to \max_{\lambda \ge 0}.

            7. APPLICATION OF THE ELLIPSOID METHOD

The scheme of the ellipsoid method from [7] is shown in Figure 1.

            Figure 1.
            The ellipsoid method scheme.

This method allows finding an ε-solution of the problem

(7.1) \Lambda(\lambda) \to \max_{\lambda\in S}

satisfying the conditions

(7.2) (\exists\, \lambda^0, \hat{\lambda} \in S,\ R, r \in \mathbb{R})\ \bigl(B_2(\hat{\lambda}, r) \subseteq S \subseteq B_2(\lambda^0, R)\bigr),\quad S \subset \mathbb{R}^n,
(7.3) (\forall\, \lambda^1, \lambda^2 \in B_2(\lambda^0, R))\ \bigl(|\Lambda(\lambda^1) - \Lambda(\lambda^2)| \le L\,\|\lambda^1 - \lambda^2\|_2\bigr),
where B_2(λ, ρ) is the Euclidean ball with center λ and radius ρ, by performing
(7.4) N = 2(n+1)^2 \ln\frac{L R^2}{r\,\varepsilon}
iterations.

For the current Lagrange multipliers λ^k, the k-th iteration of the ellipsoid algorithm comprises computing the value Λ(λ^k), the cut vector g_k (a subgradient or a separating vector), the Lagrange multipliers λ^{k+1}, and the matrix H_{k+1}.
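A minimal sketch of one such iteration, using the standard central-cut ellipsoid update formulas (as in [7]); variable names are ad hoc, and floating point is used here for brevity even though the paper requires fixed-point arithmetic with a p-bit fractional part.

```python
import numpy as np

def ellipsoid_step(lam, H, g):
    """One central-cut ellipsoid update.

    lam : current center (vector of Lagrange multipliers, length m)
    H   : matrix of the current ellipsoid (m x m, positive definite)
    g   : cut vector -- the subgradient of Lambda at lam if lam lies in S,
          otherwise a vector separating lam from S; the half-space
          {x : g.(x - lam) >= 0} is kept.
    """
    m = lam.size
    Hg = H @ g
    gtilde = Hg / np.sqrt(g @ Hg)          # H g / sqrt(g^T H g)
    lam_next = lam + gtilde / (m + 1.0)
    H_next = (m * m / (m * m - 1.0)) * (H - (2.0 / (m + 1.0)) * np.outer(gtilde, gtilde))
    return lam_next, H_next

# Typical initialization for problem (7.1): center lambda^0 and H_0 = R^2 * I.
```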

So, to construct an algorithm for calculating the maximum value of Λ(λ) by the ellipsoid method, it is sufficient to determine

1. the region S ∋ arg max_{λ≥0} Λ(λ) and the ball B_2(λ^0, R) ⊇ S;

2. the Lipschitz constant L: |Λ(λ^1) − Λ(λ^2)| ≤ L·‖λ^1 − λ^2‖_2;

3. the algorithm for calculating the value Λ(λ) for all λ ∈ B_2(λ^0, R);

4. the algorithm for calculating the subgradient ∇Λ(λ);

5. the algorithm for calculating the vector g of the hyperplane separating the set S and a point μ ∉ S.

            8. LOCALIZATION OF THE OPTIMAL LAGRANGE MULTIPLIERS

The necessary conditions for an extremum of L(x, λ, ν) are

(8.1) (\forall v \in V)\ \Bigl(\frac{\partial L}{\partial x_{v0}} = \sum_{u:\{v,u\}\in\bar{E}(G)} (x_{u1} + x_{u,n-1}) - \lambda_v - \nu_0 = 0\Bigr),
(8.2) (\forall v \in V,\ i = 1, 2, \dots, n-2)\ \Bigl(\frac{\partial L}{\partial x_{vi}} = \sum_{u:\{v,u\}\in\bar{E}(G)} (x_{u,i-1} + x_{u,i+1}) - \lambda_v - \nu_i = 0\Bigr),
(8.3) (\forall v \in V)\ \Bigl(\frac{\partial L}{\partial x_{v,n-1}} = \sum_{u:\{v,u\}\in\bar{E}(G)} (x_{u,n-2} + x_{u0}) - \lambda_v - \nu_{n-1} = 0\Bigr),
(8.4) (\forall i = 0, 1, 2, \dots, n-1)\ \Bigl(\sum_{v\in V} x_{vi} = 1\Bigr),
(8.5) (\forall v \in V)\ \Bigl(\sum_{i=0}^{n-1} x_{vi} = 1\Bigr),
(8.6) (\forall i = 0, 1, 2, \dots, n-1;\ v \in V)\ (x_{vi} \ge 0).

            Proposition 8.1. The feasible solution set of system (8.1)–(8.6) contains a solution (x, λ, ν): λ ≥ 0, ν ≥ 0.

Proof. It follows from (8.1)–(8.6) that

(8.7) (\forall v \in V,\ i = 0, 1, \dots, n-1)\ (0 \le \lambda_v + \nu_i) \;\Rightarrow\; (\forall i = 0, 1, \dots, n-1)\ \Bigl(m = \min_{v\in V}\lambda_v \ge -\nu_i\Bigr).
But it is easy to see that if λ = {λ_v}_{v∈V}, ν = {ν_i}_{i=0}^{n−1} satisfy (8.1)–(8.3), then \tilde{\lambda} = \{\lambda_v - m\}_{v\in V} and \tilde{\nu} = \{\nu_i + m\}_{i=0}^{n-1} also satisfy (8.1)–(8.3), and moreover \tilde{\lambda} \ge 0, \tilde{\nu} \ge 0.

Let us introduce the notation

M = 4n^2\Bigl(1 - \frac{|E(G)|}{n^2}\Bigr), \qquad S = \Bigl\{\{\lambda_v\}_{v\in V} : 0 \le \sum_{v\in V} \lambda_v \le M,\ \lambda \ge 0\Bigr\}
to reduce lengthy statements and improve their readability.

            Proposition 8.2.

(8.8) \lambda^* = \arg\max_{\lambda \ge 0} \Lambda(\lambda) \in S.

Proof. The summation of equations (8.1)–(8.3) over i = 0, 1, …, n − 1 results in

(8.9) (\forall v \in V)\ \Bigl(\sum_{u:\{v,u\}\in\bar{E}(G)} \Bigl(2\sum_{i=0}^{n-1} x_{ui}\Bigr) - n\lambda_v - \sum_{i=0}^{n-1}\nu_i = 2n - 2\rho(v) - n\lambda_v - \sum_{i=0}^{n-1}\nu_i = 0\Bigr),
where ρ(v) is the number of edges in the graph G that are incident to the vertex v ∈ V.

The summation of equations (8.9) over all v ∈ V results in

(8.10) 2n^2 - 4|E(G)| - n\sum_{v\in V}\lambda_v - n\sum_{i=0}^{n-1}\nu_i = 0.

It follows from Proposition 8.1 that ν ≥ 0 and λ ≥ 0 exist; therefore we have

0 \le \sum_{v\in V}\lambda_v \le 4n^2\Bigl(1 - \frac{|E(G)|}{n^2}\Bigr) = M.

Proposition 8.3. Let

\lambda^0 = \Bigl\{\lambda^0_v = \frac{M}{n}\Bigr\}_{v\in V}, \quad R = \frac{M}{\sqrt{n}}, \quad \hat{\lambda} = \Bigl\{\hat{\lambda}_v = \frac{M}{n+\sqrt{n}}\Bigr\}_{v\in V}, \quad r = \frac{M}{n+\sqrt{n}},
then B_2(\hat{\lambda}, r) \subseteq S \subseteq B_2(\lambda^0, R).

Proof. It is easy to check that λ^0 is the projection of the origin onto the hyperplane defined by the equation \sum_{v\in V}\lambda_v = M, that R = ‖λ^0‖_2, and that ‖λ^0 − λ‖_2 ≤ R for any vertex λ of the simplex S. Therefore S ⊆ B_2(λ^0, R).

By symmetry, the ball of maximal radius r inscribed in the simplex S has center \hat{\lambda} = \{\hat{\lambda}_v = r\}_{v\in V}, where r = ‖λ^0 − \hat{\lambda}‖_2. Therefore r = M/(n + \sqrt{n}).
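For a quick numeric check of Propositions 8.2–8.3 (a sketch; the closed forms are those reconstructed above and the function name is an assumption of this illustration):

```python
import math

def localization_parameters(n, num_edges):
    """M, lambda^0_v, R and r from Propositions 8.2-8.3 for |V| = n,
    |E(G)| = num_edges."""
    M = 4 * n * n * (1 - num_edges / n ** 2)   # bound on the sum of lambda_v
    lam0_v = M / n                             # every coordinate of lambda^0
    R = M / math.sqrt(n)                       # radius of the ball containing S
    r = M / (n + math.sqrt(n))                 # radius of the ball inscribed in S
    return M, lam0_v, R, r
```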

9. LIPSCHITZ CONSTANT

Let λ^1, λ^2 ≥ 0. Let us assume that Λ(λ^1) ≤ Λ(λ^2) and set

\tilde{x} = \arg\min_{x\in[0,1]^{n^2}\cap D_1} \sum_{i=0}^{n-1}\Bigl[\sum_{v\in V}\lambda^1_v x_{vi} + \sum_{v,u:\{v,u\}\in\bar{E}(G)} x_{vi}\, x_{u,(i+1)\%n}\Bigr].
Evidently, this assumption does not reduce generality. Therefore,
(9.1) |\Lambda(\lambda^1) - \Lambda(\lambda^2)| = \Bigl|\min_{x\in[0,1]^{n^2}\cap D_1}\sum_{i=0}^{n-1}\Bigl[\sum_{v\in V}\lambda^1_v x_{vi} + \sum_{v,u:\{v,u\}\in\bar{E}(G)} x_{vi}\, x_{u,(i+1)\%n}\Bigr] + \sum_{v\in V}\lambda^1_v - \min_{x\in[0,1]^{n^2}\cap D_1}\sum_{i=0}^{n-1}\Bigl[\sum_{v\in V}\lambda^2_v x_{vi} + \sum_{v,u:\{v,u\}\in\bar{E}(G)} x_{vi}\, x_{u,(i+1)\%n}\Bigr] - \sum_{v\in V}\lambda^2_v\Bigr| \le \Bigl|\sum_{v\in V}\lambda^1_v\sum_{i=0}^{n-1}\tilde{x}_{vi} - \sum_{v\in V}\lambda^2_v\sum_{i=0}^{n-1}\tilde{x}_{vi}\Bigr| + \Bigl|\sum_{v\in V}\lambda^1_v - \sum_{v\in V}\lambda^2_v\Bigr| \le (n+1)\sum_{v\in V}|\lambda^1_v - \lambda^2_v| \le (n+1)\sqrt{n}\,\|\lambda^1 - \lambda^2\|_2.
Thus, the following is proved.

Proposition 9.1. Let L = (n+1)\sqrt{n}. Then

(9.2) (\forall\, \lambda^1, \lambda^2 \ge 0)\ \bigl(|\Lambda(\lambda^1) - \Lambda(\lambda^2)| \le L\,\|\lambda^1 - \lambda^2\|_2\bigr).

            10. PROPERTIES OF THE OPTIMAL VALUE OF THE DUAL PROBLEM

            Let us estimate possible values of Λ(λ) to find a reasonable calculation error.

            Proposition 10.1.

\max_{\lambda\in S}\Lambda(\lambda) \ge \min_{x\in\tilde{D}} F(x).

            Proof. Let

(10.1) x(\lambda) = \arg\min_{x\in\{0,1\}^{n^2}\cap D_1}\sum_{i=0}^{n-1}\Bigl[\sum_{v\in V}\lambda_v x_{vi} + \sum_{v,u:\{v,u\}\in\bar{E}(G)} x_{vi}\, x_{u,(i+1)\%n}\Bigr].

It follows from the concavity of Λ(λ), the boundedness of S, and the necessary optimality conditions that there exist λ^o ∈ S and x(λ^o) satisfying the condition

(10.2) \nabla\Lambda(\lambda^o) = \Bigl\{\nabla\Lambda(\lambda^o)_v = 1 - \sum_{i=0}^{n-1} x(\lambda^o)_{vi}\Bigr\}_{v\in V} = 0.
Since x(λ^o) is a convex combination of points of the set \{0,1\}^{n^2}\cap D_1, then taking into account (10.2) we have x(λ^o) ∈ \tilde{D}. Therefore
\min_{x\in\tilde{D}} F(x) \le F(x(\lambda^o)) = \Lambda(\lambda^o).

A corollary of Propositions 5.1 and 10.1 is:

Proposition 10.2. Let Λ* = max_{λ∈S} Λ(λ) be the optimal value of dual problem (7.1).

If Λ* = 0, then G = (V, E) has a Hamiltonian circuit.

If Λ* ≥ 1/n!, then G = (V, E) does not have a Hamiltonian circuit.

It follows from Proposition 5.1 that it is sufficient to find an approximate vector of Lagrange multipliers λ* such that Λ* − Λ(λ*) ≤ ε = 1/n!. Indeed, this inequality implies Λ* ≤ Λ(λ*) + ε, and since Λ(λ*) ≤ Λ*, we obtain Λ(λ*) ≤ Λ* ≤ Λ(λ*) + ε.

Proposition 10.3. If the error of computing the dual function Λ(λ) is no more than ε = 1/n!, then the ellipsoid algorithm solves the “Hamiltonian circuit” problem correctly.

            11. NUMBER OF ITERATIONS OF THE ELLIPSOID ALGORITHM

The variables of expression (7.4) are equal to

R = \frac{M}{\sqrt{n}}, \quad r = \frac{M}{n+\sqrt{n}}, \quad M = 4n^2\Bigl(1 - \frac{|E(G)|}{n^2}\Bigr), \quad L = (n+1)\sqrt{n}, \quad \varepsilon = \frac{1}{3\cdot n!}
in accordance with assertions 8.2, 8.3, 9.1, and 10.3. Therefore we have the estimate
(11.1) N = 2(n+1)^2 \ln\frac{L R^2}{r\,\varepsilon} = 2(n+1)^2 \ln\Bigl[4n\sqrt{n}\,(n+1)(\sqrt{n}+n)\Bigl(1 - \frac{|E(G)|}{n^2}\Bigr)(3\cdot n!)\Bigr] < 2(n+1)^2\Bigl(5 + 5\log_2 n + \frac{3}{4}\,n\log_2 n\Bigr).

            Here, the last inequality is true for n ≥ 10.
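The estimate (11.1) can be evaluated numerically without forming n! explicitly (a sketch using the parameters as reconstructed above; ln(n!) is taken via lgamma, and ε = 1/(3·n!) as in the bracket of (11.1)):

```python
import math

def iteration_bound(n, num_edges):
    """Numeric value of N = 2(n+1)^2 ln(L R^2 / (r * eps)) from (7.4)/(11.1)."""
    M = 4 * n * n * (1 - num_edges / n ** 2)
    L = (n + 1) * math.sqrt(n)
    R = M / math.sqrt(n)
    r = M / (n + math.sqrt(n))
    ln_inv_eps = math.log(3) + math.lgamma(n + 1)     # ln(3 * n!) = -ln(eps)
    return math.ceil(2 * (n + 1) ** 2 * (math.log(L * R * R / r) + ln_inv_eps))
```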

            12. CALCULATION OF Λ(λ)

It follows from (6.2) that, in order to compute the value of Λ(λ), it is sufficient to provide an efficient algorithm for solving the problem

(12.1) \sum_{i=0}^{n-1}\Bigl[\sum_{v\in V}\lambda_v x_{vi} + \sum_{v,u:\{v,u\}\in\bar{E}(G)} x_{vi}\, x_{u,(i+1)\%n}\Bigr] \to \min_{x\in\{0,1\}^{n^2}\cap D_1}.
Problem (12.1) is a special case of the polynomially solvable Weber problem on a tree network [8]. For completeness of the presentation, let us give a sketch of the adaptation of the algorithm from [9] to the chain.

Let us place the vertex c_0 ∈ V(C) at the vertex v_0 ∈ V(G). Such a fixation cuts down the number of possible locations and does not change the optimal value of the problem.

Let us introduce a state transition digraph Γ with the node set

(12.2) V(\Gamma) = \{(v_0, 0),\ (v_0, n)\} \cup \bigl[(V(G)\setminus\{v_0\}) \times \{1, 2, \dots, n-1\}\bigr]
and the arc set
A(\Gamma) = \Bigl[\bigcup_{v\in V(G)\setminus\{v_0\}} \{((v_0,0),(v,1))\}\Bigr] \cup \Bigl[\bigcup_{i=1}^{n-2}\ \bigcup_{u,v\in V(G)\setminus\{v_0\}} \{((u,i),(v,i+1))\}\Bigr] \cup \Bigl[\bigcup_{v\in V(G)\setminus\{v_0\}} \{((v,n-1),(v_0,n))\}\Bigr].
In terms of the digraph Γ, problem (12.1) is the minimum weight (v_0,0)–(v_0,n) path problem in which the weight of a node (v,i) ∈ V(Γ) is λ_v and the weight of an arc ((u,i),(v,i+1)) ∈ A(Γ) is 1 − χ_{E(G)}(\{u,v\}). This problem may be solved by any common shortest path algorithm. The number of nodes |V(Γ)| does not exceed n^2, and the in-degree of every node is no more than n. Therefore the algebraic computational complexity of finding the optimal solution
x(\lambda) = \arg\min_{x\in\{0,1\}^{n^2}\cap D_1}\sum_{i=0}^{n-1}\Bigl[\sum_{v\in V}\lambda_v x_{vi} + \sum_{v,u:\{v,u\}\in\bar{E}(G)} x_{vi}\, x_{u,(i+1)\%n}\Bigr]
and of the calculation of Λ(λ) is no more than O(n^3).
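A minimal sketch of the layered shortest-path computation described above (the graph encoding and function name are assumptions of this illustration; it returns only the optimal value of (12.1), from which Λ(λ) is obtained by adding the term Σ_v λ_v as in (6.2)):

```python
def inner_problem_value(lam, V, E, v0):
    """Minimum of problem (12.1) with c_0 fixed at v0, computed by dynamic
    programming over the layers i = 1, ..., n-1 of the digraph of Section 12.

    lam : dict vertex -> lambda_v;  V : list of vertices of G;
    E   : set of frozensets {u, v};  v0 : vertex of G;  assumes n >= 3.
    A node (v, i) costs lam[v]; an arc ((u,i),(v,i+1)) costs 1 if {u,v} is
    not an edge of G (i.e. belongs to E-bar(G)) and 0 otherwise.
    """
    n = len(V)
    others = [v for v in V if v != v0]

    def miss(u, v):                       # 1 - chi_{E(G)}({u, v})
        return 0.0 if frozenset((u, v)) in E else 1.0

    # layer 1: paths (v0,0) -> (v,1), node weights of both endpoints included
    dist = {v: lam[v0] + miss(v0, v) + lam[v] for v in others}
    for _ in range(2, n):                 # advance through layers 2 .. n-1
        dist = {v: min(dist[u] + miss(u, v) for u in others) + lam[v]
                for v in others}
    # close the chain with the arc into (v0, n); that node adds no new weight
    return min(dist[v] + miss(v, v0) for v in others)
```

Each of the n − 2 layer updates examines (n − 1)^2 arcs, which matches the O(n^3) bound stated above.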

            13. PRECISION OF LAGRANGE MULTIPLIERS

The estimate of algebraic computational complexity counts the number of operations on operands whose representation requires only one register. For a fixed-point representation of a number with an absolute error of no more than δ, the sufficient number of r-digit registers must be no less than |log_2 δ|/r. With such data, the computational bit complexity of the algorithm for problem (12.1) is equal to O(n^3 |log_2 δ|).

The algorithm for problem (12.1) contains only “add” and “compare” operations on the data. Using the fixed-point format makes all computations exact. If the absolute error of the data is no more than δ, then the absolute error of the calculated value Λ(λ) is no more than ε = n·δ, since any intermediate result is a sum of no more than n approximate summands.

The admissible bound on the error of Λ(λ) is equal to ε = 1/(3·n!) in accordance with Proposition 10.2. Therefore the error δ = 1/(3n·n!) for all λ_v, v ∈ V, is sufficient, and the following assertion is valid.

Proposition 13.1. The computational complexity of calculating Λ(λ) is no more than O(n^4 log_2 n).

            14. CALCULATION OF gk

The case λ^k ∉ S occurs when λ^k ≱ 0 or \sum_{v\in V}\lambda^k_v > M. In the first case

g_k = \Bigl\{g_{kv} = \begin{cases} 0, & \lambda_v \ge 0,\\ 1, & \lambda_v < 0, \end{cases}\Bigr\}_{v\in V},
and in the second one g_k = \{g_{kv} = -1\}_{v\in V}.

Hence, the computational complexity of calculating g_k is no more than O(n log_2 n) when λ^k ∉ S.

If λ^k ∈ S, then g_k = ∇Λ(λ^k). It follows from (6.2) for the dual function Λ(λ) that

\nabla\Lambda(\lambda) = \Bigl\{\frac{\partial\Lambda(\lambda)}{\partial\lambda_v} = 1 - \sum_{i=0}^{n-1} x(\lambda)_{vi}\Bigr\}_{v\in V}.

            Therefore the computational complexity of finding ▽ Λ is no more than O(n) if the solution x(λ) is known.
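If the shortest-path computation also returns the optimal placement (the vertex chosen at each position, recoverable by storing predecessors), the subgradient follows directly from the counts below (a sketch; the list-based encoding of x(λ) is an assumption of this illustration):

```python
def dual_subgradient(placement, V):
    """Subgradient of Lambda at lambda: grad_v = 1 - sum_i x(lambda)_{v,i},
    where placement[i] is the vertex occupying position i in x(lambda)."""
    counts = {v: 0 for v in V}
    for v in placement:
        counts[v] += 1
    return {v: 1 - counts[v] for v in V}
```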

            15. ELLIPSOID METHOD ITERATION COMPLEXITY

As follows from the description of the ellipsoid method (Figure 1), the k-th iteration requires computing the matrix H_{k+1} of the quadratic form defining the next ellipsoid and the Lagrange multipliers λ^{k+1} for the next iteration. The algebraic computational complexity of this procedure is equal to O(n^2).

The effectiveness of the ellipsoid method when the Lagrange multipliers λ^k and the entries of the matrix H_k are fixed-point numbers with a fractional part of length p = 5N = O(n^3 log_2 n) bits is proved in [7, pp. 173–176]. Therefore the computational complexity of finding the matrix H_k and the vector λ^k is no more than O(n^2 N) = O(n^5 log_2 n).

            Hence, one iteration requires:

1. O(n^4 log_2 n) bit operations to solve (12.1);

2. O(n log_2 n) bit operations to calculate g_k;

3. O(n^5 log_2 n) bit operations to find the matrix H_{k+1} and the vector λ^{k+1}.

These requirements show that the computational bit complexity of one iteration of the ellipsoid algorithm is no more than O(n^5 log_2 n). It is important to point out that arbitrary-precision arithmetic is required.

            16. POLYNOMIAL SOLVABILITY OF NP-CLASS PROBLEMS

Since the ellipsoid method performs N = O(n^3 log_2 n) iterations, we have proved:

Theorem 16.1. The computational complexity of the “Hamiltonian circuit” problem for a graph G = (V, E), |V| = n, is no more than O(n^8 log_2^2 n).

The existence of a polynomial algorithm for the “Hamiltonian circuit” problem and the NP-completeness of this problem prove the polynomial solvability of all NP-complete problems [1, 3]:

            Theorem 16.2. P = NP.

            17. CONCLUSION

The problem of recognizing the existence of a Hamiltonian circuit in a graph is solvable in polynomial time, and consequently the polynomial solvability of NP-complete problems is established. Note that there is no contradiction with the results of Mihalis Yannakakis [4].

            Acknowledgments

            The author wishes to express his most sincere thanks to Anatoly V. Lakeev for valuable comments on this manuscript.

            References

1. Cook SA. The complexity of theorem-proving procedures. In: Harrison MA, Banerji RB, Ullman JD, editors. Proceedings of the 3rd Annual ACM Symposium on Theory of Computing. New York (NY): Association for Computing Machinery; 1971. p. 151–8.

2. Karp RM. Reducibility among combinatorial problems. In: Miller RE, Thatcher JW, editors. Complexity of Computer Computations. New York (NY): Plenum Press; 1972. p. 85–103.

3. Garey MR, Johnson DS. Computers and intractability: a guide to the theory of NP-completeness. Murray Hill (NJ): Bell Laboratories; San Francisco (CA): W. H. Freeman and Company; 1979.

4. Yannakakis M. Expressing combinatorial optimization problems by linear programs. J Comput Syst Sci. 1991;43(3):441–66.

5. Minoux M. Programmation mathématique: théorie et algorithmes (Mathematical Programming: Theory and Algorithms). Collection Technique et Scientifique des Télécommunications. Paris: Dunod, Bordas et C.N.E.T.-E.N.S.T.; 1983, 1989. (in French)

            6. Nesterov Y. Introductory lectures on convex optimization: a basic course. Amsterdam: Kluwer; 2010.

7. Grötschel M, Lovász L, Schrijver A. The ellipsoid method and its consequences in combinatorial optimization. Combinatorica. 1981;1(2):169–97.

            8. , . The optimal distributions of a tree in a finite set. Comput Math Math Phys. 1988;28:204–6.

9. , . Polynomial algorithms to finite Weber problem for a tree network. J Comput Appl Math. 1991;35:291–6.

            Competing Interests

            The authors declare no competing interests.

            Publishing Notes

            © 2014 A. Panyukov. This work has been published open access under Creative Commons Attribution License CC BY 4.0, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. Conditions, terms of use and publishing policy can be found at www.scienceopen.com.

            Author and article information

Journal: ScienceOpen Research (SOR-COMPSCI), ScienceOpen, ISSN 2199-1006
Published: 16 December 2014
Pages: 1–6
            Affiliations
            [1 ]Department of Computational Mathematics and Software Engineering, South Ural State University, 76, Lenin prospect, Chelyabinsk 454080, Russia
            Author notes
[*] Corresponding author's e-mail address: anatoly.panyukov@gmail.com
            Article
            10.14293/S2199-1006.1.SOR-COMPSCI.AB0DLX.v1

            History
            Page count
            Figures: 1, Tables: 0, References: 9, Pages: 6
            Categories
            Original article

            Computer science
            Algorithm. Computational complexity. Hamiltonian cycle. Graph. NP-completeness

            Comments

            This is a pretty bold claim. Have you done any works towards a sample implementation? It would be relatively straightforward to test whether you arrive at the correct solution for random sample problems using the step number given in (7.4) and significant bit counts as given in the text with an arbitrary precision math library like e.g. GMP. Btw: Typo in sect. 5 ("elaxed") and formula (4.3) (you probably mean V(G)?) Also, what is the complexity of finding \lambda^0 and R?
            2015-02-04 14:03 UTC
Indeed, in formula (4.3) one should write V(G), and at the beginning of the 8th row of Section 5 it should be "relaxed" instead of "elaxed". Encoding the number $n = |V(G)|$ requires no more than $\log_2 n$ bits. Therefore, the complexity of computing the values of $M$, $\lambda^0$, and $R$ (see Proposition 8.3) does not exceed $O(\log_2^2 n)$. A numerical experiment based on the subgradient algorithm for problem (6.3) was conducted. Its computational complexity estimate is higher, but it allows the use of 64-bit arithmetic for problems with n < 30.
            2015-02-11 03:38 UTC
It is necessary to replace all symbols $C$ by the symbol $F$ in the formula following formula (5.2).
            2015-01-30 16:25 UTC
Let me answer the editorial comments:
AQ1 Please provide missing department name for author affiliation. — Department of Computational Mathematics and Software Engineering.
AQ2 Please provide missing editor names for reference “1”. — Michael A. Harrison, Ranan B. Banerji, and Jeffrey D. Ullman.
AQ3 Please provide missing editor names for reference “2”. — R. E. Miller and J. W. Thatcher, Eds.
AQ4 Please provide an English translation of the book title in the reference “5”, as per journal style. — Mathematical Programming: Theory and Algorithms (in French).
AQ5 Please provide missing city for the “9” references list entry. — North Holland, Amsterdam.
Let me also draw the attention of the editor to the incorrect display of formula (8.7) on page 3.
            2015-01-07 06:02 UTC
