

ACTA MATHEMATICA VIETNAMICA
Volume 21, Number 1, 1996, pp. 59-67

A PARAMETRIC SIMPLEX METHOD FOR OPTIMIZING A LINEAR FUNCTION OVER THE EFFICIENT SET OF A BICRITERIA LINEAR PROBLEM

NGUYEN DINH DAN AND LE DUNG MUU

Abstract. The problem of optimizing a linear function over the efficient set of a multiple objective problem has many applications in multiple criteria decision making. The main difficulty of this problem is that its feasible region is, in general, a nonconvex set. In this paper we develop a fast algorithm for optimizing a linear function over the efficient set of a bicriteria linear programming problem. The method is a parametric simplex procedure with one parameter in the objective function.

Received April 13, 1995; in revised form September 22, 1995.
Keywords. Optimization over the efficient set, bicriteria, parametric simplex algorithm.
This paper is supported in part by the National Basic Research Program in Natural Sciences.

1. Introduction

Let X be a nonempty bounded polyhedral convex set in R^n given by a system of linear inequalities and/or equalities, and let C denote a (p × n)-matrix. The multiple objective linear programming problem (MP) given by

    (MP)    Vmax {Cx : x ∈ X}

is the problem of finding all efficient points of Cx over X. This problem is called a bicriteria linear programming problem when C has two rows. We recall that a point x^0 is said to be an efficient point of Cx over X if there is no vector y ∈ X such that Cy > Cx^0. An efficient point is often called a Pareto or nondominated point. Throughout this paper, for two k-dimensional vectors a = (a_1, ..., a_k) and b = (b_1, ..., b_k) we write a ≥ b (resp. a > b) if and only if a_i ≥ b_i for every i (resp. a_i ≥ b_i for every i and a ≠ b). Let X_E denote the efficient set of Problem (MP). The linear optimization problem over the efficient set of (MP) can then be stated as

    (P)    max {d^T x : x ∈ X_E}.

Recently, interest in this problem has been intensifying, since it has many applications in multiple criteria decision making [11, 12]. Problem (P) can be classified as a global optimization problem, since its feasible domain is, in general, a nonconvex set. A few algorithms have been proposed for finding a globally optimal solution of (P). Philip [10] first posed Problem (P) and outlined a cutting plane method for finding an optimal solution of this problem. In [1, 2], using the fact that one can find a simplex S in R^p for which Problem (P) reduces to the infinitely-constrained Problem (Q) formulated as

    (Q)    max d^T x   subject to   λ^T Cx ≥ λ^T Cy  ∀ y ∈ X,   x ∈ X,  λ ∈ S,

Benson proposed two algorithms for finding a global solution of (P). In both of these methods, at each iteration k, Problem (Q) is relaxed to Problem (P_k) given by

    (P_k)    max d^T x   subject to   λ^T Cx ≥ λ^T Cx^i  (i = 1, ..., k),   x ∈ X,  λ ∈ S.

In the first algorithm this relaxed problem is solved by a branch-and-bound procedure using the convex envelope of the bilinear term λ^T Cx. In the second algorithm it is weakened by finding a feasible point (λ^k, x^k) such that d^T x^k > LB_k, where LB_k is the best lower bound known at iteration k. By setting

    h(λ, x) := max_{y ∈ X} λ^T Cy − λ^T Cx,

Problem (Q) reduces to the problem

    max d^T x   subject to   h(λ, x) ≤ 0,   x ∈ X,  λ ∈ S.
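To make the difficulty concrete, the sketch below works through a tiny invented bicriteria instance (none of the data comes from the paper): it enumerates the vertices of a small polytope X, discards vertices whose criterion vectors are dominated, and then maximizes d^T x over the survivors. Checking dominance only among vertex images is a simplification that happens to suffice on this instance; it is not the paper's method.

```python
import numpy as np

# Invented instance: X is the unit square, C has two conflicting criterion
# rows, and d is the linear function to maximize over the efficient set X_E.
vertices = np.array([[0, 0], [1, 0], [0, 1], [1, 1]], dtype=float)
C = np.array([[1.0, 0.0],     # c1: maximize x1
              [-1.0, 1.0]])   # c2: maximize x2 - x1
d = np.array([1.0, 1.0])

def dominates(u, v):
    """u dominates v in the componentwise order: u >= v and u != v."""
    return np.all(u >= v) and np.any(u > v)

images = vertices @ C.T
# Keep the vertices whose criterion vectors are not dominated by any other
# vertex image (a simplification: true efficiency compares against all of X).
efficient = [x for x, cu in zip(vertices, images)
             if not any(dominates(cv, cu) for cv in images)]

best = max(efficient, key=lambda x: d @ x)
print(best, d @ best)   # the efficient vertex maximizing d^T x
```

Here the efficient vertices are (0, 1) and (1, 1), and d^T x attains its maximum 2 at (1, 1); already on this instance X_E is a face (the edge joining the two efficient vertices), not the whole polytope, which is the source of nonconvexity in general.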

Since h(λ, x) is a convex-linear function, this problem is a special case of the problem considered in [7]. In [8] an algorithm is proposed which solves (Q) directly. The main computational effort of this algorithm involves searching the vertices of every generated simplex in R^p. The algorithm therefore is reasonably efficient when p is small, while n, the number of variables, may be fairly large. Perhaps due to its importance and inherent difficulty, several further algorithmic ideas have also been suggested for solving Problem (P) (see e.g. [3]).

In this paper we propose a parametric simplex algorithm for finding a global solution of Problem (P). In the case when X_E is the efficient set of a bicriteria linear problem, we show that solving (P) amounts to working through a parametric simplex tableau with one parameter in the objective function. In the important special case when d = αc^1 + βc^2 with α ≥ 0, β ≥ 0 (in particular d = c^1), we specialize a result of [4] by showing that a globally optimal solution of Problem (P) can be obtained by solving at most two linear programs. Thus, in this case there exist polynomial time algorithms for solving (P).

2. Preliminaries

The algorithm we are going to describe in the next section is based upon the following theorem, whose proof can be found, for example, in [10].

Theorem 2.1. A vector x^0 is an efficient point of Cx over X if and only if there exists a λ > 0 such that x^0 is an optimal solution of the scalarized problem

    (L⁰_λ)    max {λ^T Cx : x ∈ X}.

By dividing by Σ_i λ_i one can always assume that Σ_i λ_i = 1. Thus, in the case p = 2, Problem (L⁰_λ) can be written as

    (L_λ)    max ⟨λc^1 + (1−λ)c^2, x⟩   subject to   x ∈ X,

where c^1 and c^2 are the rows of the matrix C.
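Theorem 2.1 gives a direct way to generate efficient points: fix a weight λ ∈ (0, 1) and solve the scalarized linear program (L_λ). A minimal sketch, assuming SciPy is available and using an invented instance (the polytope, criterion rows, and λ below are illustrative, not from the paper):

```python
import numpy as np
from scipy.optimize import linprog

# Invented data: X = {x >= 0 : x1 + x2 <= 1}, criterion rows c1, c2 of C.
c1 = np.array([3.0, 1.0])
c2 = np.array([1.0, 2.0])
A_ub, b_ub = [[1.0, 1.0]], [1.0]

def solve_L(lam):
    """Solve the scalarized problem (L_lam): max <lam*c1 + (1-lam)*c2, x> over X."""
    obj = lam * c1 + (1.0 - lam) * c2
    # linprog minimizes, so negate; default bounds are x >= 0.
    res = linprog(-obj, A_ub=A_ub, b_ub=b_ub, method="highs")
    return res.x, -res.fun

x_star, val = solve_L(0.5)   # lam in (0, 1), so x_star is efficient by Theorem 2.1
print(x_star, val)           # the vertex (1, 0) with value 2.0
```

For λ = 0.5 the scalarized objective is (2, 1.5), so the solver returns the vertex (1, 0); since both weights are strictly positive, Theorem 2.1 certifies that this point is efficient.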
From Theorem 2.1 and the fact that the set of optimal solutions of a linear program is a face of its feasible domain, it follows that there exists a finite set I of real numbers such that X_E = ∪_{λ ∈ I} X_λ, where X_λ denotes the solution set of the linear program (L⁰_λ). Let ξ(λ) denote the optimal value of (L⁰_λ). Then solving Problem (P) amounts to solving the following linear programs, one for each λ ∈ I:

    (P_λ)    max d^T x   subject to   x ∈ X,   λ^T Cx = ξ(λ).

Let x^λ be an optimal solution and η(λ) the optimal value of this problem. Then η(λ*) := max{η(λ) : λ ∈ I} is the globally optimal value, and x^{λ*} is an optimal solution of Problem (P).

3. A parametric simplex algorithm for linear optimization over the efficient set of a bicriteria linear problem

The results described in the previous section suggest applying the parametric simplex method to Problem (P). For the case of a bicriteria linear problem the algorithm runs as follows. Let λ_1, ..., λ_K be the critical values of the parametric linear program (L_λ) in the interval [0, 1] (this means that the optimal basis of (L_λ) is constant on each interval (λ_i, λ_{i+1}) and changes when λ passes through one of the λ_i). For each λ_i (i = 1, ..., K) we solve the linear program (L_{λ_i}) by the simplex method. From the obtained optimal simplex tableau of (L_{λ_i}) we use simplex pivots to obtain the maximal value α_i of the linear function d^T x over the solution set of (L_{λ_i}). It is then clear that the maximum of the α_i (i = 1, ..., K) gives the optimal value of the original problem (P).

For each λ let c_λ := λc^1 + (1−λ)c^2, and denote by α* the optimal value of (P). Assume that X := {x ∈ R^n : Ax = b, x ≥ 0}, where A is an (m × n)-matrix and b ∈ R^m. Then the algorithm can be described in detail as follows.

ALGORITHM

Assume that we are given the critical values λ_1, ..., λ_K of the parametric linear program (L_λ). Let α_0 be a lower bound for α* and x^0 ∈ X_E a point such that d^T x^0 = α_0. Set i = 1.

Iteration i (i = 1, ..., K)

Step 1. Solve the following linear program by the simplex method:

    max ⟨λ_i c^1 + (1−λ_i)c^2, x⟩   subject to   x ∈ X.

Let w^i be the obtained optimal basic solution, B the corresponding basic matrix with (z_jk) := B⁻¹A, and J_i the set of basic indices.

Step 2 (the case when w^i is also an optimal solution of (P_{λ_i})). If either

    Δ_ik := (c_{λ_i})_k − Σ_{j ∈ J_i} z_jk (c_{λ_i})_j < 0   for all k ∉ J_i

or

    d_k − Σ_{j ∈ J_i} z_jk d_j ≤ 0   for all k ∉ J_i,

then set x^i := w^i if d^T w^i > α_{i−1} and x^i := x^{i−1} otherwise, and α_i := d^T x^i.
If i = K, then terminate: x* := x^i is an optimal solution of Problem (P). If i < K, then increase i by 1 and go to iteration i.

Step 3 (the case when w^i is not an optimal solution of (P_{λ_i})). If d_k − Σ_{j ∈ J_i} z_jk d_j > 0 for some k ∉ J_i, then let

    J_i^+ := {k ∉ J_i : (c_{λ_i})_k − Σ_{j ∈ J_i} z_jk (c_{λ_i})_j < 0}

and solve the linear program (M_{λ_i}) given as

    (M_{λ_i})    max{d^T x : x ∈ X, x_k = 0 ∀ k ∈ J_i^+}.

Let y^i be the obtained optimal solution of this linear program, and set x^i := y^i if d^T y^i > α_{i−1} and x^i := x^{i−1} otherwise, and α_i := d^T x^i. If i = K, then terminate: x* := x^i is an optimal solution of Problem (P). If i < K, then increase i by 1 and go to iteration i.

Remarks. 1. To solve the linear program (M_{λ_i}) it is expedient to start from the obtained solution w^i of Program (L_{λ_i}). Program (M_{λ_i}) can be rewritten in the form

    max{d^T x : x ∈ X, ⟨λ_i c^1 + (1−λ_i)c^2, x⟩ = ξ_i},

where ξ_i is the optimal value of (L_{λ_i}).

2. It is evident that the above algorithm terminates after K iterations, yielding a globally optimal solution of Problem (P).

3. Sometimes the critical values of the parametric program (L_λ) are not given. In this case, at the beginning of each iteration i we compute the critical value λ_i by the parametric simplex method with one parameter in the objective function (see [5]).

4. A Special Case

An important special case of Problem (P) occurs when d = c^i for some i ∈ {1, ..., p}. This case has been considered in [1, 4, 7]. For the case when p = 2 and d is a linear combination of the two rows of C, Benson [4] has shown that the maximum of d^T x over X_E is attained at a vertex of X which is also an optimal solution of at least one of the following three linear programs:

    (L_i)    max{c^i x : x ∈ X}   (i = 1, 2),
    (L)      max{c^T x : x ∈ X}.

Using this result, Benson [4] gives a procedure for maximizing d^T x over X_E by generating all basic solutions of these linear programs. In this section we specialize this result by observing that an optimal solution of Problem (P) with d = αc^1 + βc^2 and α ≥ 0, β ≥ 0 can be obtained by solving at most two linear programs. Namely, we have the following lemma.

Lemma 4.1. Let X_2 denote the solution set of the linear program (L_2). Then any solution of the program

    (L_12)    max{c^1 x : x ∈ X_2}

is also an optimal solution of the problem

    (P_1)    max{(αc^1 + βc^2)x : x ∈ X_E}.

Proof. Let x^1 be any solution of (L_12). We first show that x^1 is efficient. Indeed, otherwise there exists a point x ∈ X such that c^i x ≥ c^i x^1 (i = 1, 2) and Cx ≠ Cx^1. Then, since x^1 ∈ X_2, it follows that x ∈ X_2 and that c^1 x > c^1 x^1, which contradicts the fact that x^1 is an optimal solution of (L_12). Hence x^1 ∈ X_E.

Now let x* be a globally optimal solution of (P_1). Then (αc^1 + βc^2)x* ≥ (αc^1 + βc^2)x^1. From x^1 ∈ X_2 it follows that c^2 x^1 ≥ c^2 x*, and therefore c^1 x^1 ≤ c^1 x*. This together with x* ∈ X_E implies that c^i x^1 = c^i x* (i = 1, 2). Hence x^1 is a globally optimal solution of (P_1). The lemma is proved.

By Lemma 4.1, solving Problem (P_1) amounts to solving the two linear programs (L_2) and (L_12). From linear programming [5] we know that if B is an optimal basic matrix, J_B is the set of basic indices, and Δ_k (k ∉ J_B) are the entries of the first row of the simplex tableau corresponding to an optimal solution of (L_2), then the solution set X_2 of Problem (L_2), which is the feasible set of Problem (L_12), is given by

    {x ∈ X : x_k = 0, k ∈ J_B^+},

where

    J_B^+ = {k ∉ J_B : Δ_k < 0}.

Note that if Δ_k < 0 for every k ∉ J_B, then solving (L_12) can be avoided, since (L_2) has a unique optimal solution.

We close the paper by elucidating how to calculate the critical values of the parameter. For simplicity assume that the set X := {x ∈ R^n : Ax = b, x ≥ 0} is nondegenerate and that A has full rank. Let x̄ be a given optimal solution of the linear program

    max{⟨λc^1 + (1−λ)c^2, x⟩ : x ∈ X}.

Denote by B the basic matrix corresponding to x̄ and by J the set of basic indices. If (z_jk) = B⁻¹A, then from linear programming [5] we get

    Δ_k = λc^1_k + (1−λ)c^2_k − Σ_{j ∈ J} z_jk (λc^1_j + (1−λ)c^2_j) ≤ 0   for all k ∉ J,

which implies that

    (3.1)    λ (c^1_k − c^2_k − Σ_{j ∈ J} z_jk (c^1_j − c^2_j)) ≤ Σ_{j ∈ J} z_jk c^2_j − c^2_k,   k ∉ J

(c^i_k stands for the kth component of the vector c^i). Let

    m_k := c^1_k − c^2_k − Σ_{j ∈ J} z_jk (c^1_j − c^2_j),
    t_k := Σ_{j ∈ J} z_jk c^2_j − c^2_k,
    J^− := {k ∉ J : m_k < 0},   J^+ := {k ∉ J : m_k > 0},
    α^− := max{t_k/m_k : k ∈ J^−},   α^+ := min{t_k/m_k : k ∈ J^+}.

Thus, by virtue of (3.1) we have α^− ≤ α^+. Moreover, B is also an optimal basic matrix of Problem (L_λ) for every λ which belongs to the interval [α^−, α^+].

References

1. H. P. Benson, An All-Linear Programming Relaxation Algorithm for Optimizing Over the Efficient Set, J. of Global Optimization 1 (1991), 83-104.
2. H. P. Benson, A Finite, Nonadjacent Extreme-Point Search Algorithm for Optimization Over the Efficient Set, J. of Optimization Theory and Applications 73 (1992), 47-64.
3. H. P. Benson and S. Sayin, A Face Search Heuristic Algorithm for Optimizing Over the Efficient Set, Naval Research Logistics 40 (1993), 103-116.
4. H. P. Benson, Optimization Over the Efficient Set: Four Special Cases, J. of Optimization Theory and Applications, to appear.
5. G. B. Dantzig, Linear Programming and Extensions, Princeton University Press, Princeton, 1963.
6. R. Horst and H. Tuy, Global Optimization (Deterministic Approaches), Springer-Verlag, Berlin, 1993.
7. H. Isermann and R. E. Steuer, Computational Experience Concerning Payoff Tables and Minimum Criterion Values Over the Efficient Set, European J. of Operational Research 33 (1987), 91-97.
8. Le D. Muu, An Algorithm for Solving Convex Programs with an Additional Convex-Concave Constraint, Mathematical Programming 61 (1993), 75-87.
9. Le D. Muu, A Method for Optimization of a Linear Function over the Efficient Set, J. of Global Optimization, to appear.
10. J. Philip, Algorithms for the Vector Maximization Problem, Mathematical Programming 2 (1972), 207-229.
11. R. E. Steuer, Multiple Criteria Optimization: Theory, Computation and Application, John Wiley, New York, 1986.
12. P. L. Yu, Multiple Criteria Decision Making, New York, 1985.
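As a closing illustration of the critical-value computation above, the quantities m_k and t_k, and from them the interval [α^−, α^+], can be read off an optimal basis directly. The sketch below uses a hypothetical one-constraint instance in standard form (all data invented; z_jk is taken as the tableau B⁻¹A, and λ is confined to [0, 1]):

```python
import numpy as np

# Hypothetical instance: X = {x >= 0 : x1 + x2 + s = 1} with basis {x1}
# (column indices: 0 = x1, 1 = x2, 2 = slack); criteria padded with 0
# for the slack variable.
c1 = np.array([3.0, 1.0, 0.0])
c2 = np.array([1.0, 2.0, 0.0])
A  = np.array([[1.0, 1.0, 1.0]])
J  = [0]                          # basic indices
Z  = np.linalg.inv(A[:, J]) @ A   # z_jk = (B^{-1} A)_{jk}

def critical_interval(c1, c2, Z, J, n):
    """Interval of lambda for which the basis J stays optimal for (L_lambda)."""
    lo, hi = 0.0, 1.0                 # lambda is confined to [0, 1]
    for k in range(n):
        if k in J:
            continue
        m_k = c1[k] - c2[k] - sum(Z[r, k] * (c1[j] - c2[j]) for r, j in enumerate(J))
        t_k = sum(Z[r, k] * c2[j] for r, j in enumerate(J)) - c2[k]
        if m_k > 0:                   # constraint (3.1) reads lambda <= t_k / m_k
            hi = min(hi, t_k / m_k)
        elif m_k < 0:                 # constraint (3.1) reads lambda >= t_k / m_k
            lo = max(lo, t_k / m_k)
    return lo, hi

lo, hi = critical_interval(c1, c2, Z, J, 3)
print(lo, hi)   # about 0.3333 and 1.0 for this basis
```

On this instance c_λ = (2λ+1, 2−λ, 0), and the vertex with x1 basic is optimal exactly for λ ≥ 1/3, which is what the computed interval [1/3, 1] reflects.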

Polytechnical University of Hanoi
Institute of Mathematics, Box 631, Hanoi