CHAPTER 4 IDENTIFICATION OF REDUNDANCIES IN LINEAR PROGRAMMING MODELS


4.1 INTRODUCTION

While formulating a linear programming model, systems analysts and researchers often tend to include, inadvertently, all possible constraints, although some of them may not be binding at the optimal solution. Since only a relatively small proportion of the constraints is binding at the optimal solution for most linear programming problems, systems analysts and researchers are interested in developing techniques for identifying redundant and nonbinding constraints. A constraint that is satisfied with equality at some optimum solution is a binding constraint. A redundant constraint enters an optimum with positive slack, and a redundant variable enters with zero value. Redundancies, if any, in the model waste computational effort. A technique that makes use of the intercept matrix to enable one to identify redundancies easily, without investing significant computational effort, has been developed and is described in this chapter.

Redundant Constraints

The linear programming model can be written, in matrix form, as

Maximize Z = CX
subject to AX ≤ P0, X ≥ 0

where C, X, A and P0 are matrices of order 1×n, n×1, m×n and m×1 respectively.

Definition

Let Ai X ≤ bi be the ith constraint of the LPP, and let A~ X ≤ b~, X ≥ 0 be the set of constraints of the LPP excluding the ith constraint. The ith constraint is redundant if and only if there exists no vector X such that A~ X ≤ b~ and Ai X > bi. Geometrically, the constraint Ai X ≤ bi is redundant if and only if the convex set described by AX ≤ b, X ≥ 0 is identical with the set defined by A~ X ≤ b~, X ≥ 0.

Illustration

Consider the polyhedral set (Bazaraa et al 1990) defined by the following inequalities:

x1 + 2x2 ≤ 8    (1)
2x1 + x2 ≤ 10   (2)
3x1 + 4x2 ≤ 20  (3)
9x1 + 8x2 ≤ 56  (4)
x1 ≥ 0          (5)
x2 ≥ 0          (6)

The intersection of these six halfspaces gives the shaded set of Figure 4.1. Clearly the set is a convex set. If the third and fourth inequalities are disregarded, the polyhedral set is not affected. Such

Figure 4.1 Polyhedral set

inequalities are called (geometrically) redundant or irrelevant to the polyhedral set.

Transform the inequalities (1) through (4) into equations by adding the slack variables s1, s2, s3 and s4:

x1 + 2x2 + s1 = 8     (1')
2x1 + x2 + s2 = 10    (2')
3x1 + 4x2 + s3 = 20   (3')
9x1 + 8x2 + s4 = 56   (4')
x1, x2, s1, s2, s3, s4 ≥ 0    (5')

In Tables 4.1(a) through 4.1(e) all possible basic feasible solutions are generated.

Table 4.1: Basic Feasible Solutions

(a) Solution related to vertex O

Basic variables    Solution
s1                 8
s2                 10
s3                 20
s4                 56

(b) Solution related to vertex A

Basic variables    Solution
x2                 4
s2                 6
s3                 4
s4                 24

(c) Solution related to vertex B

Basic variables    Solution
x1                 4
x2                 2
s3                 0
s4                 4

(d) Solution related to vertex B

Basic variables    Solution
x1                 4
x2                 2
s1                 0
s4                 4

(e) Solution related to vertex C

Basic variables    Solution
x1                 5
s1                 3
s3                 5
s4                 11

From Tables 4.1(a)-(e), it is obvious that for every basic feasible solution the value of the slack variable corresponding to inequality (4) is positive, i.e., s4 > 0. In other words, no x ∈ X exists such that, in the corresponding solution of the adjoined equation system (1') through (5'), the value of s4 becomes zero; i.e., s4 ≠ 0 for any x ∈ X. The slack variable s3 has the same property except at the extreme point B. By now we can say that a constraint is called
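This geometric observation is easy to verify numerically. The sketch below is a brute-force check written for this discussion (it is not part of the thesis's method): it enumerates the vertices of the polyhedral set by intersecting pairs of boundary lines and then evaluates the slacks of constraints (3) and (4) at each vertex.

```python
from itertools import combinations

# Constraints (1)-(4) of the illustration, as (a1, a2, rhs) for a1*x1 + a2*x2 <= rhs
rows = [(1, 2, 8), (2, 1, 10), (3, 4, 20), (9, 8, 56)]
# Boundary lines include the two axes x1 = 0 and x2 = 0
lines = rows + [(1, 0, 0), (0, 1, 0)]

def vertices():
    """Feasible vertices: intersections of two boundary lines satisfying all constraints."""
    pts = set()
    for (a1, b1, c1), (a2, b2, c2) in combinations(lines, 2):
        det = a1 * b2 - a2 * b1
        if det == 0:                      # parallel lines, no intersection point
            continue
        x = (c1 * b2 - c2 * b1) / det
        y = (a1 * c2 - a2 * c1) / det
        if x >= -1e-9 and y >= -1e-9 and all(a * x + b * y <= c + 1e-9 for a, b, c in rows):
            pts.add((round(x, 6), round(y, 6)))
    return sorted(pts)

# Slack of (3) and (4) at every feasible vertex
slacks_3 = {p: 20 - 3 * p[0] - 4 * p[1] for p in vertices()}
slacks_4 = {p: 56 - 9 * p[0] - 8 * p[1] for p in vertices()}
```

Running this reproduces the pattern of Table 4.1: s4 is strictly positive at every vertex (constraint (4) is absolute redundant), while s3 vanishes only at the degenerate vertex B = (4, 2) (constraint (3) is relative redundant).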

redundant if, after deleting it, the set X remains the same. If a boundary hyperplane that corresponds to a redundant constraint has at least one point in common with X, then this constraint is called relative redundant. Otherwise it is called absolute redundant (Gal 1979).

Various Possibilities of Redundant Constraints

Let us illustrate the various possibilities. Referring to Figures 4.2(a)-(c), in each of these figures constraint 1 is redundant. In all the figures, the triangle OAB forms the feasible region defined by the constraints Ai X ≤ bi (i = 2,3,...,m) (Boot 1962).

4.2 EARLIER RESEARCH CONTRIBUTIONS: A CHRONOLOGICAL SURVEY

Since only a relatively small proportion of constraints is binding at the optimal solution for most linear programming problems, there is considerable interest in developing methods for identifying redundant constraints. The earlier methods proposed in this direction are discussed below.

Dantzig (1955) suggested that some constraints can be anticipated to be nonbinding and, equivalently, that certain activities (variables) can be anticipated to appear in the optimum solution. The slacks of the nonbinding constraints, and also the essential variables, may be brought into the basis. The constraints in which the slack variables are basic can then, together with the other variables, be dropped from the problem. When the optimum solution is obtained, these assumptions can be checked and, if they are violated, the constraints may be reintroduced, at the cost of additional iterations. If the number of errors in anticipating nonbinding constraints is relatively small, greater savings are achieved.

Figure 4.2 Various Possibilities of Redundant Constraints

If the variables are known to be present in the optimum solution, then no additional iterations need be made.

Boot (1962) presented a method for identifying redundant constraints before beginning to solve a problem. In this method, only one constraint is checked for redundancy at a time. The method is as follows. First, establish that the convex set described by the problem is not empty. Then, to test whether a constraint is redundant, assume that the constraint Ai X ≤ bi is violated, that is, Ai X = bi + e, where e > 0 is a small but finite number. Use this relationship to eliminate one of the variables from all of the other constraints, such that the eliminated variable is non-negative. If the resulting convex set is non-empty, then the constraint is not redundant. If the set is empty, the constraint is redundant. Each constraint may be so checked, the redundant ones being discarded as they are found. However, the method can require considerable computation, since a feasible solution must be found to show that a convex set is non-empty.

Thompson et al (1966) proposed a method for identifying redundant constraints prior to the start of solving the problem. It is suggested that a constraint Ai X ≤ bi, i ∈ E~, is redundant if Ai A_E^-1 has nonnegative components, where E is the set of inequalities whose slacks are not basic at a basic feasible solution X*, i.e., E : A_E X* = b_E, and E~ : A_E~ X* < b_E~, E~ being the complement of E. A_E is a square (n×n) nonsingular submatrix of A.

Mattheiss (1973) constructed a theorem for identifying the redundant constraints of a linear programming problem. This theorem states that a constraint is redundant for the feasible region of a linear program if and only if its associated slack variable is in the basis of every primary subsystem of the linear program.

Brearley et al (1975) suggested a procedure to find the redundant constraints and to fix variables at their bounds. The procedure for identifying redundant constraints is as follows:

Step 1. Compute the upper and lower row bounds of the constraints. The upper and lower bounds of the ith constraint are

Ui = Σ {aij uj : j ∈ Pi} + Σ {aij lj : j ∈ Ni}

and

Li = Σ {aij lj : j ∈ Pi} + Σ {aij uj : j ∈ Ni}

respectively, where Pi = {j : aij > 0}, Ni = {j : aij < 0}, and lj and uj are the lower and upper limits of xj, j = 1,2,...,n.

Step 2.
i. The ith upper bound constraint is redundant if Ui ≤ bi.
ii. The ith lower bound constraint is redundant if Li ≥ bi.

Tomlin and Welch (1986) and Bixby and Wagner (1987) presented algorithms for identifying duplicate rows in an LP matrix, that is, rows which are identical except for a scalar multiplier. Many researchers, such as Ye (1990), Mitchel (1986), Goffin et al (1990), Tone (1991), Den Hertog et al (1992) and Imbert et al (1996), have proposed strategies for reducing the computational effort in this direction while solving linear programming problems. In 1997, Gondzio discussed the presolve procedure of detection and removal of different linear dependencies of rows and columns in a constraint matrix.
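The row-bound test of Steps 1 and 2 can be sketched directly. The fragment below is an illustrative reading of the Brearley et al procedure written for this survey, not code from the thesis; it computes the row upper bound Ui for each ≤-constraint and flags the constraint when Ui ≤ bi.

```python
def redundant_by_row_bounds(A, b, lower, upper):
    """Flag <=-constraints whose row upper bound U_i never exceeds b_i.

    A: m x n coefficient matrix; b: right-hand sides;
    lower, upper: bounds l_j <= x_j <= u_j on the variables.
    Returns 1-based indices of constraints proved redundant.
    """
    redundant = []
    for i, row in enumerate(A):
        # U_i = sum of a_ij*u_j over positive a_ij plus a_ij*l_j over negative a_ij
        U = sum(a * (upper[j] if a > 0 else lower[j]) for j, a in enumerate(row))
        if U <= b[i]:
            redundant.append(i + 1)
    return redundant
```

For instance, with 0 ≤ x1 ≤ 1 and 0 ≤ x2 ≤ 1, the constraint x1 + 2x2 ≤ 4 has U = 3 ≤ 4 and is flagged as redundant.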

This work addresses a different strategy for identifying redundancies in the LPP using the intercept matrix.

4.3 DETECTING REDUNDANT CONSTRAINTS

The methods so far proposed for the identification of redundant constraints could not be implemented widely, as they required excessive computational effort. Computational time is one of the critical factors for real-time applications in the solving of large scale problems. Thus there is a need to develop an efficient algorithm to identify the redundancies prior to solving the problem. An attempt is made in this section to use the intercept matrix to identify redundant constraints without investing significant computational effort.

Method for Identifying Redundant Constraints

If a slack variable is in the optimal basis, the corresponding constraint is redundant. The technique proposed here predicts a set of slack variables in the optimal basis prior to solving the problem. The procedure is the outcome of an in-depth study of the theorem of Mattheiss (1973).

Consider the linear programming problem which has m constraints and n variables:

Maximize Z = CX subject to AX ≤ P0, X ≥ 0

Step 0 : Let I be the set of subscripts associated with the initial basic variables (slack variables); initially I = {1,2,...,m}. Let J be the set of subscripts associated with the initial decision variables; initially J = {1,2,...,n}.

Step 1 : Construct an intercept matrix θ using the following relationship:

θji = (P0)i / aij ;  aij > 0, for j ∈ J, i ∈ I.

Step 2 : Determine the promising variables, making use of the following:
i. Calculate zj - cj = CB B^-1 Pj - cj for all nonbasic variables.
ii. Let pj = min {θji : i ∈ I}, for j ∈ J.
iii. Compute zj' - cj' = pj (zj - cj) for j ∈ J.

Step 3 :
i. Let zk' - ck' = min {zj' - cj' : j ∈ J}.
ii. Take away the element k from the set J, i.e., J = J - {k}.
iii. If zk' - ck' ≥ 0, then the problem has no redundant constraint; stop. Otherwise,
iv. let θkl = min {θki : i ∈ I} = pk,
v. take away the element l from the set I, i.e., I = I - {l}, and
vi. find every p ∈ J whose minimum intercept also lies in column l, i.e., min {θpi : i} = θpl. Take away such elements p from the set J, i.e., J = J - {p}.

Step 4 : If J = {∅}, then go to Step 5. Otherwise, go to Step 3.

Step 5 : If I = {∅}, then the problem has no redundant constraint; stop. Otherwise, the constraints whose subscripts remain in I are redundant. Stop.
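As a compact restatement, the steps above can be sketched in Python. This is an illustrative implementation of the procedure as reconstructed here; the names and tie-breaking rules are assumptions made for this sketch, not the thesis's code. For a maximization problem at the initial (all-slack) basis, zj - cj reduces to -cj.

```python
def redundant_constraints(c, A, b):
    """Intercept-matrix screen: 1-based indices of constraints predicted
    redundant for: maximize cx subject to Ax <= b, x >= 0."""
    m, n = len(A), len(c)
    I, J = set(range(m)), set(range(n))
    INF = float('inf')
    # Step 1: intercepts theta[j][i] = b_i / a_ij, only defined where a_ij > 0
    theta = [[b[i] / A[i][j] if A[i][j] > 0 else INF for i in range(m)]
             for j in range(n)]
    while J and I:
        # Step 2: weight each reduced cost (z_j - c_j = -c_j at the start)
        # by the minimum intercept p_j of variable j over the remaining rows
        p = {j: min(theta[j][i] for i in I) for j in J}
        score = {j: p[j] * (-c[j]) for j in J}
        # Step 3: pick the most promising variable k
        k = min(score, key=score.get)
        J.discard(k)
        if score[k] >= 0:        # no improving variable left: nothing redundant
            return []
        # the row where k attains its minimum intercept is predicted binding
        l = min(I, key=lambda i: theta[k][i])
        I.discard(l)
        # variables whose minimum intercept also lies in row l follow suit
        for q in list(J):
            if theta[q][l] <= min((theta[q][i] for i in I), default=INF):
                J.discard(q)
    # Step 5: rows never predicted binding are declared redundant
    return sorted(i + 1 for i in I)
```

On Example 1 below this sketch returns [3, 4, 5], matching the worked illustration, and on Example 2 it returns [2].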

Illustration of the Method

This section illustrates, with examples, the working of the method for identifying redundant constraints in a step-by-step manner.

Example 1

Maximize Z = 3x1 + 4x2
subject to
x1 + 3x2 ≤ 15
2x1 + x2 ≤ 10
2x1 + 3x2 ≤ 18
x1 + x2 ≤ 7
4x1 + 5x2 ≤ 40
x1, x2 ≥ 0

Solution

Step 0 : I = {1,2,3,4,5}; J = {1,2}

Steps 1 & 2 : The intercept matrix is

Decision     s1    s2    s3    s4    s5    zj-cj    pj    zj'-cj'
variables
x1           15    5     9     7     10    -3       5     -15
x2           5     10    6     7     8     -4       5     -20

Step 3 :

Iteration No.    k    J      l    I
1                2    {1}    1    {2,3,4,5}
2                1    {∅}    2    {3,4,5}

Steps 4 & 5 : J = {∅}. The constraints 3, 4 and 5 are redundant.

The optimal solution is Z = 25; x1 = 3, x2 = 4, s3 = 0, s4 = 0, s5 = 8.

Example 2

Maximize Z = 2x1 + x2
subject to the constraints
x1 + x2 ≤ 1
x1 + 2x2 ≤ 4
x1, x2 ≥ 0

Solution

I = {1,2}; J = {1,2}

The intercept matrix is

Decision     s1    s2    zj-cj    pj    zj'-cj'
variables
x1           1     4     -2       1     -2
x2           1     2     -1       1     -1

The 2nd constraint is redundant. The optimal solution is Z = 2; x1 = 1, s2 = 3.

Example 3

Maximize Z = 61x1 + 209x2 + …x3 + 33x4 + …x5 + …x6 + …x7 + …x8 + 12x9 + …x10

subject to the constraints

16x1 + 25x2 + 22x3 + 4x4 + 9x5 + 8x6 + 11x7 + 29x8 + 20x9 + 22x10 ≤ 11
5x1 + 22x2 + 15x3 + 30x4 + 24x5 + 15x6 + 14x7 + 28x8 + 31x9 + 25x10 ≤ 53
22x1 + 17x2 + 9x3 + 32x4 + 26x5 + 20x6 + 16x7 + 16x8 + 26x9 + 24x10 ≤ 50
14x1 + 9x2 + 32x3 + 22x4 + 30x5 + 18x6 + 18x7 + 32x8 + 15x9 + x10 ≤ 40
32x1 + 30x2 + 10x3 + 30x4 + 7x5 + 29x6 + 15x7 + x8 + 19x9 + 26x10 ≤ 4
12x1 + 4x2 + 30x3 + 11x4 + 23x5 + 29x6 + 8x7 + 2x8 + 0x9 + 23x10 ≤ 31
22x1 + 23x2 + 26x3 + 13x4 + 6x5 + 13x6 + 32x7 + 11x8 + 8x9 + 5x10 ≤ 39
xj ≥ 0, j = 1,2,...,10

Solution

I = {1,2,3,4,5,6,7}; J = {1,2,3,4,5,6,7,8,9,10}

The intercept matrix θ is constructed with the rows x1 through x10, the columns s1 through s7, and the derived columns zj-cj, pj and zj'-cj'.

The constraints 2, 3, 4, 6 and 7 are redundant. The optimal solution is Z = …; x5 = 0.541, x8 = 0.211, s2 = 34.09, s3 = 32.55, s4 = 17, s6 = 18.13, s7 = ….

Computational Experience

This section discusses the efficiency of the method and concludes with the presentation of the observations made. The efficiency of the algorithm is tested by solving the LPP before and after the model reduction. Table 4.2 provides a comparison of the computational results. These results show that the proposed algorithm is useful for identifying redundant constraints in a given LP problem and that it also reduces the computational effort and memory requirements. For this purpose, the author used problems of the type employed by Kuhn and Quandt (1962). These problems have the canonical form with cj = 1 for j = 1,...,n, bi = 500 for i = 1,2,...,m, and aij generated uniformly within the interval (0,100) for i = 1,2,...,m, j = 1,2,...,n.

Table 4.2: Comparison of Computational Efforts Required with and without Redundant Constraints

Sl.    Size of the Problem                    No. of redundant    No. of multiplications/divisions to solve the LPP
No.    No. of Constraints  No. of Variables   constraints         With redundant    Without redundant

4.4 IDENTIFYING REDUNDANT VARIABLES

A variable which has zero value in every optimal solution is redundant. Thompson et al (1966) have proposed a theorem for declaring redundant variables in linear programming problems, which states that "if at any iteration of the simplex method, a variable is not profitable with respect to the profit function (i.e., zj - cj ≥ 0) and has only positive coefficients in the constraint equations, then there exists an optimum solution that does not contain that variable".

For example, consider the following linear programming problem:

Maximize Z = 4x1 + 3x2 + 5x3 + 2x4 + 5x5
subject to the constraints
3x1 + 2x2 - x3 - 2x4 + 4x5 ≤ 1
2x1 + x2 + 3x3 + x4 + 2x5 ≤ 1
x1, x2, x3, x4, x5 ≥ 0

With slack variables x6 and x7, the simplex tables of the linear programming problem are

Basic    x1     x2     x3    x4     x5     x6    x7     Solution
x6       3      2      -1    -2     4      1     0      1
x7       2      1      3     1      2      0     1      1
zj-cj    -4     -3     -5    -2     -5     0     0      0

x6       11/3   7/3    0     -5/3   14/3   1     1/3    4/3
x3       2/3    1/3    1     1/3    2/3    0     1/3    1/3
zj-cj    -2/3   -4/3   0     -1/3   -5/3   0     5/3    5/3

Basic    x1      x2      x3     x4      x5    x6     x7     Solution
x5       11/14   1/2     0      -5/14   1     3/14   1/14   2/7
x3       1/7     0       1      4/7     0     -1/7   2/7    1/7
zj-cj    9/14    -1/2    0      -13/14  0     5/14   25/14  15/7

x5       7/8     1/2     5/8    0       1     1/8    1/4    3/8
x4       1/4     0       7/4    1       0     -1/4   1/2    1/4
zj-cj    7/8     -1/2    13/8   0       0     1/8    9/4    19/8

x2       7/4     1       5/4    0       2     1/4    1/2    3/4
x4       1/4     0       7/4    1       0     -1/4   1/2    1/4
zj-cj    7/4     0       9/4    0       1     1/4    5/2    11/4

In the final table, zj - cj ≥ 0 for x1 and x3 and both of their columns contain only positive entries; hence, when the theorem of Thompson et al (1966) is applied, the variables x1 and x3 become redundant. The present work suggests a new approach for identifying the redundant variables in linear programming problems without solving the problem.

Algorithm for Identifying Redundant Variables in the Primal Model

Consider the linear programming model which has m resource constraints and n decision variables:

Maximize Z = CX
subject to the constraints AX ≤ P0, X ≥ 0

Step 0 : Let I be the set of subscripts associated with the initial basic variables; initially I = {1, 2,..., m}. Let J be the set of subscripts associated with the decision variables; initially J = {1, 2,..., n}.

Step 1 : Construct the intercept matrix θ using the relationship

θji = (P0)i / aij ;  aij > 0, for j ∈ J, i ∈ I.

Step 2 : Scan the θ matrix row-wise and identify the minimum intercept in each row. Let pj = min {θji : i = 1,...,m}, j ∈ J.

Step 3 : Check whether more than one j (j ∈ J) has its minimum intercept value in the same column of the θ matrix. If yes, then go to Step 4. Otherwise, the model has no redundant variables; stop.

Step 4 : Let the minimum intercepts of several rows of the θ matrix lie in the ith column.
i. Identify the maximum of these intercepts in the ith column.
ii. The decision variable(s) not corresponding to this maximum intercept bring out the redundant variable(s).
iii. Let the set of subscripts corresponding to the basic and redundant variables be J1.
iv. Set J = J - J1.

Step 5 : Go to Step 3.
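The steps above can be sketched as follows. This is an illustrative Python reading of the procedure, with names chosen here rather than taken from the thesis:

```python
def redundant_variables(A, b):
    """Intercept-matrix screen for redundant variables in
    max cx s.t. Ax <= b, x >= 0. Returns 1-based variable indices."""
    m, n = len(A), len(A[0])
    # Step 1: intercepts theta[j][i] = b_i / a_ij, defined only where a_ij > 0
    theta = [{i: b[i] / A[i][j] for i in range(m) if A[i][j] > 0}
             for j in range(n)]
    # Step 2: group the variables by the column (row index i of the model)
    # in which their minimum intercept lies
    groups = {}
    for j in range(n):
        i_min = min(theta[j], key=theta[j].get)
        groups.setdefault(i_min, []).append(j)
    # Steps 3-4: within each group, keep only the variable with the largest
    # intercept in that column; the remaining ones are declared redundant
    redundant = set()
    for i, js in groups.items():
        if len(js) > 1:
            keep = max(js, key=lambda j: theta[j][i])
            redundant |= {j + 1 for j in js if j != keep}
    return sorted(redundant)
```

On the worked Example 1 below this flags x1, x3 and x5, and on Example 2 it flags x2, matching the illustrations.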

Illustration

The following examples illustrate the steps of the algorithm.

Example 1 : Consider the example given in Section 4.4.

Step 0 : I = {1,2}; J = {1,2,3,4,5}

Steps 1 & 2 : The intercept matrix θ is

Decision     s1     s2     pj
variables
x1           1/3    1/2    1/3
x2           1/2    1      1/2
x3           -      1/3    1/3
x4           -      1      1
x5           1/4    1/2    1/4

Steps 3 & 4 : x1, x3 and x5 are redundant variables. The optimal solution is Z = 2.75; x2 = 0.75, x4 = 0.25.

Example 2 :

Maximize Z = 6x1 + 10x2 + 13x3
subject to
0.5x1 + 2x2 + x3 ≤ 24
x1 + 2x2 + 4x3 ≤ 60
x1, x2, x3 ≥ 0

Step 0 : I = {1,2}; J = {1,2,3}

Steps 1 & 2 : The intercept matrix θ is

Decision     s1    s2    pj
variables
x1           48    60    48
x2           12    30    12
x3           24    15    15

Steps 3 & 4 : x2 is a redundant variable. The optimal solution is Z = 294; x1 = 36, x3 = 6.

Computational Results

Thompson (1966) attempted to identify the redundant variables in the primal model after generating the simplex table. His algorithm could identify only a few redundant variables, after investing some computational effort. The algorithm developed in this thesis identifies more redundant variables than Thompson's without wasting computing time, and it reduces the size of LP models, leading to a reduction of the computational effort. The presence of redundant rows and columns permits more variables to pop in and out of the basis. The elimination of redundant constraints and variables ab initio curtails this tendency of popping variables to the minimum possible. The developed algorithm not only saves memory and computing time, owing to the elimination of rows and columns, but also minimizes the popping of variables.

Table 4.3 shows the computational effort required for solving problems with and without redundancies.

Table 4.3 : Comparison of Computational Efforts Required with and without Redundancies

Size of the problem    No. of redundancies           No. of iterations required
m        n             Constraints    Variables      With redundancies    Without redundancies

4.5 REDUNDANCY RELATIONSHIP BETWEEN PRIMAL AND DUAL MODELS

Redundant resource constraints and redundant variables in an LP model not only occupy more storage in a computer but also consume more computational time. The algorithm presented in this chapter avoids this wastage of storage and improves the computing time by removing the redundancies; the intercept matrix is used for identifying both kinds of redundancy in one stroke. The twin properties between the two models should hold good, i.e., for every redundant primal constraint there exists a redundant dual variable and vice versa. Any redundant primal constraint has a positive value for the corresponding slack variable in the optimal solution; according to the complementary slackness theorem, the corresponding dual variable is then zero. Conversely, any redundant dual constraint corresponds to a redundant primal variable. There is thus row reduction as well as column reduction, resulting in an overall model reduction.

Lemma on Redundant Inequalities (Charnes et al 1962)

Consider the pair of linear programming problems written in the form

Maximize C'1 X + C'2 Y
subject to
A X + B Y ≤ b1
D X + E Y ≤ b2
X, Y ≥ 0

and

Minimize b1 W' + b2 U'
subject to
A W' + D U' ≥ C'1
B W' + E U' ≥ C'2
W', U' ≥ 0

where A, B, D, E represent a four-part partition of a standard linear programming constraint matrix, b1 and b2 form a corresponding partition of the usual b or stipulations vector, and C'1, C'2 form a partitioning of the coefficients in the functional vector C.

Lemma

Part (a) If the constraints AX + BY ≤ b1 are redundant in an optimum solution to the maximizing problem, then there will exist solutions W'*, U'* to the minimizing problem with b1 W'* = 0, A W'* = B W'* = 0.

Part (b) If the constraints A W' + D U' ≥ C'1 are redundant in an optimum solution to the minimizing problem, then there will be solutions X, Y to the maximizing problem with C'1 X = 0, AX = 0, DX = 0.

Illustrative Examples

The redundancy relationship between the primal and the dual model is explained using the following examples.

Example 1

Consider the following primal and dual problems.

Primal : Maximize 4x1 + 3x2
subject to the constraints
x1 + 2x2 ≤ 2
x1 - 2x2 ≤ 3
2x1 + 3x2 ≤ 5
x1 + x2 ≤ 2
3x1 + x2 ≤ 3
x1, x2 ≥ 0

Dual : Minimize 2y1 + 3y2 + 5y3 + 2y4 + 3y5
subject to the constraints
y1 + y2 + 2y3 + y4 + 3y5 ≥ 4
2y1 - 2y2 + 3y3 + y4 + y5 ≥ 3
y1, y2, y3, y4, y5 ≥ 0
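The complementary-slackness argument for this primal-dual pair can be checked numerically. The fragment below is a verification sketch written for this discussion, not part of the original text: it evaluates the primal slacks at the optimum x* = (0.8, 0.6); the constraints with strictly positive slack are nonbinding, so their dual variables must vanish.

```python
# Primal constraints a1*x1 + a2*x2 <= b of Example 1
A = [(1, 2), (1, -2), (2, 3), (1, 1), (3, 1)]
b = [2, 3, 5, 2, 3]
x = (0.8, 0.6)            # optimal primal solution, objective value 5

slacks = [bi - a1 * x[0] - a2 * x[1] for (a1, a2), bi in zip(A, b)]
# Constraints with positive slack are nonbinding; by complementary
# slackness the corresponding dual variables y_i are zero.
nonbinding = [i + 1 for i, s in enumerate(slacks) if s > 1e-9]
```

Here nonbinding comes out as [2, 3, 4], so y2 = y3 = y4 = 0, in agreement with the discussion that follows.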

The primal constraints 2, 3 and 4 are identified as redundant by the proposed algorithm. The optimal solution to the primal is x1 = 0.8, x2 = 0.6, s2 = 3.4, s3 = 1.6, s4 = 0.6, with objective value 5. Utilizing the theorem of complementary slackness, we can conclude that the dual variables y2, y3 and y4, corresponding to the primal constraints 2, 3 and 4, are redundant in the dual model.

Example 2

Consider the example given in Section 4.4. The primal variables x1, x3 and x5 are identified as redundant by the proposed algorithm. Utilizing the theorem of complementary slackness, we conclude that the dual constraints 1, 3 and 5, corresponding to the primal variables x1, x3 and x5, are redundant in the dual model.

4.6 CONCLUSION

Simple heuristic algorithms have been presented for identifying the redundant constraints and variables, if any, in linear programming models, with the help of the intercept matrix, prior to the start of the solution process. The redundant constraints and variables in the model are then eliminated, and the resulting model is solved using the Multiplex Algorithm in order to establish the validity of the above algorithms. A significant reduction in the computational effort, of the order of 5% to 40%, is thereby achieved.


More information

Programming, numerics and optimization

Programming, numerics and optimization Programming, numerics and optimization Lecture C-4: Constrained optimization Łukasz Jankowski ljank@ippt.pan.pl Institute of Fundamental Technological Research Room 4.32, Phone +22.8261281 ext. 428 June

More information

R n a T i x = b i} is a Hyperplane.

R n a T i x = b i} is a Hyperplane. Geometry of LPs Consider the following LP : min {c T x a T i x b i The feasible region is i =1,...,m}. X := {x R n a T i x b i i =1,...,m} = m i=1 {x Rn a T i x b i} }{{} X i The set X i is a Half-space.

More information

Outline. CS38 Introduction to Algorithms. Linear programming 5/21/2014. Linear programming. Lecture 15 May 20, 2014

Outline. CS38 Introduction to Algorithms. Linear programming 5/21/2014. Linear programming. Lecture 15 May 20, 2014 5/2/24 Outline CS38 Introduction to Algorithms Lecture 5 May 2, 24 Linear programming simplex algorithm LP duality ellipsoid algorithm * slides from Kevin Wayne May 2, 24 CS38 Lecture 5 May 2, 24 CS38

More information

Polytopes Course Notes

Polytopes Course Notes Polytopes Course Notes Carl W. Lee Department of Mathematics University of Kentucky Lexington, KY 40506 lee@ms.uky.edu Fall 2013 i Contents 1 Polytopes 1 1.1 Convex Combinations and V-Polytopes.....................

More information

Lecture 5: Properties of convex sets

Lecture 5: Properties of convex sets Lecture 5: Properties of convex sets Rajat Mittal IIT Kanpur This week we will see properties of convex sets. These properties make convex sets special and are the reason why convex optimization problems

More information

Linear programming II João Carlos Lourenço

Linear programming II João Carlos Lourenço Decision Support Models Linear programming II João Carlos Lourenço joao.lourenco@ist.utl.pt Academic year 2012/2013 Readings: Hillier, F.S., Lieberman, G.J., 2010. Introduction to Operations Research,

More information

UNIT 2 LINEAR PROGRAMMING PROBLEMS

UNIT 2 LINEAR PROGRAMMING PROBLEMS UNIT 2 LINEAR PROGRAMMING PROBLEMS Structure 2.1 Introduction Objectives 2.2 Linear Programming Problem (LPP) 2.3 Mathematical Formulation of LPP 2.4 Graphical Solution of Linear Programming Problems 2.5

More information

CHAPTER 3 REVISED SIMPLEX METHOD AND DATA STRUCTURES

CHAPTER 3 REVISED SIMPLEX METHOD AND DATA STRUCTURES 46 CHAPTER 3 REVISED SIMPLEX METHOD AND DATA STRUCTURES 3.1 INTRODUCTION While solving a linear programming problem, a systematic search is made to find a non-negative vector X which extremizes a linear

More information

6.854 Advanced Algorithms. Scribes: Jay Kumar Sundararajan. Duality

6.854 Advanced Algorithms. Scribes: Jay Kumar Sundararajan. Duality 6.854 Advanced Algorithms Scribes: Jay Kumar Sundararajan Lecturer: David Karger Duality This lecture covers weak and strong duality, and also explains the rules for finding the dual of a linear program,

More information

3. The Simplex algorithmn The Simplex algorithmn 3.1 Forms of linear programs

3. The Simplex algorithmn The Simplex algorithmn 3.1 Forms of linear programs 11 3.1 Forms of linear programs... 12 3.2 Basic feasible solutions... 13 3.3 The geometry of linear programs... 14 3.4 Local search among basic feasible solutions... 15 3.5 Organization in tableaus...

More information

Introduction. Linear because it requires linear functions. Programming as synonymous of planning.

Introduction. Linear because it requires linear functions. Programming as synonymous of planning. LINEAR PROGRAMMING Introduction Development of linear programming was among the most important scientific advances of mid-20th cent. Most common type of applications: allocate limited resources to competing

More information

MATHEMATICS II: COLLECTION OF EXERCISES AND PROBLEMS

MATHEMATICS II: COLLECTION OF EXERCISES AND PROBLEMS MATHEMATICS II: COLLECTION OF EXERCISES AND PROBLEMS GRADO EN A.D.E. GRADO EN ECONOMÍA GRADO EN F.Y.C. ACADEMIC YEAR 2011-12 INDEX UNIT 1.- AN INTRODUCCTION TO OPTIMIZATION 2 UNIT 2.- NONLINEAR PROGRAMMING

More information

6.854J / J Advanced Algorithms Fall 2008

6.854J / J Advanced Algorithms Fall 2008 MIT OpenCourseWare http://ocw.mit.edu 6.854J / 18.415J Advanced Algorithms Fall 2008 For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms. 18.415/6.854 Advanced

More information

Solutions for Operations Research Final Exam

Solutions for Operations Research Final Exam Solutions for Operations Research Final Exam. (a) The buffer stock is B = i a i = a + a + a + a + a + a 6 + a 7 = + + + + + + =. And the transportation tableau corresponding to the transshipment problem

More information

Lecture 9: Linear Programming

Lecture 9: Linear Programming Lecture 9: Linear Programming A common optimization problem involves finding the maximum of a linear function of N variables N Z = a i x i i= 1 (the objective function ) where the x i are all non-negative

More information

Advanced Operations Research Prof. G. Srinivasan Department of Management Studies Indian Institute of Technology, Madras

Advanced Operations Research Prof. G. Srinivasan Department of Management Studies Indian Institute of Technology, Madras Advanced Operations Research Prof. G. Srinivasan Department of Management Studies Indian Institute of Technology, Madras Lecture - 35 Quadratic Programming In this lecture, we continue our discussion on

More information

Chapter 15 Introduction to Linear Programming

Chapter 15 Introduction to Linear Programming Chapter 15 Introduction to Linear Programming An Introduction to Optimization Spring, 2015 Wei-Ta Chu 1 Brief History of Linear Programming The goal of linear programming is to determine the values of

More information

CSE 460. Today we will look at" Classes of Optimization Problems" Linear Programming" The Simplex Algorithm"

CSE 460. Today we will look at Classes of Optimization Problems Linear Programming The Simplex Algorithm CSE 460 Linear Programming" Today we will look at" Classes of Optimization Problems" Linear Programming" The Simplex Algorithm" Classes of Optimization Problems" Optimization" unconstrained"..." linear"

More information

Generalized Network Flow Programming

Generalized Network Flow Programming Appendix C Page Generalized Network Flow Programming This chapter adapts the bounded variable primal simplex method to the generalized minimum cost flow problem. Generalized networks are far more useful

More information

5. DUAL LP, SOLUTION INTERPRETATION, AND POST-OPTIMALITY

5. DUAL LP, SOLUTION INTERPRETATION, AND POST-OPTIMALITY 5. DUAL LP, SOLUTION INTERPRETATION, AND POST-OPTIMALITY 5.1 DUALITY Associated with every linear programming problem (the primal) is another linear programming problem called its dual. If the primal involves

More information

The Simplex Algorithm

The Simplex Algorithm The Simplex Algorithm Uri Feige November 2011 1 The simplex algorithm The simplex algorithm was designed by Danzig in 1947. This write-up presents the main ideas involved. It is a slight update (mostly

More information

NOTATION AND TERMINOLOGY

NOTATION AND TERMINOLOGY 15.053x, Optimization Methods in Business Analytics Fall, 2016 October 4, 2016 A glossary of notation and terms used in 15.053x Weeks 1, 2, 3, 4 and 5. (The most recent week's terms are in blue). NOTATION

More information

Marginal and Sensitivity Analyses

Marginal and Sensitivity Analyses 8.1 Marginal and Sensitivity Analyses Katta G. Murty, IOE 510, LP, U. Of Michigan, Ann Arbor, Winter 1997. Consider LP in standard form: min z = cx, subject to Ax = b, x 0 where A m n and rank m. Theorem:

More information

6. Lecture notes on matroid intersection

6. Lecture notes on matroid intersection Massachusetts Institute of Technology 18.453: Combinatorial Optimization Michel X. Goemans May 2, 2017 6. Lecture notes on matroid intersection One nice feature about matroids is that a simple greedy algorithm

More information

MAXIMAL FLOW THROUGH A NETWORK

MAXIMAL FLOW THROUGH A NETWORK MAXIMAL FLOW THROUGH A NETWORK L. R. FORD, JR. AND D. R. FULKERSON Introduction. The problem discussed in this paper was formulated by T. Harris as follows: "Consider a rail network connecting two cities

More information

4 LINEAR PROGRAMMING (LP) E. Amaldi Fondamenti di R.O. Politecnico di Milano 1

4 LINEAR PROGRAMMING (LP) E. Amaldi Fondamenti di R.O. Politecnico di Milano 1 4 LINEAR PROGRAMMING (LP) E. Amaldi Fondamenti di R.O. Politecnico di Milano 1 Mathematical programming (optimization) problem: min f (x) s.t. x X R n set of feasible solutions with linear objective function

More information

4 Integer Linear Programming (ILP)

4 Integer Linear Programming (ILP) TDA6/DIT37 DISCRETE OPTIMIZATION 17 PERIOD 3 WEEK III 4 Integer Linear Programg (ILP) 14 An integer linear program, ILP for short, has the same form as a linear program (LP). The only difference is that

More information

Chapter II. Linear Programming

Chapter II. Linear Programming 1 Chapter II Linear Programming 1. Introduction 2. Simplex Method 3. Duality Theory 4. Optimality Conditions 5. Applications (QP & SLP) 6. Sensitivity Analysis 7. Interior Point Methods 1 INTRODUCTION

More information

Linear Programming and its Applications

Linear Programming and its Applications Linear Programming and its Applications Outline for Today What is linear programming (LP)? Examples Formal definition Geometric intuition Why is LP useful? A first look at LP algorithms Duality Linear

More information

Lecture notes on Transportation and Assignment Problem (BBE (H) QTM paper of Delhi University)

Lecture notes on Transportation and Assignment Problem (BBE (H) QTM paper of Delhi University) Transportation and Assignment Problems The transportation model is a special class of linear programs. It received this name because many of its applications involve determining how to optimally transport

More information

Linear Programming in Small Dimensions

Linear Programming in Small Dimensions Linear Programming in Small Dimensions Lekcija 7 sergio.cabello@fmf.uni-lj.si FMF Univerza v Ljubljani Edited from slides by Antoine Vigneron Outline linear programming, motivation and definition one dimensional

More information

Chapter 2 An Introduction to Linear Programming

Chapter 2 An Introduction to Linear Programming Chapter 2 An Introduction to Linear Programming MULTIPLE CHOICE 1. The maximization or minimization of a quantity is the a. goal of management science. b. decision for decision analysis. c. constraint

More information

Graphing Linear Inequalities in Two Variables.

Graphing Linear Inequalities in Two Variables. Many applications of mathematics involve systems of inequalities rather than systems of equations. We will discuss solving (graphing) a single linear inequality in two variables and a system of linear

More information

Interpretation of Dual Model for Piecewise Linear. Programming Problem Robert Hlavatý

Interpretation of Dual Model for Piecewise Linear. Programming Problem Robert Hlavatý Interpretation of Dual Model for Piecewise Linear 1 Introduction Programming Problem Robert Hlavatý Abstract. Piecewise linear programming models are suitable tools for solving situations of non-linear

More information

Mathematics. Linear Programming

Mathematics. Linear Programming Mathematics Linear Programming Table of Content 1. Linear inequations. 2. Terms of Linear Programming. 3. Mathematical formulation of a linear programming problem. 4. Graphical solution of two variable

More information

AMATH 383 Lecture Notes Linear Programming

AMATH 383 Lecture Notes Linear Programming AMATH 8 Lecture Notes Linear Programming Jakob Kotas (jkotas@uw.edu) University of Washington February 4, 014 Based on lecture notes for IND E 51 by Zelda Zabinsky, available from http://courses.washington.edu/inde51/notesindex.htm.

More information

5 The Theory of the Simplex Method

5 The Theory of the Simplex Method 5 The Theory of the Simplex Method Chapter 4 introduced the basic mechanics of the simplex method. Now we shall delve a little more deeply into this algorithm by examining some of its underlying theory.

More information

Mathematical Programming and Research Methods (Part II)

Mathematical Programming and Research Methods (Part II) Mathematical Programming and Research Methods (Part II) 4. Convexity and Optimization Massimiliano Pontil (based on previous lecture by Andreas Argyriou) 1 Today s Plan Convex sets and functions Types

More information

16.410/413 Principles of Autonomy and Decision Making

16.410/413 Principles of Autonomy and Decision Making 16.410/413 Principles of Autonomy and Decision Making Lecture 17: The Simplex Method Emilio Frazzoli Aeronautics and Astronautics Massachusetts Institute of Technology November 10, 2010 Frazzoli (MIT)

More information

COLUMN GENERATION IN LINEAR PROGRAMMING

COLUMN GENERATION IN LINEAR PROGRAMMING COLUMN GENERATION IN LINEAR PROGRAMMING EXAMPLE: THE CUTTING STOCK PROBLEM A certain material (e.g. lumber) is stocked in lengths of 9, 4, and 6 feet, with respective costs of $5, $9, and $. An order for

More information

Copyright 2007 Pearson Addison-Wesley. All rights reserved. A. Levitin Introduction to the Design & Analysis of Algorithms, 2 nd ed., Ch.

Copyright 2007 Pearson Addison-Wesley. All rights reserved. A. Levitin Introduction to the Design & Analysis of Algorithms, 2 nd ed., Ch. Iterative Improvement Algorithm design technique for solving optimization problems Start with a feasible solution Repeat the following step until no improvement can be found: change the current feasible

More information

Submodularity Reading Group. Matroid Polytopes, Polymatroid. M. Pawan Kumar

Submodularity Reading Group. Matroid Polytopes, Polymatroid. M. Pawan Kumar Submodularity Reading Group Matroid Polytopes, Polymatroid M. Pawan Kumar http://www.robots.ox.ac.uk/~oval/ Outline Linear Programming Matroid Polytopes Polymatroid Polyhedron Ax b A : m x n matrix b:

More information

Lecture 2 - Introduction to Polytopes

Lecture 2 - Introduction to Polytopes Lecture 2 - Introduction to Polytopes Optimization and Approximation - ENS M1 Nicolas Bousquet 1 Reminder of Linear Algebra definitions Let x 1,..., x m be points in R n and λ 1,..., λ m be real numbers.

More information

Graphs that have the feasible bases of a given linear

Graphs that have the feasible bases of a given linear Algorithmic Operations Research Vol.1 (2006) 46 51 Simplex Adjacency Graphs in Linear Optimization Gerard Sierksma and Gert A. Tijssen University of Groningen, Faculty of Economics, P.O. Box 800, 9700

More information

Chapter 2--An Introduction to Linear Programming

Chapter 2--An Introduction to Linear Programming Chapter 2--An Introduction to Linear Programming 1. The maximization or minimization of a quantity is the A. goal of management science. B. decision for decision analysis. C. constraint of operations research.

More information

Notes for Lecture 18

Notes for Lecture 18 U.C. Berkeley CS17: Intro to CS Theory Handout N18 Professor Luca Trevisan November 6, 21 Notes for Lecture 18 1 Algorithms for Linear Programming Linear programming was first solved by the simplex method

More information

MA4254: Discrete Optimization. Defeng Sun. Department of Mathematics National University of Singapore Office: S Telephone:

MA4254: Discrete Optimization. Defeng Sun. Department of Mathematics National University of Singapore Office: S Telephone: MA4254: Discrete Optimization Defeng Sun Department of Mathematics National University of Singapore Office: S14-04-25 Telephone: 6516 3343 Aims/Objectives: Discrete optimization deals with problems of

More information

Lecture 5: Duality Theory

Lecture 5: Duality Theory Lecture 5: Duality Theory Rajat Mittal IIT Kanpur The objective of this lecture note will be to learn duality theory of linear programming. We are planning to answer following questions. What are hyperplane

More information

Math Models of OR: The Simplex Algorithm: Practical Considerations

Math Models of OR: The Simplex Algorithm: Practical Considerations Math Models of OR: The Simplex Algorithm: Practical Considerations John E. Mitchell Department of Mathematical Sciences RPI, Troy, NY 12180 USA September 2018 Mitchell Simplex Algorithm: Practical Considerations

More information

CS675: Convex and Combinatorial Optimization Spring 2018 Consequences of the Ellipsoid Algorithm. Instructor: Shaddin Dughmi

CS675: Convex and Combinatorial Optimization Spring 2018 Consequences of the Ellipsoid Algorithm. Instructor: Shaddin Dughmi CS675: Convex and Combinatorial Optimization Spring 2018 Consequences of the Ellipsoid Algorithm Instructor: Shaddin Dughmi Outline 1 Recapping the Ellipsoid Method 2 Complexity of Convex Optimization

More information

Outline. Combinatorial Optimization 2. Finite Systems of Linear Inequalities. Finite Systems of Linear Inequalities. Theorem (Weyl s theorem :)

Outline. Combinatorial Optimization 2. Finite Systems of Linear Inequalities. Finite Systems of Linear Inequalities. Theorem (Weyl s theorem :) Outline Combinatorial Optimization 2 Rumen Andonov Irisa/Symbiose and University of Rennes 1 9 novembre 2009 Finite Systems of Linear Inequalities, variants of Farkas Lemma Duality theory in Linear Programming

More information

Introduction to Mathematical Programming IE496. Final Review. Dr. Ted Ralphs

Introduction to Mathematical Programming IE496. Final Review. Dr. Ted Ralphs Introduction to Mathematical Programming IE496 Final Review Dr. Ted Ralphs IE496 Final Review 1 Course Wrap-up: Chapter 2 In the introduction, we discussed the general framework of mathematical modeling

More information

Math 414 Lecture 30. The greedy algorithm provides the initial transportation matrix.

Math 414 Lecture 30. The greedy algorithm provides the initial transportation matrix. Math Lecture The greedy algorithm provides the initial transportation matrix. matrix P P Demand W ª «2 ª2 «W ª «W ª «ª «ª «Supply The circled x ij s are the initial basic variables. Erase all other values

More information

Introduction to Operations Research Prof. G. Srinivasan Department of Management Studies Indian Institute of Technology, Madras

Introduction to Operations Research Prof. G. Srinivasan Department of Management Studies Indian Institute of Technology, Madras Introduction to Operations Research Prof. G. Srinivasan Department of Management Studies Indian Institute of Technology, Madras Module 03 Simplex Algorithm Lecture - 03 Tabular form (Minimization) In this

More information

Selected Topics in Column Generation

Selected Topics in Column Generation Selected Topics in Column Generation February 1, 2007 Choosing a solver for the Master Solve in the dual space(kelly s method) by applying a cutting plane algorithm In the bundle method(lemarechal), a

More information

1 Linear programming relaxation

1 Linear programming relaxation Cornell University, Fall 2010 CS 6820: Algorithms Lecture notes: Primal-dual min-cost bipartite matching August 27 30 1 Linear programming relaxation Recall that in the bipartite minimum-cost perfect matching

More information

Linear Programming: Introduction

Linear Programming: Introduction CSC 373 - Algorithm Design, Analysis, and Complexity Summer 2016 Lalla Mouatadid Linear Programming: Introduction A bit of a historical background about linear programming, that I stole from Jeff Erickson

More information

5.3 Cutting plane methods and Gomory fractional cuts

5.3 Cutting plane methods and Gomory fractional cuts 5.3 Cutting plane methods and Gomory fractional cuts (ILP) min c T x s.t. Ax b x 0integer feasible region X Assumption: a ij, c j and b i integer. Observation: The feasible region of an ILP can be described

More information

How to Solve a Standard Maximization Problem Using the Simplex Method and the Rowops Program

How to Solve a Standard Maximization Problem Using the Simplex Method and the Rowops Program How to Solve a Standard Maximization Problem Using the Simplex Method and the Rowops Program Problem: Maximize z = x + 0x subject to x + x 6 x + x 00 with x 0 y 0 I. Setting Up the Problem. Rewrite each

More information

Chap5 The Theory of the Simplex Method

Chap5 The Theory of the Simplex Method College of Management, NCTU Operation Research I Fall, Chap The Theory of the Simplex Method Terminology Constraint oundary equation For any constraint (functional and nonnegativity), replace its,, sign

More information

Unconstrained Optimization Principles of Unconstrained Optimization Search Methods

Unconstrained Optimization Principles of Unconstrained Optimization Search Methods 1 Nonlinear Programming Types of Nonlinear Programs (NLP) Convexity and Convex Programs NLP Solutions Unconstrained Optimization Principles of Unconstrained Optimization Search Methods Constrained Optimization

More information

Linear Programming Motivation: The Diet Problem

Linear Programming Motivation: The Diet Problem Agenda We ve done Greedy Method Divide and Conquer Dynamic Programming Network Flows & Applications NP-completeness Now Linear Programming and the Simplex Method Hung Q. Ngo (SUNY at Buffalo) CSE 531 1

More information

LECTURES 3 and 4: Flows and Matchings

LECTURES 3 and 4: Flows and Matchings LECTURES 3 and 4: Flows and Matchings 1 Max Flow MAX FLOW (SP). Instance: Directed graph N = (V,A), two nodes s,t V, and capacities on the arcs c : A R +. A flow is a set of numbers on the arcs such that

More information

4.1 The original problem and the optimal tableau

4.1 The original problem and the optimal tableau Chapter 4 Sensitivity analysis The sensitivity analysis is performed after a given linear problem has been solved, with the aim of studying how changes to the problem affect the optimal solution In particular,

More information

Real life Problem. Review

Real life Problem. Review Linear Programming The Modelling Cycle in Decision Maths Accept solution Real life Problem Yes No Review Make simplifying assumptions Compare the solution with reality is it realistic? Interpret the solution

More information

arxiv: v1 [cs.cc] 30 Jun 2017

arxiv: v1 [cs.cc] 30 Jun 2017 On the Complexity of Polytopes in LI( Komei Fuuda May Szedlá July, 018 arxiv:170610114v1 [cscc] 30 Jun 017 Abstract In this paper we consider polytopes given by systems of n inequalities in d variables,

More information