Some Advanced Topics in Linear Programming
Matthew J. Saltzman

July 2, 1995

1 Connections with Algebra and Geometry

In this section, we will explore how some of the ideas in linear programming, duality theory, and the simplex method connect with the concepts from linear algebra and geometry discussed in Chapter 3 of Murty [1].

1.1 The Feasible Set of an LP

Recall the standard-form LP:

    Min  cx
    s.t. Ax = b
         x >= 0.                                (LP)

The feasible set for this LP is the set S = {x in R^n : Ax = b, x >= 0}. We will explore different characterizations of this set.

1.1.1 Linear Functions and Linear Equations

Recall that, for an m x n matrix A and an n-vector x, f(x) = Ax is a linear function of x, that is, f(αx + βy) = αf(x) + βf(y). Also recall that we can take two views of the matrix product b = Ax (where A and x are given).

The row-space view:

    b_i = A_i. x = Σ_{j=1}^n a_ij x_j,   for i = 1, ..., m.

In this view, each component of b is the inner product of the vector whose transpose is the corresponding row of A with the vector x.

(Copyright 1995 by Matthew J. Saltzman. All rights reserved.)
The column-space view:

    b = Σ_{j=1}^n x_j A_{.j}

In this view, the vector b is a linear combination of the vectors that are the columns of the matrix A, with the scalar multiples of these vectors given by the values of the corresponding components of x.

Example 1. Consider a 2 x 2 matrix A and a 2-vector x, and let b = Ax. The row-space view of A, x, and b = Ax is depicted in Figure 1, and the column-space view is depicted in Figure 2.

Consider now a system of linear equations Ax = b (where A and b are given). In the row-space view, the set of solutions to a single linear equation is a hyperplane. (In two dimensions, a hyperplane is a line; in three dimensions, it is a plane, etc. In R^n, a hyperplane is an (n-1)-dimensional affine space.) The solution to a system of equations is an affine space, namely the intersection of the hyperplanes corresponding to the individual equations. In Figure 1, the dotted lines represent the solutions of the two equations in the system, and their intersection occurs at the point x. Observe that the row vector corresponding to a single equation (called the normal vector of the equation) is perpendicular to the corresponding hyperplane of solutions.

In the column-space view, the solution set is a set of scalar weights in a linear combination of the columns of A that produces the right-hand-side vector b. In Figure 2, the linear combination is represented by the dotted lines.

1.1.2 Linear Programs

When discussing the constraints of a linear program, the row space is often referred to as activity space, referring to the interpretation of the x_j's as activity levels. The column space is correspondingly called requirements space, referring to the right-hand-side values as levels of requirements that must be met by the solution.

In activity space, the set of solutions to a linear program in standard form (LP) is the intersection of the affine space of solutions to the equations Ax = b and the cone of solutions to x >= 0.
This region is a polyhedron (a polytope if it is bounded). Associated with each point in this space is a weight, corresponding to the objective function value of the point. The optimization problem is to find the point in the polytope with the smallest (or largest) weight.
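The two views of the product b = Ax described above are easy to check numerically. The following is a small sketch with made-up data (the matrix and vector here are illustrative, not the data of Example 1):

```python
# Illustration (hypothetical 2x2 data): the same product b = A @ x,
# computed row-wise (inner products) and column-wise (linear combination).
import numpy as np

A = np.array([[1.0, 3.0],
              [4.0, 2.0]])
x = np.array([3.0, 1.0])

# Row-space view: each b_i is the inner product of row i of A with x.
b_rows = np.array([A[i, :] @ x for i in range(A.shape[0])])

# Column-space view: b is a linear combination of the columns of A,
# weighted by the components of x.
b_cols = sum(x[j] * A[:, j] for j in range(A.shape[1]))

assert np.allclose(b_rows, b_cols)   # both equal A @ x
print(b_rows)                        # [ 6. 14.]
```

Both loops compute exactly `A @ x`; the point is only that the same numbers arise from two different geometric readings of the product.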
[Figure 1: Row-space interpretation of a linear function.]

[Figure 2: Column-space interpretation of a linear function.]
[Figure 3: Feasible solutions to an LP.]

Example 2. Consider the constraints

    x_1 + x_2 + x_3 = 1
    x_1, x_2, x_3 >= 0.

The set of feasible solutions is shown in Figure 3.

In requirements space, the feasible set is harder to picture. The set of nonnegative multiples of the columns of A forms a cone. If the right-hand-side vector b lies within this cone, then the LP is feasible; otherwise it is not. The feasible set is simply the set of all nonnegative combinations of columns of A that produce b.

Example 3. For a system of two equations Ax = b with x >= 0, the columns of the constraint matrix and the right-hand-side vector can be plotted in requirements space, along with the cone generated by the columns; b lies in that cone exactly when the system has a feasible solution.
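In the square, nonsingular case the cone-membership test reduces to a linear solve followed by a sign check: the combination producing b is unique, so b lies in the cone exactly when that combination is nonnegative. A minimal sketch with hypothetical data:

```python
import numpy as np

# Hypothetical data: two linearly independent columns in R^2.
A = np.array([[1.0, 2.0],
              [4.0, 1.0]])
b = np.array([3.0, 5.0])

# With a square nonsingular A, the combination is unique: x = A^{-1} b.
x = np.linalg.solve(A, b)

# b lies in the cone generated by the columns of A iff x >= 0.
print(x, bool(np.all(x >= 0)))   # prints: [1. 1.] True
```

With more columns than rows the combination is no longer unique, and the membership test becomes a feasibility (Phase I) linear program rather than a single solve.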
For a bounded LP, the basic feasible solutions (in activity space) correspond to extreme points of the feasible polytope. Every feasible solution is a convex combination of these extreme points, so the polytope is the convex hull of the extreme points. An advanced theorem in linear programming called the Representation Theorem states that a polytope can be described equivalently either as the solution set of a system of linear inequalities or as the convex hull of the set of extreme points. A related but more complicated result holds when the feasible region is unbounded, but we will not discuss it here.

1.1.3 Geometric Interpretation of Duality

Another important advanced theorem, called the Separating Hyperplane Theorem, states that, given a polyhedron (indeed, any convex set) and a point not contained in the polyhedron, there is a hyperplane with the property that the set lies on one side and the point on the other. If we take a point on the boundary of the polyhedron, the Supporting Hyperplane Theorem states that there is a hyperplane that contains that point, such that the polyhedron lies entirely in or on one side of the hyperplane.

Recall that the dual problem can be viewed as the problem of finding a linear combination of the constraint equations that bounds the objective values of all feasible solutions. Given a basic feasible solution x to the primal, the complementary dual solution constructs an iso-objective hyperplane that passes through x. The solution is dual feasible if this iso-objective hyperplane is a supporting hyperplane for the primal feasible region.

Example 4. For the primal LP in Example 5, the optimal iso-objective line is the hyperplane constructed by taking a linear combination of the two constraints with multipliers π_1 = 5/3 and π_2 = 2/3.

2 The Dual Simplex Method

We have already seen how to analyze the effect of certain changes in the coefficients of a linear program.
Certain types of changes are easy to analyze, because they affect only the optimality of the current solution, not its feasibility. For example, adding a new primal variable to the problem does not affect feasibility. We can easily check the optimality conditions by computing the appropriate reduced cost, and reoptimize (if necessary) by performing simplex pivots. Changes to objective function coefficients or to nonbasic technology coefficients can be treated similarly.

More difficult to cope with (using the tools we have so far) are changes that affect feasibility, such as changes to the right-hand-side coefficients or the addition of a new inequality constraint. We can test easily enough whether the new solution is feasible, but if
the change makes the solution infeasible, we currently have no easy way to start from that solution and seek the new optimum.

Such a method would certainly be useful in sensitivity analysis, but it is absolutely critical in integer programming. Both the branch-and-bound algorithm and the cutting-plane algorithms involve solving a sequence of linear programs, each derived from an earlier one by adding a constraint or constraints that are guaranteed to make the previous optimal solution infeasible. If we could not efficiently reoptimize after adding the new inequalities, we would have no hope of solving even relatively small IPs in reasonable time. In this section, we will develop just such a method, namely the dual simplex algorithm.

The fundamental insight that allows us to efficiently reoptimize after feasibility is lost is based in the symmetric relationship between the primal and dual LPs (i.e., if LP2 is the dual of LP1, then LP1 is the dual of LP2). Given a complementary primal-dual pair of solutions, we know that the primal optimality condition is that the complementary dual solution is dual feasible. Symmetry allows us to conclude that the dual optimality condition is that the complementary primal solution is primal feasible. That is, primal optimality and dual feasibility are equivalent, and dual optimality and primal feasibility are equivalent. Thus, a basic primal-dual pair can be in one of four states:

1. Primal feasible and dual feasible (dual optimal and primal optimal). This is the solution we seek.

2. Primal feasible and dual infeasible (i.e., primal suboptimal). In this case, we can continue from the current basis by applying the simplex method to the primal problem. This is the case we have encountered in post-optimality analysis.

3. Primal infeasible and dual feasible (i.e., primal superoptimal). In this case, we can proceed by applying the simplex method to the dual problem. This is the case we will examine here.

4.
Primal and dual infeasible. In this case, we need to apply Phase I to the primal or the dual.

Example 5. Consider the following primal-dual pair of LPs:

    Min  3x_1 + 4x_2               Max  4π_1 + 5π_2
    s.t. x_1 + 2x_2 >= 4           s.t. π_1 + 2π_2 <= 3
         2x_1 + x_2 >= 5                2π_1 + π_2 <= 4
         x_1, x_2 >= 0                  π_1, π_2 >= 0

Figure 4 shows the feasible regions of the primal and dual problems, with the corresponding complementary basic solution pairs labeled in each graph. The optimal
iso-objective lines are also shown. We see that bases that are primal suboptimal are dual infeasible, and vice versa, and that the basis labeled e is primal and dual feasible, and is optimal. (Of course, bases that are primal and dual infeasible are possible, but there are none in this problem.)

2.1 The Simplex Method Applied to the Dual

Consider the dual of (LP):

    Max  πb
    s.t. πA <= c
         π unrestricted.

After the addition of dual slacks (call them y), we can transpose the problem so that the variables appear in column vectors:

    Max  b^T π
    s.t. A^T π + I y = c^T
         π unrestricted, y >= 0.                (DLP)

The resulting linear system has n rows and m + n columns. Clearly the system has rank n (since the n columns of I are independent), so we can definitely select a basis. But which one?

2.1.1 Free Variables Revisited

We have seen two ways of handling free variables (such as π) when converting a problem to standard form. If the columns associated with these variables are linearly independent (as we assume the columns of A^T are), there is a third possibility: we can simply include these variables in the basis right from the start. Recall that the simplex method's ratio test is designed to prevent variables that are required to be nonnegative from violating that constraint. But it's OK for free variables to take on negative values, so the rows in the tableau labeled with free variables can simply be skipped during the ratio test. Thus, once a free variable is in the basis, it will never leave!

2.1.2 Constructing a Dual Basis

Suppose we know a set of m linearly independent rows of A^T. We can partition A^T into these rows (B^T) and the remaining rows (N^T). The columns of A^T together
[Figure 4: Correspondence of primal and dual basic solutions.]
with the columns of I having 1's in the rows corresponding to N (denoted I_N) form an n x n nonsingular basis matrix:

    B̂ = [ B^T   0  ]          B̂^{-1} = [ B^{-T}        0  ]
        [ N^T  I_N ]                    [ -N^T B^{-T}  I_N ]

It is plain that for every basis for the primal problem, there is a unique basis for the dual problem. The dual basic variables are π plus the dual slacks (reduced costs) associated with the nonbasic primal variables (y_N). If we know a primal optimal basis, the corresponding dual basic solution must be feasible, and we can start the simplex method on the dual problem.

2.1.3 A Simplex Pivot in the Dual

The current dual solution is the solution to the system

    [ B^T   0  ] [  π  ]   [ c_B^T ]
    [ N^T  I_N ] [ y_N ] = [ c_N^T ]

which is π = B^{-T} c_B^T and y_N = c_N^T - N^T π = c_N^T - N^T B^{-T} c_B^T = c̄_N^T.¹ The dual nonbasic variables are the slacks y_B associated with the basic primal variables; their columns, denoted I_B, are the columns of I with 1's in the rows corresponding to B. The dual reduced cost vector for these columns is

    0 - [ b^T  0 ] [ B^{-T}        0  ] [ I_B ]
                   [ -N^T B^{-T}  I_N ] [  0  ]

or, after multiplication, -b^T B^{-T}. Since the dual is to be maximized, optimality will be achieved when all dual reduced costs are nonpositive, i.e., -b^T B^{-T} <= 0, or equivalently, B^{-1} b >= 0. Thus, if x_j is a primal basic variable and x̄_j < 0, then y_j is a candidate to enter the dual basis.

¹ It's easy to verify the theorems from linear algebra that (AB)^T = B^T A^T and that, if AB is nonsingular, then A and B are also, and (AB)^{-1} = B^{-1} A^{-1}. It's also easy to verify that if B is nonsingular, then (B^T)^{-1} = (B^{-1})^T. We denote this matrix B^{-T}.
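The formulas π = B^{-T} c_B and y_N = c̄_N can be checked numerically. Using the primal of Example 5 written in standard form (with surplus variables x_3 and x_4) and the optimal basis {x_1, x_2}, the computation reproduces exactly the multipliers of Example 4:

```python
# Dual basic solution from a primal basis, for Example 5's primal in
# standard form: min 3x1+4x2  s.t.  x1+2x2-x3 = 4, 2x1+x2-x4 = 5, x >= 0.
import numpy as np

A = np.array([[1.0, 2.0, -1.0,  0.0],
              [2.0, 1.0,  0.0, -1.0]])
b = np.array([4.0, 5.0])
c = np.array([3.0, 4.0, 0.0, 0.0])

basis, nonbasis = [0, 1], [2, 3]      # x1, x2 basic; surplus vars nonbasic
B, N = A[:, basis], A[:, nonbasis]

pi  = np.linalg.solve(B.T, c[basis])  # pi = B^{-T} c_B  -> (5/3, 2/3)
y_N = c[nonbasis] - N.T @ pi          # dual slacks (reduced costs)
x_B = np.linalg.solve(B, b)           # primal basic values -> (2, 1)

print(pi, y_N, x_B)
# Dual feasible (y_N >= 0) and primal feasible (x_B >= 0): optimal pair.
assert np.all(y_N >= 0) and np.all(x_B >= 0)
```

The basic reduced costs vanish by construction (c_B - B^T π = 0), which is what makes this dual solution complementary to the primal basis.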
Since the column associated with the entering y_j is a column of I, the updated direction vector is just a column of

    B̂^{-1} = [ B^{-T}        0  ]
              [ -N^T B^{-T}  I_N ]

In particular, it is the column of [ B^{-T} ; -N^T B^{-T} ] corresponding to the row of B^{-1} in the primal that is labeled with x_j. Since we ignore the π variables in the ratio test, the ratio test compares the nonnegative entries of this column with the corresponding values of y_N = c̄_N. If there are no nonnegative entries in the column, the dual problem is unbounded, so the primal problem is infeasible. The minimum ratio determines the dual variable y_k that will leave the dual basis. To keep up, the primal variable x_k must enter the primal basis. The pivot is completed by updating the basis inverse and the primal and dual variable values as usual.

2.2 The Dual Simplex Method

Careful study of the steps outlined above reveals that there is no information required for a simplex pivot in the dual that is not already available in the primal revised simplex tableau. In particular, the dual reduced costs can be derived from the values of the primal basic variables, and the dual ratio test computations involve the primal reduced costs and one component of each of the updated primal direction vectors. These direction components can be computed by taking the product of each nonbasic column of A with a single row of B^{-1} (the row corresponding to the infeasible basic primal variable).

The dual simplex method begins with the revised simplex tableau associated with a dual-feasible basis:

    x_B | B^{-1} | b̄
    z   |  -π    | -z̄

The steps are as follows (all terminology is with respect to the primal basis):

1. Leaving Primal Variable. Find a negative basic variable. If there are none, then stop: the current solution is dual feasible and primal feasible, hence optimal. Otherwise, suppose b̄_i < 0.

2. Dual Direction Vector. For each nonbasic variable x_j, compute the ith entry of B^{-1} A_j (the ith row of B^{-1} times A_j). Call these entries ā_j.

3.
Ratio Test / Entering Primal Variable. For each negative entry ā_j computed in the previous step, compute the reduced cost c̄_j and the ratio c̄_j / ā_j. If there are no negative entries, then stop: the dual is unbounded, so the primal is infeasible. Otherwise, let the minimum-magnitude (i.e., least negative) ratio be achieved for column k. Then x_k is the entering variable.
4. Pivot. Compute the rest of the updated primal direction vector d̄ = B^{-1} A_k. Append the column [ d̄ ; c̄_k ] to the tableau and perform an elimination step, pivoting on the ith entry of the added column, as in the primal simplex method. Go to Step 1.

Example 6. Consider the LP

    Min  2x_1 + 3x_2 + x_3
    s.t.  x_1 +  x_2 - 2x_3 + x_4       = -1
         4x_1 - 2x_2 -  x_3       + x_5 = -2
         x_1, ..., x_5 >= 0

with dual

    Max  -u_1 - 2u_2
    s.t.   u_1 + 4u_2 <= 2
           u_1 - 2u_2 <= 3
         -2u_1 -  u_2 <= 1
         u_1, u_2 unrestricted.

An obvious dual-feasible basis is x_B = (x_4, x_5)^T, with π = (0, 0). The primal detached coefficient tableau is

    x_1  x_2  x_3  x_4  x_5 |  b
      1    1   -2    1    0 | -1
      4   -2   -1    0    1 | -2
 z:   2    3    1    0    0 |  0

and the starting revised dual simplex tableau is

    x_4 | 1  0 | -1
    x_5 | 0  1 | -2
    z   | 0  0 |  0

We choose x_5 to leave the basis (the most negative variable), and compute the dual direction vector ā = (4, -2, -1). The reduced cost vector is c̄ = (2, 3, 1), and the minimum-magnitude ratio, taken over the second and third components, is -1, so k = 3. Appending the updated direction vector for x_3 to the tableau gives

    x_4 | 1  0 | -1 | -2
    x_5 | 0  1 | -2 | -1
    z   | 0  0 |  0 |  1
Pivoting gives

    x_4 | 1  -2 |  3
    x_3 | 0  -1 |  2
    z   | 0   1 | -2

Since b̄ >= 0, this basic pair is primal and dual feasible, so it is optimal: x = (0, 0, 2, 3, 0)^T and π = (0, -1).

2.3 Constructing a Dual Feasible Basis

For certain kinds of LPs (such as those where all primal variables have upper and lower bounds), constructing a dual-feasible starting basis is easy. For these problems, it makes sense to construct this basis and solve using the dual simplex method (using special techniques to handle the bound constraints). For problems with more constraints than variables, it may make sense to actually formulate the dual problem and solve it, perhaps using the dual simplex method if a primal-feasible basis for the original problem is easy to construct. In fact, state-of-the-art commercial codes often solve LPs from scratch using these techniques with great success, but the most important use for the dual simplex method is probably still in post-optimality analysis and integer programming algorithms, where it is used to reoptimize after adding a constraint. We will concentrate on this use.

One concern in post-optimality analysis is assessment of the effect of changes to the right-hand-side vector b. If changing b to b' destroys the feasibility of the current basis (i.e., B^{-1} b' >= 0 fails to hold), then we can simply start the dual simplex method from the current basis.

The other post-optimality problem, and the problem in branch-and-bound and cutting-plane algorithms, is to assess the effect of the addition of a new inequality constraint. In this case, the number of rows is increased by one, and we need to add a new variable to the basis. Since the constraint to be added is an inequality, it also introduces a new slack or surplus variable. We can augment the basis with this new variable: if the new constraint is ax + s = β, then the new basis is

    B̂ = [ B    0 ]    with inverse    B̂^{-1} = [ B^{-1}        0 ]
        [ a_B  1 ]                              [ -a_B B^{-1}  1 ]

and if the constraint is ax - s = β, then the new basis is

    B̂ = [ B     0 ]    with inverse    B̂^{-1} = [ B^{-1}       0 ]
        [ a_B  -1 ]                              [ a_B B^{-1}  -1 ]
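Stepping back to the algorithm of Section 2.2, its four steps can be collected into a short program. The following is a minimal sketch only (dense linear solves each iteration instead of a maintained basis inverse, no anti-cycling safeguards, hard-coded tolerances; the function name and interface are ours). Run on the data of Example 6, it reproduces the pivot sequence above:

```python
import numpy as np

def dual_simplex(A, b, c, basis):
    """Dual simplex sketch, starting from a dual-feasible basis.
    Returns (x, pi); raises ValueError if the primal is infeasible."""
    m, n = A.shape
    basis = list(basis)
    while True:
        B = A[:, basis]
        x_B = np.linalg.solve(B, b)
        # Step 1: leaving variable = a negative basic variable.
        if np.all(x_B >= -1e-9):
            break                           # primal feasible: optimal
        i = int(np.argmin(x_B))             # most negative, as in Example 6
        pi = np.linalg.solve(B.T, c[basis])
        nonbasis = [j for j in range(n) if j not in basis]
        # Step 2: ith row of B^{-1} A_j for each nonbasic j.
        e_i = np.zeros(m)
        e_i[i] = 1.0
        row = np.linalg.solve(B.T, e_i) @ A[:, nonbasis]
        # Step 3: ratio test over the negative entries.
        cbar = c[nonbasis] - pi @ A[:, nonbasis]
        candidates = [(cbar[t] / row[t], j)
                      for t, j in enumerate(nonbasis) if row[t] < -1e-9]
        if not candidates:
            raise ValueError("dual unbounded: primal infeasible")
        _, k = max(candidates)              # least-negative ratio
        # Step 4: pivot -- x_k enters, the ith basic variable leaves.
        basis[i] = k
    x = np.zeros(n)
    x[basis] = x_B
    return x, np.linalg.solve(B.T, c[basis])

# Example 6: min 2x1+3x2+x3  s.t.  x1+x2-2x3+x4 = -1,
#                                  4x1-2x2-x3+x5 = -2,  x >= 0.
A = np.array([[1.0,  1.0, -2.0, 1.0, 0.0],
              [4.0, -2.0, -1.0, 0.0, 1.0]])
b = np.array([-1.0, -2.0])
c = np.array([2.0, 3.0, 1.0, 0.0, 0.0])

x, pi = dual_simplex(A, b, c, basis=[3, 4])
print(x, pi)   # x = (0, 0, 2, 3, 0), pi = (0, -1)
```

A real implementation would of course maintain a factorization of B and update it at each pivot rather than re-solving from scratch, exactly as the revised-tableau description above suggests.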
The shadow prices and basic variable values can then be easily computed, and the dual simplex method applied if necessary.

When applying this method in a 0-1 branch-and-bound context, we can branch by adding the constraint x_j <= 0 or x_j >= 1, without regard for the
fact that x_j >= 0 and x_j <= 1 are already constraints in the problem.²

² Commercial-quality codes that handle bounds implicitly don't have to worry about increasing the size of the basis when imposing simple bound constraints or fixing variables in branch-and-bound.

References

[1] K. G. Murty, Operations Research: Deterministic Optimization Models, Prentice Hall.
More informationDiscrete Optimization. Lecture Notes 2
Discrete Optimization. Lecture Notes 2 Disjunctive Constraints Defining variables and formulating linear constraints can be straightforward or more sophisticated, depending on the problem structure. The
More informationGeorge B. Dantzig Mukund N. Thapa. Linear Programming. 1: Introduction. With 87 Illustrations. Springer
George B. Dantzig Mukund N. Thapa Linear Programming 1: Introduction With 87 Illustrations Springer Contents FOREWORD PREFACE DEFINITION OF SYMBOLS xxi xxxiii xxxvii 1 THE LINEAR PROGRAMMING PROBLEM 1
More informationMarginal and Sensitivity Analyses
8.1 Marginal and Sensitivity Analyses Katta G. Murty, IOE 510, LP, U. Of Michigan, Ann Arbor, Winter 1997. Consider LP in standard form: min z = cx, subject to Ax = b, x 0 where A m n and rank m. Theorem:
More informationSimulation. Lecture O1 Optimization: Linear Programming. Saeed Bastani April 2016
Simulation Lecture O Optimization: Linear Programming Saeed Bastani April 06 Outline of the course Linear Programming ( lecture) Integer Programming ( lecture) Heuristics and Metaheursitics (3 lectures)
More informationLecture 5: Duality Theory
Lecture 5: Duality Theory Rajat Mittal IIT Kanpur The objective of this lecture note will be to learn duality theory of linear programming. We are planning to answer following questions. What are hyperplane
More informationLinear Programming Motivation: The Diet Problem
Agenda We ve done Greedy Method Divide and Conquer Dynamic Programming Network Flows & Applications NP-completeness Now Linear Programming and the Simplex Method Hung Q. Ngo (SUNY at Buffalo) CSE 531 1
More informationLinear Programming: Introduction
CSC 373 - Algorithm Design, Analysis, and Complexity Summer 2016 Lalla Mouatadid Linear Programming: Introduction A bit of a historical background about linear programming, that I stole from Jeff Erickson
More informationTribhuvan University Institute Of Science and Technology Tribhuvan University Institute of Science and Technology
Tribhuvan University Institute Of Science and Technology Tribhuvan University Institute of Science and Technology Course Title: Linear Programming Full Marks: 50 Course No. : Math 403 Pass Mark: 17.5 Level
More informationAMS : Combinatorial Optimization Homework Problems - Week V
AMS 553.766: Combinatorial Optimization Homework Problems - Week V For the following problems, A R m n will be m n matrices, and b R m. An affine subspace is the set of solutions to a a system of linear
More informationDesign and Analysis of Algorithms (V)
Design and Analysis of Algorithms (V) An Introduction to Linear Programming Guoqiang Li School of Software, Shanghai Jiao Tong University Homework Assignment 2 is announced! (deadline Apr. 10) Linear Programming
More information16.410/413 Principles of Autonomy and Decision Making
16.410/413 Principles of Autonomy and Decision Making Lecture 17: The Simplex Method Emilio Frazzoli Aeronautics and Astronautics Massachusetts Institute of Technology November 10, 2010 Frazzoli (MIT)
More informationLinear Programming and its Applications
Linear Programming and its Applications Outline for Today What is linear programming (LP)? Examples Formal definition Geometric intuition Why is LP useful? A first look at LP algorithms Duality Linear
More information4 LINEAR PROGRAMMING (LP) E. Amaldi Fondamenti di R.O. Politecnico di Milano 1
4 LINEAR PROGRAMMING (LP) E. Amaldi Fondamenti di R.O. Politecnico di Milano 1 Mathematical programming (optimization) problem: min f (x) s.t. x X R n set of feasible solutions with linear objective function
More information4.1 The original problem and the optimal tableau
Chapter 4 Sensitivity analysis The sensitivity analysis is performed after a given linear problem has been solved, with the aim of studying how changes to the problem affect the optimal solution In particular,
More informationR n a T i x = b i} is a Hyperplane.
Geometry of LPs Consider the following LP : min {c T x a T i x b i The feasible region is i =1,...,m}. X := {x R n a T i x b i i =1,...,m} = m i=1 {x Rn a T i x b i} }{{} X i The set X i is a Half-space.
More informationIntroductory Operations Research
Introductory Operations Research Theory and Applications Bearbeitet von Harvir Singh Kasana, Krishna Dev Kumar 1. Auflage 2004. Buch. XI, 581 S. Hardcover ISBN 978 3 540 40138 4 Format (B x L): 15,5 x
More informationLinear Programming in Small Dimensions
Linear Programming in Small Dimensions Lekcija 7 sergio.cabello@fmf.uni-lj.si FMF Univerza v Ljubljani Edited from slides by Antoine Vigneron Outline linear programming, motivation and definition one dimensional
More informationNotes for Lecture 20
U.C. Berkeley CS170: Intro to CS Theory Handout N20 Professor Luca Trevisan November 13, 2001 Notes for Lecture 20 1 Duality As it turns out, the max-flow min-cut theorem is a special case of a more general
More informationGraphs that have the feasible bases of a given linear
Algorithmic Operations Research Vol.1 (2006) 46 51 Simplex Adjacency Graphs in Linear Optimization Gerard Sierksma and Gert A. Tijssen University of Groningen, Faculty of Economics, P.O. Box 800, 9700
More informationLinear Programming Terminology
Linear Programming Terminology The carpenter problem is an example of a linear program. T and B (the number of tables and bookcases to produce weekly) are decision variables. The profit function is an
More information4.1 Graphical solution of a linear program and standard form
4.1 Graphical solution of a linear program and standard form Consider the problem min c T x Ax b x where x = ( x1 x ) ( 16, c = 5 ), b = 4 5 9, A = 1 7 1 5 1. Solve the problem graphically and determine
More informationMATLAB Solution of Linear Programming Problems
MATLAB Solution of Linear Programming Problems The simplex method is included in MATLAB using linprog function. All is needed is to have the problem expressed in the terms of MATLAB definitions. Appendix
More informationAdvanced Operations Research Prof. G. Srinivasan Department of Management Studies Indian Institute of Technology, Madras
Advanced Operations Research Prof. G. Srinivasan Department of Management Studies Indian Institute of Technology, Madras Lecture 18 All-Integer Dual Algorithm We continue the discussion on the all integer
More informationLecture 2 Convex Sets
Optimization Theory and Applications Lecture 2 Convex Sets Prof. Chun-Hung Liu Dept. of Electrical and Computer Engineering National Chiao Tung University Fall 2016 2016/9/29 Lecture 2: Convex Sets 1 Outline
More informationOutline. Combinatorial Optimization 2. Finite Systems of Linear Inequalities. Finite Systems of Linear Inequalities. Theorem (Weyl s theorem :)
Outline Combinatorial Optimization 2 Rumen Andonov Irisa/Symbiose and University of Rennes 1 9 novembre 2009 Finite Systems of Linear Inequalities, variants of Farkas Lemma Duality theory in Linear Programming
More information4 Integer Linear Programming (ILP)
TDA6/DIT37 DISCRETE OPTIMIZATION 17 PERIOD 3 WEEK III 4 Integer Linear Programg (ILP) 14 An integer linear program, ILP for short, has the same form as a linear program (LP). The only difference is that
More informationLinear Optimization and Extensions: Theory and Algorithms
AT&T Linear Optimization and Extensions: Theory and Algorithms Shu-Cherng Fang North Carolina State University Sarai Puthenpura AT&T Bell Labs Prentice Hall, Englewood Cliffs, New Jersey 07632 Contents
More informationLecture notes on the simplex method September We will present an algorithm to solve linear programs of the form. maximize.
Cornell University, Fall 2017 CS 6820: Algorithms Lecture notes on the simplex method September 2017 1 The Simplex Method We will present an algorithm to solve linear programs of the form maximize subject
More informationLesson 17. Geometry and Algebra of Corner Points
SA305 Linear Programming Spring 2016 Asst. Prof. Nelson Uhan 0 Warm up Lesson 17. Geometry and Algebra of Corner Points Example 1. Consider the system of equations 3 + 7x 3 = 17 + 5 = 1 2 + 11x 3 = 24
More informationCMPSCI611: The Simplex Algorithm Lecture 24
CMPSCI611: The Simplex Algorithm Lecture 24 Let s first review the general situation for linear programming problems. Our problem in standard form is to choose a vector x R n, such that x 0 and Ax = b,
More informationAdvanced Operations Research Prof. G. Srinivasan Department of Management Studies Indian Institute of Technology, Madras
Advanced Operations Research Prof. G. Srinivasan Department of Management Studies Indian Institute of Technology, Madras Lecture 16 Cutting Plane Algorithm We shall continue the discussion on integer programming,
More informationDepartment of Mathematics Oleg Burdakov of 30 October Consider the following linear programming problem (LP):
Linköping University Optimization TAOP3(0) Department of Mathematics Examination Oleg Burdakov of 30 October 03 Assignment Consider the following linear programming problem (LP): max z = x + x s.t. x x
More informationChap5 The Theory of the Simplex Method
College of Management, NCTU Operation Research I Fall, Chap The Theory of the Simplex Method Terminology Constraint oundary equation For any constraint (functional and nonnegativity), replace its,, sign
More informationDecision Aid Methodologies In Transportation Lecture 1: Polyhedra and Simplex method
Decision Aid Methodologies In Transportation Lecture 1: Polyhedra and Simplex method Chen Jiang Hang Transportation and Mobility Laboratory April 15, 2013 Chen Jiang Hang (Transportation and Mobility Decision
More information6. Lecture notes on matroid intersection
Massachusetts Institute of Technology 18.453: Combinatorial Optimization Michel X. Goemans May 2, 2017 6. Lecture notes on matroid intersection One nice feature about matroids is that a simple greedy algorithm
More informationCS 372: Computational Geometry Lecture 10 Linear Programming in Fixed Dimension
CS 372: Computational Geometry Lecture 10 Linear Programming in Fixed Dimension Antoine Vigneron King Abdullah University of Science and Technology November 7, 2012 Antoine Vigneron (KAUST) CS 372 Lecture
More informationCSE 40/60236 Sam Bailey
CSE 40/60236 Sam Bailey Solution: any point in the variable space (both feasible and infeasible) Cornerpoint solution: anywhere two or more constraints intersect; could be feasible or infeasible Feasible
More informationLecture 4: Linear Programming
COMP36111: Advanced Algorithms I Lecture 4: Linear Programming Ian Pratt-Hartmann Room KB2.38: email: ipratt@cs.man.ac.uk 2017 18 Outline The Linear Programming Problem Geometrical analysis The Simplex
More informationTHEORY OF LINEAR AND INTEGER PROGRAMMING
THEORY OF LINEAR AND INTEGER PROGRAMMING ALEXANDER SCHRIJVER Centrum voor Wiskunde en Informatica, Amsterdam A Wiley-Inter science Publication JOHN WILEY & SONS^ Chichester New York Weinheim Brisbane Singapore
More informationApplied Lagrange Duality for Constrained Optimization
Applied Lagrange Duality for Constrained Optimization Robert M. Freund February 10, 2004 c 2004 Massachusetts Institute of Technology. 1 1 Overview The Practical Importance of Duality Review of Convexity
More informationLinear Programming. Linear Programming. Linear Programming. Example: Profit Maximization (1/4) Iris Hui-Ru Jiang Fall Linear programming
Linear Programming 3 describes a broad class of optimization tasks in which both the optimization criterion and the constraints are linear functions. Linear Programming consists of three parts: A set of
More informationIntroduction to Operations Research Prof. G. Srinivasan Department of Management Studies Indian Institute of Technology, Madras
Introduction to Operations Research Prof. G. Srinivasan Department of Management Studies Indian Institute of Technology, Madras Module - 05 Lecture - 24 Solving LPs with mixed type of constraints In the
More informationLinear Programming Duality and Algorithms
COMPSCI 330: Design and Analysis of Algorithms 4/5/2016 and 4/7/2016 Linear Programming Duality and Algorithms Lecturer: Debmalya Panigrahi Scribe: Tianqi Song 1 Overview In this lecture, we will cover
More informationFinite Math Linear Programming 1 May / 7
Linear Programming Finite Math 1 May 2017 Finite Math Linear Programming 1 May 2017 1 / 7 General Description of Linear Programming Finite Math Linear Programming 1 May 2017 2 / 7 General Description of
More informationAdvanced Linear Programming. Organisation. Lecturers: Leen Stougie, CWI and Vrije Universiteit in Amsterdam
Advanced Linear Programming Organisation Lecturers: Leen Stougie, CWI and Vrije Universiteit in Amsterdam E-mail: stougie@cwi.nl Marjan van den Akker Universiteit Utrecht marjan@cs.uu.nl Advanced Linear
More information