Cornell University, Fall 2017
CS 6820: Algorithms
Lecture notes on the simplex method, September

1  The Simplex Method

We will present an algorithm to solve linear programs of the form

    maximize    c·x
    subject to  Ax ≤ b                (1)
                x ≥ 0

assuming that b ≥ 0, so that x = 0 is guaranteed to be a feasible solution. Let n denote the number of variables and let m denote the number of constraints.

A simple transformation modifies any such linear program into a form in which each variable is constrained to be non-negative, and all other linear constraints are expressed as equations rather than inequalities. The key is to introduce additional variables, called slack variables, which account for the difference between the left and right sides of each inequality in the original linear program. In other words, linear program (1) is equivalent to

    maximize    c·x
    subject to  Ax + y = b            (2)
                x, y ≥ 0

where x ∈ R^n and y ∈ R^m. The solution set of {Ax + y = b, x ≥ 0, y ≥ 0} is a polytope in the (n + m)-dimensional vector space of ordered pairs (x, y) ∈ R^n × R^m. The simplex algorithm is an iterative algorithm to solve linear programs of the form (2) by walking from vertex to vertex, along the edges of this polytope, until arriving at a vertex which maximizes the objective function c·x.

To illustrate the simplex method, for concreteness we will consider the following linear program.

    maximize    2x_1 + 3x_2
    subject to   x_1 +  x_2 ≤  8
                2x_1 +  x_2 ≤ 12
                 x_1 + 2x_2 ≤ 14
                x_1, x_2 ≥ 0

This LP has so few variables, and so few constraints, that it is easy to solve it by brute-force enumeration of the vertices of the polytope, which in this case is a 2-dimensional polygon.
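The brute-force approach just mentioned is easy to carry out in code. The following sketch (ours, not part of the notes; the helper name solve_2x2 is an invention) enumerates the intersections of all pairs of constraint lines, keeps the feasible ones, and evaluates the objective at each.

```python
from itertools import combinations

# The example LP: maximize c.x subject to a_i . x <= b_i, with the
# non-negativity constraints folded in as -x1 <= 0 and -x2 <= 0.
A = [(1.0, 1.0), (2.0, 1.0), (1.0, 2.0), (-1.0, 0.0), (0.0, -1.0)]
b = [8.0, 12.0, 14.0, 0.0, 0.0]
c = (2.0, 3.0)

def solve_2x2(a1, a2, r1, r2):
    """Solve a1.x = r1, a2.x = r2 by Cramer's rule; None if the lines are parallel."""
    det = a1[0] * a2[1] - a1[1] * a2[0]
    if abs(det) < 1e-12:
        return None
    return ((r1 * a2[1] - a1[1] * r2) / det, (a1[0] * r2 - r1 * a2[0]) / det)

# Each vertex of the polygon is a feasible intersection of two constraint lines.
vertices = set()
for i, j in combinations(range(len(A)), 2):
    p = solve_2x2(A[i], A[j], b[i], b[j])
    if p and all(A[k][0] * p[0] + A[k][1] * p[1] <= b[k] + 1e-9 for k in range(len(A))):
        vertices.add(p)

best = max(vertices, key=lambda p: c[0] * p[0] + c[1] * p[1])
print(sorted(vertices))                     # the five vertices of the polygon
print(best, c[0] * best[0] + c[1] * best[1])  # (2.0, 6.0) 22.0
```

This recovers the five vertices listed next and confirms the optimum described there.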

The vertices of the polygon are

    (0, 7), (2, 6), (4, 4), (6, 0), (0, 0).

The objective function 2x_1 + 3x_2 is maximized at the vertex (2, 6), where it attains the value 22. It is also easy to certify that this is the optimal value, given that the value is attained at (2, 6): simply add together the first and third inequalities to obtain

     x_1 +  x_2 ≤  8
     x_1 + 2x_2 ≤ 14
    -----------------
    2x_1 + 3x_2 ≤ 22,

which ensures that no point in the feasible set attains an objective value greater than 22.

To solve the linear program using the simplex method, we first apply the generic transformation described earlier, to rewrite it in equational form as

    maximize    2x_1 + 3x_2
    subject to   x_1 +  x_2 + y_1 =  8
                2x_1 +  x_2 + y_2 = 12
                 x_1 + 2x_2 + y_3 = 14
                x_1, x_2, y_1, y_2, y_3 ≥ 0

From now on, we will be choosing a subset of two of the five variables (called the basis), setting them equal to zero, and using the linear equations to express the remaining three variables, as well as the objective function, as a function of the two variables in the basis. Initially the basis is {x_1, x_2} and the linear program can be written in the form

    maximize    2x_1 + 3x_2
    subject to  y_1 =  8 -  x_1 -  x_2
                y_2 = 12 - 2x_1 -  x_2
                y_3 = 14 -  x_1 - 2x_2
                x_1, x_2, y_1, y_2, y_3 ≥ 0

which emphasizes that each of y_1, y_2, y_3 is determined as a function of x_1, x_2. Now, as long as the basis contains a variable which has a positive coefficient in the objective function, we select one such variable and greedily increase its value until one of the non-negativity constraints becomes tight. At that point, one of the other variables attains the value zero: it enters the basis, and the variable whose value we increased leaves the basis. For example, we could choose to increase x_1 from 0 to 6, at which point y_2 = 0. Then the new basis becomes {y_2, x_2}. Rewriting the equation y_2 = 12 - 2x_1 - x_2 as

    x_1 = 6 - (1/2) y_2 - (1/2) x_2,        (3)

we may substitute the right side of (3) in place of x_1 everywhere in the above linear program, arriving at the equivalent form

    maximize    12 - y_2 + 2x_2
    subject to  y_1 = 2 + (1/2) y_2 - (1/2) x_2
                x_1 = 6 - (1/2) y_2 - (1/2) x_2
                y_3 = 8 + (1/2) y_2 - (3/2) x_2
                x_1, x_2, y_1, y_2, y_3 ≥ 0

At this point, x_2 still has a positive coefficient in the objective function, so we increase x_2 from 0 to 4, at which point y_1 = 0. Now x_2 leaves the basis, and the new basis is {y_1, y_2}. We use the equation x_2 = 4 + y_2 - 2y_1 to substitute a function of the basis variables in place of x_2 everywhere it appears, arriving at the new linear program

    maximize    20 - 4y_1 + y_2
    subject to  x_2 = 4 + y_2 - 2y_1
                x_1 = 4 - y_2 + y_1
                y_3 = 2 - y_2 + 3y_1
                x_1, x_2, y_1, y_2, y_3 ≥ 0

Now we increase y_2 from 0 to 2, at which point y_3 = 0 and the new basis is {y_1, y_3}. Substituting y_2 = 2 - y_3 + 3y_1 allows us to rewrite the linear program as

    maximize    22 - y_1 - y_3
    subject to  x_2 = 6 + y_1 - y_3
                x_1 = 2 - 2y_1 + y_3        (4)
                y_2 = 2 + 3y_1 - y_3
                x_1, x_2, y_1, y_2, y_3 ≥ 0

At this point, there is no variable with a positive coefficient in the objective function, and we stop. It is trivial to verify that the solution defined by the current iteration, namely x_1 = 2, x_2 = 6, y_1 = 0, y_2 = 2, y_3 = 0, is optimal. The reason is that we have managed to write the objective function in the form 22 - y_1 - y_3. Since the coefficient on each of the variables y_1, y_3 is negative, and y_1 and y_3 are constrained to take non-negative values, the largest possible value of the objective function is achieved by setting both y_1 and y_3 to zero, as our solution does.

More generally, if the simplex method terminates, it means that we have found an equivalent representation of the original linear program (2) in a form where the objective function

attaches a non-positive coefficient to each of the basis variables. Since the basis variables are required to be non-negative, the objective function is maximized by setting all the basis variables to zero, which certifies that the solution at the end of the final iteration is optimal.

There is another way that the simplex method can terminate, which is not illustrated by the example above. It may happen, in some iteration, that when we choose a variable with a positive coefficient in the objective function, we can increase this variable to an arbitrarily large value without violating any of the non-negativity constraints. (This situation happens when the increasing variable appears with a non-negative coefficient in each of the equations that defines the non-basic variables as a linear function of the variables in the basis.) In that case, we have verified that the optimum of the linear program is equal to +∞, i.e. the objective function c·x takes unboundedly large values as x ranges over the set of vectors satisfying the constraints.

Note that, in our running example, the final objective function assigned coefficient -1 to both y_1 and y_3. This is closely related to the fact that the simple certificate of optimality described above (before we started running the simplex algorithm) was obtained by summing the first and third inequalities of the original linear program, each with a coefficient of 1. We will see in the following section that this is not a coincidence.

Before leaving this discussion of the simplex method, we must touch upon a subtle issue regarding the question of whether the algorithm always terminates. A basis is an n-element subset of the n + m variables, so there are at most (m+n choose n) bases; if we can ensure that the algorithm never returns to the same basis as in a previous iteration, then it must terminate.
Note that each basis determines a unique point (x, y) ∈ R^{n+m}, defined by setting the basis variables to zero and assigning to the remaining variables the unique values that satisfy the equation Ax + y = b, and as the algorithm proceeds from basis to basis, the objective function value at the corresponding points never decreases. If the objective function strictly increases when moving from basis B to basis B', then the algorithm is guaranteed never to return to basis B, since the objective function value is now strictly greater than its value at B, and it will never decrease. On the other hand, it is possible for the simplex algorithm to shift from one basis to a different basis with the same objective function value; this is called a degenerate pivot, and it happens when the set of variables whose value is 0 at the current solution is a strict superset of the basis.

There exist pivot rules (i.e., rules for selecting the next basis in the simplex algorithm) that are designed to avoid infinite loops of degenerate pivots. Perhaps the simplest such rule is Bland's rule, which always chooses to remove from the basis the lowest-numbered variable that has a positive coefficient in the objective function. (And, in case there is more than one variable that may move into the basis to replace it, the rule also chooses the lowest-numbered such variable.) Although the rule is simple to define, proving that it avoids infinite loops is not easy, and we will omit the proof from these notes. Instead, the following section is devoted to presenting a pivot rule that is much more algorithmically costly, but leads to an easier and more conceptual proof of termination.
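For concreteness, here is a self-contained sketch of the pivoting procedure described above (our code, not part of the notes; the names simplex, rows, and obj are inventions), using exact rational arithmetic and Bland's rule for both choices of variable. Following these notes' convention, the "basis" is the set of variables currently fixed at zero.

```python
from fractions import Fraction

def simplex(A, b, c):
    """A minimal dictionary-form simplex for: maximize c.x s.t. Ax <= b, x >= 0,
    assuming b >= 0.  Uses Bland's rule (lowest-numbered variable) for both the
    variable that leaves the basis and the one that enters it.
    Returns (optimal value, x), or None if the LP is unbounded.
    As in the notes, the basis is the set of variables currently set to 0."""
    m, n = len(A), len(c)
    # Variables 0..n-1 are x_j; variables n..n+m-1 are the slacks y_i.
    # Every variable outside the basis is stored as const + sum(coef[v]*v)
    # over the basis variables v; initially the basis is {0, ..., n-1}.
    rows = {n + i: [Fraction(b[i]), {j: Fraction(-A[i][j]) for j in range(n)}]
            for i in range(m)}
    obj = [Fraction(0), {j: Fraction(c[j]) for j in range(n)}]
    while True:
        # Bland's rule: lowest-numbered basis variable with a positive coefficient.
        entering = min((v for v, cf in obj[1].items() if cf > 0), default=None)
        if entering is None:
            break                     # no positive coefficient: optimal
        # Ratio test: how far can 'entering' grow before some variable hits 0?
        leaving, bound = None, None
        for v, (const, coefs) in rows.items():
            if coefs[entering] < 0:
                ratio = -const / coefs[entering]
                if bound is None or ratio < bound or (ratio == bound and v < leaving):
                    leaving, bound = v, ratio
        if leaving is None:
            return None               # 'entering' can grow forever: unbounded
        # Pivot: solve the 'leaving' row for 'entering', substitute everywhere.
        const, coefs = rows.pop(leaving)
        piv = coefs.pop(entering)
        new_const = -const / piv
        new_coefs = {v: -cf / piv for v, cf in coefs.items()}
        new_coefs[leaving] = 1 / piv
        for row in list(rows.values()) + [obj]:
            cf = row[1].pop(entering, Fraction(0))
            row[0] += cf * new_const
            for v, nc in new_coefs.items():
                row[1][v] = row[1].get(v, Fraction(0)) + cf * nc
        rows[entering] = [new_const, new_coefs]
    x = [rows[j][0] if j in rows else Fraction(0) for j in range(n)]
    return obj[0], x

# The running example: optimum 22 at (x1, x2) = (2, 6).
print(simplex([[1, 1], [2, 1], [1, 2]], [8, 12, 14], [2, 3]))
```

On the example of Section 1 this reproduces the pivots performed above; when the ratio test finds no limiting variable, the routine reports unboundedness, which is exactly the second termination mode discussed above.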

1.1  A non-cycling pivot rule based on infinitesimals

In order for a degenerate pivot to be possible when solving a given linear program using the simplex method, the equation Ax + y = b must have a solution in which n + 1 or more of the variables take the value 0. Generically, a system of m linear equations in m + n unknowns does not have solutions with strictly more than n of the variables equal to 0. If we modify the linear system Ax + y = b by perturbing it slightly, we should expect that such a modification will, generically, eliminate the possibility of encountering degenerate pivots when running the simplex algorithm.

These algebraic considerations can also be visualized in geometric terms. The linear system Ax + y = b consists of m linearly independent equations in m + n unknowns, so its solution set is an n-dimensional subspace of the (m + n)-dimensional vector space of all (x, y) pairs. In this n-dimensional space, the inequalities x ≥ 0, y ≥ 0 define m + n halfspaces, each bounded by a hyperplane of the form {x_j = 0} or {y_i = 0}. In n-dimensional space, n hyperplanes in general position will intersect at a single point, whereas n + 1 or more hyperplanes in general position will have an empty intersection. For example, in three dimensions, any three planes in general position have one intersection point, whereas four planes in general position have an empty intersection. However, it is perfectly possible for four planes (not in general position) to have a non-empty intersection, as when the four faces of a square pyramid meet at its apex. If one were to run the simplex algorithm to optimize a linear function on a three-dimensional polyhedron such as a square pyramid, it is possible that one or more iterations of the algorithm would start at the apex of the pyramid, with a basis consisting of any three of the four slack variables corresponding to the four sides of the pyramid.
A degenerate pivot would substitute a new basis by replacing one of these three slack variables with the slack variable that was omitted from the previous basis. After performing this operation, and setting the variables in the new basis equal to zero, we remain situated at the same vertex as before, namely the apex of the pyramid.

Now consider modifying the polyhedron by perturbing the height of each face of the pyramid by a very small amount, potentially a different small quantity for each of the faces. This operation modifies the shape of the top of the polyhedron slightly; the apex point is replaced with a very short line segment joining two vertices, as illustrated in Figure 1.

    Figure 1: A pyramid and its generic perturbation

The modified polyhedron has no degenerate vertices; every vertex is situated at the intersection of three (and only three) faces. The degenerate pivot at the apex of the pyramid has been replaced by a non-degenerate pivot that makes progress by moving along the short line segment joining the two topmost vertices of the modified polyhedron.

The foregoing discussion about perturbations could potentially be implemented by adding a short random vector ε to the vector b occurring on the right side of the equation Ax + y = b,

resulting in the new equation Ax + y = b + ε. To justify that this works, one first needs to reason about the probability that the modified polyhedron has no degenerate vertices. This probability turns out to be equal to 1, assuming that the distribution of the vector ε is absolutely continuous with respect to Lebesgue measure, but that fact requires proof. Next one would need to reason about whether every vertex of the new polyhedron is situated near a vertex of the original polyhedron (i.e., at a distance tending to zero as the length of the vector ε tends to zero). This also turns out to be true, but again requires proof. Instead of using probability, we will adopt a more algebraic formalism that gracefully sidesteps some of these issues. The idea will be to treat the coefficients of the linear program as belonging to an extension field of the real numbers that is still totally ordered, but contains infinitesimal numbers representing the perturbations.

Definition 1. An ordered field is a field F with a distinguished subset F_{>0}, called the set of positive elements, that is closed under addition and multiplication, does not contain 0, and satisfies the property that for every non-zero x ∈ F, exactly one of x and -x is positive.

One can define a total ordering on F by specifying that x < x' if and only if x' - x is positive. Inequalities defined in this way obey the usual algebraic rules for manipulating inequalities, for example the rule which asserts that the validity of an inequality is preserved under scaling both sides by the same positive scalar.

Definition 2. If F is an ordered field that contains R as a subfield, an element x ∈ F is called finite if there is some r ∈ R such that -r < x < r, and otherwise we refer to x as infinite. If x ≠ 0 and x^{-1} is infinite, we say that x is infinitesimal.

The finite elements constitute a subring F_< and the infinitesimals are an ideal in this subring.
There is a surjective homomorphism F_< → R whose kernel is the ideal of infinitesimal elements. This homomorphism is defined by mapping each finite x ∈ F to the real number

    [x] = inf { r ∈ R : r > x }.        (5)

Lemma 1. There exists an ordered field F that contains R as a subfield, and whose ideal of infinitesimal elements is an infinite-dimensional vector space over R.

Proof. Let ε be a formal variable (i.e., a symbol with no numerical meaning) and let F be the field of rational functions R(ε). In other words, an element of F is an equivalence class of fractions P/Q, where P and Q are both polynomials in the formal variable ε, with real coefficients, and Q ≠ 0. The equivalence relation is defined by specifying that P_1/Q_1 = P_2/Q_2 if and only if P_1 Q_2 = P_2 Q_1. For example, (3 - 3ε)/(1 - ε^3) and 3/(1 + ε + ε^2) are two expressions representing the same element of R(ε).

The ordering of F is defined as follows. A non-zero polynomial P = a_0 + a_1 ε + ... + a_n ε^n is positive if and only if the first non-zero element of the coefficient sequence a_0, a_1, ..., a_n is positive. A quotient P/Q is positive if and only if P and Q are either both positive or both negative. Under this ordering, the monomials ε, ε^2, ε^3, ... constitute an infinite set of infinitesimal elements that are linearly independent over R.
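The ordering in this proof is easy to experiment with. The sketch below is ours, not part of the notes: the class name Poly is an invention, and it models only the subring R[ε] of polynomials rather than all of R(ε). It compares two elements by the sign of the first non-zero coefficient of their difference, exactly as in Lemma 1.

```python
class Poly:
    """An element a0 + a1*eps + a2*eps^2 + ... of the polynomial ring R[eps],
    ordered as in Lemma 1: positive iff its first non-zero coefficient is."""
    def __init__(self, *coeffs):
        self.c = list(coeffs)
    def __sub__(self, other):
        n = max(len(self.c), len(other.c))
        a = self.c + [0.0] * (n - len(self.c))
        b = other.c + [0.0] * (n - len(other.c))
        return Poly(*[s - t for s, t in zip(a, b)])
    def positive(self):
        for a in self.c:
            if a != 0:
                return a > 0
        return False            # the zero element is not positive
    def __lt__(self, other):
        return (other - self).positive()

eps  = Poly(0, 1)        # infinitesimal: 0 < eps < r for every real r > 0
eps2 = Poly(0, 0, 1)     # eps^2 is infinitesimal even compared with eps
print(Poly(0) < eps, eps < Poly(1e-9), eps2 < eps)   # True True True
```

This is enough structure to run the perturbed simplex method of the next section with coefficients in R[ε].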

We defined linear programs and the simplex algorithm using the field of real numbers, but the problem definition and the algorithm both make sense over any ordered field. In particular, we can let F be an ordered extension field of R that contains m linearly independent infinitesimal elements ε_1, ε_2, ..., ε_m, and we can modify the linear program (2) by replacing the equation Ax + y = b with Ax + y = b + ε, where ε denotes the column vector (ε_1, ε_2, ..., ε_m).

Claim 2. Fix a matrix A and vector b over the real numbers, and fix an ordered extension field F ⊇ R that contains m linearly independent infinitesimal elements ε_1, ε_2, ..., ε_m. Let ε = (ε_1, ε_2, ..., ε_m). For any x ∈ F^n and y ∈ F^m satisfying Ax + y = b + ε, at most n of the elements x_1, ..., x_n, y_1, ..., y_m are equal to 0.

Proof. For i = 1, ..., m the equation

    ε_i = Σ_{j=1}^n a_{ij} x_j + y_i - b_i

shows that ε_i belongs to the R-linear subspace of F generated by {1, x_1, ..., x_n, y_1, ..., y_m}. Of course, the element 1 ∈ F also belongs to this subspace, which we will denote by L. Since 1, ε_1, ..., ε_m are linearly independent over R, the dimension of L over R is at least m + 1. In particular, since the set {1, x_1, ..., x_n, y_1, ..., y_m} spans L, it must contain at least m + 1 non-zero elements, and hence at most n of its elements are equal to 0.

Claim 3. Fix A, b, F, ε as in Claim 2. When one solves the linear program

    maximize    c·x
    subject to  Ax + y = b + ε        (6)
                x, y ≥ 0

using the simplex method, there are no degenerate pivots, and the algorithm terminates after performing at most (m+n choose n) pivots.

Proof. A degenerate pivot, by definition, occurs when n + 1 or more coordinates of the vectors x, y are equal to zero. Claim 2 shows that this can never happen when (x, y) belongs to the solution set of Ax + y = b + ε. Hence, there are no degenerate pivots.
This means that the value of the objective function c·x (interpreted as an element of the ordered field F) strictly increases with each iteration of the algorithm. Consequently, no two iterations of the algorithm can use the same basis, and the number of iterations is bounded above by the number of bases, which is (m+n choose n).

A pivot rule that guarantees termination. To define a non-cycling pivot rule for the simplex method solving an ordinary linear program (i.e., with coefficients in R), our strategy will be to run the simplex method in tandem on two linear programs: one defined over R with constraint set {Ax + y = b, x, y ≥ 0}, and another defined over the extension field F with constraint set {Ax + y = b + ε, x, y ≥ 0}. Let us distinguish the two linear programs

by calling the first one LP_R and the second one LP_F. We run the simplex method to solve LP_F, and we let x^(t), y^(t) denote the values of the vectors x, y at the start of iteration t, for t = 0, 1, ..., T, where T either denotes the final iteration of the algorithm, or the last iteration in which all components of the vector x^(T) are finite. Recall the homomorphism x ↦ [x] from F_< to R defined in equation (5). We will abuse notation and apply this homomorphism to vectors as well: if x is a vector with components in F_<, we let [x] denote the vector over R obtained by applying the operation x_i ↦ [x_i] to each component of x.

Now take the sequence of vector pairs {x^(t), y^(t)}, t = 0, ..., T, representing the execution of the simplex method solving LP_F, and translate it to the sequence of vector pairs {[x^(t)], [y^(t)]}, t = 0, ..., T. This sequence represents an initial segment of a valid execution of the simplex method solving LP_R. In every iteration, n of the variables in the set {[x_j^(t)] : j = 1, ..., n} ∪ {[y_i^(t)] : i = 1, ..., m} are equal to zero, because the n variables in the basis of the corresponding iteration of the LP_F execution are equal to 0, and [0] = 0. The remaining m variables in the set are non-negative, because their counterparts in the LP_F execution are non-negative, and the homomorphism x ↦ [x] preserves non-negativity.

To conclude our analysis we will show that {[x^(t)], [y^(t)]}, t = 0, ..., T, represents a terminating execution of the simplex method solving LP_R. By the definition of T, we know that in iteration T the simplex method solving LP_F either terminates (because, when the objective function is written in terms of the current basis, every coefficient is non-positive) or it chooses a variable in the basis and increases its value from 0 to an infinite element of F. In the former case the simplex method solving LP_R also terminates at iteration T, because its objective function coefficients are non-positive. (The homomorphism x ↦ [x] preserves non-positivity.)
In the latter case, the simplex method solving LP_R terminates in iteration T because it discovers that the optimum is equal to +∞.

1.2  The simplex method takes exponential time in the worst case

An example due to Klee and Minty illustrates that the simplex method can take exponential time in the worst case. Consider the linear program

    maximize    x_n
    subject to  0 ≤ x_1 ≤ 1                            (7)
                δx_i ≤ x_{i+1} ≤ 1 - δx_i   for 1 ≤ i < n

Here, δ is a positive number less than 1/2.

The polyhedron defined by the constraints of this linear program is shaped like an n-dimensional hypercube with tilted sides, as depicted in Figure 2. It is called a Klee-Minty cube. There is an execution of the simplex method that visits each of the 2^n vertices of the Klee-Minty cube, starting from (0, 0, ..., 0) and ending at (0, 0, ..., 1). The sequence of vertices can be defined recursively as follows. The back face of the Klee-Minty cube, where the equation x_n = δx_{n-1} is satisfied, is a copy of the (n-1)-dimensional Klee-Minty cube. We run an execution of the simplex method on this (n-1)-dimensional Klee-Minty cube, with the modified objective function x_{n-1}. The equation x_n = δx_{n-1} guarantees that

    Figure 2: A Klee-Minty cube in three dimensions

the true objective function increases as the modified objective function increases, which means that this remains a valid execution of the simplex method on the back face of the n-dimensional Klee-Minty cube. After reaching the vertex (0, 0, ..., 1, δ), we move from the back face to the front face, where the equation x_n = 1 - δx_{n-1} is satisfied, arriving at the vertex (0, 0, ..., 1, 1 - δ). Then we run through the vertices of the front face by reversing the order in which we visited the corresponding vertices on the back face. As we do this, x_{n-1} strictly decreases with each pivot; due to the equation x_n = 1 - δx_{n-1}, this means that x_n strictly increases with each pivot, confirming that this is a valid execution of the simplex algorithm.

2  The Simplex Method and Strong Duality

An important consequence of the correctness and termination of the simplex algorithm is linear programming duality, which asserts that for every linear program with a maximization objective, there is a related linear program with a minimization objective whose optimum matches the optimum of the first LP.

Theorem 4. Consider any pair of linear programs of the form

    maximize    c·x                 minimize    b·η
    subject to  Ax ≤ b      and     subject to  A^T η ≥ c        (8)
                x ≥ 0                           η ≥ 0

If the optimum of the first linear program is finite, then both linear programs have the same optimum value.

Proof. Before delving into the formal proof, the following intuition is useful. If a_i denotes the i-th row of the matrix A, then the relation Ax ≤ b can equivalently be expressed by stating that a_i · x ≤ b_i for i = 1, ..., m. For any m-tuple of non-negative coefficients η_1, ..., η_m, we

can form a weighted sum of these inequalities,

    Σ_{i=1}^m η_i (a_i · x) ≤ Σ_{i=1}^m η_i b_i,        (9)

obtaining an inequality implied by Ax ≤ b. Depending on the choice of weights η_1, ..., η_m, the inequality (9) may or may not imply an upper bound on the quantity c·x, for all x ≥ 0. The case in which (9) implies an upper bound on c·x is when, for each variable x_j (j = 1, ..., n), the coefficient of x_j on the left side of (9) is greater than or equal to the coefficient of x_j in the expression c·x. In other words, the case in which (9) implies an upper bound on c·x for all x ≥ 0 is when

    for all j ∈ {1, ..., n}:    Σ_{i=1}^m η_i a_{ij} ≥ c_j.        (10)

We can express (9) and (10) more succinctly by packaging the coefficients of the weighted sum into a vector, η. Then, inequality (9) can be rewritten as

    η^T A x ≤ η^T b,        (11)

and the criterion expressed by (10) can be rewritten as

    η^T A ≥ c^T.        (12)

The reasoning surrounding inequalities (9) and (10) can now be summarized by saying that for any vector η ∈ R^m satisfying η ≥ 0 and η^T A ≥ c^T, we have

    c·x ≤ η^T A x ≤ η^T b        (13)

for all x ≥ 0 satisfying Ax ≤ b. (In hindsight, proving inequality (13) is trivial using the properties of the vector ordering and our assumptions about x and η.) Applying (13), we may immediately conclude that the minimum of b·η over all η ≥ 0 satisfying η^T A ≥ c^T is greater than or equal to the maximum of c·x over all x ≥ 0 satisfying Ax ≤ b. That is, the optimum of the first LP in (8) is less than or equal to the optimum of the second LP in (8), a relation known as weak duality.

To prove that the optima of the two linear programs are equal, as asserted by the theorem, we need to furnish vectors x, η satisfying the constraints of the first and second linear programs in (8), respectively, such that c·x = b·η. To do so, we will make use of the simplex algorithm and its termination condition. At the moment of termination, the objective function has been rewritten in a form that has no positive coefficient on any variable.
In other words, the objective function is written in the form v - ξ·x - η·y for some coefficient vectors ξ ∈ R^n and η ∈ R^m such that ξ, η ≥ 0. An invariant of the simplex algorithm is that whenever it rewrites the objective function, it preserves the property that the objective function value matches c·x for all pairs (x, y) ∈ R^n × R^m such that Ax + y = b. In other words, we have

    for all x ∈ R^n:    v - ξ·x - η·(b - Ax) = c·x.        (14)

Equating the constant terms on the left and right sides of (14), we find that v = η·b. Equating the coefficient of x_j on the left and right sides for all j, we find that η^T A = ξ^T + c^T ≥ c^T. Thus, the vector η satisfies the constraints of the second LP in (8). Now consider the vector (x, y) which the simplex algorithm outputs at termination. All the variables having a non-zero coefficient in the expression ξ·x + η·y belong to the algorithm's basis, and hence are set to zero in the solution (x, y). This means that

    v = v - ξ·x - η·y = c·x

and hence, using the relation v = η·b derived earlier, we have c·x = b·η, as desired.
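On the running example from Section 1, every quantity in this proof can be checked directly. The final objective form there was 22 - y_1 - y_3, so v = 22, ξ = (0, 0) and η = (1, 0, 1); the short sketch below (ours, not part of the notes) verifies the identities v = η·b and η^T A = ξ^T + c^T ≥ c^T, and that the primal solution (2, 6) attains the same value, so both LPs in (8) have optimum 22.

```python
# Data of the running example and of its final objective form 22 - y1 - y3.
A = [[1, 1], [2, 1], [1, 2]]
b = [8, 12, 14]
c = [2, 3]
v, xi, eta = 22, [0, 0], [1, 0, 1]

# Equating constant terms in (14): v = eta.b
assert v == sum(eta[i] * b[i] for i in range(3))

# Equating coefficients of x_j in (14): eta^T A = xi + c, hence eta^T A >= c.
for j in range(2):
    assert sum(eta[i] * A[i][j] for i in range(3)) == xi[j] + c[j] >= c[j]

# eta is feasible for the dual LP, and the primal optimum attains c.x = v.
assert all(e >= 0 for e in eta)
x = (2, 6)
assert sum(c[j] * x[j] for j in range(2)) == v
print("both optima equal", v)
```

Note that η = (1, 0, 1) is exactly the pair of weights used at the start of Section 1 to add the first and third inequalities, which is the "not a coincidence" promised there.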


More information

J Linear Programming Algorithms

J Linear Programming Algorithms Simplicibus itaque verbis gaudet Mathematica Veritas, cum etiam per se simplex sit Veritatis oratio. [And thus Mathematical Truth prefers simple words, because the language of Truth is itself simple.]

More information

Integer Programming Theory

Integer Programming Theory Integer Programming Theory Laura Galli October 24, 2016 In the following we assume all functions are linear, hence we often drop the term linear. In discrete optimization, we seek to find a solution x

More information

CS522: Advanced Algorithms

CS522: Advanced Algorithms Lecture 1 CS5: Advanced Algorithms October 4, 004 Lecturer: Kamal Jain Notes: Chris Re 1.1 Plan for the week Figure 1.1: Plan for the week The underlined tools, weak duality theorem and complimentary slackness,

More information

1. Lecture notes on bipartite matching February 4th,

1. Lecture notes on bipartite matching February 4th, 1. Lecture notes on bipartite matching February 4th, 2015 6 1.1.1 Hall s Theorem Hall s theorem gives a necessary and sufficient condition for a bipartite graph to have a matching which saturates (or matches)

More information

3. The Simplex algorithmn The Simplex algorithmn 3.1 Forms of linear programs

3. The Simplex algorithmn The Simplex algorithmn 3.1 Forms of linear programs 11 3.1 Forms of linear programs... 12 3.2 Basic feasible solutions... 13 3.3 The geometry of linear programs... 14 3.4 Local search among basic feasible solutions... 15 3.5 Organization in tableaus...

More information

6.854 Advanced Algorithms. Scribes: Jay Kumar Sundararajan. Duality

6.854 Advanced Algorithms. Scribes: Jay Kumar Sundararajan. Duality 6.854 Advanced Algorithms Scribes: Jay Kumar Sundararajan Lecturer: David Karger Duality This lecture covers weak and strong duality, and also explains the rules for finding the dual of a linear program,

More information

Linear Programming in Small Dimensions

Linear Programming in Small Dimensions Linear Programming in Small Dimensions Lekcija 7 sergio.cabello@fmf.uni-lj.si FMF Univerza v Ljubljani Edited from slides by Antoine Vigneron Outline linear programming, motivation and definition one dimensional

More information

CS 372: Computational Geometry Lecture 10 Linear Programming in Fixed Dimension

CS 372: Computational Geometry Lecture 10 Linear Programming in Fixed Dimension CS 372: Computational Geometry Lecture 10 Linear Programming in Fixed Dimension Antoine Vigneron King Abdullah University of Science and Technology November 7, 2012 Antoine Vigneron (KAUST) CS 372 Lecture

More information

College of Computer & Information Science Fall 2007 Northeastern University 14 September 2007

College of Computer & Information Science Fall 2007 Northeastern University 14 September 2007 College of Computer & Information Science Fall 2007 Northeastern University 14 September 2007 CS G399: Algorithmic Power Tools I Scribe: Eric Robinson Lecture Outline: Linear Programming: Vertex Definitions

More information

1 Bipartite maximum matching

1 Bipartite maximum matching Cornell University, Fall 2017 Lecture notes: Matchings CS 6820: Algorithms 23 Aug 1 Sep These notes analyze algorithms for optimization problems involving matchings in bipartite graphs. Matching algorithms

More information

Algorithmic Game Theory and Applications. Lecture 6: The Simplex Algorithm

Algorithmic Game Theory and Applications. Lecture 6: The Simplex Algorithm Algorithmic Game Theory and Applications Lecture 6: The Simplex Algorithm Kousha Etessami Recall our example 1 x + y

More information

Convex Optimization CMU-10725

Convex Optimization CMU-10725 Convex Optimization CMU-10725 Ellipsoid Methods Barnabás Póczos & Ryan Tibshirani Outline Linear programs Simplex algorithm Running time: Polynomial or Exponential? Cutting planes & Ellipsoid methods for

More information

An introduction to pplex and the Simplex Method

An introduction to pplex and the Simplex Method An introduction to pplex and the Simplex Method Joanna Bauer Marc Bezem Andreas Halle November 16, 2012 Abstract Linear programs occur frequently in various important disciplines, such as economics, management,

More information

POLYHEDRAL GEOMETRY. Convex functions and sets. Mathematical Programming Niels Lauritzen Recall that a subset C R n is convex if

POLYHEDRAL GEOMETRY. Convex functions and sets. Mathematical Programming Niels Lauritzen Recall that a subset C R n is convex if POLYHEDRAL GEOMETRY Mathematical Programming Niels Lauritzen 7.9.2007 Convex functions and sets Recall that a subset C R n is convex if {λx + (1 λ)y 0 λ 1} C for every x, y C and 0 λ 1. A function f :

More information

Some Advanced Topics in Linear Programming

Some Advanced Topics in Linear Programming Some Advanced Topics in Linear Programming Matthew J. Saltzman July 2, 995 Connections with Algebra and Geometry In this section, we will explore how some of the ideas in linear programming, duality theory,

More information

CS 473: Algorithms. Ruta Mehta. Spring University of Illinois, Urbana-Champaign. Ruta (UIUC) CS473 1 Spring / 36

CS 473: Algorithms. Ruta Mehta. Spring University of Illinois, Urbana-Champaign. Ruta (UIUC) CS473 1 Spring / 36 CS 473: Algorithms Ruta Mehta University of Illinois, Urbana-Champaign Spring 2018 Ruta (UIUC) CS473 1 Spring 2018 1 / 36 CS 473: Algorithms, Spring 2018 LP Duality Lecture 20 April 3, 2018 Some of the

More information

Discrete Optimization 2010 Lecture 5 Min-Cost Flows & Total Unimodularity

Discrete Optimization 2010 Lecture 5 Min-Cost Flows & Total Unimodularity Discrete Optimization 2010 Lecture 5 Min-Cost Flows & Total Unimodularity Marc Uetz University of Twente m.uetz@utwente.nl Lecture 5: sheet 1 / 26 Marc Uetz Discrete Optimization Outline 1 Min-Cost Flows

More information

Lecture 9: Linear Programming

Lecture 9: Linear Programming Lecture 9: Linear Programming A common optimization problem involves finding the maximum of a linear function of N variables N Z = a i x i i= 1 (the objective function ) where the x i are all non-negative

More information

Linear Programming Terminology

Linear Programming Terminology Linear Programming Terminology The carpenter problem is an example of a linear program. T and B (the number of tables and bookcases to produce weekly) are decision variables. The profit function is an

More information

15-451/651: Design & Analysis of Algorithms October 11, 2018 Lecture #13: Linear Programming I last changed: October 9, 2018

15-451/651: Design & Analysis of Algorithms October 11, 2018 Lecture #13: Linear Programming I last changed: October 9, 2018 15-451/651: Design & Analysis of Algorithms October 11, 2018 Lecture #13: Linear Programming I last changed: October 9, 2018 In this lecture, we describe a very general problem called linear programming

More information

Fundamentals of Operations Research. Prof. G. Srinivasan. Department of Management Studies. Indian Institute of Technology Madras.

Fundamentals of Operations Research. Prof. G. Srinivasan. Department of Management Studies. Indian Institute of Technology Madras. Fundamentals of Operations Research Prof. G. Srinivasan Department of Management Studies Indian Institute of Technology Madras Lecture No # 06 Simplex Algorithm Initialization and Iteration (Refer Slide

More information

Mathematical Programming and Research Methods (Part II)

Mathematical Programming and Research Methods (Part II) Mathematical Programming and Research Methods (Part II) 4. Convexity and Optimization Massimiliano Pontil (based on previous lecture by Andreas Argyriou) 1 Today s Plan Convex sets and functions Types

More information

4 LINEAR PROGRAMMING (LP) E. Amaldi Fondamenti di R.O. Politecnico di Milano 1

4 LINEAR PROGRAMMING (LP) E. Amaldi Fondamenti di R.O. Politecnico di Milano 1 4 LINEAR PROGRAMMING (LP) E. Amaldi Fondamenti di R.O. Politecnico di Milano 1 Mathematical programming (optimization) problem: min f (x) s.t. x X R n set of feasible solutions with linear objective function

More information

2 The Fractional Chromatic Gap

2 The Fractional Chromatic Gap C 1 11 2 The Fractional Chromatic Gap As previously noted, for any finite graph. This result follows from the strong duality of linear programs. Since there is no such duality result for infinite linear

More information

In this chapter we introduce some of the basic concepts that will be useful for the study of integer programming problems.

In this chapter we introduce some of the basic concepts that will be useful for the study of integer programming problems. 2 Basics In this chapter we introduce some of the basic concepts that will be useful for the study of integer programming problems. 2.1 Notation Let A R m n be a matrix with row index set M = {1,...,m}

More information

Linear Programming. them such that they

Linear Programming. them such that they Linear Programming l Another "Sledgehammer" in our toolkit l Many problems fit into the Linear Programming approach l These are optimization tasks where both the constraints and the objective are linear

More information

Linear Programming: Introduction

Linear Programming: Introduction CSC 373 - Algorithm Design, Analysis, and Complexity Summer 2016 Lalla Mouatadid Linear Programming: Introduction A bit of a historical background about linear programming, that I stole from Jeff Erickson

More information

Lecture 5: Duality Theory

Lecture 5: Duality Theory Lecture 5: Duality Theory Rajat Mittal IIT Kanpur The objective of this lecture note will be to learn duality theory of linear programming. We are planning to answer following questions. What are hyperplane

More information

CS675: Convex and Combinatorial Optimization Spring 2018 Consequences of the Ellipsoid Algorithm. Instructor: Shaddin Dughmi

CS675: Convex and Combinatorial Optimization Spring 2018 Consequences of the Ellipsoid Algorithm. Instructor: Shaddin Dughmi CS675: Convex and Combinatorial Optimization Spring 2018 Consequences of the Ellipsoid Algorithm Instructor: Shaddin Dughmi Outline 1 Recapping the Ellipsoid Method 2 Complexity of Convex Optimization

More information

MATH 890 HOMEWORK 2 DAVID MEREDITH

MATH 890 HOMEWORK 2 DAVID MEREDITH MATH 890 HOMEWORK 2 DAVID MEREDITH (1) Suppose P and Q are polyhedra. Then P Q is a polyhedron. Moreover if P and Q are polytopes then P Q is a polytope. The facets of P Q are either F Q where F is a facet

More information

CMPSCI611: The Simplex Algorithm Lecture 24

CMPSCI611: The Simplex Algorithm Lecture 24 CMPSCI611: The Simplex Algorithm Lecture 24 Let s first review the general situation for linear programming problems. Our problem in standard form is to choose a vector x R n, such that x 0 and Ax = b,

More information

5 The Theory of the Simplex Method

5 The Theory of the Simplex Method 5 The Theory of the Simplex Method Chapter 4 introduced the basic mechanics of the simplex method. Now we shall delve a little more deeply into this algorithm by examining some of its underlying theory.

More information

4 Integer Linear Programming (ILP)

4 Integer Linear Programming (ILP) TDA6/DIT37 DISCRETE OPTIMIZATION 17 PERIOD 3 WEEK III 4 Integer Linear Programg (ILP) 14 An integer linear program, ILP for short, has the same form as a linear program (LP). The only difference is that

More information

LP Geometry: outline. A general LP. minimize x c T x s.t. a T i. x b i, i 2 M 1 a T i x = b i, i 2 M 3 x j 0, j 2 N 1. where

LP Geometry: outline. A general LP. minimize x c T x s.t. a T i. x b i, i 2 M 1 a T i x = b i, i 2 M 3 x j 0, j 2 N 1. where LP Geometry: outline I Polyhedra I Extreme points, vertices, basic feasible solutions I Degeneracy I Existence of extreme points I Optimality of extreme points IOE 610: LP II, Fall 2013 Geometry of Linear

More information

MATH 310 : Degeneracy and Geometry in the Simplex Method

MATH 310 : Degeneracy and Geometry in the Simplex Method MATH 310 : Degeneracy and Geometry in the Simplex Method Fayadhoi Ibrahima December 11, 2013 1 Introduction This project is exploring a bit deeper the study of the simplex method introduced in 1947 by

More information

CS261: A Second Course in Algorithms Lecture #7: Linear Programming: Introduction and Applications

CS261: A Second Course in Algorithms Lecture #7: Linear Programming: Introduction and Applications CS261: A Second Course in Algorithms Lecture #7: Linear Programming: Introduction and Applications Tim Roughgarden January 26, 2016 1 Preamble With this lecture we commence the second part of the course,

More information

Introduction to Linear Programming

Introduction to Linear Programming Introduction to Linear Programming Eric Feron (updated Sommer Gentry) (updated by Paul Robertson) 16.410/16.413 Historical aspects Examples of Linear programs Historical contributor: G. Dantzig, late 1940

More information

arxiv: v1 [cs.cc] 30 Jun 2017

arxiv: v1 [cs.cc] 30 Jun 2017 On the Complexity of Polytopes in LI( Komei Fuuda May Szedlá July, 018 arxiv:170610114v1 [cscc] 30 Jun 017 Abstract In this paper we consider polytopes given by systems of n inequalities in d variables,

More information

Lecture 10,11: General Matching Polytope, Maximum Flow. 1 Perfect Matching and Matching Polytope on General Graphs

Lecture 10,11: General Matching Polytope, Maximum Flow. 1 Perfect Matching and Matching Polytope on General Graphs CMPUT 675: Topics in Algorithms and Combinatorial Optimization (Fall 2009) Lecture 10,11: General Matching Polytope, Maximum Flow Lecturer: Mohammad R Salavatipour Date: Oct 6 and 8, 2009 Scriber: Mohammad

More information

A Subexponential Randomized Simplex Algorithm

A Subexponential Randomized Simplex Algorithm s A Subexponential Randomized Gil Kalai (extended abstract) Shimrit Shtern Presentation for Polynomial time algorithms for linear programming 097328 Technion - Israel Institute of Technology May 14, 2012

More information

ACTUALLY DOING IT : an Introduction to Polyhedral Computation

ACTUALLY DOING IT : an Introduction to Polyhedral Computation ACTUALLY DOING IT : an Introduction to Polyhedral Computation Jesús A. De Loera Department of Mathematics Univ. of California, Davis http://www.math.ucdavis.edu/ deloera/ 1 What is a Convex Polytope? 2

More information

Convexity: an introduction

Convexity: an introduction Convexity: an introduction Geir Dahl CMA, Dept. of Mathematics and Dept. of Informatics University of Oslo 1 / 74 1. Introduction 1. Introduction what is convexity where does it arise main concepts and

More information

LECTURE 6: INTERIOR POINT METHOD. 1. Motivation 2. Basic concepts 3. Primal affine scaling algorithm 4. Dual affine scaling algorithm

LECTURE 6: INTERIOR POINT METHOD. 1. Motivation 2. Basic concepts 3. Primal affine scaling algorithm 4. Dual affine scaling algorithm LECTURE 6: INTERIOR POINT METHOD 1. Motivation 2. Basic concepts 3. Primal affine scaling algorithm 4. Dual affine scaling algorithm Motivation Simplex method works well in general, but suffers from exponential-time

More information

CS599: Convex and Combinatorial Optimization Fall 2013 Lecture 14: Combinatorial Problems as Linear Programs I. Instructor: Shaddin Dughmi

CS599: Convex and Combinatorial Optimization Fall 2013 Lecture 14: Combinatorial Problems as Linear Programs I. Instructor: Shaddin Dughmi CS599: Convex and Combinatorial Optimization Fall 2013 Lecture 14: Combinatorial Problems as Linear Programs I Instructor: Shaddin Dughmi Announcements Posted solutions to HW1 Today: Combinatorial problems

More information

Combinatorial Geometry & Topology arising in Game Theory and Optimization

Combinatorial Geometry & Topology arising in Game Theory and Optimization Combinatorial Geometry & Topology arising in Game Theory and Optimization Jesús A. De Loera University of California, Davis LAST EPISODE... We discuss the content of the course... Convex Sets A set is

More information

OPERATIONS RESEARCH. Linear Programming Problem

OPERATIONS RESEARCH. Linear Programming Problem OPERATIONS RESEARCH Chapter 1 Linear Programming Problem Prof. Bibhas C. Giri Department of Mathematics Jadavpur University Kolkata, India Email: bcgiri.jumath@gmail.com 1.0 Introduction Linear programming

More information

1. Lecture notes on bipartite matching

1. Lecture notes on bipartite matching Massachusetts Institute of Technology 18.453: Combinatorial Optimization Michel X. Goemans February 5, 2017 1. Lecture notes on bipartite matching Matching problems are among the fundamental problems in

More information

arxiv: v1 [math.co] 25 Sep 2015

arxiv: v1 [math.co] 25 Sep 2015 A BASIS FOR SLICING BIRKHOFF POLYTOPES TREVOR GLYNN arxiv:1509.07597v1 [math.co] 25 Sep 2015 Abstract. We present a change of basis that may allow more efficient calculation of the volumes of Birkhoff

More information

Linear Programming. Linear Programming. Linear Programming. Example: Profit Maximization (1/4) Iris Hui-Ru Jiang Fall Linear programming

Linear Programming. Linear Programming. Linear Programming. Example: Profit Maximization (1/4) Iris Hui-Ru Jiang Fall Linear programming Linear Programming 3 describes a broad class of optimization tasks in which both the optimization criterion and the constraints are linear functions. Linear Programming consists of three parts: A set of

More information

11.1 Facility Location

11.1 Facility Location CS787: Advanced Algorithms Scribe: Amanda Burton, Leah Kluegel Lecturer: Shuchi Chawla Topic: Facility Location ctd., Linear Programming Date: October 8, 2007 Today we conclude the discussion of local

More information

Programming, numerics and optimization

Programming, numerics and optimization Programming, numerics and optimization Lecture C-4: Constrained optimization Łukasz Jankowski ljank@ippt.pan.pl Institute of Fundamental Technological Research Room 4.32, Phone +22.8261281 ext. 428 June

More information

maximize c, x subject to Ax b,

maximize c, x subject to Ax b, Lecture 8 Linear programming is about problems of the form maximize c, x subject to Ax b, where A R m n, x R n, c R n, and b R m, and the inequality sign means inequality in each row. The feasible set

More information

Polytopes Course Notes

Polytopes Course Notes Polytopes Course Notes Carl W. Lee Department of Mathematics University of Kentucky Lexington, KY 40506 lee@ms.uky.edu Fall 2013 i Contents 1 Polytopes 1 1.1 Convex Combinations and V-Polytopes.....................

More information

Artificial Intelligence

Artificial Intelligence Artificial Intelligence Combinatorial Optimization G. Guérard Department of Nouvelles Energies Ecole Supérieur d Ingénieurs Léonard de Vinci Lecture 1 GG A.I. 1/34 Outline 1 Motivation 2 Geometric resolution

More information

6. Lecture notes on matroid intersection

6. Lecture notes on matroid intersection Massachusetts Institute of Technology 18.453: Combinatorial Optimization Michel X. Goemans May 2, 2017 6. Lecture notes on matroid intersection One nice feature about matroids is that a simple greedy algorithm

More information

Approximation Algorithms

Approximation Algorithms Approximation Algorithms Prof. Tapio Elomaa tapio.elomaa@tut.fi Course Basics A 4 credit unit course Part of Theoretical Computer Science courses at the Laboratory of Mathematics There will be 4 hours

More information

CS 473: Algorithms. Ruta Mehta. Spring University of Illinois, Urbana-Champaign. Ruta (UIUC) CS473 1 Spring / 50

CS 473: Algorithms. Ruta Mehta. Spring University of Illinois, Urbana-Champaign. Ruta (UIUC) CS473 1 Spring / 50 CS 473: Algorithms Ruta Mehta University of Illinois, Urbana-Champaign Spring 2018 Ruta (UIUC) CS473 1 Spring 2018 1 / 50 CS 473: Algorithms, Spring 2018 Introduction to Linear Programming Lecture 18 March

More information

DOWNLOAD PDF BIG IDEAS MATH VERTICAL SHRINK OF A PARABOLA

DOWNLOAD PDF BIG IDEAS MATH VERTICAL SHRINK OF A PARABOLA Chapter 1 : BioMath: Transformation of Graphs Use the results in part (a) to identify the vertex of the parabola. c. Find a vertical line on your graph paper so that when you fold the paper, the left portion

More information

Advanced Linear Programming. Organisation. Lecturers: Leen Stougie, CWI and Vrije Universiteit in Amsterdam

Advanced Linear Programming. Organisation. Lecturers: Leen Stougie, CWI and Vrije Universiteit in Amsterdam Advanced Linear Programming Organisation Lecturers: Leen Stougie, CWI and Vrije Universiteit in Amsterdam E-mail: stougie@cwi.nl Marjan van den Akker Universiteit Utrecht marjan@cs.uu.nl Advanced Linear

More information

Introduction to Operations Research Prof. G. Srinivasan Department of Management Studies Indian Institute of Technology, Madras

Introduction to Operations Research Prof. G. Srinivasan Department of Management Studies Indian Institute of Technology, Madras Introduction to Operations Research Prof. G. Srinivasan Department of Management Studies Indian Institute of Technology, Madras Module 03 Simplex Algorithm Lecture - 03 Tabular form (Minimization) In this

More information

AMATH 383 Lecture Notes Linear Programming

AMATH 383 Lecture Notes Linear Programming AMATH 8 Lecture Notes Linear Programming Jakob Kotas (jkotas@uw.edu) University of Washington February 4, 014 Based on lecture notes for IND E 51 by Zelda Zabinsky, available from http://courses.washington.edu/inde51/notesindex.htm.

More information

Advanced Operations Research Techniques IE316. Quiz 2 Review. Dr. Ted Ralphs

Advanced Operations Research Techniques IE316. Quiz 2 Review. Dr. Ted Ralphs Advanced Operations Research Techniques IE316 Quiz 2 Review Dr. Ted Ralphs IE316 Quiz 2 Review 1 Reading for The Quiz Material covered in detail in lecture Bertsimas 4.1-4.5, 4.8, 5.1-5.5, 6.1-6.3 Material

More information

As a consequence of the operation, there are new incidences between edges and triangles that did not exist in K; see Figure II.9.

As a consequence of the operation, there are new incidences between edges and triangles that did not exist in K; see Figure II.9. II.4 Surface Simplification 37 II.4 Surface Simplification In applications it is often necessary to simplify the data or its representation. One reason is measurement noise, which we would like to eliminate,

More information

The Simplex Algorithm

The Simplex Algorithm The Simplex Algorithm April 25, 2005 We seek x 1,..., x n 0 which mini- Problem. mizes C(x 1,..., x n ) = c 1 x 1 + + c n x n, subject to the constraint Ax b, where A is m n, b = m 1. Through the introduction

More information

MA4254: Discrete Optimization. Defeng Sun. Department of Mathematics National University of Singapore Office: S Telephone:

MA4254: Discrete Optimization. Defeng Sun. Department of Mathematics National University of Singapore Office: S Telephone: MA4254: Discrete Optimization Defeng Sun Department of Mathematics National University of Singapore Office: S14-04-25 Telephone: 6516 3343 Aims/Objectives: Discrete optimization deals with problems of

More information

Math 414 Lecture 2 Everyone have a laptop?

Math 414 Lecture 2 Everyone have a laptop? Math 44 Lecture 2 Everyone have a laptop? THEOREM. Let v,...,v k be k vectors in an n-dimensional space and A = [v ;...; v k ] v,..., v k independent v,..., v k span the space v,..., v k a basis v,...,

More information

Convex Geometry arising in Optimization

Convex Geometry arising in Optimization Convex Geometry arising in Optimization Jesús A. De Loera University of California, Davis Berlin Mathematical School Summer 2015 WHAT IS THIS COURSE ABOUT? Combinatorial Convexity and Optimization PLAN

More information

LECTURES 3 and 4: Flows and Matchings

LECTURES 3 and 4: Flows and Matchings LECTURES 3 and 4: Flows and Matchings 1 Max Flow MAX FLOW (SP). Instance: Directed graph N = (V,A), two nodes s,t V, and capacities on the arcs c : A R +. A flow is a set of numbers on the arcs such that

More information

The Simplex Algorithm for LP, and an Open Problem

The Simplex Algorithm for LP, and an Open Problem The Simplex Algorithm for LP, and an Open Problem Linear Programming: General Formulation Inputs: real-valued m x n matrix A, and vectors c in R n and b in R m Output: n-dimensional vector x There is one

More information

Lecture 5: Properties of convex sets

Lecture 5: Properties of convex sets Lecture 5: Properties of convex sets Rajat Mittal IIT Kanpur This week we will see properties of convex sets. These properties make convex sets special and are the reason why convex optimization problems

More information

New Directions in Linear Programming

New Directions in Linear Programming New Directions in Linear Programming Robert Vanderbei November 5, 2001 INFORMS Miami Beach NOTE: This is a talk mostly on pedagogy. There will be some new results. It is not a talk on state-of-the-art

More information

On Unbounded Tolerable Solution Sets

On Unbounded Tolerable Solution Sets Reliable Computing (2005) 11: 425 432 DOI: 10.1007/s11155-005-0049-9 c Springer 2005 On Unbounded Tolerable Solution Sets IRENE A. SHARAYA Institute of Computational Technologies, 6, Acad. Lavrentiev av.,

More information

Linear and Integer Programming :Algorithms in the Real World. Related Optimization Problems. How important is optimization?

Linear and Integer Programming :Algorithms in the Real World. Related Optimization Problems. How important is optimization? Linear and Integer Programming 15-853:Algorithms in the Real World Linear and Integer Programming I Introduction Geometric Interpretation Simplex Method Linear or Integer programming maximize z = c T x

More information

The Simplex Algorithm. Chapter 5. Decision Procedures. An Algorithmic Point of View. Revision 1.0

The Simplex Algorithm. Chapter 5. Decision Procedures. An Algorithmic Point of View. Revision 1.0 The Simplex Algorithm Chapter 5 Decision Procedures An Algorithmic Point of View D.Kroening O.Strichman Revision 1.0 Outline 1 Gaussian Elimination 2 Satisfiability with Simplex 3 General Simplex Form

More information

On the Hardness of Computing Intersection, Union and Minkowski Sum of Polytopes

On the Hardness of Computing Intersection, Union and Minkowski Sum of Polytopes On the Hardness of Computing Intersection, Union and Minkowski Sum of Polytopes Hans Raj Tiwary hansraj@cs.uni-sb.de FR Informatik Universität des Saarlandes D-66123 Saarbrücken, Germany Tel: +49 681 3023235

More information

Linear Programming. Presentation for use with the textbook, Algorithm Design and Applications, by M. T. Goodrich and R. Tamassia, Wiley, 2015

Linear Programming. Presentation for use with the textbook, Algorithm Design and Applications, by M. T. Goodrich and R. Tamassia, Wiley, 2015 Presentation for use with the textbook, Algorithm Design and Applications, by M. T. Goodrich and R. Tamassia, Wiley, 2015 Linear Programming 2015 Goodrich and Tamassia 1 Formulating the Problem q The function

More information