AMS 553.766: Combinatorial Optimization Homework Problems - Week V


For the following problems, $A \in \mathbb{R}^{m \times n}$ will be an $m \times n$ matrix, and $b \in \mathbb{R}^m$. An affine subspace is the set of solutions to a system of linear equations, i.e., $\{x \in \mathbb{R}^n : Ax = b\}$ is an affine subspace of $\mathbb{R}^n$. A polytope is a bounded polyhedron.

1. Show the following:

(i) The intersection of an affine subspace with a polyhedron is a polyhedron.

Express the polyhedron as $P = \{x : Ax \le b\}$ and the affine subspace as $L = \{x : A'x = b'\}$. The intersection is
$$P \cap L = \{x : A'x = b',\ Ax \le b\} = \{x : A'x \le b',\ -A'x \le -b',\ Ax \le b\},$$
which is by definition a polyhedron.

(ii) Let $P_1, P_2$ be two polytopes. Define $C = \{x + y : x \in P_1, y \in P_2\}$. Show that $C$ is a polytope. (A numerical sketch of this construction appears after part (iv).)

Express $P_1 = \operatorname{conv}\{x_1, \ldots, x_p\}$ and $P_2 = \operatorname{conv}\{y_1, \ldots, y_q\}$. We claim that $C = \operatorname{conv}\{x_i + y_j : i \in \{1, \ldots, p\},\ j \in \{1, \ldots, q\}\}$.

Consider $z \in C$; then $z = x + y$ for some $x \in P_1$ and $y \in P_2$. We can write $x = \sum_{i=1}^p \lambda_i x_i$ and $y = \sum_{j=1}^q \mu_j y_j$, where $\lambda_i \ge 0$, $\sum_{i=1}^p \lambda_i = 1$, $\mu_j \ge 0$ and $\sum_{j=1}^q \mu_j = 1$. Observe that $\sum_{i,j} \lambda_i \mu_j = 1$. Also,
$$\sum_{i,j} \lambda_i \mu_j (x_i + y_j) = \sum_{i=1}^p \lambda_i x_i + \sum_{j=1}^q \mu_j y_j = x + y = z.$$
Since $\sum_{i,j} \lambda_i \mu_j (x_i + y_j) \in \operatorname{conv}\{x_i + y_j : i \in \{1, \ldots, p\},\ j \in \{1, \ldots, q\}\}$, we have $z \in \operatorname{conv}\{x_i + y_j\}$. Therefore $C \subseteq \operatorname{conv}\{x_i + y_j : i \in \{1, \ldots, p\},\ j \in \{1, \ldots, q\}\}$.

To show the reverse inclusion, consider $z \in \operatorname{conv}\{x_i + y_j : i \in \{1, \ldots, p\},\ j \in \{1, \ldots, q\}\}$; so we can write $z = \sum_{i,j} \sigma_{ij} (x_i + y_j)$ where $\sigma_{ij} \ge 0$ and $\sum_{i,j} \sigma_{ij} = 1$. We now simplify this expression:
$$\sum_{i,j} \sigma_{ij} (x_i + y_j) = \sum_{i=1}^p \Big( \sum_{j=1}^q \sigma_{ij} \Big) x_i + \sum_{j=1}^q \Big( \sum_{i=1}^p \sigma_{ij} \Big) y_j = \sum_{i=1}^p \lambda_i x_i + \sum_{j=1}^q \mu_j y_j,$$
where $\lambda_i = \sum_{j=1}^q \sigma_{ij}$ and $\mu_j = \sum_{i=1}^p \sigma_{ij}$. Observe that $\sum_{i=1}^p \lambda_i = \sum_{i,j} \sigma_{ij} = 1$; similarly $\sum_{j=1}^q \mu_j = 1$. Thus $\sum_{i=1}^p \lambda_i x_i \in P_1$ and $\sum_{j=1}^q \mu_j y_j \in P_2$, so $z = \sum_{i=1}^p \lambda_i x_i + \sum_{j=1}^q \mu_j y_j \in P_1 + P_2 = C$.

Thus we have expressed $C$ as the convex hull of a finite set of points; by the Minkowski-Weyl theorem it is a polytope.

(iii) The intersection of two polyhedra is a polyhedron. Using this, show that the intersection of a polyhedron with a polytope is a polytope.

Let one polyhedron be represented by $A_1 x \le b_1$ and the second by $A_2 x \le b_2$. The intersection is given by $A_1 x \le b_1,\ A_2 x \le b_2$, which is another polyhedron. Since the intersection of a bounded set with any other set is bounded, the intersection of a polytope with another polyhedron is a bounded polyhedron, i.e., a polytope.

(iv) Let $T : \mathbb{R}^n \to \mathbb{R}^d$ be a linear map. Show that if $P \subseteq \mathbb{R}^n$ is a polytope, then $T(P)$ is a polytope in $\mathbb{R}^d$.

We represent $P$ as the convex hull $\operatorname{conv}\{v_1, \ldots, v_t\}$. We claim that $T(P) = \operatorname{conv}\{Tv_1, \ldots, Tv_t\}$ because of the following sequence of equivalences, where the $\lambda_i$ range over all choices with $\lambda_i \ge 0$ and $\sum_{i=1}^t \lambda_i = 1$:
$$y \in \operatorname{conv}\{Tv_1, \ldots, Tv_t\} \iff y = \lambda_1 Tv_1 + \cdots + \lambda_t Tv_t \iff y = T(\lambda_1 v_1 + \cdots + \lambda_t v_t) \iff y \in T(P).$$
Thus we have expressed $T(P)$ as the convex hull of a finite set of points; by the Minkowski-Weyl theorem it is a polytope.
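
The claim in (ii) suggests a direct computation: enumerate all pairwise sums $x_i + y_j$ and take their convex hull. Here is a minimal numerical sketch of that construction, assuming numpy and scipy are available; minkowski_sum_vertices is a hypothetical helper name and the square/triangle data is made up for illustration.

import itertools
import numpy as np
from scipy.spatial import ConvexHull

def minkowski_sum_vertices(V1, V2):
    """Vertices of conv{x_i + y_j}, i.e., of the Minkowski sum
    conv(V1) + conv(V2), as characterized in Problem 1(ii)."""
    sums = np.array([x + y for x, y in itertools.product(V1, V2)])
    hull = ConvexHull(sums)
    return sums[hull.vertices]

# P1 = the unit square, P2 = a triangle
V1 = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
V2 = np.array([[0, 0], [1, 0], [0, 1]], dtype=float)
print(minkowski_sum_vertices(V1, V2))   # vertices of P1 + P2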

2. (Complementary slackness) Let $x^* \in \mathbb{R}^n$ be an optimal solution to the problem $\max\{c^T x : Ax \le b\}$ and let $y^* \in \mathbb{R}^m$ be an optimal solution to the dual problem $\min\{y^T b : y^T A = c^T,\ y \ge 0\}$. Show that for every $i = 1, \ldots, m$, either $a_i x^* = b_i$ or $y^*_i = 0$ (or both). (Here $a_i$ denotes the $i$-th row of $A$ and $b_i$ is the $i$-th component of $b$.) [Hint: consider the quantity $(y^*)^T (Ax^* - b)$.]

The theorem is saying that in an optimal primal-dual pair of solutions for an LP, either a constraint is tight at the optimal primal solution or the corresponding dual multiplier is 0. We will in fact show that two feasible solutions $x^*$ (for the primal) and $y^*$ (for the dual) are optimal if and only if for every $i = 1, \ldots, m$, either $a_i x^* = b_i$ or $y^*_i = 0$ (or both). (A numerical check of this condition appears after Problem 4 below.)

Observe that since $x^*$ and $y^*$ are feasible for the primal and dual problems respectively,
$$c^T x^* - (y^*)^T b = \big((y^*)^T A\big) x^* - (y^*)^T b = (y^*)^T (Ax^* - b) \le 0.$$
Moreover, since each component of $y^*$ is nonnegative and each component of $b - Ax^*$ is also nonnegative, $c^T x^* - (y^*)^T b = 0$ if and only if for every $i = 1, \ldots, m$, either $a_i x^* = b_i$ or $y^*_i = 0$ (or both). Finally, by weak duality, $c^T x^* - (y^*)^T b = 0$ is equivalent to $x^*$ and $y^*$ being optimal solutions to the primal and dual problems respectively.

3. Find an example of a pair of primal and dual linear programs such that both problems are infeasible. [Hint: There is an example with 2 variables in each problem.]

Primal:
$$\max x_1 \quad \text{subject to} \quad -x_1 + x_2 \le 1, \quad x_2 \le 0, \quad -x_2 \le -1.$$

Dual:
$$\min y_1 - y_3 \quad \text{subject to} \quad -y_1 = 1, \quad y_1 + y_2 - y_3 = 0, \quad y_1, y_2, y_3 \ge 0.$$

The primal is infeasible because its last two constraints require $x_2 \le 0$ and $x_2 \ge 1$ simultaneously; the dual is infeasible because $-y_1 = 1$ forces $y_1 = -1 < 0$, contradicting $y_1 \ge 0$.

4. Let $P = \{x \in \mathbb{R}^n : Ax \le b\}$ be a nonempty polyhedron. Let $C = \{r \in \mathbb{R}^n : x + \lambda r \in P \text{ for all } x \in P,\ \lambda \in \mathbb{R}_+\}$, where $\mathbb{R}_+$ is the set of nonnegative real numbers. Show that

(i) $C$ is a convex cone. [This cone is called the recession cone of the polyhedron $P$.]
(ii) $C = \{r \in \mathbb{R}^n : Ar \le 0\}$.

(i) We first verify that $C$ is convex. Consider any $r_1, r_2 \in C$ and any $\mu \in [0,1]$, and let $r = \mu r_1 + (1-\mu) r_2$. For any $x \in P$ and $\lambda \ge 0$, $x + \lambda r = x + \lambda\mu r_1 + \lambda(1-\mu) r_2$. Since $r_1 \in C$, $x + \lambda\mu r_1 \in P$, and since $r_2 \in C$, $(x + \lambda\mu r_1) + \lambda(1-\mu) r_2 \in P$. A very similar argument shows that $C$ is a cone.

(ii) Consider any $r$ such that $Ar \le 0$. Then $A(x + \lambda r) = Ax + \lambda Ar \le Ax$ for $\lambda \ge 0$, and $Ax \le b$ if $x \in P$. Thus $A(x + \lambda r) \le b$ for all $x \in P$ and $\lambda \ge 0$. This shows that $\{r \in \mathbb{R}^n : Ar \le 0\} \subseteq C$. Conversely, consider any $r$ such that $Ar \not\le 0$. This means that for some $i \in \{1, \ldots, m\}$, $a_i r > 0$. Since $P$ is nonempty, let $x \in P$. Since $a_i r > 0$, there exists $\lambda \ge 0$ such that $a_i(x + \lambda r) = a_i x + \lambda (a_i r) > b_i$. This shows that $r \notin C$. Hence $C \subseteq \{r \in \mathbb{R}^n : Ar \le 0\}$.
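
To see Problem 2 in action numerically, the following sketch solves a small primal-dual pair with scipy (assumed available) and checks that every constraint has zero slack or zero multiplier. The LP data is made up for illustration. Note that linprog minimizes, so we pass $-c$; with the HiGHS backend the dual variables of the $\le$ rows are the negated marginals, and we free the variable bounds because nonnegativity is already encoded as rows of $A$.

import numpy as np
from scipy.optimize import linprog

# Illustrative LP: max 2*x1 + x2 s.t. x1 <= 1, x2 <= 1, x1 + x2 <= 1.5, x >= 0,
# written entirely in the form Ax <= b used in Problem 2
A = np.array([[1, 0], [0, 1], [1, 1], [-1, 0], [0, -1]], dtype=float)
b = np.array([1, 1, 1.5, 0, 0], dtype=float)
c = np.array([2, 1], dtype=float)

res = linprog(-c, A_ub=A, b_ub=b, bounds=(None, None), method="highs")
x_star = res.x
y_star = -res.ineqlin.marginals      # dual variables y* >= 0 for the <= rows

slack = b - A @ x_star               # b_i - a_i x* >= 0 by primal feasibility
print(np.allclose(slack * y_star, 0, atol=1e-8))   # complementary slackness: True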

5. Let $P = \{x \in \mathbb{R}^n : Ax \le b\}$ be a nonempty polyhedron. Let $L = \{r \in \mathbb{R}^n : x + \lambda r \in P \text{ for all } x \in P,\ \lambda \in \mathbb{R}\}$. Show that

(i) $L$ is a linear subspace of $\mathbb{R}^n$. [This is called the lineality space of the polyhedron $P$.]
(ii) $L = \{r \in \mathbb{R}^n : Ar = 0\}$.

(i) For any $r_1, r_2 \in L$, $x + \lambda(r_1 + r_2) = (x + \lambda r_1) + \lambda r_2$. Since $r_1 \in L$, $x + \lambda r_1 \in P$, and since $r_2 \in L$, $(x + \lambda r_1) + \lambda r_2 \in P$. Thus $r_1 + r_2 \in L$. Similarly, for any $r \in L$ and $\mu \in \mathbb{R}$, $x + \lambda(\mu r) = x + (\lambda\mu) r \in P$ for any $\lambda \in \mathbb{R}$ since $r \in L$. Thus $\mu r \in L$. Therefore, $L$ is a linear subspace.

(ii) The proof is similar to the case of the recession cone in Problem 4(ii), applied with both signs of $\lambda$.

6. Using the notation from Problems 4 and 5, show that $L = C \cap (-C)$. We say that $P$ is pointed if $L = \{0\}$. Suppose $P \ne \emptyset$; show that $P$ has at least one vertex if and only if $P$ is pointed. (The rank-increasing procedure in the proof below is sketched in code after this problem.)

$$C \cap (-C) = \{r : Ar \le 0\} \cap \{r : A(-r) \le 0\} = \{r : Ar \le 0,\ Ar \ge 0\} = \{r : Ar = 0\} = L.$$

If $P$ has a vertex $z$, then $A_z$ (the submatrix of rows of $A$ that are tight at $z$) has rank $n$, and thus $A$ has rank $n$. Therefore $Ar = 0$ has only the solution $r = 0$, and so $L = \{0\}$. So if $P$ has a vertex, it is pointed.

Conversely, assume $P$ is pointed. Then $Ar = 0$ has only the solution $r = 0$, and so $A$ has rank $n$. Let $z \in P$; such a $z$ exists since $P$ is nonempty. If $A_z$ has rank $n$, then $z$ is a vertex and we are done. Otherwise, consider a nonzero $r$ such that $A_z r = 0$. We know that $A$ has rank $n$, and thus there exists a row $a_i$ such that $a_i r \ne 0$. By choosing $r$ or $-r$, we can make sure that $a_i r > 0$. Then let
$$\varepsilon = \min_{i \in \{1, \ldots, m\}} \left\{ \frac{b_i - a_i z}{a_i r} : i \notin T_z,\ a_i r > 0 \right\} \qquad (1)$$
(recall that $T_z$ is the index set of the rows in $A_z$). Let $z' = z + \varepsilon r$. Observe that $A_z z' = A_z(z + \varepsilon r) = A_z z = b_z$. Moreover, let $i^*$ be an index which achieves the minimum in (1). Then $a_{i^*} z' = a_{i^*} z + \big(\frac{b_{i^*} - a_{i^*} z}{a_{i^*} r}\big)(a_{i^*} r) = b_{i^*}$. Thus $T_{z'} \supseteq T_z \cup \{i^*\}$. Since $a_{i^*} r > 0$ while $A_z r = 0$, the row $a_{i^*}$ is not in the row space of $A_z$, so $\operatorname{rank}(A_{z'}) > \operatorname{rank}(A_z)$. Continuing in this manner, we can find a sequence of points $v \in P$ with strictly increasing $\operatorname{rank}(A_v)$. Hence, we will at some point find a point $v \in P$ where $A_v$ has rank $n$. At that point, $v$ is a vertex of $P$.
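
The second half of the proof of Problem 6 is effectively an algorithm: starting from any feasible point, move along a direction in the null space of the tight rows until a new constraint becomes tight, and repeat until the tight rows have rank $n$. Below is a sketch of that procedure, assuming numpy and scipy are available; find_vertex is a hypothetical helper name and the tolerance handling is only illustrative.

import numpy as np
from scipy.linalg import null_space

def find_vertex(A, b, z, tol=1e-9):
    """Move a feasible point z of a pointed polyhedron {x : Ax <= b}
    (so rank(A) = n) to a vertex, following the proof of Problem 6."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    z = np.asarray(z, dtype=float)
    n = A.shape[1]
    while True:
        tight = np.abs(A @ z - b) <= tol        # the index set T_z
        A_z = A[tight]
        # directions r with A_z r = 0 (all of R^n if no row is tight yet)
        N = null_space(A_z) if A_z.shape[0] > 0 else np.eye(n)
        if N.shape[1] == 0:                     # rank(A_z) = n: z is a vertex
            return z
        r = N[:, 0]
        gains = A @ r                           # a_i r for every row i
        if not np.any(gains > tol):             # orient r so that some a_i r > 0,
            r, gains = -r, -gains               # possible because rank(A) = n
        movable = (~tight) & (gains > tol)
        steps = (b[movable] - A[movable] @ z) / gains[movable]
        z = z + steps.min() * r                 # the epsilon of equation (1)

# Example: the triangle {x : -x1 <= 0, -x2 <= 0, x1 + x2 <= 1}
print(find_vertex([[-1, 0], [0, -1], [1, 1]], [0, 0, 1], z=[0.25, 0.25]))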

7. (The Diet Problem) Suppose you want to design a diet for your meals. You have certain food items (e.g., spinach, chicken, rice, etc.); let us label these different food types as $f_1, \ldots, f_n$. You can choose any nonnegative amount of a food item to put in your diet. Each food item has a per unit cost $c_1, \ldots, c_n$ associated with it. You have to meet some nutritional constraints: for example, you must have at least 5g of protein, and at most 40g of protein in the meal. Let us say there are $\{1, \ldots, k\}$ nutritional categories and each category $i$ has a lower bound $l_i$ and an upper bound $u_i$ that must be met. Suppose that each unit of food item $f_j$ provides $a_{ij}$ units of nutritional category $i$. How will you solve the problem of designing a diet satisfying the nutritional demands that has the least cost? (A sketch of this model in code follows the solution.)

Let $x_1, \ldots, x_n$ denote the amounts of food items $f_1, \ldots, f_n$ in the best diet. Then, to satisfy the nutritional constraints for category $i \in \{1, \ldots, k\}$, we must have
$$\sum_{j=1}^n a_{ij} x_j \le u_i, \qquad -\sum_{j=1}^n a_{ij} x_j \le -l_i.$$
Together with the nonnegativity constraints $-x_j \le 0$ (amounts cannot be negative), we have $2k + n$ constraints. The objective to minimize is $\sum_{j=1}^n c_j x_j$. This is now a linear program (in the form $\max\{c^T x : Ax \le b\}$ discussed in class) with constraint matrix built from the rows $(a_{ij})$, $(-a_{ij})$ and $-I$, right-hand side $b = (u_1, \ldots, u_k, -l_1, \ldots, -l_k, 0, \ldots, 0)$ and objective $c = (-c_1, \ldots, -c_n)$.
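
As a concrete illustration of the diet LP, the sketch below solves a tiny instance with scipy (assumed available). All numbers are made up. Since linprog minimizes directly, there is no need to negate the costs as in the max form above, and the bounds argument supplies the nonnegativity constraints.

import numpy as np
from scipy.optimize import linprog

# Made-up data: 3 foods, 2 nutritional categories (protein in g, calories)
a = np.array([[4.0, 20.0, 2.0],       # a[i][j]: units of nutrient i per unit of food j
              [90.0, 180.0, 350.0]])
l = np.array([5.0, 400.0])            # lower bounds l_i
u = np.array([40.0, 800.0])           # upper bounds u_i
cost = np.array([0.5, 2.0, 0.3])      # per-unit costs c_j

# The 2k inequality constraints of Problem 7: a x <= u and -a x <= -l
A_ub = np.vstack([a, -a])
b_ub = np.concatenate([u, -l])
res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=(0, None), method="highs")
print(res.x, res.fun)                 # cheapest feasible diet and its cost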

8. (Linear Regression with different objectives) In linear regression, we have labeled data points $z^1, \ldots, z^k \in \mathbb{R}^n$ with real-valued labels $y_1, \ldots, y_k$. We want to fit the best linear function to this labeled data. More precisely, we want to find parameters $\beta = (\beta_1, \ldots, \beta_n)$ so as to minimize the errors $y_j - \sum_{i=1}^n \beta_i z^j_i$. The typical objective is the sum of the squares of the errors, i.e., we wish to minimize $\sum_{j=1}^k \big(y_j - \sum_{i=1}^n \beta_i z^j_i\big)^2$. Suppose we are interested in the following variant. Firstly, we don't want to allow arbitrary values of the parameters; we want a more controlled regression: for each parameter $\beta_i$, we have certain preset upper and lower bounds $u_i$ and $l_i$ respectively that we want the parameter to lie within. Also, instead of minimizing the sum of squares, suppose we want to minimize the sum of the absolute values, i.e., minimize $\sum_{j=1}^k \big| y_j - \sum_{i=1}^n \beta_i z^j_i \big|$ subject to these bound constraints on the parameter values ($\ell_1$ minimization). How would you solve this problem? What if you were interested in minimizing the largest error, i.e., minimize $\max_{j \in \{1, \ldots, k\}} \big| y_j - \sum_{i=1}^n \beta_i z^j_i \big|$ ($\ell_\infty$ minimization)? (Both formulations are sketched in code below.)

Let $\beta_1, \ldots, \beta_n$ be the parameter values. The bound constraints are easy: $\beta_i \le u_i$, $-\beta_i \le -l_i$. To model the objective $\sum_{j=1}^k \big| y_j - \sum_{i=1}^n \beta_i z^j_i \big|$, we introduce new auxiliary variables $t_1, \ldots, t_k$ and impose the constraints
$$y_j - \sum_{i=1}^n \beta_i z^j_i \le t_j, \qquad -y_j + \sum_{i=1}^n \beta_i z^j_i \le t_j$$
for every $j = 1, \ldots, k$. This forces $\big| y_j - \sum_{i=1}^n \beta_i z^j_i \big| \le t_j$ for all $j = 1, \ldots, k$. If we now minimize $\sum_{j=1}^k t_j$ subject to these constraints, then it is not hard to argue that the $t_j$ values will be pushed down to exactly $\big| y_j - \sum_{i=1}^n \beta_i z^j_i \big|$, and our problem will be solved.

For the $\ell_\infty$ version, we introduce a single auxiliary variable $t$, and impose the constraints
$$y_j - \sum_{i=1}^n \beta_i z^j_i \le t, \qquad -y_j + \sum_{i=1}^n \beta_i z^j_i \le t$$
for every $j = 1, \ldots, k$. This forces $\big| y_j - \sum_{i=1}^n \beta_i z^j_i \big| \le t$ for all $j = 1, \ldots, k$. Now minimizing $t$ subject to these constraints will force $t$ to take the value of $\max_{j} \big| y_j - \sum_{i=1}^n \beta_i z^j_i \big|$, which is exactly what we want.
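
Both formulations of Problem 8 translate directly into linprog calls over the stacked variable vector $(\beta, t)$. The sketch below assumes numpy and scipy are available; l1_regression and linf_regression are hypothetical helper names and the data is made up for illustration.

import numpy as np
from scipy.optimize import linprog

def l1_regression(Z, y, l, u):
    """Minimize sum_j t_j s.t. |y_j - Z[j] @ beta| <= t_j, l <= beta <= u."""
    k, n = Z.shape
    c = np.concatenate([np.zeros(n), np.ones(k)])   # objective: sum of the t_j
    # Z beta - t <= y  encodes  -y_j + sum_i beta_i z^j_i <= t_j;
    # -Z beta - t <= -y  encodes  y_j - sum_i beta_i z^j_i <= t_j
    A_ub = np.block([[Z, -np.eye(k)], [-Z, -np.eye(k)]])
    b_ub = np.concatenate([y, -y])
    bounds = [(l[i], u[i]) for i in range(n)] + [(0, None)] * k
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[:n]                                # the fitted beta

def linf_regression(Z, y, l, u):
    """Minimize t s.t. |y_j - Z[j] @ beta| <= t for all j, l <= beta <= u."""
    k, n = Z.shape
    c = np.concatenate([np.zeros(n), [1.0]])        # objective: the single t
    ones = np.ones((k, 1))
    A_ub = np.block([[Z, -ones], [-Z, -ones]])
    b_ub = np.concatenate([y, -y])
    bounds = [(l[i], u[i]) for i in range(n)] + [(0, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[:n]

# Tiny made-up instance: fit y ~ beta_1 * z with 0 <= beta_1 <= 2
Z = np.array([[1.0], [2.0], [3.0]])
y = np.array([1.1, 1.9, 3.2])
print(l1_regression(Z, y, l=[0.0], u=[2.0]), linf_regression(Z, y, l=[0.0], u=[2.0]))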