Semidefinite Programming (SDP) and the Goemans-Williamson MAXCUT Paper. Robert M. Freund. September 8, 2003


This presentation is based on: Goemans, Michel X., and David P. Williamson. "Improved Approximation Algorithms for Maximum Cut and Satisfiability Problems Using Semidefinite Programming." Journal of the ACM 42(6), November 1995.

Outline

- Alternate View of Linear Programming
- Facts about Symmetric and Semidefinite Matrices
- SDP
- SDP Duality
- Approximately Solving MAXCUT using SDP and Random Vectors
- Interior-Point Methods for SDP

2003 Massachusetts Institute of Technology

Linear Programming: Alternative Perspective

LP: minimize  c^T x
    s.t.      a_i^T x = b_i,  i = 1,...,m,
              x ∈ R^n_+.

c^T x denotes the linear function Σ_{j=1}^n c_j x_j.

R^n_+ := {x ∈ R^n : x ≥ 0} is the nonnegative orthant, and it is a convex cone.

K is a convex cone if x, w ∈ K and α, β ≥ 0 imply αx + βw ∈ K.

Linear Programming: Alternative Perspective

Minimize the linear function c^T x, subject to the condition that x must solve the m given equations a_i^T x = b_i, i = 1,...,m, and that x must lie in the convex cone K = R^n_+.

Linear Programming: LP Dual Problem...

LD: maximize  Σ_{i=1}^m y_i b_i
    s.t.      Σ_{i=1}^m y_i a_i + s = c,
              s ∈ R^n_+.

For feasible solutions x of LP and (y, s) of LD, the duality gap is simply

c^T x − Σ_{i=1}^m y_i b_i = (c − Σ_{i=1}^m y_i a_i)^T x = s^T x ≥ 0.

Linear Programming: ...LP Dual Problem

If LP and LD are feasible, then there exist x* and (y*, s*) feasible for the primal and dual, respectively, for which

c^T x* − Σ_{i=1}^m y_i* b_i = (s*)^T x* = 0.
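The weak-duality identity above is easy to check numerically. The sketch below builds a small LP primal-dual pair in NumPy (the particular data is made up for illustration) and verifies that the gap c^T x − y^T b equals s^T x ≥ 0:

```python
import numpy as np

# A small LP in the conic form above: min c'x s.t. a_i'x = b_i, x >= 0.
A = np.array([[1.0, 1.0, 1.0],
              [1.0, 2.0, 0.0]])      # rows are the a_i
c = np.array([2.0, 3.0, 1.0])

x = np.array([1.0, 2.0, 3.0])        # a feasible primal point (x >= 0)
b = A @ x                            # choose b so that x is feasible

y = np.array([1.0, 0.5])             # a dual candidate, picked by hand
s = c - A.T @ y                      # slack: s = c - sum_i y_i a_i
assert np.all(s >= 0)                # (y, s) is dual feasible

gap = c @ x - y @ b                  # duality gap; equals s'x
```

Neither point is optimal here; the identity gap = s^T x holds for any feasible pair.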

Facts about the Semidefinite Cone

If X is an n×n matrix, then X is a symmetric positive semidefinite (SPSD) matrix if X = X^T and v^T X v ≥ 0 for any v ∈ R^n.

If X is an n×n matrix, then X is a symmetric positive definite (SPD) matrix if X = X^T and v^T X v > 0 for any v ∈ R^n, v ≠ 0.

Facts about the Semidefinite Cone

S^n denotes the set of symmetric n×n matrices.
S^n_+ denotes the set of SPSD n×n matrices.
S^n_{++} denotes the set of SPD n×n matrices.

Facts about the Semidefinite Cone

Let X, Y ∈ S^n.

X ⪰ 0 denotes that X is SPSD.
X ⪰ Y denotes that X − Y ⪰ 0.
X ≻ 0 denotes that X is SPD, etc.

Remark: S^n_+ = {X ∈ S^n : X ⪰ 0} is a convex cone.

Facts about Eigenvalues and Eigenvectors

If M is a square n×n matrix, then λ is an eigenvalue of M with corresponding eigenvector q if

Mq = λq and q ≠ 0.

Let λ_1, λ_2, ..., λ_n enumerate the eigenvalues of M.

Facts about Eigenvalues and Eigenvectors

The corresponding eigenvectors q_1, q_2, ..., q_n of M can be chosen so that they are orthonormal, namely

(q_i)^T q_j = 0 for i ≠ j, and (q_i)^T q_i = 1.

Define Q := [q_1 q_2 ··· q_n]. Then Q is an orthonormal matrix:

Q^T Q = I, equivalently Q^T = Q^{-1}.

Facts about Eigenvalues and Eigenvectors

λ_1, λ_2, ..., λ_n are the eigenvalues of M, and q_1, q_2, ..., q_n are the corresponding orthonormal eigenvectors of M.

Q := [q_1 q_2 ··· q_n], with Q^T Q = I, equivalently Q^T = Q^{-1}.

Define D as the diagonal matrix of eigenvalues:

D := diag(λ_1, λ_2, ..., λ_n).

Property: M = Q D Q^T.

Facts about Eigenvalues and Eigenvectors

The decomposition of M into M = Q D Q^T is called its eigendecomposition.
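The eigendecomposition is easy to verify numerically: `numpy.linalg.eigh` returns exactly the orthonormal Q and the eigenvalues of a symmetric matrix (the matrix M below is an arbitrary example):

```python
import numpy as np

M = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])          # a symmetric example matrix

lam, Q = np.linalg.eigh(M)               # eigenvalues and orthonormal eigenvectors
D = np.diag(lam)

assert np.allclose(Q.T @ Q, np.eye(3))   # Q'Q = I, i.e. Q' = Q^{-1}
assert np.allclose(Q @ D @ Q.T, M)       # M = Q D Q'
```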

Facts about Symmetric Matrices

If X ∈ S^n, then X = Q D Q^T for some orthonormal matrix Q and some diagonal matrix D. The columns of Q form a set of n orthogonal eigenvectors of X, whose eigenvalues are the corresponding entries of the diagonal matrix D.

X ⪰ 0 if and only if X = Q D Q^T where the eigenvalues (i.e., the diagonal entries of D) are all nonnegative.

X ≻ 0 if and only if X = Q D Q^T where the eigenvalues (i.e., the diagonal entries of D) are all positive.

Facts about Symmetric Matrices

If M is symmetric, then det(M) = Π_{j=1}^n λ_j.

Facts about Symmetric Matrices

Consider the matrix M defined as follows:

M = [ P    v ]
    [ v^T  d ],

where P ≻ 0, v is a vector, and d is a scalar. Then M ⪰ 0 if and only if d − v^T P^{-1} v ≥ 0.

For a given column vector a, the matrix X := a a^T is SPSD, i.e., X = a a^T ⪰ 0.

If M ⪰ 0, then there is a matrix N for which M = N^T N. To see this, simply take N = D^{1/2} Q^T.
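Both facts on this slide can be checked numerically: under the assumption P ≻ 0, the block matrix M is SPSD exactly when the scalar Schur complement d − v^T P^{-1} v is nonnegative, and N = D^{1/2} Q^T gives M = N^T N (the particular numbers below are arbitrary):

```python
import numpy as np

P = np.array([[2.0, 0.5],
              [0.5, 1.0]])               # P is SPD
v = np.array([1.0, 1.0])
d = 2.0

M = np.block([[P, v[:, None]], [v[None, :], np.array([[d]])]])
schur = d - v @ np.linalg.solve(P, v)    # d - v' P^{-1} v
min_eig = np.linalg.eigvalsh(M).min()
# M is SPSD  <=>  the Schur complement is nonnegative
assert (schur >= 0) == (min_eig >= -1e-12)

# Since M >= 0 here, take N = D^{1/2} Q', so that M = N'N
lam, Q = np.linalg.eigh(M)
N = np.diag(np.sqrt(np.clip(lam, 0.0, None))) @ Q.T
assert np.allclose(N.T @ N, M)
```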

SDP: Think about X

Let X ∈ S^n. Think of X as:

- a matrix;
- an array of n^2 components of the form (x_11, ..., x_nn);
- an object (a vector) in the space S^n.

All three equivalent ways of looking at X will be useful.

SDP: Linear Function of X

Let X ∈ S^n. What will a linear function of X look like? If C(X) is a linear function of X, then C(X) can be written as C • X, where

C • X := Σ_{i=1}^n Σ_{j=1}^n C_ij X_ij.

There is no loss of generality in assuming that the matrix C is also symmetric.

SDP: Definition of SDP

SDP: minimize  C • X
     s.t.      A_i • X = b_i,  i = 1,...,m,
               X ⪰ 0.

"X ⪰ 0" is the same as "X ∈ S^n_+".

The data for SDP consists of the symmetric matrix C (which is the data for the objective function), the m symmetric matrices A_1, ..., A_m, and the m-vector b, which form the m linear equations.

SDP: Example

A_1 = [1 0 1]    A_2 = [0 2 8]    b = [11]    C = [1 2 3]
      [0 3 7]          [2 6 0]        [19]        [2 9 0]
      [1 7 5]          [8 0 4]                    [3 0 7]

The variable X will be the 3×3 symmetric matrix:

X = [x_11 x_12 x_13]
    [x_21 x_22 x_23]
    [x_31 x_32 x_33]

SDP: minimize  x_11 + 4x_12 + 6x_13 + 9x_22 + 0x_23 + 7x_33
     s.t.      x_11 + 0x_12 + 2x_13 + 3x_22 + 14x_23 + 5x_33 = 11
               0x_11 + 4x_12 + 16x_13 + 6x_22 + 0x_23 + 4x_33 = 19
               X ⪰ 0.

SDP: ...Example

It may be helpful to think of the constraint X ⪰ 0 as stating that each of the n eigenvalues of X must be nonnegative.
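The flattened objective and constraints above are just the inner products C • X, A_1 • X, A_2 • X written out entry by entry. The check below confirms this for the example data (the test point X is arbitrary):

```python
import numpy as np

A1 = np.array([[1.0, 0, 1], [0, 3, 7], [1, 7, 5]])
A2 = np.array([[0.0, 2, 8], [2, 6, 0], [8, 0, 4]])
C  = np.array([[1.0, 2, 3], [2, 9, 0], [3, 0, 7]])

# an arbitrary symmetric X, to compare the two ways of writing the inner product
x11, x12, x13, x22, x23, x33 = 1.0, 2.0, -1.0, 4.0, 0.5, 3.0
X = np.array([[x11, x12, x13], [x12, x22, x23], [x13, x23, x33]])

dot = lambda A, B: np.sum(A * B)         # A . B = sum_ij A_ij B_ij
assert np.isclose(dot(C, X),  x11 + 4*x12 + 6*x13 + 9*x22 + 0*x23 + 7*x33)
assert np.isclose(dot(A1, X), x11 + 0*x12 + 2*x13 + 3*x22 + 14*x23 + 5*x33)
assert np.isclose(dot(A2, X), 0*x11 + 4*x12 + 16*x13 + 6*x22 + 0*x23 + 4*x33)
```

Off-diagonal entries appear with coefficient 2 because both C and X are symmetric.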

SDP: LP as a Special Case of SDP

LP: minimize  c^T x
    s.t.      a_i^T x = b_i,  i = 1,...,m,
              x ∈ R^n_+.

Define:

A_i = diag(a_i1, a_i2, ..., a_in), i = 1,...,m, and C = diag(c_1, c_2, ..., c_n).

Then LP can be written as:

SDP: minimize  C • X
     s.t.      A_i • X = b_i,  i = 1,...,m,
               X_ij = 0, i = 1,...,n, j = i+1,...,n,
               X ⪰ 0,

with X = diag(x_1, ..., x_n), so that X ⪰ 0 reduces to x ≥ 0.

SDP Duality

SDD: maximize  Σ_{i=1}^m y_i b_i
     s.t.      Σ_{i=1}^m y_i A_i + S = C,
               S ⪰ 0.

Notice

S = C − Σ_{i=1}^m y_i A_i.

SDP Duality

...and so, equivalently:

SDD: maximize  Σ_{i=1}^m y_i b_i
     s.t.      C − Σ_{i=1}^m y_i A_i ⪰ 0.

SDP Duality: Example

A_1 = [1 0 1]    A_2 = [0 2 8]    b = [11]    C = [1 2 3]
      [0 3 7]          [2 6 0]        [19]        [2 9 0]
      [1 7 5]          [8 0 4]                    [3 0 7]

SDD: maximize  11y_1 + 19y_2
     s.t.      y_1 A_1 + y_2 A_2 + S = C,
               S ⪰ 0.

SDP Duality: Example

SDD: maximize 11y_1 + 19y_2, s.t. y_1 A_1 + y_2 A_2 + S = C, S ⪰ 0, is the same as:

SDD: maximize  11y_1 + 19y_2

     s.t.  [1 − 1y_1 − 0y_2   2 − 0y_1 − 2y_2   3 − 1y_1 − 8y_2]
           [2 − 0y_1 − 2y_2   9 − 3y_1 − 6y_2   0 − 7y_1 − 0y_2]  ⪰ 0.
           [3 − 1y_1 − 8y_2   0 − 7y_1 − 0y_2   7 − 5y_1 − 4y_2]

SDP Duality: Weak Duality

Weak Duality Theorem: Given a feasible solution X of SDP and a feasible solution (y, S) of SDD, the duality gap is

C • X − Σ_{i=1}^m y_i b_i = S • X ≥ 0.

If

C • X − Σ_{i=1}^m y_i b_i = 0,

then X and (y, S) are each optimal solutions to SDP and SDD, respectively, and furthermore, SX = 0.
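Using the example data above, we can exhibit a feasible primal-dual pair and watch the identity C • X − Σ y_i b_i = S • X hold. The particular X and y below are hand-picked feasible points chosen for illustration, not the optimal solutions:

```python
import numpy as np

A1 = np.array([[1.0, 0, 1], [0, 3, 7], [1, 7, 5]])
A2 = np.array([[0.0, 2, 8], [2, 6, 0], [8, 0, 4]])
C  = np.array([[1.0, 2, 3], [2, 9, 0], [3, 0, 7]])
b  = np.array([11.0, 19.0])

X = np.diag([0.0, 17/6, 0.5])            # primal feasible: A_i . X = b_i, X >= 0
y = np.array([-2.0, 0.0])
S = C - y[0]*A1 - y[1]*A2                # dual slack S = C - sum_i y_i A_i

assert np.isclose(np.sum(A1*X), 11) and np.isclose(np.sum(A2*X), 19)
assert np.linalg.eigvalsh(X).min() >= 0  # X >= 0
assert np.linalg.eigvalsh(S).min() >= 0  # S >= 0, so (y, S) is dual feasible

gap = np.sum(C*X) - y @ b                # duality gap; equals S . X >= 0
```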

SDP Duality: Strong Duality

Strong Duality Theorem: Let z*_P and z*_D denote the optimal objective function values of SDP and SDD, respectively. Suppose that there exists a feasible solution X̂ of SDP such that X̂ ≻ 0, and that there exists a feasible solution (ŷ, Ŝ) of SDD such that Ŝ ≻ 0. Then both SDP and SDD attain their optimal values, and z*_P = z*_D.

Some Important Weaknesses of SDP

- There may be a finite or infinite duality gap. The primal and/or dual may or may not attain their optima. Both programs will attain their common optimal value if both programs have feasible solutions that are SPD.
- There is no finite algorithm for solving SDP. There is a simplex algorithm, but it is not a finite algorithm. There is no direct analog of a basic feasible solution for SDP.

The MAX CUT Problem

M. Goemans and D. Williamson, "Improved Approximation Algorithms for Maximum Cut and Satisfiability Problems using Semidefinite Programming," J. ACM 42(6), 1995.

The MAX CUT Problem

G is an undirected graph with nodes N = {1, ..., n} and edge set E.

Let w_ij = w_ji be the weight on edge (i, j), for (i, j) ∈ E. We assume that w_ij ≥ 0 for all (i, j) ∈ E.

The MAX CUT problem is to determine a subset S of the nodes N for which the sum of the weights of the edges that cross from S to its complement S̄ is maximized (S̄ := N \ S).

The MAX CUT Problem: Formulations

Let x_j = 1 for j ∈ S and x_j = −1 for j ∈ S̄. Then:

MAXCUT: maximize_x  (1/4) Σ_{i=1}^n Σ_{j=1}^n w_ij (1 − x_i x_j)
        s.t.        x_j ∈ {−1, 1},  j = 1,...,n.

The MAX CUT Problem: Formulations

Let

Y_ij = x_i x_j,  i = 1,...,n, j = 1,...,n.

Then Y = x x^T.

The MAX CUT Problem: Formulations

Also let W be the matrix whose (i, j)th element is w_ij for i = 1,...,n and j = 1,...,n. Then:

MAXCUT: maximize_{Y,x}  (1/4) Σ_{i=1}^n Σ_{j=1}^n w_ij (1 − Y_ij)
        s.t.            x_j ∈ {−1, 1},  j = 1,...,n,
                        Y = x x^T.


The MAX CUT Problem: Formulations

The first set of constraints is equivalent to Y_jj = 1, j = 1,...,n:

MAXCUT: maximize_{Y,x}  (1/4) Σ_{i=1}^n Σ_{j=1}^n w_ij (1 − Y_ij)
        s.t.            Y_jj = 1,  j = 1,...,n,
                        Y = x x^T.

The MAX CUT Problem: Formulations

Notice that the matrix Y = x x^T is a rank-1 SPSD matrix.

The MAX CUT Problem: Formulations

We relax this condition by removing the rank-1 restriction:

RELAX: maximize_Y  (1/4) Σ_{i=1}^n Σ_{j=1}^n w_ij (1 − Y_ij)
       s.t.        Y_jj = 1,  j = 1,...,n,
                   Y ⪰ 0.

It is therefore easy to see that RELAX provides an upper bound on MAXCUT, i.e., MAXCUT ≤ RELAX.

The MAX CUT Problem: Computing a Good Solution

Let Ŷ solve RELAX. Factorize Ŷ = V̂^T V̂, where V̂ = [v̂_1 v̂_2 ··· v̂_n], so that

Ŷ_ij = (V̂^T V̂)_ij = v̂_i^T v̂_j.

The MAX CUT Problem: Computing a Good Solution

Let Ŷ solve RELAX, factorized as Ŷ = V̂^T V̂ with Ŷ_ij = v̂_i^T v̂_j as above.

Let r be a random uniform vector on the unit n-sphere. Set

S := {i : r^T v̂_i ≥ 0},  S̄ := {i : r^T v̂_i < 0}.

The MAX CUT Problem: Computing a Good Solution

Proposition:

P( sign(r^T v̂_i) ≠ sign(r^T v̂_j) ) = arccos(v̂_i^T v̂_j) / π.
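The proposition is easy to check by simulation: a uniformly random hyperplane through the origin is generated by a standard Gaussian vector r, and the fraction of draws on which the two signs differ should approach arccos(v̂_i^T v̂_j)/π (the two unit vectors below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
vi = np.array([1.0, 0.0, 0.0])
vj = np.array([0.6, 0.8, 0.0])           # unit vectors at angle arccos(0.6)

r = rng.standard_normal((200_000, 3))    # Gaussian directions ~ uniform hyperplanes
frac = np.mean(np.sign(r @ vi) != np.sign(r @ vj))

# empirical separation frequency vs. arccos(v_i' v_j) / pi
assert abs(frac - np.arccos(vi @ vj) / np.pi) < 0.01
```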

The MAX CUT Problem: Computing a Good Solution

[Figure: the vectors v̂_i and v̂_j at the origin, and the angle between them.]

The MAX CUT Problem: Computing a Good Solution

Let r be a random uniform vector on the unit n-sphere, and set S := {i : r^T v̂_i ≥ 0} and S̄ := {i : r^T v̂_i < 0}.

Let E[Cut] denote the expected value of this cut.

Theorem: E[Cut] ≥ 0.87856 · MAXCUT.
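The whole rounding scheme fits in a few lines. The sketch below does not call an SDP solver; instead it builds some feasible Ŷ = V̂^T V̂ with unit diagonal (random unit columns v̂_i, an assumption made just to have something to round), then cuts with a random hyperplane exactly as on this slide:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
W = rng.random((n, n)); W = (W + W.T) / 2
np.fill_diagonal(W, 0)                   # nonnegative symmetric weights w_ij

V = rng.standard_normal((n, n))
V /= np.linalg.norm(V, axis=0)           # unit columns => Y = V'V has Y_jj = 1

r = rng.standard_normal(n)               # random hyperplane normal
side = np.sign(r @ V)                    # r'v_i >= 0 -> S, otherwise S-bar
S = {i for i in range(n) if side[i] >= 0}

cut = sum(W[i, j] for i in range(n) for j in range(n)
          if (i in S) != (j in S)) / 2   # each edge appears twice in the sum
total = W.sum() / 2
```

With Ŷ actually optimal for RELAX, the theorem guarantees the expected cut is within the 0.87856 factor; here the construction only illustrates the rounding step itself.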

The MAX CUT Problem: Computing a Good Solution

E[Cut] = (1/2) Σ_{i,j} w_ij P( sign(r^T v̂_i) ≠ sign(r^T v̂_j) )
       = (1/2) Σ_{i,j} w_ij arccos(v̂_i^T v̂_j) / π
       = (1/2) Σ_{i,j} w_ij arccos(Ŷ_ij) / π
       = (1/(2π)) Σ_{i,j} w_ij arccos(Ŷ_ij).

The MAX CUT Problem: Computing a Good Solution

E[Cut] = (1/(2π)) Σ_{i,j} w_ij arccos(Ŷ_ij)
       = Σ_{i,j} w_ij ((1 − Ŷ_ij)/4) · ( (2/π) arccos(Ŷ_ij) / (1 − Ŷ_ij) )
       ≥ ( min_{−1≤t≤1} (2/π) arccos(t) / (1 − t) ) · Σ_{i,j} w_ij (1 − Ŷ_ij)/4
       = ( min_{0≤θ≤π} (2/π) θ / (1 − cos θ) ) · RELAX
       ≥ 0.87856 · RELAX.

The MAX CUT Problem: Computing a Good Solution

So we have:

MAXCUT ≥ E[Cut] ≥ 0.87856 · RELAX ≥ 0.87856 · MAXCUT.

This is an impressive result, in that it states that the value of the semidefinite relaxation is guaranteed to be no more than 13.8% higher than the value of the NP-hard problem MAXCUT.
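The constant in the theorem is the minimum appearing on the previous slide; a quick grid search reproduces it (pure NumPy, no solver):

```python
import numpy as np

theta = np.linspace(1e-4, np.pi, 1_000_000)
ratio = 2 * theta / (np.pi * (1 - np.cos(theta)))
alpha = ratio.min()                       # the Goemans-Williamson constant, ~0.8786
```

The minimum is attained in the interior of (0, π), away from the θ → 0 singularity of the ratio.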

The Logarithmic Barrier Function for SPD Matrices

Let X ⪰ 0, equivalently X ∈ S^n_+. X will have n nonnegative eigenvalues, say λ_1(X), ..., λ_n(X) ≥ 0 (possibly counting multiplicities).

The boundary of the cone is

∂S^n_+ = {X ∈ S^n : λ_j(X) ≥ 0, j = 1,...,n, and λ_j(X) = 0 for some j ∈ {1,...,n}}.

The Logarithmic Barrier Function for SPD Matrices

A natural barrier function is:

B(X) := −Σ_{j=1}^n ln(λ_j(X)) = −ln( Π_{j=1}^n λ_j(X) ) = −ln(det(X)).

This function is called the log-determinant function or the logarithmic barrier function for the semidefinite cone.

The Logarithmic Barrier Function for SPD Matrices

B(X) = −ln(det(X)) has the quadratic Taylor expansion at X = X̄:

B(X̄ + αD) ≈ B(X̄) − α X̄^{-1} • D + (1/2) α² (X̄^{-1/2} D X̄^{-1/2}) • (X̄^{-1/2} D X̄^{-1/2}).

B(X) has the same remarkable properties in the context of interior-point methods for SDP as the barrier function −Σ_{j=1}^n ln(x_j) does in the context of linear optimization.
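The first-order term of this expansion says the gradient of B(X) = −ln det(X) is −X^{-1}. A finite-difference check along a symmetric direction D confirms it (the SPD point X̄ and direction D below are arbitrary):

```python
import numpy as np

B = lambda X: -np.log(np.linalg.det(X))

Xbar = np.array([[2.0, 0.3], [0.3, 1.0]])    # an SPD point
D = np.array([[0.5, -0.2], [-0.2, 0.1]])     # a symmetric direction

h = 1e-6
fd = (B(Xbar + h*D) - B(Xbar - h*D)) / (2*h)         # central difference
first_order = -np.trace(np.linalg.solve(Xbar, D))    # -Xbar^{-1} . D

assert abs(fd - first_order) < 1e-6
```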

Interior-Point Methods for SDP: Primal and Dual SDP

SDP: minimize  C • X
     s.t.      A_i • X = b_i,  i = 1,...,m,
               X ⪰ 0,

and

SDD: maximize  Σ_{i=1}^m y_i b_i
     s.t.      Σ_{i=1}^m y_i A_i + S = C,
               S ⪰ 0.

If X and (y, S) are feasible for the primal and the dual, the duality gap is:

C • X − Σ_{i=1}^m y_i b_i = S • X ≥ 0.

Also,

S • X = 0  ⟺  SX = 0.

Interior-Point Methods for SDP: Primal and Dual SDP

Recall B(X) = −ln(det(X)), and consider:

BSDP(µ): minimize  C • X − µ ln(det(X))
         s.t.      A_i • X = b_i,  i = 1,...,m,
                   X ≻ 0.

Let f_µ(X) denote the objective function of BSDP(µ). Then:

∇f_µ(X) = C − µ X^{-1}.

Interior-Point Methods for SDP: Primal and Dual SDP

The Karush-Kuhn-Tucker conditions for BSDP(µ) are:

A_i • X = b_i,  i = 1,...,m,  X ≻ 0,
C − µ X^{-1} = Σ_{i=1}^m y_i A_i.

Interior-Point Methods for SDP: Primal and Dual SDP

In the system

A_i • X = b_i,  i = 1,...,m,  X ≻ 0,
C − µ X^{-1} = Σ_{i=1}^m y_i A_i,

define

S = µ X^{-1},

which implies

XS = µI,

Interior-Point Methods for SDP: Primal and Dual SDP

...and rewrite the KKT conditions as:

A_i • X = b_i,  i = 1,...,m,  X ≻ 0,
Σ_{i=1}^m y_i A_i + S = C,
XS = µI.

Interior-Point Methods for SDP: Primal and Dual SDP

If (X, y, S) is a solution of this system, then X is feasible for SDP, (y, S) is feasible for SDD, and the resulting duality gap is

S • X = Σ_{i=1}^n Σ_{j=1}^n S_ij X_ij = Σ_{j=1}^n (SX)_jj = Σ_{j=1}^n (µI)_jj = nµ.


Interior-Point Methods for SDP: Primal and Dual SDP

This suggests that we try solving BSDP(µ) for a variety of values of µ as µ → 0.

Interior-point methods for SDP are very similar to those for linear optimization, in that they use Newton's method to solve the KKT system as µ → 0.

Website for SDP

A good website for semidefinite programming is: helmberg/semidef.html

Differential Evolution: a stochastic nonlinear optimization algorithm by Storn and Price, 1996

Presented by David Craft, September 15, 2003.

This presentation is based on: Storn, Rainer, and Kenneth Price. "Differential Evolution: A Simple and Efficient Heuristic for Global Optimization over Continuous Spaces." Journal of Global Optimization 11, 1997.

The Highlights of Differential Evolution (DE)

- A population of solution vectors is successively updated by addition, subtraction, and component swapping, until the population converges, hopefully to the optimum.
- No derivatives are used.
- Very few parameters to set.
- A simple and apparently very reliable method.

DE: The Algorithm

Start with NP randomly chosen solution vectors. For each i in (1, ..., NP), form a mutant vector

v_i = x_r1 + F · (x_r2 − x_r3),

where r1, r2, and r3 are three mutually distinct randomly drawn indices from (1, ..., NP), also distinct from i, and 0 < F ≤ 2.

DE: Forming the Mutant Vector

v_i = x_r1 + F · (x_r2 − x_r3).

[Figure: the points x_i, x_r1, x_r2, x_r3 in solution space, with the scaled difference F · (x_r2 − x_r3) added to x_r1 to produce v_i.]

DE: From Old Points to Mutants

[Figure omitted.]

DE: Crossover of x_i and v_i to Form the Trial Vector

[Figure: the original vector x, the mutant vector v, and the possible trial vectors between them.]

DE: Crossover of x_i and v_i to Form the Trial Vector u_i

x_i = (x_i1, x_i2, x_i3, x_i4, x_i5)
v_i = (v_i1, v_i2, v_i3, v_i4, v_i5)
u_i = ( · ,  · ,  · ,  · ,  · )

For each component j of the vector, draw a random number rand_j in U[0,1]. Let 0 ≤ CR < 1 be a cutoff. If rand_j ≤ CR, then u_ij = v_ij; else u_ij = x_ij.

To ensure at least some crossover, one component of u_i is selected at random to be from v_i.

DE: Crossover of x_i and v_i to Form the Trial Vector u_i

x_i = (x_i1, x_i2, x_i3, x_i4, x_i5)
v_i = (v_i1, v_i2, v_i3, v_i4, v_i5)

So, for example, maybe we have

u_i = (v_i1, x_i2, x_i3, x_i4, v_i5):

index 1 was randomly selected as the definite crossover, and rand_5 ≤ CR, so component 5 crossed over too.

DE: Selection

If the objective value COST(u_i) is lower than COST(x_i), then u_i replaces x_i in the next generation. Otherwise, we keep x_i.
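The mutation, crossover, and selection steps above assemble into a compact loop. Below is a minimal DE sketch in NumPy; the sphere objective and all parameter values are our own choices for illustration:

```python
import numpy as np

def differential_evolution(cost, bounds, NP=20, F=0.8, CR=0.9, gens=200, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    D = len(lo)
    pop = lo + rng.random((NP, D)) * (hi - lo)       # NP random solution vectors
    costs = np.array([cost(x) for x in pop])
    for _ in range(gens):
        for i in range(NP):
            # three mutually distinct indices, also distinct from i
            r1, r2, r3 = rng.choice([j for j in range(NP) if j != i], 3, replace=False)
            v = pop[r1] + F * (pop[r2] - pop[r3])    # mutant vector
            v = np.clip(v, lo, hi)                   # project back into the box
            u = pop[i].copy()                        # crossover: build trial vector
            mask = rng.random(D) <= CR
            mask[rng.integers(D)] = True             # one definite crossover index
            u[mask] = v[mask]
            cu = cost(u)
            if cu < costs[i]:                        # selection
                pop[i], costs[i] = u, cu
    k = int(np.argmin(costs))
    return pop[k], costs[k]

sphere = lambda x: float(np.sum(x * x))
best_x, best_f = differential_evolution(sphere, [(-5, 5)] * 3)
```

On this easy 3-dimensional sphere function, the population collapses onto the origin well within the 200 generations used here.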

Numerical Verification

Much of the paper is devoted to trying the algorithm on many functions, and comparing the algorithm to representative algorithms of other classes. These classes are:

- Annealing algorithms
- Evolutionary algorithms
- The method of stochastic differential equations

Summary of tests: DE is the only algorithm that consistently found the optimal solution, often with fewer function evaluations than the other methods.

Numerical Verification: Example

The fifth De Jong function, or Shekel's Foxholes. (See equation 10 on page 348 of the Differential Evolution paper.)

The Rest of the Talk

- Why is DE good?
- Variations of DE.
- How do we deal with constraints?
- An example from electricity load management.

Why Is DE Good?

Simple vector subtraction, (x_r2 − x_r3), generates the random search direction.

More variation in the population (because the solution has not converged yet) leads to a more varied search over the solution space.

Annealing versus self-annealing. [Discuss: size and direction.]

Variations of DE

- x_r1: instead of a random vector, could use the best vector.
- (x_r2 − x_r3): instead of a single difference, could use more vectors for more variation, for example (x_r2 − x_r3 + x_r4 − x_r5).
- Crossover: something besides Bernoulli trials.

Dealing with Constraints

- Penalty methods for difficult constraints.
- Simple projection back to the feasible set for l ≤ x ≤ u type constraints. Or, draw a random value in U[l, u] (when? why?).

Example: Appliance Job Scheduling

Hourly electricity prices (cents/kWh). Power requirements for 3 different jobs (kW). Start time constraints. [Figures omitted.]

Example: Appliance Job Scheduling

Objective: find start times for each job which minimize cost. Cost includes a charge on the maximum power used throughout the day. This couples the problems!

min_x  Σ_{i=1}^J t_i(x_i) + D(x)
s.t.   a_i ≤ x_i ≤ u_i,  i = 1,...,J,

where

t_i(x_i) = ∫_{x_i}^{x_i + l_i} p(t) e_i(t, x_i) dt   (cost of job i started at time x_i),

D(x) = r · max_{t ∈ [0,T]} Σ_i e_i(t, x_i)   (demand charge).

Convergence for Different F

Other settings: CR = 0.3, NP = 6. [Figure omitted.]

Appliance Job Scheduling: Solution

[Figures: the solution, the total energy profile, and the electricity price over time.]

Wrap-up

DE is widely used, easy to implement, and extensions and variations are available, but there are no convergence proofs.

More information:

- DE homepage: practical advice (e.g. start with NP = 10·D, CR = 0.9, F = 0.8), source codes, etc.
- DE bibliography: almost entirely DE applications.

I. Integer Programming Part of the Clarkson Paper
II. Incremental Linear Programming, a Section of the Randomized Algorithms Book

Presented by Jan De Mot, September 29, 2003.

This presentation is based on: Clarkson, Kenneth L. "Las Vegas Algorithms for Linear and Integer Programming When the Dimension is Small." Journal of the ACM 42(2), March 1995. Preliminary version in Proceedings of the 29th Annual IEEE Symposium on Foundations of Computer Science. And Chapter 9 of: Motwani, Rajeev, and Prabhakar Raghavan. Randomized Algorithms. Cambridge, UK: Cambridge University Press.

Outline

Part I: Integer Linear Programming (ILP)
- Previous work
- Algorithm for solving Integer Linear Programs [Clarkson 1995], based on the mixed algorithm for LP (Susan)
- Concept
- Running time analysis

Part II: Incremental Linear Programming
- Concept
- SeideLP [Seidel 1991]
- BasisLP [Sharir and Welzl 1992]

Part I: Integer Linear Programming

Previous Work

[Lenstra 1983] showed how to solve an ILP in polynomial time when the number of variables is fixed. Subsequent improvements (e.g. by [Frank and Tardos 1987]) show that the fastest deterministic algorithm requires … operations on …-bit numbers.

Running time of the new ILP algorithm: …. This is substantially faster than Lenstra's for ….

The ILP Problem

Find the optimum of: … where … and ….

Notation and Preliminaries

Let … denote the set of constraints defined by … and …, and let … denote the optimal solution of the ILP defined on … (not the corresponding LP relaxation).

Assume:
- A bounded solution, by adding to … a new set of constraints …, where … and where we use a result by [Schrijver 1986]: if an ILP has a finite solution, then every coordinate of that optimum has size no more than …, where … is the facet complexity of ….
- A unique solution, by choosing the lexicographically largest point achieving the optimum value.

ILP Algorithm: Concept

First it is established that an optimum is determined by a small set ([Bell 1977] and [Scarf 1977]):

Lemma: There is a set … with … and with ….

ILP algorithms are variations on the LP algorithms, with sample sizes using … rather than …, and using Lenstra's algorithm in the base case. Here, we convert the mixed algorithm for LPs to a mixed algorithm for ILPs, establishing the right sample sizes and criteria for successful iterations in both the recursive and iterative parts of the mixed algorithm.

ILP Algorithm: Details

Lemma 2, related to the LP recursive algorithm, needs to be redone, due to the fact that … is not unique. Reminder: why do we need Lemma 2? We want to make sure the set of violated constraints does not become too big.

Lemma 2 (ILP version): Let … and let … be a random subset of size …. Let … be the set of constraints violated by …. Then … with probability ….

The other necessary lemmas remain valid or can be adapted easily, yielding the following essential parameters for the ILP mixed algorithm:
- Recursive part: use Lenstra's algorithm for … and require … for a successful iteration.
- Iterative part: …, with a corresponding bound of ….

ILP Algorithm: Proof of Lemma 2 (ILP version)

Proof. Lemma 2 (ILP version): … with probability ….

Assume … is empty (for … not empty the proof is similar). Let …, and let … denote the number of constraints in … violated by …. We know that … for some … with …. We want to find … such that the probability that … is less than …. This probability is bounded above by …, which is no more than ….

ILP Algorithm: Proof of Lemma 2 (cont'd)

...which is again no more than …, and, using elementary bounds, this quantity is less than … for ….

ILP Algorithm: Running Time

We have the following theorem: the ILP algorithm requires an expected … row operations on …-bit vectors, and an expected … operations on …-bit numbers, as …, where the constant factors do not depend on … or ….

Part II: Incremental Linear Programming

Incremental LP

Randomized incremental algorithms for LP. Concept: add constraints in random order; after adding each constraint, determine the optimum of the constraints added so far.

Two algorithms will be discussed:
- SeideLP
- BasisLP

Algorithm SeideLP

Input: A set of constraints H.
Output: The optimum of the LP defined by H.

0. If |H| = d, output the optimum determined by the d constraints.
1. Pick a random constraint h ∈ H. Recursively find the optimum x* of H \ {h}.
2.1. If x* does not violate h, output x* to be the optimum.
2.2. Else, project all the constraints of H \ {h} onto h and recursively solve this new linear programming problem.

SeideLP: Running Time

Let T(n, d) denote an upper bound on the expected running time for a problem with n constraints in d dimensions. Then:

T(n, d) ≤ T(n − 1, d) + O(d) + (d/n) · ( O(dn) + T(n − 1, d − 1) ).

- First term: cost of recursively solving the LP defined by the constraints H \ {h}.
- Second term: checking whether x* violates h.
- Third term (with probability d/n): cost of projecting + recursively solving the smaller LP.

Theorem: There is a constant b such that the recurrence has the solution T(n, d) ≤ b · d! · n.

SeideLP: Further Discussion

In step 2.2 we completely discard any information obtained from the solution of the LP on H \ {h}. From the figure, it follows that we must consider all the constraints in the recursive call. But: can we use the solution of H \ {h} to jump-start the recursive call in step 2.2?

Result: Algorithm BasisLP.

Algorithm BasisLP

Input: a set of constraints G and a basis T ⊆ G.
Output: A basis for G.

0. If G = T, output T.
1. Pick a random constraint h ∈ G \ T, and compute B ← BasisLP(G \ {h}, T).
2.1. If h does not violate B, output B.
2.2. Else, output BasisLP(G, Basis(B ∪ {h})).

Basis returns a basis for a set of d + 1 or fewer constraints.

BasisLP: Why Does It Work?

Each invocation of Basis occurs when the violation test in 2.1 fails (i.e., h does violate the current basis). What is the probability that we fail a violation test?

Let …. Remember: Pr(h violates the optimum of …) ≤ ….

This probability decreases further if … contains some of the constraints of …. This was indeed the motivation for modifying SeideLP into BasisLP.

BasisLP: Running Time

Notation: Given …, we call a constraint enforcing in … if …. Let … denote d minus the number of constraints that are enforcing in …; it is called the hidden dimension of ….

Lemma 1: If h is enforcing in …, then (i) …, and (ii) h is extreme in all … such that ….

So, the probability that a violation occurs can be bounded by …. We establish that the hidden dimension decreases by at least 1 at each recursive call in step 2.2. It turns out that it is likely to decrease much faster.

Theorem: The expected running time of BasisLP is ….

BasisLP: Analysis Details

Proof of Lemma 1. If h is enforcing in …, then:

(i) We have …, which cannot be true if … were a subset of ….
(ii) h is extreme in all … such that …. Assume the contrary: …, a contradiction.

BasisLP: Analysis Details (cont'd)

Lemma 2: Let …, and let h be an extreme constraint in …. Let … be a basis of …. Then: (i) any constraint that is enforcing in … is also enforcing in …; (ii) h is enforcing in …; (iii) ….

Proof: (i) …; then …. (ii) Since h is extreme in …. (iii) Follows readily.

So, the numerator of … decreases by at least 1 at each execution.

BasisLP: Analysis Details (cont'd)

Show that this decrease is likely to be faster. Given … and a random …, we bound the probability that … violates …. If it does, we examine the probability distribution of the resulting hidden dimension.

Lemma 3: Let … be the extreme constraints of … that are not in …, numbered so that …. Then, for all … and for …, … is enforcing in Basis(…). (Proof: immediate from Lemma 2.)

In other words: when …, all of … will be enforcing, and the arguments of the recursive call will have hidden dimension ….

Observation: since any … is equally likely to be …, … is uniformly distributed on the integers in …, and the resulting hidden dimension is uniformly distributed on the integers in ….

BasisLP: Analysis Details (cont'd)

Let … denote the maximum expected number of violation tests for a call to BasisLP with arguments …, where … and …. We get: …. This yields: …, and consequently the expected running time of BasisLP is ….

Augmenting the analysis with Clarkson's sampling technique improves the running time of the mixed algorithm to ….

Las Vegas Algorithms for Linear (and Integer) Programming when the Dimension is Small

Kenneth L. Clarkson. Presented by Susan Martonosi, September 29, 2003.

This presentation is based on: Clarkson, Kenneth L. "Las Vegas Algorithms for Linear and Integer Programming When the Dimension is Small." Journal of the ACM 42(2), March 1995. Preliminary version in Proceedings of the 29th Annual IEEE Symposium on Foundations of Computer Science, 1988.

Outline

- Applications of the algorithm
- Previous work
- Assumptions and notation
- Algorithm 1: Recursive Algorithm
- Algorithm 2: Iterative Algorithm
- Algorithm 3: Mixed Algorithm
- Contribution of this paper to the field

Applications of the Algorithms

The algorithms give a bound that is good in n (the number of constraints), but bad in d (the dimension). So we require the problem to have a small dimension.

- Chebyshev approximation: fitting a function by a rational function where both the numerator and denominator have relatively small degree. The dimension is the sum of the degrees of the numerator and denominator.
- Linear separability: separating two sets of points in d-dimensional space by a hyperplane.
- Smallest enclosing circle problem: find a circle of smallest radius that encloses points in d-dimensional space.

Previous Work

- Megiddo: deterministic algorithm for LP in O(2^(2^d) n).
- Clarkson; Dyer: O(3^(d^2) n).
- Dyer and Frieze: randomized algorithm with expected time no better than O(d^(3d) n).
- This paper's mixed algorithm: expected time O(d^2 n) + (log n) O(d)^(d/2 + O(1)) + O(d^4 √n log n) as n → ∞.

Assumptions

- Minimize x_1 subject to Ax ≤ b.
- The polyhedron F(A, b) is non-empty and bounded, and 0 ∈ F(A, b).
- The minimum we seek occurs at a unique point, which is a vertex of F(A, b).
- If a problem is bounded and has multiple optimal solutions with optimal value x*_1, choose the one with the minimum Euclidean norm: min{ ‖x‖_2 : x ∈ F(A, b), x_1 = x*_1 }.
- Each vertex of F(A, b) is defined by d or fewer constraints.

Notation

Let:
- H denote the set of constraints defined by A and b.
- O(S) be the optimal value of the objective function for the LP defined on S ⊆ H.
- "Each vertex of F(A, b) is defined by d or fewer constraints" implies that there exists B(H) ⊆ H of size d or less such that O(B(H)) = O(H). We call this subset B(H) the basis of H. All other constraints in H \ B(H) are redundant.
- A constraint h ∈ H is called extreme if O(H \ h) < O(H) (these are the constraints in B(H)).

Algorithm 1: Recursive

- Try to eliminate redundant constraints.
- Once our problem has a small number of constraints (n ≤ 9d^2), use Simplex to solve it.
- Build up a smaller set S of constraints that eventually includes all of the extreme constraints and a small number of redundant constraints:
  - Choose r = d√n unchosen constraints of H \ S at random.
  - Recursively solve the problem on the subset of constraints R ∪ S.
  - Determine which remaining constraints (V) are violated by this optimal solution.
  - Add V to S if it's not too big (|V| ≤ 2√n). Otherwise, if V is too big, pick r new constraints.
- We stop once V is empty: we've found a set S ∪ R such that no other constraints in H are violated by its optimal solution. This optimal solution x* is thus optimal for the original problem.

109 Recursive Algorithm
Input: a set of constraints H. Output: the optimum of B(H).
1. S ← ∅; C_d ← 9d^2
2. If n ≤ C_d, return Simplex(H)
2.1 else repeat:
    choose R ⊆ H \ S at random, with |R| = r = d√n
    x* ← Recursive(R ∪ S)
    V ← {h ∈ H : the vertex defined by x* violates h}
    if |V| ≤ 2√n then S ← S ∪ V
    until V = ∅
2.2 return x*
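The recursive procedure above can be sketched in runnable form. The following is an illustrative Python translation for d = 2 (minimize x_1 subject to Ax ≤ b), where the Simplex base case is replaced by a brute-force search over vertex-defining constraint pairs, and ties are broken lexicographically rather than by Euclidean norm; all function names and tolerances are our own.

```python
import itertools
import math
import random

def base_solve(cons):
    """Brute-force stand-in for Simplex in d = 2: try every pair of
    constraints as the vertex defining the optimum of min x1, Ax <= b."""
    best = None
    for (a1, b1), (a2, b2) in itertools.combinations(cons, 2):
        det = a1[0] * a2[1] - a1[1] * a2[0]
        if abs(det) < 1e-12:
            continue  # parallel constraint pair defines no vertex
        x = ((b1 * a2[1] - b2 * a1[1]) / det,
             (a1[0] * b2 - a2[0] * b1) / det)
        if all(a[0] * x[0] + a[1] * x[1] <= b + 1e-9 for a, b in cons) \
                and (best is None or x < best):
            best = x  # lexicographic: min x1, then min x2 (simplified tie-break)
    return best

def recursive(H, d=2):
    n = len(H)
    if n <= 9 * d * d:
        return base_solve(H)
    S = []
    r = int(d * math.sqrt(n))
    while True:
        pool = [h for h in H if h not in S]
        R = random.sample(pool, min(r, len(pool)))
        x = recursive(R + S, d)
        V = [(a, b) for (a, b) in H if a[0] * x[0] + a[1] * x[1] > b + 1e-9]
        if not V:
            return x          # nothing violated: x is optimal for H
        if len(V) <= 2 * math.sqrt(n):
            S += V            # successful iteration: S grows by <= 2*sqrt(n)

random.seed(1)
# 100 halfplanes tangent to the unit circle; the minimum of x1 is ~ -1.
H = [((math.cos(t), math.sin(t)), 1.0)
     for t in (2 * math.pi * i / 100 for i in range(100))]
x = recursive(H)
print(round(x[0], 2))  # -1.0
```

Being a Las Vegas algorithm, it always returns the true optimum when it terminates; only the running time is random.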

110 Recursive Algorithm: Proof Roadmap
Questions: How do we know that S does not get too large before it contains all the extreme constraints? How do we know we will find a set of violated constraints V that is not too big (i.e., that the loop terminates quickly)?
Roadmap:
- Lemma 1. If the set V is nonempty, then it contains a constraint of B(H).
- Lemma 2. Let S ⊆ H and let R ⊆ H \ S be a random subset of size r, with |H \ S| = m. Let V ⊆ H be the set of constraints violated by O(R ∪ S). Then the expected size of V is no more than d(m − r + 1)/(r − d).
We will use these to show the following lemma:

111 Lemma 3. The probability that any given execution of the loop body is successful (|V| ≤ 2√n for this recursive version of the algorithm) is at least 1/2, so on average two executions or fewer are required to obtain a successful one.
This will leave us with a running time of T(n, d) ≤ 2(d + 1)·T(3d√n, d) + O(d^2 n) for n > 9d^2.

112 Recursive Algorithm: Proof of Lemma 1
Proof. Lemma 1: when V is nonempty, it contains a constraint of B(H). Suppose on the contrary that V contains no constraint of B(H).
Say x ⪯ y if (x_1, ||x||) ≤ (y_1, ||y||) lexicographically (x is at least as good as y), and let x*(T) be the optimal solution over a set of constraints T.
Then x*(R ∪ S) satisfies all the constraints of B(H) (it is feasible for B(H)), and thus x*(B(H)) ⪯ x*(R ∪ S). However, since R ∪ S ⊆ H, we know that x*(R ∪ S) ⪯ x*(H) = x*(B(H)). Thus x*(R ∪ S) has the same objective function value and norm as x*(B(H)). By the uniqueness of this point, x*(R ∪ S) = x*(B(H)) = x*(H), and V = ∅. Contradiction!
So every time V is added to S, at least one extreme constraint of H is added (hence this happens at most d times).

113 Recursive Algorithm: Proof of Lemma 2
Proof. Lemma 2: the expected size of V is no more than d(m − r + 1)/(r − d).
First assume the problem is nondegenerate. Let C_H = {x*(T ∪ S) : T ⊆ H \ S}, a set of subproblem optima, and let C_R = {x*(T ∪ S) : T ⊆ R}.
The call Recursive(R ∪ S) returns x*(R ∪ S): an element of C_H, and the unique element of C_R satisfying every constraint in R.

114 Recursive Algorithm: Proof of Lemma 2
Choose x ∈ C_H and let v_x = the number of constraints in H violated by x. Then
E[|V|] = E[ Σ_{x ∈ C_H} v_x · I(x = x*(R ∪ S)) ] = Σ_{x ∈ C_H} v_x P_x,
where P_x = P(x = x*(R ∪ S)) and I(x = x*(R ∪ S)) = 1 if x = x*(R ∪ S), and 0 otherwise.
How do we find P_x?

115 Recursive Algorithm: Proof of Lemma 2
Let N = the number of subsets of H \ S of size r such that x*(subset) = x*(R ∪ S). Then N = C(m, r)·P_x, i.e., P_x = N / C(m, r).
To find N, note that x*(subset) ∈ C_H, and x*(subset) = x*(R ∪ S) only if x*(subset) ∈ C_R as well and x*(subset) satisfies all constraints of R.
Therefore N = the number of subsets of H \ S of size r such that x*(subset) ∈ C_R and x*(subset) satisfies all constraints of R.

116 Recursive Algorithm: Proof of Lemma 2
For such a subset of H \ S of size r with x*(subset) = x*(R ∪ S), let T be the minimal set of constraints such that x*(subset) = x*(T ∪ S).
- x*(subset) ∈ C_R implies T ⊆ R.
- Nondegeneracy implies T is unique and |T| ≤ d.
Let i_x = |T|. In order to have x*(T ∪ S) = x*(R ∪ S) (and thus x*(subset) = x*(R ∪ S)), when constructing our subset we must choose: the i_x constraints of T, and r − i_x constraints from (H \ S) \ T that x does not violate.

117 Therefore N = C(m − v_x − i_x, r − i_x) and P_x = C(m − v_x − i_x, r − i_x) / C(m, r).
Since C(m − v_x − i_x, r − i_x) = C(m − v_x − i_x, r − i_x − 1)·(m − v_x − r + 1)/(r − i_x) ≤ C(m − v_x − i_x, r − i_x − 1)·(m − r + 1)/(r − d), we get
E[|V|] = Σ_{x ∈ C_H} v_x P_x ≤ (m − r + 1)/(r − d) · Σ_{x ∈ C_H} v_x · C(m − v_x − i_x, r − i_x − 1)/C(m, r),
where the last sum is E[number of x ∈ C_R violating exactly one constraint in R] ≤ d. Hence E[|V|] ≤ d(m − r + 1)/(r − d).
For the degenerate case, we can perturb the vector b by adding (ε, ε^2, ..., ε^n) and show that the bound on |V| holds for the perturbed problem, and that the perturbed problem has at least as many violated constraints as the original degenerate problem.
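The bound of Lemma 2 can be sanity-checked numerically in the simplest setting, d = 1 with S = ∅, where the subproblem optimum is just the maximum sampled threshold; the instance sizes below are arbitrary.

```python
import random

# Monte Carlo check of Lemma 2 for d = 1, S = {}: minimize x subject to
# x >= b_h for each h.  The optimum over a random subset R is
# max(b_h : h in R), and V is the set of constraints that optimum violates.
# Lemma 2 predicts E[|V|] <= d(m - r + 1)/(r - d) = (m - r + 1)/(r - 1).
random.seed(0)
m, r, trials = 1000, 30, 5000
total = 0
for _ in range(trials):
    b = [random.random() for _ in range(m)]
    x = max(random.sample(b, r))    # optimum of the sampled subproblem
    total += sum(v > x for v in b)  # |V| = constraints violated by x
avg, bound = total / trials, (m - r + 1) / (r - 1)
print(round(avg, 1), "<=", round(bound, 1))
```

The empirical mean lands near (m − r)/(r + 1), comfortably under the lemma's bound of (m − r + 1)/(r − 1).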

118 Recursive Algorithm: Proof of Lemma 3
Proof. Lemma 3: P(successful execution) ≥ 1/2; E[executions until first success] ≤ 2.
Here P(unsuccessful execution) = P(|V| > 2√n). Since r = d√n, Lemma 2 gives E[|V|] ≤ d(m − r + 1)/(r − d) ≤ (n − d√n + 1)/(√n − 1) ≤ √n, so 2√n ≥ 2E[|V|].
So P(unsuccessful execution) = P(|V| > 2√n) ≤ P(|V| > 2E[|V|]) ≤ 1/2, by the Markov inequality.
Hence P(successful execution) ≥ 1/2, and the expected number of loop executions until our first successful execution is less than 2.

119 Recursive Algorithm: Running Time
As long as n > 9d^2:
- There are at most d + 1 augmentations of S (successful iterations), with an expected 2 tries until each success.
- With each success, S grows by at most 2√n, since |V| ≤ 2√n.
- After each success, we run the Recursive algorithm on a problem of size |S ∪ R| ≤ 2d√n + d√n = 3d√n.
- After each recursive call, we check for violated constraints, which takes O(nd), each of at most d + 1 times.
T(n, d) ≤ 2(d + 1)·T(3d√n, d) + O(d^2 n), for n > 9d^2.

120 Algorithm 2: Iterative
- Does not call itself; calls Simplex directly each time.
- Associates a weight w_h with each constraint, which determines the probability with which it is selected.
- Each time a constraint is violated, its weight is doubled.
- Does not add V to a set S; rather, R (of size 9d^2) is reselected over and over until it includes the set B(H).

121 Algorithm 2: Iterative
Input: a set of constraints H. Output: the optimum of B(H).
1. For all h ∈ H, w_h ← 1; C_d ← 9d^2
2. If n ≤ C_d, return Simplex(H)
2.1 else repeat:
    choose R ⊆ H at random with |R| = r = C_d, each h selected with probability proportional to w_h
    x* ← Simplex(R)
    V ← {h ∈ H : the vertex defined by x* violates h}
    if w(V) ≤ 2w(H)/(9d − 1) then for all h ∈ V, w_h ← 2w_h
    until V = ∅
2.2 return x*
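As with the recursive version, the iterative procedure can be sketched in runnable form. This is an illustrative Python rendering for d = 2, again with Simplex replaced by a brute-force vertex search and lexicographic tie-breaking; the weighted sampling of distinct constraints uses a simple rejection scheme, which only approximates the multiplicity-based sampling in the paper's analysis.

```python
import itertools
import math
import random

def base_solve(cons):
    """Brute-force stand-in for Simplex in d = 2 (min x1 s.t. Ax <= b)."""
    best = None
    for (a1, b1), (a2, b2) in itertools.combinations(cons, 2):
        det = a1[0] * a2[1] - a1[1] * a2[0]
        if abs(det) < 1e-12:
            continue  # parallel pair: no vertex
        x = ((b1 * a2[1] - b2 * a1[1]) / det,
             (a1[0] * b2 - a2[0] * b1) / det)
        if all(a[0] * x[0] + a[1] * x[1] <= b + 1e-9 for a, b in cons) \
                and (best is None or x < best):
            best = x
    return best

def iterative(H, d=2):
    n, r = len(H), 9 * d * d
    if n <= r:
        return base_solve(H)
    w = [1] * n                      # one weight per constraint
    while True:
        # Weighted sample of r distinct constraints (rejection scheme).
        idx = set()
        while len(idx) < r:
            idx.add(random.choices(range(n), weights=w)[0])
        x = base_solve([H[i] for i in idx])
        V = [i for i, (a, b) in enumerate(H)
             if a[0] * x[0] + a[1] * x[1] > b + 1e-9]
        if not V:
            return x
        if sum(w[i] for i in V) <= 2 * sum(w) / (9 * d - 1):
            for i in V:              # successful iteration: double the
                w[i] *= 2            # weights of the violated constraints

random.seed(2)
# Same test instance as before: 100 tangents to the unit circle.
H = [((math.cos(t), math.sin(t)), 1.0)
     for t in (2 * math.pi * i / 100 for i in range(100))]
x = iterative(H)
print(round(x[0], 2))  # -1.0
```

The doubling drives the weights of the extreme constraints up until a sample of size 9d^2 is very likely to contain all of B(H).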

122 Iterative Algorithm: Analysis
Lemma 1 ("If the set V is nonempty, then it contains a constraint of B(H)") still holds; the proof is as above with S = ∅.
Lemma 2 ("Let S ⊆ H and let R ⊆ H \ S be a random subset of size r, with |H \ S| = m; then the expected size of V is no more than d(m − r + 1)/(r − d)") still holds with the following changes. Consider each weight-doubling as the creation of multinodes, so the size of a set is actually its weight. We have S = ∅, and thus |H \ S| = m = w(H). This gives E[w(V)] ≤ d(w(H) − 9d^2 + 1)/(9d^2 − d) ≤ w(H)/(9d − 1).
Lemma 3: if we define a successful iteration to be one with w(V) ≤ 2w(H)/(9d − 1), then Lemma 3 holds: the probability that any given execution of the loop body is successful is at least 1/2, so on average two executions or fewer are required to obtain a successful one.

123 Iterative Algorithm: Running Time
The Iterative Algorithm runs in O(d^2 n log n) + (d log n)·O(d)^(d/2+o(1)) expected time as n → ∞, where the constant factors do not depend on d.
Start by showing that the expected number of loop iterations is O(d log n):
- By Lemma 1, at least one extreme constraint h ∈ B(H) is doubled during each successful iteration.
- Let d' = |B(H)|. After kd' successful executions, w(B(H)) = Σ_{h ∈ B(H)} 2^{n_h}, where n_h is the number of times h entered V; thus Σ_{h ∈ B(H)} n_h ≥ kd', and w(B(H)) = Σ_{h ∈ B(H)} 2^{n_h} ≥ d'·2^k.
- When the members of V are doubled, the increase in w(H) is w(V) ≤ 2w(H)/(9d − 1), so after kd' successful iterations we have w(H) ≤ n(1 + 2/(9d − 1))^{kd'} ≤ n·e^{2kd'/(9d−1)}.

124 The loop must terminate before w(B(H)) > w(H) could hold, since B(H) ⊆ H. Comparing the bounds d'·2^k ≤ w(B(H)) and w(H) ≤ n·e^{2kd'/(9d−1)}, termination must occur once
k > ln(n/d') / (ln 2 − 2d/(9d − 1)),
i.e., after kd' = O(d log n) successful iterations, hence O(d log n) iterations in expectation.
Within a loop:
- We can select a sample R in O(n) time [Vitter 84].
- Determining the violated constraints V is O(dn).
- The Simplex algorithm takes d^{O(1)} time per vertex, times at most C(2C_d, ⌊d/2⌋) vertices [?]. Using Stirling's approximation, this gives O(d)^{d/2+o(1)} for Simplex.
Total running time: O(d log n)·[O(dn) + O(d)^{d/2+o(1)}] = O(d^2 n log n) + (d log n)·O(d)^{d/2+o(1)}.
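The termination argument above — the lower bound d'·2^k on w(B(H)) outgrowing the upper bound n·e^{2kd'/(9d−1)} on w(H) — can be illustrated numerically; the instance size below (n = 10^6, d = 5, d' = d) is just an example.

```python
import math

# Find the first k at which the lower bound on w(B(H)) exceeds the upper
# bound on w(H); beyond this point the loop cannot still be running.
n, d = 10**6, 5
dp = d                                # d' = |B(H)|, at most d
k = 0
while dp * 2 ** k <= n * math.exp(2 * k * dp / (9 * d - 1)):
    k += 1
print(k)  # the crossing point; it grows like log(n/d'), as the slide claims
```

Since 2^k grows at rate ln 2 per step while the exponential bound grows only at rate 2d'/(9d − 1) < ln 2, the crossing always happens, at k = O(log(n/d')).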

125 Algorithm 3: Mixed
- Follow the Recursive Algorithm, but rather than having it call itself, call the Iterative Algorithm instead.
- Runtime of Recursive: T(n, d) ≤ 2(d + 1)·T(3d√n, d) + O(d^2 n), for n > 9d^2.
- In place of T(3d√n, d), substitute the runtime of the Iterative Algorithm on 3d√n constraints.
- Runtime of the Mixed Algorithm: O(d^2 n) + (d^2 log n)·O(d)^(d/2+o(1)) + O(d^4 √n log n).

126 Contributions of this paper to the field
- The leading term in the dependence on n is O(d^2 n), an improvement over O(d^{3d} n).
- The algorithm can also be applied to integer programming (Jan's talk).
- The algorithm was later applied as an overlying algorithm to incremental algorithms (Jan's talk) to give a subexponential bound for linear programming (rather than using Simplex once n ≤ 9d^2, use an incremental algorithm).

127 Polytopes, their diameter, and randomized simplex Presentation by: Dan Stratila Operations Research Center Session 4: October 6, 2003 Based primarily on: Gil Kalai. A subexponential randomized simplex algorithm (extended abstract). In STOC [Kal92a]. and on: Gil Kalai. Linear programming, the simplex algorithm and simple polytopes. Math. Programming (Ser. B), [Kal97].

128 Structure of the talk
1. Introduction to polytopes, linear programming, and the simplex method.
2. A few facts about polytopes.
3. Choosing the next pivot. Main result in this talk.
4. Subexponential randomized simplex algorithms.
5. Duality between two subexponential simplex algorithms.
6. The Hirsch conjecture, and applying randomized simplex to it.
7. Improving diameter results using an oracle for choosing pivots.

129 Polytopes and polyhedra
A polyhedron P ⊆ R^d is the intersection of finitely many halfspaces; in matrix notation, P := {x ∈ R^d : Ax ≤ b}, where A ∈ R^{n×d} and b ∈ R^n. A polytope is a bounded polyhedron.
The dimension of a polyhedron P is dim(P) := dim(aff(P)), where aff(P) is the affine hull of all points in P. A polyhedron P ⊆ R^d with dim(P) = k is often called a k-polyhedron. If d = k, P is called full-dimensional. (Most of the time we assume full-dimensional d-polyhedra and are not concerned much with the surrounding space.)
An inequality ax ≤ β, where a ∈ R^d and β ∈ R, is called valid if ax ≤ β for all x ∈ P.

130 Vertices, edges, ..., facets
A face F of P is the intersection of P with a valid inequality ax ≤ β, i.e., F := {x ∈ P : ax = β}. Faces of dimension d − 1 are called facets, faces of dimension 1 edges, and faces of dimension 0 vertices. Vertices are points, basic feasible solutions (algebraically), or extreme points (of a linear cost).
Since 0x ≤ 0 is valid, P is a d-dimensional face of itself. 0x ≤ 1 is valid too, so ∅ is a face of P, and we define its dimension to be −1.
Some vertices are connected by edges, so we can define a graph G = (V(G), E(G)), where V(G) = {v : v ∈ vert(P)} and E(G) = {(v, w) ∈ V(G)^2 : there is an edge E of P s.t. v ∈ E, w ∈ E}. For unbounded polyhedra, often a node ∞ is introduced in V(G), and we add graph arcs (v, ∞) whenever v ∈ E for an unbounded edge E of P.

131 Example of a 3-polytope
Figure 1: A 3-polytope (left) and its graph (right). Four vertices, three edges, and a facet F are shown in corresponding colors.

132 Linear programming and the simplex method
A linear programming problem max{cx : Ax ≤ b} is the problem of maximizing a linear function over a polyhedron.
- If the problem is bounded (the cost of feasible solutions is finite), the optimum can be achieved at some vertex v.
- If the problem is unbounded, we can find an edge E of P = {x ∈ R^d : Ax ≤ b} s.t. cx is unbounded on the edge.
- If the problem is bounded, a vertex v is optimal iff cv ≥ cw for all w adjacent to v (for all (v, w) ∈ E(G)).
Geometrically, the simplex method starts at a vertex (b.f.s.) and moves from one vertex to another along a cost-increasing edge (pivots) until it reaches an optimal vertex (optimal b.f.s.).

133 Vertices as intersections of facets
Any polytope can be represented by its facets, P = {x ∈ R^d : Ax ≤ b}, or by its vertices, P = conv({v : v ∈ vert(P)}). If the vertices are given, then LP is trivial: just select the best one. Most of the time, the facets are given. The number of vertices can be exponential in the number of facets, which makes generating all vertices from the facets impractical.
Represent a vertex v as an intersection of d facets: any vertex is situated at the intersection of at least d facets, and any non-empty intersection of d facets yields a vertex.
When situated at a vertex v given by ∩_{i=1}^{d} F_i, it is easy to find all adjacent vertices: remove each facet F_i in turn, and intersect with all other facets not in {F_1, ..., F_d}. Except when...
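The brute-force view on this slide — intersect every d-subset of facets and keep the feasible intersection points — can be made concrete. The sketch below enumerates the vertices of a 3-polytope from its facet description using Cramer's rule; it is practical only for tiny instances, and all names are illustrative.

```python
import itertools

def solve3(rows):
    """Solve a 3x3 linear system by Cramer's rule; None if singular."""
    (a, p), (c, q), (e, r) = rows
    def det(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
              - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
              + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    A = [a, c, e]
    D = det(A)
    if abs(D) < 1e-12:
        return None
    sol = []
    for j in range(3):                      # replace column j with the rhs
        M = [row[:] for row in A]
        for i, v in zip(range(3), (p, q, r)):
            M[i][j] = v
        sol.append(det(M) / D)
    return tuple(sol)

def vertices(A, b):
    """All vertices of {x : Ax <= b}: feasible 3-facet intersections."""
    verts = set()
    for idx in itertools.combinations(range(len(A)), 3):
        x = solve3([(A[i][:], b[i]) for i in idx])
        if x and all(sum(ai * xi for ai, xi in zip(A[i], x)) <= b[i] + 1e-9
                     for i in range(len(A))):
            verts.add(tuple(round(v, 6) for v in x))  # dedupe numerically
    return verts

# Unit cube: 6 facets, 8 vertices.
cube_A = [[1, 0, 0], [-1, 0, 0], [0, 1, 0], [0, -1, 0], [0, 0, 1], [0, 0, -1]]
cube_b = [1, 0, 1, 0, 1, 0]
vs = vertices(cube_A, cube_b)
print(len(vs))  # 8
```

Of the C(6, 3) = 20 facet triples, twelve contain a parallel pair and define no vertex; the remaining eight each yield one cube vertex.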

134 Degeneracy and simple polytopes
When a vertex is at the intersection of more than d facets, the procedure above may leave us at the same vertex. Worse, sometimes several such changes are needed before we can move away from a vertex in a cost-increasing direction. This is (geometric) degeneracy. In standard form, degenerate vertices yield degenerate basic feasible solutions; other degenerate b.f. solutions may appear because of redundant constraints.
If all vertices of P belong to at most d facets (hence exactly d), P is called simple. Simple polytopes correspond to non-degenerate LPs and have many nice properties [Zie95, Kal97]. We restrict ourselves to simple polytopes. This is OK for two reasons: 1) any LP can be suitably perturbed to become non-degenerate; 2) the perturbation can be made implicit in the algorithms.

135 A few facts about polytopes
Disclaimer: these results are not used in, or related to, the subexponential simplex pivot rules (the main result in this talk).
- The f-vector: f_k(P) := the number of k-faces of P.
- Degrees: let deg_c(v), w.r.t. some objective function c, be the number of neighboring vertices w with cw < cv.
- The h-vector: h_{k,c}(P) := the number of vertices of degree k w.r.t. objective c in P. Note: there is always one vertex of degree d, and one of degree 0.
- Property: h_{k,c}(P) = h_k(P), independent of c.

136 h_{k,c}(P) = h_k(P), proof (1/2)
Proof. Count p := |{(F, v) : F is a k-face of P, v is maximal on F}| in two ways.
Pick faces: because c is in general position, v is unique for each F, hence p = f_k(P).
On the other hand, pick a vertex v and assume deg_c(v) = r. Let T = {(v, w) : cv > cw}; by definition |T| = r. For simple polytopes, each vertex v has d adjacent edges, and any k of them define a k-face that includes v. So the number of k-faces that contain v as their local maximum is C(|T|, k) = C(r, k).

137 h_{k,c}(P) = h_k(P), proof (2/2)
Summing over all v ∈ vert(P), we obtain f_k(P) = Σ_{r=k}^{d} C(r, k)·h_{r,c}(P).
These equations are linearly independent in the h_{r,c}, so they completely determine the h_{r,c}(P) in terms of the f_k(P). But f_k(P) is independent of c, so the same is true for h_r(P).

138 The Euler Formula and Dehn-Sommerville Relations
We can express h_k(P) = Σ_{r=k}^{d} (−1)^{r−k} C(r, k)·f_r(P).
We know that h_0(P) = h_d(P) = 1, hence f_0(P) − f_1(P) + ... + (−1)^d f_d(P) = 1, or f_0(P) − f_1(P) + ... + (−1)^{d−1} f_{d−1}(P) = 1 − (−1)^d. In 3 dimensions, V − E + F = 2.
Back to h_{k,c}(P): note that if deg_c(v) = k then deg_{−c}(v) = d − k. Because of the independence of c, we obtain the Dehn-Sommerville relations: h_k(P) = h_{d−k}(P).
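As a quick arithmetic check of Euler's formula V − E + F = 2 in three dimensions (the standard counts for a few familiar polytopes are hard-coded, not computed):

```python
# (V, E, F) for some well-known 3-polytopes; each must satisfy V - E + F = 2.
polytopes = {
    "tetrahedron": (4, 6, 4),
    "cube": (8, 12, 6),
    "octahedron": (6, 12, 8),
    "dodecahedron": (20, 30, 12),
}
for name, (V, E, F) in polytopes.items():
    assert V - E + F == 2, name
print("Euler formula holds")
```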

139 Cyclic polytopes and the upper bound theorem
A cyclic d-polytope with n vertices is defined by n scalars t_1, ..., t_n as conv({(t_i, t_i^2, ..., t_i^d) : i = 1, ..., n}). Other curves can be used too. All cyclic d-polytopes with n vertices have the same structure, denoted C(d, n). The polar C*(d, n) := {x ∈ (R^d)* : xv ≤ 1 for all v ∈ C(d, n)} is a simple polytope.
Property: C(d, n) has the maximum number of k-faces of any polytope with n vertices. The polar C*(d, n) has the maximum number of k-faces of any polytope with n facets (via the face lattice). The exact expression for f_{k−1} is elaborate, but a simple one is f_{k−1} = Σ_{i=0}^{min{d,k}} C(d − i, k − i)·h_i(P). For more interesting details, see [Zie95].

140 Abstract objective functions and the combinatorial structure
An abstract objective function (AOF) assigns a value to every vertex of a simple polytope P, s.t. every non-empty face F of P has a unique local maximum vertex. AOFs are generalizations of linear objective functions; most results here apply to them as well.
The combinatorial structure of a polytope is all the information on facet inclusion, e.g. all vertices, all edges and the vertices they are composed of, all 3-faces and their composition, etc.
Lemma: Given the graph G(P) of a simple polytope P, a connected k-regular subgraph H = (V(H), E(H)) defines a k-face if and only if there is an AOF s.t. all vertices in V(H) come before all vertices in V(G(P)) \ V(H).
Property: the combinatorial structure of any simple polytope is determined by its graph.

141 Main result in this talk: context
In the simplex algorithm we repeatedly choose which vertex to move to next. Criteria for choosing the next vertex are called pivot rules. In the early days, it was believed that simple rules guarantee a polynomial number of vertices in the path. Klee and Minty [KM72] showed exponential behaviour. After that, it was not even known whether LP could be solved in polynomial time at all, until [Kha79]. But still: finding a pivot rule (deterministic or randomized) that would yield a polynomial number of vertex changes has been open since simplex was introduced.
For a bound f(n): exponential means f(n) ∈ Ω(k^n) for some k > 1; polynomial means f(n) ∈ O(n^k) for some fixed k ≥ 1; subexponential means f(n) is not O(n^k) for any fixed k ≥ 1 and not Ω(k^n) for any fixed k > 1.


POLYHEDRAL GEOMETRY. Convex functions and sets. Mathematical Programming Niels Lauritzen Recall that a subset C R n is convex if POLYHEDRAL GEOMETRY Mathematical Programming Niels Lauritzen 7.9.2007 Convex functions and sets Recall that a subset C R n is convex if {λx + (1 λ)y 0 λ 1} C for every x, y C and 0 λ 1. A function f :

More information

Lecture 16 October 23, 2014

Lecture 16 October 23, 2014 CS 224: Advanced Algorithms Fall 2014 Prof. Jelani Nelson Lecture 16 October 23, 2014 Scribe: Colin Lu 1 Overview In the last lecture we explored the simplex algorithm for solving linear programs. While

More information

Linear programming and duality theory

Linear programming and duality theory Linear programming and duality theory Complements of Operations Research Giovanni Righini Linear Programming (LP) A linear program is defined by linear constraints, a linear objective function. Its variables

More information

Convex Optimization. 2. Convex Sets. Prof. Ying Cui. Department of Electrical Engineering Shanghai Jiao Tong University. SJTU Ying Cui 1 / 33

Convex Optimization. 2. Convex Sets. Prof. Ying Cui. Department of Electrical Engineering Shanghai Jiao Tong University. SJTU Ying Cui 1 / 33 Convex Optimization 2. Convex Sets Prof. Ying Cui Department of Electrical Engineering Shanghai Jiao Tong University 2018 SJTU Ying Cui 1 / 33 Outline Affine and convex sets Some important examples Operations

More information

60 2 Convex sets. {x a T x b} {x ã T x b}

60 2 Convex sets. {x a T x b} {x ã T x b} 60 2 Convex sets Exercises Definition of convexity 21 Let C R n be a convex set, with x 1,, x k C, and let θ 1,, θ k R satisfy θ i 0, θ 1 + + θ k = 1 Show that θ 1x 1 + + θ k x k C (The definition of convexity

More information

be a polytope. has such a representation iff it contains the origin in its interior. For a generic, sort the inequalities so that

be a polytope. has such a representation iff it contains the origin in its interior. For a generic, sort the inequalities so that ( Shelling (Bruggesser-Mani 1971) and Ranking Let be a polytope. has such a representation iff it contains the origin in its interior. For a generic, sort the inequalities so that. a ranking of vertices

More information

CS675: Convex and Combinatorial Optimization Spring 2018 Consequences of the Ellipsoid Algorithm. Instructor: Shaddin Dughmi

CS675: Convex and Combinatorial Optimization Spring 2018 Consequences of the Ellipsoid Algorithm. Instructor: Shaddin Dughmi CS675: Convex and Combinatorial Optimization Spring 2018 Consequences of the Ellipsoid Algorithm Instructor: Shaddin Dughmi Outline 1 Recapping the Ellipsoid Method 2 Complexity of Convex Optimization

More information

Trapezoidal decomposition:

Trapezoidal decomposition: Trapezoidal decomposition: Motivation: manipulate/analayze a collection of segments e.g. detect segment intersections e.g., point location data structure Definition. Draw verticals at all points binary

More information

Open problems in convex geometry

Open problems in convex geometry Open problems in convex geometry 10 March 2017, Monash University Seminar talk Vera Roshchina, RMIT University Based on joint work with Tian Sang (RMIT University), Levent Tunçel (University of Waterloo)

More information

CS 473: Algorithms. Ruta Mehta. Spring University of Illinois, Urbana-Champaign. Ruta (UIUC) CS473 1 Spring / 36

CS 473: Algorithms. Ruta Mehta. Spring University of Illinois, Urbana-Champaign. Ruta (UIUC) CS473 1 Spring / 36 CS 473: Algorithms Ruta Mehta University of Illinois, Urbana-Champaign Spring 2018 Ruta (UIUC) CS473 1 Spring 2018 1 / 36 CS 473: Algorithms, Spring 2018 LP Duality Lecture 20 April 3, 2018 Some of the

More information

2. Convex sets. x 1. x 2. affine set: contains the line through any two distinct points in the set

2. Convex sets. x 1. x 2. affine set: contains the line through any two distinct points in the set 2. Convex sets Convex Optimization Boyd & Vandenberghe affine and convex sets some important examples operations that preserve convexity generalized inequalities separating and supporting hyperplanes dual

More information

AMS : Combinatorial Optimization Homework Problems - Week V

AMS : Combinatorial Optimization Homework Problems - Week V AMS 553.766: Combinatorial Optimization Homework Problems - Week V For the following problems, A R m n will be m n matrices, and b R m. An affine subspace is the set of solutions to a a system of linear

More information

Some Advanced Topics in Linear Programming

Some Advanced Topics in Linear Programming Some Advanced Topics in Linear Programming Matthew J. Saltzman July 2, 995 Connections with Algebra and Geometry In this section, we will explore how some of the ideas in linear programming, duality theory,

More information

ORIE 6300 Mathematical Programming I September 2, Lecture 3

ORIE 6300 Mathematical Programming I September 2, Lecture 3 ORIE 6300 Mathematical Programming I September 2, 2014 Lecturer: David P. Williamson Lecture 3 Scribe: Divya Singhvi Last time we discussed how to take dual of an LP in two different ways. Today we will

More information

Convex Optimization. Convex Sets. ENSAE: Optimisation 1/24

Convex Optimization. Convex Sets. ENSAE: Optimisation 1/24 Convex Optimization Convex Sets ENSAE: Optimisation 1/24 Today affine and convex sets some important examples operations that preserve convexity generalized inequalities separating and supporting hyperplanes

More information

Combinatorial Optimization

Combinatorial Optimization Combinatorial Optimization Frank de Zeeuw EPFL 2012 Today Introduction Graph problems - What combinatorial things will we be optimizing? Algorithms - What kind of solution are we looking for? Linear Programming

More information

Alternating Projections

Alternating Projections Alternating Projections Stephen Boyd and Jon Dattorro EE392o, Stanford University Autumn, 2003 1 Alternating projection algorithm Alternating projections is a very simple algorithm for computing a point

More information

Convex Geometry arising in Optimization

Convex Geometry arising in Optimization Convex Geometry arising in Optimization Jesús A. De Loera University of California, Davis Berlin Mathematical School Summer 2015 WHAT IS THIS COURSE ABOUT? Combinatorial Convexity and Optimization PLAN

More information

Convexity: an introduction

Convexity: an introduction Convexity: an introduction Geir Dahl CMA, Dept. of Mathematics and Dept. of Informatics University of Oslo 1 / 74 1. Introduction 1. Introduction what is convexity where does it arise main concepts and

More information

Algorithmic Game Theory and Applications. Lecture 6: The Simplex Algorithm

Algorithmic Game Theory and Applications. Lecture 6: The Simplex Algorithm Algorithmic Game Theory and Applications Lecture 6: The Simplex Algorithm Kousha Etessami Recall our example 1 x + y

More information

ACTUALLY DOING IT : an Introduction to Polyhedral Computation

ACTUALLY DOING IT : an Introduction to Polyhedral Computation ACTUALLY DOING IT : an Introduction to Polyhedral Computation Jesús A. De Loera Department of Mathematics Univ. of California, Davis http://www.math.ucdavis.edu/ deloera/ 1 What is a Convex Polytope? 2

More information

Integral Geometry and the Polynomial Hirsch Conjecture

Integral Geometry and the Polynomial Hirsch Conjecture Integral Geometry and the Polynomial Hirsch Conjecture Jonathan Kelner, MIT Partially based on joint work with Daniel Spielman Introduction n A lot of recent work on Polynomial Hirsch Conjecture has focused

More information

A Course in Convexity

A Course in Convexity A Course in Convexity Alexander Barvinok Graduate Studies in Mathematics Volume 54 American Mathematical Society Providence, Rhode Island Preface vii Chapter I. Convex Sets at Large 1 1. Convex Sets. Main

More information

6. Lecture notes on matroid intersection

6. Lecture notes on matroid intersection Massachusetts Institute of Technology 18.453: Combinatorial Optimization Michel X. Goemans May 2, 2017 6. Lecture notes on matroid intersection One nice feature about matroids is that a simple greedy algorithm

More information

EE/ACM Applications of Convex Optimization in Signal Processing and Communications Lecture 6

EE/ACM Applications of Convex Optimization in Signal Processing and Communications Lecture 6 EE/ACM 150 - Applications of Convex Optimization in Signal Processing and Communications Lecture 6 Andre Tkacenko Signal Processing Research Group Jet Propulsion Laboratory April 19, 2012 Andre Tkacenko

More information

Introduction to Mathematical Programming IE496. Final Review. Dr. Ted Ralphs

Introduction to Mathematical Programming IE496. Final Review. Dr. Ted Ralphs Introduction to Mathematical Programming IE496 Final Review Dr. Ted Ralphs IE496 Final Review 1 Course Wrap-up: Chapter 2 In the introduction, we discussed the general framework of mathematical modeling

More information

Convex Optimization CMU-10725

Convex Optimization CMU-10725 Convex Optimization CMU-10725 Ellipsoid Methods Barnabás Póczos & Ryan Tibshirani Outline Linear programs Simplex algorithm Running time: Polynomial or Exponential? Cutting planes & Ellipsoid methods for

More information

Conic Duality. yyye

Conic Duality.  yyye Conic Linear Optimization and Appl. MS&E314 Lecture Note #02 1 Conic Duality Yinyu Ye Department of Management Science and Engineering Stanford University Stanford, CA 94305, U.S.A. http://www.stanford.edu/

More information

Combinatorial Geometry & Topology arising in Game Theory and Optimization

Combinatorial Geometry & Topology arising in Game Theory and Optimization Combinatorial Geometry & Topology arising in Game Theory and Optimization Jesús A. De Loera University of California, Davis LAST EPISODE... We discuss the content of the course... Convex Sets A set is

More information

2. Convex sets. affine and convex sets. some important examples. operations that preserve convexity. generalized inequalities

2. Convex sets. affine and convex sets. some important examples. operations that preserve convexity. generalized inequalities 2. Convex sets Convex Optimization Boyd & Vandenberghe affine and convex sets some important examples operations that preserve convexity generalized inequalities separating and supporting hyperplanes dual

More information

Discrete Optimization 2010 Lecture 5 Min-Cost Flows & Total Unimodularity

Discrete Optimization 2010 Lecture 5 Min-Cost Flows & Total Unimodularity Discrete Optimization 2010 Lecture 5 Min-Cost Flows & Total Unimodularity Marc Uetz University of Twente m.uetz@utwente.nl Lecture 5: sheet 1 / 26 Marc Uetz Discrete Optimization Outline 1 Min-Cost Flows

More information

Randomized rounding of semidefinite programs and primal-dual method for integer linear programming. Reza Moosavi Dr. Saeedeh Parsaeefard Dec.

Randomized rounding of semidefinite programs and primal-dual method for integer linear programming. Reza Moosavi Dr. Saeedeh Parsaeefard Dec. Randomized rounding of semidefinite programs and primal-dual method for integer linear programming Dr. Saeedeh Parsaeefard 1 2 3 4 Semidefinite Programming () 1 Integer Programming integer programming

More information

15-451/651: Design & Analysis of Algorithms October 11, 2018 Lecture #13: Linear Programming I last changed: October 9, 2018

15-451/651: Design & Analysis of Algorithms October 11, 2018 Lecture #13: Linear Programming I last changed: October 9, 2018 15-451/651: Design & Analysis of Algorithms October 11, 2018 Lecture #13: Linear Programming I last changed: October 9, 2018 In this lecture, we describe a very general problem called linear programming

More information

Coloring 3-Colorable Graphs

Coloring 3-Colorable Graphs Coloring -Colorable Graphs Charles Jin April, 015 1 Introduction Graph coloring in general is an etremely easy-to-understand yet powerful tool. It has wide-ranging applications from register allocation

More information

Linear Optimization. Andongwisye John. November 17, Linkoping University. Andongwisye John (Linkoping University) November 17, / 25

Linear Optimization. Andongwisye John. November 17, Linkoping University. Andongwisye John (Linkoping University) November 17, / 25 Linear Optimization Andongwisye John Linkoping University November 17, 2016 Andongwisye John (Linkoping University) November 17, 2016 1 / 25 Overview 1 Egdes, One-Dimensional Faces, Adjacency of Extreme

More information

/ Approximation Algorithms Lecturer: Michael Dinitz Topic: Linear Programming Date: 2/24/15 Scribe: Runze Tang

/ Approximation Algorithms Lecturer: Michael Dinitz Topic: Linear Programming Date: 2/24/15 Scribe: Runze Tang 600.469 / 600.669 Approximation Algorithms Lecturer: Michael Dinitz Topic: Linear Programming Date: 2/24/15 Scribe: Runze Tang 9.1 Linear Programming Suppose we are trying to approximate a minimization

More information

Simplex Algorithm in 1 Slide

Simplex Algorithm in 1 Slide Administrivia 1 Canonical form: Simplex Algorithm in 1 Slide If we do pivot in A r,s >0, where c s

More information

J Linear Programming Algorithms

J Linear Programming Algorithms Simplicibus itaque verbis gaudet Mathematica Veritas, cum etiam per se simplex sit Veritatis oratio. [And thus Mathematical Truth prefers simple words, because the language of Truth is itself simple.]

More information

Optimality certificates for convex minimization and Helly numbers

Optimality certificates for convex minimization and Helly numbers Optimality certificates for convex minimization and Helly numbers Amitabh Basu Michele Conforti Gérard Cornuéjols Robert Weismantel Stefan Weltge October 20, 2016 Abstract We consider the problem of minimizing

More information

arxiv: v1 [math.co] 12 Dec 2017

arxiv: v1 [math.co] 12 Dec 2017 arxiv:1712.04381v1 [math.co] 12 Dec 2017 Semi-reflexive polytopes Tiago Royer Abstract The Ehrhart function L P(t) of a polytope P is usually defined only for integer dilation arguments t. By allowing

More information

Copyright 2007 Pearson Addison-Wesley. All rights reserved. A. Levitin Introduction to the Design & Analysis of Algorithms, 2 nd ed., Ch.

Copyright 2007 Pearson Addison-Wesley. All rights reserved. A. Levitin Introduction to the Design & Analysis of Algorithms, 2 nd ed., Ch. Iterative Improvement Algorithm design technique for solving optimization problems Start with a feasible solution Repeat the following step until no improvement can be found: change the current feasible

More information

CS 435, 2018 Lecture 2, Date: 1 March 2018 Instructor: Nisheeth Vishnoi. Convex Programming and Efficiency

CS 435, 2018 Lecture 2, Date: 1 March 2018 Instructor: Nisheeth Vishnoi. Convex Programming and Efficiency CS 435, 2018 Lecture 2, Date: 1 March 2018 Instructor: Nisheeth Vishnoi Convex Programming and Efficiency In this lecture, we formalize convex programming problem, discuss what it means to solve it efficiently

More information

Contents. I Basics 1. Copyright by SIAM. Unauthorized reproduction of this article is prohibited.

Contents. I Basics 1. Copyright by SIAM. Unauthorized reproduction of this article is prohibited. page v Preface xiii I Basics 1 1 Optimization Models 3 1.1 Introduction... 3 1.2 Optimization: An Informal Introduction... 4 1.3 Linear Equations... 7 1.4 Linear Optimization... 10 Exercises... 12 1.5

More information

College of Computer & Information Science Fall 2007 Northeastern University 14 September 2007

College of Computer & Information Science Fall 2007 Northeastern University 14 September 2007 College of Computer & Information Science Fall 2007 Northeastern University 14 September 2007 CS G399: Algorithmic Power Tools I Scribe: Eric Robinson Lecture Outline: Linear Programming: Vertex Definitions

More information

Lecture 5: Duality Theory

Lecture 5: Duality Theory Lecture 5: Duality Theory Rajat Mittal IIT Kanpur The objective of this lecture note will be to learn duality theory of linear programming. We are planning to answer following questions. What are hyperplane

More information

1 Linear programming relaxation

1 Linear programming relaxation Cornell University, Fall 2010 CS 6820: Algorithms Lecture notes: Primal-dual min-cost bipartite matching August 27 30 1 Linear programming relaxation Recall that in the bipartite minimum-cost perfect matching

More information

Lec13p1, ORF363/COS323

Lec13p1, ORF363/COS323 Lec13 Page 1 Lec13p1, ORF363/COS323 This lecture: Semidefinite programming (SDP) Definition and basic properties Review of positive semidefinite matrices SDP duality SDP relaxations for nonconvex optimization

More information

Introduction to Mathematical Programming IE406. Lecture 20. Dr. Ted Ralphs

Introduction to Mathematical Programming IE406. Lecture 20. Dr. Ted Ralphs Introduction to Mathematical Programming IE406 Lecture 20 Dr. Ted Ralphs IE406 Lecture 20 1 Reading for This Lecture Bertsimas Sections 10.1, 11.4 IE406 Lecture 20 2 Integer Linear Programming An integer

More information

MATHEMATICS II: COLLECTION OF EXERCISES AND PROBLEMS

MATHEMATICS II: COLLECTION OF EXERCISES AND PROBLEMS MATHEMATICS II: COLLECTION OF EXERCISES AND PROBLEMS GRADO EN A.D.E. GRADO EN ECONOMÍA GRADO EN F.Y.C. ACADEMIC YEAR 2011-12 INDEX UNIT 1.- AN INTRODUCCTION TO OPTIMIZATION 2 UNIT 2.- NONLINEAR PROGRAMMING

More information

Convex sets and convex functions

Convex sets and convex functions Convex sets and convex functions Convex optimization problems Convex sets and their examples Separating and supporting hyperplanes Projections on convex sets Convex functions, conjugate functions ECE 602,

More information

Polyhedral Computation Today s Topic: The Double Description Algorithm. Komei Fukuda Swiss Federal Institute of Technology Zurich October 29, 2010

Polyhedral Computation Today s Topic: The Double Description Algorithm. Komei Fukuda Swiss Federal Institute of Technology Zurich October 29, 2010 Polyhedral Computation Today s Topic: The Double Description Algorithm Komei Fukuda Swiss Federal Institute of Technology Zurich October 29, 2010 1 Convexity Review: Farkas-Type Alternative Theorems Gale

More information