Local Constraints in Combinatorial Optimization


Madhur Tulsiani, Institute for Advanced Study

Local Constraints in Approximation Algorithms

Approximation algorithms based on Linear Programming (LP) or Semidefinite Programming (SDP) impose constraints on a few variables at a time.

When can local constraints help in approximating a global property (e.g. Vertex Cover, Chromatic Number)?

How does one reason about increasingly larger local constraints?

Does the approximation get better as the constraints get larger?

LP/SDP Hierarchies

Various hierarchies give increasingly powerful programs at different levels (rounds): Lovász-Schrijver (LS, LS+), Sherali-Adams (SA), Lasserre (Las).

[Figure: the levels LS(1), LS(2), ..., LS+(1), LS+(2), ..., SA(1), SA(2), ..., Las(1), Las(2), ... and the containments between them.]

One can optimize over the r-th level in time n^O(r). The n-th level is tight.

LP/SDP Hierarchies

A powerful computational model, capturing most known LP/SDP algorithms within a constant number of levels.

Lower bounds rule out a large and natural class of algorithms.

Performance is measured by the integrality gap at various levels:

    Integrality Gap = (Optimum of Relaxation) / (Integer Optimum)    (for maximization)
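As a concrete toy instance of this definition (not from the talk), the following Python sketch exhibits the gap of the plain Independent Set LP on the 5-cycle: the all-1/2 point satisfies every local constraint and has value 5/2, while the integer optimum is only 2.

```python
from itertools import combinations

# 5-cycle: LP relaxation of Maximum Independent Set.
n = 5
edges = [(i, (i + 1) % n) for i in range(n)]

def is_feasible(x, tol=1e-9):
    """Check the local constraints x_u + x_v <= 1 and 0 <= x_u <= 1."""
    return (all(-tol <= v <= 1 + tol for v in x)
            and all(x[u] + x[v] <= 1 + tol for u, v in edges))

# The all-1/2 point satisfies every local constraint...
frac = [0.5] * n
assert is_feasible(frac)
lp_value = sum(frac)          # 2.5

# ...but the integer optimum (largest independent set in C5) is only 2.
def independent(S):
    return all(not (u in S and v in S) for u, v in edges)

int_opt = max(len(S) for k in range(n + 1)
              for S in combinations(range(n), k) if independent(S))

gap = lp_value / int_opt      # a gap of at least 5/4
```

Since `frac` is merely one feasible point, this only lower-bounds the gap; for odd cycles it is in fact the worst case.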

Why bother?

NP-hardness, UG-hardness: conditional, but rule out all polytime algorithms.
LP/SDP hierarchy lower bounds: unconditional, but rule out only a restricted class of algorithms.

What Hierarchies Want

Example: Maximum Independent Set for a graph G = (V, E)

    maximize    Σ_u x_u
    subject to  x_u + x_v ≤ 1    ∀ (u, v) ∈ E
                x_u ∈ [0, 1]

Hope: (x_1, ..., x_n) is a convex combination of 0/1 solutions — equivalently, the marginal of a distribution over 0/1 solutions.

Hierarchies add variables for conditional/joint probabilities.
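A small brute-force illustration of the "marginal of a distribution" hope (the graphs are illustrative, not from the talk): on the 4-cycle the all-1/2 point really is a marginal, while on the triangle it cannot be.

```python
from itertools import combinations

def independent_sets(n, edges):
    """All independent sets of the graph, as Python sets of vertices."""
    sets = []
    for k in range(n + 1):
        for S in combinations(range(n), k):
            if all(not (u in S and v in S) for u, v in edges):
                sets.append(set(S))
    return sets

# On the 4-cycle, (1/2, 1/2, 1/2, 1/2) IS a marginal of a 0/1 distribution:
# put probability 1/2 on each of the two sides of the bipartition.
dist = {frozenset({0, 2}): 0.5, frozenset({1, 3}): 0.5}
margs = [sum(p for S, p in dist.items() if v in S) for v in range(4)]
assert margs == [0.5] * 4

# On the triangle, (1/2, 1/2, 1/2) is LP-feasible but cannot be such a
# marginal: any distribution over independent sets has expected size at
# most alpha(K3) = 1 < 3/2.
tri_sets = independent_sets(3, [(0, 1), (1, 2), (0, 2)])
alpha = max(len(S) for S in tri_sets)
assert sum([0.5] * 3) > alpha
```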

Lovász-Schrijver in Action

The r-th level optimizes over distributions conditioned on r variables.

[Figure: a fractional solution with values 2/3 and 1/2 on a small graph; conditioning on a vertex taking value 1 forces its neighbors to 0, and the values marked "?" must be certified by further conditioning.]
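The conditioning step can be sketched on a toy distribution (the graph — a single edge — and the distribution are made up for illustration):

```python
# Distribution over independent sets of a single edge {a, b}:
# pick a w.p. 1/2, pick b w.p. 1/2.  Marginals are (1/2, 1/2).
dist = {frozenset({'a'}): 0.5, frozenset({'b'}): 0.5}

def marginal(dist, v):
    """P[v is in the chosen set]."""
    return sum(p for S, p in dist.items() if v in S)

assert marginal(dist, 'a') == 0.5

# Conditioning on "a is in the set" forces its neighbor b out,
# exactly like the 1/2 -> {0, 1} updates in the figure.
def condition(dist, v):
    """The distribution conditioned on v being in the set."""
    mass = sum(p for S, p in dist.items() if v in S)
    return {S: p / mass for S, p in dist.items() if v in S}

cond = condition(dist, 'a')
assert marginal(cond, 'a') == 1.0 and marginal(cond, 'b') == 0.0
```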

The Lovász-Schrijver Hierarchy

Start with a 0/1 integer program and a relaxation P. Define a tighter relaxation LS(P).

Hope: fractional (x_1, ..., x_n) = E[(z_1, ..., z_n)] for integral (z_1, ..., z_n).

Restriction: x = (x_1, ..., x_n) ∈ LS(P) if ∃ Y satisfying (think Y_ij = E[z_i z_j] = P[z_i = z_j = 1]):

    Y = Y^T
    Y_ii = x_i                   ∀ i
    Y_i / x_i ∈ P                ∀ i      (Y_i = the i-th column of Y)
    (x − Y_i) / (1 − x_i) ∈ P    ∀ i

The above is an LP (an SDP, for LS+) in n² + n variables.
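A minimal check, on a made-up distribution over independent sets of a 3-vertex path, that the matrix Y = E[z zᵀ] of any honest distribution satisfies the LS conditions above (the columns of Y are exactly the conditional distributions):

```python
# P: relaxation of Maximum Independent Set on the path 0 - 1 - 2.
n, edges = 3, [(0, 1), (1, 2)]

def in_P(x, tol=1e-9):
    """Membership in the edge + box relaxation P."""
    return (all(-tol <= v <= 1 + tol for v in x)
            and all(x[u] + x[v] <= 1 + tol for u, v in edges))

# A distribution over integral solutions (independent sets as 0/1 vectors).
dist = {(1, 0, 1): 0.5, (0, 1, 0): 0.25, (0, 0, 0): 0.25}
x = [sum(p * z[i] for z, p in dist.items()) for i in range(n)]
Y = [[sum(p * z[i] * z[j] for z, p in dist.items()) for j in range(n)]
     for i in range(n)]

# The LS conditions hold for any such "protection matrix" Y = E[z z^T]:
assert all(Y[i][j] == Y[j][i] for i in range(n) for j in range(n))   # Y = Y^T
assert all(abs(Y[i][i] - x[i]) < 1e-9 for i in range(n))             # Y_ii = x_i
for i in range(n):
    if x[i] > 0:                 # column Y_i / x_i is E[z | z_i = 1]
        assert in_P([Y[j][i] / x[i] for j in range(n)])
    if x[i] < 1:                 # (x - Y_i) / (1 - x_i) is E[z | z_i = 0]
        assert in_P([(x[j] - Y[j][i]) / (1 - x[i]) for j in range(n)])
```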

Action Replay

[The conditioning example from before, replayed: the same fractional solution with values 2/3 and 1/2, conditioned step by step under the LS matrix constraints.]

The Sherali-Adams Hierarchy

Start with a 0/1 integer linear program.

Add "big" variables X_S for |S| ≤ r (think X_S = E[∏_{i∈S} z_i] = P[all vars in S are 1]).

Constraints: multiply each original constraint by products like z_5 z_7 (1 − z_9) and take expectations. For Σ_i a_i z_i ≥ b:

    E[(Σ_i a_i z_i) · z_5 z_7 (1 − z_9)]  ≥  E[b · z_5 z_7 (1 − z_9)]

which is linear in the X variables:

    Σ_i a_i (X_{i,5,7} − X_{i,5,7,9})  ≥  b (X_{5,7} − X_{5,7,9})

This is an LP in n^O(r) variables.

Sherali-Adams and Locally Consistent Distributions

Using 0 ≤ z_1, z_2 ≤ 1:

    X_{1,2} ≥ 0
    X_{1} − X_{1,2} ≥ 0
    X_{2} − X_{1,2} ≥ 0
    1 − X_{1} − X_{2} + X_{1,2} ≥ 0

So X_{1}, X_{2}, X_{1,2} define a distribution D({1, 2}) over {0, 1}².

D({1, 2, 3}) and D({1, 2, 4}) must agree with D({1, 2}).

Up to a shift of k levels the two views coincide: if each constraint has at most k variables, level-(r + k) Sherali-Adams solutions yield consistent local distributions on sets of size r, and vice versa.
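The inclusion-exclusion behind this can be spelled out in a few lines of Python (the numeric values of the X variables are hypothetical):

```python
# X_S = P[all variables in S are 1] for a (hypothetical) distribution
# on {0,1}^2.  Inclusion-exclusion recovers the point probabilities.
X1, X2, X12 = 0.5, 0.5, 0.25

p = {
    (1, 1): X12,
    (1, 0): X1 - X12,
    (0, 1): X2 - X12,
    (0, 0): 1 - X1 - X2 + X12,
}

# The four Sherali-Adams inequalities say exactly that these are
# probabilities: nonnegative, and summing to 1 automatically.
assert all(v >= 0 for v in p.values())
assert abs(sum(p.values()) - 1) < 1e-9

# A choice violating X_{1} - X_{1,2} >= 0 would give a "negative
# probability", i.e. no local distribution D({1,2}) exists for it.
bad_X12 = 0.6
assert X1 - bad_X12 < 0
```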

The Lasserre Hierarchy

Start with a 0/1 integer quadratic program.

Think of "big" variables Z_S = ∏_{i∈S} z_i.

Associated psd matrix Y (the moment matrix):

    Y_{S1,S2} = E[Z_{S1} Z_{S2}] = E[ ∏_{i ∈ S1∪S2} z_i ] = P[all vars in S1∪S2 are 1]

SDP: Y ⪰ 0, plus the original constraints, plus consistency constraints.

The Lasserre Hierarchy (Constraints)

Y is psd (i.e. there are vectors U_S satisfying Y_{S1,S2} = ⟨U_{S1}, U_{S2}⟩).

Y_{S1,S2} depends only on S1 ∪ S2 (Y_{S1,S2} = P[all vars in S1∪S2 are 1]).

The original quadratic constraints are written as inner products. SDP for Independent Set:

    maximize    Σ_{i∈V} ‖U_{i}‖²
    subject to  ⟨U_{i}, U_{j}⟩ = 0                      ∀ (i, j) ∈ E
                ⟨U_{S1}, U_{S2}⟩ = ⟨U_{S3}, U_{S4}⟩    ∀ S1 ∪ S2 = S3 ∪ S4
                ⟨U_{S1}, U_{S2}⟩ ∈ [0, 1]               ∀ S1, S2
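A sketch of why the moment matrix is psd: for any honest distribution it is a Gram matrix of explicit vectors. The distribution below (over independent sets of a 3-vertex path) is a made-up example:

```python
from itertools import combinations

# A distribution over 0/1 assignments to 3 variables, supported on
# independent sets of the path 0 - 1 - 2.
dist = {(1, 0, 1): 0.5, (0, 1, 0): 0.5}
subsets = [frozenset(S) for k in range(3) for S in combinations(range(3), k)]

def prob_all_one(S):
    """P[all variables in S are 1] under dist."""
    return sum(p for z, p in dist.items() if all(z[i] == 1 for i in S))

# Vectors U_S: one coordinate per support point, U_S[z] = sqrt(p(z)) * Z_S(z).
support = list(dist)
U = {S: [dist[z] ** 0.5 * (1 if all(z[i] for i in S) else 0) for z in support]
     for S in subsets}

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# <U_{S1}, U_{S2}> = P[all vars in S1 union S2 are 1]: the moment matrix
# is a Gram matrix (hence psd) and depends only on the union.
for S1 in subsets:
    for S2 in subsets:
        assert abs(dot(U[S1], U[S2]) - prob_all_one(S1 | S2)) < 1e-9
```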

And if you just woke up...

[Recap figure: the levels SA(r), LS(r), LS+(r), Las(r); the Sherali-Adams variables X_S, the Lasserre vectors U_S, and the Lovász-Schrijver conditions on Y, Y_i / x_i and (x − Y_i)/(1 − x_i).]

Local Distributions

Ω(log n) Level LS Gap for Vertex Cover [ABLT06]

Random sparse graphs: girth Ω(log n), minimum vertex cover ≥ (1 − ε)n.

The point (1/2 + ε, ..., 1/2 + ε) survives Ω(log n) levels.

[Figure: conditioning on a vertex v = 1 in a graph where every vertex has value 1/2 + ε.]

Creating Conditional Distributions

On a tree, the all-1/2 solution is a genuine distribution over 0/1 solutions: 1/2 = (1/2)·[v = 0] + (1/2)·[v = 1], extended consistently over the tree.

On a graph that is only locally a tree, the value 1/2 + ε decomposes as a mixture with weights (1/2 − ε) and (1/2 + ε), in which a vertex takes value 1 w.p. 4ε/(1 + 2ε) and 0 otherwise; the construction can be carried out to distance O((1/ε) log(1/ε)).

[Figure: the tree and locally-tree-like cases, with vertex values 1/2 + ε, 1 − 8ε, 8ε, 4ε around the conditioned vertex v.]

Cheat While You Can

[Figure: conditioning within a small neighborhood is "easy"; patching up the distributions slightly further out is "doable".]

Valid distributions exist for O(log n) levels.

Can be extended to Ω(n) levels (but this needs other ideas). [STT07]

Similar ideas are also useful for constructing metrics which are locally ℓ1 (but not globally). [CMM07]

Local Satisfiability for Expanding CSPs

CSP Expansion

MAX k-CSP: m constraints on k-tuples of (n) boolean variables; satisfy the maximum number. E.g. MAX 3-XOR (linear equations mod 2):

    z_1 + z_2 + z_3 = 0
    z_3 + z_4 + z_5 = 1

Expansion: every set S of constraints (with |S| < αm) involves at least β|S| variables. (Used extensively in proof complexity, e.g. [BW01], [BGHMP03].)

In fact, γ|S| variables appear in only one constraint of S.

[Figure: the bipartite constraint graph between constraints C_1, ..., C_m and variables z_1, ..., z_n.]
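A brute-force expansion check on a toy 3-XOR instance (the instance is made up for illustration; gap instances are random and much larger):

```python
from itertools import combinations

# A tiny 3-XOR instance, represented by the supports of its constraints.
constraints = [{0, 1, 2}, {2, 3, 4}, {3, 4, 5}]

def expansion_ok(constraints, beta):
    """Every nonempty subset S of constraints touches >= beta * |S| variables."""
    m = len(constraints)
    for k in range(1, m + 1):
        for S in combinations(constraints, k):
            touched = set().union(*S)
            if len(touched) < beta * k:
                return False
    return True

def min_boundary(constraints):
    """Minimum over subsets S of (# variables in exactly one constraint) / |S|."""
    best = None
    m = len(constraints)
    for k in range(1, m + 1):
        for S in combinations(constraints, k):
            cnt = {}
            for c in S:
                for v in c:
                    cnt[v] = cnt.get(v, 0) + 1
            unique = sum(1 for v in cnt if cnt[v] == 1) / k
            best = unique if best is None else min(best, unique)
    return best

assert expansion_ok(constraints, beta=2)   # beta = 2 holds here
```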

Local Satisfiability

[Figure: constraints C_1(z_1, z_2, z_3), C_2(z_3, z_4, z_5), C_3(z_4, z_5, z_6).]

Take γ = 0.9. One can show that any three 3-XOR constraints are simultaneously satisfiable:

    E_{z_1...z_6}[C_1(z_1, z_2, z_3) · C_2(z_3, z_4, z_5) · C_3(z_4, z_5, z_6)]
      = E_{z_2...z_6}[C_2(z_3, z_4, z_5) · C_3(z_4, z_5, z_6) · E_{z_1}[C_1(z_1, z_2, z_3)]]
      = E_{z_4,z_5,z_6}[C_3(z_4, z_5, z_6) · E_{z_3}[C_2(z_3, z_4, z_5)] · (1/2)]
      = 1/8

The argument extends to γ ≥ (k − 2) and any αm constraints: it just requires E[C(z_1, ..., z_k)] over any k − 2 of the variables to be constant.
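The telescoping computation can be confirmed by brute force over all 64 assignments, for every choice of right-hand sides:

```python
from itertools import product

# C1: z1+z2+z3 = b1, C2: z3+z4+z5 = b2, C3: z4+z5+z6 = b3 (mod 2).
# Whatever the right-hand sides, exactly 1/8 of all assignments satisfy
# all three constraints, so in particular some assignment does.
for b1, b2, b3 in product([0, 1], repeat=3):
    sat = sum(1 for z in product([0, 1], repeat=6)
              if (z[0] + z[1] + z[2]) % 2 == b1
              and (z[2] + z[3] + z[4]) % 2 == b2
              and (z[3] + z[4] + z[5]) % 2 == b3)
    assert sat / 64 == 1 / 8   # matches the telescoping expectations

fraction = 1 / 8
```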

Sherali-Adams LP for CSPs

Variables: X_{(S,α)} for |S| ≤ t, partial assignments α ∈ {0,1}^S.

    maximize    Σ_{i=1}^{m} Σ_{α ∈ {0,1}^{T_i}} C_i(α) · X_{(T_i, α)}
    subject to  X_{(S∪{i}, α∘0)} + X_{(S∪{i}, α∘1)} = X_{(S,α)}    ∀ i ∉ S
                X_{(S,α)} ≥ 0
                X_{(∅,∅)} = 1

Think X_{(S,α)} = P[vars in S assigned according to α].

Need distributions D(S) such that D(S_1) and D(S_2) agree on S_1 ∩ S_2.

The distributions should locally "look like" they are supported on satisfying assignments.

Obtaining Integrality Gaps for CSPs

Want to define a distribution D(S) for each small set S of variables.

[Figure: the bipartite constraint graph, with S and a set of nearby constraints C highlighted.]

Find a set of constraints C such that the constraint graph minus C and S remains expanding, and set D(S) = uniform distribution over assignments satisfying C.

The remaining constraints are then "independent" of this assignment.

Vectors for Linear CSPs

A New Look at Lasserre

Start with a {−1, 1} quadratic integer program: (z_1, ..., z_n) → ((−1)^{z_1}, ..., (−1)^{z_n}).

Define big variables Z_S = ∏_{i∈S} (−1)^{z_i}.

Consider the psd matrix Ỹ:

    Ỹ_{S1,S2} = E[Z_{S1} Z_{S2}] = E[ ∏_{i ∈ S1 △ S2} (−1)^{z_i} ]

Write the program in terms of inner products of vectors W_S such that Ỹ_{S1,S2} = ⟨W_{S1}, W_{S2}⟩.

Gaps for 3-XOR

SDP for MAX 3-XOR:

    maximize    Σ_{C_i ≡ (z_{i1}+z_{i2}+z_{i3} = b_i)}  (1 + (−1)^{b_i} ⟨W_{ {i1,i2,i3} }, W_∅⟩) / 2
    subject to  ⟨W_{S1}, W_{S2}⟩ = ⟨W_{S3}, W_{S4}⟩    ∀ S1 △ S2 = S3 △ S4
                ‖W_S‖ = 1                                ∀ S, |S| ≤ r

[Schoenebeck08]: If width-2r resolution does not derive a contradiction, then the SDP value is 1 after r levels.

Expansion guarantees there are no width-2r contradictions.

Schoenebeck's Construction

    z_1 + z_2 + z_3 = 1 (mod 2)   ⟹   (−1)^{z_1 + z_2} = −(−1)^{z_3}   ⟹   W_{ {1,2} } = −W_{ {3} }

Equations of width ≤ 2r divide the sets S, |S| ≤ r, into equivalence classes. Choose an orthogonal vector e_C for each class C.

The absence of contradictions ensures each S ∈ C can be uniquely assigned ±e_C.

[Figure: sets S_1, S_2, S_3 with W_{S1} = e_{C1}, W_{S2} = e_{C2}, W_{S3} = ±e_{C2}.]

This relies heavily on the constraints being linear equations.
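A toy sketch of the bookkeeping (not the full construction): a union-find with a sign bit can track which subsets the width-bounded equations identify, and with what sign.

```python
# Width-2r equations identify subsets up to sign: the equation
# z1 + z2 + z3 = 1 (mod 2) forces W_{1,2} = -W_{3}.  A union-find with a
# parity bit tracks these classes; distinct classes get orthogonal e_C.
parent = {}   # subset -> (representative, sign of W_S relative to W_rep)

def find(S):
    if S not in parent:
        parent[S] = (S, 1)
    rep, sign = parent[S]
    if rep != S:
        root, s2 = find(rep)
        parent[S] = (root, sign * s2)   # path compression, composing signs
    return parent[S]

def union(S1, S2, sign):
    """Record the relation W_{S1} = sign * W_{S2}."""
    r1, s1 = find(S1)
    r2, s2 = find(S2)
    if r1 != r2:
        parent[r1] = (r2, s1 * s2 * sign)

# Apply the equation {1,2,3}, b = 1 to the pair ({1,2}, {3}):
# Z_{1,2} * Z_{3} = (-1)^(z1+z2+z3) = (-1)^b, so W_{1,2} = (-1)^b * W_{3}.
union(frozenset({1, 2}), frozenset({3}), sign=-1)

root12, sign12 = find(frozenset({1, 2}))
root3, sign3 = find(frozenset({3}))
assert root12 == root3 and sign12 * sign3 == -1
```

A contradiction would surface here as a subset related to itself with sign −1; Schoenebeck's argument is that expansion rules this out at width 2r.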

Reductions

Spreading the Hardness Around (Reductions) [T]

If problem A reduces to B, can we say: integrality gap for A ⟹ integrality gap for B?

Reductions are (often) local algorithms.

In a reduction from integer program A to integer program B, each variable z_i of B is a boolean function of a few (say 5) variables z_{i1}, ..., z_{i5} of A.

To show: if A has a good vector solution, so does B.

A Generic Transformation

[Figure: A → f → B.]

    z_i = f(z_{i1}, ..., z_{i5})

    U_{ {z_i} } = Σ_{S ⊆ {i1,...,i5}} f̂(S) · W_S

What Can Be Proved

[Table, garbled in transcription: NP-hardness, UG-hardness, integrality gap, and number of levels for MAX k-CSP, Independent Set, Approximate Graph Coloring, Chromatic Number, and Vertex Cover. Legible fragments: gaps of the form 2^k/2k for MAX k-CSP at Ω(n) levels; gaps of n / 2^{O(√(log n · log log n))} for Independent Set and Chromatic Number; coloring gaps of l vs. log² l (NP-hard), l vs. 2^{l/2} (UG-hard); Vertex Cover NP-hard at 1.36, UG-hard at 2 − ε, with gaps surviving Ω(n^δ) levels.]

The FGLSS Construction

Reduces MAX k-CSP to Independent Set in a graph G_Φ.

[Figure: vertices of G_Φ are (constraint, satisfying assignment) pairs for the constraints z_1 + z_2 + z_3 = 1 and z_3 + z_4 + z_5 = 0, with edges between inconsistent pairs.]

Need vectors for subsets of vertices in G_Φ. Every vertex (or set of vertices) in G_Φ is an indicator function!

    U_{ {(z_1,z_2,z_3) = (0,0,1)} } = (1/8) (W_∅ + W_{ {1} } + W_{ {2} } − W_{ {3} } + W_{ {1,2} } − W_{ {2,3} } − W_{ {1,3} } − W_{ {1,2,3} })
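The displayed Fourier expansion can be verified directly, evaluating the characters on all 8 assignments:

```python
from itertools import product

# Indicator of the FGLSS vertex (z1, z2, z3) = (0, 0, 1), expanded in the
# characters chi_S(z) = prod_{i in S} (-1)^{z_i} (the +-1 "big" variables).
coeff = {(): 1, (1,): 1, (2,): 1, (3,): -1,
         (1, 2): 1, (2, 3): -1, (1, 3): -1, (1, 2, 3): -1}

def chi(S, z):
    sign = 1
    for i in S:
        sign *= (-1) ** z[i - 1]
    return sign

values = {z: sum(c * chi(S, z) for S, c in coeff.items()) / 8
          for z in product([0, 1], repeat=3)}

# The expansion is 1 exactly on the assignment the vertex represents.
assert all(v == (1 if z == (0, 0, 1) else 0) for z, v in values.items())
```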

Graph Products

[Figure: vertices (v_1, v_2, v_3) and (w_1, w_2, w_3) of the product graph.]

    U_{ {(v_1,v_2,v_3)} } = U_{ {v_1} } ⊗ U_{ {v_2} } ⊗ U_{ {v_3} }

A similar transformation works for sets (project to each copy of G).

Intuition: an independent set in the product graph is a product of independent sets in G.

Together these give a gap of n / 2^{O(√(log n · log log n))}.
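The key identity behind the product construction, ⟨u ⊗ v, u′ ⊗ v′⟩ = ⟨u, u′⟩⟨v, v′⟩, checked in a few lines:

```python
def tensor(u, v):
    """Tensor product of two vectors, flattened into one vector."""
    return [a * b for a in u for b in v]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

u, u2 = [1.0, 2.0], [0.5, -1.0]
v, v2 = [3.0, 0.0, 1.0], [1.0, 1.0, -2.0]

# Inner products multiply under the tensor product, so constraints on
# inner products of the factor solutions transfer to the product graph.
lhs = dot(tensor(u, v), tensor(u2, v2))
rhs = dot(u, u2) * dot(v, v2)
assert abs(lhs - rhs) < 1e-9
```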

A few problems

Problem 1: Size vs. Rank

All the previous bounds are on the number of levels (rank).

What if a program uses only poly(n) constraints (size), but takes them from up to level n?

If proved, is this kind of hardness closed under local reductions?

Problem 2: Generalize Schoenebeck's Technique

The technique seems specialized to linear equations. It breaks down even if there are only a few local contradictions (which doesn't rule out a gap).

We have distributions, but not vectors, for other types of CSPs.

What extra constraints do the vectors capture?

Thank You. Questions?


More information

Linear Programming Duality and Algorithms

Linear Programming Duality and Algorithms COMPSCI 330: Design and Analysis of Algorithms 4/5/2016 and 4/7/2016 Linear Programming Duality and Algorithms Lecturer: Debmalya Panigrahi Scribe: Tianqi Song 1 Overview In this lecture, we will cover

More information

Week 5. Convex Optimization

Week 5. Convex Optimization Week 5. Convex Optimization Lecturer: Prof. Santosh Vempala Scribe: Xin Wang, Zihao Li Feb. 9 and, 206 Week 5. Convex Optimization. The convex optimization formulation A general optimization problem is

More information

Bipartite Vertex Cover

Bipartite Vertex Cover Bipartite Vertex Cover Mika Göös Jukka Suomela University of Toronto & HIIT University of Helsinki & HIIT Göös and Suomela Bipartite Vertex Cover 18th October 2012 1 / 12 LOCAL model Göös and Suomela Bipartite

More information

An incomplete survey of matrix completion. Benjamin Recht Department of Computer Sciences University of Wisconsin-Madison

An incomplete survey of matrix completion. Benjamin Recht Department of Computer Sciences University of Wisconsin-Madison An incomplete survey of matrix completion Benjamin Recht Department of Computer Sciences University of Wisconsin-Madison Abstract Setup: Matrix Completion M = M ij known for black cells M ij unknown for

More information

Rank Bounds and Integrality Gaps for Cutting Planes Procedures

Rank Bounds and Integrality Gaps for Cutting Planes Procedures Rank Bounds and Integrality Gaps for Cutting Planes Procedures Joshua Buresh-Oppenheim Nicola Galesi Shlomo Hoory University of Toronto Universitat Politecnica de Catalunya University of Toronto bureshop@cs.toronto.edu

More information

Topic: Local Search: Max-Cut, Facility Location Date: 2/13/2007

Topic: Local Search: Max-Cut, Facility Location Date: 2/13/2007 CS880: Approximations Algorithms Scribe: Chi Man Liu Lecturer: Shuchi Chawla Topic: Local Search: Max-Cut, Facility Location Date: 2/3/2007 In previous lectures we saw how dynamic programming could be

More information

1. Lecture notes on bipartite matching February 4th,

1. Lecture notes on bipartite matching February 4th, 1. Lecture notes on bipartite matching February 4th, 2015 6 1.1.1 Hall s Theorem Hall s theorem gives a necessary and sufficient condition for a bipartite graph to have a matching which saturates (or matches)

More information

Lecture 14: Linear Programming II

Lecture 14: Linear Programming II A Theorist s Toolkit (CMU 18-859T, Fall 013) Lecture 14: Linear Programming II October 3, 013 Lecturer: Ryan O Donnell Scribe: Stylianos Despotakis 1 Introduction At a big conference in Wisconsin in 1948

More information

6.889 Sublinear Time Algorithms February 25, Lecture 6

6.889 Sublinear Time Algorithms February 25, Lecture 6 6.9 Sublinear Time Algorithms February 5, 019 Lecture 6 Lecturer: Ronitt Rubinfeld Scribe: Michal Shlapentokh-Rothman 1 Outline Today, we will discuss a general framework for testing minor-free properties

More information

/633 Introduction to Algorithms Lecturer: Michael Dinitz Topic: Approximation algorithms Date: 11/27/18

/633 Introduction to Algorithms Lecturer: Michael Dinitz Topic: Approximation algorithms Date: 11/27/18 601.433/633 Introduction to Algorithms Lecturer: Michael Dinitz Topic: Approximation algorithms Date: 11/27/18 22.1 Introduction We spent the last two lectures proving that for certain problems, we can

More information

Mathematical and Algorithmic Foundations Linear Programming and Matchings

Mathematical and Algorithmic Foundations Linear Programming and Matchings Adavnced Algorithms Lectures Mathematical and Algorithmic Foundations Linear Programming and Matchings Paul G. Spirakis Department of Computer Science University of Patras and Liverpool Paul G. Spirakis

More information

Sub-exponential Approximation Schemes: From Dense to Almost-Sparse. Dimitris Fotakis Michael Lampis Vangelis Paschos

Sub-exponential Approximation Schemes: From Dense to Almost-Sparse. Dimitris Fotakis Michael Lampis Vangelis Paschos Sub-exponential Approximation Schemes: From Dense to Almost-Sparse Dimitris Fotakis Michael Lampis Vangelis Paschos NTU Athens Université Paris Dauphine Feb 18, STACS 2016 Motivation Sub-Exponential Approximation

More information

Sub-exponential Approximation Schemes: From Dense to Almost-Sparse. Dimitris Fotakis Michael Lampis Vangelis Paschos

Sub-exponential Approximation Schemes: From Dense to Almost-Sparse. Dimitris Fotakis Michael Lampis Vangelis Paschos Sub-exponential Approximation Schemes: From Dense to Almost-Sparse Dimitris Fotakis Michael Lampis Vangelis Paschos NTU Athens Université Paris Dauphine Nov 5, 2015 Overview Things you will hear in this

More information

Approximation Algorithms

Approximation Algorithms Approximation Algorithms Frédéric Giroire FG Simplex 1/11 Motivation Goal: Find good solutions for difficult problems (NP-hard). Be able to quantify the goodness of the given solution. Presentation of

More information

6. Lecture notes on matroid intersection

6. Lecture notes on matroid intersection Massachusetts Institute of Technology 18.453: Combinatorial Optimization Michel X. Goemans May 2, 2017 6. Lecture notes on matroid intersection One nice feature about matroids is that a simple greedy algorithm

More information

An SDP Approach to Multi-level Crossing Minimization

An SDP Approach to Multi-level Crossing Minimization An SDP Approach to Multi-level Crossing Minimization P. Hungerländer joint work with M. Chimani, M. Jünger, P. Mutzel University of Klagenfurt - Department of Mathematics 15th Combinatorial Optimization

More information

The Ordered Covering Problem

The Ordered Covering Problem The Ordered Covering Problem Uriel Feige Yael Hitron November 8, 2016 Abstract We introduce the Ordered Covering (OC) problem. The input is a finite set of n elements X, a color function c : X {0, 1} and

More information

Lecture 16: Gaps for Max-Cut

Lecture 16: Gaps for Max-Cut Advanced Approximation Algorithms (CMU 8-854B, Spring 008) Lecture 6: Gaps for Max-Cut Mar 6, 008 Lecturer: Ryan O Donnell Scribe: Ravishankar Krishnaswamy Outline In this lecture, we will discuss algorithmic

More information

Optimization I : Brute force and Greedy strategy

Optimization I : Brute force and Greedy strategy Chapter 3 Optimization I : Brute force and Greedy strategy A generic definition of an optimization problem involves a set of constraints that defines a subset in some underlying space (like the Euclidean

More information

The complement of PATH is in NL

The complement of PATH is in NL 340 The complement of PATH is in NL Let c be the number of nodes in graph G that are reachable from s We assume that c is provided as an input to M Given G, s, t, and c the machine M operates as follows:

More information

Fast Clustering using MapReduce

Fast Clustering using MapReduce Fast Clustering using MapReduce Alina Ene, Sungjin Im, Benjamin Moseley UIUC KDD 2011 Clustering Massive Data Group web pages based on their content Group users based on their online behavior Finding communities

More information

Binary Positive Semidefinite Matrices and Associated Integer Polytopes

Binary Positive Semidefinite Matrices and Associated Integer Polytopes Binary Positive Semidefinite Matrices and Associated Integer Polytopes Adam N. Letchford 1 and Michael M. Sørensen 2 1 Department of Management Science, Lancaster University, Lancaster LA1 4YW, United

More information

Graph Theory and Optimization Approximation Algorithms

Graph Theory and Optimization Approximation Algorithms Graph Theory and Optimization Approximation Algorithms Nicolas Nisse Université Côte d Azur, Inria, CNRS, I3S, France October 2018 Thank you to F. Giroire for some of the slides N. Nisse Graph Theory and

More information

Notes for Lecture 24

Notes for Lecture 24 U.C. Berkeley CS170: Intro to CS Theory Handout N24 Professor Luca Trevisan December 4, 2001 Notes for Lecture 24 1 Some NP-complete Numerical Problems 1.1 Subset Sum The Subset Sum problem is defined

More information

Efficiently decodable insertion/deletion codes for high-noise and high-rate regimes

Efficiently decodable insertion/deletion codes for high-noise and high-rate regimes Efficiently decodable insertion/deletion codes for high-noise and high-rate regimes Venkatesan Guruswami Carnegie Mellon University Pittsburgh, PA 53 Email: guruswami@cmu.edu Ray Li Carnegie Mellon University

More information

Lecture 6: Linear Programming for Sparsest Cut

Lecture 6: Linear Programming for Sparsest Cut Lecture 6: Linear Programming for Sparsest Cut Sparsest Cut and SOS The SOS hierarchy captures the algorithms for sparsest cut, but they were discovered directly without thinking about SOS (and this is

More information

A List Heuristic for Vertex Cover

A List Heuristic for Vertex Cover A List Heuristic for Vertex Cover Happy Birthday Vasek! David Avis McGill University Tomokazu Imamura Kyoto University Operations Research Letters (to appear) Online: http://cgm.cs.mcgill.ca/ avis revised:

More information

Questions? You are given the complete graph of Facebook. What questions would you ask? (What questions could we hope to answer?)

Questions? You are given the complete graph of Facebook. What questions would you ask? (What questions could we hope to answer?) P vs. NP What now? Attribution These slides were prepared for the New Jersey Governor s School course The Math Behind the Machine taught in the summer of 2011 by Grant Schoenebeck Large parts of these

More information

Transductive Learning: Motivation, Model, Algorithms

Transductive Learning: Motivation, Model, Algorithms Transductive Learning: Motivation, Model, Algorithms Olivier Bousquet Centre de Mathématiques Appliquées Ecole Polytechnique, FRANCE olivier.bousquet@m4x.org University of New Mexico, January 2002 Goal

More information

Approximation Algorithms

Approximation Algorithms 18.433 Combinatorial Optimization Approximation Algorithms November 20,25 Lecturer: Santosh Vempala 1 Approximation Algorithms Any known algorithm that finds the solution to an NP-hard optimization problem

More information

1. Lecture notes on bipartite matching

1. Lecture notes on bipartite matching Massachusetts Institute of Technology 18.453: Combinatorial Optimization Michel X. Goemans February 5, 2017 1. Lecture notes on bipartite matching Matching problems are among the fundamental problems in

More information

A Lottery Model for Center-type Problems With Outliers

A Lottery Model for Center-type Problems With Outliers A Lottery Model for Center-type Problems With Outliers David G. Harris Thomas Pensyl Aravind Srinivasan Khoa Trinh Abstract In this paper, we give tight approximation algorithms for the k-center and matroid

More information

Linear Programming in Small Dimensions

Linear Programming in Small Dimensions Linear Programming in Small Dimensions Lekcija 7 sergio.cabello@fmf.uni-lj.si FMF Univerza v Ljubljani Edited from slides by Antoine Vigneron Outline linear programming, motivation and definition one dimensional

More information

NP-Hardness. We start by defining types of problem, and then move on to defining the polynomial-time reductions.

NP-Hardness. We start by defining types of problem, and then move on to defining the polynomial-time reductions. CS 787: Advanced Algorithms NP-Hardness Instructor: Dieter van Melkebeek We review the concept of polynomial-time reductions, define various classes of problems including NP-complete, and show that 3-SAT

More information

BOOLEAN MATRIX FACTORIZATIONS. with applications in data mining Pauli Miettinen

BOOLEAN MATRIX FACTORIZATIONS. with applications in data mining Pauli Miettinen BOOLEAN MATRIX FACTORIZATIONS with applications in data mining Pauli Miettinen MATRIX FACTORIZATIONS BOOLEAN MATRIX FACTORIZATIONS o THE BOOLEAN MATRIX PRODUCT As normal matrix product, but with addition

More information

CS 473: Algorithms. Ruta Mehta. Spring University of Illinois, Urbana-Champaign. Ruta (UIUC) CS473 1 Spring / 36

CS 473: Algorithms. Ruta Mehta. Spring University of Illinois, Urbana-Champaign. Ruta (UIUC) CS473 1 Spring / 36 CS 473: Algorithms Ruta Mehta University of Illinois, Urbana-Champaign Spring 2018 Ruta (UIUC) CS473 1 Spring 2018 1 / 36 CS 473: Algorithms, Spring 2018 LP Duality Lecture 20 April 3, 2018 Some of the

More information

1 Linear programming relaxation

1 Linear programming relaxation Cornell University, Fall 2010 CS 6820: Algorithms Lecture notes: Primal-dual min-cost bipartite matching August 27 30 1 Linear programming relaxation Recall that in the bipartite minimum-cost perfect matching

More information

Probabilistic embedding into trees: definitions and applications. Fall 2011 Lecture 4

Probabilistic embedding into trees: definitions and applications. Fall 2011 Lecture 4 Probabilistic embedding into trees: definitions and applications. Fall 2011 Lecture 4 Instructor: Mohammad T. Hajiaghayi Scribe: Anshul Sawant September 21, 2011 1 Overview Some problems which are hard

More information

Logarithmic Time Prediction

Logarithmic Time Prediction Logarithmic Time Prediction John Langford Microsoft Research DIMACS Workshop on Big Data through the Lens of Sublinear Algorithms The Multiclass Prediction Problem Repeatedly 1 See x 2 Predict ŷ {1,...,

More information

Models of distributed computing: port numbering and local algorithms

Models of distributed computing: port numbering and local algorithms Models of distributed computing: port numbering and local algorithms Jukka Suomela Adaptive Computing Group Helsinki Institute for Information Technology HIIT University of Helsinki FMT seminar, 26 February

More information

Approximation slides 1. An optimal polynomial algorithm for the Vertex Cover and matching in Bipartite graphs

Approximation slides 1. An optimal polynomial algorithm for the Vertex Cover and matching in Bipartite graphs Approximation slides 1 An optimal polynomial algorithm for the Vertex Cover and matching in Bipartite graphs Approximation slides 2 Linear independence A collection of row vectors {v T i } are independent

More information

New Upper Bounds for MAX-2-SAT and MAX-2-CSP w.r.t. the Average Variable Degree

New Upper Bounds for MAX-2-SAT and MAX-2-CSP w.r.t. the Average Variable Degree New Upper Bounds for MAX-2-SAT and MAX-2-CSP w.r.t. the Average Variable Degree Alexander Golovnev St. Petersburg University of the Russian Academy of Sciences, St. Petersburg, Russia alex.golovnev@gmail.com

More information

11.1 Facility Location

11.1 Facility Location CS787: Advanced Algorithms Scribe: Amanda Burton, Leah Kluegel Lecturer: Shuchi Chawla Topic: Facility Location ctd., Linear Programming Date: October 8, 2007 Today we conclude the discussion of local

More information

Problem set 2. Problem 1. Problem 2. Problem 3. CS261, Winter Instructor: Ashish Goel.

Problem set 2. Problem 1. Problem 2. Problem 3. CS261, Winter Instructor: Ashish Goel. CS261, Winter 2017. Instructor: Ashish Goel. Problem set 2 Electronic submission to Gradescope due 11:59pm Thursday 2/16. Form a group of 2-3 students that is, submit one homework with all of your names.

More information

Abstract. A graph G is perfect if for every induced subgraph H of G, the chromatic number of H is equal to the size of the largest clique of H.

Abstract. A graph G is perfect if for every induced subgraph H of G, the chromatic number of H is equal to the size of the largest clique of H. Abstract We discuss a class of graphs called perfect graphs. After defining them and getting intuition with a few simple examples (and one less simple example), we present a proof of the Weak Perfect Graph

More information

New Results on Simple Stochastic Games

New Results on Simple Stochastic Games New Results on Simple Stochastic Games Decheng Dai 1 and Rong Ge 2 1 Tsinghua University, ddc02@mails.tsinghua.edu.cn 2 Princeton University, rongge@cs.princeton.edu Abstract. We study the problem of solving

More information

Lifts of convex sets and cone factorizations

Lifts of convex sets and cone factorizations Lifts of convex sets and cone factorizations João Gouveia Universidade de Coimbra 20 Dec 2012 - CORE - Université Catholique de Louvain with Pablo Parrilo (MIT) and Rekha Thomas (U.Washington) Lifts of

More information

Approximation Algorithms

Approximation Algorithms Approximation Algorithms Given an NP-hard problem, what should be done? Theory says you're unlikely to find a poly-time algorithm. Must sacrifice one of three desired features. Solve problem to optimality.

More information

Ellipsoid II: Grötschel-Lovász-Schrijver theorems

Ellipsoid II: Grötschel-Lovász-Schrijver theorems Lecture 9 Ellipsoid II: Grötschel-Lovász-Schrijver theorems I László Lovász. Ryan O Donnell We saw in the last lecture that the Ellipsoid Algorithm can solve the optimization problem max s.t. c x Ax b

More information

Lecture 7: Counting classes

Lecture 7: Counting classes princeton university cos 522: computational complexity Lecture 7: Counting classes Lecturer: Sanjeev Arora Scribe:Manoj First we define a few interesting problems: Given a boolean function φ, #SAT is the

More information

Constraining strategies for Kaczmarz-like algorithms

Constraining strategies for Kaczmarz-like algorithms Constraining strategies for Kaczmarz-like algorithms Ovidius University, Constanta, Romania Faculty of Mathematics and Computer Science Talk on April 7, 2008 University of Erlangen-Nürnberg The scanning

More information

Algorithms for Euclidean TSP

Algorithms for Euclidean TSP This week, paper [2] by Arora. See the slides for figures. See also http://www.cs.princeton.edu/~arora/pubs/arorageo.ps Algorithms for Introduction This lecture is about the polynomial time approximation

More information

12.1 Formulation of General Perfect Matching

12.1 Formulation of General Perfect Matching CSC5160: Combinatorial Optimization and Approximation Algorithms Topic: Perfect Matching Polytope Date: 22/02/2008 Lecturer: Lap Chi Lau Scribe: Yuk Hei Chan, Ling Ding and Xiaobing Wu In this lecture,

More information

Randomized rounding of semidefinite programs and primal-dual method for integer linear programming. Reza Moosavi Dr. Saeedeh Parsaeefard Dec.

Randomized rounding of semidefinite programs and primal-dual method for integer linear programming. Reza Moosavi Dr. Saeedeh Parsaeefard Dec. Randomized rounding of semidefinite programs and primal-dual method for integer linear programming Dr. Saeedeh Parsaeefard 1 2 3 4 Semidefinite Programming () 1 Integer Programming integer programming

More information

Combinatorial Optimization

Combinatorial Optimization Combinatorial Optimization Frank de Zeeuw EPFL 2012 Today Introduction Graph problems - What combinatorial things will we be optimizing? Algorithms - What kind of solution are we looking for? Linear Programming

More information

Image Segmentation with a Bounding Box Prior Victor Lempitsky, Pushmeet Kohli, Carsten Rother, Toby Sharp Microsoft Research Cambridge

Image Segmentation with a Bounding Box Prior Victor Lempitsky, Pushmeet Kohli, Carsten Rother, Toby Sharp Microsoft Research Cambridge Image Segmentation with a Bounding Box Prior Victor Lempitsky, Pushmeet Kohli, Carsten Rother, Toby Sharp Microsoft Research Cambridge Dylan Rhodes and Jasper Lin 1 Presentation Overview Segmentation problem

More information

Chapter 10 Part 1: Reduction

Chapter 10 Part 1: Reduction //06 Polynomial-Time Reduction Suppose we could solve Y in polynomial-time. What else could we solve in polynomial time? don't confuse with reduces from Chapter 0 Part : Reduction Reduction. Problem X

More information

Using the FGLSS-reduction to Prove Inapproximability Results for Minimum Vertex Cover in Hypergraphs

Using the FGLSS-reduction to Prove Inapproximability Results for Minimum Vertex Cover in Hypergraphs Using the FGLSS-reduction to Prove Inapproximability Results for Minimum Vertex Cover in Hypergraphs Oded Goldreich Abstract. Using known results regarding PCP, we present simple proofs of the inapproximability

More information

Semidefinite Programs for Exact Recovery of a Hidden Community (and Many Communities)

Semidefinite Programs for Exact Recovery of a Hidden Community (and Many Communities) Semidefinite Programs for Exact Recovery of a Hidden Community (and Many Communities) Bruce Hajek 1 Yihong Wu 1 Jiaming Xu 2 1 University of Illinois at Urbana-Champaign 2 Simons Insitute, UC Berkeley

More information

Towards the Proof of the PCP Theorem

Towards the Proof of the PCP Theorem CS640 Computational Complexity Towards the Proof of the PCP Theorem Instructor: Manindra Agrawal Scribe: Ramprasad Saptharishi Last class we saw completed our discussion on expander graphs. We shall now

More information

Hardness of Approximation for the TSP. Michael Lampis LAMSADE Université Paris Dauphine

Hardness of Approximation for the TSP. Michael Lampis LAMSADE Université Paris Dauphine Hardness of Approximation for the TSP Michael Lampis LAMSADE Université Paris Dauphine Sep 2, 2015 Overview Hardness of Approximation What is it? How to do it? (Easy) Examples The PCP Theorem What is it?

More information

Colour Refinement. A Simple Partitioning Algorithm with Applications From Graph Isomorphism Testing to Machine Learning. Martin Grohe RWTH Aachen

Colour Refinement. A Simple Partitioning Algorithm with Applications From Graph Isomorphism Testing to Machine Learning. Martin Grohe RWTH Aachen Colour Refinement A Simple Partitioning Algorithm with Applications From Graph Isomorphism Testing to Machine Learning Martin Grohe RWTH Aachen Outline 1. Colour Refinement and Weisfeiler Lehman 2. Colour

More information

Dynamic Collision Detection

Dynamic Collision Detection Distance Computation Between Non-Convex Polyhedra June 17, 2002 Applications Dynamic Collision Detection Applications Dynamic Collision Detection Evaluating Safety Tolerances Applications Dynamic Collision

More information

Lecture 3. Corner Polyhedron, Intersection Cuts, Maximal Lattice-Free Convex Sets. Tepper School of Business Carnegie Mellon University, Pittsburgh

Lecture 3. Corner Polyhedron, Intersection Cuts, Maximal Lattice-Free Convex Sets. Tepper School of Business Carnegie Mellon University, Pittsburgh Lecture 3 Corner Polyhedron, Intersection Cuts, Maximal Lattice-Free Convex Sets Gérard Cornuéjols Tepper School of Business Carnegie Mellon University, Pittsburgh January 2016 Mixed Integer Linear Programming

More information

No more questions will be added

No more questions will be added CSC 2545, Spring 2017 Kernel Methods and Support Vector Machines Assignment 2 Due at the start of class, at 2:10pm, Thurs March 23. No late assignments will be accepted. The material you hand in should

More information

Algorithm Design and Analysis

Algorithm Design and Analysis Algorithm Design and Analysis LECTURE 29 Approximation Algorithms Load Balancing Weighted Vertex Cover Reminder: Fill out SRTEs online Don t forget to click submit Sofya Raskhodnikova 12/7/2016 Approximation

More information

Error correction guarantees

Error correction guarantees Error correction guarantees Drawback of asymptotic analyses Valid only as long as the incoming messages are independent. (independence assumption) The messages are independent for l iterations only if

More information

An exact algorithm for max-cut in sparse graphs

An exact algorithm for max-cut in sparse graphs An exact algorithm for max-cut in sparse graphs F. Della Croce a M. J. Kaminski b, V. Th. Paschos c a D.A.I., Politecnico di Torino, Italy b RUTCOR, Rutgers University c LAMSADE, CNRS UMR 7024 and Université

More information

Robust Combinatorial Optimization with Exponential Scenarios

Robust Combinatorial Optimization with Exponential Scenarios Robust Combinatorial Optimization with Exponential Scenarios Uriel Feige Kamal Jain Mohammad Mahdian Vahab Mirrokni November 15, 2006 Abstract Following the well-studied two-stage optimization framework

More information

Polynomial Flow-Cut Gaps and Hardness of Directed Cut Problems

Polynomial Flow-Cut Gaps and Hardness of Directed Cut Problems Polynomial Flow-Cut Gaps and Hardness of Directed Cut Problems Julia Chuzhoy Sanjeev Khanna July 17, 2007 Abstract We study the multicut and the sparsest cut problems in directed graphs. In the multicut

More information

POLYHEDRAL GEOMETRY. Convex functions and sets. Mathematical Programming Niels Lauritzen Recall that a subset C R n is convex if

POLYHEDRAL GEOMETRY. Convex functions and sets. Mathematical Programming Niels Lauritzen Recall that a subset C R n is convex if POLYHEDRAL GEOMETRY Mathematical Programming Niels Lauritzen 7.9.2007 Convex functions and sets Recall that a subset C R n is convex if {λx + (1 λ)y 0 λ 1} C for every x, y C and 0 λ 1. A function f :

More information

Lec13p1, ORF363/COS323

Lec13p1, ORF363/COS323 Lec13 Page 1 Lec13p1, ORF363/COS323 This lecture: Semidefinite programming (SDP) Definition and basic properties Review of positive semidefinite matrices SDP duality SDP relaxations for nonconvex optimization

More information

11. APPROXIMATION ALGORITHMS

11. APPROXIMATION ALGORITHMS 11. APPROXIMATION ALGORITHMS load balancing center selection pricing method: vertex cover LP rounding: vertex cover generalized load balancing knapsack problem Lecture slides by Kevin Wayne Copyright 2005

More information