Local Constraints in Combinatorial Optimization
Madhur Tulsiani, Institute for Advanced Study
Local Constraints in Approximation Algorithms

Linear Programming (LP) or Semidefinite Programming (SDP) based approximation algorithms impose constraints on a few variables at a time.

- When can local constraints help in approximating a global property (e.g. Vertex Cover, Chromatic Number)?
- How does one reason about increasingly larger local constraints?
- Does the approximation get better as the constraints get larger?
LP/SDP Hierarchies

Various hierarchies give increasingly powerful programs at different levels (rounds):
- Lovász-Schrijver (LS, LS+)
- Sherali-Adams (SA)
- Lasserre (Las)

Each forms a chain of increasingly tight relaxations: ... ⊆ Las(2) ⊆ Las(1), and similarly for SA, LS, LS+.

One can optimize over the r-th level in time n^O(r); the n-th level is tight.
LP/SDP Hierarchies

- Powerful computational model capturing most known LP/SDP algorithms within a constant number of levels.
- Lower bounds rule out a large and natural class of algorithms.
- Performance is measured by the integrality gap at various levels:

  Integrality Gap = (Optimum of Relaxation) / (Integer Optimum)   (for maximization)
Why bother?

- NP-Hardness: conditional, but rules out all polytime algorithms.
- LP/SDP Hierarchies: unconditional, but rule out only a restricted class of algorithms.
- (UG-Hardness lies in between.)
What Hierarchies Want

Example: Maximum Independent Set for a graph G = (V, E)

maximize    Σ_u x_u
subject to  x_u + x_v ≤ 1   ∀(u, v) ∈ E
            x_u ∈ [0, 1]

Hope: (x_1, ..., x_n) is the marginal of a distribution over 0/1 solutions, i.e. a convex combination such as x = (1/3)·z⁽¹⁾ + (1/3)·z⁽²⁾ + (1/3)·z⁽³⁾ for integral solutions z⁽ⁱ⁾.

Hierarchies add variables for conditional/joint probabilities.
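As a concrete illustration (not from the talk), the relaxation above can be solved on a triangle with an off-the-shelf LP solver; the all-1/2 point beats every integral solution, so the fractional optimum is not a marginal of any distribution over independent sets:

```python
import numpy as np
from scipy.optimize import linprog

# Independent Set LP relaxation on a triangle (3-cycle):
# maximize sum(x_u) s.t. x_u + x_v <= 1 for each edge, 0 <= x_u <= 1.
edges = [(0, 1), (1, 2), (0, 2)]
A = np.zeros((len(edges), 3))
for row, (u, v) in enumerate(edges):
    A[row, u] = A[row, v] = 1

# linprog minimizes, so negate the objective.
res = linprog(c=[-1, -1, -1], A_ub=A, b_ub=np.ones(3), bounds=[(0, 1)] * 3)
lp_opt = -res.fun   # LP optimum: 3/2, attained at x = (1/2, 1/2, 1/2)
int_opt = 1         # the largest independent set in a triangle has 1 vertex
print(lp_opt / int_opt)
```

The printed ratio is the integrality gap 1.5 for this instance.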
Lovász-Schrijver in Action

The r-th level optimizes over distributions conditioned on r variables.

[Figure: a graph with fractional values 2/3 and 1/2 on its vertices; after conditioning on one vertex being 1, its neighbors must be 0, and the remaining values are filled in so the conditioned solution stays feasible.]
The Lovász-Schrijver Hierarchy

Start with a 0/1 integer program and a relaxation P. Define a tighter relaxation LS(P).

Hope: fractional (x_1, ..., x_n) = E[(z_1, ..., z_n)] for integral (z_1, ..., z_n).

Restriction: x = (x_1, ..., x_n) ∈ LS(P) if ∃Y satisfying (think Y_ij = E[z_i z_j] = P[z_i = z_j = 1]):

- Y = Y^T
- Y_ii = x_i   ∀i
- Y_i / x_i ∈ P  and  (x − Y_i) / (1 − x_i) ∈ P   ∀i

(Y_i is the i-th column of Y; the two conditions correspond to conditioning on z_i = 1 and on z_i = 0.)

The above is an LP (an SDP for LS+) in n² + n variables.
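A small sanity check (a hypothetical instance, assuming P is the Independent Set LP on a triangle): the moment matrix of any genuine distribution over integral solutions satisfies the LS conditions above.

```python
import numpy as np

# Uniform distribution over the three single-vertex independent sets.
solutions = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1]])
weights = np.array([1 / 3, 1 / 3, 1 / 3])

x = weights @ solutions                          # marginals E[z_i]
Y = solutions.T @ np.diag(weights) @ solutions   # Y_ij = E[z_i z_j]

def in_P(y):
    """Feasibility for the triangle's Independent Set LP relaxation."""
    edges = [(0, 1), (1, 2), (0, 2)]
    return (np.all(y >= -1e-9) and np.all(y <= 1 + 1e-9)
            and all(y[u] + y[v] <= 1 + 1e-9 for u, v in edges))

assert np.allclose(Y, Y.T) and np.allclose(np.diag(Y), x)
for i in range(3):
    assert in_P(Y[:, i] / x[i])               # condition on z_i = 1
    assert in_P((x - Y[:, i]) / (1 - x[i]))   # condition on z_i = 0
```

The two scaled columns are exactly the conditioned solutions the hierarchy inspects.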
Action Replay

[The same conditioning figure as before, revisited after defining the LS constraints: values 2/3 and 1/2 on the vertices, with the conditioned-on vertex set to 1.]
The Sherali-Adams Hierarchy

Start with a 0/1 integer linear program.

Add "big" variables X_S for |S| ≤ r (think X_S = E[∏_{i∈S} z_i] = P[all vars in S are 1]).

Constraints: each original constraint Σ_i a_i z_i ≥ b is multiplied by products such as z_5 z_7 (1 − z_9) and taken in expectation,

E[(Σ_i a_i z_i) · z_5 z_7 (1 − z_9)] ≥ E[b · z_5 z_7 (1 − z_9)],

which expands to the linear constraint

Σ_i a_i (X_{{i,5,7}} − X_{{i,5,7,9}}) ≥ b (X_{{5,7}} − X_{{5,7,9}}).

This is an LP in n^O(r) variables.
Sherali-Adams = Locally Consistent Distributions

Using z_1 ≤ 1, z_2 ≤ 1:

- X_{{1,2}} ≥ 0
- X_{{1}} − X_{{1,2}} ≥ 0
- X_{{2}} − X_{{1,2}} ≥ 0
- 1 − X_{{1}} − X_{{2}} + X_{{1,2}} ≥ 0

So X_{{1}}, X_{{2}}, X_{{1,2}} define a distribution D({1, 2}) over {0, 1}².

D({1, 2, 3}) and D({1, 2, 4}) must agree with D({1, 2}).

Writing LCD(r) for such locally consistent distributions on sets of size r: LCD(r+k) ⊆ SA(r) ⊆ LCD(r) when each constraint has at most k variables.
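The four inequalities above are exactly what is needed to read off a genuine local distribution by inclusion-exclusion; a minimal sketch with hypothetical X values:

```python
# Recovering the local distribution D({1,2}) from SA variables.
# The X values below are hypothetical, chosen to satisfy the inequalities.
X1, X2, X12 = 0.5, 0.5, 0.2

# Inclusion-exclusion gives the probability of each pattern in {0,1}^2.
D = {
    (1, 1): X12,
    (1, 0): X1 - X12,
    (0, 1): X2 - X12,
    (0, 0): 1 - X1 - X2 + X12,
}
assert all(p >= 0 for p in D.values())   # the four SA inequalities
assert abs(sum(D.values()) - 1) < 1e-9   # a genuine distribution
# The marginals agree with the original variables:
assert abs(D[(1, 1)] + D[(1, 0)] - X1) < 1e-9
```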
The Lasserre Hierarchy

Start with a 0/1 integer quadratic program.

Think "big" variables Z_S = ∏_{i∈S} z_i.

Associated psd matrix Y (moment matrix):

Y_{S1,S2} = E[Z_{S1} Z_{S2}] = E[∏_{i ∈ S1∪S2} z_i] = P[all vars in S1 ∪ S2 are 1]

Constraints: (Y ⪰ 0) + original constraints + consistency constraints.
The Lasserre Hierarchy (constraints)

- Y is psd (i.e. find vectors U_S satisfying Y_{S1,S2} = ⟨U_{S1}, U_{S2}⟩).
- Y_{S1,S2} depends only on S1 ∪ S2 (since Y_{S1,S2} = P[all vars in S1 ∪ S2 are 1]).
- Original quadratic constraints as inner products.

SDP for Independent Set:

maximize    Σ_{i∈V} |U_{{i}}|²
subject to  ⟨U_{{i}}, U_{{j}}⟩ = 0                 ∀(i, j) ∈ E
            ⟨U_{S1}, U_{S2}⟩ = ⟨U_{S3}, U_{S4}⟩   ∀ S1 ∪ S2 = S3 ∪ S4
            ⟨U_{S1}, U_{S2}⟩ ∈ [0, 1]             ∀ S1, S2
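A quick numerical check (hypothetical instance: the uniform distribution over the three maximal independent sets of a triangle) that the moment matrix of a genuine distribution is psd and depends only on unions, as the hierarchy requires:

```python
import numpy as np
from itertools import combinations

solutions = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]
subsets = [S for r in range(3) for S in combinations(range(3), r)]  # |S| <= 2

def moment(S):
    """P[all vars in S are 1] under the uniform distribution."""
    return sum(all(z[i] for i in S) for z in solutions) / len(solutions)

# Y_{S1,S2} depends only on S1 ∪ S2 by construction.
Y = np.array([[moment(tuple(sorted(set(S1) | set(S2)))) for S2 in subsets]
              for S1 in subsets])
assert np.allclose(Y, Y.T)
assert np.linalg.eigvalsh(Y).min() >= -1e-9   # psd
```

Y is an average of rank-one matrices v vᵀ (one per solution), which is why psd-ness is automatic for true distributions.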
And if you just woke up...

A summary of the objects at each level:
- Sherali-Adams: the big variables X_S.
- Lovász-Schrijver (LS, LS+): the matrix Y, with Y_i / x_i and (x − Y_i)/(1 − x_i) in P.
- Lasserre: the vectors U_S.

Each hierarchy forms a chain of increasingly tight relaxations: ... ⊆ Las(2) ⊆ Las(1), etc.
Local Distributions
Ω(log n) Level LS Gap for Vertex Cover [ABLT06]

- Random sparse graphs: girth Ω(log n), minimum vertex cover of size ≥ (1 − ε)n.
- The fractional point (1/2 + ε, ..., 1/2 + ε) survives Ω(log n) levels.

[Figure: a locally tree-like graph with all vertices at 1/2 + ε and one vertex conditioned to v = 1.]
Creating Conditional Distributions

On a tree, the all-1/2 point is an honest marginal: at each vertex, 1/2 = (1/2)·[value 0] + (1/2)·[value 1], and the conditional distributions exist exactly.

On a graph that is only locally a tree, write 1/2 + ε = (1/2 − ε)·[value 0] + (1/2 + ε)·[value 1]. Conditioning on v = 1, a neighbor takes value 1 w.p. 4ε/(1 + 2ε) and 0 otherwise; the conditioned values near v (such as 1 − 4ε, 1 − 8ε, and 8ε, 4ε) decay back toward 1/2 + ε within distance O((1/ε) log(1/ε)) of v.
Cheat While You Can

Near the conditioned vertex the correction is easy; further out it is still doable, since the graph is locally a tree.

- Valid distributions exist for Ω(log n) levels.
- Can be extended to Ω(n) levels (but needs other ideas). [STT07]
- Similar ideas are also useful for constructing metrics which are locally ℓ₁ (but not globally). [CMM07]
Local Satisfiability for Expanding CSPs
CSP Expansion MAX k-csp: m constraints on k-tuples of (n) boolean variables. Satisfy maximum. e.g. MAX 3-XOR (linear equations mod 2) z 1 + z 2 + z 3 = z 3 + z 4 + z 5 = 1
CSP Expansion MAX k-csp: m constraints on k-tuples of (n) boolean variables. Satisfy maximum. e.g. MAX 3-XOR (linear equations mod 2) z 1 + z 2 + z 3 = z 3 + z 4 + z 5 = 1 Expansion: Every set S of constraints involves at least β S variables (for S < αm). (Used extensively in proof complexity e.g. [BW1], [BGHMP3])
CSP Expansion MAX k-csp: m constraints on k-tuples of (n) boolean variables. Satisfy maximum. e.g. MAX 3-XOR (linear equations mod 2) z 1 + z 2 + z 3 = z 3 + z 4 + z 5 = 1 Expansion: Every set S of constraints involves at least β S variables (for S < αm). (Used extensively in proof complexity e.g. [BW1], [BGHMP3]) C 1 z 1.. C m z n
CSP Expansion MAX k-csp: m constraints on k-tuples of (n) boolean variables. Satisfy maximum. e.g. MAX 3-XOR (linear equations mod 2) z 1 + z 2 + z 3 = z 3 + z 4 + z 5 = 1 Expansion: Every set S of constraints involves at least β S variables (for S < αm). (Used extensively in proof complexity e.g. [BW1], [BGHMP3]) C 1 z 1.. C m z n
CSP Expansion MAX k-csp: m constraints on k-tuples of (n) boolean variables. Satisfy maximum. e.g. MAX 3-XOR (linear equations mod 2) z 1 + z 2 + z 3 = z 3 + z 4 + z 5 = 1 Expansion: Every set S of constraints involves at least β S variables (for S < αm). (Used extensively in proof complexity e.g. [BW1], [BGHMP3]) C 1 z 1.. C m z n In fact, γ S variables appearing in only one constraint in S.
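Boundary expansion can be checked by brute force on small instances; a sketch on a hypothetical 3-XOR chain (real gap instances are random, but the quantity computed is the γ of the slide):

```python
from itertools import combinations
from collections import Counter

# 3-XOR constraints as (variable set, parity); a hypothetical instance.
constraints = [({1, 2, 3}, 0), ({3, 4, 5}, 1), ({4, 5, 6}, 0)]

def min_boundary_ratio(cons):
    """min over nonempty subsets S of (#vars in exactly one constraint of S)/|S|."""
    best = float("inf")
    for r in range(1, len(cons) + 1):
        for S in combinations(cons, r):
            counts = Counter(v for vars_, _ in S for v in vars_)
            boundary = sum(1 for c in counts.values() if c == 1)
            best = min(best, boundary / r)
    return best

print(min_boundary_ratio(constraints))  # 1.0, i.e. gamma = 1 = k - 2 for k = 3
```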
Local Satisfiability

C_1 on (z_1, z_2, z_3), C_2 on (z_3, z_4, z_5), C_3 on (z_4, z_5, z_6). Take γ = 0.9.

Any three 3-XOR constraints with this structure are simultaneously satisfiable: integrate out a boundary variable of each constraint in turn, each time picking up a factor 1/2:

E_{z_1...z_6}[C_1(z_1,z_2,z_3) · C_2(z_3,z_4,z_5) · C_3(z_4,z_5,z_6)]
  = E_{z_2...z_6}[C_2(z_3,z_4,z_5) · C_3(z_4,z_5,z_6) · E_{z_1}[C_1(z_1,z_2,z_3)]]
  = E_{z_4,z_5,z_6}[C_3(z_4,z_5,z_6) · E_{z_3}[C_2(z_3,z_4,z_5)] · (1/2)]
  = 1/8 > 0

This works for γ > k − 2 and any αm constraints: we just need E[C(z_1, ..., z_k)] to be a nonzero constant once all but k − 2 of the variables are fixed.
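The 1/8 above can be confirmed by brute force over all 2⁶ assignments (the parities below are a hypothetical choice; any parities give the same count):

```python
from itertools import product

# C1: z1+z2+z3 = 0, C2: z3+z4+z5 = 1, C3: z4+z5+z6 = 0 (mod 2).
def C(*vars_and_parity):
    *vs, b = vars_and_parity
    return sum(vs) % 2 == b

count = sum(C(z1, z2, z3, 0) and C(z3, z4, z5, 1) and C(z4, z5, z6, 0)
            for z1, z2, z3, z4, z5, z6 in product([0, 1], repeat=6))
print(count / 64)  # 0.125: exactly a 1/8 fraction satisfies all three
```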
Sherali-Adams LP for CSPs

Variables: X_(S,α) for |S| ≤ t, partial assignments α ∈ {0, 1}^S.

maximize    Σ_{i=1}^m Σ_{α ∈ {0,1}^{T_i}} C_i(α) · X_(T_i,α)
subject to  X_(S∪{i}, α∘0) + X_(S∪{i}, α∘1) = X_(S,α)   ∀ i ∉ S
            X_(S,α) ≥ 0
            X_(∅,∅) = 1

Think X_(S,α) = P[vars in S assigned according to α].

Need distributions D(S) such that D(S_1), D(S_2) agree on S_1 ∩ S_2.

The distributions should locally "look like" they are supported on satisfying assignments.
Obtaining Integrality Gaps for CSPs

[Figure: the bipartite constraint graph, with a set S of variables highlighted.]

- Want to define a distribution D(S) for each set S of variables.
- Find a set of constraints C such that the instance minus C ∪ S remains expanding, and set D(S) = (the marginal on S of) the uniform distribution over assignments satisfying C.
- The remaining constraints are "independent" of this assignment.
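A toy version of this construction (hypothetical small instance): take the uniform distribution over assignments satisfying a set C of XOR constraints and marginalize onto S; expansion is what makes such marginals look unconstrained.

```python
from itertools import product

C = [((0, 1, 2), 0), ((2, 3, 4), 1)]   # 3-XOR constraints (vars, parity)
n, S = 5, (0, 3)

sat = [z for z in product([0, 1], repeat=n)
       if all(sum(z[v] for v in vs) % 2 == b for vs, b in C)]

marginal = {}
for z in sat:
    key = tuple(z[v] for v in S)
    marginal[key] = marginal.get(key, 0) + 1 / len(sat)
print(marginal)  # uniform over {0,1}^S: each pattern has probability 0.25
```

Here the marginal on S is exactly uniform, illustrating the "remaining constraints are independent" intuition.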
Vectors for Linear CSPs
A New Look at Lasserre

Start with a {−1, 1} quadratic integer program: (z_1, ..., z_n) → ((−1)^{z_1}, ..., (−1)^{z_n}).

Define big variables Z̃_S = ∏_{i∈S} (−1)^{z_i}.

Consider the psd matrix Ỹ:

Ỹ_{S1,S2} = E[Z̃_{S1} Z̃_{S2}] = E[∏_{i ∈ S1 Δ S2} (−1)^{z_i}]

Write a program for inner products of vectors W_S s.t. Ỹ_{S1,S2} = ⟨W_{S1}, W_{S2}⟩.
Gaps for 3-XOR

SDP for MAX 3-XOR:

maximize    Σ_{C_i ≡ (z_{i1}+z_{i2}+z_{i3} = b_i)}  (1 + (−1)^{b_i} ⟨W_{{i1,i2,i3}}, W_∅⟩) / 2
subject to  ⟨W_{S1}, W_{S2}⟩ = ⟨W_{S3}, W_{S4}⟩   ∀ S1 Δ S2 = S3 Δ S4
            |W_S| = 1   ∀ S, |S| ≤ r

[Schoenebeck08]: if width-2r resolution does not derive a contradiction, then the SDP value is 1 after r levels.

Expansion guarantees there are no width-2r contradictions.
Schoenebeck's Construction

z_1 + z_2 + z_3 = 1 (mod 2)  ⟹  (−1)^{z_1 + z_2} = −(−1)^{z_3}  ⟹  W_{{1,2}} = −W_{{3}}

- Equations of width ≤ 2r divide the sets of size ≤ r into equivalence classes. Choose orthogonal vectors e_C, one for each class C.
- No contradictions ensures each S ∈ C can be uniquely assigned ±e_C.
- Relies heavily on the constraints being linear equations.
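A brute-force sketch (a hypothetical helper, not Schoenebeck's actual algorithm) of what "no width-bounded contradiction" means: no XOR-combination of a few equations derives 0 = 1.

```python
from itertools import combinations

def has_small_contradiction(equations, max_subset):
    """equations: list of (frozenset of vars, parity) over GF(2)."""
    for r in range(1, max_subset + 1):
        for subset in combinations(equations, r):
            support, parity = frozenset(), 0
            for vars_, b in subset:
                support ^= vars_   # symmetric difference = XOR of supports
                parity ^= b
            if not support and parity == 1:
                return True        # derived the contradiction 0 = 1
    return False

odd_cycle = [(frozenset({1, 2}), 0), (frozenset({2, 3}), 0), (frozenset({1, 3}), 1)]
chain = [(frozenset({1, 2, 3}), 0), (frozenset({3, 4, 5}), 1)]
print(has_small_contradiction(odd_cycle, 3), has_small_contradiction(chain, 2))
# True False
```

The first (odd-cycle-like) system is contradictory; the expanding chain is not, which is the situation in which the vectors ±e_C can be assigned consistently.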
Reductions
Spreading the Hardness Around (Reductions) [T]

If problem A reduces to B, can we say Integrality Gap for A ⟹ Integrality Gap for B?

Reductions are (often) local algorithms: in a reduction from integer program A to integer program B, each variable z'_i of B is a boolean function of few (say 5) variables z_{i1}, ..., z_{i5} of A.

To show: if A has a good vector solution, so does B.
A Generic Transformation

z'_i = f(z_{i1}, ..., z_{i5})

U_{{z'_i}} = Σ_{S ⊆ {i1, ..., i5}} f̂(S) · W_S

(expand f in the Fourier basis over {−1, 1}⁵ and replace each character χ_S by the vector W_S).
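The Fourier expansion used above can be computed by brute force for any small boolean function; a sketch with a hypothetical choice of f (majority of 3):

```python
import numpy as np
from itertools import combinations, product

k = 3
def f(x):
    """Majority on {-1, 1}^3."""
    return 1 if sum(x) > 0 else -1

def fhat(S):
    """Fourier coefficient fhat(S) = E_x[f(x) * prod_{i in S} x_i]."""
    pts = list(product([-1, 1], repeat=k))
    return sum(f(x) * np.prod([x[i] for i in S]) for x in pts) / len(pts)

subsets = [S for r in range(k + 1) for S in combinations(range(k), r)]
coeffs = {S: fhat(S) for S in subsets}

# Reconstruction check: the expansion reproduces f exactly at every point.
for x in product([-1, 1], repeat=k):
    val = sum(c * np.prod([x[i] for i in S]) for S, c in coeffs.items())
    assert abs(val - f(x)) < 1e-9
```

For majority of 3 the expansion is f = (x₁ + x₂ + x₃)/2 − x₁x₂x₃/2; substituting W_S for each character gives the vector U for z'_i.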
What Can Be Proved

Problem                     | NP-hard                  | UG-hard          | Gap                          | Levels
MAX k-CSP                   | 2^k / 2^{2√k}            | 2^k / (k + o(k)) | 2^k / 2k                     | Ω(n)
Independent Set             | n / 2^{(log n)^{3/4+ε}}  | -                | n / 2^{c√(log n log log n)}  | 2^{c₁√(log n log log n)}
Approximate Graph Coloring  | ℓ vs. 2^{(log² ℓ)/25}    | -                | ℓ vs. 2^{ℓ/2}                | 4ℓ²
Chromatic Number            | n / 2^{(log n)^{3/4+ε}}  | -                | n / 2^{c√(log n log log n)}  | 2^{c₁√(log n log log n)}
Vertex Cover                | 1.36                     | 2 − ε            | 1.36                         | Ω(n^δ)
The FGLSS Construction

Reduces MAX k-CSP to Independent Set in a graph G_Φ.

z_1 + z_2 + z_3 = 1: vertices 100, 010, 001, 111
z_3 + z_4 + z_5 = 1: vertices 100, 010, 001, 111 (assignments to z_3, z_4, z_5)

One vertex per (constraint, satisfying assignment); edges join inconsistent vertices.

Need vectors for subsets of vertices in G_Φ. Every vertex (or set of vertices) in G_Φ is an indicator function!

U_{{(z_1,z_2,z_3)=(0,0,1)}} = (1/8)(W_∅ + W_{{1}} + W_{{2}} − W_{{3}} + W_{{1,2}} − W_{{2,3}} − W_{{1,3}} − W_{{1,2,3}})
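A small executable sketch of FGLSS on the two constraints above (a hypothetical helper `fglss`, for illustration): the maximum independent set in G_Φ equals the maximum number of simultaneously satisfiable constraints.

```python
from itertools import product

def fglss(constraints):
    """constraints: list of (vars tuple, parity) 3-XOR constraints."""
    V = [(ci, dict(zip(vs, a)))
         for ci, (vs, b) in enumerate(constraints)
         for a in product([0, 1], repeat=len(vs)) if sum(a) % 2 == b]
    E = {(i, j) for i in range(len(V)) for j in range(i + 1, len(V))
         if V[i][0] == V[j][0]                          # same constraint
         or any(V[i][1][v] != V[j][1][v]                # conflicting assignment
                for v in V[i][1].keys() & V[j][1].keys())}
    return V, E

V, E = fglss([((1, 2, 3), 1), ((3, 4, 5), 1)])
# Brute-force maximum independent set over all 2^|V| vertex subsets.
best = max(len(S) for mask in range(1 << len(V))
           for S in [[i for i in range(len(V)) if mask >> i & 1]]
           if all((a, b) not in E for a in S for b in S if a < b))
print(len(V), best)  # 8 vertices; both constraints satisfiable, so best = 2
```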
Graph Products

[Figure: the product graph on tuples (v_1, v_2, v_3), (w_1, w_2, w_3).]

U_{{(v_1,v_2,v_3)}} = U_{{v_1}} ⊗ U_{{v_2}} ⊗ U_{{v_3}}

- Similar transformation for sets (project to each copy of G).
- Intuition: an independent set in the product graph is a product of independent sets in G.
- Together these give a gap of n / 2^{O(√(log n · log log n))}.
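The key property of the tensor product used above is that inner products multiply, so orthogonality (non-edges/edges) is preserved in the product graph; a quick numerical check:

```python
import numpy as np

rng = np.random.default_rng(0)
u1, u2 = rng.standard_normal(3), rng.standard_normal(3)
v1, v2 = rng.standard_normal(3), rng.standard_normal(3)

# <u1 (x) u2, v1 (x) v2> = <u1, v1> * <u2, v2>
lhs = np.dot(np.kron(u1, u2), np.kron(v1, v2))
rhs = np.dot(u1, v1) * np.dot(u2, v2)
assert abs(lhs - rhs) < 1e-9
```

In particular, if ⟨U_{v_i}, U_{w_i}⟩ = 0 for some coordinate i, the product vectors are orthogonal, matching the product graph's edge structure.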
A few problems
Problem 1: Size vs. Rank

- All previous bounds are on the number of levels (rank).
- What if a program uses only poly(n) constraints (size), but takes them from up to level n?
- If proved, is this kind of hardness closed under local reductions?
Problem 2: Generalize Schoenebeck's Technique

- The technique seems specialized to linear equations: it breaks down even if there are only a few local contradictions (which does not rule out a gap).
- We have distributions, but not vectors, for other types of CSPs.
- What extra constraints do the vectors capture?
Thank You Questions?