Local Constraints in Combinatorial Optimization


Local Constraints in Combinatorial Optimization. Madhur Tulsiani, Institute for Advanced Study.

Local Constraints in Approximation Algorithms. Approximation algorithms based on Linear Programming (LP) or Semidefinite Programming (SDP) impose constraints on a few variables at a time. When can local constraints help in approximating a global property (e.g. Vertex Cover, Chromatic Number)? How does one reason about increasingly larger local constraints? Does the approximation get better as the constraints get larger?

LP/SDP Hierarchies. Various hierarchies give increasingly powerful programs at different levels (rounds): Lovász-Schrijver (LS, and LS+ with semidefiniteness), Sherali-Adams (SA (1), SA (2), ...), and Lasserre (Las (1), Las (2), ...). One can optimize over the r-th level in time n^O(r); the n-th level is tight (it gives exactly the integer hull).

LP/SDP Hierarchies. Hierarchies form a powerful computational model, capturing most known LP/SDP algorithms within a constant number of levels; lower bounds against them rule out a large and natural class of algorithms. Performance is measured by the integrality gap at various levels:

Integrality Gap = (Optimum of Relaxation) / (Integer Optimum)    (for maximization)

Why bother? NP-hardness and UG-hardness results are conditional, but rule out all polynomial-time algorithms; LP/SDP hierarchy lower bounds are unconditional, but rule out only a restricted class of algorithms.

What Hierarchies want. Example: Maximum Independent Set for a graph G = (V, E):

maximize    Σ_u x_u
subject to  x_u + x_v ≤ 1    ∀(u, v) ∈ E
            x_u ∈ [0, 1]

Hope: (x_1, ..., x_n) is the marginal of a distribution over 0/1 solutions. [Figure: a fractional point written as a convex combination of three 0/1 solutions with weight 1/3 each.] Hierarchies add variables for conditional/joint probabilities.
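For concreteness, a minimal sketch (assuming scipy is installed) of this relaxation on the 5-cycle, whose fractional optimum 5/2 exceeds the integer optimum 2:

```python
# Basic Independent Set LP relaxation on the 5-cycle C_5: the fractional
# optimum is 5/2 (all x_u = 1/2), while the largest independent set has
# size 2, giving an integrality gap of 5/4 already at the first level.
from scipy.optimize import linprog

n = 5
edges = [(i, (i + 1) % n) for i in range(n)]

# linprog minimizes, so minimize -sum(x) subject to x_u + x_v <= 1 on edges.
c = [-1.0] * n
A_ub = []
for (u, v) in edges:
    row = [0.0] * n
    row[u] = row[v] = 1.0
    A_ub.append(row)
b_ub = [1.0] * len(edges)

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 1)] * n)
print("LP optimum:", -res.fun)   # 2.5
print("Integer optimum: 2")      # any two non-adjacent vertices
```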

Lovász-Schrijver in action. The r-th level optimizes over distributions conditioned on r variables. [Figure: a fractional independent-set solution with values like 2/3 and 1/2 on the vertices; conditioning on one vertex being 1 sets its neighbors to 0, and the remaining values (the question marks) must again come from a feasible point.]

The Lovász-Schrijver Hierarchy. Start with a 0/1 integer program and a relaxation P. Define a tighter relaxation LS(P).

Hope: fractional (x_1, ..., x_n) = E[(z_1, ..., z_n)] for integral (z_1, ..., z_n).

Restriction: x = (x_1, ..., x_n) ∈ LS(P) if there exists Y satisfying (think Y_ij = E[z_i z_j] = P[z_i = z_j = 1]):

Y = Y^T
Y_ii = x_i                                  ∀i
Y_i / x_i ∈ P,   (x − Y_i)/(1 − x_i) ∈ P    ∀i

(Here Y_i is the i-th column of Y; the two conditions say that x conditioned on z_i = 1, respectively z_i = 0, stays in P.) The above is an LP (an SDP for LS+, which additionally requires Y to be psd) in n² + n variables.
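A worked example (standard, not from the slides) of how one LS round already cuts off the all-1/2 point on the triangle K_3:

```latex
% One LS round on the triangle K_3 (all pairs adjacent), starting from
% the fractional point x = (1/2, 1/2, 1/2). Suppose Y certifies x in LS(P).
% The conditioned point Y_i / x_i has i-th coordinate Y_ii / x_i = 1.
\begin{align*}
\text{Conditioning on } z_i = 1\colon \quad
  & \frac{Y_i}{x_i} \in P
    \;\Longrightarrow\;
    1 + \frac{Y_{ij}}{x_i} \le 1
    \;\Longrightarrow\;
    Y_{ij} \le 0 \qquad (j \ne i), \\[4pt]
\text{Conditioning on } z_i = 0\colon \quad
  & \frac{x - Y_i}{1 - x_i} \in P
    \;\Longrightarrow\;
    \frac{(x_j - Y_{ij}) + (x_k - Y_{ik})}{1 - x_i} \le 1
    \;\Longrightarrow\;
    Y_{ij} + Y_{ik} \ge \tfrac12 .
\end{align*}
% The two conclusions are contradictory, so no such Y exists:
% (1/2, 1/2, 1/2) is cut off after a single LS round, and the LS value
% of the Independent Set relaxation on K_3 drops below 3/2.
```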

Action replay. [The same figure as before: the conditioned values are exactly the column Y_v / x_v of the Lovász-Schrijver matrix, which must again lie in P.]

The Sherali-Adams Hierarchy. Start with a 0/1 integer linear program. Add "big" variables X_S for |S| ≤ r (think X_S = E[Π_{i∈S} z_i] = P[all vars in S are 1]).

Constraints: multiply each original constraint Σ_i a_i z_i ≤ b by small products of variables and their negations, e.g. z_5 z_7 (1 − z_9):

E[(Σ_i a_i z_i) · z_5 z_7 (1 − z_9)]  ≤  E[b · z_5 z_7 (1 − z_9)]

which expands to the linear constraint

Σ_i a_i (X_{i,5,7} − X_{i,5,7,9})  ≤  b (X_{5,7} − X_{5,7,9}).

This is an LP in n^O(r) variables.

Sherali-Adams ⟹ Locally Consistent Distributions. Using z_1 ≤ 1 and z_2 ≤ 1 (multiplied by the appropriate products):

0 ≤ X_{1,2} ≤ 1
0 ≤ X_{1} − X_{1,2} ≤ 1
0 ≤ X_{2} − X_{1,2} ≤ 1
0 ≤ 1 − X_{1} − X_{2} + X_{1,2} ≤ 1

So X_{1}, X_{2}, X_{1,2} define a distribution D({1, 2}) over {0,1}²: the four quantities above are the probabilities of the assignments 11, 10, 01 and 00. Moreover, D({1, 2, 3}) and D({1, 2, 4}) must agree with D({1, 2}). Thus SA^(r) yields locally consistent distributions on sets of up to r variables; conversely, if each constraint has at most k variables, locally consistent distributions on sets of size r + k yield a valid SA^(r) solution.

The Lasserre Hierarchy. Start with a 0/1 integer quadratic program. Think "big" variables Z_S = Π_{i∈S} z_i, with an associated psd moment matrix Y:

Y_{S_1,S_2} = E[Z_{S_1} Z_{S_2}] = E[Π_{i ∈ S_1∪S_2} z_i] = P[all vars in S_1 ∪ S_2 are 1]

Program: (Y ⪰ 0) + original constraints + consistency constraints.

The Lasserre hierarchy (constraints). Y is psd, i.e. one can find vectors U_S satisfying Y_{S_1,S_2} = ⟨U_{S_1}, U_{S_2}⟩. Y_{S_1,S_2} depends only on S_1 ∪ S_2 (since Y_{S_1,S_2} = P[all vars in S_1 ∪ S_2 are 1]). The original quadratic constraints become inner products. SDP for Independent Set:

maximize    Σ_{i∈V} ‖U_{i}‖²
subject to  ⟨U_{i}, U_{j}⟩ = 0                        ∀(i, j) ∈ E
            ⟨U_{S_1}, U_{S_2}⟩ = ⟨U_{S_3}, U_{S_4}⟩   ∀ S_1 ∪ S_2 = S_3 ∪ S_4
            ⟨U_{S_1}, U_{S_2}⟩ ∈ [0, 1]               ∀ S_1, S_2
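For concreteness, a minimal sketch (assuming the cvxpy package) of the lowest level of this SDP, with the moment matrix indexed by the empty set and singletons; on the 5-cycle its optimum is the Lovász-theta-like value √5 ≈ 2.236, while the true independence number is 2:

```python
# Level-1 moment relaxation for Independent Set on C_5. The matrix M is
# indexed by {emptyset, {0}, ..., {4}}, with M_{S1,S2} = P[all of S1 u S2 = 1].
import cvxpy as cp

n = 5
edges = [(i, (i + 1) % n) for i in range(n)]

M = cp.Variable((n + 1, n + 1), PSD=True)    # row/col 0 = emptyset
constraints = [M[0, 0] == 1]                 # P[true] = 1
constraints += [M[i + 1, i + 1] == M[0, i + 1] for i in range(n)]  # Y_ii = x_i
constraints += [M[u + 1, v + 1] == 0 for (u, v) in edges]  # edge never both 1

prob = cp.Problem(cp.Maximize(cp.sum(M[0, 1:])), constraints)
prob.solve()
print("Level-1 value:", prob.value)   # ~2.236 = sqrt(5); integer optimum is 2
```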

And if you just woke up... [Recap figure: the hierarchies side by side, Lovász-Schrijver with the matrix conditions Y_i/x_i ∈ P and (x − Y_i)/(1 − x_i) ∈ P, Sherali-Adams with the variables X_S, and Lasserre with the vectors U_S; Lasserre is the strongest.]

Local Distributions

Ω(log n) level LS gap for Vertex Cover [ABLT '06]. Random sparse graphs have girth Ω(log n) and minimum vertex cover of size at least (1 − ɛ)n, yet the point (1/2 + ɛ, ..., 1/2 + ɛ) survives Ω(log n) levels of LS. [Figure: conditioning on a vertex v = 1 in such a graph.]

Creating conditional distributions. On a tree, the all-1/2 point is an exact average of 0/1 solutions (1/2 = (1/2)·1 + (1/2)·0), so conditional distributions are easy to define. On a graph that is only locally a tree, write 1/2 + ɛ = (1/2 − ɛ)·0 + (1/2 + ɛ)·1 and condition accordingly; a neighbor is set to 1 with probability 4ɛ/(1 + 2ɛ) and left untouched otherwise. [Figure: conditioning at v; the perturbation (values like 1 − 4ɛ, 1 − 8ɛ, 8ɛ, 4ɛ) decays along the tree and dies out within distance O((1/ɛ) log(1/ɛ)).]

Cheat while you can. [Figure: conditioning is "easy" on the tree-like parts of the graph and "doable" elsewhere.] Valid distributions exist for O(log n) levels. This can be extended to Ω(n) levels (but needs other ideas) [STT '07]. Similar ideas are also useful in constructing metrics which are locally ℓ_1 (but not globally) [CMM '07].

Local Satisfiability for Expanding CSPs

CSP Expansion MAX k-csp: m constraints on k-tuples of (n) boolean variables. Satisfy maximum. e.g. MAX 3-XOR (linear equations mod 2) z 1 + z 2 + z 3 = z 3 + z 4 + z 5 = 1

CSP Expansion MAX k-csp: m constraints on k-tuples of (n) boolean variables. Satisfy maximum. e.g. MAX 3-XOR (linear equations mod 2) z 1 + z 2 + z 3 = z 3 + z 4 + z 5 = 1 Expansion: Every set S of constraints involves at least β S variables (for S < αm). (Used extensively in proof complexity e.g. [BW1], [BGHMP3])

CSP Expansion MAX k-csp: m constraints on k-tuples of (n) boolean variables. Satisfy maximum. e.g. MAX 3-XOR (linear equations mod 2) z 1 + z 2 + z 3 = z 3 + z 4 + z 5 = 1 Expansion: Every set S of constraints involves at least β S variables (for S < αm). (Used extensively in proof complexity e.g. [BW1], [BGHMP3]) C 1 z 1.. C m z n

CSP Expansion MAX k-csp: m constraints on k-tuples of (n) boolean variables. Satisfy maximum. e.g. MAX 3-XOR (linear equations mod 2) z 1 + z 2 + z 3 = z 3 + z 4 + z 5 = 1 Expansion: Every set S of constraints involves at least β S variables (for S < αm). (Used extensively in proof complexity e.g. [BW1], [BGHMP3]) C 1 z 1.. C m z n

CSP Expansion MAX k-csp: m constraints on k-tuples of (n) boolean variables. Satisfy maximum. e.g. MAX 3-XOR (linear equations mod 2) z 1 + z 2 + z 3 = z 3 + z 4 + z 5 = 1 Expansion: Every set S of constraints involves at least β S variables (for S < αm). (Used extensively in proof complexity e.g. [BW1], [BGHMP3]) C 1 z 1.. C m z n In fact, γ S variables appearing in only one constraint in S.

Local Satisfiability. [Figure: constraints C_1, C_2, C_3 on variables z_1, ..., z_6, each containing a variable of its own.] Take γ = 0.9. One can show that any three 3-XOR constraints are simultaneously satisfiable: averaging out a private variable of each constraint in turn,

E_{z_1...z_6}[C_1(z_1,z_2,z_3) · C_2(z_3,z_4,z_5) · C_3(z_4,z_5,z_6)]
= E_{z_2...z_6}[C_2(z_3,z_4,z_5) · C_3(z_4,z_5,z_6) · E_{z_1}[C_1(z_1,z_2,z_3)]]
= E_{z_4,z_5,z_6}[C_3(z_4,z_5,z_6) · E_{z_3}[C_2(z_3,z_4,z_5)] · (1/2)]
= 1/8 > 0.

The same works for γ > k − 2 and any αm constraints: one just requires E[C(z_1, ..., z_k)] to remain a nonzero constant after fixing any k − 2 variables.

Sherali-Adams LP for CSPs. Variables: X_(S,α) for |S| ≤ t and partial assignments α ∈ {0,1}^S.

maximize    Σ_{i=1}^m  Σ_{α ∈ {0,1}^{T_i}}  C_i(α) · X_(T_i,α)
subject to  X_(S∪{i}, α∘0) + X_(S∪{i}, α∘1) = X_(S,α)    ∀ i ∉ S
            X_(S,α) ≥ 0
            X_(∅,∅) = 1

Think X_(S,α) = P[vars in S assigned according to α]. One needs distributions D(S) such that D(S_1) and D(S_2) agree on S_1 ∩ S_2, and the distributions should locally "look like" they are supported on satisfying assignments.
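A minimal sketch (assuming scipy is installed) of this LP on a toy 2-XOR instance, the odd cycle z_1+z_2 = 1, z_2+z_3 = 1, z_1+z_3 = 1, where at most 2 of the 3 constraints are satisfiable: at level 2 the LP value is 1 (everything looks locally satisfiable), while at level 3 the local distributions are forced to be globally consistent and the value drops to 2/3:

```python
# Sherali-Adams LP for a 2-XOR instance, in the X_(S,alpha) formulation above.
from itertools import combinations, product
from scipy.optimize import linprog

n = 3
cons = [((0, 1), 1), ((1, 2), 1), ((0, 2), 1)]  # (variables, target parity)

def solve_sa(t):
    idx = {}                                    # (S, alpha) -> LP column
    for k in range(t + 1):
        for S in combinations(range(n), k):
            for a in product((0, 1), repeat=k):
                idx[(S, a)] = len(idx)
    A_eq, b_eq = [], []
    row = [0.0] * len(idx); row[idx[((), ())]] = 1.0
    A_eq.append(row); b_eq.append(1.0)          # X_(empty, empty) = 1
    for (S, a) in list(idx):                    # marginalization constraints
        for i in range(n):
            if i in S or len(S) + 1 > t:
                continue
            T = tuple(sorted(S + (i,)))
            row = [0.0] * len(idx); row[idx[(S, a)]] = -1.0
            for bit in (0, 1):                  # extend alpha with z_i = bit
                b = list(a); b.insert(T.index(i), bit)
                row[idx[(T, tuple(b))]] += 1.0
            A_eq.append(row); b_eq.append(0.0)
    c = [0.0] * len(idx)                        # maximize satisfied fraction
    for (T, parity) in cons:
        for a in product((0, 1), repeat=2):
            if (a[0] + a[1]) % 2 == parity:
                c[idx[(T, a)]] -= 1.0 / len(cons)
    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, 1)] * len(idx))
    return -res.fun

print("SA level 2:", solve_sa(2))   # 1.0: locally everything looks satisfiable
print("SA level 3:", solve_sa(3))   # ~0.667: a global distribution is forced
```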

Obtaining integrality gaps for CSPs. [Figure: the bipartite constraint-variable graph C_1, ..., C_m vs. z_1, ..., z_n.] Want to define a distribution D(S) for each set S of variables. Find a set of constraints C such that the instance minus C and S remains expanding; let D(S) be (the restriction to S of) the uniform distribution over assignments satisfying C. The remaining constraints are then "independent" of this assignment.

Vectors for Linear CSPs

A new look at Lasserre. Start with a {−1, 1} quadratic integer program: map (z_1, ..., z_n) to ((−1)^{z_1}, ..., (−1)^{z_n}). Define big variables Z̃_S = Π_{i∈S} (−1)^{z_i}. Consider the psd matrix Ỹ:

Ỹ_{S_1,S_2} = E[Z̃_{S_1} Z̃_{S_2}] = E[Π_{i ∈ S_1 Δ S_2} (−1)^{z_i}]

(the product runs over the symmetric difference, since (±1)² = 1). Write the program for inner products of vectors W_S such that Ỹ_{S_1,S_2} = ⟨W_{S_1}, W_{S_2}⟩.

Gaps for 3-XOR. SDP for MAX 3-XOR:

maximize    Σ_{C_i ≡ (z_{i_1}+z_{i_2}+z_{i_3} = b_i)}  (1 + (−1)^{b_i} ⟨W_{i_1,i_2,i_3}, W_∅⟩) / 2
subject to  ⟨W_{S_1}, W_{S_2}⟩ = ⟨W_{S_3}, W_{S_4}⟩    ∀ S_1 Δ S_2 = S_3 Δ S_4
            ‖W_S‖ = 1                                  ∀ S, |S| ≤ r

[Schoenebeck '08]: if width-2r resolution does not derive a contradiction, then the SDP value is 1 after r levels. Expansion guarantees there are no width-2r contradictions.
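The hypothesis in Schoenebeck's theorem can be checked mechanically; here is a small sketch of width-bounded XOR resolution (repeatedly adding pairs of equations mod 2, keeping only derived equations of width ≤ w) on a hypothetical 3-equation instance:

```python
# Width-w XOR resolution: close a set of GF(2) equations under pairwise
# addition, discarding derived equations of width > w, and check whether
# the empty equation "0 = 1" (a contradiction) is ever derived.
def width_w_resolution(equations, w):
    derived = set(equations)          # each equation: (frozenset_of_vars, rhs)
    while True:
        new = set()
        for (s1, b1) in derived:
            for (s2, b2) in derived:
                s, b = s1 ^ s2, b1 ^ b2          # add the two equations mod 2
                if len(s) <= w and (s, b) not in derived:
                    new.add((s, b))
        if not new:
            return False                          # no contradiction derivable
        derived |= new
        if (frozenset(), 1) in derived:
            return True                           # derived 0 = 1

eqs = {(frozenset({1, 2, 3}), 0), (frozenset({3, 4, 5}), 1),
       (frozenset({1, 2, 4, 5}), 0)}              # summing all three: 0 = 1
print(width_w_resolution(eqs, 2))                 # False: width 2 blocks it
print(width_w_resolution(eqs, 3))                 # True: width 3 suffices
```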

Schoenebeck's construction:

z_1 + z_2 + z_3 = 1 (mod 2)   ⟹   (−1)^{z_1 + z_2} = −(−1)^{z_3}   ⟹   W_{1,2} = −W_{3}

Equations of width ≤ 2r divide the sets S with |S| ≤ r into equivalence classes. Choose orthogonal vectors e_C, one for each class C. The absence of contradictions ensures each S ∈ C can be uniquely assigned ±e_C. [Figure: sets in the same class get the same vector up to sign, e.g. W_{S_2} = e_{C_2}, W_{S_3} = ±e_{C_2}, W_{S_1} = e_{C_1}.] The construction relies heavily on the constraints being linear equations.

Reductions

Spreading the hardness around (Reductions) [T]. If problem A reduces to B, can we say that an integrality gap for A implies an integrality gap for B? Reductions are (often) local algorithms: in a reduction from integer program A to integer program B, each variable z'_i of B is a boolean function of a few (say 5) variables z_{i_1}, ..., z_{i_5} of A. To show: if A has a good vector solution, so does B.

A generic transformation. For z'_i = f(z_{i_1}, ..., z_{i_5}), take

U_{z'_i} = Σ_{S ⊆ {i_1,...,i_5}} f̂(S) · W_S

i.e. expand f in the Fourier basis and replace each character by the corresponding vector W_S.
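As a sanity check on the Fourier expansion used above, a minimal sketch (with an illustrative choice of f, majority on 5 bits):

```python
# Fourier expansion over the {-1,1}^n domain: f(x) = sum_S fhat(S) chi_S(x),
# where chi_S(x) = prod_{i in S} x_i and fhat(S) = E_x[f(x) chi_S(x)].
from itertools import combinations, product
from math import prod

def fourier_coefficients(f, n):
    points = list(product((-1, 1), repeat=n))
    return {S: sum(f(x) * prod(x[i] for i in S) for x in points) / len(points)
            for k in range(n + 1) for S in combinations(range(n), k)}

maj = lambda x: 1 if sum(x) > 0 else -1     # majority of 5 bits, +/-1 valued
fhat = fourier_coefficients(maj, 5)

# Verify that the expansion reproduces f on every input.
ok = all(abs(maj(x) - sum(c * prod(x[i] for i in S) for S, c in fhat.items()))
         < 1e-9 for x in product((-1, 1), repeat=5))
print("expansion exact:", ok)        # True
print("weight on {0}:", fhat[(0,)])  # 0.375 on each singleton for Maj_5
```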

What can be proved:

Problem                      | NP-hard                   | UG-hard          | Gap                          | Levels
MAX k-CSP                    | 2^k / 2^{2√k}             | 2^k / (k + o(k)) | 2^k / 2k                     | Ω(n)
Independent Set              | n / 2^{(log n)^{3/4+ɛ}}   |                  | n / 2^{c√(log n log log n)}  | 2^{c_1 √(log n log log n)}
Approximate Graph Coloring   | ℓ vs. 2^{(1/25) log² ℓ}   |                  | ℓ vs. 2^{ℓ/2} / 4ℓ²          | Ω(n)
Chromatic Number             | n / 2^{(log n)^{3/4+ɛ}}   |                  | n / 2^{c√(log n log log n)}  | 2^{c_1 √(log n log log n)}
Vertex Cover                 | 1.36                      | 2 − ɛ            | 1.36                         | Ω(n^δ)

The FGLSS Construction. Reduces MAX k-CSP to Independent Set in a graph G_Φ: one vertex per (constraint, satisfying assignment) pair, with edges between inconsistent pairs. For

z_1 + z_2 + z_3 = 1
z_3 + z_4 + z_5 = 1

each constraint contributes vertices labeled by its four odd-parity assignments 001, 010, 100, 111. We need vectors for subsets of vertices of G_Φ. Every vertex (or set of vertices) of G_Φ is an indicator function! E.g.

U_{(z_1,z_2,z_3)=(0,0,1)} = (1/8)(W_∅ + W_{1} + W_{2} − W_{3} + W_{1,2} − W_{2,3} − W_{1,3} − W_{1,2,3})
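A minimal sketch of the construction on the two-constraint instance above (names are illustrative):

```python
# FGLSS graph: vertices are (constraint, satisfying assignment) pairs; edges
# join pairs that assign a shared variable differently, and pairs from the
# same constraint (which always clash). The independence number equals the
# maximum number of simultaneously satisfiable constraints.
from itertools import combinations, product

cons = [((1, 2, 3), 1), ((3, 4, 5), 1)]   # z1+z2+z3 = 1, z3+z4+z5 = 1

vertices = []
for ci, (vars_, parity) in enumerate(cons):
    for a in product((0, 1), repeat=3):
        if sum(a) % 2 == parity:
            vertices.append((ci, dict(zip(vars_, a))))

def conflict(u, v):
    same = u[0] == v[0]
    clash = any(u[1][x] != v[1][x] for x in u[1] if x in v[1])
    return same or clash

edges = [(i, j) for i, j in combinations(range(len(vertices)), 2)
         if conflict(vertices[i], vertices[j])]

# Brute-force the independence number (only 8 vertices here).
best = max((s for s in product((0, 1), repeat=len(vertices))
            if not any(s[i] and s[j] for i, j in edges)),
           key=sum)
print("alpha(G_Phi) =", sum(best))   # 2: both constraints satisfiable together
```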

Graph Products. [Figure: vertices (v_1, v_2, v_3) of the product graph, one coordinate per copy of G.]

U_{(v_1,v_2,v_3)} = U_{v_1} ⊗ U_{v_2} ⊗ U_{v_3}

A similar transformation works for sets (project to each copy of G). Intuition: an independent set in the product graph is a product of independent sets in G. Together these give a gap of n / 2^{O(√(log n log log n))}.
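A one-line sanity check (using numpy) of the key property behind this: inner products of tensor products factor coordinate-wise, so constraints verified copy-by-copy lift to the product solution.

```python
# <u1 (x) u2, w1 (x) w2> = <u1, w1> * <u2, w2>
import numpy as np

rng = np.random.default_rng(0)
u1, u2, w1, w2 = (rng.standard_normal(3) for _ in range(4))

lhs = np.dot(np.kron(u1, u2), np.kron(w1, w2))
rhs = np.dot(u1, w1) * np.dot(u2, w2)
print(np.isclose(lhs, rhs))   # True
```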

A few problems

Problem 1: Size vs. Rank. All previous bounds are on the number of levels (rank). What if there is a program that uses only poly(n) constraints (size), but takes them from up to level n? If such bounds are proved, is this kind of hardness closed under local reductions?

Problem 2: Generalize Schoenebeck's technique. The technique seems specialized to linear equations; it breaks down even when there are only a few local contradictions (which by themselves do not rule out a gap). We have distributions, but not vectors, for other types of CSPs. What extra constraints do the vectors capture?

Thank you! Questions?