Lecture 16: Gaps for Max-Cut

Advanced Approximation Algorithms (CMU 15-854B, Spring 2008)

Lecture 16: Gaps for Max-Cut (Mar 6, 2008)
Lecturer: Ryan O'Donnell. Scribe: Ravishankar Krishnaswamy

1 Outline

In this lecture, we will discuss algorithmic (Goemans-Williamson) and integrality gaps for the Max-Cut problem. We begin by recalling the semidefinite programming formulation for Max-Cut:

\[ \max \sum_{(u,v) \in E} w_{uv}\Big(\frac12 - \frac12\, u \cdot v\Big) \quad \text{subject to} \quad v \cdot v = 1 \;\; \forall v \in V. \]

Let us denote the optimal value of the SDP by $\mathrm{Sdp}(G)$. Also, let $\mathrm{Opt}(G)$ be the value of the maximum cut in $G$, and $\mathrm{Alg}_{GW}(G)$ be the expected value of the cut which the Goemans-Williamson algorithm returns. For any given graph $G$, we know the following to be true:

\[ \mathrm{Sdp}(G) \ge \mathrm{Opt}(G) \ge \mathrm{Alg}_{GW}(G). \]

Also, we know the following as well.

Theorem 1.1. [2] For all graphs $G$, $\mathrm{Alg}_{GW}(G) \ge \alpha_{GW} \cdot \mathrm{Sdp}(G)$, where $\alpha_{GW}$ is a constant $\approx 0.878$.

But this only gives us a ratio comparing $\mathrm{Sdp}(G)$ and $\mathrm{Alg}_{GW}(G)$, and is thus a worry in the following sense: if the graph is such that $\mathrm{Opt}(G)$ is barely above $1/2$ (and $\mathrm{Sdp}(G)$ were also nearly $1/2$), the guarantee only promises a cut of size about $0.878 \times 1/2 \approx 0.44$, so the algorithm could perform worse than a uniformly random cut, which cuts half the edges in expectation. This leads us to the following questions:

1. How much larger can $\mathrm{Sdp}(G)$ be when compared to $\mathrm{Opt}(G)$?
2. Is the analysis of Goemans and Williamson actually tight? That is, is there an instance where $\mathrm{Alg}_{GW}(G) = \alpha_{GW} \cdot \mathrm{Opt}(G)$?

Let's first recap their algorithm: after solving the SDP, they generate a uniformly random hyperplane through the origin, and partition the vertices based on which side of the hyperplane their associated vectors lie. Therefore, $\Pr[(u,v) \text{ is cut}] = \arccos(\rho_{uv})/\pi$, whereas the contribution of the edge $(u,v)$ to $\mathrm{Sdp}(G)$ is $\frac12 - \frac12\rho_{uv}$. Here, $\rho_{uv}$ is the dot product $u \cdot v$.
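To make the rounding step concrete, here is a minimal sketch of hyperplane rounding in Python (our own illustration, not from the lecture; it assumes the unit vectors have already been produced by some SDP solver, and the toy triangle instance at the bottom is a made-up example):

```python
import numpy as np

def gw_round(vectors, edges, rng):
    """Round an SDP embedding (unit vectors) to a cut via a random hyperplane.

    vectors: dict mapping vertex -> unit vector in R^d (assumed SDP-feasible)
    edges:   list of (u, v, weight) triples
    Returns the side assignment and the weight of the resulting cut.
    """
    d = len(next(iter(vectors.values())))
    g = rng.normal(size=d)  # normal vector of a uniformly random hyperplane
    side = {u: np.sign(g @ x) for u, x in vectors.items()}
    cut = sum(w for u, v, w in edges if side[u] != side[v])
    return side, cut

# Toy check: a triangle embedded at mutual angle 120 degrees (rho_uv = -1/2),
# so each edge is cut with probability arccos(-1/2)/pi = 2/3.
vecs = {i: np.array([np.cos(2 * np.pi * i / 3), np.sin(2 * np.pi * i / 3)])
        for i in range(3)}
edges = [(0, 1, 1.0), (1, 2, 1.0), (0, 2, 1.0)]
cuts = [gw_round(vecs, edges, np.random.default_rng(s))[1] for s in range(2000)]
print(np.mean(cuts))  # close to 3 * (2/3) = 2.0, matching arccos(rho)/pi per edge
```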

A comparison of the contribution of an edge $(u,v)$ (such that an embedding has $u \cdot v = \rho$) towards the SDP with its expected contribution to the Goemans-Williamson cut is shown in Figure 1. Hence, if a non-negligible fraction of the edges $(u,v)$ have their endpoints' dot products $\rho_{uv}$ far from $\rho^* \approx -0.69$ (see Figure 1), we notice that it is possible to obtain better guarantees on the approximation factor.

[Figure 1: The SDP vs GW algorithm values comparison curve: $\arccos(\rho)/\pi$ plotted against $\frac12 - \frac12\rho$ as functions of the dot product $\rho = u \cdot v$; the worst-case ratio, $\alpha_{GW} \approx 0.74/0.845 \approx 0.878$, is attained at $\rho^* \approx -0.69$.]

2 Integrality Gap

Getting back to the questions we had in mind, the first one was resolved by Feige and Schechtman [1], who established a tight integrality gap instance for this SDP formulation.

Theorem 2.1. [Feige and Schechtman [1]] For every constant $\epsilon > 0$, there exists a graph $G$ such that $\mathrm{Opt}(G) \le (\alpha_{GW} + \epsilon)\,\mathrm{Sdp}(G)$.

Note 2.2. On such graphs $G$, the Goemans-Williamson algorithm actually finds the optimal cut (up to a $1 + O(\epsilon)$ factor, since $\mathrm{Alg}_{GW}(G) \ge \alpha_{GW}\,\mathrm{Sdp}(G) \ge \frac{\alpha_{GW}}{\alpha_{GW} + \epsilon}\,\mathrm{Opt}(G)$)!
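The construction below is driven by the two constants visible in Figure 1. As a quick numerical sanity check (our own illustration, not from the notes), a grid search over the ratio of the two curves recovers both the minimizer $\rho^*$ and the constant $\alpha_{GW}$:

```python
import numpy as np

# ratio(rho) = (GW cut probability) / (SDP contribution) for an edge with
# dot product rho; its minimum over (-1, 1) is alpha_GW, attained at rho*.
rho = np.linspace(-0.9999, 0.9999, 2_000_000)
ratio = (np.arccos(rho) / np.pi) / ((1 - rho) / 2)
i = np.argmin(ratio)
print(rho[i], ratio[i])  # approx -0.689 and 0.8786
```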

We shall now give a sketch of the construction used in [1] to obtain the integrality gap instance. The idea is to embed a graph on a sphere (thereby giving a feasible SDP solution; this will lower bound $\mathrm{Sdp}(G)$), and then prove that $\mathrm{Opt}(G)$ on this instance is not high. Here are some useful notions.

Definition. An embedded graph is a weighted graph $G = (V, E)$ where $V \subseteq S^d$ for some $d \in \mathbb{N}$, where $S^d$ is the surface of the $d$-dimensional sphere, i.e. $S^d = \{x \in \mathbb{R}^d : \|x\| = 1\}$.

Definition. Given such an embedded graph, define $\mathrm{Obj}(G)$ to be $\sum_{(u,v) \in E} w_{uv}\big(\frac12 - \frac12\, u \cdot v\big)$.

It is simple to see the following fact (as the embedding itself is one feasible solution to the SDP).

Fact 2.3. For all embedded graphs $G$, $\mathrm{Obj}(G) \le \mathrm{Sdp}(G)$.

Let the value of the dot product $\rho$ which actually minimizes the ratio $\frac{\arccos(\rho)/\pi}{\frac12 - \frac12\rho}$ be denoted by $\rho^*$. As marked in the figure, $\rho^* \approx -0.69$. To get our instance, we try to find an embedding for which $\mathrm{Obj}(G)$ exceeds $\mathrm{Opt}(G)$ by a factor close to $1/\alpha_{GW}$. Since we need the gap between the Goemans-Williamson algorithm (on the integrality gap instance we hope to create, $\mathrm{Alg}_{GW}(G) = \mathrm{Opt}(G)$) and the SDP value to be $\approx 0.878$, we want all edges $(u,v)$ included in the embedded graph to satisfy $u \cdot v \approx \rho^*$. If we accomplish this, we would ensure that $\mathrm{Sdp}(G) \ge \mathrm{Obj}(G) \approx \frac12 - \frac12\rho^*$, while the expected value of the Goemans-Williamson algorithm when run on our embedding is $\approx \arccos(\rho^*)/\pi$. We would then be done if we show that $\mathrm{Opt}(G)$ is no more than this latter quantity.

The following is a sketch of the instance we construct. It will be an infinite graph $G' = (V, E)$ with the vertices being all points on the surface of the $d$-dimensional sphere (i.e. $V = S^d$). The idea suggested above was to put in all possible edges whose embedded vertices have a dot product of roughly $\rho^*$ between them. As the instance we are creating is an infinite graph (the vertex set is, in fact, even uncountable), it becomes necessary to think of the edges as a probability distribution $E$ over pairs of points on the sphere (a symmetric distribution over $V \times V$). We therefore slightly redefine our notions of Obj and Opt as follows:

\[ \mathrm{Obj}(G') = \mathop{\mathbb{E}}_{(u,v) \sim E}\Big[\frac12 - \frac12\, u \cdot v\Big] \]

and

\[ \mathrm{Opt}(G') = \max_{f : V \to \{-1,1\}} \; \Pr_{(u,v) \sim E}\big[f(u) \ne f(v)\big].^1 \]

Now we actually define the edge distribution: $E$ is the distribution in which a random draw amounts to picking $u$ and $v$ randomly and independently from the sphere, conditioned on $u \cdot v$ being at most $\rho^*$. We now present a fact about the tails of this distribution.

Fact 2.4. For large values of $d$, $\mathbb{E}\big[\,u \cdot v \;\big|\; u \cdot v \le \rho^*\,\big] \approx \rho^*$.

^1 We are overlooking a technical issue about the usage of max here, as we are not discussing whether the maximum over our domain is actually attained.
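As a numerical illustration of Fact 2.4 (ours, not from the lecture): by rotational invariance, $u \cdot v$ for independent uniform $u, v$ has density proportional to $(1 - t^2)^{(d-3)/2}$ on $[-1, 1]$, so the conditional mean can be computed by one-dimensional quadrature, and it approaches $\rho^*$ as $d$ grows:

```python
import numpy as np

RHO_STAR = -0.689  # approximate minimizer of (arccos(rho)/pi) / ((1-rho)/2)

def conditional_mean(d, rho, grid=200_000):
    """E[u.v | u.v <= rho] for independent uniform u, v on the unit sphere in R^d.

    Uses the fact that u.v has density ~ (1 - t^2)^((d-3)/2) on [-1, 1].
    """
    t = np.linspace(-1 + 1e-9, rho, grid)
    logw = 0.5 * (d - 3) * np.log1p(-t * t)
    w = np.exp(logw - logw.max())  # work in log-space to avoid underflow for large d
    return np.sum(t * w) / np.sum(w)

for d in (10, 50, 200, 1000):
    print(d, conditional_mean(d, RHO_STAR))  # gap to rho* shrinks like O(1/d)
```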

In fact, a lower bound of $\rho^* - O(1/d)$ on the conditional expectation in Fact 2.4 can be shown. By virtue of the way we have defined the distribution, $\mathrm{Obj}(G') \ge \frac12 - \frac12\rho^*$. What about $\mathrm{Opt}(G')$? Let the optimal cut of the surface of the sphere be $(A, A^c)$, where $A \subseteq S^d$ has fractional surface area $a$ (this is the measure of $A$). The measure of a set is the fraction of the vertices it contains, and the function $\mu_\rho(A)$ (on the product measure space) represents the fraction of the edges crossing the set $A$. That is, $\mu_\rho(A) = \Pr_{(u,v) \sim E}[A(u) \ne A(v)]$, where $A(u) = 1$ if and only if $u \in A$. The hardest part of the integrality gap proof is the following theorem.

Theorem 2.5. [Theorem 4, [1]] Fix an $a$ between 0 and 1 and a $\rho$ between $-1$ and 0. Then the maximum of $\mu_\rho(A)$, where $A$ ranges over all (measurable) subsets of $S^d$ of measure $a$, is attained by a(ny) cap of measure $a$.

Essentially, it proves that the best cut is a cap (a cap is the portion of the sphere's surface on one side of some hyperplane; see Figure 2). In fact, Feige and Schechtman go on to prove that the best cap is a hemisphere (where the hyperplane defining the cap passes through the origin). Here is the intuition as to why. For any cap $C$, if we consider $H$ to be the hemisphere centered at the same point as the center of $C$, we can observe that for any point $p \in H \setminus C$, the measure of $(\{p\}, H^c)$ (the edges from $p$ to $H^c$) is at least as large as the measure of $(\{p\}, C)$: by the symmetry of $H$ with respect to $C$, for every edge from $p$ to some point of $C$, there is a mirror-image point in $H^c$ to which $p$ also has an edge. Thus, moving all such $p$ into the cut side (growing $C$ into $H$) would not decrease the cut measure. Hence, the hemisphere has at least as much cut measure as the cap. And by the symmetry of $G'$, all hemispheres have the same value.

This means that the value of the optimal cut on this graph is at most the expected value of the Goemans-Williamson algorithm when run on our embedding (since all hyperplanes passing through the origin have the same cut measure)! Since the expected dot product satisfies $\mathbb{E}[u \cdot v] \approx \rho^*$ when $(u,v) \sim E$, the expected value of the Goemans-Williamson cut is approximately $\arccos(\rho^*)/\pi$.$^2$ Therefore,

\[ \mathrm{Opt}(G') \le \arccos(\rho^*)/\pi + o(1). \]

Hence,

\[ \frac{\mathrm{Opt}(G')}{\mathrm{Sdp}(G')} \le \frac{\arccos(\rho^*)/\pi}{\frac12 - \frac12\rho^*} + o(1) \approx 0.878 + o(1). \]

This establishes the integrality gap result. We can then get a finite graph from this by discretizing the sphere into small portions of size $O(\epsilon)$ and choosing these small portions of the surface as vertices. We would get an integrality gap of $0.878 + \epsilon$.

^2 The reason why we don't define the edge distribution so that the dot product is exactly $\rho^*$ is that the argument establishing that a cap is the optimal cut breaks down.
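The cap-versus-hemisphere phenomenon can also be observed empirically. The following Monte Carlo sketch (our own illustration; the dimension, sample size, and rejection-sampling approach are ad hoc choices) samples edges of $G'$ in a small dimension and compares the cut measure of the caps $\{x : x_1 \ge \tau\}$, with the hemisphere $\tau = 0$ coming out on top:

```python
import numpy as np

rng = np.random.default_rng(1)
rho_star, d, n = -0.689, 8, 1_000_000

# Rejection-sample edges: independent uniform u, v on the unit sphere in R^d,
# conditioned on u.v <= rho*. (Small d keeps the acceptance probability workable.)
u = rng.normal(size=(n, d)); u /= np.linalg.norm(u, axis=1, keepdims=True)
v = rng.normal(size=(n, d)); v /= np.linalg.norm(v, axis=1, keepdims=True)
keep = np.sum(u * v, axis=1) <= rho_star
u, v = u[keep], v[keep]
print("accepted pairs:", len(u))

# Cut measure of the cap {x : x_1 >= tau}; the hemisphere tau = 0 should win.
for tau in (-0.4, -0.2, 0.0, 0.2, 0.4):
    cut = np.mean((u[:, 0] >= tau) != (v[:, 0] >= tau))
    print(f"tau = {tau:+.1f}   cut measure = {cut:.3f}")
```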

[Figure 2: $C$ is the cap on the $d$-dimensional sphere.]

3 Algorithmic Gap

The following section is based on a result by Karloff [3]. We now focus on the second question, that of the algorithmic gap: are there instances $G$ where $\mathrm{Alg}_{GW}(G) \le (\alpha_{GW} + \epsilon)\,\mathrm{Opt}(G)$ (for any constant $\epsilon$)? Note that such instances would essentially imply $\mathrm{Sdp}(G) = \mathrm{Opt}(G)$, since $\alpha_{GW}\,\mathrm{Sdp}(G) \le \mathrm{Alg}_{GW}(G) \le (\alpha_{GW} + \epsilon)\,\mathrm{Opt}(G)$.

But a point to note here is that $\mathrm{Alg}_{GW}(G)$ is a quantity which actually depends on the particular SDP embedding. Should we therefore strive to show $\mathrm{Alg}_{GW}(G) \approx 0.878\,\mathrm{Sdp}(G)$ for all optimal SDP embeddings? No, as the algorithm would perform optimally on the magic hidden SDP embedding of the integral cut solution (the one-dimensional embedding which places each vertex at $\pm 1$ according to an optimal cut; hyperplane rounding recovers that cut exactly)! Therefore, we just want to exhibit some embedding whose value is $\mathrm{Sdp}(G)$, such that the Goemans-Williamson algorithm performs poorly on it. Then we could argue that the SDP solver could, unfortunately, output that optimal embedding, and the algorithm would get a much smaller cut than the optimal one.

Let's give such an instance. Recalling the LP integrality gap instance we created for Max-Coverage, we again use the notion of binary strings corresponding to vertices. The graph $G$ has vertex set $V = \{-\frac{1}{\sqrt{k}}, \frac{1}{\sqrt{k}}\}^k$, a discrete cube embedded on the sphere of dimension $k$. Note that on these vertices, the quantity $\frac12 - \frac12\, u \cdot v$ is equal to the fractional Hamming distance between the vectors $u$ and $v$. Now for the edges, we present a random experiment which defines the probability distribution over the edges (which will be our weighted distribution of edges): pick a vertex $u \in \{-\frac{1}{\sqrt{k}}, \frac{1}{\sqrt{k}}\}^k$ uniformly at random, and form $v$ by negating each coordinate of $u$ independently with probability $\rho$; set $(u,v)$ to be an edge.$^3$ Note that the expected dot product is $\mathbb{E}[u \cdot v] = 1 - 2\rho$!

^3 This actually puts a tiny probability on self-loops at vertices, which we neglect in the proof (but this issue can be fixed).
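A short simulation (our own illustration; we pick $\rho = 0.845$ so that $1 - 2\rho \approx \rho^* \approx -0.69$, and $k$, $n$ are arbitrary choices) confirms both the mean and the concentration of the dot product under this edge distribution:

```python
import numpy as np

rng = np.random.default_rng(2)
k, rho, n = 200, 0.845, 50_000  # rho chosen so 1 - 2*rho is close to rho*

u = rng.choice([-1.0, 1.0], size=(n, k)) / np.sqrt(k)  # uniform cube vertex
flips = rng.random(size=(n, k)) < rho                  # negate each coord w.p. rho
v = np.where(flips, -u, u)

dots = np.sum(u * v, axis=1)
print(dots.mean(), dots.std())  # mean ~ 1 - 2*rho = -0.69, std = O(1/sqrt(k))
```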

Also,

\[ \mathrm{Obj}(G) = \mathbb{E}\Big[\frac12 - \frac12\, u \cdot v\Big] = \frac12 - \frac12\,\mathbb{E}\Big[\sum_{i=1}^{k} u_i v_i\Big] = \frac12 - \frac12(1 - 2\rho) = \rho. \]

Further, since our edge distribution negates the $k$ coordinates independently, the dot product of a random edge is concentrated about its mean (we can use Chernoff bounds to bound the tails on both sides of the mean): $u \cdot v = (1 - 2\rho) \pm o(1)$, where the $o(1)$ term decreases with larger $k$. Hence, the Goemans-Williamson algorithm (on this embedding) would return a cut of size at most $\arccos(1 - 2\rho)/\pi + o(1)$.

We therefore need to show: (1) that there exists an integral cut of value $\mathrm{Obj}(G)$ (this would establish $\mathrm{Opt}(G) \ge \mathrm{Obj}(G)$), and (2) that the embedding we have produced is actually optimal for the SDP (i.e. we need to show that $\mathrm{Sdp}(G) \le \mathrm{Obj}(G)$). If we establish these, we would have proved that there is a graph $G$ where $\mathrm{Opt}(G) = \mathrm{Sdp}(G) = \mathrm{Obj}(G) = \rho$, and that there is an SDP-optimal embedding of $G$ on which the Goemans-Williamson algorithm fares poorly. We would then be done! We discuss the first part in this lecture, and leave the second part until after spring break.

To show that there are cuts with value $\mathrm{Obj}(G)$, we revert back to dictator cuts! That is, we cut the cube into two parts based on any one coordinate (say $i$). Let's denote this cut by $f_i : V \to \{-1, 1\}$. The value of this cut is

\[ \mathrm{Val} = \Pr_{(u,v) \sim E}\big[f_i(u) \ne f_i(v)\big] = \Pr[u_i \ne v_i] = \rho, \]

where the last step follows from the definition of our edge distribution. This actually shows that there are $k$ such good cuts; but unfortunately, starting from the embedding we've constructed, the Goemans-Williamson algorithm performs poorly. It remains to show that our embedding is actually optimal, that is, that the SDP optimizer could unfortunately output it. We conclude this lecture by stating what we shall show in future lectures, after introducing Fourier analysis: for every embedding

\[ F : \Big\{-\frac{1}{\sqrt{k}}, \frac{1}{\sqrt{k}}\Big\}^k \to S^d, \qquad \mathop{\mathbb{E}}_{(u,v) \sim E}\big[F(u) \cdot F(v)\big] \ge 1 - 2\rho, \]

which is exactly the statement that $\mathrm{Sdp}(G) \le \mathrm{Obj}(G) = \rho$.
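Continuing the simulation from the previous section (still our own illustration, with the same hypothetical parameters $k$, $\rho$, $n$), we can watch the algorithmic gap appear: the dictator cut achieves about $\rho = 0.845$, while hyperplane rounding on the cube embedding achieves about $\arccos(1 - 2\rho)/\pi \approx 0.742$, and their ratio is $\alpha_{GW}$:

```python
import numpy as np

rng = np.random.default_rng(3)
k, rho, n = 200, 0.845, 50_000

# Sample edges of the rho-correlated cube graph, as before.
u = rng.choice([-1.0, 1.0], size=(n, k)) / np.sqrt(k)
flips = rng.random(size=(n, k)) < rho
v = np.where(flips, -u, u)

# Dictator cut along coordinate 0: value ~ rho = Obj(G).
dictator = np.mean(np.sign(u[:, 0]) != np.sign(v[:, 0]))

# One run of GW hyperplane rounding on the cube embedding itself.
g = rng.normal(size=k)
gw = np.mean(np.sign(u @ g) != np.sign(v @ g))

print(dictator, gw, gw / dictator)  # ~0.845, ~0.742, ratio ~ alpha_GW ~ 0.878
```

For any fixed hyperplane normal $g$, the pair $(g \cdot u, g \cdot v)$ is (approximately, for large $k$) bivariate Gaussian with correlation exactly $1 - 2\rho$, which is why even a single rounding run already lands near $\arccos(1 - 2\rho)/\pi$.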

References

[1] U. Feige and G. Schechtman. On the optimality of the random hyperplane rounding technique for MAX CUT. Random Struct. Algorithms, 20(3):403-440, 2002.

[2] M. X. Goemans and D. P. Williamson. Improved approximation algorithms for maximum cut and satisfiability problems using semidefinite programming. J. ACM, 42(6):1115-1145, 1995.

[3] H. Karloff. How good is the Goemans-Williamson MAX CUT algorithm? SIAM J. Comput., 29(1):336-350, 1999.