Lecture 9: Pipage Rounding Method


Recent Advances in Approximation Algorithms Spring 2015 Lecture 9: Pipage Rounding Method Lecturer: Shayan Oveis Gharan April 27th

Disclaimer: These notes have not been subjected to the usual scrutiny reserved for formal publications.

In this lecture we study a different randomized rounding method called the pipage rounding method. The materials of this lecture are based on the work of Chekuri, Vondrák and Zenklusen [CVZ10]. This method is stronger than the maximum entropy rounding by sampling method in some aspects and weaker in others. From a high-level point of view, one can use this method to round a fractional point in a matroid polytope to a basis in polynomial time while making sure that the underlying elements of the matroid are negatively correlated. So, in particular, one can use this method to round a fractional point in the spanning tree polytope to a thin spanning tree, similar to the idea in the last lecture. But as we will see in the next few lectures, the pipage rounding method does not necessarily satisfy several of the strong negative dependence properties that are satisfied by the maximum entropy distributions of spanning trees.

9.1 Matroids

We start this lecture by giving a short overview of matroids.

Definition 9.1. For a set E of elements and I ⊆ 2^E, we say M = (E, I) is a matroid if:

1. For all S ∈ I and T ⊆ S, we have T ∈ I. This is also known as the downward closed property.
2. For all S, T ∈ I such that |S| < |T|, there is an element e ∈ T \ S such that S ∪ {e} ∈ I.

We say S is an independent set of M if S ∈ I. We also say T is a base of M if T ∈ I and I has no independent set of size larger than |T|.

As an example, it is easy to see that the spanning trees of a graph G form the bases of a matroid. Let M = (E, F), where E is the set of edges of G and F is the set of forests of G, i.e., any S ∈ F is a set of edges with no cycles. Obviously, F is downward closed. In addition, if |T| > |S| and S, T ∈ F, then T has an edge that connects two connected components of S.
So, we can add that edge to S, and S remains a forest. It follows that M is a matroid, known as the graphic matroid, and its bases are the spanning trees of G.

The rank function of a matroid is the function r : 2^E → ℕ such that for any S ⊆ E, r(S) is the size of the largest independent set of M that is contained in S,

    r(S) = max{ |T| : T ∈ I, T ⊆ S }.

One of the important properties of the rank function is that it is a submodular function, i.e.,

    r(S) + r(T) ≥ r(S ∪ T) + r(S ∩ T)   for all S, T ⊆ E.

We will see applications of this property throughout the lecture.
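For the graphic matroid, the rank of an edge set is the size of a maximum forest inside it, which a union-find structure computes greedily. The sketch below (an illustration in Python; the function name is ours, not from the lecture) also spot-checks submodularity on a small graph.

```python
def graphic_rank(S):
    """Rank of an edge set S in the graphic matroid: the size of a
    maximum forest inside S, built greedily with union-find."""
    parent = {}

    def find(u):
        parent.setdefault(u, u)
        while parent[u] != u:
            parent[u] = parent[parent[u]]  # path halving
            u = parent[u]
        return u

    rank = 0
    for (u, v) in sorted(S):
        ru, rv = find(u), find(v)
        if ru != rv:        # edge joins two components: keep it
            parent[ru] = rv
            rank += 1
    return rank

# Spot-check submodularity r(S) + r(T) >= r(S | T) + r(S & T)
# on the triangle {0,1,2} plus the pendant edge (2,3).
S = {(0, 1), (1, 2), (0, 2)}
T = {(1, 2), (0, 2), (2, 3)}
assert graphic_rank(S) == 2          # a triangle has rank 2
assert graphic_rank(T) == 3          # these three edges form a tree
assert graphic_rank(S) + graphic_rank(T) >= \
       graphic_rank(S | T) + graphic_rank(S & T)
```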

A famous example of matroids is the linear matroid, where E corresponds to a set of vectors in a vector space and I is the collection of sets of vectors that are linearly independent. Many examples of matroids are just special cases of the linear matroid, e.g., the graphic matroid.

9.1.1 Matroid Polytope

Given a matroid M, the matroid polytope is the convex hull of the indicator vectors of all independent sets of M. For a set S ⊆ E and x : E → ℝ, let x(S) := Σ_{e ∈ S} x(e). The polytope is given by

    x(S) ≤ r(S)   for all S ⊆ E,
    x(e) ≥ 0      for all e ∈ E.

It is easy to see that the above polytope contains all independent sets of M. In addition, its vertices are exactly the independent sets. We can also characterize the matroid base polytope, that is, the convex hull of the indicator vectors of the bases of M:

    x(E) = r(E),
    x(S) ≤ r(S)   for all S ⊆ E,
    x(e) ≥ 0      for all e ∈ E.

For example, observe that the spanning tree polytope that we described in the last lecture is the same as the above polytope when M is the graphic matroid of G.

Although the above polytopes have exponentially many constraints, one can use the ellipsoid algorithm to test whether a given point x is in the matroid (base) polytope. One way to see this is via the general theorems that say separation and optimization are equivalent algorithmic tasks: optimizing a linear function over a matroid is easy, so the separation problem is easy as well. Another way to see this is that for a given point x it is enough to test whether x(S) ≤ r(S) for every set S. Define a function f : 2^E → ℝ by f(S) = r(S) − x(S). It is easy to see that f is a submodular function. So all we need to do is find the minimum of a submodular function. This is a very well studied problem, and it can be done in polynomial time for any given submodular function [GLS81].

9.2 Pipage Rounding Method

In this section we describe the pipage rounding algorithm. This algorithm was first proposed by Ageev and Sviridenko [AS04] to round a fractional matching into an integral one.
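The membership test described above, minimizing the submodular function f(S) = r(S) − x(S), can be illustrated on a toy instance. In this sketch (Python for illustration; names are ours), brute force over all subsets stands in for a real polynomial-time submodular minimizer in the spirit of [GLS81]:

```python
from itertools import chain, combinations

def in_base_polytope(E, r, x, eps=1e-9):
    """Test whether x lies in the matroid base polytope:
    x >= 0, x(E) = r(E), and min over S of r(S) - x(S) >= 0.
    Brute force over subsets stands in for submodular minimization."""
    if any(x[e] < -eps for e in E):
        return False
    if abs(sum(x[e] for e in E) - r(frozenset(E))) > eps:
        return False
    subsets = chain.from_iterable(combinations(E, k) for k in range(len(E) + 1))
    return all(r(frozenset(S)) - sum(x[e] for e in S) >= -eps for S in subsets)

# Rank function of the graphic matroid of a triangle
# (equivalently, the uniform matroid of rank 2 on three elements).
r = lambda S: min(len(S), 2)
assert in_base_polytope([0, 1, 2], r, {0: 2/3, 1: 2/3, 2: 2/3})   # fractional point
assert in_base_polytope([0, 1, 2], r, {0: 1, 1: 1, 2: 0})          # a spanning tree
assert not in_base_polytope([0, 1, 2], r, {0: 1.5, 1: 0.5, 2: 0})  # x({0}) > r({0})
```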
Chekuri, Vondrák and Zenklusen [CVZ10] observed that a randomized version of the pipage rounding algorithm gives a negatively correlated distribution, so it can be used instead of the rounding by sampling method to get an O(log(n)/log log(n)) approximation for ATSP. In the basic version we start from a fractional point x in the matroid base polytope and round it to an integral base T. The method has two properties:

i) For any element e, P[e ∈ T] = x(e).
ii) Elements are negatively correlated, i.e., for any set of elements S ⊆ E,

    P[S ⊆ T] ≤ ∏_{e ∈ S} x(e).

Figure 9.1: An iteration of the pipage rounding method. Given the point x, we choose two variables, say x(e), x(f), and move randomly along the direction of the line that keeps the sum of x(e), x(f) invariant until we hit the boundary of the polytope, i.e., the blue crosses.

As we proved in Lecture 3, the second property implies that for any set F ⊆ E, the sum of the indicator random variables of the elements in F is highly concentrated around its expected value. This, in addition to (i), are the only properties of the rounding by sampling method that we used in the last lecture to show that a random spanning tree is O(log(n)/log log(n))-thin. Therefore, one can use the pipage rounding method to get a different (and perhaps simpler) algorithm to round the solution of the Held-Karp relaxation to a thin spanning tree.

We say a set S ⊆ E is tight with respect to a fractional point x in the matroid base polytope if x(S) = r(S). Before describing the algorithm we discuss a nice property of tight sets. We say S, T ⊆ E cross if S \ T, T \ S, and S ∩ T are all nonempty. It is easy to see that if S, T are crossing tight sets, then S ∩ T and S ∪ T are tight as well. This again follows from the submodularity of the rank function. In particular, if S, T are tight, then

    x(S) + x(T) = r(S) + r(T) ≥ r(S ∪ T) + r(S ∩ T) ≥ x(S ∪ T) + x(S ∩ T) = x(S) + x(T).

The first inequality uses submodularity and the second inequality uses feasibility of x in the matroid (base) polytope. So, all of the inequalities must be equalities; in particular, S ∩ T and S ∪ T must be tight sets.

Now we are ready to describe the pipage rounding algorithm. Given a nonintegral point x, we start by choosing a minimal tight set, say T, with at least two fractional elements, say 0 < x(e), x(f) < 1. We consider the line that is +1 in the direction of x(e) and −1 in the direction of x(f), and we randomly move forward or backward along this line until we hit a face of the polytope (see Figure 9.1).
If we hit an integrality face, i.e., if either x(e) or x(f) becomes integral, then we have made progress. Otherwise, a new set, say S, becomes tight. As we keep x(T) invariant, T is still a tight set, so either S ⊂ T or S, T cross; in both of these cases S ∩ T is a new tight set that is a proper subset of T with at least one fractional element. So, we run the same algorithm on S ∩ T. We keep repeating this procedure until x gains a new integral coordinate. It is easy to see that after at most |E| · r(E) iterations we get to an integral point.

We have not yet said how we move on the line +x(e), −x(f). The idea is simple: say x_1, x_2 are the intersections of this line with the boundary of the polytope (see Figure 9.1). With probability p we go to x_1 and with probability 1 − p we go to x_2, for the value of p satisfying p x_1 + (1 − p) x_2 = x.
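For the uniform matroid of rank k, whose base polytope is simply {y ∈ [0,1]^n : y(E) = k}, the only tight set is E itself, so the procedure collapses to repeatedly applying the random step above to two fractional coordinates. A minimal sketch under that assumption (Python for illustration; the general matroid case needs the tight-set bookkeeping described above):

```python
import random

def pipage_round_uniform(x, rng=random):
    """Pipage rounding in the base polytope of the uniform matroid:
    {y in [0,1]^n : sum(y) = sum(x)}, with sum(x) integral so fractional
    coordinates always come in pairs.  Each step moves two fractional
    coordinates along the (+1, -1) direction to a random endpoint,
    choosing p so that p*x1 + (1-p)*x2 = x (marginals are preserved)."""
    y = list(x)
    eps = 1e-9
    while True:
        frac = [i for i, v in enumerate(y) if eps < v < 1 - eps]
        if not frac:
            return [round(v) for v in y]
        e, f = frac[0], frac[1]
        d1 = min(1 - y[e], y[f])   # room in direction (+e, -f), endpoint x1
        d2 = min(y[e], 1 - y[f])   # room in direction (-e, +f), endpoint x2
        p = d2 / (d1 + d2)
        if rng.random() < p:
            y[e] += d1; y[f] -= d1
        else:
            y[e] -= d2; y[f] += d2

rng = random.Random(0)
x = [0.5, 0.5, 0.5, 0.5]          # point in the base polytope with k = 2
samples = [pipage_round_uniform(x, rng) for _ in range(4000)]
assert all(sum(t) == 2 for t in samples)            # output is always a base
marginal = sum(t[0] for t in samples) / len(samples)
assert abs(marginal - 0.5) < 0.05                   # property (i): P[0 in T] ~ x(0)
both = sum(t[0] * t[1] for t in samples) / len(samples)
assert both <= 0.5 * 0.5 + 0.05                     # property (ii): negative correlation
```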

It is easy to see that after the i-th step of the algorithm the current (fractional) solution is the same as the starting point in expectation, so we preserve the marginal probabilities. In the next section we also show that the elements are negatively correlated. This basically follows from the fact that whenever we increase a variable x(e) we simultaneously decrease x(f); so in this step e, f are negatively correlated and e is independent of the rest of the elements.

9.3 Negative Correlation

In this section we show that the pipage rounding method produces a negatively correlated distribution over the bases of a matroid. Let X_i be the random fractional point after the i-th step of the pipage rounding method, with coordinates X_i(e). So, at the beginning, X_0(e) = x(e) with probability 1, and at the end of the algorithm each X_i(e) is an integral random variable. By the definition of the pipage rounding algorithm, X_i(e) is a martingale: its expectation is preserved throughout the algorithm. We want to prove that the elements are negatively correlated. Fix a set F ⊆ E; all we need to show is that

    E[∏_{e ∈ F} X_i(e)] ≤ ∏_{e ∈ F} x(e).    (9.1)

We use an inductive proof. We show that for any i,

    E[∏_{e ∈ F} X_{i+1}(e)] ≤ E[∏_{e ∈ F} X_i(e)].    (9.2)

In fact we prove the stronger claim

    E[∏_{e ∈ F} X_{i+1}(e) | X_i] ≤ ∏_{e ∈ F} X_i(e).    (9.3)

Note that in the LHS we are conditioning on the whole vector X_i. Taking expectations of both sides of the above inequality implies (9.2). So, in the rest of this part we prove the above equation.

Say in step i + 1 we move along a line in which we only change the variables corresponding to the elements f_1, f_2, i.e., we either increase X_i(f_1) and decrease X_i(f_2) or vice versa. Then, for all e ≠ f_1, f_2, X_{i+1}(e) = X_i(e). In addition, we have the following two properties:

i) E[X_{i+1}(f_1) | X_i] = X_i(f_1) and E[X_{i+1}(f_2) | X_i] = X_i(f_2).
ii) X_{i+1}(f_1) + X_{i+1}(f_2) = X_i(f_1) + X_i(f_2) with probability 1.
If f_1, f_2 ∉ F, then (9.3) holds trivially. If f_1 ∈ F and f_2 ∉ F, then

    E[∏_{e ∈ F} X_{i+1}(e) | X_i] = E[X_{i+1}(f_1) | X_i] · ∏_{e ∈ F\{f_1}} X_i(e) = ∏_{e ∈ F} X_i(e),

where the second equality uses property (i).

Perhaps the most interesting case is when f_1, f_2 ∈ F. In this case the negative correlation essentially follows from property (ii). First, note that, similarly to the above, all we need to show is that

    E[X_{i+1}(f_1) · X_{i+1}(f_2) | X_i] ≤ X_i(f_1) · X_i(f_2).    (9.4)

Since the sum X_i(f_1) + X_i(f_2) is preserved with probability 1, we have

    E[(X_{i+1}(f_1) + X_{i+1}(f_2))^2 | X_i] = (X_i(f_1) + X_i(f_2))^2.

On the other hand, X_i(f_1) − X_i(f_2) is preserved in expectation. Using the fact that for any random variable Y, E[Y^2] ≥ E[Y]^2, we get

    E[(X_{i+1}(f_1) − X_{i+1}(f_2))^2 | X_i] ≥ (X_i(f_1) − X_i(f_2))^2.

Taking the difference of the above two displays and dividing by 4 proves (9.4). This was the last case, so we have proved (9.3), and hence (9.1).

9.4 Conclusion

Chekuri, Vondrák and Zenklusen also introduce another algorithm that is similar to pipage rounding, with some additional properties in some aspects, called randomized swap rounding. They extend the above ideas to round a fractional point in the intersection of two matroids, e.g., a fractional matching in a bipartite graph. In this case it is impossible to round a fractional perfect matching into an integral one while satisfying the negative correlation property. To see this, suppose G is a cycle of length 2n and x(e) = 1/2 for all edges. Let M_1 be the odd edges and M_2 be the even edges of the cycle, so x = 1_{M_1}/2 + 1_{M_2}/2, where 1_F is the indicator vector of a set F of edges. Observe that the only possible decomposition of x as a convex combination of perfect matchings is x = 1_{M_1}/2 + 1_{M_2}/2. And, obviously, in this decomposition the edges within M_1 (or those within M_2) are positively correlated.

It is very interesting to better understand the distribution of bases produced by the pipage rounding method.
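The counterexample above can be checked numerically for n = 2, a 4-cycle; the probabilities below follow from the unique decomposition of x into the two perfect matchings (a small sketch, with edge labels of our choosing):

```python
from fractions import Fraction

half = Fraction(1, 2)
# 4-cycle with edges labeled 0..3; its only perfect matchings are
# M1 = {0, 2} (odd positions) and M2 = {1, 3} (even positions).
M1, M2 = {0, 2}, {1, 3}
# x(e) = 1/2 decomposes only as: pick M1 or M2 with probability 1/2 each.
p_M1 = p_M2 = half
for e in range(4):
    marginal = p_M1 * (e in M1) + p_M2 * (e in M2)
    assert marginal == half                   # marginals match x
# Two edges inside M1 appear together with probability 1/2,
# strictly larger than x(e) * x(f) = 1/4: positive correlation.
p_both = p_M1 * (0 in M1) * (2 in M1)
assert p_both == half
assert p_both > half * half                   # 1/2 > 1/4
```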
Although we have an algorithm to sample from this distribution, we do not know how to describe the distribution analytically. For example, is it possible to show that the distribution of spanning trees produced by the pipage rounding method is not a weighted uniform distribution of spanning trees? What other properties does this distribution have; does it satisfy negative association (see the next two lectures for the definition of negative association)?

References

[AS04] A. Ageev and M. Sviridenko. "Pipage Rounding: A New Method of Constructing Algorithms with Proven Performance Guarantee". In: Journal of Combinatorial Optimization 8.3 (2004), pp. 307–328.

[CVZ10] C. Chekuri, J. Vondrák, and R. Zenklusen. "Dependent Randomized Rounding via Exchange Properties of Combinatorial Structures". In: FOCS. 2010, pp. 575–584.

[GLS81] M. Grötschel, L. Lovász, and A. Schrijver. "The ellipsoid method and its consequences in combinatorial optimization". In: Combinatorica 1 (1981), pp. 169–197.