Primal-Dual Methods for Approximation Algorithms Nadia Hardy, April 2004
The following is based on:
- M. Goemans and D. Williamson. "The primal-dual method for approximation algorithms and its application to network design problems." Chapter 4 of Approximation Algorithms for NP-Hard Problems, D. Hochbaum, ed.
- D. Williamson. Lecture Notes on Approximation Algorithms. 1998.
- A presentation by D. Williamson on Primal-Dual Methods.
- M. Goemans and D. Williamson. "Primal-Dual Approximation Algorithms for Feedback Problems in Planar Graphs." Combinatorica, 18:37-59, 1998.
We are concerned with designing approximation algorithms for combinatorial optimization problems that can be modeled as integer programs (IPs).

General approach (as seen in the lectures):
(1) Formulate the problem as an IP.
(2) Relax it to an LP.
(3) Do something clever to get a feasible solution for the IP... in polynomial time... and not too far from optimal! Techniques for step (3) include randomized rounding, iterative rounding, and semidefinite programming.

Today: Primal-Dual Methods.
Some history... The works of Egerváry in 1930 and Kuhn in 1955 led to the formulation of a primal-dual method for LP problems by Dantzig, Ford and Fulkerson (1956). In 1981 the first primal-dual approximation algorithm was presented by Bar-Yehuda and Even (for Vertex Cover). The 80s went by... In the 90s the method was shown to be a powerful tool for developing approximation algorithms: work was done on different instances of the hitting set problem and on network design problems (Agrawal, Klein and Ravi 1995; Goemans and Williamson 1995; Saran, Vazirani and Young 1992; Klein and Ravi 1993; and others).
Consider a general LP formulation and its dual:

  PRIMAL:  min cx   s.t.  Ax ≥ b,  x ≥ 0
  DUAL:    max yb   s.t.  yA ≤ c,  y ≥ 0

Complementary Slackness Conditions (CS):
  PRIMAL (PCS): x_i > 0  ⇒  Σ_j y_j a_ji = c_i
  DUAL (DCS):   y_j > 0  ⇒  Σ_i a_ij x_i = b_j

Theorem 1: If x and y are optimal for the primal and the dual respectively, then they satisfy cx = yb.
Theorem 2: If x and y are feasible, then they are optimal if and only if they satisfy PCS and DCS.
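Theorems 1 and 2 can be checked numerically on a small LP. The instance below is my own illustrative example (not from the text): equal primal and dual objective values certify that both solutions are optimal, and the complementary slackness conditions hold.

```python
# Illustrative LP (my own example): min cx s.t. Ax >= b, x >= 0,
# with dual max yb s.t. yA <= c, y >= 0.
A = [[1, 2],
     [3, 1]]
b = [4, 3]
c = [2, 3]

x = [2/5, 9/5]   # claimed primal optimum
y = [7/5, 1/5]   # claimed dual optimum

# Feasibility: Ax >= b, x >= 0; yA <= c, y >= 0.
assert all(sum(A[j][i] * x[i] for i in range(2)) >= b[j] - 1e-9 for j in range(2))
assert all(sum(y[j] * A[j][i] for j in range(2)) <= c[i] + 1e-9 for i in range(2))
assert all(v >= 0 for v in x + y)

cx = sum(c[i] * x[i] for i in range(2))
yb = sum(y[j] * b[j] for j in range(2))
# Equal objectives certify optimality of both solutions (Theorem 1): cx = yb = 31/5.

# Complementary slackness (Theorem 2):
for i in range(2):      # PCS: x_i > 0 => i-th dual constraint is tight
    if x[i] > 0:
        assert abs(sum(y[j] * A[j][i] for j in range(2)) - c[i]) < 1e-9
for j in range(2):      # DCS: y_j > 0 => j-th primal constraint is tight
    if y[j] > 0:
        assert abs(sum(A[j][i] * x[i] for i in range(2)) - b[j]) < 1e-9
```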
The idea of the Dantzig-Ford-Fulkerson primal-dual method for solving LP problems:

  y ← 0
  While there is no feasible x satisfying the complementary slackness conditions with y:
      Get a direction of increase for the dual
  Return x

The idea of the primal-dual method for approximation algorithms:

  y ← 0
  While there is no feasible integral x satisfying the primal CS conditions with y:
      Get a direction of increase for the dual
  Return x

Note that the DUAL CS conditions are not enforced.
Example: Hitting Set Problem.
Input: a ground set E = {e_1, e_2, ..., e_n}, subsets T_1, ..., T_m ⊆ E, and a cost c_e for each e ∈ E.
Goal: find a minimum-cost A ⊆ E such that A ∩ T_i ≠ ∅ for i = 1, ..., m.

(1) IP formulation:
  min Σ_{e ∈ E} c_e x_e
  s.t. Σ_{e ∈ T_i} x_e ≥ 1,  i = 1, ..., m
       x_e ∈ {0, 1},  e ∈ E

(2) LP relaxation: replace x_e ∈ {0, 1} by x_e ∈ [0, 1], e ∈ E.

(3) Consider the dual problem:
  max Σ_{i=1}^m y_i
  s.t. Σ_{i: e ∈ T_i} y_i ≤ c_e,  e ∈ E
       y_i ≥ 0,  i = 1, ..., m
Remember the scheme shown before for the primal-dual method for approximation algorithms:

  y ← 0
  While there does not exist an integral solution obeying the PCS conditions:
      Get a direction of increase for the dual
  Return a feasible integral x obeying PCS

Recall the PCS condition: x_e > 0 ⇒ Σ_{i: e ∈ T_i} y_i = c_e. To check the while condition, set x_e = 1 for all e ∈ A, where A = {e ∈ E : Σ_{i: e ∈ T_i} y_i = c_e}. Notice that this x satisfies PCS. If A is feasible, stop. Otherwise...

Claim: there is some direction of increase for the dual. If A is not feasible, there exists a violated set T_j, i.e., A ∩ T_j = ∅. This means that for every e ∈ T_j, Σ_{i: e ∈ T_i} y_i < c_e, so y_j can be increased without violating dual feasibility.
Algorithm (Bar-Yehuda and Even, 1981):

  y ← 0
  A ← ∅
  While A is not feasible:
      Pick a violated set T_j
      Increase y_j until Σ_{i: e ∈ T_i} y_i = c_e for some e ∈ T_j
      A ← A ∪ {e}

The algorithm returns a feasible solution and is polynomial in the size of the input... well, unless the number of sets we want to hit is exponential in the size of the ground set E.

Approximation guarantee: let A_f denote the output of the algorithm. If we can find α such that y_i > 0 implies |A_f ∩ T_i| ≤ α for all i, then
  Σ_{e ∈ A_f} c_e ≤ α Σ_i y_i ≤ α·OPT.
Example: Vertex Cover, where |A_f ∩ T_i| ≤ 2.
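The loop above can be sketched in Python for an explicitly given hitting-set instance (function and variable names are my own; `residual[e]` tracks the slack c_e − Σ_{i: e ∈ T_i} y_i of the dual constraint for e):

```python
def primal_dual_hitting_set(sets, cost):
    """Bar-Yehuda/Even primal-dual scheme for hitting set.

    sets: list of subsets (Python sets) T_1, ..., T_m to hit
    cost: dict mapping each ground-set element e to c_e >= 0
    Returns (A, y): a feasible hitting set and the dual values.
    """
    y = [0.0] * len(sets)
    residual = dict(cost)       # residual[e] = c_e - sum of y_i over sets T_i containing e
    A = set()
    while True:
        # find a violated set: one that A does not hit
        j = next((i for i, T in enumerate(sets) if not (A & T)), None)
        if j is None:
            return A, y         # A is feasible
        # increase y_j until some element of T_j becomes tight
        delta = min(residual[e] for e in sets[j])
        y[j] += delta
        for e in sets[j]:
            residual[e] -= delta
        A.add(next(e for e in sets[j] if residual[e] == 0.0))

# Vertex cover of a triangle, viewed as hitting set (every T_i has 2 elements, so alpha = 2):
triangle = [{'a', 'b'}, {'b', 'c'}, {'a', 'c'}]
costs = {'a': 1, 'b': 1, 'c': 1}
A, y = primal_dual_hitting_set(triangle, costs)
```

On this instance the output cost is at most 2·Σ_i y_i, in line with the α = 2 guarantee for vertex cover.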
Recall that in our history overview the 80s went by without much improvement... at least for the primal-dual method for approximation algorithms. In the 90s two new ideas were introduced.

(1) Clean up the final solution:

  y ← 0
  A ← ∅
  k ← 0
  While A is not feasible:
      k ← k + 1
      Pick a violated set T_j
      Increase y_j until Σ_{i: e ∈ T_i} y_i = c_e for some e_k ∈ T_j
      A ← A ∪ {e_k}
  For j = k downto 1:
      If A \ {e_j} is feasible:
          A ← A \ {e_j}
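In code, the clean-up amounts to remembering the order in which elements were added and attempting deletions in reverse. A Python sketch for hitting set, with my own naming (`feasible` is the hitting-set feasibility test):

```python
def primal_dual_reverse_delete(sets, cost):
    """Primal-dual for hitting set with the reverse-delete clean-up step."""
    def feasible(S):
        return all(S & T for T in sets)

    y = [0.0] * len(sets)
    residual = dict(cost)           # residual[e] = c_e - sum of y_i over sets containing e
    A, added = set(), []            # 'added' records e_1, e_2, ..., e_k in order
    while not feasible(A):
        j = next(i for i, T in enumerate(sets) if not (A & T))   # a violated set
        delta = min(residual[e] for e in sets[j])
        y[j] += delta
        for e in sets[j]:
            residual[e] -= delta
        e_k = next(e for e in sets[j] if residual[e] == 0.0)     # newly tight element
        A.add(e_k)
        added.append(e_k)
    # clean-up: for j = k downto 1, drop e_j if the solution stays feasible
    for e in reversed(added):
        if feasible(A - {e}):
            A.remove(e)
    return A, y

# An instance where the clean-up removes a redundant early pick:
sets = [{'a', 'b'}, {'b'}]
A, y = primal_dual_reverse_delete(sets, {'a': 1, 'b': 2})
# 'a' is added first (it goes tight for {'a','b'}), 'b' is forced by {'b'},
# and the reverse delete then discards 'a'; the result is A == {'b'}.
```

After the clean-up the solution is inclusion-minimal: removing any single remaining element destroys feasibility.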
What improvement does this make in the performance guarantee?

Definition: a subset B ⊆ E is a minimal augmentation of a subset A ⊆ E if (1) A ∪ B is feasible, and (2) for any e ∈ B, A ∪ B \ {e} is not feasible.

Theorem: let T(A) be the violated set the algorithm chooses given an infeasible set A. If for any infeasible A and any minimal augmentation B of A we have |B ∩ T(A)| ≤ β, then
  Σ_{e ∈ A_f} c_e = Σ_i |A_f ∩ T_i| y_i ≤ β Σ_i y_i ≤ β·OPT.
Proof: it follows from |A_f ∩ T_{l_j}| ≤ max_B |B ∩ T_{l_j}| ≤ β, where the max is taken over all minimal augmentations of {e_1, ..., e_{j-1}}.

Hence, if we can find β, the maximum number of elements of any violated set chosen by the algorithm that could possibly be introduced in a minimal augmentation, the above algorithm is a β-approximation.
Example: Shortest s-t path. Let G be an undirected graph with two distinguished vertices s and t. We want to find the shortest s-t path. This problem can be seen as an instance of the hitting set problem:
- Ground set: the set of edges E.
- Costs: c_e ≥ 0 for each e ∈ E.
- Sets to hit: T_i = δ(S_i) for every vertex set S_i with s ∈ S_i and t ∉ S_i (any s-t path must cross every such cut).

We apply the primal-dual algorithm with the clean-up idea. Suppose that whenever A is infeasible, the algorithm chooses the violated set T_k = δ(S_k), where S_k is the connected component of (V, A) containing s. The resulting algorithm is a 1-approximation algorithm for the shortest s-t path problem, i.e., it gives an optimal solution.
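A sketch of this specialization in Python (my own implementation, assuming the component-of-s rule above). With a single growing component the main loop behaves like Dijkstra's algorithm, and the reverse delete prunes the resulting tree down to a shortest path:

```python
def shortest_path_primal_dual(n, edges, s, t):
    """Primal-dual shortest s-t path with reverse-delete clean-up.

    n: number of vertices (0, ..., n-1)
    edges: list of (u, v, cost) for an undirected graph
    Returns the set of edge indices of a shortest s-t path.
    """
    def reaches(edge_idxs, root):
        # vertices reachable from root using only the given edges
        adj = {v: [] for v in range(n)}
        for i in edge_idxs:
            u, v, _ = edges[i]
            adj[u].append(v)
            adj[v].append(u)
        seen, stack = {root}, [root]
        while stack:
            u = stack.pop()
            for w in adj[u]:
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        return seen

    residual = [c for _, _, c in edges]   # c_e minus the dual accumulated on e
    A = []                                # edge indices, in order of addition
    while True:
        S = reaches(A, s)                 # component of (V, A) containing s
        if t in S:
            break
        # violated set delta(S): raise its dual until some boundary edge goes tight
        boundary = [i for i, (u, v, _) in enumerate(edges) if (u in S) != (v in S)]
        delta = min(residual[i] for i in boundary)
        for i in boundary:
            residual[i] -= delta
        A.append(next(i for i in boundary if residual[i] == 0.0))
    # clean-up: reverse delete down to a single s-t path
    kept = set(A)
    for i in reversed(A):
        if t in reaches(kept - {i}, s):
            kept.remove(i)
    return kept

# Example (s = 0, t = 3): the shortest path is 0-1-3 with cost 2,
# so the returned edge indices are {0, 1}.
graph = [(0, 1, 1), (1, 3, 1), (0, 3, 3), (0, 2, 2), (2, 3, 2)]
path = shortest_path_primal_dual(4, graph, 0, 3)
```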
(2) Increase the dual variables of several sets simultaneously:

  y ← 0
  A ← ∅
  k ← 0
  While A is not feasible:
      k ← k + 1
      Γ ← Violated(A)
      Increase y_j uniformly for all T_j ∈ Γ until Σ_{i: e ∈ T_i} y_i = c_e for some e_k ∈ T_j, T_j ∈ Γ
      A ← A ∪ {e_k}
  For j = k downto 1:
      If A \ {e_j} is feasible:
          A ← A \ {e_j}
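This variant can be sketched for hitting set as follows (my own naming; the uniform increase is simulated by tracking, for each element, how many violated sets contain it, which is the rate at which its dual constraint tightens):

```python
def primal_dual_uniform_increase(sets, cost):
    """Hitting-set primal-dual: raise the duals of ALL violated sets uniformly."""
    def feasible(S):
        return all(S & T for T in sets)

    y = [0.0] * len(sets)
    residual = dict(cost)       # residual[e] = c_e - sum of y_i over sets containing e
    A, added = set(), []
    while not feasible(A):
        gamma = [i for i, T in enumerate(sets) if not (A & T)]    # Violated(A)
        # residual[e] drops at rate = number of violated sets containing e
        rate = {}
        for i in gamma:
            for e in sets[i]:
                rate[e] = rate.get(e, 0) + 1
        delta = min(residual[e] / rate[e] for e in rate)          # until the first element goes tight
        for i in gamma:
            y[i] += delta
        for e in rate:
            residual[e] -= delta * rate[e]
        e_k = min(rate, key=lambda e: residual[e])                # the tight element
        A.add(e_k)
        added.append(e_k)
    for e in reversed(added):                                     # reverse-delete clean-up
        if feasible(A - {e}):
            A.remove(e)
    return A, y

# Vertex cover of a triangle: all three cut duals rise together by 1/2,
# every vertex goes tight simultaneously, and two vertices survive the clean-up.
triangle = [{'a', 'b'}, {'b', 'c'}, {'a', 'c'}]
A, y = primal_dual_uniform_increase(triangle, {'a': 1, 'b': 1, 'c': 1})
```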
What improvement does this second idea make in the performance guarantee?

Theorem: if for any infeasible A and any minimal augmentation B of A,
  Σ_{T_i ∈ Violated(A)} |B ∩ T_i| ≤ γ |Violated(A)|,
where Violated(A) are the violated sets chosen by the algorithm given A, then the performance guarantee of the algorithm is γ.
Some results from the last 25 years...

For α = 2:
- Vertex cover (Bar-Yehuda and Even, 81)
- Steiner tree (lots of people)
- Generalized Steiner tree (Agrawal, Klein and Ravi, 91)
- Matching with triangle inequality
- T-joins
- Partition into trees/paths/cycles of at least k vertices
- Prize-collecting TSP
(Goemans and Williamson, 92-97)

For α = 9/4:
- Feedback vertex set in planar graphs
- Feedback vertex set / directed feedback vertex set / subset feedback vertex set
- Bipartization problem
(Goemans and Williamson, 96)