Approximation Basics

Milestones, Concepts, and Examples
Xiaofeng Gao
Department of Computer Science and Engineering
Shanghai Jiao Tong University, P.R. China
Spring 2015

Outline
1. History, NP Optimization, Definition of Approximation
2. Greedy Algorithms: Set Cover, Knapsack, Independent Set
3. Sequential Algorithms: Maximum Cut Problem, Job Scheduling

History of Approximation
1966 Graham: first analyzed algorithms by their approximation ratio
1971 Cook: introduced the concept of NP-Completeness
1972 Karp: introduced many NP-Hard combinatorial optimization problems
1970s: approximation became a popular research area
1979 Garey & Johnson: Computers and Intractability: A Guide to the Theory of NP-Completeness

Books
CS 351, Stanford Univ. (1991-1992): Rajeev Motwani, Lecture Notes on Approximation Algorithms, Volume I
(1997) Hochbaum (editor): Approximation Algorithms for NP-Hard Problems
(1999) Ausiello, Crescenzi, Gambosi, et al.: Complexity and Approximation: Combinatorial Optimization Problems and Their Approximability Properties

Books (2)
(2001) Vijay V. Vazirani: Approximation Algorithms
(2010) D.P. Williamson & D.B. Shmoys: The Design of Approximation Algorithms
(2012) D.-Z. Du, K.-I. Ko & X.D. Hu: Design and Analysis of Approximation Algorithms

Hardness
There were not many hardness results until the 1990s...
Theorem (PCP Theorem, ALMSS '92): There is no PTAS for MAX-3SAT unless P = NP.
(ALMSS: Arora, Lund, Motwani, Sudan, and Szegedy)
Conjecture (Unique Games Conjecture, Khot '02): The Unique Games problem is NP-hard to approximate within any constant ratio.
(Subhash Khot)

NP Optimization Problem
An NP Optimization Problem P is a four-tuple (I, sol, m, goal) such that:
I is the set of instances of P and is recognizable in polynomial time.
Given an instance x ∈ I, sol(x) is the set of short feasible solutions of x: there is a polynomial p such that |y| ≤ p(|x|) for every y ∈ sol(x), and for every x and y with |y| ≤ p(|x|) it is decidable in polynomial time whether y ∈ sol(x).
Given an instance x and a feasible solution y of x, m(x, y) is a polynomial-time computable measure function providing a positive integer, the value of y.
goal ∈ {max, min} denotes maximization or minimization.

An Example of an NP Optimization Problem
Example (Minimum Vertex Cover): Given a graph G = (V, E), the Minimum Vertex Cover problem (MVC) is to find a vertex cover of minimum size, that is, a minimum node subset U ⊆ V such that, for each edge (v_i, v_j) ∈ E, either v_i ∈ U or v_j ∈ U.
Justification (MVC is an NP Optimization Problem):
I = {G = (V, E) : G is a graph}; poly-time decidable.
sol(G) = {U ⊆ V : ∀(v_i, v_j) ∈ E, v_i ∈ U ∨ v_j ∈ U}; a short feasible-solution set, poly-time decidable.
m(G, U) = |U|; a poly-time computable function.
goal = min.
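A quick way to see that sol(G) is polynomial-time decidable is to check the cover condition directly. A minimal sketch in Python (the function name and sample graph are mine, not from the slides):

```python
def is_vertex_cover(edges, U):
    """Return True iff U covers every edge, i.e. U is in sol(G): O(|E|) time."""
    U = set(U)
    return all(u in U or v in U for (u, v) in edges)

edges = [(1, 2), (2, 3), (3, 4)]
print(is_vertex_cover(edges, {2, 3}))  # True: every edge touches 2 or 3
print(is_vertex_cover(edges, {1, 4}))  # False: edge (2, 3) is uncovered
```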

NPO Class
Definition (NPO Class): The class NPO is the set of all NP optimization problems.
Definition (Goal of an NPO Problem): The goal of an NPO problem with respect to an instance x is to find an optimum solution, that is, a feasible solution y such that
m(x, y) = goal{m(x, y') : y' ∈ sol(x)}.

What is an Approximation Algorithm?
Definition (Approximation Algorithm): Given an NP optimization problem P = (I, sol, m, goal), an algorithm A is an approximation algorithm for P if, for any given instance x ∈ I, it returns an approximate solution, that is, a feasible solution A(x) ∈ sol(x) with guaranteed quality.
Note: guaranteed quality is what distinguishes approximation algorithms from heuristics.
Approximation for PO, NPO, and NP-hard optimization; decision, optimization, and constructive problems.

r-approximation
Definition (Approximation Ratio): Let P be an NPO problem. Given an instance x and a feasible solution y of x, we define the performance ratio of y with respect to x as
R(x, y) = max{ m(x, y)/opt(x), opt(x)/m(x, y) }.
Definition (r-approximation): Given an optimization problem P and an approximation algorithm A for P, A is said to be an r-approximation for P if, for any input instance x of P, the performance ratio of the approximate solution A(x) is bounded by r, i.e., R(x, A(x)) ≤ r.
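The max in the definition makes the same formula work for minimization and maximization problems alike, since it always yields a value at least 1. A one-line illustration in Python (names are mine, not from the slides):

```python
def performance_ratio(m_value, opt_value):
    """R(x, y) = max(m(x, y) / opt(x), opt(x) / m(x, y))."""
    return max(m_value / opt_value, opt_value / m_value)

# Minimization (e.g. set cover): the algorithm pays 30 where the optimum pays 10.
print(performance_ratio(30, 10))  # 3.0
# Maximization (e.g. knapsack): the algorithm earns 10 where the optimum earns 30.
print(performance_ratio(10, 30))  # 3.0
```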

APX Class
Definition (F-APX): Given a class of functions F, an NPO problem P belongs to the class F-APX if there exists a polynomial-time r-approximation algorithm A for P, for some function r ∈ F.
Examples:
F is the constant functions: P ∈ APX.
F is the O(log n) functions: P ∈ Log-APX.
F is the O(n^k) functions (polynomials): P ∈ Poly-APX.
F is the O(2^(n^k)) functions: P ∈ Exp-APX.

Special Cases
Definition (Polynomial-Time Approximation Scheme, PTAS): An NPO problem P belongs to the class PTAS if there exists an algorithm A such that, for any rational value ε > 0, when A is applied to input (x, ε) it returns a (1+ε)-approximate solution of x in time polynomial in |x|.
Definition (Fully PTAS, FPTAS): An NPO problem P belongs to the class FPTAS if there exists an algorithm A such that, for any rational value ε > 0, when A is applied to input (x, ε) it returns a (1+ε)-approximate solution of x in time polynomial both in |x| and in 1/ε.

Approximation Class Inclusion
If P ≠ NP, then FPTAS ⊂ PTAS ⊂ APX ⊂ Log-APX ⊂ Poly-APX ⊂ Exp-APX ⊂ NPO (all inclusions strict).
(Diagram: starting from a constant-factor approximation (APX), one tries to reduce the approximation ratio or the time complexity; for a PTAS ((1+ε)-approximation), one tests existence and reduces the time complexity.)

Procedure
Given: an instance of the problem, which specifies a set of items.
Goal: determine a subset of the items that satisfies the problem constraints, maximizing or minimizing the measure function.
Steps:
Sort the items according to some criterion.
Incrementally build the solution starting from the empty set: consider the items one at a time, maintaining the set of selected items.
Terminate when the problem constraints would be broken.

Problem (Minimum Set Cover)
Instance: a universe U = {e_1, ..., e_n} of n elements, a collection of subsets S = {S_1, ..., S_m} of U, and a cost function c : S → Q^+.
Solution: a subcollection S' ⊆ S that covers all elements of U.
Measure: the total cost of the chosen subcollection, c(S') = Σ_{S_i ∈ S'} c(S_i).

An Example
(Figure: the twelve elements drawn with the six sets covering them.)
U = {1, 2, ..., 12}, S = {S_1, S_2, ..., S_6}, where
S_1 = {1, 2, 3, 4, 5, 6}, S_2 = {5, 6, 8, 9}, S_3 = {1, 4, 7, 10},
S_4 = {2, 4, 7, 8, 11}, S_5 = {3, 6, 9, 12}, S_6 = {10, 11}.
Optimal solution: S' = {S_3, S_4, S_5}.

Algorithm 1 Greedy Set Cover
Input: U with n items; S with m subsets; cost function c(S_i).
Output: subcollection S' ⊆ S such that ∪_{S_k ∈ S'} S_k = U.
1: C ← ∅
2: while C ≠ U do
3:   Find the most cost-effective set S, i.e., the S minimizing c(S)/|S \ C|.
4:   For each e ∈ S \ C, set price(e) = c(S)/|S \ C|; then set C ← C ∪ S.
5: end while
6: Output the selected sets S'.
The cost-effectiveness of a set S is the average cost at which it covers new elements; the price of an element e is the average cost at which e was covered.
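Algorithm 1 can be sketched as runnable code. This is a minimal Python version (data structures and names are mine, not from the slides); for clarity it recomputes |S \ C| directly instead of maintaining the price bookkeeping:

```python
def greedy_set_cover(universe, sets, cost):
    """sets: dict name -> frozenset; cost: dict name -> positive cost."""
    covered, chosen = set(), []
    while covered != universe:
        # The most cost-effective set: minimize cost per newly covered element.
        best = min(
            (s for s in sets if sets[s] - covered),
            key=lambda s: cost[s] / len(sets[s] - covered),
        )
        chosen.append(best)
        covered |= sets[best]
    return chosen

U = set(range(1, 7))
S = {"S1": frozenset({1, 2, 3}), "S2": frozenset({4, 5}), "S3": frozenset({1, 4, 5, 6})}
c = {"S1": 1.0, "S2": 1.0, "S3": 1.0}
result = greedy_set_cover(U, S, c)
print(result)  # ['S3', 'S1']: S3 covers 4 new elements first, then S1 the rest
```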

Time Complexity
Theorem: Greedy Set Cover has time complexity O(mn).
Proof: (1) There are at most O(min{m, n}) iterations to select the subcollection, and within each iteration finding the most cost-effective set requires O(m) time. (2) There are n elements in total, and for each e_i the price modification is performed at most O(m) times, each involving only elementary operations; in total the price-updating procedure requires O(mn) time. Thus the total running time is O(min{m, n}) · O(m) + O(mn) = O(mn).

Approximation Ratio
Theorem: Greedy Set Cover is an H_n-factor approximation algorithm for the minimum set cover problem, where H_n = 1 + 1/2 + ... + 1/n is the n-th harmonic number. (Log-APX)
Proof: Let m_g(U) be the cost of the Greedy Set Cover solution and m*(U) the cost of an optimal solution. Number the elements of U in the order in which they were covered by the algorithm, resolving ties arbitrarily, and let e_1, ..., e_n be this numbering.
Observation: for each k ∈ {1, ..., n}, price(e_k) ≤ m*(U)/(n − k + 1).

Proof (Continued)
In any iteration, the optimal solution can cover the remaining elements C̄ at a cost of m*(U). Therefore, among the remaining sets, there must be one with cost-effectiveness at most m*(U)/|C̄|. In the iteration in which e_k was covered, |C̄| ≥ n − k + 1. Thus
price(e_k) ≤ m*(U)/|C̄| ≤ m*(U)/(n − k + 1).

Proof (Continued)
The total cost of the sets picked by the algorithm equals Σ_{k=1}^{n} price(e_k). Then
m_g(U) = Σ_{k=1}^{n} price(e_k) ≤ (1 + 1/2 + ... + 1/n) · m*(U) = H_n · m*(U).

Tight Example
(Figure: the universe has n elements; each element alone forms a singleton set, of costs 1/n, 1/(n−1), ..., 1 respectively, and one additional set covers all n elements at cost 1 + ε.)
The optimal cover has a cost of 1 + ε, while the greedy algorithm outputs a cover of cost 1/n + 1/(n−1) + ... + 1 = H_n.
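The tight example can be checked numerically. The sketch below (Python; the instance construction and the eps value are mine, following the standard tight instance) runs the greedy rule on n singletons of costs 1/n, ..., 1 plus one set of cost 1 + ε covering everything, and recovers exactly H_n:

```python
def greedy_cost(n, eps=0.01):
    sets = {i: frozenset({i}) for i in range(n)}   # singleton covering element i
    cost = {i: 1.0 / (n - i) for i in range(n)}    # costs 1/n, ..., 1
    sets["big"] = frozenset(range(n))              # one set covering everything
    cost["big"] = 1.0 + eps
    covered, total = set(), 0.0
    while len(covered) < n:
        best = min((s for s in sets if sets[s] - covered),
                   key=lambda s: cost[s] / len(sets[s] - covered))
        total += cost[best]
        covered |= sets[best]
    return total

n = 8
print(greedy_cost(n))                          # equals H_8 (the optimum costs only 1.01)
print(sum(1.0 / k for k in range(1, n + 1)))   # the harmonic number H_8, for comparison
```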

Maximum Knapsack Problem
Instance: a finite set X of items and a positive integer b; each item x_i ∈ X has a value p_i ∈ Z^+ and a size a_i ∈ Z^+.
Solution: a set of items Y ⊆ X such that Σ_{x_i ∈ Y} a_i ≤ b.
Measure: the total value of the chosen items, Σ_{x_i ∈ Y} p_i.

Algorithm 2 Greedy Knapsack
Input: X with n items and capacity b; for each x_i ∈ X, value p_i and size a_i.
Output: subset Y ⊆ X such that Σ_{x_i ∈ Y} a_i ≤ b.
1: Sort X in non-increasing order of the ratio p_i/a_i; let x_1, ..., x_n be the sorted sequence.
2: Y ← ∅
3: for i = 1 to n do
4:   if b ≥ a_i then
5:     Y ← Y ∪ {x_i}
6:     b ← b − a_i
7:   end if
8: end for
9: Return Y
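Algorithm 2 in runnable form. A minimal Python sketch (the representation and names are mine, not from the slides): scan items by non-increasing profit/size ratio and take whatever still fits.

```python
def greedy_knapsack(items, b):
    """items: list of (value p_i, size a_i) pairs; b: capacity."""
    chosen = []
    # Non-increasing order of the profit/size ratio p_i / a_i.
    for p, a in sorted(items, key=lambda it: it[0] / it[1], reverse=True):
        if a <= b:          # the item still fits
            chosen.append((p, a))
            b -= a
    return chosen

items = [(60, 10), (100, 20), (120, 30)]   # (value, size) pairs
picked = greedy_knapsack(items, b=50)
print(picked, sum(p for p, _ in picked))   # [(60, 10), (100, 20)] 160
```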

Time Complexity
Theorem: Greedy Knapsack has time complexity O(n log n).
Proof: The items are considered in non-increasing order of the profit/occupancy ratio. (1) Sorting the items requires O(n log n) time; (2) the rest of the algorithm is linear in their number. Thus the total running time is O(n log n).

Approximation Ratio
Theorem: The solution of Greedy Knapsack can be arbitrarily far from the optimal value.
Proof (a worst-case example): Consider an instance X of Maximum Knapsack with n items:
p_i = a_i = 1 for i = 1, ..., n − 1;
p_n = b − 1 and a_n = b = kn, where k is an arbitrarily large number.
Let m*(X) be the value of an optimal solution and m_g(X) the value of the Greedy Knapsack solution. Then m*(X) = b − 1 while m_g(X) = n − 1, so
m*(X)/m_g(X) = (kn − 1)/(n − 1) > k.

Improvement
The poor behavior of Greedy Knapsack is due to the fact that the algorithm may fail to include the element of highest profit in the solution, while the optimal solution contains only this element.
Theorem: Given an instance X of the Maximum Knapsack problem, let
m_H(X) = max{p_max, m_g(X)},
where p_max is the maximum profit of an item in X. Then m_H(X) satisfies
m*(X)/m_H(X) < 2. (Constant-factor approximation)

Proof (1)
Let j be the first index of an item in the order that cannot be included. The profit achieved so far (up to item j) is
p̄_j = Σ_{i=1}^{j−1} p_i ≤ m_g(X),
and the total occupancy (size) is
ā_j = Σ_{i=1}^{j−1} a_i ≤ b.
Observation: m*(X) < p̄_j + p_j.

Proof (2)
The x_i are ordered by p_i/a_i, so exchanging any subset of the chosen items x_1, ..., x_{j−1} for any subset of the unchosen items x_j, ..., x_n that does not increase ā_j cannot increase the overall profit. Thus m*(X) is bounded by p̄_j plus the maximum profit obtainable by filling the remaining space at the best remaining ratio. Since ā_j + a_j > b (otherwise x_j would have been selected), we obtain
m*(X) ≤ p̄_j + (b − ā_j) · (p_j/a_j) < p̄_j + p_j.

Proof (3)
To complete the proof we consider two cases.
If p_j ≤ p̄_j, then m*(X) < p̄_j + p_j ≤ 2 p̄_j ≤ 2 m_g(X) ≤ 2 m_H(X).
If p_j > p̄_j, then p_max ≥ p_j > p̄_j, and m*(X) < p̄_j + p_max < 2 p_max ≤ 2 m_H(X).
Thus the modified Greedy Knapsack is a 2-approximation.

Problem Definition
Instance: a graph G = (V, E).
Solution: an independent set V' ⊆ V of G, i.e., a subset such that, for each edge (u, v) ∈ E, u ∉ V' or v ∉ V'.
Measure: the cardinality of the independent set, |V'|.

An Example
(Figure) The cube graph has 6 maximal independent sets (red nodes).

Algorithm 3 Greedy Independent Set
Input: graph G = (V, E).
Output: independent node subset V' ⊆ V of G.
1: V' ← ∅
2: U ← V
3: while U ≠ ∅ do
4:   x ← a vertex of minimum degree in the graph induced by U
5:   V' ← V' ∪ {x}
6:   Eliminate x and all its neighbors from U
7: end while
8: Return V'
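Algorithm 3 as runnable code. A minimal Python sketch (the adjacency representation and names are mine, not from the slides): repeatedly take a minimum-degree vertex of the surviving induced subgraph, then delete it together with its neighbors.

```python
def greedy_independent_set(adj):
    """adj: dict vertex -> set of neighbors. Returns an independent set."""
    remaining = set(adj)
    chosen = set()
    while remaining:
        # Vertex of minimum degree in the subgraph induced by `remaining`.
        x = min(remaining, key=lambda v: len(adj[v] & remaining))
        chosen.add(x)
        remaining -= {x} | adj[x]   # eliminate x and all its neighbors
    return chosen

# A 4-cycle: the algorithm returns two opposite vertices.
adj = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
s = greedy_independent_set(adj)
print(s)  # an independent set of size 2, e.g. {0, 2}
```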

Worst Case Example
(Figure: a vertex v, a clique K_4, and an independent set I_4; v is adjacent to every node of I_4, and every node of I_4 is adjacent to every node of K_4.)
Let K_4 be a clique with four nodes and I_4 an independent set of four nodes. v has minimum degree, so it is the first vertex chosen by the algorithm, and the resulting solution contains this node and exactly one node of K_4. The optimal solution contains I_4. Generalizing K_4 and I_4 to a clique and an independent set of n nodes each gives m*(X)/m_g(X) ≥ n/2.

Approximation Ratio
Theorem: Given a graph G with n vertices and m edges, let δ = m/n. The approximation ratio of Greedy Independent Set is
m*(G)/m_g(G) ≤ δ + 1. (Poly-APX)
Proof: Define:
V*: the optimal independent set for G;
x_i: the vertex chosen at the i-th iteration of the algorithm;
d_i: the degree of x_i (each iteration thus removes d_i + 1 vertices);
k_i: the number of vertices of V* among the d_i + 1 vertices deleted in the i-th iteration.

Proof (2)
Since the algorithm stops when all vertices are eliminated,
Σ_{i=1}^{m_g(G)} (d_i + 1) = n.
The k_i count distinct vertices of V*, so
Σ_{i=1}^{m_g(G)} k_i = |V*| = m*(G).
In each iteration the degree of each deleted vertex is at least d_i, and an edge cannot have both endpoints in V*, so the number of edges deleted is at least (d_i(d_i + 1) + k_i(k_i − 1))/2. Hence
Σ_{i=1}^{m_g(G)} (d_i(d_i + 1) + k_i(k_i − 1))/2 ≤ m = δn.

Proof (3)
Adding the three inequalities together, we have
Σ_{i=1}^{m_g(G)} (d_i(d_i + 1) + k_i(k_i − 1) + (d_i + 1) + k_i) = Σ_{i=1}^{m_g(G)} ((d_i + 1)² + k_i²) ≤ 2δn + n + m*(G) = n(2δ + 1) + m*(G).
By the Cauchy-Schwarz inequality, the left-hand side is minimized when d_i + 1 = n/m_g(G) and k_i = m*(G)/m_g(G); hence
(n² + m*(G)²)/m_g(G) ≤ Σ_{i=1}^{m_g(G)} ((d_i + 1)² + k_i²) ≤ n(2δ + 1) + m*(G).
(Cauchy-Schwarz: Σ_{i=1}^{N} x_i² ≥ (Σ_{i=1}^{N} x_i)²/N, with equality when x_1 = ... = x_N.)

Proof (4)
Thus
m_g(G) ≥ (n² + m*(G)²)/(n(2δ + 1) + m*(G)),
and therefore
m*(G)/m_g(G) ≤ m*(G) · (n(2δ + 1) + m*(G))/(n² + m*(G)²) = (2δ + 1 + m*(G)/n)/(n/m*(G) + m*(G)/n).
The right-hand side is maximized when m*(G) = n, giving
m*(G)/m_g(G) ≤ (2δ + 1 + 1)/(1 + 1) = δ + 1.
Note: max(m) = n(n − 1)/2, attained when G is a clique K_n, in which case δ = (n − 1)/2.

Procedure
Given: an instance of the problem, which specifies a set of items I = {x_1, ..., x_n}.
Goal: determine a suitable partition that satisfies the problem constraints, maximizing or minimizing the measure function.
Steps:
Sort the items according to some criterion.
Build the output partition P sequentially. Note that when the algorithm considers item x_i, it is not allowed to modify the placement of the items x_j with j < i (each item is assigned only once).

Maximum Cut Problem
Instance: a graph G = (V, E).
Solution: a partition of V into sets S and S̄.
Measure: the number of edges running between S and S̄ (to be maximized).

Algorithm 4 Sequential Maximum Cut
Input: G = (V, E).
Output: partition of V into A and B.
1: Pick v_1, v_2 from V arbitrarily; set A ← {v_1}, B ← {v_2}
2: for each v ∈ V \ {v_1, v_2} do
3:   if d(v, A) ≥ d(v, B) then
4:     B ← B ∪ {v}
5:   else
6:     A ← A ∪ {v}
7:   end if
8: end for
9: Return A, B.
Here d(v, A) denotes the number of edges between v and the vertices in A.
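Algorithm 4 as runnable code. A minimal Python sketch (names are mine, not from the slides): each vertex goes to the side opposite the one holding more of its already placed neighbors, so at least half of its placed edges cross the cut.

```python
def sequential_max_cut(vertices, edges):
    adj = {v: set() for v in vertices}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    A, B = {vertices[0]}, {vertices[1]}
    for v in vertices[2:]:
        # Compare d(v, A) with d(v, B), counting only already placed neighbors.
        if len(adj[v] & A) >= len(adj[v] & B):
            B.add(v)
        else:
            A.add(v)
    return A, B

V = [0, 1, 2, 3]
E = [(0, 1), (1, 2), (2, 3), (3, 0)]   # a 4-cycle
A, B = sequential_max_cut(V, E)
cut = sum(1 for u, v in E if (u in A) != (v in A))
print(A, B, cut)  # here every edge crosses: cut = 4
```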

Approximation Ratio
Theorem: Sequential Maximum Cut has approximation ratio 2.
Proof: Consider each edge (v_i, v_j). Whether it belongs to the cut is determined at the moment the later of v_i, v_j is fixed; we can therefore partition the edge set by this "decision vertex". At each iteration, by the algorithm's strategy, at least half of the edges in each part are assigned to the cut, and once assigned they never change again. Thus A_g ≥ |E|/2. It is easy to see that OPT ≤ |E|. Hence OPT/A_g ≤ |E|/(|E|/2) = 2.

Tight Example
(Figure: a four-vertex graph shown twice, once with the optimal partition OPT and once with the partition A_g produced by the algorithm; the algorithm's cut contains only half as many edges as the optimal cut.)

Minimum Scheduling on Identical Machines
Instance: a set of jobs T, a number p of machines, and a length l_j for executing each job t_j ∈ T.
Solution: a p-machine schedule for T, i.e., a function f : T → [1, ..., p].
Measure: minimize the schedule's makespan, i.e.,
min max_{i ∈ [1, ..., p]} Σ_{t_j ∈ T : f(t_j) = i} l_j.
Note: this problem is NP-Hard even in the case p = 2.

Algorithm 5 Largest Processing Time
Input: set T with n jobs, each of length l_j; p machines.
Output: partition (schedule) P of T.
1: Sort T in non-increasing order of processing time; let t_1, ..., t_n be the obtained sequence, with l_1 ≥ ... ≥ l_n.
2: P ← {{t_1}, ∅, ..., ∅}
3: for i = 2 to n do
4:   Find a machine p_j with minimum finish time A_j(i − 1) = min_{1 ≤ j ≤ p} Σ_{1 ≤ k ≤ i−1 : f(t_k) = j} l_k
5:   Append t_i to p_j
6: end for
7: Return P
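Algorithm 5 as runnable code. A minimal Python sketch (names and the example instance are mine, not from the slides), using a heap to find the least loaded machine:

```python
import heapq

def lpt_schedule(lengths, p):
    """Largest Processing Time on p identical machines; returns (makespan, assignment)."""
    machines = [(0, j) for j in range(p)]   # (current load, machine id)
    heapq.heapify(machines)
    assignment = {}
    # Jobs in non-increasing order of length.
    for i, l in sorted(enumerate(lengths), key=lambda x: -x[1]):
        load, j = heapq.heappop(machines)   # least loaded machine so far
        assignment[i] = j
        heapq.heappush(machines, (load + l, j))
    return max(load for load, _ in machines), assignment

makespan, _ = lpt_schedule([7, 6, 5, 4, 3], p=2)
# LPT builds loads {7, 4, 3} and {6, 5}: makespan 14, while the optimum is 13
# ({7, 6} vs {5, 4, 3}); the gap stays within the 4/3 - 1/(3p) bound.
print(makespan)  # 14
```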

Approximation Ratio
Theorem: Largest Processing Time has approximation ratio 4/3 − 1/(3p).
Proof: By contradiction. Assume the theorem does not hold, and let T be a violating instance with the minimum number of jobs. Let t_j be the job of T that is considered last by Largest Processing Time and let l_min be its length (the shortest one). Consider two cases: l_min > m*(T)/3 and l_min ≤ m*(T)/3.

Proof (2)
If l_min > m*(T)/3, then at most two jobs may have been assigned to any machine (otherwise some machine's load would exceed m*(T), contradicting the definition of m*(T)). Then p < |T| ≤ 2p. Next, let us prove that in this case m_L(T) = m*(T).
For |T| < 2p, we can add 2p − |T| virtual jobs of length 0, so we may assume |T| = 2p. It is easy to see that both the greedy approach and the optimal solution divide these 2p jobs into p pairs.

Proof (3)
Assume m_L(T) is the load of the i-th machine (clearly i ≤ p, and the i-th machine attains the makespan). Then m_L(T) = l_i + l_{2p−i+1}.
If l_{2p−i+1} = 0, then machine i holds the single job l_i, and m_L(T) = m*(T) = l_i.
If l_{2p−i+1} > 0, then machine i has two jobs of positive length. Assume m_L(T) > m*(T) in this scenario, and consider the pair placed on the i-th machine in the optimal solution. Then, in the optimal solution, the pairs containing {l_1, ..., l_{i−1}} must contain some l_j (1 ≤ j ≤ i − 1) whose new partner makes its machine's load greater than m_L(T), a contradiction.

Proof (4)
If l_min ≤ m*(T)/3, let W = Σ_{k=1}^{|T|} l_k; then m*(T) ≥ W/p.
Since T is a minimum counterexample, the instance T' obtained from T by removing job t_j satisfies the claim, i.e., m_L(T) > m_L(T'). Thus Largest Processing Time assigns t_j to the machine h that ends up with the largest processing time.

Proof (5)
Since t_j was assigned to the least loaded machine, the finish time of every other machine is at least A_h(|T|) − l_j. Then W ≥ p(A_h(|T|) − l_j) + l_j, and we obtain
m_L(T) = A_h(|T|) ≤ W/p + ((p − 1)/p) · l_min.
Since m*(T) ≥ W/p and l_min ≤ m*(T)/3, we have
m_L(T) ≤ m*(T) + ((p − 1)/(3p)) · m*(T) = (4/3 − 1/(3p)) · m*(T).