CSC2420 Fall 2012: Algorithm Design, Analysis and Theory


Allan Borodin
September 27, 2012

Lecture 3

1. We first do the analysis of the Greedy_α revocable priority algorithm for the WISP problem, showing that the algorithm has an approximation ratio of 1/(α(1−α)), which is optimized at α = 1/2.
2. We mention a similar algorithm: Graham's convex hull scan algorithm.
3. We then introduce the priority stack model.
4. We return briefly to greedy algorithms and consider the natural greedy algorithms for the weighted versions of set packing and vertex cover.
5. We then move on to two dynamic programming algorithms, for the makespan and knapsack problems.

The Greedy_α algorithm for WJISP

The algorithm as stated by Erlebach and Spieksma (and called ADMISSION by Bar-Noy et al) is as follows:

    S := ∅   % S is the set of currently accepted intervals
    Sort input intervals so that f_1 ≤ f_2 ≤ ... ≤ f_n
    For i = 1..n
        C_i := a minimum weight subset C of S such that (S \ C) ∪ {I_i} is feasible
        If v(C_i) ≤ α · v(I_i) then
            S := (S \ C_i) ∪ {I_i}
        End If
    End For

Figure: Priority algorithm with revocable acceptances for WJISP

The Greedy_α algorithm (which is not greedy by my definition) has a tight approximation ratio of 1/(α(1−α)) for WISP and 2/(α(1−α)) for WJISP.

Graham's [1972] convex hull algorithm

Graham's scan algorithm for determining the convex hull of a set of n points in the plane is an efficient (O(n log n) time) algorithm in the framework of the revocable priority model. (This is not a search or optimization problem, but it does fit the framework of making a decision for each input point. For simplicity, assume no three points are collinear.)

    Choose a point that must be in the convex hull
        (e.g. the leftmost point p_1 among those having the smallest y coordinate)
    Sort the remaining points p_2, ..., p_n by increasing angle with respect to p_1
    Push p_1, p_2, p_3 onto a Stack
    For i = 4..n
        Push p_i onto the Stack
        While the three top points u, v, p_i on the Stack make a right turn
            Remove v from the Stack
        End While
    End For
    Return the points on the Stack as the points defining the convex hull

Figure: The Graham convex hull scan algorithm
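The scan above can be sketched in Python (the helper names and sample points are my own); like the slide, it assumes no three input points are collinear:

```python
import math

def cross(o, a, b):
    # Positive for a left turn o -> a -> b, negative for a right turn.
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def graham_scan(points):
    # Anchor: lowest y (ties broken by smallest x) -- a guaranteed hull point.
    p1 = min(points, key=lambda p: (p[1], p[0]))
    rest = sorted((p for p in points if p != p1),
                  key=lambda p: math.atan2(p[1] - p1[1], p[0] - p1[0]))
    stack = [p1]
    for p in rest:
        # Pop the middle point while the top two points and p fail to turn left.
        while len(stack) >= 2 and cross(stack[-2], stack[-1], p) <= 0:
            stack.pop()
        stack.append(p)
    return stack

hull = graham_scan([(0, 0), (4, 0), (4, 4), (0, 4), (1, 2), (2, 1)])
```

The sort dominates the running time; each point is pushed and popped at most once, so the scan itself is linear.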

Priority Stack Algorithms

For packing problems, instead of immediate permanent acceptances, in the first phase of a priority stack algorithm, items (that have not been immediately rejected) can be placed on a stack. After all items have been considered (in the first phase), a second phase consists of popping the stack so as to ensure feasibility. That is, while popping the stack, an item becomes permanently accepted if it can be feasibly added to the current set of permanently accepted items; otherwise it is rejected. Within this priority stack model (which models a class of primal-dual with reverse delete algorithms and a class of local-ratio algorithms), the weighted interval selection problem can be computed optimally.

For covering problems (such as min weight set cover and min weight Steiner tree), the popping stage ensures the minimality of the solution; that is, when popping item I from the stack, if the current set of permanently accepted items plus the items still on the stack already constitute a solution, then I is deleted; otherwise it becomes a permanently accepted item.

Chordal graphs and perfect elimination orderings

An interval graph is an example of a chordal graph. There are a number of equivalent definitions of chordal graphs, the standard one being that there are no induced cycles of length greater than 3. We shall use the characterization that a graph G = (V, E) is chordal iff there is an ordering of the vertices v_1, ..., v_n such that for all i, Nbhd(v_i) ∩ {v_{i+1}, ..., v_n} is a clique. Such an ordering is called a perfect elimination ordering (PEO).

It is easy to see that the graph induced by interval intersection has a PEO (and hence interval graphs are chordal) by ordering the intervals such that f_1 ≤ f_2 ≤ ... ≤ f_n. Trees are also chordal graphs, and a PEO is obtained by stripping off leaves one by one.
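The claim about interval graphs can be checked directly in code (the representations below are my own): order the intervals by finishing time and verify that every vertex's later neighbours pairwise intersect, i.e. form a clique.

```python
def interval_graph(intervals):
    # intervals: dict name -> (start, finish); edge iff the closed intervals overlap
    g = {v: set() for v in intervals}
    for v, (s1, f1) in intervals.items():
        for u, (s2, f2) in intervals.items():
            if u != v and s1 <= f2 and s2 <= f1:
                g[v].add(u)
    return g

def is_peo(order, graph):
    # A PEO requires each vertex's later neighbours to form a clique.
    pos = {v: i for i, v in enumerate(order)}
    for v in order:
        later = [u for u in graph[v] if pos[u] > pos[v]]
        if any(b not in graph[a] for a in later for b in later if a != b):
            return False
    return True

intervals = {'a': (0, 2), 'b': (1, 4), 'c': (3, 6), 'd': (5, 8), 'e': (1, 7)}
g = interval_graph(intervals)
peo = sorted(intervals, key=lambda v: intervals[v][1])  # earliest finish first
```

On this instance the earliest-finishing-time order passes the check, while the reverse order does not.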

MIS and colouring chordal graphs

Using this ordering (by earliest finishing time), we know that there is a greedy (i.e. priority) algorithm that optimally selects a maximum size set of non-intersecting intervals. The same algorithm (and proof by charging argument) using a PEO in a fixed order greedy algorithm optimally solves the unweighted MIS problem for any chordal graph. This can be shown by an inductive argument showing that the partial solution after the ith iteration is promising, in that it can be extended to an optimal solution; it can also be shown by a simple charging argument.

We also know that the greedy algorithm that orders intervals such that s_1 ≤ s_2 ≤ ... ≤ s_n and then colours nodes using the smallest feasible colour is an optimal algorithm for colouring interval graphs. Ordering by earliest starting times is (by symmetry) equivalent to ordering by latest finishing times first. The generalization of this is that any chordal graph can be optimally coloured by a greedy algorithm that orders vertices by the reverse of a PEO. This can be shown by arguing that when the algorithm first uses colour k, it is witnessing a clique of size k.
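Both fixed-order greedy algorithms are short enough to sketch (the graph representation is my own): MIS scans a PEO and accepts any vertex independent of those already chosen; colouring scans the reverse PEO and assigns the smallest free colour.

```python
def greedy_mis(peo, graph):
    # Unweighted MIS: accept v if none of its neighbours is already chosen.
    chosen = set()
    for v in peo:
        if not (graph[v] & chosen):
            chosen.add(v)
    return chosen

def greedy_colour(peo, graph):
    # Colour in reverse PEO with the smallest colour unused by neighbours.
    colour = {}
    for v in reversed(peo):
        used = {colour[u] for u in graph[v] if u in colour}
        colour[v] = next(c for c in range(len(graph)) if c not in used)
    return colour

# The path a-b-c-d is chordal, and a, b, c, d is a PEO for it.
g = {'a': {'b'}, 'b': {'a', 'c'}, 'c': {'b', 'd'}, 'd': {'c'}}
mis = greedy_mis(['a', 'b', 'c', 'd'], g)
col = greedy_colour(['a', 'b', 'c', 'd'], g)
```

On the path, the greedy MIS has size 2 and the colouring uses 2 colours, both optimal (the clique number is 2).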

The optimal priority stack algorithm for the WMIS problem in chordal graphs; Akcoglu et al [2002]

    Stack := ∅   % Stack is the set of items on the stack
    Sort nodes using a PEO
    Set w'(v_i) := w(v_i) for all v_i   % w'(v) will be the residual weight of a node
    For i = 1..n
        C_i := {v_j | j < i, v_i ∈ Nbhd(v_j) and v_j is on the Stack}
        w'(v_i) := w'(v_i) − w'(C_i)   % where w'(C_i) = Σ_{u ∈ C_i} w'(u)
        If w'(v_i) > 0 then push v_i onto the Stack; else reject v_i
    End For
    S := ∅   % S will be the set of accepted nodes
    While Stack ≠ ∅
        Pop the next node v from the Stack
        If v is not adjacent to any node in S, then S := S ∪ {v}
    End While
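The stack algorithm can be sketched in Python (the representations are my own): graph[v] is the set of neighbours of v, and `order` lists the vertices in a PEO. Here it is run on a small weighted interval graph.

```python
def wmis_stack(order, graph, w):
    residual = dict(w)
    stack = []
    # Push phase: discount each vertex by the residual weight of its earlier
    # stacked neighbours; push only if positive residual weight remains.
    for i, v in enumerate(order):
        on_stack = set(stack)
        c = [u for u in order[:i] if u in graph[v] and u in on_stack]
        residual[v] -= sum(residual[u] for u in c)
        if residual[v] > 0:
            stack.append(v)  # otherwise v is rejected
    # Popping phase: accept greedily while maintaining independence.
    accepted = set()
    while stack:
        v = stack.pop()
        if not (graph[v] & accepted):
            accepted.add(v)
    return accepted

# Intervals A=[0,3] (w=4), B=[2,5] (w=5), C=[4,7] (w=4); PEO: A, B, C.
graph = {'A': {'B'}, 'B': {'A', 'C'}, 'C': {'B'}}
best = wmis_stack(['A', 'B', 'C'], graph, {'A': 4, 'B': 5, 'C': 4})
```

The algorithm returns {A, C} of weight 8, which beats taking B alone (weight 5), matching the optimality claim for chordal graphs.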

The natural greedy algorithm: WMIS for k+1-claw free graphs

We return briefly to greedy algorithms, considering the vague idea of "the natural greedy algorithm".

k+1-claw free graphs
A graph G = (V, E) is k+1-claw free if for all v ∈ V, the induced subgraph of Nbhd(v) has at most k independent vertices (i.e. does not have a k+1-claw as an induced subgraph).

There are many types of graphs that are k+1-claw free for small k; in particular, the intersection graph of axis parallel translates of a convex object in the two dimensional plane is a 6-claw free graph. For rectangles, the intersection graph is 5-claw free. Let {S_1, ..., S_n} be subsets of a universe U such that |S_i| ≤ k. The intersection graph G = (V, E) defined by (S_i, S_j) ∈ E iff S_i ∩ S_j ≠ ∅ is a k+1-claw free graph.

k+1-claw free graphs and the natural greedy algorithm continued

Of special note are 3-claw free graphs, which are simply called claw free. For example, line graphs are 3-claw free: a matching in a graph G corresponds to an independent set in the line graph L(G).

The natural greedy algorithm for WMIS
The natural greedy algorithm sorts the vertices such that w(v_1) ≥ w(v_2) ≥ ... ≥ w(v_n) and then accepts vertices greedily; i.e. if a vertex is independent of previously accepted vertices, then accept it.

The natural greedy algorithm provides a k-approximation for WMIS on k+1-claw free graphs. (A more complex algorithm, generalizing weighted matching, optimally solves WMIS for 3-claw free graphs.)
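A minimal sketch of the natural greedy (the graph representation is my own), run on the 3-claw free line graph of a path: vertices e1-e2-e3 with weights 3, 5, 4. Greedy takes e2 (weight 5) while the optimum {e1, e3} has weight 7, within the promised factor k = 2.

```python
def natural_greedy_wmis(graph, w):
    # Consider vertices by non-increasing weight; accept if independent of
    # everything already accepted.
    chosen = set()
    for v in sorted(graph, key=lambda v: w[v], reverse=True):
        if not (graph[v] & chosen):
            chosen.add(v)
    return chosen

picked = natural_greedy_wmis({'e1': {'e2'}, 'e2': {'e1', 'e3'}, 'e3': {'e2'}},
                             {'e1': 3, 'e2': 5, 'e3': 4})
```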

The natural greedy might not be the best greedy: set packing

We consider two examples (weighted set packing and vertex cover) where the seemingly most natural greedy algorithm is not the best greedy (approximation) algorithm.

The weighted set packing problem (AKA the single minded combinatorial auction problem) is defined as follows: there is a universe U of size m, a collection of subsets {S_1, ..., S_n} of U, and a weight w_i for each set S_i. The objective is to select a non-intersecting subcollection of these subsets so as to maximize the weight of the chosen sets. It is NP-hard to obtain approximation ratio m^{1/2 − ε} for any ε > 0.

The induced intersection graph is m+1-claw free and hence the natural greedy algorithm yields a min{m, n} approximation. Let's assume n >> m and try to improve upon the m approximation. The next most natural greedy algorithm (ordering sets by non-increasing w(S)/|S|) is still an m-approximation. We can obtain approximation 2√m by ordering sets by non-increasing w(S)/√|S|.
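The square-root greedy can be sketched as follows (names and the sample instance are my own): order sets by non-increasing w(S)/√|S| and accept any set disjoint from those already accepted.

```python
import math

def sqrt_greedy_packing(sets):
    # sets: list of (weight, frozenset) pairs
    order = sorted(sets, key=lambda p: p[0] / math.sqrt(len(p[1])), reverse=True)
    used, chosen = set(), []
    for w, s in order:
        if not (s & used):
            chosen.append((w, s))
            used |= s
    return chosen

# One set of weight 3 covering the whole universe versus four unit-weight
# singletons: the big set wins the w/sqrt(|S|) comparison (1.5 vs 1), so
# greedy collects weight 3 while the optimum (all singletons) has weight 4.
sets = [(3, frozenset({1, 2, 3, 4}))] + [(1, frozenset({i})) for i in range(1, 5)]
chosen = sqrt_greedy_packing(sets)
```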

Second example where natural greedy is not best: weighted vertex cover

If we consider vertex cover as a special case of set cover, then the natural greedy (which is essentially optimal for set cover) becomes the following:

    d'(v) := d(v) for all v ∈ V   % d'(v) will be the residual degree of a node
    While there are uncovered edges
        Let v be the node minimizing w(v)/d'(v)
        Add v to the vertex cover; remove all edges in Nbhd(v);
        recalculate the residual degree of all nodes in Nbhd(v)
    End While

Figure: Natural greedy algorithm for weighted vertex cover with approximation ratio H_n

Clarkson's [1983] modified greedy for weighted vertex cover

    d'(v) := d(v) for all v ∈ V   % d'(v) will be the residual degree of a node
    w'(v) := w(v) for all v ∈ V   % w'(v) will be the residual weight of a node
    While there are uncovered edges
        Let v be the node minimizing w'(v)/d'(v)
        t := w'(v)/d'(v)
        w'(u) := w'(u) − t for all u ∈ Nbhd(v)
        Add v to the vertex cover; remove all edges in Nbhd(v);
        recalculate the residual degree of all nodes in Nbhd(v)
    End While

Figure: Clarkson's greedy algorithm for weighted vertex cover with approximation ratio 2
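Clarkson's rule can be sketched in Python (the graph representation is my own): each chosen vertex discounts the residual weights of its neighbours along the uncovered edges it removes.

```python
def clarkson_vertex_cover(graph, w):
    # graph: dict vertex -> set of neighbours; w: dict of vertex weights.
    residual_w = dict(w)
    uncovered = {frozenset((u, v)) for u in graph for v in graph[u]}
    cover = set()
    while uncovered:
        def deg(x):  # residual degree = number of uncovered edges at x
            return sum(1 for e in uncovered if x in e)
        candidates = [v for v in graph if v not in cover and deg(v) > 0]
        v = min(candidates, key=lambda x: residual_w[x] / deg(x))
        t = residual_w[v] / deg(v)
        for e in [e for e in uncovered if v in e]:
            (u,) = e - {v}          # the other endpoint of the edge
            residual_w[u] -= t      # discount the neighbour's residual weight
            uncovered.remove(e)
        cover.add(v)
    return cover

# A star: centre 'c' of weight 2, three unit-weight leaves.
star = {'c': {'l1', 'l2', 'l3'}, 'l1': {'c'}, 'l2': {'c'}, 'l3': {'c'}}
cover = clarkson_vertex_cover(star, {'c': 2, 'l1': 1, 'l2': 1, 'l3': 1})
```

On the star, the centre has the smallest weight-to-degree ratio (2/3 versus 1), so the algorithm finds the optimal cover {c}.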

A common generalization of k+1-claw free graphs and chordal graphs

One vague theme I try to think about is the interplay between classes of problems and classes of algorithms. In some way this leads to a common extension of chordal and k+1-claw free graphs, implicitly defined in Akcoglu et al [2002] and pursued in Ye and B. [2009]. A graph is inductively k-independent if there is a "k-PEO" ordering of the vertices v_1, ..., v_n such that Nbhd(v_i) ∩ {v_{i+1}, ..., v_n} has at most k independent vertices. For example:

The JISP problem induces an inductively 2-independent graph.
Every planar graph is inductively 3-independent.

It can be shown that the WMIS stack algorithm and analysis for chordal graphs extend to provide a k-approximation for inductively k-independent graphs by using a k-PEO. The reverse order k-PEO greedy algorithm is a k-approximation for inductively k-independent graphs.

Dynamic programming and scaling

We have previously seen that, with some use of brute force and greediness, we can achieve PTAS algorithms for the identical machines makespan problem (polynomial in the number n of jobs but exponential in the number m of machines) and the knapsack problem. We now consider how dynamic programming (DP) can be used to achieve a PTAS for the makespan problem which is polynomial in m and n, and how to achieve an FPTAS for the knapsack problem. To achieve these improved bounds we will combine dynamic programming with the idea of scaling inputs.

An FPTAS for the knapsack problem

Let the input items be I_1, ..., I_n (in any order) with I_k = (v_k, s_k). The idea for the knapsack FPTAS begins with a pseudo polynomial time DP for the problem, namely an algorithm that is polynomial in the numeric values v_j (rather than in their encoded lengths |v_j|). Define S[j, v] = the minimum size s needed to achieve a profit of at least v using only the inputs I_1, ..., I_j; this is defined to be ∞ if there is no way to achieve this profit using only these inputs.

This is the essence of DP algorithms: namely, defining an appropriate generalization of the problem (which we give in the form of an array) such that

1. the desired result can be easily obtained from the array S[·, ·], and
2. each entry of the array can be easily computed given previous entries.

How to compute the array S[j, v] and why this is sufficient

The value of an optimal solution is max{v | S[n, v] is finite}. We have the following equivalent recursive definition that shows how to compute the entries of S[j, v] for 0 ≤ j ≤ n and 0 ≤ v ≤ Σ_{j=1}^{n} v_j.

1. Basis: S[0, 0] = 0 and S[0, v] = ∞ for all v > 0.
2. Induction: S[j, v] = min{A, B} where A = S[j−1, v] and B = S[j−1, max{v − v_j, 0}] + s_j.

It should be clear that while we are computing these values we can at the same time compute a solution corresponding to each entry in the array. For efficiency one usually computes these entries iteratively, but one could use a recursive program with memoization. The running time is O(nV) where V = Σ_{j=1}^{n} v_j.

Finally, to obtain the FPTAS, the idea (due to Ibarra and Kim [1975]) is simply that the high order bits/digits of the item values give a good approximation to the true value of any solution, and scaling these values down (or up) to the high order bits does not change feasibility.
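The recurrence can be sketched row by row in Python (the function name and sample instance are my own): S[v] holds the minimum total size needed to achieve profit at least v using the items seen so far.

```python
from math import inf

def knapsack_dp(items, capacity):
    # items: list of (value, size) pairs; returns the optimal achievable value.
    # O(nV) time, where V is the sum of all item values.
    V = sum(v for v, _ in items)
    S = [inf] * (V + 1)
    S[0] = 0
    for v_j, s_j in items:
        new = S[:]                        # A = S[j-1, v]
        for v in range(V + 1):
            b = S[max(v - v_j, 0)] + s_j  # B = S[j-1, max(v - v_j, 0)] + s_j
            if b < new[v]:
                new[v] = b
        S = new
    return max(v for v in range(V + 1) if S[v] <= capacity)
```

For the FPTAS one would run this DP on values scaled down to their high order digits, as described above; the DP itself is exact but only pseudo polynomial.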

The better PTAS for makespan

We can think of m as being a parameter of the input instance, and now we want an algorithm whose run time is polynomial in m and n for any fixed ε = 1/s. The algorithm's run time is exponential in 1/ε². We will need a combination of paradigms and techniques to achieve this PTAS; namely, DP and scaling (but less obvious than the knapsack scaling) and binary search.

The high level idea of the makespan PTAS

Let T be a candidate for an achievable makespan value. Depending on T and the required ε, we will scale down the large jobs (i.e. those with p_i ≥ T/s = T·ε) to the largest multiple of T/s², so that there are only d = s² possible scaled values for the large jobs. When there are only a fixed number d of job sizes, we can use DP to test (and find) in time O(n^{2d}) whether there is a solution that achieves makespan T. If there is such a solution, then the small jobs can be greedily scheduled without increasing the makespan too much. We use binary search to find a good T.

The optimal DP for a fixed number of job values

Let z_1, ..., z_d be the d different job sizes and let n = Σ n_i be the total number of jobs, with n_i being the number of jobs of size z_i. Define

M[x_1, ..., x_d] = the minimum number of machines needed to schedule x_i jobs having size z_i (for each i) within makespan T.

The n jobs can be scheduled within makespan T iff M[n_1, ..., n_d] is at most m.

Computing M[x_1, ..., x_d]

Clearly M[0, ..., 0] = 0 for the base case. Let V = {(v_1, ..., v_d) | Σ_i v_i·z_i ≤ T} be the set of configurations that can complete on one machine within makespan T; that is, scheduling v_i jobs of size z_i on one machine does not exceed the target makespan T. Then

M[x_1, ..., x_d] = 1 + min_{(v_1,...,v_d) ∈ V : v_i ≤ x_i} M[x_1 − v_1, ..., x_d − v_d]

There are at most n^d array elements and each entry uses approximately n^d time to compute (given previous entries), so the total time is O(n^{2d}). Must any (say DP) algorithm be exponential in d?
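The configuration DP can be sketched with memoized recursion (the names are my own): enumerate all feasible single-machine configurations, then peel one machine off at a time.

```python
from functools import lru_cache
from itertools import product

def min_machines(sizes, counts, T):
    # Minimum number of machines to schedule counts[i] jobs of size sizes[i]
    # with no machine exceeding makespan T.
    # All non-empty one-machine configurations (v_1..v_d) with sum v_i z_i <= T.
    configs = [v for v in product(*(range(c + 1) for c in counts))
               if any(v) and sum(vi * zi for vi, zi in zip(v, sizes)) <= T]

    @lru_cache(maxsize=None)
    def M(x):
        if not any(x):
            return 0
        best = float('inf')
        for v in configs:
            if all(vi <= xi for vi, xi in zip(v, x)):
                best = min(best, 1 + M(tuple(xi - vi for xi, vi in zip(x, v))))
        return best

    return M(tuple(counts))
```

With two jobs each of sizes 3 and 2, target T = 5 pairs each size-3 job with a size-2 job on two machines, while T = 4 forces a third machine.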

Large jobs and scaling (not worrying about any integrality issues)

A job is large if p_i ≥ T/s = T·ε.
Scale down each large job to a size p'_i = the largest multiple of T/s² not exceeding p_i, so that p_i − T/s² < p'_i ≤ p_i.
There are at most d = s² distinct job sizes p'_i.
There can be at most s large jobs on any machine not exceeding the target makespan T.

Taking care of the small jobs and accounting for the scaling down

We now wish to add in the small jobs, with sizes less than T/s. We keep adding small jobs as long as some machine does not exceed the target makespan T. If this is not possible, then makespan T is not achievable. If we can add in all the small jobs, then to account for the scaling we note that each of the at most s large jobs on a machine was scaled down by at most T/s², so this only increases the makespan to (1 + 1/s)T.