CSC2420 Fall 2012: Algorithm Design, Analysis and Theory

Allan Borodin
September 20, 2012

Lecture 2

We continue where we left off last lecture, namely we are considering a PTAS for the knapsack problem.

Input: A knapsack capacity C and n items I = {I_1, ..., I_n} where I_j = (v_j, s_j), with v_j (resp. s_j) the profit value (resp. size) of item I_j.

Output: A feasible subset S ⊆ {1, ..., n} satisfying ∑_{j ∈ S} s_j ≤ C so as to maximize V(S) = ∑_{j ∈ S} v_j.

Note: I will usually use approximation ratios r ≥ 1 (so that we can talk unambiguously about upper and lower bounds on the ratio), but many people use approximation ratios ρ ≤ 1 for maximization problems; i.e. ALG ≥ ρ·OPT.

It is easy to see that the most natural greedy methods (sort by non-increasing profit densities v_j/s_j, sort by non-increasing profits v_j, sort by non-decreasing sizes s_j) will not yield any constant ratio.
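A minimal Python sketch of the profit density greedy together with a bad instance for it (the function name and the (value, size) pair representation are my choices, not from the lecture): one tiny high-density item blocks one huge item, so the ratio is unbounded.

```python
def density_greedy_value(items, C):
    """Greedy by non-increasing profit density v/s; returns the profit obtained."""
    total, used = 0, 0
    for v, s in sorted(items, key=lambda it: it[0] / it[1], reverse=True):
        if used + s <= C:        # take any item that still fits
            used += s
            total += v
    return total

# Bad instance: with C = M and items {(2, 1), (M, M)}, greedy takes the tiny
# item first (density 2 beats density 1), after which the big item no longer
# fits: ALG = 2 while OPT = M, so no constant ratio is possible.
M = 10**6
print(density_greedy_value([(2, 1), (M, M)], C=M))   # prints 2
```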

The partial enumeration greedy PTAS for knapsack

PGreedy_k Algorithm:
    Sort I so that v_1/s_1 ≥ v_2/s_2 ≥ ... ≥ v_n/s_n
    For every feasible subset H ⊆ I with |H| ≤ k
        Let R = I \ H and let S_H := H
        Consider the items in R (in the order of profit densities) and greedily
        add items to S_H that do not exceed the knapsack capacity C
        % It is sufficient to stop as soon as the first item is too large to fit
    End For
    Choose the S_H that maximizes the profit
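The following Python sketch implements PGreedy_k as described above; the brute-force subset enumeration via itertools.combinations and the names are my choices. For simplicity it keeps scanning past the first item that is too large, which can only improve the solution relative to the slide's stopping rule.

```python
from itertools import combinations

def pgreedy_k(items, C, k):
    """Partial enumeration greedy for knapsack (a sketch of PGreedy_k).

    items: list of (value, size) pairs; C: capacity; k: enumeration depth.
    Returns the best (value, chosen index set) found.
    """
    n = len(items)
    # Indices sorted by non-increasing profit density v_j / s_j.
    by_density = sorted(range(n), key=lambda j: items[j][0] / items[j][1],
                        reverse=True)
    best_value, best_set = 0, set()
    for h in range(k + 1):                    # every subset H with |H| <= k
        for H in combinations(range(n), h):
            size = sum(items[j][1] for j in H)
            if size > C:
                continue                      # H itself must be feasible
            chosen = set(H)
            value = sum(items[j][0] for j in H)
            # Greedily extend H with the remaining items in density order.
            for j in by_density:
                if j not in chosen and size + items[j][1] <= C:
                    chosen.add(j)
                    size += items[j][1]
                    value += items[j][0]
            if value > best_value:
                best_value, best_set = value, chosen
    return best_value, best_set
```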

Sahni's PTAS result

Theorem (Sahni 1975): V(OPT) ≤ (1 + 1/k)·V(PGreedy_k).

This algorithm takes time k·n^k, and setting k = ⌈1/ε⌉ yields a (1 + ε) approximation (i.e. a PTAS) running in time (1/ε)·n^{1/ε}. To obtain a 2-approximation it suffices to take the better of the largest value item and the solution of the profit density greedy algorithm.

An FPTAS is an algorithm achieving a (1 + ε) approximation with running time poly(n, 1/ε). There is an FPTAS for the knapsack problem (using dynamic programming and scaling the input values), so this algorithm was quickly subsumed, but the partial enumeration technique remains important. In particular, more recently this technique (for k = 3) was used to achieve an e/(e−1) ≈ 1.58 approximation for monotone submodular maximization subject to a knapsack constraint. It is NP-hard to do better than an e/(e−1) approximation for submodular maximization subject to a cardinality constraint, and hence this is also the best possible ratio for submodular maximization subject to a knapsack constraint.
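A sketch of the scaling plus dynamic programming FPTAS mentioned above. This is a standard textbook construction, not necessarily the exact variant intended in lecture; the scaling factor μ = ε·v_max/n and all names are my assumptions.

```python
def knapsack_fptas(items, C, eps):
    """Knapsack FPTAS sketch: scale values down, then DP over scaled profits.

    items: list of (value, size) pairs; C: capacity; 0 < eps < 1.
    Returns an index set with value >= (1 - eps) * OPT; the DP table has at
    most n * (n / eps) entries, giving poly(n, 1/eps) running time.
    """
    items = [(v, s) for (v, s) in items if s <= C]   # items that can ever fit
    if not items:
        return set()
    n = len(items)
    vmax = max(v for v, _ in items)
    mu = eps * vmax / n                              # scaling factor
    scaled = [int(v / mu) for v, _ in items]         # rounded-down values
    V = sum(scaled)
    INF = float("inf")
    dp = [0] + [INF] * V       # dp[p] = min total size achieving scaled profit p
    keep = [set()] + [None] * V  # keep[p] = an index set realizing dp[p]
    for j, (v, s) in enumerate(items):
        if scaled[j] == 0:
            continue   # below the scaling grain; ignoring all such items
                       # forgoes at most n * mu = eps * vmax <= eps * OPT value
        for p in range(V, scaled[j] - 1, -1):        # each item used at most once
            if dp[p - scaled[j]] + s < dp[p]:
                dp[p] = dp[p - scaled[j]] + s
                keep[p] = keep[p - scaled[j]] | {j}
    best = max(p for p in range(V + 1) if dp[p] <= C)
    return keep[best]
```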

The priority algorithm model and variants

Before temporarily leaving greedy (and greedy-like) algorithms, I want to present the priority algorithm model and how it can be extended in (conceptually) simple ways to go beyond the power of the priority model.

What is the intuitive nature of a greedy algorithm (as exemplified by the CSC 373 algorithms mentioned last class)? With the exception of Huffman coding (which we can also deal with), all these algorithms consider one input item in each iteration and make an irrevocable greedy decision about that item.

We are then already assuming that the class of search/optimization problems we are dealing with can be viewed as making a decision D_k about each input item I_k (e.g. on what machine to schedule job I_k in the makespan case) such that {(I_1, D_1), ..., (I_n, D_n)} constitutes a feasible solution.

Note that a problem is only fully specified when we say how input items are represented.

We mentioned that a non-greedy online algorithm for identical machine makespan can improve the competitive ratio; that is, the algorithm does not always place a job on the (or a) least loaded machine (i.e. does not make a greedy or locally optimal decision in each iteration).

It isn't always obvious if or how to define a greedy decision, but for many problems the definition of greedy can be informally phrased as "live for today" (i.e. assume the current input item could be the last item), so that the decision should be an optimal decision given the current state of the computation.

For example, in the knapsack problem a greedy decision always takes an input item if it fits within the knapsack constraint, and in the makespan problem a greedy decision always schedules a job on some machine so as to minimize the increase in the makespan. This is more general than saying it must place the item on the least loaded machine.
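A small sketch of this makespan notion of a greedy decision (the function and its list-of-loads representation are mine): any machine on which the job does not raise the current makespan is an equally greedy choice, so the least loaded machine is only one option.

```python
def greedy_assign(loads, job):
    """Assign job to a machine so as to minimize the increase in the makespan.

    With loads = [5, 4, 1] and job = 1, machine 0 is ruled out (load 6 > 5),
    but machines 1 and 2 both keep the makespan at 5, so choosing machine 1
    is greedy even though machine 2 is the least loaded.
    """
    makespan = max(loads)
    # Machines where adding the job does not increase the current makespan.
    ok = [i for i, load in enumerate(loads) if load + job <= makespan]
    i = ok[0] if ok else min(range(len(loads)), key=lambda m: loads[m] + job)
    loads[i] += job
    return i
```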

We have both fixed order priority algorithms (e.g. unweighted interval scheduling and LPT makespan) and adaptive order priority algorithms (e.g. the set cover greedy algorithm and Prim's MST algorithm). The key concept then is to indicate how the algorithm chooses the order in which input items are considered.

We cannot allow the algorithm to use any ordering (as there may very well be an optimal ordering that allows the algorithm to achieve optimality). We might be tempted to say that the ordering has to be determined in polynomial time, but that gets us into the tarpit of trying to prove what can and can't be done in polynomial time. Instead, we take an information theoretic viewpoint in defining the orderings we allow.

Informal definition of a priority algorithm

Let's first consider fixed priority algorithms. Since I am using this framework mainly to argue negative results (e.g. a priority algorithm for the given problem cannot achieve a stated approximation ratio), we will view the semantics of the model as a game between the algorithm and an adversary.

Initially there is some (possibly infinite) set J of potential inputs. The algorithm chooses a total ordering π on J. Then the adversary selects a subset I ⊆ J of actual inputs, so that I becomes the input to the priority algorithm. The input items I_1, ..., I_n are ordered according to π. Finally, in iteration k for 1 ≤ k ≤ n, the algorithm considers input item I_k and, based on this input and all previous inputs and decisions (i.e. based on the current state of the computation), the algorithm makes an irrevocable decision D_k about this input item.

The fixed (order) priority algorithm template

    J is the set of all possible input items
    I ⊆ J is the input instance
    decide on a total ordering π of J
    S := ∅    % S is the set of items already seen
    i := 0    % i = |S|
    while I \ S ≠ ∅ do
        i := i + 1
        I := I \ S
        I_i := min_π {I ∈ I}
        make an irrevocable decision D_i concerning I_i
        S := S ∪ {I_i}
    end while

Figure: The template for a fixed priority algorithm
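The template, rendered as a Python sketch (my rendering; note the model makes no computability assumption about the ordering, whereas any runnable key function is of course computable). The second function instantiates it with the density greedy decision rule for knapsack from earlier in the lecture.

```python
def fixed_priority(items, key, decide):
    """Fixed priority template as a higher-order function (a sketch).

    key:    the ordering function f : J -> R; items are processed in
            increasing key order, fixed before any item is seen.
    decide: given the current item and the history so far, returns an
            irrevocable decision; it never sees unprocessed items.
    """
    solution = []                      # the (item, decision) pairs so far
    for item in sorted(items, key=key):
        d = decide(item, solution)     # may depend on all previous pairs
        solution.append((item, d))     # the decision is never revisited
    return solution

def density_greedy(items, C):
    """Knapsack by non-increasing profit density (no constant ratio!)."""
    used = 0
    def decide(item, solution):
        nonlocal used
        v, s = item
        if used + s <= C:              # 'live for today': take it if it fits
            used += s
            return True
        return False
    return fixed_priority(items, key=lambda it: -it[0] / it[1], decide=decide)
```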

Some comments on the priority model

A special (but usual) case is that π is determined by a function f : J → R, ordering the set of actual input items by increasing (or decreasing) values f(·). (We can break ties by, say, using the index of the item to provide a total ordering of the input set.)

N.B. We make no assumption on the complexity or even the computability of the ordering π or the function f.

As stated, we do not give the algorithm any additional information other than what it can learn as it gradually sees the input sequence. However, we can allow priority algorithms to be given some (hopefully easily computed) global information, such as the number of input items or, say in the case of the makespan problem, the minimum and/or maximum processing time/load of any input item. (Some proofs can be easily modified to allow such global information.)

The adaptive priority model template

    J is the set of all possible input items
    I is the input instance
    S := ∅    % S is the set of items already considered
    i := 0    % i = |S|
    while I \ S ≠ ∅ do
        i := i + 1
        decide on a total ordering π_i of J
        I := I \ S
        I_i := min_{π_i} {I ∈ I}
        make an irrevocable decision D_i concerning I_i
        S := S ∪ {I_i}
        J := J \ {I : I <_{π_i} I_i}    % some items cannot be in the input set
    end while

Figure: The template for an adaptive priority algorithm
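A Python sketch of the adaptive template (again my rendering), together with an instantiation for the greedy set cover rule, whose ordering "most uncovered elements first" necessarily depends on the decisions made so far:

```python
def adaptive_priority(items, key, decide):
    """Adaptive priority template (a sketch): the ordering is recomputed
    each round, so key(item, solution) may depend on the history so far,
    which is exactly what the fixed order model forbids."""
    remaining = list(items)
    solution = []
    while remaining:
        # A fresh ordering pi_i, chosen with full knowledge of the history.
        item = min(remaining, key=lambda it: key(it, solution))
        remaining.remove(item)
        solution.append((item, decide(item, solution)))
    return solution

def set_cover_greedy(sets, universe):
    """Greedy set cover: repeatedly take the set covering the most
    uncovered elements (an adaptive order, adaptive decision rule)."""
    covered = set()
    def key(s, solution):
        return -len(set(s) - covered)          # most new elements first
    def decide(s, solution):
        take = bool(set(s) - covered)          # useless once all is covered
        if take:
            covered.update(s)
        return take
    return adaptive_priority(sets, key, decide)
```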

Inapproximations with respect to the priority model

Once we have a precise model, we can then argue that certain approximation bounds are not possible within this model. Such inapproximation results have been established with respect to (adaptive) priority algorithms for a number of problems, but in many cases much better results can be established using extensions of the model.

1. For weighted interval selection (a packing problem) with arbitrary weights (resp. for proportional weights v_j = f_j − s_j), no priority algorithm can achieve a constant approximation (respectively, better than a 3-approximation).
2. For the knapsack problem, no priority algorithm can achieve a constant approximation. (We have already noted how the partial enumeration greedy achieves a PTAS.)
3. For the set cover problem, the natural greedy algorithm is essentially the best priority algorithm.
4. As previously mentioned, for fixed order priority algorithms there is an Ω(log m / log log m) inapproximation bound for the makespan problem in the restricted machines model.

Extensions of the priority model: priority with revocable acceptances

For packing problems, we can have priority algorithms with revocable acceptances. That is, in each iteration the algorithm can now eject previously accepted items in order to accept the current item. However, at all times the set of currently accepted items must be a feasible set, and all rejections are permanent.

Within this model, there is a 4-approximation algorithm for the weighted interval selection problem WISP (Bar-Noy et al [2001], and Erlebach and Spieksma [2003]) and a 1.17 inapproximation bound (Horn [2004]). More generally, the algorithm applies to the weighted job interval selection problem WJISP, resulting in an 8-approximation.

The model has also been studied with respect to the proportional profit knapsack problem/subset sum problem (Ye and B [2008]), improving the constant approximation. And for the general knapsack problem, the model immediately yields a 2-approximation.

The Greedy_α algorithm for WJISP

The algorithm as stated by Erlebach and Spieksma (and called ADMISSION by Bar-Noy et al) is as follows:

    S := ∅    % S is the set of currently accepted intervals
    Sort the input intervals so that f_1 ≤ f_2 ≤ ... ≤ f_n
    for i = 1..n
        C_i := a minimum weight subset of S such that (S \ C_i) ∪ {I_i} is feasible
        if v(C_i) ≤ α·v(I_i) then
            S := (S \ C_i) ∪ {I_i}
        end if
    end for

Figure: Priority algorithm with revocable acceptances for WJISP

The Greedy_α algorithm (which is not greedy by my definition) has a tight approximation ratio of 1/(α(1 − α)) for WISP and 2/(α(1 − α)) for WJISP.
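A sketch of Greedy_α specialized to WISP, where the minimum weight eviction set C_i is forced: it must contain exactly the currently accepted intervals that overlap I_i. Intervals are assumed to be half-open (start, finish, weight) triples; for WJISP one would also have to account for intervals belonging to the same job, which this sketch omits.

```python
def greedy_alpha(intervals, alpha):
    """Greedy_alpha for WISP with revocable acceptances (a sketch).

    intervals: (start, finish, weight) triples, treated as half-open [s, f).
    """
    S = []                                        # currently accepted intervals
    for s, f, v in sorted(intervals, key=lambda t: t[1]):   # by finish time
        # For WISP the cheapest eviction set is every accepted interval
        # overlapping the new one, since feasibility means pairwise disjoint.
        C = [(s2, f2, v2) for (s2, f2, v2) in S if s2 < f and s < f2]
        if sum(v2 for _, _, v2 in C) <= alpha * v:
            # Revoke the conflicting acceptances and accept the new interval;
            # revocations (like rejections) are permanent.
            S = [iv for iv in S if iv not in C]
            S.append((s, f, v))
    return S
```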