The Greedy Method

Outline and Reading
The Greedy Method:
- The greedy method technique
- Fractional knapsack problem
- Task scheduling
- Minimum spanning trees

Change Money Problem
How can we make 48 cents of change using coins of denominations of $0.25, $0.10, $0.05, and $0.01 so that the total number of coins is the smallest?
The idea: make the locally best choice at each step. Is the solution optimal?

Greedy Algorithms
A greedy algorithm makes a locally optimal choice in the hope that this choice will lead to a globally optimal solution. The choice made at each step must be:
- Feasible: satisfy the problem's constraints.
- Locally optimal: be the best local choice among all feasible choices.
- Irrevocable: once made, the choice cannot be changed on subsequent steps.
Do greedy algorithms always yield optimal solutions? Example: is the greedy choice still optimal for a change-making problem with a different denomination set? (See the making-change examples below and the code sketch at the end of this section.)

Applications of the Greedy Strategy
Optimal solutions:
- change making
- Minimum Spanning Tree (MST)
- single-source shortest paths
- Huffman codes
Approximations:
- Traveling Salesman Problem (TSP)
- knapsack problem
- many other optimization problems

The Greedy Method Technique
The greedy method is a general algorithm design paradigm, built on the following elements:
- configurations: different choices, collections, or values to find
- objective function: a score assigned to configurations, which we want to either maximize or minimize
It works best when applied to problems with the greedy-choice property: a globally optimal solution can always be found by a series of local improvements from a starting configuration.
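
A minimal Python sketch of the greedy change-making rule just described: repeatedly take the largest coin that still fits. The function name and the sample calls are illustrative assumptions, not from the slides; amounts are in cents, and the second (failing) denomination set matches Example 2 below.

    def greedy_change(amount, denominations):
        """Return the coins chosen greedily (largest usable coin first)."""
        coins = []
        for d in sorted(denominations, reverse=True):
            while amount >= d:
                amount -= d
                coins.append(d)
        return coins if amount == 0 else None   # None: greedy got stuck

    # Optimal for the standard denominations above:
    print(greedy_change(48, [25, 10, 5, 1]))    # [25, 10, 10, 1, 1, 1], 6 coins
    # Not optimal for every denomination set: for 40 cents with {30, 20, 5, 1}
    # greedy returns [30, 5, 5] (3 coins), while two 20-cent coins would do.
    print(greedy_change(40, [30, 20, 5, 1]))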

Making Change
Problem: a dollar amount to reach and a collection of coin denominations to use to get there.
Configuration: a dollar amount yet to return to a customer, plus the coins already returned.
Objective function: minimize the number of coins returned.
Greedy solution: always return the largest coin you can.
Example 1: coins are valued $.32, $.08, $.01. This system has the greedy-choice property, since no amount over $.32 can be made with a minimum number of coins by omitting a $.32 coin (and similarly for amounts over $.08 but under $.32).
Example 2: coins are valued $.30, $.20, $.05, $.01. This system does not have the greedy-choice property, since $.40 is best made with two $.20 coins, but the greedy solution will pick three coins (which ones?).

The Fractional Knapsack Problem
Given: a set S of n items, with each item i having
- b_i: a positive benefit
- w_i: a positive weight
Goal: choose items with maximum total benefit but with total weight at most W.
If we are allowed to take fractional amounts, this is the fractional knapsack problem. In this case, we let x_i denote the amount we take of item i.
Objective: maximize Σ_{i in S} b_i (x_i / w_i)
Constraint: Σ_{i in S} x_i ≤ W

Example
Given the items below and a 10 ml knapsack, choose amounts with maximum total benefit:

    Item      1      2      3      4      5
    Weight    4 ml   8 ml   2 ml   6 ml   1 ml
    Benefit   $12    $32    $40    $30    $50
    Value     3      4      20     5      50     ($ per ml)

Solution: 1 ml of item 5, 2 ml of item 3, 6 ml of item 4, and 1 ml of item 2.

The Fractional Knapsack Algorithm
Greedy choice: keep taking the item with the highest value (benefit-to-weight ratio), since
    Σ_{i in S} b_i (x_i / w_i) = Σ_{i in S} (b_i / w_i) x_i.
Run time: O(n log n). Why?
Correctness: suppose there is a better solution. Then there is an item i with higher value than a chosen item j (that is, v_j < v_i) with x_i < w_i and x_j > 0. If we substitute some of j with i (an amount min{w_i - x_i, x_j}), we get a solution at least as good; repeating the exchange shows there is no better solution than the greedy one.

    Algorithm fractionalKnapsack(S, W)
      Input: set S of items, each with benefit b_i and weight w_i; maximum total weight W
      Output: amount x_i of each item i to maximize the total benefit with weight at most W
      for each item i in S
          x_i ← 0
          v_i ← b_i / w_i          {value}
      w ← 0                        {total weight}
      while w < W
          remove from S the item i with the highest v_i
          x_i ← min{w_i, W - w}
          w ← w + min{w_i, W - w}

(A runnable Python sketch of this rule appears at the end of this section.)

Task Scheduling
Given: a set T of n tasks, each having
- a start time s_i
- a finish time f_i (where s_i < f_i)
Goal: perform all the tasks using a minimum number of machines.
[Figure: the tasks laid out on three machines along a time axis.]

Task Scheduling Algorithm
Greedy choice: consider tasks by their start time and use as few machines as possible with this order.
Run time: O(n log n). Why?
Correctness: suppose there is a better schedule that uses k - 1 machines while the algorithm uses k. Let i be the first task scheduled on machine k. Task i must conflict with k - 1 other tasks, one on each of the other machines, but that means there is no non-conflicting schedule using k - 1 machines.

    Algorithm taskSchedule(T)
      Input: set T of tasks, each with start time s_i and finish time f_i
      Output: a non-conflicting schedule using a minimum number of machines
      m ← 0                                      {number of machines}
      while T is not empty
          remove the task i with the smallest start time s_i
          if there is a machine j with no conflict with task i
              then schedule task i on machine j
              else m ← m + 1
                   schedule task i on machine m

(A Python sketch of this rule also follows below.)
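
Going back to the fractional knapsack algorithm above, here is a short Python sketch of the rule: sort the items by value (benefit per unit of weight) and take as much as possible of the best remaining item. The function name and representation are assumptions; the numbers mirror the example table above.

    def fractional_knapsack(items, W):
        """items: list of (benefit, weight) pairs; returns (total benefit, amount taken of each)."""
        # O(n log n): dominated by sorting the items by value b_i / w_i
        order = sorted(range(len(items)),
                       key=lambda i: items[i][0] / items[i][1], reverse=True)
        remaining, total, taken = W, 0.0, [0.0] * len(items)
        for i in order:
            if remaining == 0:
                break
            b, w = items[i]
            x = min(w, remaining)        # whole item if it fits, otherwise the remaining capacity
            taken[i] = x
            total += b * (x / w)
            remaining -= x
        return total, taken

    # Items 1..5 from the example and a 10 ml knapsack:
    items = [(12, 4), (32, 8), (40, 2), (30, 6), (50, 1)]
    print(fractional_knapsack(items, 10))
    # -> (124.0, [0.0, 1.0, 2.0, 6.0, 1.0]):
    #    1 ml of item 5, 2 ml of item 3, 6 ml of item 4, 1 ml of item 2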

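Similarly, a Python sketch of the task-scheduling rule: process tasks by start time and reuse a machine whose previous task has already finished, tracked with a min-heap of machine finish times. The function name is an assumption, tasks are (start, finish) pairs, and a task is assumed to be allowed to start exactly when another finishes on the same machine; the sample input is the example worked out next.

    import heapq

    def min_machines(tasks):
        """Return the minimum number of machines needed to run all (start, finish) tasks."""
        finish_times = []                            # min-heap: one finish time per machine in use
        for s, f in sorted(tasks):                   # consider tasks by start time: O(n log n)
            if finish_times and finish_times[0] <= s:
                heapq.heapreplace(finish_times, f)   # reuse the machine that frees up earliest
            else:
                heapq.heappush(finish_times, f)      # no free machine: allocate a new one
        return len(finish_times)

    # The seven tasks from the example below; three machines suffice.
    print(min_machines([(1, 4), (1, 3), (2, 5), (3, 7), (4, 7), (6, 9), (7, 8)]))   # 3
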
Example
Given: a set T of n tasks, each having a start time s_i and a finish time f_i (where s_i < f_i):
    [1,4], [1,3], [2,5], [3,7], [4,7], [6,9], [7,8]   (ordered by start)
Goal: perform all tasks on the minimum number of machines.
[Figure: the seven tasks scheduled on three machines along a time axis.]

Making a Greedy Choice
Greedy choice: we make the choice that looks best at the moment, i.e., every time we make a choice, we greedily try to maximize our profit.

Minimum Spanning Tree (MST)
Spanning tree of a connected graph G: a connected acyclic subgraph (tree) of G that includes all of G's vertices.
Minimum spanning tree of a weighted, connected graph G: a spanning tree of G of minimum total weight.
Example: [figure of a weighted graph and its minimum spanning tree].

Prim's MST Algorithm
- Start with a tree T_0 consisting of one vertex.
- Grow the tree one vertex/edge at a time: construct a series of expanding subtrees T_1, T_2, ..., T_{n-1}.
- At each stage, construct T_{i+1} from T_i by adding the minimum-weight edge connecting a vertex in the tree T_i to one not yet in the tree, chosen from the fringe edges (this is the greedy step!).
- Another way to understand it: expand the current tree in a greedy manner by attaching to it the nearest vertex not in the tree (a vertex not in the tree connected to a vertex in the tree by an edge of the smallest weight).
- The algorithm stops when all vertices are included.
Fringe edges: one endpoint is in T_i and the other is not. Unseen edges: both endpoints are outside T_i.

Examples
[Figure: a small weighted example graph on vertices a, b, c, d, e.]

The Key Point
Notation: T is the expanding subtree; Q holds the remaining vertices.
At each stage, the key point of expanding the current subtree T is to determine which vertex in Q is the nearest one. Q can be thought of as a priority queue:
- The key (priority) of a vertex, key[v], is the minimum weight of an edge from v to a vertex in T; key[v] is ∞ if v is not linked to any vertex in T.
- The major operation is to find and delete the nearest vertex v, i.e., the one for which key[v] is smallest among all vertices in Q.
- Remove the nearest vertex v from Q and add it, together with the corresponding edge, to T. When that happens, the keys of v's neighbors may change. (A Python sketch of this key-based expansion follows below.)
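
Here is a compact Python sketch of that expansion, assuming the graph is given as an adjacency dict {u: {v: weight}} (an assumed representation, not from the slides). Instead of decreasing keys in place, it lazily pushes candidate fringe edges onto a heap and skips entries whose endpoint is already in the tree; the formal decrease-key pseudocode follows in the next part.

    import heapq

    def prim_mst(graph, root):
        """Return the MST as a list of (parent, child, weight) edges."""
        in_tree = {root}
        tree_edges = []
        fringe = [(w, root, v) for v, w in graph[root].items()]   # (key, parent, vertex)
        heapq.heapify(fringe)
        while fringe:
            key, parent, u = heapq.heappop(fringe)     # nearest fringe vertex
            if u in in_tree:
                continue                               # stale entry: u was added earlier
            in_tree.add(u)
            tree_edges.append((parent, u, key))
            for v, w in graph[u].items():              # new fringe edges out of u
                if v not in in_tree:
                    heapq.heappush(fringe, (w, u, v))
        return tree_edges

    # An assumed example graph (not the one in the figure):
    g = {'a': {'b': 3, 'c': 5}, 'b': {'a': 3, 'c': 1, 'd': 4},
         'c': {'a': 5, 'b': 1, 'd': 6}, 'd': {'b': 4, 'c': 6}}
    print(prim_mst(g, 'a'))    # [('a', 'b', 3), ('b', 'c', 1), ('b', 'd', 4)]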

PRIM Algorithm
    ALGORITHM MST-PRIM(G, w, r)        // w: edge weights; r: root, the starting vertex
      for each u in V[G]
          do key[u] ← ∞
             π[u] ← NIL                // π[u]: the parent of u
      key[r] ← 0
      Q ← V[G]                         // now the priority queue Q has been built
      while Q is not empty
          do u ← Extract-Min(Q)        // remove the nearest vertex from Q
             for each v in Adj[u]      // update the key of each of u's adjacent vertices
                 do if v ∈ Q and w(u, v) < key[v]
                        then π[v] ← u
                             key[v] ← w(u, v)

Notes about Prim's algorithm
- A priority queue is needed for locating the nearest vertex.
- Using an unordered array to store the priority queue: efficiency Θ(n²).
- Using a heap to store the priority queue: efficiency O(m log n); for a graph with n vertices and m edges the work is (n + m) log n.

Another Greedy Algorithm for MST: Kruskal
- Edges are initially sorted by increasing weight.
- Start with an empty forest and grow the MST one edge at a time; intermediate stages usually have a forest of trees (not connected).
- At each stage, add the minimum-weight edge among those not yet used that does not create a cycle. The added edge may: expand an existing tree, combine two existing trees into a single tree, or create a new tree.
- An efficient way of detecting/avoiding cycles is needed (see the union-find sketch below).
- The algorithm stops when all vertices are included.

Kruskal's Algorithm
    ALGORITHM Kruskal(G)
      // Input: a weighted connected graph G = <V, E>
      // Output: E_T, the set of edges composing a minimum spanning tree of G
      sort E in nondecreasing order of the edge weights: w(e_1) ≤ ... ≤ w(e_|E|)
      E_T ← ∅; ecounter ← 0            // initialize the set of tree edges and its size
      k ← 0
      while ecounter < |V| - 1
          k ← k + 1
          if E_T ∪ {e_k} is acyclic
              E_T ← E_T ∪ {e_k}; ecounter ← ecounter + 1
      return E_T

Shortest Paths / Dijkstra's Algorithm
Shortest path problems:
- All-pairs shortest paths (Floyd's algorithm).
- Single-source shortest paths problem (Dijkstra's algorithm): given a weighted graph G, find the shortest paths from a source vertex s to each of the other vertices.
[Figure: a small weighted example graph on vertices a, b, c, d, e.]
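
A Python sketch of Kruskal's rule. The standard way to detect cycles efficiently is a disjoint-set (union-find) structure: an edge creates a cycle exactly when both endpoints already lie in the same tree of the forest. The function name is an assumption, and the graph below is the same assumed example used for the Prim sketch.

    def kruskal_mst(vertices, edges):
        """edges: list of (weight, u, v) tuples; returns the MST as (u, v, weight) edges."""
        parent = {v: v for v in vertices}      # each vertex starts as its own tree

        def find(x):                           # representative of x's tree, with path halving
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x

        mst = []
        for w, u, v in sorted(edges):          # consider edges in nondecreasing weight order
            ru, rv = find(u), find(v)
            if ru != rv:                       # acyclic: endpoints in different trees
                parent[ru] = rv                # union the two trees
                mst.append((u, v, w))
                if len(mst) == len(vertices) - 1:
                    break
        return mst

    edges = [(3, 'a', 'b'), (5, 'a', 'c'), (1, 'b', 'c'), (4, 'b', 'd'), (6, 'c', 'd')]
    print(kruskal_mst('abcd', edges))          # [('b', 'c', 1), ('a', 'b', 3), ('b', 'd', 4)]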

Prim's and Dijkstra's Algorithms
They generate different kinds of spanning trees:
- Prim's: a minimum spanning tree.
- Dijkstra's: a spanning tree rooted at a given source s, such that the distance from s to every other vertex is the shortest.
They use different greedy strategies:
- Prim's: always choose the vertex closest to the tree in the priority queue Q to add to the expanding tree V_T.
- Dijkstra's: always choose the vertex closest to the source in the priority queue Q to add to the expanding tree V_T.
They keep different labels for each vertex:
- Prim's: the parent vertex and the distance from the tree to the vertex.
- Dijkstra's: the parent vertex and the distance from the source to the vertex.

Dijkstra's Algorithm
    ALGORITHM Dijkstra(G, s)
      // Input: a weighted connected graph G = <V, E> and a source vertex s
      // Output: the length d_v of a shortest path from s to v and its penultimate vertex p_v,
      //         for every vertex v in V
      Initialize(Q)                        // initialize the vertex priority queue
      for every vertex v in V do
          d_v ← ∞; p_v ← null              // p_v: the parent of v
          Insert(Q, v, d_v)                // initialize vertex priority in the priority queue
      d_s ← 0; Decrease(Q, s, d_s)         // update the priority of s with d_s, making it the minimum
      V_T ← ∅
      for i ← 0 to |V| - 1 do              // produce the |V| - 1 edges of the tree
          u* ← DeleteMin(Q)                // delete the minimum-priority element
          V_T ← V_T ∪ {u*}                 // expand the tree with the locally best vertex
          for every vertex u in V - V_T that is adjacent to u* do
              if d_u* + w(u*, u) < d_u
                  d_u ← d_u* + w(u*, u); p_u ← u*
                  Decrease(Q, u, d_u)

Notes on Dijkstra's Algorithm
- It does not work with negative weights. Can you give a counterexample?
- It is applicable to both undirected and directed graphs.
- Efficiency: using an unordered array to store the priority queue, Θ(n²); using a min-heap, O(m log n).

When Do Greedy Algorithms Produce an Optimal Solution?
The problem must have two properties:
- Greedy-choice property: an optimal solution can be obtained by making choices that seem best at the time, without considering their implications for solutions to subproblems.
- Optimal substructure: an optimal solution can be obtained by augmenting the partial solution constructed so far with an optimal solution of the remaining subproblem.

General Form of Greedy
    SolType Greedy(Type a[], int n) {
        SolType solution = EMPTY;
        for (int i = 1; i <= n; i++) {
            Type x = Select(a);                  // pick the locally best remaining candidate
            if (Feasible(solution, x))           // keep it only if it satisfies the constraints
                solution = Union(solution, x);
        }
        return solution;
    }

Summary
Greedy algorithms are efficient algorithms for optimization problems that exhibit two properties:
- Greedy-choice property: an optimal solution can be obtained by making locally optimal choices.
- Optimal substructure: an optimal solution contains within it optimal solutions to smaller subproblems.
If only optimal substructure is present, dynamic programming may be a viable approach; the greedy-choice property is what allows us to obtain faster algorithms than what can be obtained using dynamic programming.
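
To close, a heap-based Python sketch of Dijkstra's algorithm as described above, again on an assumed adjacency-dict graph with non-negative weights; as in the Prim sketch, lazy deletion of stale heap entries stands in for the Decrease operation.

    import heapq

    def dijkstra(graph, s):
        """Return a dict of shortest-path lengths from s to every reachable vertex."""
        dist = {s: 0}
        heap = [(0, s)]
        finalized = set()
        while heap:
            d, u = heapq.heappop(heap)           # closest-to-the-source vertex not yet finalized
            if u in finalized:
                continue                         # stale entry left by an earlier relaxation
            finalized.add(u)
            for v, w in graph[u].items():        # relax the edges out of u
                if d + w < dist.get(v, float('inf')):
                    dist[v] = d + w
                    heapq.heappush(heap, (dist[v], v))
        return dist

    g = {'a': {'b': 3, 'c': 7}, 'b': {'c': 2, 'd': 6}, 'c': {'d': 1}, 'd': {}}
    print(dijkstra(g, 'a'))    # {'a': 0, 'b': 3, 'c': 5, 'd': 6}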