Elements of Dynamic Programming
COSC 3101A - Design and Analysis of Algorithms, Lecture 8


Elements of Dynamic Programming
COSC 3101A - Design and Analysis of Algorithms, Lecture 8
Topics: Elements of DP, Memoization, Longest Common Subsequence, Greedy Algorithms
Many of these slides are taken from Monica Nicolescu, Univ. of Nevada, Reno, monica@cs.unr.edu

Optimal Substructure
- An optimal solution to a problem contains within it optimal solutions to subproblems.
- The optimal solution to the entire problem is built in a bottom-up manner from optimal solutions to subproblems.

Overlapping Subproblems
- If a recursive algorithm revisits the same subproblems over and over, the problem has overlapping subproblems.

Optimal Substructure - Examples
- Assembly line: the fastest way of going through a station j contains the fastest way of going through station j-1 on either line.
- Matrix multiplication: an optimal parenthesization of A_i A_{i+1} ... A_j that splits the product between A_k and A_{k+1} contains an optimal solution to the problem of parenthesizing A_{i..k} and an optimal solution for A_{k+1..j}.

Discovering Optimal Substructure
1. Show that a solution to the problem consists of making a choice that leaves one or more similar subproblems to be solved.
2. Suppose that for a given problem you are given the choice that leads to an optimal solution.
3. Given this choice, determine which subproblems result.
4. Show that the solutions to the subproblems used within the optimal solution must themselves be optimal (the "cut-and-paste" argument).

Parameters of Optimal Substructure
- How many subproblems are used in an optimal solution for the original problem:
  - Assembly line: one subproblem (the line that gives the best time).
  - Matrix multiplication: two subproblems (the subproducts A_{i..k} and A_{k+1..j}).
- How many choices we have in determining which subproblems to use in an optimal solution:
  - Assembly line: two choices (line 1 or line 2).
  - Matrix multiplication: j - i choices for the split point k.

Intuitively, the running time of a dynamic programming algorithm depends on two factors: the number of subproblems overall, and how many choices we look at for each subproblem.
- Assembly line: Θ(n) subproblems (n stations), 2 choices for each subproblem, Θ(n) overall.
- Matrix multiplication: Θ(n^2) subproblems (1 ≤ i ≤ j ≤ n), at most n-1 choices each, Θ(n^3) overall.

Memoization
- A top-down approach with the efficiency of the typical dynamic-programming approach.
- Maintain an entry in a table for the solution to each subproblem: "memoize" the inefficient recursive algorithm.
- When a subproblem is first encountered, its solution is computed and stored in the table; subsequent calls to the subproblem simply look up that value.

Memoized Matrix-Chain

Alg.: MEMOIZED-MATRIX-CHAIN(p)
1. n <- length[p] - 1
2. for i <- 1 to n
3.    do for j <- i to n
4.          do m[i, j] <- INF
5. return LOOKUP-CHAIN(p, 1, n)

Initialize the m table with large values that indicate that the values of m[i, j] have not yet been computed (top-down approach).

Alg.: LOOKUP-CHAIN(p, i, j)
1. if m[i, j] < INF
2.    then return m[i, j]
3. if i = j
4.    then m[i, j] <- 0
5.    else for k <- i to j - 1
6.            do q <- LOOKUP-CHAIN(p, i, k) + LOOKUP-CHAIN(p, k+1, j) + p_{i-1} p_k p_j
7.               if q < m[i, j]
8.                  then m[i, j] <- q
9. return m[i, j]

Running time is O(n^3).

Dynamic Programming vs. Memoization
- Advantages of dynamic programming over memoized algorithms: no overhead for recursion and less overhead for maintaining the table; the regular pattern of table accesses may be used to reduce time or space requirements.
- Advantages of memoized algorithms over dynamic programming: some subproblems do not need to be solved at all.

Matrix-Chain Multiplication - Summary
- Both the dynamic programming approach and the memoized algorithm solve the matrix-chain multiplication problem in O(n^3).
- Both methods take advantage of the overlapping-subproblems property: there are only Θ(n^2) different subproblems, and solutions to these problems are computed only once.
- Without memoization, the natural recursive algorithm runs in exponential time.

Longest Common Subsequence
Given two sequences X = <x_1, x_2, ..., x_m> and Y = <y_1, y_2, ..., y_n>, find a maximum-length common subsequence (LCS) of X and Y.
E.g.: X = <A, B, C, B, D, A, B>. A subsequence of X is a subset of the elements of the sequence taken in order: <A, B, D>, <B, C, D, B>, etc.
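The MEMOIZED-MATRIX-CHAIN / LOOKUP-CHAIN pseudocode above maps almost line for line to Python. A minimal sketch (names are illustrative; p is the dimension array, so matrix A_i is p[i-1] x p[i]):

```python
import math

def memoized_matrix_chain(p):
    """Top-down matrix-chain multiplication: m[i][j] holds the minimum
    number of scalar multiplications needed to compute A_i ... A_j."""
    n = len(p) - 1
    # Initialize with infinity: "not yet computed".
    m = [[math.inf] * (n + 1) for _ in range(n + 1)]

    def lookup_chain(i, j):
        if m[i][j] < math.inf:        # already computed: just look it up
            return m[i][j]
        if i == j:
            m[i][j] = 0               # a single matrix needs no multiplication
        else:
            for k in range(i, j):     # try every split A_{i..k}, A_{k+1..j}
                q = (lookup_chain(i, k) + lookup_chain(k + 1, j)
                     + p[i - 1] * p[k] * p[j])
                if q < m[i][j]:
                    m[i][j] = q
        return m[i][j]

    return lookup_chain(1, n)
```

For example, memoized_matrix_chain([30, 35, 15, 5, 10, 20, 25]) returns 15125, the classic six-matrix instance.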

Example
X = <A, B, C, B, D, A, B>, Y = <B, D, C, A, B, A>.
<B, C, B, A> and <B, D, A, B> are longest common subsequences of X and Y (length = 4).
<B, C, A> is a common subsequence, but not an LCS of X and Y.

Brute-Force Solution
- For every subsequence of X, check whether it is a subsequence of Y.
- There are 2^m subsequences of X to check.
- Each subsequence takes Θ(n) time to check: scan Y for the first letter, from there scan for the second, and so on.
- Running time: Θ(n 2^m).

Notations
Given a sequence X = <x_1, x_2, ..., x_m>, we define the i-th prefix of X, for i = 0, 1, ..., m, as X_i = <x_1, x_2, ..., x_i>.
c[i, j] = the length of an LCS of the prefixes X_i = <x_1, ..., x_i> and Y_j = <y_1, ..., y_j>.

A Recursive Solution
Case 1: x_i = y_j. E.g.: X_i = <A, B, D, E>, Y_j = <Z, B, E>.
c[i, j] = c[i-1, j-1] + 1: append x_i = y_j to an LCS of X_{i-1} and Y_{j-1}; we must find an LCS of X_{i-1} and Y_{j-1}. The optimal solution to the problem includes optimal solutions to subproblems.

Case 2: x_i != y_j. E.g.: X_i = <A, B, D, G>, Y_j = <Z, B, D>.
c[i, j] = max { c[i-1, j], c[i, j-1] }: we must solve two subproblems, an LCS of X_{i-1} and Y_j (<A, B, D> and <Z, B, D>) and an LCS of X_i and Y_{j-1} (<A, B, D, G> and <Z, B>). Again, the optimal solution to the problem includes optimal solutions to subproblems.

Overlapping Subproblems
To find an LCS of X and Y we may need to find the LCS of X and Y_{n-1} and that of X_{m-1} and Y. Both of these subproblems contain the subsubproblem of finding the LCS of X_{m-1} and Y_{n-1}: subproblems share subsubproblems.
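The two cases above translate directly into a recursion on prefix lengths; without memoization it is exponential, but caching each (i, j) pair gives the Θ(mn) behavior discussed next. A minimal sketch (illustrative names):

```python
from functools import lru_cache

def lcs_length_recursive(X, Y):
    """Top-down LCS length: c(i, j) is the LCS length of prefixes
    X_i and Y_j; lru_cache memoizes the overlapping subproblems."""
    @lru_cache(maxsize=None)
    def c(i, j):
        if i == 0 or j == 0:              # an empty prefix: LCS length 0
            return 0
        if X[i - 1] == Y[j - 1]:          # case 1: x_i = y_j
            return c(i - 1, j - 1) + 1
        return max(c(i - 1, j), c(i, j - 1))   # case 2: x_i != y_j
    return c(len(X), len(Y))
```

On the running example, lcs_length_recursive("ABCBDAB", "BDCABA") returns 4.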

3. Computing the Length of the LCS

c[i, j] = 0                              if i = 0 or j = 0
c[i, j] = c[i-1, j-1] + 1                if x_i = y_j
c[i, j] = max(c[i, j-1], c[i-1, j])      if x_i != y_j

(The c table has rows i = 0..m indexed by x_1..x_m and columns j = 0..n indexed by y_1..y_n; it is filled row by row, first i, then j. Each entry c[i, j] depends only on c[i-1, j-1], c[i-1, j], and c[i, j-1].)

Additional Information
A matrix b[i, j]: for a subproblem [i, j], it tells us what choice was made to obtain the optimal value:
- if x_i = y_j, then b[i, j] = "diagonal" (up-left);
- else, if c[i-1, j] >= c[i, j-1], then b[i, j] = "up", else b[i, j] = "left".

Alg.: LCS-LENGTH(X, Y, m, n)
1. for i <- 1 to m
2.    do c[i, 0] <- 0
3. for j <- 0 to n
4.    do c[0, j] <- 0
5. for i <- 1 to m
6.    do for j <- 1 to n
7.          do if x_i = y_j                       ▹ Case 1: x_i = y_j
8.                then c[i, j] <- c[i-1, j-1] + 1
9.                     b[i, j] <- "diagonal"
10.               else if c[i-1, j] >= c[i, j-1]  ▹ Case 2: x_i != y_j
11.                  then c[i, j] <- c[i-1, j]
12.                       b[i, j] <- "up"
13.                  else c[i, j] <- c[i, j-1]
14.                       b[i, j] <- "left"
15. return c and b

Lines 1-4 encode that the length of the LCS is zero if one of the sequences is empty. Running time: Θ(mn).

Example
X = <A, B, C, B, D, A, B>, Y = <B, D, C, A, B, A>. (The filled c/b table is omitted here; following the arrows from b[7, 6] yields the LCS <B, C, B, A> of length 4.)

4. Constructing an LCS
Start at b[m, n] and follow the arrows: when we encounter a "diagonal" arrow in b[i, j], x_i = y_j is an element of the LCS.

Alg.: PRINT-LCS(b, X, i, j)
1. if i = 0 or j = 0
2.    then return
3. if b[i, j] = "diagonal"
4.    then PRINT-LCS(b, X, i-1, j-1)
5.         print x_i
6. elseif b[i, j] = "up"
7.    then PRINT-LCS(b, X, i-1, j)
8. else PRINT-LCS(b, X, i, j-1)

Initial call: PRINT-LCS(b, X, length[X], length[Y]). Running time: O(m + n).
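LCS-LENGTH and PRINT-LCS can be sketched together in Python. This version already applies the space observation made next: instead of storing the b table, it re-derives each arrow from c in O(1) during the walk back (illustrative names):

```python
def lcs(X, Y):
    """Bottom-up LCS: fill the c table as in LCS-LENGTH, then walk back
    from c[m][n] to reconstruct one LCS, re-deriving the arrows from c."""
    m, n = len(X), len(Y)
    c = [[0] * (n + 1) for _ in range(m + 1)]   # row 0 and column 0 stay 0
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if X[i - 1] == Y[j - 1]:            # "diagonal" arrow
                c[i][j] = c[i - 1][j - 1] + 1
            else:                               # "up" or "left" arrow
                c[i][j] = max(c[i - 1][j], c[i][j - 1])
    # Reconstruction: same decisions as PRINT-LCS, without a b table.
    out, i, j = [], m, n
    while i > 0 and j > 0:
        if X[i - 1] == Y[j - 1]:
            out.append(X[i - 1]); i -= 1; j -= 1
        elif c[i - 1][j] >= c[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return c[m][n], "".join(reversed(out))
```

On the running example, lcs("ABCBDAB", "BDCABA") returns length 4 together with one LCS of that length.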

Improving the Code
- What can we say about how each entry c[i, j] is computed? It depends only on c[i-1, j-1], c[i-1, j], and c[i, j-1]. We can therefore eliminate table b and compute in O(1) which of the three values was used to compute c[i, j]. This saves the Θ(mn) space of table b, but does not asymptotically decrease the auxiliary space requirement: we still need table c.
- If we only need the length of the LCS: LCS-LENGTH works on only two rows of c at a time (the row being computed and the previous row), so we can reduce the asymptotic space requirement by storing only these two rows.

Greedy Algorithms
- Similar to dynamic programming, but a simpler approach; also used for optimization problems.
- Idea: when we have a choice to make, make the one that looks best right now; make a locally optimal choice in hope of getting a globally optimal solution.
- Greedy algorithms don't always yield an optimal solution, but when the problem has certain general characteristics, greedy algorithms give optimal solutions.

Activity Selection
- Schedule n activities that require exclusive use of a common resource: S = {a_1, ..., a_n} is the set of activities.
- a_i needs the resource during the period [s_i, f_i), where s_i is the start time and f_i the finish time of activity a_i (0 <= s_i < f_i < ∞).
- Activities a_i and a_j are compatible if the intervals [s_i, f_i) and [s_j, f_j) do not overlap: either f_i <= s_j (i finishes before j starts) or f_j <= s_i (j finishes before i starts).

Activity Selection Problem
Select the largest possible set of nonoverlapping (mutually compatible) activities.

E.g. (activities sorted in increasing order of finish time):

i      1   2   3   4   5   6   7   8   9  10  11
s_i    1   3   0   5   3   5   6   8   8   2  12
f_i    4   5   6   7   9   9  10  11  12  14  16

- A subset of mutually compatible activities: {a_3, a_9, a_11}.
- Maximal sets of mutually compatible activities: {a_1, a_4, a_8, a_11} and {a_2, a_4, a_9, a_11}.

Representing the Problem
- Define the space of subproblems: S_ij = { a_k in S : f_i <= s_k < f_k <= s_j }, the activities that start after a_i finishes and finish before a_j starts.
- Activities compatible with the ones in S_ij: all activities that finish by f_i, and all activities that start no earlier than s_j.
- Add fictitious activities a_0 = [-∞, 0) and a_{n+1} = [∞, ∞ + 1), so that S = S_{0, n+1} is the entire space of activities and the range for S_ij is 0 <= i, j <= n+1.
- In a set S_ij we assume that activities are sorted in increasing order of finish time: f_0 <= f_1 <= f_2 <= ... <= f_n < f_{n+1}.
- What happens if i >= j? For an activity a_k in S_ij we would have f_i <= s_k < f_k <= s_j < f_j, hence f_i < f_j, a contradiction with f_i >= f_j. So S_ij must be empty: we only need to consider sets S_ij with 0 <= i < j <= n+1.

Optimal Substructure
- Subproblem: select a maximum-size subset of mutually compatible activities from set S_ij.
- Assume that a solution to the above subproblem includes activity a_k (S_ij is non-empty). Then:
  Solution to S_ij = (Solution to S_ik) + {a_k} + (Solution to S_kj)
  |Solution to S_ij| = |Solution to S_ik| + 1 + |Solution to S_kj|
- Claim: if A_ij is an optimal solution to S_ij containing a_k, then its parts A_ik and A_kj must be optimal solutions to S_ik and S_kj. Cut-and-paste: assume a solution A'_ik to S_ik that includes more activities than A_ik; then size[A'_ij] = size[A'_ik] + 1 + size[A_kj] > size[A_ij], a contradiction with the assumption that A_ij is a maximum-size subset of activities taken from S_ij.

Recursive Solution
- Any optimal solution (associated with a set S_ij) contains within it optimal solutions to the subproblems S_ik and S_kj.
- Let c[i, j] = size of a maximum-size subset of mutually compatible activities in S_ij.
- If S_ij is empty: c[i, j] = 0 (in particular whenever i >= j).
- If S_ij is non-empty and we consider that a_k is used in an optimal solution: c[i, j] = c[i, k] + c[k, j] + 1.
- Since we do not know which k to choose, we try them all:

  c[i, j] = 0                                                    if S_ij is empty
  c[i, j] = max over i < k < j with a_k in S_ij of { c[i, k] + c[k, j] + 1 }   otherwise

- There are j - i - 1 possible values for k (k = i+1, ..., j-1; a_k cannot be a_i or a_j, by the definition S_ij = { a_k in S : f_i <= s_k < f_k <= s_j }). We check all the values and take the best one; we could now write a dynamic programming algorithm.

Theorem
Let S_ij be non-empty and let a_m be the activity in S_ij with the earliest finish time: f_m = min { f_k : a_k in S_ij }. Then:
1. a_m is used in some maximum-size subset of mutually compatible activities of S_ij; that is, there exists some optimal solution that contains a_m.
2. S_im is empty, so choosing a_m leaves S_mj as the only nonempty subproblem.

Proof of 2: Assume there is some a_k in S_im. Then f_i <= s_k < f_k <= s_m < f_m, so f_k < f_m, a contradiction: a_m would not have the earliest finish time in S_ij. Hence there is no a_k in S_im, i.e., S_im is empty.

Proof of 1 (the greedy-choice property): Let A_ij be an optimal solution for activity selection from S_ij, and order the activities in A_ij in increasing order of finish time. Let a_k be the first activity in A_ij = {a_k, ...}. If a_k = a_m, we are done. Otherwise, replace a_k with a_m, obtaining a set A'_ij: since f_m <= f_k, the activities in A'_ij are still mutually compatible, and A'_ij has the same size as A_ij. So a_m is used in some maximum-size subset.

Why is the Theorem Useful?
- Dynamic programming: 2 subproblems in the optimal solution (S_ik and S_kj), and j - i - 1 choices to consider.
- Using the theorem: 1 subproblem (S_mj, since S_im is empty), and 1 choice (the activity with the earliest finish time in S_ij).
Making the greedy choice reduces the number of subproblems and choices, and lets us solve each subproblem in a top-down fashion.

Greedy Approach
To select a maximum-size subset of mutually compatible activities from set S_ij:
- choose a_m in S_ij with the earliest finish time (the greedy choice);
- add a_m to the set of activities used in the optimal solution;
- solve the same problem for the set S_mj.
By the theorem, by choosing a_m we are guaranteed to have used an activity included in an optimal solution, and we do not need to solve the subproblem S_mj before making the choice. The problem has the GREEDY-CHOICE property.

Characterizing the Subproblems
- The original problem: find a maximum subset of mutually compatible activities for S = S_{0, n+1}.
- Activities are sorted by increasing finish time: a_0, a_1, a_2, ..., a_{n+1}.
- We always choose the activity with the earliest finish time: the greedy choice maximizes the unscheduled time remaining, and the finish times of the selected activities are strictly increasing.

A Recursive Greedy Algorithm

Alg.: REC-ACT-SEL(s, f, i, j)
1. m <- i + 1
2. while m < j and s_m < f_i      ▹ find the first activity in S_ij
3.    do m <- m + 1
4. if m < j
5.    then return {a_m} + REC-ACT-SEL(s, f, m, j)
6.    else return the empty set

Activities are ordered in increasing order of finish time. Initial call: REC-ACT-SEL(s, f, 0, n+1). Running time: Θ(n), since each activity is examined only once.
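REC-ACT-SEL translates directly to Python. A sketch using 1-indexed arrays with the fictitious activity a_0 (f_0 = 0) stored at index 0 (illustrative names):

```python
def rec_act_sel(s, f, i, n):
    """Recursive greedy activity selection on S_{i, n+1}: s[k] and f[k]
    are the start/finish times of a_k; index 0 holds the fictitious a_0
    with f[0] = 0. Activities 1..n must be sorted by increasing finish
    time. Returns the indices of the selected activities."""
    m = i + 1
    while m <= n and s[m] < f[i]:   # skip activities that start before a_i finishes
        m += 1
    if m <= n:
        return [m] + rec_act_sel(s, f, m, n)   # greedy choice a_m, then solve S_mj
    return []
```

On the 11-activity example above, rec_act_sel([0,1,3,0,5,3,5,6,8,8,2,12], [0,4,5,6,7,9,9,10,11,12,14,16], 0, 11) returns [1, 4, 8, 11], one of the maximal sets listed earlier.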

Example
(Figure omitted: a trace of REC-ACT-SEL on the 11-activity example; the successive greedy choices are m = 1, 4, 8, 11, giving the solution {a_1, a_4, a_8, a_11}.)

An Iterative Greedy Algorithm

Alg.: GREEDY-ACTIVITY-SELECTOR(s, f)
1. n <- length[s]
2. A <- {a_1}
3. i <- 1
4. for m <- 2 to n
5.    do if s_m >= f_i        ▹ activity a_m is compatible with a_i
6.          then A <- A + {a_m}
7.               i <- m       ▹ a_i is the most recent addition to A
8. return A

Assumes that activities are ordered in increasing order of finish time. Running time: Θ(n), since each activity is examined only once.

Steps Toward Our Greedy Solution
1. Determine the optimal substructure of the problem.
2. Develop a recursive solution.
3. Prove that one of the optimal choices is the greedy choice.
4. Show that all but one of the subproblems that result from making the greedy choice are empty.
5. Develop a recursive algorithm that implements the greedy strategy.
6. Convert the recursive algorithm to an iterative one.

Designing Greedy Algorithms
1. Cast the optimization problem as one in which we make a choice and are left with only one subproblem to solve.
2. Prove that there is always an optimal solution to the original problem that makes the greedy choice: making the greedy choice is always safe.
3. Demonstrate that, after making the greedy choice, the greedy choice plus an optimal solution to the resulting subproblem leads to an optimal solution.

Correctness of Greedy Algorithms
1. Greedy-choice property: a globally optimal solution can be arrived at by making a locally optimal (greedy) choice.
2. Optimal substructure property: having arrived at a subproblem by making the greedy choice, an optimal solution to the subproblem combined with the greedy choice gives an optimal solution for the original problem.

Activity Selection
- Greedy-choice property: there exists an optimal solution that includes the greedy choice, the activity a_m with the earliest finish time in S_ij.
- Optimal substructure: if an optimal solution to subproblem S_ij includes activity a_k, it must contain optimal solutions to S_ik and S_kj. Similarly, a_m plus an optimal solution to S_mj gives an optimal solution to S_ij.
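The iterative GREEDY-ACTIVITY-SELECTOR above can be sketched the same way (1-indexed arrays with a dummy entry at index 0; illustrative names):

```python
def greedy_activity_selector(s, f):
    """Iterative greedy activity selection; s and f are 1-indexed
    (dummy value at index 0) and sorted by increasing finish time.
    Returns the indices of the selected activities."""
    n = len(s) - 1
    A = [1]          # a_1 has the earliest finish time, so it is chosen first
    i = 1            # a_i is the most recent addition to A
    for m in range(2, n + 1):
        if s[m] >= f[i]:   # a_m is compatible with a_i
            A.append(m)
            i = m
    return A
```

On the same 11-activity data it returns [1, 4, 8, 11], matching the recursive version.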

Dynamic Programming vs. Greedy Algorithms
- Dynamic programming: we make a choice at each step, and the choice depends on solutions to subproblems; a bottom-up solution, from smaller to larger subproblems.
- Greedy algorithms: make the greedy choice and THEN solve the subproblem arising after the choice is made; the choice we make may depend on previous choices, but not on solutions to subproblems; a top-down solution, with problems decreasing in size.

The Knapsack Problem
The 0-1 knapsack problem: a thief robbing a store finds n items; the i-th item is worth v_i dollars and weighs w_i pounds (v_i, w_i integers). The thief can carry only W pounds in his knapsack, and items must be taken entirely or left behind. Which items should the thief take to maximize the value of his load?
The fractional knapsack problem: similar to the above, but the thief can take fractions of items.

Fractional Knapsack Problem
Knapsack capacity: W. There are n items; the i-th item has value v_i and weight w_i. Goal: find x_i such that 0 <= x_i <= 1 for i = 1, 2, ..., n, the sum of w_i x_i is at most W, and the sum of x_i v_i is maximum.

Greedy strategy 1: pick the item with the maximum value. This can fail: if the item with the maximum value is very heavy (w_1 > W), the thief can take only the fraction W/w_1 of it, for a total value of v_1 W/w_1, which may be smaller than the value obtained by taking all of a lighter item.

Greedy strategy 2: pick the item with the maximum value per pound, v_i/w_i. If the supply of that item is exhausted and the thief can carry more, take as much as possible from the item with the next greatest value per pound. It is convenient to order the items by value per pound:
v_1/w_1 >= v_2/w_2 >= ... >= v_n/w_n

Alg.: FRACTIONAL-KNAPSACK(W, v[n], w[n])
1. w <- W        ▹ w is the amount of space remaining in the knapsack
2. while w > 0 and there are items remaining
3.    do pick the item i with maximum v_i/w_i
4.       x_i <- min(1, w/w_i)
5.       remove item i from the list
6.       w <- w - x_i w_i

Running time: Θ(n) if the items are already ordered by value per pound; Θ(n lg n) otherwise.
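The FRACTIONAL-KNAPSACK pseudocode above maps to a short Python sketch (illustrative names; items as (value, weight) pairs):

```python
def fractional_knapsack(W, items):
    """Greedy by value per pound: take as much as possible of the item
    with the highest v_i/w_i, then move to the next best, until the
    knapsack is full. Returns the total value taken."""
    remaining = W
    total = 0.0
    # Sorting dominates: Theta(n lg n); the loop itself is Theta(n).
    for v, w in sorted(items, key=lambda it: it[0] / it[1], reverse=True):
        if remaining <= 0:
            break
        x = min(1.0, remaining / w)   # fraction x_i of item i taken
        total += x * v
        remaining -= x * w
    return total
```

With W = 50 and items [(60, 10), (100, 20), (120, 30)] it returns 240.0, matching the example on the next slide.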

Fractional Knapsack - Example
E.g.: W = 50. Item 1: 10 pounds, $60 ($6/pound); Item 2: 20 pounds, $100 ($5/pound); Item 3: 30 pounds, $120 ($4/pound). Taking the items in order of value per pound, the thief takes all of item 1 ($60), all of item 2 ($100), and 20 of the 30 pounds of item 3 ($80), for a total value of $240.

Greedy Choice (why strategy 2 is optimal)
Items 1, 2, 3, ..., j, ..., n are ordered by value per pound. Let x_1, x_2, ..., x_n be an optimal solution and x'_1, x'_2, ..., x'_n the greedy solution. We know that x'_1 >= x_1: the greedy choice takes as much as possible from item 1. Modify the optimal solution to take x'_1 of item 1; to stay within capacity we have to decrease the quantity taken from some item j, whose new x_j is decreased by (x'_1 - x_1) w_1/w_j.
- Increase in profit: (x'_1 - x_1) v_1
- Decrease in profit: (x'_1 - x_1) w_1 v_j/w_j
The modification does not decrease the total profit: (x'_1 - x_1) v_1 >= (x'_1 - x_1) w_1 v_j/w_j reduces to v_1/w_1 >= v_j/w_j, which is true because item 1 has the best value/pound ratio.

Optimal Substructure
Consider the most valuable load that weighs at most W pounds. If we remove a weight w of item j from the optimal load, the remaining load must be the most valuable load weighing at most W - w that can be taken from the remaining n - 1 items plus the w_j - w remaining pounds of item j.

The 0-1 Knapsack Problem
The thief has a knapsack of capacity W. There are n items; the i-th item has value v_i and weight w_i. Goal: find x_i such that x_i is 0 or 1 for i = 1, 2, ..., n, the sum of w_i x_i is at most W, and the sum of x_i v_i is maximum.

0-1 Knapsack - Greedy Strategy Fails
E.g.: W = 50, with the same three items as above. Greedy by value per pound takes item 1 ($60) and item 2 ($100), for $160 with 20 pounds of capacity wasted; items 1 and 3 give $180. But the optimal load is items 2 and 3: $220. None of the solutions involving the greedy choice (item 1) leads to an optimal solution: the greedy-choice property does not hold.

0-1 Knapsack - Dynamic Programming
P(i, w) = the maximum profit that can be obtained from items 1 to i, if the knapsack has capacity w.
- Case 1: the thief takes item i: P(i, w) = v_i + P(i-1, w - w_i)
- Case 2: the thief does not take item i: P(i, w) = P(i-1, w)
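A quick sanity check of the claim that greed fails for the 0-1 problem: brute force over all subsets versus the greedy-by-value-per-pound choice on the three-item example (a sketch; function names are illustrative):

```python
from itertools import combinations

def best_01(W, items):
    """Optimal 0-1 knapsack value by brute force over all subsets
    (fine for 3 items; exponential in general)."""
    best = 0
    for r in range(len(items) + 1):
        for combo in combinations(items, r):
            if sum(w for _, w in combo) <= W:
                best = max(best, sum(v for v, _ in combo))
    return best

def greedy_01(W, items):
    """Greedy by value per pound, taking whole items only."""
    total, cap = 0, W
    for v, w in sorted(items, key=lambda it: it[0] / it[1], reverse=True):
        if w <= cap:
            total += v
            cap -= w
    return total

items = [(60, 10), (100, 20), (120, 30)]   # (value, weight) from the example
```

Here greedy_01(50, items) yields 160 while best_01(50, items) yields 220, so the greedy choice of item 1 is not part of any optimal load.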

0-1 Knapsack - Dynamic Programming

P(i, w) = max { v_i + P(i-1, w - w_i), P(i-1, w) }

The first term corresponds to item i being taken (valid only when w_i <= w), the second to item i not being taken. The table has rows i = 0..n and columns w = 0..W, with P(0, w) = P(i, 0) = 0, and is filled row by row: first i, then w.

Example: W = 5.

Item   Weight   Value
1      2        12
2      1        10
3      3        20
4      2        15

Applying P(i, w) = max { v_i + P(i-1, w - w_i), P(i-1, w) }:

w:       0   1   2   3   4   5
i = 0:   0   0   0   0   0   0
i = 1:   0   0  12  12  12  12
i = 2:   0  10  12  22  22  22
i = 3:   0  10  12  22  30  32
i = 4:   0  10  15  25  30  37

For instance, P(4, 5) = max { 15 + P(3, 3), P(3, 5) } = max { 15 + 22, 32 } = 37, the maximum value.

Reconstructing the Optimal Solution
Start at P(n, W). When the value came from P(i-1, w - w_i) (a move up and to the left), item i has been taken; when it came from P(i-1, w) (straight up), item i has not been taken. In the example this recovers items 4, 2, and 1.

Optimal Substructure
Consider the most valuable load that weighs at most W pounds. If we remove item j from this load, the remaining load must be the most valuable load weighing at most W - w_j that can be taken from the remaining n - 1 items.

Overlapping Subproblems
P(i, w) = max { v_i + P(i-1, w - w_i), P(i-1, w) }: many entries of row i depend on the same entries of row i-1, e.g., several subproblems P(i, w'), w' >= w, may all depend on P(i-1, w).

Readings
Chapters 15 and 16 (dynamic programming; greedy algorithms).
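The recurrence and the reconstruction rule above can be sketched in Python (illustrative names; items given as (value, weight) pairs, reported 1-indexed):

```python
def knapsack_01(W, items):
    """Bottom-up 0-1 knapsack: P[i][w] is the maximum profit from items
    1..i with capacity w. Returns (max profit, indices of taken items)."""
    n = len(items)
    P = [[0] * (W + 1) for _ in range(n + 1)]   # row 0 and column 0 stay 0
    for i in range(1, n + 1):
        v, w_i = items[i - 1]
        for w in range(W + 1):
            P[i][w] = P[i - 1][w]                       # case 2: item i not taken
            if w_i <= w:                                # case 1: item i taken
                P[i][w] = max(P[i][w], v + P[i - 1][w - w_i])
    # Reconstruction: a change between rows i-1 and i means item i was taken.
    taken, w = [], W
    for i in range(n, 0, -1):
        if P[i][w] != P[i - 1][w]:
            taken.append(i)
            w -= items[i - 1][1]
    return P[n][W], taken
```

On the example (W = 5, items [(12, 2), (10, 1), (20, 3), (15, 2)]) this returns 37 with items [4, 2, 1], matching the table above.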

CS473-Algorithms I. Lecture 10. Dynamic Programming. Cevdet Aykanat - Bilkent University Computer Engineering Department

CS473-Algorithms I. Lecture 10. Dynamic Programming. Cevdet Aykanat - Bilkent University Computer Engineering Department CS473-Algorithms I Lecture 1 Dynamic Programming 1 Introduction An algorithm design paradigm like divide-and-conquer Programming : A tabular method (not writing computer code) Divide-and-Conquer (DAC):

More information

Dynamic Programming (Part #2)

Dynamic Programming (Part #2) Dynamic Programming (Part #) Introduction to Algorithms MIT Press (Chapter 5) Matrix-Chain Multiplication Problem: given a sequence A, A,, A n, compute the product: A A A n Matrix compatibility: C = A

More information

Design and Analysis of Algorithms

Design and Analysis of Algorithms Design and Analysis of Algorithms CSE 5311 Lecture 16 Greedy algorithms Junzhou Huang, Ph.D. Department of Computer Science and Engineering CSE5311 Design and Analysis of Algorithms 1 Overview A greedy

More information

Dynamic Programming Assembly-Line Scheduling. Greedy Algorithms

Dynamic Programming Assembly-Line Scheduling. Greedy Algorithms Chapter 13 Greedy Algorithms Activity Selection Problem 0-1 Knapsack Problem Huffman Code Construction Dynamic Programming Assembly-Line Scheduling C-C Tsai P.1 Greedy Algorithms A greedy algorithm always

More information

Algorithms Dr. Haim Levkowitz

Algorithms Dr. Haim Levkowitz 91.503 Algorithms Dr. Haim Levkowitz Fall 2007 Lecture 4 Tuesday, 25 Sep 2007 Design Patterns for Optimization Problems Greedy Algorithms 1 Greedy Algorithms 2 What is Greedy Algorithm? Similar to dynamic

More information

Introduction to Algorithms

Introduction to Algorithms Introduction to Algorithms Dynamic Programming Well known algorithm design techniques: Brute-Force (iterative) ti algorithms Divide-and-conquer algorithms Another strategy for designing algorithms is dynamic

More information

Lecture 8. Dynamic Programming

Lecture 8. Dynamic Programming Lecture 8. Dynamic Programming T. H. Cormen, C. E. Leiserson and R. L. Rivest Introduction to Algorithms, 3rd Edition, MIT Press, 2009 Sungkyunkwan University Hyunseung Choo choo@skku.edu Copyright 2000-2018

More information

Dynamic Programming II

Dynamic Programming II June 9, 214 DP: Longest common subsequence biologists often need to find out how similar are 2 DNA sequences DNA sequences are strings of bases: A, C, T and G how to define similarity? DP: Longest common

More information

16.Greedy algorithms

16.Greedy algorithms 16.Greedy algorithms 16.1 An activity-selection problem Suppose we have a set S = {a 1, a 2,..., a n } of n proposed activities that with to use a resource. Each activity a i has a start time s i and a

More information

Chapter 3 Dynamic programming

Chapter 3 Dynamic programming Chapter 3 Dynamic programming 1 Dynamic programming also solve a problem by combining the solutions to subproblems. But dynamic programming considers the situation that some subproblems will be called

More information

15.4 Longest common subsequence

15.4 Longest common subsequence 15.4 Longest common subsequence Biological applications often need to compare the DNA of two (or more) different organisms A strand of DNA consists of a string of molecules called bases, where the possible

More information

Greedy Algorithms. CLRS Chapters Introduction to greedy algorithms. Design of data-compression (Huffman) codes

Greedy Algorithms. CLRS Chapters Introduction to greedy algorithms. Design of data-compression (Huffman) codes Greedy Algorithms CLRS Chapters 16.1 16.3 Introduction to greedy algorithms Activity-selection problem Design of data-compression (Huffman) codes (Minimum spanning tree problem) (Shortest-path problem)

More information

1 Non greedy algorithms (which we should have covered

1 Non greedy algorithms (which we should have covered 1 Non greedy algorithms (which we should have covered earlier) 1.1 Floyd Warshall algorithm This algorithm solves the all-pairs shortest paths problem, which is a problem where we want to find the shortest

More information

COMP251: Greedy algorithms

COMP251: Greedy algorithms COMP251: Greedy algorithms Jérôme Waldispühl School of Computer Science McGill University Based on (Cormen et al., 2002) Based on slides from D. Plaisted (UNC) & (goodrich & Tamassia, 2009) Disjoint sets

More information

Subsequence Definition. CS 461, Lecture 8. Today s Outline. Example. Assume given sequence X = x 1, x 2,..., x m. Jared Saia University of New Mexico

Subsequence Definition. CS 461, Lecture 8. Today s Outline. Example. Assume given sequence X = x 1, x 2,..., x m. Jared Saia University of New Mexico Subsequence Definition CS 461, Lecture 8 Jared Saia University of New Mexico Assume given sequence X = x 1, x 2,..., x m Let Z = z 1, z 2,..., z l Then Z is a subsequence of X if there exists a strictly

More information

Greedy Algorithms. Informal Definition A greedy algorithm makes its next step based only on the current state and simple calculations on the input.

Greedy Algorithms. Informal Definition A greedy algorithm makes its next step based only on the current state and simple calculations on the input. Greedy Algorithms Informal Definition A greedy algorithm makes its next step based only on the current state and simple calculations on the input. easy to design not always correct challenge is to identify

More information

Computer Sciences Department 1

Computer Sciences Department 1 1 Advanced Design and Analysis Techniques (15.1, 15.2, 15.3, 15.4 and 15.5) 3 Objectives Problem Formulation Examples The Basic Problem Principle of optimality Important techniques: dynamic programming

More information

We ve done. Now. Next

We ve done. Now. Next We ve done Matroid Theory Task scheduling problem (another matroid example) Dijkstra s algorithm (another greedy example) Dynamic Programming Now Matrix Chain Multiplication Longest Common Subsequence

More information

Design and Analysis of Algorithms 演算法設計與分析. Lecture 7 April 6, 2016 洪國寶

Design and Analysis of Algorithms 演算法設計與分析. Lecture 7 April 6, 2016 洪國寶 Design and Analysis of Algorithms 演算法設計與分析 Lecture 7 April 6, 2016 洪國寶 1 Course information (5/5) Grading (Tentative) Homework 25% (You may collaborate when solving the homework, however when writing up

More information

So far... Finished looking at lower bounds and linear sorts.

So far... Finished looking at lower bounds and linear sorts. So far... Finished looking at lower bounds and linear sorts. Next: Memoization -- Optimization problems - Dynamic programming A scheduling problem Matrix multiplication optimization Longest Common Subsequence

More information

Introduction to Algorithms

Introduction to Algorithms Introduction to Algorithms 6.046J/18.401J LECTURE 12 Dynamic programming Longest common subsequence Optimal substructure Overlapping subproblems Prof. Charles E. Leiserson Dynamic programming Design technique,

More information

CS473-Algorithms I. Lecture 11. Greedy Algorithms. Cevdet Aykanat - Bilkent University Computer Engineering Department

CS473-Algorithms I. Lecture 11. Greedy Algorithms. Cevdet Aykanat - Bilkent University Computer Engineering Department CS473-Algorithms I Lecture 11 Greedy Algorithms 1 Activity Selection Problem Input: a set S {1, 2,, n} of n activities s i =Start time of activity i, f i = Finish time of activity i Activity i takes place

More information

15.4 Longest common subsequence

15.4 Longest common subsequence 15.4 Longest common subsequence Biological applications often need to compare the DNA of two (or more) different organisms A strand of DNA consists of a string of molecules called bases, where the possible

More information

Design and Analysis of Algorithms 演算法設計與分析. Lecture 7 April 15, 2015 洪國寶

Design and Analysis of Algorithms 演算法設計與分析. Lecture 7 April 15, 2015 洪國寶 Design and Analysis of Algorithms 演算法設計與分析 Lecture 7 April 15, 2015 洪國寶 1 Course information (5/5) Grading (Tentative) Homework 25% (You may collaborate when solving the homework, however when writing

More information

Dynamic Programmming: Activity Selection

Dynamic Programmming: Activity Selection Dynamic Programmming: Activity Selection Select the maximum number of non-overlapping activities from a set of n activities A = {a 1,, a n } (sorted by finish times). Identify easier subproblems to solve.

More information

Greedy algorithms is another useful way for solving optimization problems.

Greedy algorithms is another useful way for solving optimization problems. Greedy Algorithms Greedy algorithms is another useful way for solving optimization problems. Optimization Problems For the given input, we are seeking solutions that must satisfy certain conditions. These

More information

Algorithm Design Techniques part I

Algorithm Design Techniques part I Algorithm Design Techniques part I Divide-and-Conquer. Dynamic Programming DSA - lecture 8 - T.U.Cluj-Napoca - M. Joldos 1 Some Algorithm Design Techniques Top-Down Algorithms: Divide-and-Conquer Bottom-Up

More information

CMPS 2200 Fall Dynamic Programming. Carola Wenk. Slides courtesy of Charles Leiserson with changes and additions by Carola Wenk

CMPS 2200 Fall Dynamic Programming. Carola Wenk. Slides courtesy of Charles Leiserson with changes and additions by Carola Wenk CMPS 00 Fall 04 Dynamic Programming Carola Wenk Slides courtesy of Charles Leiserson with changes and additions by Carola Wenk 9/30/4 CMPS 00 Intro. to Algorithms Dynamic programming Algorithm design technique

More information

IN101: Algorithmic techniques Vladimir-Alexandru Paun ENSTA ParisTech

IN101: Algorithmic techniques Vladimir-Alexandru Paun ENSTA ParisTech IN101: Algorithmic techniques Vladimir-Alexandru Paun ENSTA ParisTech License CC BY-NC-SA 2.0 http://creativecommons.org/licenses/by-nc-sa/2.0/fr/ Outline Previously on IN101 Python s anatomy Functions,

600.363/463 Algorithms - Fall 2013 Solution to Assignment 3

/463 Algorithms - Fall 2013 Solution to Assignment 3 600.363/463 Algorithms - Fall 2013 Solution to Assignment 3 (120 points) I (30 points) (Hint: This problem is similar to parenthesization in matrix-chain multiplication, except the special treatment on

Longest Common Subsequence. Definitions

Longest Common Subsequence. Definitions Longest Common Subsequence LCS is an interesting variation on the classical string matching problem: the task is that of finding the common portion of two strings (more precise definition in a couple of

Dynamic Programming part 2

Dynamic Programming, part 2. Week 7 objectives: more dynamic programming examples - matrix multiplication parenthesization - longest common subsequence; subproblem optimal structure; defining the dynamic recurrence

Dynamic Programming. Design and Analysis of Algorithms. Entwurf und Analyse von Algorithmen. Irene Parada. Design and Analysis of Algorithms

Dynamic Programming. Design and Analysis of Algorithms. Entwurf und Analyse von Algorithmen. Irene Parada. Design and Analysis of Algorithms Entwurf und Analyse von Algorithmen Dynamic Programming Overview Introduction Example 1 When and how to apply this method Example 2 Final remarks Introduction: when recursion is inefficient Example: Calculation

Lecture 13: Chain Matrix Multiplication

Lecture 13: Chain Matrix Multiplication. CLRS Section 15.2. Revised April 7, 2003. Outline of this Lecture: recalling matrix multiplication; the chain matrix multiplication problem; a dynamic programming algorithm

14 Dynamic Programming (2). Matrix-chain multiplication. P.D. Dr. Alexander Souza. Winter term 11/12

14 Dynamic. Matrix-chain multiplication. P.D. Dr. Alexander Souza. Winter term 11/12 Algorithms Theory 14 Dynamic Programming (2) Matrix-chain multiplication P.D. Dr. Alexander Souza Optimal substructure Dynamic programming is typically applied to optimization problems. An optimal solution

Efficient Sequential Algorithms, Comp309. Problems. Part 1: Algorithmic Paradigms

Efficient Sequential Algorithms, Comp309. Problems. Part 1: Algorithmic Paradigms Efficient Sequential Algorithms, Comp309 Part 1: Algorithmic Paradigms University of Liverpool References: T. H. Cormen, C. E. Leiserson, R. L. Rivest Introduction to Algorithms, Second Edition. MIT Press

CS 231: Algorithmic Problem Solving

CS 231: Algorithmic Problem Solving CS 231: Algorithmic Problem Solving Naomi Nishimura Module 5 Date of this version: June 14, 2018 WARNING: Drafts of slides are made available prior to lecture for your convenience. After lecture, slides

Homework3: Dynamic Programming - Answers

Homework3: Dynamic Programming - Answers Most Exercises are from your textbook: Homework3: Dynamic Programming - Answers 1. For the Rod Cutting problem (covered in lecture) modify the given top-down memoized algorithm (includes two procedures)

Main approach: always make the choice that looks best at the moment.

Main approach: always make the choice that looks best at the moment. Greedy algorithms Main approach: always make the choice that looks best at the moment. - More efficient than dynamic programming - Always make the choice that looks best at the moment (just one choice;

Greedy Algorithms CHAPTER 16

Greedy Algorithms, CHAPTER 16. In dynamic programming, the optimal solution is described in a recursive manner, and then is computed "bottom up". Dynamic programming is a powerful technique, but it often

Problem Strategies. 320 Greedy Strategies 6

Problem Strategies. 320 Greedy Strategies 6 Problem Strategies Weighted interval scheduling: 2 subproblems (include the interval or don t) Have to check out all the possibilities in either case, so lots of subproblem overlap dynamic programming:

ECE250: Algorithms and Data Structures Dynamic Programming Part B

ECE250: Algorithms and Data Structures Dynamic Programming Part B ECE250: Algorithms and Data Structures Dynamic Programming Part B Ladan Tahvildari, PEng, SMIEEE Associate Professor Software Technologies Applied Research (STAR) Group Dept. of Elect. & Comp. Eng. University

We augment RBTs to support operations on dynamic sets of intervals A closed interval is an ordered pair of real

14.3 Interval trees. We augment RBTs to support operations on dynamic sets of intervals. A closed interval is an ordered pair of real numbers [t1, t2], with t1 <= t2; the interval [t1, t2] represents the set {t : t1 <= t <= t2}. Open and half-open intervals

LCS-Length(X, Y). Running time: O(st). Set the c[i,0]'s and c[0,j]'s to 0; for i = 1 to s, for j = 1 to t: if x_i = y_j then ... else if ...

Recursive solution for finding an LCS of X and Y: if x_s = y_t, then find an LCS of X_{s-1} and Y_{t-1}, and then append x_s = y_t to this LCS; if x_s != y_t, then solve two subproblems: (1) find an LCS of X_{s-1} and

Greedy Algorithms. Algorithms

Greedy Algorithms. Algorithms Greedy Algorithms Algorithms Greedy Algorithms Many algorithms run from stage to stage At each stage, they make a decision based on the information available A Greedy algorithm makes decisions At each

Algorithmic Paradigms. Chapter 6 Dynamic Programming. Steps in Dynamic Programming. Dynamic Programming. Dynamic Programming Applications

Algorithmic Paradigms. Chapter 6 Dynamic Programming. Steps in Dynamic Programming. Dynamic Programming. Dynamic Programming Applications lgorithmic Paradigms reed. Build up a solution incrementally, only optimizing some local criterion. hapter Dynamic Programming Divide-and-conquer. Break up a problem into two sub-problems, solve each sub-problem

Exercises Optimal binary search trees root

Exercises Optimal binary search trees root 5.5 Optimal binary search trees 403 e w 5 5 j 4.75 i j 4.00 i 3.75.00 3 3 0.70 0.80 3.5.0. 4 0.55 0.50 0.60 4 0.90 0.70 0.60 0.90 5 0.45 0.35 0. 0.50 5 0 0.45 0.40 0.5 0. 0.50 6 0 0. 0.5 0.5 0.0 0.35 6

CS60020: Foundations of Algorithm Design and Machine Learning. Sourangshu Bhattacharya

CS60020: Foundations of Algorithm Design and Machine Learning. Sourangshu Bhattacharya CS60020: Foundations of Algorithm Design and Machine Learning Sourangshu Bhattacharya Dynamic programming Design technique, like divide-and-conquer. Example: Longest Common Subsequence (LCS) Given two

r_n = max_{1 <= i <= n} (p_i + r_{n-i}) (Note that by allowing i to be n, we handle the case where the rod is not cut at all.)

r_n = max_{1 <= i <= n} (p_i + r_{n-i}) (Note that by allowing i to be n, we handle the case where the rod is not cut at all.) Dynamic programming is a problem-solving method that is applicable to many different types of problems. I think it is best learned by example, so we will mostly do examples today. 1 Rod cutting Suppose
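That recurrence can be computed bottom-up in a few lines of Python; in this sketch (names mine) prices[i] is the price of a rod of length i, with prices[0] unused:

```python
def cut_rod(prices, n):
    """Maximum revenue r_n obtainable from a rod of length n.

    Implements r_k = max over 1 <= i <= k of (prices[i] + r_{k-i});
    taking i = k corresponds to not cutting the rod at all.
    """
    r = [0] * (n + 1)
    for k in range(1, n + 1):
        r[k] = max(prices[i] + r[k - i] for i in range(1, k + 1))
    return r[n]
```

The double loop gives O(n^2) time, versus the exponential cost of the naive recursion.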

12 Dynamic Programming (2) Matrix-chain Multiplication Segmented Least Squares

12 Dynamic Programming (2) Matrix-chain Multiplication Segmented Least Squares 12 Dynamic Programming (2) Matrix-chain Multiplication Segmented Least Squares Optimal substructure Dynamic programming is typically applied to optimization problems. An optimal solution to the original

Data Structures and Algorithms Week 8

Data Structures and Algorithms Week 8 Data Structures and Algorithms Week 8 Dynamic programming Fibonacci numbers Optimization problems Matrix multiplication optimization Principles of dynamic programming Longest Common Subsequence Algorithm

15.Dynamic Programming

15. Dynamic Programming. Dynamic Programming is an algorithm design technique for optimization problems: often minimizing or maximizing. Like divide and conquer, DP solves problems by combining solutions

Greedy Algorithms Huffman Coding

Greedy Algorithms Huffman Coding Greedy Algorithms Huffman Coding Huffman Coding Problem Example: Release 29.1 of 15-Feb-2005 of TrEMBL Protein Database contains 1,614,107 sequence entries, comprising 505,947,503 amino acids. There are

Main approach: always make the choice that looks best at the moment. - Doesn t always result in globally optimal solution, but for many problems does

Main approach: always make the choice that looks best at the moment. - Doesn t always result in globally optimal solution, but for many problems does Greedy algorithms Main approach: always make the choice that looks best at the moment. - More efficient than dynamic programming - Doesn t always result in globally optimal solution, but for many problems

Tutorial 6-7. Dynamic Programming and Greedy

Tutorial 6-7. Dynamic Programming and Greedy Tutorial 6-7 Dynamic Programming and Greedy Dynamic Programming Why DP? Natural Recursion may be expensive. For example, the Fibonacci: F(n)=F(n-1)+F(n-2) Recursive implementation memoryless : time= 1

ECE608 - Chapter 16 answers

ECE608 - Chapter 16 answers ¼ À ÈÌ Ê ½ ÈÊÇ Ä ÅË ½µ ½ º½¹ ¾µ ½ º½¹ µ ½ º¾¹½ µ ½ º¾¹¾ µ ½ º¾¹ µ ½ º ¹ µ ½ º ¹ µ ½ ¹½ ½ ECE68 - Chapter 6 answers () CLR 6.-4 Let S be the set of n activities. The obvious solution of using Greedy-Activity-

Chain Matrix Multiplication

Chain Matrix Multiplication Chain Matrix Multiplication Version of November 5, 2014 Version of November 5, 2014 Chain Matrix Multiplication 1 / 27 Outline Outline Review of matrix multiplication. The chain matrix multiplication problem.

Greedy Algorithms CLRS Laura Toma, csci2200, Bowdoin College

Greedy Algorithms CLRS Laura Toma, csci2200, Bowdoin College Greedy Algorithms CLRS 16.1-16.2 Laura Toma, csci2200, Bowdoin College Overview. Sometimes we can solve optimization problems with a technique called greedy. A greedy algorithm picks the option that looks

Algorithms: COMP3121/3821/9101/9801

Algorithms: COMP3121/3821/9101/9801 NEW SOUTH WALES Algorithms: COMP3121/3821/9101/9801 Aleks Ignjatović School of Computer Science and Engineering University of New South Wales TOPIC 5: DYNAMIC PROGRAMMING COMP3121/3821/9101/9801 1 / 38

Lectures 12 and 13 Dynamic programming: weighted interval scheduling

Lectures 12 and 13 Dynamic programming: weighted interval scheduling Lectures 12 and 13 Dynamic programming: weighted interval scheduling COMP 523: Advanced Algorithmic Techniques Lecturer: Dariusz Kowalski Lectures 12-13: Dynamic Programming 1 Overview Last week: Graph

Analysis of Algorithms - Greedy algorithms -

Analysis of Algorithms - Greedy algorithms - Analysis of Algorithms - Greedy algorithms - Andreas Ermedahl MRTC (Mälardalens Real-Time Reseach Center) andreas.ermedahl@mdh.se Autumn 2003 Greedy Algorithms Another paradigm for designing algorithms

Unit 4: Dynamic Programming

Unit 4: Dynamic Programming Unit 4: Dynamic Programming Course contents: Assembly-line scheduling Matrix-chain multiplication Longest common subsequence Optimal binary search trees Applications: Cell flipping, rod cutting, optimal

Analysis of Algorithms Prof. Karen Daniels

Analysis of Algorithms Prof. Karen Daniels UMass Lowell Computer Science 91.503 Analysis of Algorithms Prof. Karen Daniels Spring, 2010 Lecture 2 Tuesday, 2/2/10 Design Patterns for Optimization Problems Greedy Algorithms Algorithmic Paradigm Context

Design and Analysis of Algorithms

Design and Analysis of Algorithms Design and Analysis of Algorithms Instructor: SharmaThankachan Lecture 10: Greedy Algorithm Slides modified from Dr. Hon, with permission 1 About this lecture Introduce Greedy Algorithm Look at some problems

Write an algorithm to find the maximum value that can be obtained by an appropriate placement of parentheses in the expression

Write an algorithm to find the maximum value that can be obtained by an appropriate placement of parentheses in the expression Chapter 5 Dynamic Programming Exercise 5.1 Write an algorithm to find the maximum value that can be obtained by an appropriate placement of parentheses in the expression x 1 /x /x 3 /... x n 1 /x n, where

Chapter 16: Greedy Algorithm

Chapter 16: Greedy Algorithm Chapter 16: Greedy Algorithm 1 About this lecture Introduce Greedy Algorithm Look at some problems solvable by Greedy Algorithm 2 Coin Changing Suppose that in a certain country, the coin dominations consist

Dynamic Programming. Introduction, Weighted Interval Scheduling, Knapsack. Tyler Moore. Lecture 15/16

Dynamic Programming. Introduction, Weighted Interval Scheduling, Knapsack. Tyler Moore. Lecture 15/16 Dynamic Programming Introduction, Weighted Interval Scheduling, Knapsack Tyler Moore CSE, SMU, Dallas, TX Lecture /6 Greedy. Build up a solution incrementally, myopically optimizing some local criterion.

We ve done. Now. Next

We ve done. Now. Next We ve done Fast Fourier Transform Polynomial Multiplication Now Introduction to the greedy method Activity selection problem How to prove that a greedy algorithm works Huffman coding Matroid theory Next

Framework for Design of Dynamic Programming Algorithms

Framework for Design of Dynamic Programming Algorithms CSE 441T/541T Advanced Algorithms September 22, 2010 Framework for Design of Dynamic Programming Algorithms Dynamic programming algorithms for combinatorial optimization generalize the strategy we studied

Greedy Algorithms. Alexandra Stefan

Greedy Algorithms. Alexandra Stefan Greedy Algorithms Alexandra Stefan 1 Greedy Method for Optimization Problems Greedy: take the action that is best now (out of the current options) it may cause you to miss the optimal solution You build

Solving NP-hard Problems on Special Instances

Solving NP-hard Problems on Special Instances Solving NP-hard Problems on Special Instances Solve it in poly- time I can t You can assume the input is xxxxx No Problem, here is a poly-time algorithm 1 Solving NP-hard Problems on Special Instances

Algorithms IV. Dynamic Programming. Guoqiang Li. School of Software, Shanghai Jiao Tong University

Algorithms IV. Dynamic Programming. Guoqiang Li. School of Software, Shanghai Jiao Tong University Algorithms IV Dynamic Programming Guoqiang Li School of Software, Shanghai Jiao Tong University Dynamic Programming Shortest Paths in Dags, Revisited Shortest Paths in Dags, Revisited The special distinguishing

CS 473: Fundamental Algorithms, Spring Dynamic Programming. Sariel (UIUC) CS473 1 Spring / 42. Part I. Longest Increasing Subsequence

CS 473: Fundamental Algorithms, Spring Dynamic Programming. Sariel (UIUC) CS473 1 Spring / 42. Part I. Longest Increasing Subsequence CS 473: Fundamental Algorithms, Spring 2011 Dynamic Programming Lecture 8 February 15, 2011 Sariel (UIUC) CS473 1 Spring 2011 1 / 42 Part I Longest Increasing Subsequence Sariel (UIUC) CS473 2 Spring 2011

Dynamic Programming CS 445. Example: Floyd-Warshall Algorithm: Computing all-pairs shortest paths

CS 445 Dynamic Programming. Some of the slides are courtesy of Charles Leiserson, with small changes by Carola Wenk. Example: Floyd-Warshall algorithm: computing all-pairs shortest paths. Given G(V, E), with weight

15-451/651: Design & Analysis of Algorithms January 26, 2015 Dynamic Programming I last changed: January 28, 2015

15-451/651: Design & Analysis of Algorithms January 26, 2015 Dynamic Programming I last changed: January 28, 2015 15-451/651: Design & Analysis of Algorithms January 26, 2015 Dynamic Programming I last changed: January 28, 2015 Dynamic Programming is a powerful technique that allows one to solve many different types

Dynamic Programming 1

Dynamic Programming 1 Dynamic Programming 1 Jie Wang University of Massachusetts Lowell Department of Computer Science 1 I thank Prof. Zachary Kissel of Merrimack College for sharing his lecture notes with me; some of the examples

Efficient Sequential Algorithms, Comp309. Motivation. Longest Common Subsequence. Part 3. String Algorithms

Efficient Sequential Algorithms, Comp309. Motivation. Longest Common Subsequence. Part 3. String Algorithms Efficient Sequential Algorithms, Comp39 Part 3. String Algorithms University of Liverpool References: T. H. Cormen, C. E. Leiserson, R. L. Rivest Introduction to Algorithms, Second Edition. MIT Press (21).

CSC 505, Spring 2005 Week 6 Lectures page 1 of 9

CSC 505, Spring 2005 Week 6 Lectures page 1 of 9 CSC 505, Spring 2005 Week 6 Lectures page 1 of 9 Objectives: learn general strategies for problems about order statistics learn how to find the median (or k-th largest) in linear average-case number of

Dynamic Programming. Lecture Overview Introduction

Dynamic Programming. Lecture Overview Introduction Lecture 12 Dynamic Programming 12.1 Overview Dynamic Programming is a powerful technique that allows one to solve many different types of problems in time O(n 2 ) or O(n 3 ) for which a naive approach

Data Structure and Algorithm II Homework #2 Due: 13pm, Monday, October 31, === Homework submission instructions ===

Data Structure and Algorithm II Homework #2 Due: 13pm, Monday, October 31, === Homework submission instructions === Data Structure and Algorithm II Homework #2 Due: 13pm, Monday, October 31, 2011 === Homework submission instructions === Submit the answers for writing problems (including your programming report) through

Unit-5 Dynamic Programming 2016

Unit-5 Dynamic Programming 2016 5 Dynamic programming Overview, Applications - shortest path in graph, matrix multiplication, travelling salesman problem, Fibonacci Series. 20% 12 Origin: Richard Bellman, 1957 Programming referred to

CSE 421 Applications of DFS(?) Topological sort

CSE 421 Applications of DFS(?) Topological sort CSE 421 Applications of DFS(?) Topological sort Yin Tat Lee 1 Precedence Constraints In a directed graph, an edge (i, j) means task i must occur before task j. Applications Course prerequisite: course

Algorithms: Dynamic Programming

Algorithms: Dynamic Programming Algorithms: Dynamic Programming Amotz Bar-Noy CUNY Spring 2012 Amotz Bar-Noy (CUNY) Dynamic Programming Spring 2012 1 / 58 Dynamic Programming General Strategy: Solve recursively the problem top-down based

Longest Common Subsequences and Substrings

Longest Common Subsequences and Substrings Longest Common Subsequences and Substrings Version November 5, 2014 Version November 5, 2014 Longest Common Subsequences and Substrings 1 / 16 Longest Common Subsequence Given two sequences X = (x 1, x

Design and Analysis of Algorithms

Design and Analysis of Algorithms CSE 101, Winter 018 D/Q Greed SP s DP LP, Flow B&B, Backtrack Metaheuristics P, NP Design and Analysis of Algorithms Lecture 8: Greed Class URL: http://vlsicad.ucsd.edu/courses/cse101-w18/ Optimization

Algorithms for Data Science

Algorithms for Data Science Algorithms for Data Science CSOR W4246 Eleni Drinea Computer Science Department Columbia University Thursday, October 1, 2015 Outline 1 Recap 2 Shortest paths in graphs with non-negative edge weights (Dijkstra

Presentation for use with the textbook, Algorithm Design and Applications, by M. T. Goodrich and R. Tamassia, Wiley, Dynamic Programming

Presentation for use with the textbook, Algorithm Design and Applications, by M. T. Goodrich and R. Tamassia, Wiley, 2015. Dynamic Programming. Terrible Fibonacci computation. Fibonacci sequence: f(n) = f(n-1) + f(n-2)

CMSC 451: Lecture 10 Dynamic Programming: Weighted Interval Scheduling Tuesday, Oct 3, 2017

CMSC 451: Lecture 10 Dynamic Programming: Weighted Interval Scheduling Tuesday, Oct 3, 2017 CMSC 45 CMSC 45: Lecture Dynamic Programming: Weighted Interval Scheduling Tuesday, Oct, Reading: Section. in KT. Dynamic Programming: In this lecture we begin our coverage of an important algorithm design

CMSC 451: Dynamic Programming

CMSC 451: Dynamic Programming CMSC 41: Dynamic Programming Slides By: Carl Kingsford Department of Computer Science University of Maryland, College Park Based on Sections 6.1&6.2 of Algorithm Design by Kleinberg & Tardos. Dynamic Programming

16 Greedy Algorithms

16 Greedy Algorithms. Optimization algorithms typically go through a sequence of steps, with a set of choices at each step. For many optimization problems, using dynamic programming to determine the best choices

CS141: Intermediate Data Structures and Algorithms Greedy Algorithms

CS141: Intermediate Data Structures and Algorithms Greedy Algorithms CS141: Intermediate Data Structures and Algorithms Greedy Algorithms Amr Magdy Activity Selection Problem Given a set of activities S = {a 1, a 2,, a n } where each activity i has a start time s i and

Longest Common Subsequence, Knapsack, Independent Set Scribe: Wilbur Yang (2016), Mary Wootters (2017) Date: November 6, 2017

Longest Common Subsequence, Knapsack, Independent Set Scribe: Wilbur Yang (2016), Mary Wootters (2017) Date: November 6, 2017 CS161 Lecture 13 Longest Common Subsequence, Knapsack, Independent Set Scribe: Wilbur Yang (2016), Mary Wootters (2017) Date: November 6, 2017 1 Overview Last lecture, we talked about dynamic programming

ECE608 - Chapter 15 answers

ECE608 - Chapter 15 answers ¼ À ÈÌ Ê ½ ÈÊÇ Ä ÅË ½µ ½ º¾¹¾ ¾µ ½ º¾¹ µ ½ º¾¹ µ ½ º ¹½ µ ½ º ¹¾ µ ½ º ¹ µ ½ º ¹¾ µ ½ º ¹ µ ½ º ¹ ½¼µ ½ º ¹ ½½µ ½ ¹ ½ ECE608 - Chapter 15 answers (1) CLR 15.2-2 MATRIX CHAIN MULTIPLY(A, s, i, j) 1. if

Lecture 22: Dynamic Programming

Lecture 22: Dynamic Programming Lecture 22: Dynamic Programming COSC242: Algorithms and Data Structures Brendan McCane Department of Computer Science, University of Otago Dynamic programming The iterative and memoised algorithms for

memoization or iteration over subproblems the direct iterative algorithm a basic outline of dynamic programming

memoization or iteration over subproblems the direct iterative algorithm a basic outline of dynamic programming Dynamic Programming 1 Introduction to Dynamic Programming weighted interval scheduling the design of a recursive solution memoizing the recursion 2 Principles of Dynamic Programming memoization or iteration

Greedy Algorithms and Matroids. Andreas Klappenecker

Greedy Algorithms and Matroids. Andreas Klappenecker Greedy Algorithms and Matroids Andreas Klappenecker Giving Change Coin Changing Suppose we have n types of coins with values v[1] > v[2] > > v[n] > 0 Given an amount C, a positive integer, the following

G205 Fundamentals of Computer Engineering. CLASS 21, Mon. Nov. 22, 2004. Stefano Basagni. Fall 2004, M-W, 1:30pm-3:10pm

G205 Fundamentals of Computer Engineering. CLASS 21, Mon. Nov Stefano Basagni Fall 2004 M-W, 1:30pm-3:10pm G205 Fundamentals of Computer Engineering CLASS 21, Mon. Nov. 22 2004 Stefano Basagni Fall 2004 M-W, 1:30pm-3:10pm Greedy Algorithms, 1 Algorithms for Optimization Problems Sequence of steps Choices at

CSE 101, Winter Design and Analysis of Algorithms. Lecture 11: Dynamic Programming, Part 2

CSE 101, Winter Design and Analysis of Algorithms. Lecture 11: Dynamic Programming, Part 2 CSE 101, Winter 2018 Design and Analysis of Algorithms Lecture 11: Dynamic Programming, Part 2 Class URL: http://vlsicad.ucsd.edu/courses/cse101-w18/ Goal: continue with DP (Knapsack, All-Pairs SPs, )

Dynamic Programming: 1D Optimization. Dynamic Programming: 2D Optimization. Fibonacci Sequence. Crazy 8 s. Edit Distance

Dynamic Programming: 1D Optimization. Dynamic Programming: 2D Optimization. Fibonacci Sequence. Crazy 8 s. Edit Distance Dynamic Programming: 1D Optimization Fibonacci Sequence To efficiently calculate F [x], the xth element of the Fibonacci sequence, we can construct the array F from left to right (or bottom up ). We start

Announcements. Programming assignment 1 posted - need to submit a.sh file

Announcements. Programming assignment 1 posted - need to submit a.sh file Greedy algorithms Announcements Programming assignment 1 posted - need to submit a.sh file The.sh file should just contain what you need to type to compile and run your program from the terminal Greedy
