Unit 4: Dynamic Programming


Course contents:
- Assembly-line scheduling
- Matrix-chain multiplication
- Longest common subsequence
- Optimal binary search trees
- Applications: cell flipping, rod cutting, optimal polygon triangulation, flip-chip routing, technology mapping for logic synthesis

Reading: Chapter 15

Dynamic Programming (DP) vs. Divide-and-Conquer

Both solve problems by combining the solutions to subproblems.
- Divide-and-conquer algorithms partition a problem into independent subproblems, solve the subproblems recursively, and then combine their solutions to solve the original problem. They are inefficient if they solve the same subproblem more than once.
- Dynamic programming (DP) is applicable when the subproblems are not independent. DP solves each subproblem just once.

Assembly-line Scheduling

[Figure: two assembly lines, each with n stations; a chassis enters with time e_1 or e_2, passes stations S_{i,1}..S_{i,n} with assembly times a_{i,j}, may transfer between lines with times t_{i,j}, and exits with time x_1 or x_2.]

An auto chassis enters each assembly line, has parts added at stations, and a finished auto exits at the end of the line.
- S_{i,j}: the jth station on line i
- a_{i,j}: the assembly time required at station S_{i,j}
- t_{i,j}: the transfer time from station S_{i,j} to station j+1 of the other line
- e_i (x_i): the time to enter (exit) line i

Optimal Substructure

Objective: Determine which stations to choose to minimize the total manufacturing time for one auto.
- Brute force: Theta(2^n). Why? Each of the n stations presents a binary choice of line.
- The problem is linearly ordered and cannot be rearranged => dynamic programming?
- Optimal substructure: If the fastest way through station S_{1,j} is through S_{1,j-1}, then the chassis must have taken a fastest way from the starting point through S_{1,j-1}.

Overlapping Subproblems: Recurrence

Overlapping subproblem: The fastest way through station S_{1,j} is either through S_{1,j-1} and then S_{1,j}, or through S_{2,j-1}, then a transfer to line 1, and then S_{1,j}.

f_i[j]: the fastest time from the starting point through S_{i,j}

f_1[j] = e_1 + a_{1,1}                                             if j = 1
f_1[j] = min(f_1[j-1] + a_{1,j}, f_2[j-1] + t_{2,j-1} + a_{1,j})   if j >= 2

(f_2[j] is symmetric.) The fastest time all the way through the factory:
f* = min(f_1[n] + x_1, f_2[n] + x_2)

Computing the Fastest Time

Fastest-Way(a, t, e, x, n)
 1. f_1[1] = e_1 + a_{1,1}
 2. f_2[1] = e_2 + a_{2,1}
 3. for j = 2 to n
 4.   if f_1[j-1] + a_{1,j} <= f_2[j-1] + t_{2,j-1} + a_{1,j}
 5.     f_1[j] = f_1[j-1] + a_{1,j}
 6.     l_1[j] = 1
 7.   else f_1[j] = f_2[j-1] + t_{2,j-1} + a_{1,j}
 8.     l_1[j] = 2
 9.   if f_2[j-1] + a_{2,j} <= f_1[j-1] + t_{1,j-1} + a_{2,j}
10.     f_2[j] = f_2[j-1] + a_{2,j}
11.     l_2[j] = 2
12.   else f_2[j] = f_1[j-1] + t_{1,j-1} + a_{2,j}
13.     l_2[j] = 1
14. if f_1[n] + x_1 <= f_2[n] + x_2
15.   f* = f_1[n] + x_1
16.   l* = 1
17. else f* = f_2[n] + x_2
18.   l* = 2

l_i[j]: the line number whose station j-1 is used in a fastest way through S_{i,j}.
Running time? Linear time!
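The recurrence and pseudocode above translate directly into Python. The following is an illustrative sketch with my own naming and 0-based indexing, not code from the course:

```python
def fastest_way(a, t, e, x):
    """Assembly-line scheduling DP (0-based indices).

    a[i][j]: assembly time at station j of line i (i in {0, 1})
    t[i][j]: transfer time from station j of line i to station j+1 of the other line
    e[i], x[i]: entry and exit times for line i
    Returns (fastest total time, chosen line per station).
    """
    n = len(a[0])
    f = [[0] * n for _ in range(2)]   # f[i][j]: fastest time through station j of line i
    l = [[0] * n for _ in range(2)]   # l[i][j]: line used at station j-1 on that fastest way
    f[0][0] = e[0] + a[0][0]
    f[1][0] = e[1] + a[1][0]
    for j in range(1, n):
        for i in range(2):
            o = 1 - i                                  # the other line
            stay = f[i][j - 1] + a[i][j]
            switch = f[o][j - 1] + t[o][j - 1] + a[i][j]
            f[i][j], l[i][j] = (stay, i) if stay <= switch else (switch, o)
    line = 0 if f[0][n - 1] + x[0] <= f[1][n - 1] + x[1] else 1
    best = f[line][n - 1] + x[line]
    path = [line]                                      # reconstruct choices back to front
    for j in range(n - 1, 0, -1):
        line = l[line][j]
        path.append(line)
    return best, path[::-1]
```

On the textbook's six-station example instance (entry times 2 and 4, exit times 3 and 2), this returns a total time of 38 with station choices on lines 1, 2, 1, 2, 2, 1 (1-based line numbers).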

Constructing the Fastest Way

Print-Station(l, n)
1. i = l*
2. print "line " i ", station " n
3. for j = n downto 2
4.   i = l_i[j]
5.   print "line " i ", station " j-1

Example output for n = 6:
line 1, station 6
line 2, station 5
line 2, station 4
line 1, station 3
line 2, station 2
line 1, station 1

An Example

[Figure: the two-line example with n = 6 stations and tables of f_1[j], f_2[j], l_1[j], and l_2[j]; the numeric entries are lost in transcription.] The final answer is f* = 38 with l* = 1.

Dynamic Programming (DP)

Typically applies to optimization problems.
- Generic approach: calculate the solutions to all subproblems; proceed from the small subproblems to the larger ones; compute a subproblem from previously computed results for smaller subproblems; store the solution to each subproblem in a table and never recompute it.
- Development of a DP algorithm:
  1. Characterize the structure of an optimal solution.
  2. Recursively define the value of an optimal solution.
  3. Compute the value of an optimal solution bottom-up.
  4. Construct an optimal solution from computed information (omitted if only the optimal value is required).

When to Use Dynamic Programming (DP)

- DP computes a recurrence efficiently by storing partial results, so it is efficient only when the number of distinct partial results is small.
- Hopeless configurations: the n! permutations of an n-element set, the 2^n subsets of an n-element set, etc.
- Promising configurations: the sum_{i=1}^{n} i = n(n+1)/2 contiguous substrings of an n-character string, the n(n+1)/2 possible subtrees of a binary search tree, etc.
- DP works best on objects that are linearly ordered and cannot be rearranged: linear assembly lines, matrices in a chain, characters in a string, points around the boundary of a polygon, points on a line/circle, the left-to-right order of leaves in a search tree, fixed-order cell flipping, etc. Objects are ordered left to right? Smell DP?

DP Example: Matrix-Chain Multiplication

If A is a p x q matrix and B a q x r matrix, then C = AB is a p x r matrix:

C[i, j] = sum_{k=1}^{q} A[i, k] B[k, j]

Time complexity: O(pqr).

Matrix-Multiply(A, B)
1. if A.columns != B.rows
2.   error "incompatible dimensions"
3. else let C be a new A.rows x B.columns matrix
4.   for i = 1 to A.rows
5.     for j = 1 to B.columns
6.       c_{ij} = 0
7.       for k = 1 to A.columns
8.         c_{ij} = c_{ij} + a_{ik} b_{kj}
9. return C

DP Example: Matrix-Chain Multiplication

The matrix-chain multiplication problem
- Input: a chain <A_1, A_2, ..., A_n> of n matrices, where matrix A_i has dimension p_{i-1} x p_i.
- Objective: parenthesize the product A_1 A_2 ... A_n to minimize the number of scalar multiplications.
- Example dimensions: A_1: 4 x 2; A_2: 2 x 5; A_3: 5 x 1.
  (A_1 A_2) A_3: total multiplications = 4 x 2 x 5 + 4 x 5 x 1 = 60
  A_1 (A_2 A_3): total multiplications = 2 x 5 x 1 + 4 x 2 x 1 = 18
- So the order of multiplications can make a big difference!
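The Matrix-Multiply pseudocode corresponds to the following plain-Python sketch over list-of-lists matrices (naming is my own):

```python
def matrix_multiply(A, B):
    """Multiply a p x q matrix A by a q x r matrix B in O(pqr) time."""
    if len(A[0]) != len(B):
        raise ValueError("incompatible dimensions")
    p, q, r = len(A), len(B), len(B[0])
    C = [[0] * r for _ in range(p)]
    for i in range(p):
        for j in range(r):
            for k in range(q):   # inner product of row i of A and column j of B
                C[i][j] += A[i][k] * B[k][j]
    return C
```

For example, matrix_multiply([[1, 2], [3, 4]], [[5, 6], [7, 8]]) returns [[19, 22], [43, 50]].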

Matrix-Chain Multiplication: Brute Force

A = A_1 A_2 ... A_n: How do we evaluate A using the minimum number of scalar multiplications?
- Brute force: check all possible orders? P(n), the number of ways to fully parenthesize a product of n matrices, satisfies P(n) = sum_{k=1}^{n-1} P(k) P(n-k) for n >= 2, which grows as Omega(4^n / n^{3/2}) (the Catalan numbers) — exponential in n.
- Any efficient solution? The matrix chain is linearly ordered and cannot be rearranged! Smell dynamic programming?

Matrix-Chain Multiplication

- m[i, j]: the minimum number of multiplications to compute the matrix A_{i..j} = A_i A_{i+1} ... A_j, 1 <= i <= j <= n.
- m[1, n]: the cheapest cost to compute A_{1..n}.
- Matrix A_i has dimension p_{i-1} x p_i.
- Applicability of dynamic programming:
  - Optimal substructure: an optimal solution contains within it optimal solutions to subproblems.
  - Overlapping subproblems: a recursive algorithm revisits the same subproblem over and over again; there are only Theta(n^2) subproblems.

Bottom-Up DP Matrix-Chain Order

A_i has dimension p_{i-1} x p_i.

Matrix-Chain-Order(p)
 1. n = p.length - 1
 2. let m[1..n, 1..n] and s[1..n-1, 2..n] be new tables
 3. for i = 1 to n
 4.   m[i, i] = 0
 5. for l = 2 to n            // l is the chain length
 6.   for i = 1 to n - l + 1
 7.     j = i + l - 1
 8.     m[i, j] = infinity
 9.     for k = i to j - 1
10.       q = m[i, k] + m[k+1, j] + p_{i-1} p_k p_j
11.       if q < m[i, j]
12.         m[i, j] = q
13.         s[i, j] = k
14. return m and s

Constructing an Optimal Solution

s[i, j]: the value of k such that the optimal parenthesization of A_i A_{i+1} ... A_j splits the product between A_k and A_{k+1}.
Optimal A_{1..n} multiplication: A_{1..s[1, n]} A_{s[1, n]+1..n}.
Example: the call Print-Optimal-Parens(s, 1, 6) prints ((A_1 (A_2 A_3)) ((A_4 A_5) A_6)).

Print-Optimal-Parens(s, i, j)
1. if i == j
2.   print "A"_i
3. else print "("
4.   Print-Optimal-Parens(s, i, s[i, j])
5.   Print-Optimal-Parens(s, s[i, j] + 1, j)
6.   print ")"
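Matrix-Chain-Order and Print-Optimal-Parens can be sketched together in Python (the function returns the parenthesization as a string instead of printing; naming is my own):

```python
import math

def matrix_chain_order(p):
    """Bottom-up DP. p[0..n] are the dimensions; matrix A_i is p[i-1] x p[i].

    Returns (m, s) with m[i][j] = min scalar multiplications for A_i..A_j
    and s[i][j] = the optimal split point k (1-based; row/column 0 unused).
    """
    n = len(p) - 1
    m = [[0] * (n + 1) for _ in range(n + 1)]
    s = [[0] * (n + 1) for _ in range(n + 1)]
    for length in range(2, n + 1):          # chain length
        for i in range(1, n - length + 2):
            j = i + length - 1
            m[i][j] = math.inf
            for k in range(i, j):           # try every split A_{i..k} (A_{k+1..j})
                q = m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                if q < m[i][j]:
                    m[i][j] = q
                    s[i][j] = k
    return m, s

def optimal_parens(s, i, j):
    """Rebuild the optimal parenthesization from the split table s."""
    if i == j:
        return f"A{i}"
    k = s[i][j]
    return "(" + optimal_parens(s, i, k) + optimal_parens(s, k + 1, j) + ")"
```

For the three-matrix example with p = [4, 2, 5, 1], m[1][3] = 18 and the optimal order is (A1(A2A3)), matching the slide.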

Top-Down, Recursive Matrix-Chain Order

Recursive-Matrix-Chain(p, i, j)
1. if i == j
2.   return 0
3. m[i, j] = infinity
4. for k = i to j - 1
5.   q = Recursive-Matrix-Chain(p, i, k) + Recursive-Matrix-Chain(p, k+1, j) + p_{i-1} p_k p_j
6.   if q < m[i, j]
7.     m[i, j] = q
8. return m[i, j]

Time complexity: Omega(2^n), since T(n) >= 1 + sum_{k=1}^{n-1} (T(k) + T(n-k) + 1).

Top-Down DP Matrix-Chain Order (Memoization)

Complexity: O(n^2) space for the m[] table and O(n^3) time to fill in the O(n^2) entries (each takes O(n) time).

Memoized-Matrix-Chain(p)
1. n = p.length - 1
2. let m[1..n, 1..n] be a new table
3. for i = 1 to n
4.   for j = i to n
5.     m[i, j] = infinity
6. return Lookup-Chain(m, p, 1, n)

Lookup-Chain(m, p, i, j)
1. if m[i, j] < infinity
2.   return m[i, j]
3. if i == j
4.   m[i, j] = 0
5. else for k = i to j - 1
6.   q = Lookup-Chain(m, p, i, k) + Lookup-Chain(m, p, k+1, j) + p_{i-1} p_k p_j
7.   if q < m[i, j]
8.     m[i, j] = q
9. return m[i, j]
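The same memoization idea is natural in Python with a dictionary cache standing in for the m[] table (an illustrative sketch; functools.lru_cache would work equally well):

```python
def memoized_matrix_chain(p):
    """Top-down matrix-chain DP with memoization; p as in Matrix-Chain-Order."""
    n = len(p) - 1
    memo = {}  # (i, j) -> min cost for A_i..A_j; each subproblem solved once

    def lookup(i, j):
        if (i, j) not in memo:
            if i == j:
                memo[(i, j)] = 0
            else:
                memo[(i, j)] = min(
                    lookup(i, k) + lookup(k + 1, j) + p[i - 1] * p[k] * p[j]
                    for k in range(i, j)
                )
        return memo[(i, j)]

    return lookup(1, n)
```

It computes the same values as the bottom-up version, e.g. 18 for the dimensions [4, 2, 5, 1].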

Two Approaches to DP

1. Bottom-up iterative approach
   - Start with a recursive divide-and-conquer algorithm.
   - Find the dependencies between the subproblems (whose solutions are needed for computing a subproblem).
   - Solve the subproblems in the correct order.
2. Top-down recursive approach (memoization)
   - Start with a recursive divide-and-conquer algorithm.
   - Keep the top-down structure of the original algorithm.
   - Save solutions to subproblems in a table (possibly a lot of storage).
   - Recurse on a subproblem only if the solution is not already available in the table.

If all subproblems must be solved at least once, bottom-up DP is better due to less overhead for recursion and table maintenance. If many subproblems need not be solved, top-down DP is better since it computes only those that are required.

Longest Common Subsequence

Problem: Given X = <x_1, x_2, ..., x_m> and Y = <y_1, y_2, ..., y_n>, find the longest common subsequence (LCS) of X and Y.
- Example: X = <a, b, c, b, d, a, b> and Y = <b, d, c, a, b, a>; LCS = <b, c, b, a> (also, LCS = <b, d, a, b>).
- Example (DNA sequencing): S1 = ACCGGTCGAGATGCAG; S2 = GTCGTTCGGAATGCAT; an LCS is S3 = CGTCGGATGCA.

Brute-force method: enumerate all subsequences of X and check whether each appears in Y. Each subsequence of X corresponds to a subset of the indices {1, 2, ..., m} of the elements of X, so there are 2^m subsequences of X.

Optimal Substructure for LCS

Let X = <x_1, x_2, ..., x_m> and Y = <y_1, y_2, ..., y_n> be sequences, and Z = <z_1, z_2, ..., z_k> be an LCS of X and Y.
1. If x_m = y_n, then z_k = x_m = y_n and Z_{k-1} is an LCS of X_{m-1} and Y_{n-1}.
2. If x_m != y_n, then z_k != x_m implies Z is an LCS of X_{m-1} and Y.
3. If x_m != y_n, then z_k != y_n implies Z is an LCS of X and Y_{n-1}.

c[i, j]: the length of an LCS of X_i and Y_j; c[m, n]: the length of an LCS of X and Y.
Basis: c[0, j] = 0 and c[i, 0] = 0.

Top-Down DP for LCS

c[i, j]: the length of an LCS of X_i and Y_j, where X_i = <x_1, x_2, ..., x_i> and Y_j = <y_1, y_2, ..., y_j>; c[m, n] gives the LCS of X and Y.
The top-down dynamic program: initialize c[i, 0] = c[0, j] = 0 and c[i, j] = NIL otherwise.

TD-LCS(i, j)
1. if c[i, j] == NIL
2.   if x_i == y_j
3.     c[i, j] = TD-LCS(i-1, j-1) + 1
4.   else c[i, j] = max(TD-LCS(i, j-1), TD-LCS(i-1, j))
5. return c[i, j]

Bottom-Up DP for LCS

- Find the right order to solve the subproblems: to compute c[i, j], we need c[i-1, j-1], c[i-1, j], and c[i, j-1].
- b[i, j]: points to the table entry corresponding to the optimal subproblem solution chosen when computing c[i, j].

LCS-Length(X, Y)
 1. m = X.length
 2. n = Y.length
 3. let b[1..m, 1..n] and c[0..m, 0..n] be new tables
 4. for i = 1 to m
 5.   c[i, 0] = 0
 6. for j = 0 to n
 7.   c[0, j] = 0
 8. for i = 1 to m
 9.   for j = 1 to n
10.     if x_i == y_j
11.       c[i, j] = c[i-1, j-1] + 1
12.       b[i, j] = "diag"
13.     elseif c[i-1, j] >= c[i, j-1]
14.       c[i, j] = c[i-1, j]
15.       b[i, j] = "up"
16.     else c[i, j] = c[i, j-1]
17.       b[i, j] = "left"
18. return c and b

Example of LCS

LCS time and space complexity: O(mn).
X = <A, B, C, B, D, A, B> and Y = <B, D, C, A, B, A>; LCS = <B, C, B, A>.
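A compact Python sketch of LCS-Length plus traceback; instead of keeping a separate arrow table b, it reconstructs the arrows from the c table, which gives the same result (naming is my own):

```python
def lcs(X, Y):
    """Bottom-up LCS: O(mn) table fill, then an O(m+n) traceback."""
    m, n = len(X), len(Y)
    c = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if X[i - 1] == Y[j - 1]:
                c[i][j] = c[i - 1][j - 1] + 1
            else:
                c[i][j] = max(c[i - 1][j], c[i][j - 1])
    # Trace back from c[m][n] to recover one LCS.
    out = []
    i, j = m, n
    while i > 0 and j > 0:
        if X[i - 1] == Y[j - 1]:          # "diag" arrow: part of the LCS
            out.append(X[i - 1])
            i, j = i - 1, j - 1
        elif c[i - 1][j] >= c[i][j - 1]:  # "up" arrow
            i -= 1
        else:                             # "left" arrow
            j -= 1
    return "".join(reversed(out))
```

With this tie-breaking, lcs("ABCBDAB", "BDCABA") returns "BCBA"; other tie-breaking rules may return a different LCS of the same length, such as <B, D, A, B>.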

Constructing an LCS

Trace back from b[m, n] to b[1, 1], following the arrows: O(m + n) time.

Print-LCS(b, X, i, j)
1. if i == 0 or j == 0
2.   return
3. if b[i, j] == "diag"
4.   Print-LCS(b, X, i-1, j-1)
5.   print x_i
6. elseif b[i, j] == "up"
7.   Print-LCS(b, X, i-1, j)
8. else Print-LCS(b, X, i, j-1)

Optimal Binary Search Tree

Given a sequence K = <k_1, k_2, ..., k_n> of n distinct keys in sorted order (k_1 < k_2 < ... < k_n), a set of probabilities P = <p_1, p_2, ..., p_n> for searching the keys in K, and Q = <q_0, q_1, q_2, ..., q_n> for unsuccessful searches (corresponding to D = <d_0, d_1, d_2, ..., d_n> of n+1 distinct dummy keys, with d_i representing all values between k_i and k_{i+1}), construct a binary search tree whose expected search cost is smallest.

[Figure: two binary search trees over the keys k_1..k_5 with dummy leaves d_0..d_5.]

An Example

[Figure: probability table of p_i and q_i for the five-key example and two candidate trees; the tree rooted at k_2 costs 2.80, while the optimal tree costs 2.75.]

E[search cost in T] = sum_{i=1}^{n} (depth_T(k_i) + 1) p_i + sum_{i=0}^{n} (depth_T(d_i) + 1) q_i
                    = 1 + sum_{i=1}^{n} depth_T(k_i) p_i + sum_{i=0}^{n} depth_T(d_i) q_i,

since the probabilities sum to 1.

Optimal Substructure

If an optimal binary search tree T has a subtree T' containing keys k_i, ..., k_j, then this subtree T' must also be optimal for the subproblem with keys k_i, ..., k_j and dummy keys d_{i-1}, ..., d_j.
- Given keys k_i, ..., k_j with k_r (i <= r <= j) as the root, the left subtree contains the keys k_i, ..., k_{r-1} (and dummy keys d_{i-1}, ..., d_{r-1}) and the right subtree contains the keys k_{r+1}, ..., k_j (and dummy keys d_r, ..., d_j).
- For the subtree with keys k_i, ..., k_j and root k_i, the left subtree contains keys k_i, ..., k_{i-1} (no key) with the dummy key d_{i-1}.

Overlapping Subproblems: Recurrence

e[i, j]: the expected cost of searching an optimal binary search tree containing the keys k_i, ..., k_j. We want e[1, n].
- e[i, i-1] = q_{i-1} (only the dummy key d_{i-1}).
- If k_r (i <= r <= j) is the root of an optimal subtree containing keys k_i, ..., k_j, and we let

  w(i, j) = sum_{l=i}^{j} p_l + sum_{l=i-1}^{j} q_l,

  then
  e[i, j] = p_r + (e[i, r-1] + w(i, r-1)) + (e[r+1, j] + w(r+1, j))
          = e[i, r-1] + e[r+1, j] + w(i, j).

Recurrence:
e[i, j] = q_{i-1}                                                  if j = i - 1
e[i, j] = min_{i <= r <= j} { e[i, r-1] + e[r+1, j] + w(i, j) }    if i <= j

Node depths increase by 1 after merging two subtrees, and so do the costs.

Computing the Optimal Cost

- Need a table e[1..n+1, 0..n] for e[i, j] (why e[1, 0] and e[n+1, n]? They are the empty-subtree boundary cases).
- Apply a recurrence to compute w(i, j) incrementally (why? to avoid recomputing the sums from scratch):

  w[i, i-1] = q_{i-1}
  w[i, j]   = w[i, j-1] + p_j + q_j    if i <= j

- root[i, j]: the index r for which k_r is the root of an optimal search tree containing keys k_i, ..., k_j.

Optimal-BST(p, q, n)
 1. let e[1..n+1, 0..n], w[1..n+1, 0..n], and root[1..n, 1..n] be new tables
 2. for i = 1 to n + 1
 3.   e[i, i-1] = q_{i-1}
 4.   w[i, i-1] = q_{i-1}
 5. for l = 1 to n
 6.   for i = 1 to n - l + 1
 7.     j = i + l - 1
 8.     e[i, j] = infinity
 9.     w[i, j] = w[i, j-1] + p_j + q_j
10.     for r = i to j
11.       t = e[i, r-1] + e[r+1, j] + w[i, j]
12.       if t < e[i, j]
13.         e[i, j] = t
14.         root[i, j] = r
15. return e and root
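A direct Python transcription of Optimal-BST, using 1-based tables padded with an unused row/column (the probabilities in the usage note are the textbook's five-key example, included only as a sanity check):

```python
import math

def optimal_bst(p, q):
    """p[1..n]: key probabilities (p[0] unused); q[0..n]: dummy-key probabilities.

    Returns (e, root) where e[i][j] is the expected search cost of an optimal
    BST over keys k_i..k_j and root[i][j] is the index of its root key.
    """
    n = len(p) - 1
    e = [[0.0] * (n + 1) for _ in range(n + 2)]
    w = [[0.0] * (n + 1) for _ in range(n + 2)]
    root = [[0] * (n + 1) for _ in range(n + 1)]
    for i in range(1, n + 2):
        e[i][i - 1] = q[i - 1]          # empty subtree: only dummy key d_{i-1}
        w[i][i - 1] = q[i - 1]
    for length in range(1, n + 1):
        for i in range(1, n - length + 2):
            j = i + length - 1
            e[i][j] = math.inf
            w[i][j] = w[i][j - 1] + p[j] + q[j]
            for r in range(i, j + 1):   # try every key k_r as the root
                t = e[i][r - 1] + e[r + 1][j] + w[i][j]
                if t < e[i][j]:
                    e[i][j] = t
                    root[i][j] = r
    return e, root
```

With p = [0, 0.15, 0.10, 0.05, 0.10, 0.20] and q = [0.05, 0.10, 0.05, 0.05, 0.05, 0.10], this yields e[1][5] = 2.75 with root[1][5] = 2, matching the example's optimal cost.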

Example

e[1, 1] = e[1, 0] + e[2, 1] + w(1, 1)
e[1, 5] = e[1, 1] + e[3, 5] + w(1, 5)    (r = 2; what about r = 1, 3, ...?)

[Figure: the filled e, w, and root tables for the five-key example, with the optimal tree rooted at k_2; most numeric entries are lost in transcription.]

Appendix A: Cell Compaction with Flipping

- Different cell boundaries need different minimum spacings, so optimizing cell orientations yields a smaller chip area (e.g., flipping cell c gives a smaller spacing).
- Consider the cells in a single row with a fixed cell order. The problem exhibits optimal substructure => dynamic programming.

[Figure: the cell-flipping cost recurrence and cell-flipping graph. T is the cost function, n the number of cells in the row, s the minimum spacing; the nodes of the cell-flipping graph represent the two orientations of each cell, and the recurrence takes the minimum over both orientations of the preceding cell. The exact formulas are lost in transcription.] (Y.-W. Chang)

Example of DP-Based Cell Flipping

[Figure: a row of cells before ("unflipped") and after ("flipped") flipping c_2 and c_3, with the cell-flipping graph and DP cost values propagated left to right; the numeric labels are lost in transcription.]

At each cell, the recurrence takes the minimum cost over the two orientations of the preceding cell, giving a linear-time dynamic programming algorithm. (Y.-W. Chang)

Appendix B: Rod Cutting

- Cut steel rods into pieces to maximize the revenue.
- Assumptions: each cut is free; rod lengths are integral numbers.
- Input: a length n and a table of prices p_i, for i = 1, 2, ..., n.
- Output: the maximum revenue obtainable for rods whose lengths sum to n, computed as the sum of the prices of the individual rods.

[Table: length i vs. price p_i; in the example, a rod of length 4 achieves the maximum revenue r_4 = 5 + 5 = 10 by being cut into two length-2 pieces.]

Objects are linearly ordered (and cannot be rearranged)??

Optimal Rod Cutting

- If p_n is large enough, an optimal solution might require no cuts at all, i.e., just leave the rod n units long.
- Solution for the maximum revenue r_n of a rod of length n:

  r_n = max(p_n, r_1 + r_{n-1}, r_2 + r_{n-2}, ..., r_{n-1} + r_1)

- For the example price table:
  r_1 = 1  from solution 1 = 1 (no cuts)
  r_2 = 5  from solution 2 = 2 (no cuts)
  r_3 = 8  from solution 3 = 3 (no cuts)
  r_4 = 10 from solution 4 = 2 + 2
  r_5 = 13 from solution 5 = 2 + 3
  r_6 = 17 from solution 6 = 6 (no cuts)
  r_7 = 18 from solution 7 = 1 + 6 or 7 = 2 + 2 + 3
  r_8 = 22 from solution 8 = 2 + 6

Optimal Substructure

- To solve the original problem, solve subproblems on smaller sizes. The optimal solution to the original problem incorporates optimal solutions to the subproblems, and we may solve the subproblems independently.
- After making a cut, we have two subproblems. Example: max revenue r_7 = 18 = r_4 + r_3 = (r_2 + r_2) + r_3, or r_1 + r_6.
- Decomposition with only one subproblem: some cut gives a first piece of length i on the left and a remaining piece of length n - i on the right:

  r_n = max_{1 <= i <= n} (p_i + r_{n-i})

Recursive Top-Down Solution

Cut-Rod(p, n)
1. if n == 0
2.   return 0
3. q = -infinity
4. for i = 1 to n
5.   q = max(q, p[i] + Cut-Rod(p, n - i))
6. return q

Inefficient solution: Cut-Rod calls itself repeatedly, even on subproblems it has already solved (e.g., Cut-Rod(p, 4) calls Cut-Rod(p, 3), which again calls Cut-Rod(p, 1) and Cut-Rod(p, 0) already reached along other paths). The recursion satisfies

T(n) = 1 + sum_{j=0}^{n-1} T(j) for n > 0, with T(0) = 1,

so T(n) = 2^n. Overlapping subproblems?

Top-Down DP Cut-Rod with Memoization

Complexity: O(n^2) time. Each subproblem (sizes 0, 1, ..., n) is solved just once, and solving a subproblem of size n iterates the for loop n times.

Memoized-Cut-Rod(p, n)
1. let r[0..n] be a new array
2. for i = 0 to n
3.   r[i] = -infinity
4. return Memoized-Cut-Rod-Aux(p, n, r)

Memoized-Cut-Rod-Aux(p, n, r)
1. if r[n] >= 0
2.   return r[n]
3. if n == 0
4.   q = 0
5. else q = -infinity
6.   for i = 1 to n      // each overlapping subproblem is solved just once
7.     q = max(q, p[i] + Memoized-Cut-Rod-Aux(p, n-i, r))
8. r[n] = q
9. return q

Bottom-Up DP Cut-Rod

Complexity: O(n^2) time. Sort the subproblems by size and solve the smaller ones first; when solving a subproblem, we have already solved the smaller subproblems we need.

Bottom-Up-Cut-Rod(p, n)
1. let r[0..n] be a new array
2. r[0] = 0
3. for j = 1 to n
4.   q = -infinity
5.   for i = 1 to j
6.     q = max(q, p[i] + r[j - i])
7.   r[j] = q
8. return r[n]

Bottom-Up DP with Solution Construction

Extend the bottom-up approach to record not just optimal values but also optimal choices: save the size of the first piece cut in an optimal solution for a problem of size j in s[j].

Extended-Bottom-Up-Cut-Rod(p, n)
 1. let r[0..n] and s[0..n] be new arrays
 2. r[0] = 0
 3. for j = 1 to n
 4.   q = -infinity
 5.   for i = 1 to j
 6.     if q < p[i] + r[j - i]
 7.       q = p[i] + r[j - i]
 8.       s[j] = i
 9.   r[j] = q
10. return r and s

Print-Cut-Rod-Solution(p, n)
1. (r, s) = Extended-Bottom-Up-Cut-Rod(p, n)
2. while n > 0
3.   print s[n]
4.   n = n - s[n]

[Table: the resulting r[i] and s[i] values for the example price table; the numeric entries are lost in transcription.]
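The extended bottom-up version and the solution printer can be sketched in Python as follows (the printer returns the piece lengths instead of printing them; naming is my own):

```python
import math

def extended_bottom_up_cut_rod(p, n):
    """p[i]: price of a rod of length i, with p[0] = 0.

    Returns (r, s): r[j] = max revenue for length j,
    s[j] = length of the first piece cut in an optimal solution for length j.
    """
    r = [0] * (n + 1)
    s = [0] * (n + 1)
    for j in range(1, n + 1):
        q = -math.inf
        for i in range(1, j + 1):       # first piece of length i; the rest is r[j-i]
            if q < p[i] + r[j - i]:
                q = p[i] + r[j - i]
                s[j] = i
        r[j] = q
    return r, s

def cut_rod_solution(p, n):
    """Follow the s table to list the piece lengths of one optimal cutting."""
    r, s = extended_bottom_up_cut_rod(p, n)
    best = r[n]
    pieces = []
    while n > 0:
        pieces.append(s[n])
        n -= s[n]
    return best, pieces
```

With the standard price table p = [0, 1, 5, 8, 9, 10, 17, 17, 20], cut_rod_solution(p, 7) gives revenue 18 with pieces [1, 6].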

Appendix C: Optimal Polygon Triangulation

Terminology: polygon, interior, exterior, boundary, convex polygon, triangulation.

The optimal polygon triangulation problem: Given a convex polygon P = <v_0, v_1, ..., v_{n-1}> and a weight function w defined on triangles, find a triangulation that minimizes the total weight. One possible weight function on a triangle is

w(v_i v_j v_k) = |v_i v_j| + |v_j v_k| + |v_k v_i|,

where |v_i v_j| is the Euclidean distance from v_i to v_j.

Optimal Polygon Triangulation (cont'd)

Correspondence between a full parenthesization, a full binary tree (parse tree), and a triangulation: a full binary tree with n-1 leaves corresponds to a triangulation of a polygon with n sides.
t[i, j]: the weight of an optimal triangulation of the polygon <v_{i-1}, v_i, ..., v_j>.
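Using the perimeter weight function above, the triangulation DP can be sketched directly over vertices v_0..v_{m-1}; this uses an equivalent gap-based formulation of the t table rather than the slide's exact indexing, and the naming is my own:

```python
import math

def optimal_triangulation(verts):
    """verts: vertices of a convex polygon, in order around the boundary.

    t[i][j]: min total triangle weight to triangulate the sub-polygon
    v_i, v_{i+1}, ..., v_j, where each triangle's weight is its perimeter.
    """
    m = len(verts)

    def w(i, k, j):  # perimeter of triangle (v_i, v_k, v_j)
        return (math.dist(verts[i], verts[k])
                + math.dist(verts[k], verts[j])
                + math.dist(verts[j], verts[i]))

    t = [[0.0] * m for _ in range(m)]
    for gap in range(2, m):              # sub-polygons with 3 or more vertices
        for i in range(m - gap):
            j = i + gap
            # The edge (v_i, v_j) forms a triangle with some middle vertex v_k.
            t[i][j] = min(t[i][k] + t[k][j] + w(i, k, j)
                          for k in range(i + 1, j))
    return t[0][m - 1]
```

For a unit square the two possible triangulations are symmetric, and both cost 4 + 2*sqrt(2).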

Pseudocode: Optimal Polygon Triangulation

Matrix-Chain-Order is a special case of the optimal polygon triangulation problem: only Line 9 of Matrix-Chain-Order needs to be modified.
Complexity: runs in O(n^3) time and uses O(n^2) space.

Optimal-Polygon-Triangulation(P)
 1. n = P.length
 2. for i = 1 to n
 3.   t[i, i] = 0
 4. for l = 2 to n
 5.   for i = 1 to n - l + 1
 6.     j = i + l - 1
 7.     t[i, j] = infinity
 8.     for k = i to j - 1
 9.       q = t[i, k] + t[k+1, j] + w(v_{i-1} v_k v_j)
10.       if q < t[i, j]
11.         t[i, j] = q
12.         s[i, j] = k
13. return t and s

Appendix D: LCS for Flip-Chip/InFO Routing

(Lee, Lin, and Chang, ICCAD-09; Lin, Lin, and Chang, ICCAD-16)
- Given: a set of I/O pads on I/O pad rings, a set of bump pads on bump pad rings, and a set of nets/connections.
- Objective: connect I/O pads and bump pads according to a predefined assignment between the I/O and bump pads such that the total wirelength is minimized.

[Figure: flip-chip structure showing I/O pads, bump pads, rings, nets, and the pad assignment.]

Minimize # of Detoured Nets by LCS

Cut the rings into lines/segments (forming linear orders).

[Figure: a ring cut into a pad sequence S_d and a bump-pad sequence S_b; in the example, S_d = [1, 2, 1, 3, 4, 3] and S_b = [3, 2, 1, 4], and a detour is needed only for net n_3.]

#Detours Minimization by Dynamic Programming

- Longest common subsequence (LCS) computation: e.g., for seq.1 = <1, 2, 3> and seq.2 = <3, 1, 2>, one common subsequence is <3>, and the LCS is <1, 2>.
- Maximum planar subset of chords (MPSC) computation: e.g., for the chord set {3, 4, 5}, one planar subset is {3}, and the MPSC is {4, 5}. (Y.-W. Chang)

Supowit's Algorithm for Finding MPSC

(Supowit, "Finding a maximum planar subset of a set of nets in a channel," IEEE TCAD.)
- Problem: Given a set of chords of a circle, find a maximum planar subset of chords (no two chords crossing).
- Label the vertices on the circle 0 to n-1.
- Compute MIS(i, j): the size of a maximum independent (planar) set of chords between vertices i and j, i < j. The answer is MIS(0, n-1).

Dynamic Programming in Supowit's Algorithm

Apply dynamic programming to compute MIS(i, j) over intervals of increasing length. (Y.-W. Chang)

Ring-by-Ring Routing

- Decompose the chip into rings of pads.
- Initialize the I/O-pad sequences.
- Route from the inner rings to the outer rings, exchanging the net order on the current ring to be consistent with the I/O pads.
- Keep applying the LCS and MPSC algorithms between two adjacent rings to minimize #detours.
- Over 100X speedups over ILP. (Y.-W. Chang)

Global Routing Results

The global routing result of circuit fc64: a routing path for each net is guided by a set of segments.

Detailed Routing: Phase 1

Segment-by-segment routing in counter-clockwise order, as compacted as possible.

Detailed Routing: Phase 2

Net-by-net re-routing in clockwise order, with wirelength and number-of-bends minimization.

Appendix E: Cell-Based VLSI Design Style

[Figure: a cell-based VLSI layout.]

Pattern Graphs for an Example Library

[Figure: the pattern graphs for the gates of an example library.]

Technology Mapping

Technology mapping: the optimization problem of finding a minimum-cost covering of the subject graph by choosing from the collection of pattern graphs for all gates in the library.
- A cover is a collection of pattern graphs such that every node of the subject graph is contained in one (or more) of the pattern graphs.
- The cover is further constrained so that each input required by a pattern graph is actually an output of some other pattern graph.

Trivial Covering

Mapping the example subject graph directly into 2-input NAND gates and 1-input inverters gives 8 2-input NAND gates and 7 inverters, for an area cost of 23. Best covering?

Optimal Tree Covering by Dynamic Programming

If the subject directed acyclic graph (DAG) is a tree, then a polynomial-time algorithm to find the minimum cover exists, based on dynamic programming (optimal substructure? overlapping subproblems?).
- Given: subject trees (the networks to be mapped) and library cells.
- Consider a node n of the subject tree. Recursive assumption: for every child of n, a best match that implements it is already known. The cost of a leaf is 0.
- For each pattern tree that matches at n, compute its cost as the cost of implementing each node that the pattern requires as an input, plus the cost of the pattern itself.
- Choose the lowest-cost matching pattern to implement n.

Best Covering

A best covering, with an area of 15, obtained by the dynamic programming approach.


More information

Computer Sciences Department 1

Computer Sciences Department 1 1 Advanced Design and Analysis Techniques (15.1, 15.2, 15.3, 15.4 and 15.5) 3 Objectives Problem Formulation Examples The Basic Problem Principle of optimality Important techniques: dynamic programming

More information

(Feodor F. Dragan) Department of Computer Science Kent State University. Advanced Algorithms, Feodor F. Dragan, Kent State University 1

(Feodor F. Dragan) Department of Computer Science Kent State University. Advanced Algorithms, Feodor F. Dragan, Kent State University 1 $GYDQFH $OJRULWKPV (Feodor F. Dragan) Department of Computer Science Kent State University Advanced Algorithms, Feodor F. Dragan, Kent State University Textbook: Thomas Cormen, Charles Leisterson, Ronald

More information

Ensures that no such path is more than twice as long as any other, so that the tree is approximately balanced

Ensures that no such path is more than twice as long as any other, so that the tree is approximately balanced 13 Red-Black Trees A red-black tree (RBT) is a BST with one extra bit of storage per node: color, either RED or BLACK Constraining the node colors on any path from the root to a leaf Ensures that no such

More information

Dynamic programming II - The Recursion Strikes Back

Dynamic programming II - The Recursion Strikes Back Chapter 5 Dynamic programming II - The Recursion Strikes Back By Sariel Har-Peled, December 17, 2012 1 Version: 0.4 No, mademoiselle, I don t capture elephants. I content myself with living among them.

More information

We ve done. Now. Next

We ve done. Now. Next We ve done Matroid Theory Task scheduling problem (another matroid example) Dijkstra s algorithm (another greedy example) Dynamic Programming Now Matrix Chain Multiplication Longest Common Subsequence

More information

CS 231: Algorithmic Problem Solving

CS 231: Algorithmic Problem Solving CS 231: Algorithmic Problem Solving Naomi Nishimura Module 5 Date of this version: June 14, 2018 WARNING: Drafts of slides are made available prior to lecture for your convenience. After lecture, slides

More information

Dynamic Programming Group Exercises

Dynamic Programming Group Exercises Name: Name: Name: Dynamic Programming Group Exercises Adapted from material by Cole Frederick Please work the following problems in groups of 2 or 3. Use additional paper as needed, and staple the sheets

More information

Dynamic Programming Matrix-chain Multiplication

Dynamic Programming Matrix-chain Multiplication 1 / 32 Dynamic Programming Matrix-chain Multiplication CS 584: Algorithm Design and Analysis Daniel Leblanc 1 1 Senior Adjunct Instructor Portland State University Maseeh College of Engineering and Computer

More information

Dynamic Programming. Nothing to do with dynamic and nothing to do with programming.

Dynamic Programming. Nothing to do with dynamic and nothing to do with programming. Dynamic Programming Deliverables Dynamic Programming basics Binomial Coefficients Weighted Interval Scheduling Matrix Multiplication /1 Knapsack Longest Common Subsequence 6/12/212 6:56 PM copyright @

More information

Design and Analysis of Algorithms 演算法設計與分析. Lecture 7 April 6, 2016 洪國寶

Design and Analysis of Algorithms 演算法設計與分析. Lecture 7 April 6, 2016 洪國寶 Design and Analysis of Algorithms 演算法設計與分析 Lecture 7 April 6, 2016 洪國寶 1 Course information (5/5) Grading (Tentative) Homework 25% (You may collaborate when solving the homework, however when writing up

More information

- Main approach is recursive, but holds answers to subproblems in a table so that can be used again without re-computing

- Main approach is recursive, but holds answers to subproblems in a table so that can be used again without re-computing Dynamic Programming class 2 - Main approach is recursive, but holds answers to subproblems in a table so that can be used again without re-computing - Can be formulated both via recursion and saving in

More information

Optimization II: Dynamic Programming

Optimization II: Dynamic Programming Chapter 12 Optimization II: Dynamic Programming In the last chapter, we saw that greedy algorithms are efficient solutions to certain optimization problems. However, there are optimization problems for

More information

CSE 101, Winter Design and Analysis of Algorithms. Lecture 11: Dynamic Programming, Part 2

CSE 101, Winter Design and Analysis of Algorithms. Lecture 11: Dynamic Programming, Part 2 CSE 101, Winter 2018 Design and Analysis of Algorithms Lecture 11: Dynamic Programming, Part 2 Class URL: http://vlsicad.ucsd.edu/courses/cse101-w18/ Goal: continue with DP (Knapsack, All-Pairs SPs, )

More information

15-451/651: Design & Analysis of Algorithms January 26, 2015 Dynamic Programming I last changed: January 28, 2015

15-451/651: Design & Analysis of Algorithms January 26, 2015 Dynamic Programming I last changed: January 28, 2015 15-451/651: Design & Analysis of Algorithms January 26, 2015 Dynamic Programming I last changed: January 28, 2015 Dynamic Programming is a powerful technique that allows one to solve many different types

More information

/463 Algorithms - Fall 2013 Solution to Assignment 3

/463 Algorithms - Fall 2013 Solution to Assignment 3 600.363/463 Algorithms - Fall 2013 Solution to Assignment 3 (120 points) I (30 points) (Hint: This problem is similar to parenthesization in matrix-chain multiplication, except the special treatment on

More information

1 Dynamic Programming

1 Dynamic Programming Recitation 13 Dynamic Programming Parallel and Sequential Data Structures and Algorithms, 15-210 (Fall 2013) November 20, 2013 1 Dynamic Programming Dynamic programming is a technique to avoid needless

More information

Dynamic Programming. An Enumeration Approach. Matrix Chain-Products. Matrix Chain-Products (not in book)

Dynamic Programming. An Enumeration Approach. Matrix Chain-Products. Matrix Chain-Products (not in book) Matrix Chain-Products (not in book) is a general algorithm design paradigm. Rather than give the general structure, let us first give a motivating example: Matrix Chain-Products Review: Matrix Multiplication.

More information

CS141: Intermediate Data Structures and Algorithms Dynamic Programming

CS141: Intermediate Data Structures and Algorithms Dynamic Programming CS141: Intermediate Data Structures and Algorithms Dynamic Programming Amr Magdy Programming? In this context, programming is a tabular method Other examples: Linear programing Integer programming 2 Rod

More information

y j LCS-Length(X,Y) Running time: O(st) set c[i,0] s and c[0,j] s to 0 for i=1 to s for j=1 to t if x i =y j then else if

y j LCS-Length(X,Y) Running time: O(st) set c[i,0] s and c[0,j] s to 0 for i=1 to s for j=1 to t if x i =y j then else if Recursive solution for finding LCS of X and Y if x s =y t, then find an LCS of X s-1 and Y t-1, and then append x s =y t to this LCS if x s y t, then solve two subproblems: (1) find an LCS of X s-1 and

More information

Lecturers: Sanjam Garg and Prasad Raghavendra March 20, Midterm 2 Solutions

Lecturers: Sanjam Garg and Prasad Raghavendra March 20, Midterm 2 Solutions U.C. Berkeley CS70 : Algorithms Midterm 2 Solutions Lecturers: Sanjam Garg and Prasad aghavra March 20, 207 Midterm 2 Solutions. (0 points) True/False Clearly put your answers in the answer box in front

More information

CS 170 DISCUSSION 8 DYNAMIC PROGRAMMING. Raymond Chan raychan3.github.io/cs170/fa17.html UC Berkeley Fall 17

CS 170 DISCUSSION 8 DYNAMIC PROGRAMMING. Raymond Chan raychan3.github.io/cs170/fa17.html UC Berkeley Fall 17 CS 170 DISCUSSION 8 DYNAMIC PROGRAMMING Raymond Chan raychan3.github.io/cs170/fa17.html UC Berkeley Fall 17 DYNAMIC PROGRAMMING Recursive problems uses the subproblem(s) solve the current one. Dynamic

More information

Presentation for use with the textbook, Algorithm Design and Applications, by M. T. Goodrich and R. Tamassia, Wiley, Dynamic Programming

Presentation for use with the textbook, Algorithm Design and Applications, by M. T. Goodrich and R. Tamassia, Wiley, Dynamic Programming Presentation for use with the textbook, Algorithm Design and Applications, by M. T. Goodrich and R. Tamassia, Wiley, 25 Dynamic Programming Terrible Fibonacci Computation Fibonacci sequence: f = f(n) 2

More information

ECE250: Algorithms and Data Structures Dynamic Programming Part B

ECE250: Algorithms and Data Structures Dynamic Programming Part B ECE250: Algorithms and Data Structures Dynamic Programming Part B Ladan Tahvildari, PEng, SMIEEE Associate Professor Software Technologies Applied Research (STAR) Group Dept. of Elect. & Comp. Eng. University

More information

Dynamic Programming. Lecture Overview Introduction

Dynamic Programming. Lecture Overview Introduction Lecture 12 Dynamic Programming 12.1 Overview Dynamic Programming is a powerful technique that allows one to solve many different types of problems in time O(n 2 ) or O(n 3 ) for which a naive approach

More information

Today: Matrix Subarray (Divide & Conquer) Intro to Dynamic Programming (Rod cutting) COSC 581, Algorithms January 21, 2014

Today: Matrix Subarray (Divide & Conquer) Intro to Dynamic Programming (Rod cutting) COSC 581, Algorithms January 21, 2014 Today: Matrix Subarray (Divide & Conquer) Intro to Dynamic Programming (Rod cutting) COSC 581, Algorithms January 21, 2014 Reading Assignments Today s class: Chapter 4.1, 15.1 Reading assignment for next

More information

Algorithms IV. Dynamic Programming. Guoqiang Li. School of Software, Shanghai Jiao Tong University

Algorithms IV. Dynamic Programming. Guoqiang Li. School of Software, Shanghai Jiao Tong University Algorithms IV Dynamic Programming Guoqiang Li School of Software, Shanghai Jiao Tong University Dynamic Programming Shortest Paths in Dags, Revisited Shortest Paths in Dags, Revisited The special distinguishing

More information

Unit 4: Formal Verification

Unit 4: Formal Verification Course contents Unit 4: Formal Verification Logic synthesis basics Binary-decision diagram (BDD) Verification Logic optimization Technology mapping Readings Chapter 11 Unit 4 1 Logic Synthesis & Verification

More information

Algorithms: COMP3121/3821/9101/9801

Algorithms: COMP3121/3821/9101/9801 NEW SOUTH WALES Algorithms: COMP3121/3821/9101/9801 Aleks Ignjatović School of Computer Science and Engineering University of New South Wales TOPIC 5: DYNAMIC PROGRAMMING COMP3121/3821/9101/9801 1 / 38

More information

Algorithms: Dynamic Programming

Algorithms: Dynamic Programming Algorithms: Dynamic Programming Amotz Bar-Noy CUNY Spring 2012 Amotz Bar-Noy (CUNY) Dynamic Programming Spring 2012 1 / 58 Dynamic Programming General Strategy: Solve recursively the problem top-down based

More information

Unit-5 Dynamic Programming 2016

Unit-5 Dynamic Programming 2016 5 Dynamic programming Overview, Applications - shortest path in graph, matrix multiplication, travelling salesman problem, Fibonacci Series. 20% 12 Origin: Richard Bellman, 1957 Programming referred to

More information

CSE 4/531 Solution 3

CSE 4/531 Solution 3 CSE 4/531 Solution 3 Edited by Le Fang November 7, 2017 Problem 1 M is a given n n matrix and we want to find a longest sequence S from elements of M such that the indexes of elements in M increase and

More information

Efficient Sequential Algorithms, Comp309. Problems. Part 1: Algorithmic Paradigms

Efficient Sequential Algorithms, Comp309. Problems. Part 1: Algorithmic Paradigms Efficient Sequential Algorithms, Comp309 Part 1: Algorithmic Paradigms University of Liverpool References: T. H. Cormen, C. E. Leiserson, R. L. Rivest Introduction to Algorithms, Second Edition. MIT Press

More information

Memoization/Dynamic Programming. The String reconstruction problem. CS124 Lecture 11 Spring 2018

Memoization/Dynamic Programming. The String reconstruction problem. CS124 Lecture 11 Spring 2018 CS124 Lecture 11 Spring 2018 Memoization/Dynamic Programming Today s lecture discusses memoization, which is a method for speeding up algorithms based on recursion, by using additional memory to remember

More information

Data Structure and Algorithm II Homework #2 Due: 13pm, Monday, October 31, === Homework submission instructions ===

Data Structure and Algorithm II Homework #2 Due: 13pm, Monday, October 31, === Homework submission instructions === Data Structure and Algorithm II Homework #2 Due: 13pm, Monday, October 31, 2011 === Homework submission instructions === Submit the answers for writing problems (including your programming report) through

More information

Partha Sarathi Manal

Partha Sarathi Manal MA 515: Introduction to Algorithms & MA353 : Design and Analysis of Algorithms [3-0-0-6] Lecture 29 http://www.iitg.ernet.in/psm/indexing_ma353/y09/index.html Partha Sarathi Manal psm@iitg.ernet.in Dept.

More information

Lecture 22: Dynamic Programming

Lecture 22: Dynamic Programming Lecture 22: Dynamic Programming COSC242: Algorithms and Data Structures Brendan McCane Department of Computer Science, University of Otago Dynamic programming The iterative and memoised algorithms for

More information

DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING QUESTION BANK UNIT-III. SUB NAME: DESIGN AND ANALYSIS OF ALGORITHMS SEM/YEAR: III/ II PART A (2 Marks)

DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING QUESTION BANK UNIT-III. SUB NAME: DESIGN AND ANALYSIS OF ALGORITHMS SEM/YEAR: III/ II PART A (2 Marks) DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING QUESTION BANK UNIT-III SUB CODE: CS2251 DEPT: CSE SUB NAME: DESIGN AND ANALYSIS OF ALGORITHMS SEM/YEAR: III/ II PART A (2 Marks) 1. Write any four examples

More information

Dynamic Programming Homework Problems

Dynamic Programming Homework Problems CS 1510 Dynamic Programming Homework Problems 1. Consider the recurrence relation T(0) = T(1) = 2 and for n > 1 n 1 T(n) = T(i)T(i 1) i=1 We consider the problem of computing T(n) from n. (a) Show that

More information

1 i n (p i + r n i ) (Note that by allowing i to be n, we handle the case where the rod is not cut at all.)

1 i n (p i + r n i ) (Note that by allowing i to be n, we handle the case where the rod is not cut at all.) Dynamic programming is a problem solving method that is applicable to many different types of problems. I think it is best learned by example, so we will mostly do examples today. 1 Rod cutting Suppose

More information

Dynamic Programming. Outline and Reading. Computing Fibonacci

Dynamic Programming. Outline and Reading. Computing Fibonacci Dynamic Programming Dynamic Programming version 1.2 1 Outline and Reading Matrix Chain-Product ( 5.3.1) The General Technique ( 5.3.2) -1 Knapsac Problem ( 5.3.3) Dynamic Programming version 1.2 2 Computing

More information

Solving NP-hard Problems on Special Instances

Solving NP-hard Problems on Special Instances Solving NP-hard Problems on Special Instances Solve it in poly- time I can t You can assume the input is xxxxx No Problem, here is a poly-time algorithm 1 Solving NP-hard Problems on Special Instances

More information

Dynamic Programming. December 15, CMPE 250 Dynamic Programming December 15, / 60

Dynamic Programming. December 15, CMPE 250 Dynamic Programming December 15, / 60 Dynamic Programming December 15, 2016 CMPE 250 Dynamic Programming December 15, 2016 1 / 60 Why Dynamic Programming Often recursive algorithms solve fairly difficult problems efficiently BUT in other cases

More information

Introduction VLSI PHYSICAL DESIGN AUTOMATION

Introduction VLSI PHYSICAL DESIGN AUTOMATION VLSI PHYSICAL DESIGN AUTOMATION PROF. INDRANIL SENGUPTA DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING Introduction Main steps in VLSI physical design 1. Partitioning and Floorplanning l 2. Placement 3.

More information

CMPS 2200 Fall Dynamic Programming. Carola Wenk. Slides courtesy of Charles Leiserson with changes and additions by Carola Wenk

CMPS 2200 Fall Dynamic Programming. Carola Wenk. Slides courtesy of Charles Leiserson with changes and additions by Carola Wenk CMPS 00 Fall 04 Dynamic Programming Carola Wenk Slides courtesy of Charles Leiserson with changes and additions by Carola Wenk 9/30/4 CMPS 00 Intro. to Algorithms Dynamic programming Algorithm design technique

More information

CSC 505, Spring 2005 Week 6 Lectures page 1 of 9

CSC 505, Spring 2005 Week 6 Lectures page 1 of 9 CSC 505, Spring 2005 Week 6 Lectures page 1 of 9 Objectives: learn general strategies for problems about order statistics learn how to find the median (or k-th largest) in linear average-case number of

More information

Dynamic Programming Assembly-Line Scheduling. Greedy Algorithms

Dynamic Programming Assembly-Line Scheduling. Greedy Algorithms Chapter 13 Greedy Algorithms Activity Selection Problem 0-1 Knapsack Problem Huffman Code Construction Dynamic Programming Assembly-Line Scheduling C-C Tsai P.1 Greedy Algorithms A greedy algorithm always

More information

Algorithms for Data Science

Algorithms for Data Science Algorithms for Data Science CSOR W4246 Eleni Drinea Computer Science Department Columbia University Thursday, October 1, 2015 Outline 1 Recap 2 Shortest paths in graphs with non-negative edge weights (Dijkstra

More information

Analysis of Algorithms. Unit 4 - Analysis of well known Algorithms

Analysis of Algorithms. Unit 4 - Analysis of well known Algorithms Analysis of Algorithms Unit 4 - Analysis of well known Algorithms 1 Analysis of well known Algorithms Brute Force Algorithms Greedy Algorithms Divide and Conquer Algorithms Decrease and Conquer Algorithms

More information

Dynamic Programming Homework Problems

Dynamic Programming Homework Problems CS 1510 Dynamic Programming Homework Problems 1. (2 points) Consider the recurrence relation T (0) = T (1) = 2 and for n > 1 n 1 T (n) = T (i)t (i 1) i=1 We consider the problem of computing T (n) from

More information

CMSC 451: Dynamic Programming

CMSC 451: Dynamic Programming CMSC 41: Dynamic Programming Slides By: Carl Kingsford Department of Computer Science University of Maryland, College Park Based on Sections 6.1&6.2 of Algorithm Design by Kleinberg & Tardos. Dynamic Programming

More information

Introduction to Algorithms

Introduction to Algorithms Thomas H. Cormen Charles E. Leiserson Ronald L. Rivest Clifford Stein Introduction to Algorithms Second Edition The MIT Press Cambridge, Massachusetts London, England McGraw-Hill Book Company Boston Burr

More information

Divide-and-Conquer. The most-well known algorithm design strategy: smaller instances. combining these solutions

Divide-and-Conquer. The most-well known algorithm design strategy: smaller instances. combining these solutions Divide-and-Conquer The most-well known algorithm design strategy: 1. Divide instance of problem into two or more smaller instances 2. Solve smaller instances recursively 3. Obtain solution to original

More information

S. Dasgupta, C.H. Papadimitriou, and U.V. Vazirani 165

S. Dasgupta, C.H. Papadimitriou, and U.V. Vazirani 165 S. Dasgupta, C.H. Papadimitriou, and U.V. Vazirani 165 5.22. You are given a graph G = (V, E) with positive edge weights, and a minimum spanning tree T = (V, E ) with respect to these weights; you may

More information

EE/CSCI 451 Spring 2018 Homework 8 Total Points: [10 points] Explain the following terms: EREW PRAM CRCW PRAM. Brent s Theorem.

EE/CSCI 451 Spring 2018 Homework 8 Total Points: [10 points] Explain the following terms: EREW PRAM CRCW PRAM. Brent s Theorem. EE/CSCI 451 Spring 2018 Homework 8 Total Points: 100 1 [10 points] Explain the following terms: EREW PRAM CRCW PRAM Brent s Theorem BSP model 1 2 [15 points] Assume two sorted sequences of size n can be

More information

Greedy Algorithms. CLRS Chapters Introduction to greedy algorithms. Design of data-compression (Huffman) codes

Greedy Algorithms. CLRS Chapters Introduction to greedy algorithms. Design of data-compression (Huffman) codes Greedy Algorithms CLRS Chapters 16.1 16.3 Introduction to greedy algorithms Activity-selection problem Design of data-compression (Huffman) codes (Minimum spanning tree problem) (Shortest-path problem)

More information

CSL 860: Modern Parallel

CSL 860: Modern Parallel CSL 860: Modern Parallel Computation PARALLEL ALGORITHM TECHNIQUES: BALANCED BINARY TREE Reduction n operands => log n steps Total work = O(n) How do you map? Balance Binary tree technique Reduction n

More information

Dynamic programming 4/9/18

Dynamic programming 4/9/18 Dynamic programming 4/9/18 Administrivia HW 3 due Wednesday night Exam out Thursday, due next week Multi-day takehome, open book, closed web, written problems Induction, AVL trees, recurrences, D&C, multithreaded

More information

EE/CSCI 451 Midterm 1

EE/CSCI 451 Midterm 1 EE/CSCI 451 Midterm 1 Spring 2018 Instructor: Xuehai Qian Friday: 02/26/2018 Problem # Topic Points Score 1 Definitions 20 2 Memory System Performance 10 3 Cache Performance 10 4 Shared Memory Programming

More information

DESIGN AND ANALYSIS OF ALGORITHMS (DAA 2017)

DESIGN AND ANALYSIS OF ALGORITHMS (DAA 2017) DESIGN AND ANALYSIS OF ALGORITHMS (DAA 2017) Veli Mäkinen Design and Analysis of Algorithms 2017 week 4 11.8.2017 1 Dynamic Programming Week 4 2 Design and Analysis of Algorithms 2017 week 4 11.8.2017

More information

Divide and Conquer. Algorithm Fall Semester

Divide and Conquer. Algorithm Fall Semester Divide and Conquer Algorithm 2014 Fall Semester Divide-and-Conquer The most-well known algorithm design strategy: 1. Divide instance of problem into two or more smaller instances 2. Solve smaller instances

More information

Introduction to Algorithms

Introduction to Algorithms Introduction to Algorithms 6.046J/18.401J LECTURE 12 Dynamic programming Longest common subsequence Optimal substructure Overlapping subproblems Prof. Charles E. Leiserson Dynamic programming Design technique,

More information

Dynamic Programmming: Activity Selection

Dynamic Programmming: Activity Selection Dynamic Programmming: Activity Selection Select the maximum number of non-overlapping activities from a set of n activities A = {a 1,, a n } (sorted by finish times). Identify easier subproblems to solve.

More information

CS Algorithms and Complexity

CS Algorithms and Complexity CS 350 - Algorithms and Complexity Dynamic Programming Sean Anderson 2/20/18 Portland State University Table of contents 1. Homework 3 Solutions 2. Dynamic Programming 3. Problem of the Day 4. Application

More information

Brute Force: Selection Sort

Brute Force: Selection Sort Brute Force: Intro Brute force means straightforward approach Usually based directly on problem s specs Force refers to computational power Usually not as efficient as elegant solutions Advantages: Applicable

More information

Computer Science 385 Analysis of Algorithms Siena College Spring Topic Notes: Divide and Conquer

Computer Science 385 Analysis of Algorithms Siena College Spring Topic Notes: Divide and Conquer Computer Science 385 Analysis of Algorithms Siena College Spring 2011 Topic Notes: Divide and Conquer Divide and-conquer is a very common and very powerful algorithm design technique. The general idea:

More information

L.J. Institute of Engineering & Technology Semester: VIII (2016)

L.J. Institute of Engineering & Technology Semester: VIII (2016) Subject Name: Design & Analysis of Algorithm Subject Code:1810 Faculties: Mitesh Thakkar Sr. UNIT-1 Basics of Algorithms and Mathematics No 1 What is an algorithm? What do you mean by correct algorithm?

More information

IN101: Algorithmic techniques Vladimir-Alexandru Paun ENSTA ParisTech

IN101: Algorithmic techniques Vladimir-Alexandru Paun ENSTA ParisTech IN101: Algorithmic techniques Vladimir-Alexandru Paun ENSTA ParisTech License CC BY-NC-SA 2.0 http://creativecommons.org/licenses/by-nc-sa/2.0/fr/ Outline Previously on IN101 Python s anatomy Functions,

More information

O(n): printing a list of n items to the screen, looking at each item once.

O(n): printing a list of n items to the screen, looking at each item once. UNIT IV Sorting: O notation efficiency of sorting bubble sort quick sort selection sort heap sort insertion sort shell sort merge sort radix sort. O NOTATION BIG OH (O) NOTATION Big oh : the function f(n)=o(g(n))

More information

1 Dynamic Programming

1 Dynamic Programming Recitation 13 Dynamic Programming Parallel and Sequential Data Structures and Algorithms, 15-210 (Spring 2013) April 17, 2013 1 Dynamic Programming Dynamic programming is a technique to avoid needless

More information