Unit-5 Dynamic Programming 2016


5. Dynamic Programming: Overview; Applications - shortest path in a graph, matrix multiplication, travelling salesman problem, Fibonacci series. (20%, 12)

Origin: Richard Bellman, 1957. "Programming" referred to a series of choices; "dynamic" because the choices are made on the fly, not at the beginning.

Dynamic programming (usually referred to as DP) is a very powerful technique for solving a particular class of problems. The idea is simple: once you have solved a problem for a given input, save the result for future reference, so as to avoid solving the same subproblem again. If the given problem can be broken up into smaller subproblems, and these smaller subproblems can in turn be divided into still smaller ones, and in this process you observe some overlapping subproblems, then dynamic programming can be applied. In addition, the optimal solutions to the subproblems must contribute to the optimal solution of the given problem; this is referred to as the Optimal Substructure Property.

Dynamic programming is thus a method for solving complex problems by first breaking them into subproblems. It can be used when a given problem splits into overlapping subproblems and exhibits optimal substructure, and it can solve many problems in O(n^2) or O(n^3) time for which a naive approach would take exponential time. Dynamic programming solves problems recursively and is applicable when the computations of the subproblems overlap. It is typically implemented using tabulation, but can also be implemented using memoization. There are two ways of doing this:

1.) Top-Down: Start solving the given problem by breaking it down. If a subproblem has already been solved, just return the saved answer; if it has not been solved, solve it and save the answer. This is usually easy to think of and very intuitive, and is referred to as Memoization. With memoization you maintain a map of already-solved subproblems; you work "top down" in the sense that you attack the "top" problem first, which typically recurses down to solve the subproblems.

2.) Bottom-Up: Analyze the order in which the subproblems are solved and start solving from the trivial subproblem, working up towards the given problem.

In this process, it is guaranteed that the subproblems are solved before the problem that depends on them. This is referred to as Tabulation. When you solve a dynamic programming problem using tabulation you solve the problem "bottom up", i.e., by solving all related subproblems first, typically by filling up an n-dimensional table. Based on the results in the table, the solution to the "top" / original problem is then computed.

Principle of Optimality (Optimal Substructure Property)

A problem is said to have optimal substructure if an optimal solution can be constructed efficiently from optimal solutions of its subproblems. This property is used to determine whether dynamic programming and greedy algorithms are applicable to a problem. Suppose that in solving a problem we have to make a sequence of decisions D1, D2, ..., Dn. If this sequence is optimal, then the last k decisions, 1 ≤ k ≤ n, must also be optimal.

Example (the shortest path problem): if i, i1, i2, ..., j is a shortest path from i to j, then i1, i2, ..., j must be a shortest path from i1 to j.

Difference Between Dynamic Programming and Divide and Conquer

Dynamic programming is similar to the divide-and-conquer approach in that the solution of a large problem depends on previously obtained solutions to easier subproblems. The significant difference, however, is that dynamic programming permits subproblems to overlap. By overlap we mean that the same subproblem can be used in the solutions of two different larger subproblems. In contrast, the divide-and-conquer approach creates subproblems that are completely separate and can be solved independently. Picture the problem to be solved as the root of a tree whose children are easier subproblems; the leaves of the tree are trivial subproblems that can be solved directly. In DP, these leaves are often the input to the algorithm. The primary difference between divide and conquer and DP is clear: subproblems in divide and conquer do not interact, while in DP they might.

To restate: the subproblems of the divide-and-conquer approach are independent, while in dynamic programming they interact. Also, dynamic programming solves problems in a bottom-up manner, as opposed to divide-and-conquer's recursive, top-down approach.

Examples: the following computational problems can be solved using the dynamic programming approach:

Fibonacci number series
Knapsack problem
Tower of Hanoi
All-pairs shortest paths by Floyd-Warshall
Shortest paths by Dijkstra
Project scheduling

Dynamic programming can be used in both a top-down and a bottom-up manner. Most of the time, referring to a previously computed solution is cheaper than re-computing it in terms of CPU cycles, which reduces the time complexity.

Application 1: Fibonacci Series

Consider the Fibonacci series: 0, 1, 1, 2, 3, 5, 8, 13, 21, ...

F(0) = 0; F(1) = 1; F(N) = F(N-1) + F(N-2)

Pseudo-code for a simple recursive function:

    int fib(int n) {
        if (n == 0) return 0;
        if (n == 1) return 1;
        return fib(n-1) + fib(n-2);
    }

Using the above function, if we want to compute fib(6), we end up making a tree of calls which computes the function on the same value several times:

    fib(6) = fib(5) + fib(4)
           = ( fib(4) + fib(3) ) + ( fib(3) + fib(2) )
           = ( (fib(3) + fib(2)) + (fib(2) + fib(1)) ) + ( (fib(2) + fib(1)) + (fib(1) + fib(0)) )
           = ( ((fib(2) + fib(1)) + (fib(1) + fib(0))) + ((fib(1) + fib(0)) + fib(1)) )
             + ( ((fib(1) + fib(0)) + fib(1)) + (fib(1) + fib(0)) )
           = ( (((fib(1) + fib(0)) + fib(1)) + (fib(1) + fib(0))) + ((fib(1) + fib(0)) + fib(1)) )
             + ( ((fib(1) + fib(0)) + fib(1)) + (fib(1) + fib(0)) )

As you can see:
fib(5) has been computed once.
fib(4) has been computed 2 times.
fib(3) has been computed 3 times.
fib(2) has been computed 5 times.

Example: Write a function int fib(int n) that returns F(n). For example, if n = 0, then fib() should return 0. If n = 1, it should return 1. For n > 1, it should return F(n-1) + F(n-2).

Method 1 (Use recursion)

A simple method that is a direct recursive implementation of the mathematical recurrence relation given above:

    #include <stdio.h>

    int fib(int n) {
        if (n <= 1)
            return n;
        return fib(n-1) + fib(n-2);
    }

    int main() {
        int n = 9;
        printf("%d", fib(n));
        getchar();
        return 0;
    }

Time Complexity: T(n) = T(n-1) + T(n-2) + O(1), which is exponential.
Extra Space: O(n) if we consider the function call stack size, otherwise O(1).

The recursion tree for fib(5):

                              fib(5)
                           /          \
                     fib(4)            fib(3)
                    /      \          /      \
               fib(3)     fib(2)   fib(2)   fib(1)
              /     \     /    \   /    \
         fib(2) fib(1) fib(1) fib(0) fib(1) fib(0)
         /    \
     fib(1) fib(0)

Method 2 (Use Dynamic Programming)

We can avoid the repeated work done in Method 1 by storing the Fibonacci numbers calculated so far:

    #include <stdio.h>

    int fib(int n) {
        /* Declare an array to store Fibonacci numbers. */
        int f[n+1];
        int i;
        /* 0th and 1st numbers of the series are 0 and 1. */
        f[0] = 0;
        f[1] = 1;
        for (i = 2; i <= n; i++) {
            /* Add the previous 2 numbers in the series and store the result. */
            f[i] = f[i-1] + f[i-2];
        }
        return f[n];
    }

    int main() {
        int n = 9;
        printf("%d", fib(n));
        getchar();
        return 0;
    }

Time Complexity: O(n)
Extra Space: O(n)
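The notes above describe memoization as the top-down alternative, but only the tabulated version is shown. For completeness, here is a minimal top-down sketch in the same C style; the MAXN bound, the memo[] array and the use of -1 as a "not yet computed" marker are illustrative assumptions, not part of the original notes.

    #include <stdio.h>
    #include <string.h>

    #define MAXN 1000                 /* illustrative upper bound on n (assumption) */

    int memo[MAXN + 1];

    /* Top-down DP: recurse exactly as in Method 1, but cache each result
       so that every subproblem is solved at most once. */
    int fib(int n) {
        if (n <= 1)
            return n;
        if (memo[n] != -1)            /* already solved: return the saved answer */
            return memo[n];
        memo[n] = fib(n-1) + fib(n-2);
        return memo[n];
    }

    int main() {
        memset(memo, -1, sizeof(memo));   /* -1 marks "not yet computed" */
        printf("%d", fib(9));             /* prints 34, same as Method 2 with n = 9 */
        return 0;
    }

Like the tabulated version, this runs in O(n) time with O(n) extra space; the difference is that the order in which subproblems are solved is decided by the recursion rather than by an explicit loop.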

Travelling Salesman Problem (TSP)

Given a set of cities and the distance between every pair of cities, the problem is to find the shortest possible route that visits every city exactly once and returns to the starting point. The TSP can be described as: find a tour of N cities in a country (assuming all cities to be visited are reachable); the tour should (a) visit every city just once, (b) return to the starting point and (c) be of minimum distance.

Algorithm:

Number the cities 1, 2, ..., N, assume we start at city 1, and let the distance between city i and city j be d(i,j). Consider subsets S ⊆ {2, ..., N} of cities and, for c ∈ S, let D(S, c) be the minimum distance, starting at city 1, visiting all cities in S and finishing at city c.

First phase: if S = {c}, then D(S, c) = d(1,c). Otherwise:

    D(S, c) = min over x ∈ S \ {c} of ( D(S \ {c}, x) + d(x,c) )

Second phase: the minimum distance for a complete tour of all cities is

    M = min over c ∈ {2, ..., N} of ( D({2, ..., N}, c) + d(c,1) )

A tour n1, ..., nN is of minimum distance exactly when it satisfies M = D({2, ..., N}, nN) + d(nN,1).

Pseudocode:

    function TSP(G, n) {
        for k := 2 to n do
            C({1, k}, k) := d(1,k)
        end for
        for s := 3 to n do                                    // subset size
            for all S ⊆ {1, 2, ..., n} with 1 ∈ S and |S| = s do
                for all k ∈ S, k ≠ 1 do
                    C(S, k) := min over m ∈ S, m ≠ 1, m ≠ k of ( C(S \ {k}, m) + d(m,k) )
                end for
            end for
        end for
        opt := min over k ≠ 1 of ( C({1, 2, 3, ..., n}, k) + d(k,1) )
        return opt
    }

Example. Distance matrix (rows give the from-vertex, columns the to-vertex):

            to 1   to 2   to 3   to 4
    from 1    -      1     15      6
    from 2    2      -      7      3
    from 3    9      6      -     12
    from 4   10      4      8      -

Function descriptions:
g(x, S) - minimum cost of a path that starts at 1, ends at vertex x, and passes through every vertex of the set S exactly once
c(x,y) - cost of the edge that ends at x coming from y
p(x, S) - the second-to-last vertex on the optimal path to x through set S; used for constructing the TSP tour at the end

k = 0, the empty set:
g(2, ∅) = c(2,1) = 1
g(3, ∅) = c(3,1) = 15
g(4, ∅) = c(4,1) = 6

k = 1, consider sets of 1 element:

Set {2}:
g(3, {2}) = c(3,2) + g(2, ∅) = c(3,2) + c(2,1) = 7 + 1 = 8        p(3, {2}) = 2
g(4, {2}) = c(4,2) + g(2, ∅) = c(4,2) + c(2,1) = 3 + 1 = 4        p(4, {2}) = 2

Set {3}:
g(2, {3}) = c(2,3) + g(3, ∅) = c(2,3) + c(3,1) = 6 + 15 = 21      p(2, {3}) = 3
g(4, {3}) = c(4,3) + g(3, ∅) = c(4,3) + c(3,1) = 12 + 15 = 27     p(4, {3}) = 3

Set {4}:
g(2, {4}) = c(2,4) + g(4, ∅) = c(2,4) + c(4,1) = 4 + 6 = 10       p(2, {4}) = 4
g(3, {4}) = c(3,4) + g(4, ∅) = c(3,4) + c(4,1) = 8 + 6 = 14       p(3, {4}) = 4

k = 2, consider sets of 2 elements:

Set {2,3}:
g(4, {2,3}) = min { c(4,2) + g(2,{3}), c(4,3) + g(3,{2}) } = min {3+21, 12+8} = min {24, 20} = 20
p(4, {2,3}) = 3

Set {2,4}:

g(3, {2,4}) = min { c(3,2) + g(2,{4}), c(3,4) + g(4,{2}) } = min {7+10, 8+4} = min {17, 12} = 12
p(3, {2,4}) = 4

Set {3,4}:
g(2, {3,4}) = min { c(2,3) + g(3,{4}), c(2,4) + g(4,{3}) } = min {6+14, 4+27} = min {20, 31} = 20
p(2, {3,4}) = 3

Length of an optimal tour:

f = g(1, {2,3,4}) = min { c(1,2) + g(2,{3,4}), c(1,3) + g(3,{2,4}), c(1,4) + g(4,{2,3}) }
                  = min {2+20, 9+12, 10+20} = min {22, 21, 30} = 21

The second-to-last vertex before returning to node 1: p(1, {2,3,4}) = 3
The vertex before node 3: p(3, {2,4}) = 4
The vertex before node 4: p(4, {2}) = 2

Reading this chain forward, the optimal TSP tour is: 1 → 2 → 4 → 3 → 1, with cost 1 + 3 + 8 + 9 = 21.

The worst-case time complexity of this algorithm is O(2^n n^2) and the space complexity is O(2^n n).
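The two tables g and p above translate directly into a bitmask dynamic program. The following C sketch, written for these notes (the 0-based renumbering of the cities and the names g, p and tour are assumptions, not part of the original), runs on the distance matrix of the example and prints the optimal cost 21 and the tour 1 -> 2 -> 4 -> 3 -> 1 derived above.

    #include <stdio.h>

    #define N 4
    #define INF 1000000000

    /* d[i][j] = cost of travelling from city i+1 to city j+1 (matrix above) */
    int d[N][N] = {
        { 0,  1, 15,  6},
        { 2,  0,  7,  3},
        { 9,  6,  0, 12},
        {10,  4,  8,  0}
    };

    int g[1 << N][N];  /* g[S][x] = min cost: start at city 0, visit set S, end at x */
    int p[1 << N][N];  /* p[S][x] = second-to-last vertex on that optimal path       */

    int main(void) {
        int S, x, m, best = INF, last = -1;
        int tour[N + 1], len = 0, mask;
        for (S = 0; S < (1 << N); S++)
            for (x = 0; x < N; x++)
                g[S][x] = INF;
        for (x = 1; x < N; x++) {                 /* base case: direct edge 0 -> x */
            g[1 << x][x] = d[0][x];
            p[1 << x][x] = 0;
        }
        for (S = 1; S < (1 << N); S++) {          /* grow subsets of {1, ..., N-1} */
            if (S & 1) continue;                  /* the start city is never in S  */
            for (x = 1; x < N; x++) {
                if (!(S & (1 << x)) || g[S][x] == INF) continue;
                for (m = 1; m < N; m++) {         /* extend the path from x to m   */
                    if (S & (1 << m)) continue;
                    if (g[S][x] + d[x][m] < g[S | (1 << m)][m]) {
                        g[S | (1 << m)][m] = g[S][x] + d[x][m];
                        p[S | (1 << m)][m] = x;
                    }
                }
            }
        }
        mask = (1 << N) - 2;                      /* all cities except the start   */
        for (x = 1; x < N; x++)                   /* close the tour back to city 0 */
            if (g[mask][x] + d[x][0] < best) {
                best = g[mask][x] + d[x][0];
                last = x;
            }
        tour[len++] = 0;                          /* walk p[][] back to the start  */
        for (x = last; x != 0; ) {
            int prev = p[mask][x];
            tour[len++] = x;
            mask ^= 1 << x;
            x = prev;
        }
        printf("optimal tour cost = %d\ntour: 1", best);
        while (len > 0)
            printf(" -> %d", tour[--len] + 1);
        printf("\n");
        return 0;
    }

Because every transition goes from a set S to a strictly larger set S ∪ {m}, iterating the masks in increasing numeric order guarantees each g[S][x] is final before it is extended, exactly as in the k = 0, 1, 2, ... phases of the worked example.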

Matrix Chain Multiplication

Problem: given a sequence of matrices, find the most efficient way to multiply these matrices together. The problem is not actually to perform the multiplications, but merely to decide in which order to perform them.

We have many options for multiplying a chain of matrices because matrix multiplication is associative. In other words, no matter how we parenthesize the product, the result will be the same. For example, if we had four matrices A, B, C and D, we would have:

    (ABC)D = (AB)(CD) = A(BCD) = ...

However, the order in which we parenthesize the product affects the number of simple arithmetic operations needed to compute the product, i.e., the efficiency. For example, suppose A is a 10 × 30 matrix, B is a 30 × 5 matrix, and C is a 5 × 60 matrix. Then:

    (AB)C = (10 × 30 × 5) + (10 × 5 × 60) = 1500 + 3000 = 4500 operations
    A(BC) = (30 × 5 × 60) + (10 × 30 × 60) = 9000 + 18000 = 27000 operations

Clearly the first parenthesization requires fewer operations.

Given an array p[] which represents the chain of matrices such that the i-th matrix Ai is of dimension p[i-1] x p[i], we need to write a function MatrixChainOrder() that returns the minimum number of multiplications needed to multiply the chain.

Example:

Input: p[] = {40, 20, 30, 10, 30}
Output: 26000
There are 4 matrices of dimensions 40x20, 20x30, 30x10 and 10x30. Let the input 4 matrices be A, B, C and D. The minimum number of multiplications is obtained by parenthesizing as (A(BC))D:
20*30*10 + 40*20*10 + 40*10*30 = 6000 + 8000 + 12000 = 26000

Input: p[] = {10, 20, 30, 40, 30}
Output: 30000
There are 4 matrices of dimensions 10x20, 20x30, 30x40 and 40x30. Let the input 4 matrices be A, B, C and D. The minimum number of multiplications is obtained by parenthesizing as ((AB)C)D:
10*20*30 + 10*30*40 + 10*40*30 = 6000 + 12000 + 12000 = 30000

Input: p[] = {10, 20, 30}
Output: 6000
There are only two matrices, of dimensions 10x20 and 20x30, so there is only one way to multiply them, at a cost of 10*20*30 = 6000.

Dynamic Programming Approach

Step 1: The first step of the dynamic programming paradigm is to characterize the structure of an optimal solution. For the chain matrix problem, as for other dynamic programming problems, this involves determining the optimal substructure (in this case, a parenthesization). We would like to break the problem into subproblems whose solutions can be combined to obtain a solution to the global problem.

For convenience, let us adopt the notation A[i..j], where i ≤ j, for the result of evaluating the product Ai Ai+1 ... Aj. That is,

    A[i..j] = Ai Ai+1 ... Aj, where i ≤ j.

It is easy to see that A[i..j] is a matrix of dimensions p[i-1] × p[j].

In parenthesizing the expression, we can consider the highest level of parenthesization. At this level we are simply multiplying two matrices together. That is, for some k, 1 ≤ k ≤ n-1,

    A[1..n] = A[1..k] A[k+1..n].

Therefore, the problem of determining the optimal sequence of multiplications is broken up into two questions:

Question 1: How do we decide where to split the chain? (What is k?)
Question 2: How do we parenthesize the subchains A[1..k] and A[k+1..n]?

The subchain problems can be solved by recursively applying the same scheme. To determine the best value of k, on the other hand, we consider all possible values of k and pick the best of them. Notice that this problem satisfies the principle of optimality, because once we decide to break the sequence into the product A[1..k] A[k+1..n], we should compute each subsequence optimally; that is, for the global problem to be solved optimally, the subproblems must be solved optimally as well. The key observation is that the parenthesization of the "prefix" subchain A[1..k] within the optimal parenthesization of A[1..n] must itself be an optimal parenthesization of A[1..k].

Step 2: The second step of the dynamic programming paradigm is to define the value of an optimal solution recursively in terms of the optimal solutions to subproblems. To help us keep track of solutions to subproblems, we will use a table, and build the table in a bottom-up manner. For 1 ≤ i ≤ j ≤ n, let m[i, j] be the minimum number of scalar multiplications needed to compute A[i..j]. The optimum cost can be described by the following recursive formulation.

Basis: if i = j, the problem is trivial; the chain contains only one matrix, so the cost is 0 (there is nothing to multiply). Thus m[i, i] = 0 for i = 1, 2, ..., n.

Step: if i < j, we are asking about the product of the subchain A[i..j], and we take advantage of the structure of an optimal solution: the optimal parenthesization splits the product A[i..j], for some value of k with i ≤ k < j, as A[i..k] A[k+1..j].

The optimum cost of computing A[i..k] is m[i, k], and the optimum cost of computing A[k+1..j] is m[k+1, j]; we may assume that these values have been computed previously and stored in our array. Since A[i..k] is a p[i-1] × p[k] matrix and A[k+1..j] is a p[k] × p[j] matrix, the time to multiply them is p[i-1] * p[k] * p[j]. This suggests the following recursive rule for computing m[i, j]:

    m[i, i] = 0
    m[i, j] = min over i ≤ k < j of ( m[i, k] + m[k+1, j] + p[i-1] * p[k] * p[j] )   for i < j

To keep track of optimal subsolutions, we store the value of k in a table s[i, j]. Recall, k is the place at which we split the product A[i..j] to get an optimal parenthesization. That is, s[i, j] = k such that m[i, j] = m[i, k] + m[k+1, j] + p[i-1] * p[k] * p[j].

Step 3: The third step of the dynamic programming paradigm is to compute the value of an optimal solution in a bottom-up fashion. It is fairly straightforward to translate the above recurrence into a procedure. As remarked in the introduction, dynamic programming is essentially a fancy name for divide-and-conquer with a table; but here, as opposed to divide-and-conquer, we solve the subproblems sequentially. The trick is to solve them in the right order, so that whenever the solution to a subproblem is needed, it is already available in the table. Consequently, the only tricky part of our problem is arranging the order in which to compute the values.

In the process of computing m[i, j] we need to access the values m[i, k] and m[k+1, j] for each value of k lying between i and j. This suggests that we should organize the computation according to the number of matrices in the subchain. Let L = j - i + 1 denote the length of the subchain being multiplied. The subchains of length 1 (m[i, i]) are trivial. We then build up by computing the subchains of length 2, 3, ..., n. The final answer is m[1, n].

Now set up the loop: observe that if a subchain of length L starts at position i, then j = i + L - 1. Since we want j to stay in bounds, we need j ≤ n, which means i + L - 1 ≤ n, i.e., i ≤ n - L + 1. This gives us the closed interval for i: the loop for i runs from 1 to n - L + 1.

    Matrix-Chain(array p[0..n], int n) {
        array s[1..n-1, 2..n];
        for i = 1 to n do m[i, i] = 0;               // initialize
        for L = 2 to n do {                          // L = length of subchain
            for i = 1 to n - L + 1 do {
                j = i + L - 1;
                m[i, j] = infinity;
                for k = i to j - 1 do {              // check all splits
                    q = m[i, k] + m[k+1, j] + p[i-1] * p[k] * p[j];
                    if (q < m[i, j]) {
                        m[i, j] = q;
                        s[i, j] = k;
                    }
                }
            }
        }
        return m[1, n] (final cost) and s (splitting markers);
    }
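The pseudocode translates almost line for line into C. The sketch below was written for these notes (the MAXN cap and the global m and s tables are assumptions, not part of the original); run on the first example array p[] = {40, 20, 30, 10, 30}, it prints 26000, matching the value computed earlier.

    #include <stdio.h>
    #include <limits.h>

    #define MAXN 20   /* illustrative cap on the number of matrices (assumption) */

    int m[MAXN][MAXN];   /* m[i][j] = min scalar multiplications for A[i..j] */
    int s[MAXN][MAXN];   /* s[i][j] = optimal split point k for A[i..j]      */

    /* p has n+1 entries; matrix Ai is p[i-1] x p[i], for i = 1..n */
    int MatrixChainOrder(int p[], int n) {
        int i, j, k, L, q;
        for (i = 1; i <= n; i++)
            m[i][i] = 0;                      /* chains of length 1 cost nothing */
        for (L = 2; L <= n; L++) {            /* L = length of the subchain      */
            for (i = 1; i <= n - L + 1; i++) {
                j = i + L - 1;
                m[i][j] = INT_MAX;
                for (k = i; k < j; k++) {     /* try every split point           */
                    q = m[i][k] + m[k+1][j] + p[i-1] * p[k] * p[j];
                    if (q < m[i][j]) {
                        m[i][j] = q;
                        s[i][j] = k;
                    }
                }
            }
        }
        return m[1][n];
    }

    int main(void) {
        int p[] = {40, 20, 30, 10, 30};
        int n = sizeof(p) / sizeof(p[0]) - 1;     /* 4 matrices */
        printf("Minimum number of multiplications: %d\n", MatrixChainOrder(p, n));
        /* prints 26000 */
        return 0;
    }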

Example: for n = 6 matrices A1, A2, A3, A4, A5, A6 with dimension sequence 30, 35, 15, 5, 10, 20, 25, the MatrixChain procedure computes the m-table (in the original figure the m-table is rotated so that the main diagonal runs horizontally; only the main diagonal and upper triangle are used).

Complexity Analysis

Clearly, the space complexity of this procedure is O(n^2), since the tables m and s each require O(n^2) space. As far as the time complexity is concerned, a simple inspection of the for-loop structure gives the running time: the three for-loops are nested three deep, and each of them iterates at most n times (the indices L, i and k each take on at most n - 1 values). Therefore, the running time of this procedure is O(n^3).

Extracting the Optimum Sequence

This is Step 4 of the dynamic programming paradigm, in which we construct an optimal solution from the computed information. The array s[i, j] can be used to extract the actual sequence. The basic idea is to keep a split marker in s[i, j] that indicates the best split, that is, the value of k that leads to the minimum value of m[i, j]. s[i, j] = k tells us that the best way to multiply the subchain A[i..j] is to first multiply the subchain A[i..k], then multiply the subchain A[k+1..j], and finally multiply these two results together. Intuitively, s[i, j] tells us what multiplication to perform last. Note that we only need s[i, j] when there are at least two matrices, that is, when j > i.

The actual multiplication algorithm uses the s[i, j] values to determine how to split the current sequence. Assume that the matrices are stored in an array of matrices A[1..n] and that s[i, j] is global to this recursive procedure. The procedure returns a matrix:

    Mult(i, j) {
        if (i == j)                  // basis: a single matrix
            return A[i];
        else {
            k = s[i, j];
            X = Mult(i, k);          // X = A[i] ... A[k]
            Y = Mult(k+1, j);        // Y = A[k+1] ... A[j]
            return X * Y;            // multiply matrices X and Y
        }
    }
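To see Step 4 end to end without actual matrix objects, a printing variant of Mult is enough. The sketch below is a hedged companion to the MatrixChainOrder code earlier (it assumes that code's global s[][] table has been filled); on the six-matrix instance p = {30, 35, 15, 5, 10, 20, 25} quoted above, MatrixChainOrder returns 15125 and this printer produces the parenthesization named in the next paragraph.

    /* Companion to MatrixChainOrder() above: prints the optimal
       parenthesization recorded in the global split table s[][]. */
    void printParens(int i, int j) {
        if (i == j) {
            printf("A%d", i);             /* basis: a single matrix   */
            return;
        }
        printf("(");
        printParens(i, s[i][j]);          /* left subchain  A[i..k]   */
        printParens(s[i][j] + 1, j);      /* right subchain A[k+1..j] */
        printf(")");
    }

    /* Example usage:
           int p[] = {30, 35, 15, 5, 10, 20, 25};
           MatrixChainOrder(p, 6);   -- fills m[][] and s[][], returns 15125
           printParens(1, 6);        -- prints ((A1(A2A3))((A4A5)A6))
    */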

Again, we rotate the s-table so that the main diagonal runs horizontally, but in this table we use only the upper triangle (and not the main diagonal). In the example, the procedure computes the chain matrix product according to the parenthesization ((A1(A2A3))((A4A5)A6)).

Shortest Path

Given a graph and a source vertex src in the graph, find the shortest paths from src to all vertices in the given graph. The graph may contain negative weight edges. Bellman-Ford is simpler than Dijkstra and suits distributed systems well, but its time complexity, O(VE), is higher than Dijkstra's.

Algorithm. The detailed steps follow.
Input: a graph and a source vertex src.
Output: the shortest distance from src to every vertex. If there is a negative weight cycle, the shortest distances are not calculated and the negative weight cycle is reported.

1) Initialize distances from the source to all vertices as infinite, and the distance to the source itself as 0. Create an array dist[] of size |V| with all values infinite except dist[src], where src is the source vertex.

2) Calculate the shortest distances. Do the following |V| - 1 times, where |V| is the number of vertices in the given graph:
   a) For each edge u-v: if dist[v] > dist[u] + weight of edge uv, then update dist[v] = dist[u] + weight of edge uv.

3) Report whether there is a negative weight cycle in the graph. For each edge u-v: if dist[v] > dist[u] + weight of edge uv, then the graph contains a negative weight cycle.

The idea of step 3 is this: step 2 guarantees the shortest distances if the graph doesn't contain a negative weight cycle. If we iterate through all edges one more time and still get a shorter path for some vertex, then there is a negative weight cycle.

How does this work? Like other dynamic programming problems, the algorithm calculates the shortest paths in a bottom-up manner. It first calculates the shortest distances for the shortest paths that have at most one edge. Then it calculates the shortest paths with at most 2 edges, and so on. After the i-th iteration of the outer loop, the shortest paths with at most i edges are calculated. There can be at most |V| - 1 edges in any simple path; that is why the outer loop runs |V| - 1 times.

Example. Let the given source vertex be A. Initialize all distances as infinite, except the distance to the source itself. The total number of vertices in the graph is 5, so all edges must be processed 4 times. Let all edges be processed in the following order: (B,E), (D,B), (B,D), (A,B), (A,C), (D,C), (B,C), (E,D).

We get the following distances when all edges are processed the first time (the distance tables shown in the original figures are not reproduced here): the first row shows the initial distances; the second row shows the distances after edges (B,E), (D,B), (B,D) and (A,B) are processed; the third row shows the distances after (A,C) is processed; the fourth row shows the distances after (D,C), (B,C) and (E,D) are processed. The first iteration guarantees all shortest paths that are at most 1 edge long. We get the next set of distances when all edges are processed a second time (the last row shows the final values).

The second iteration guarantees all shortest paths that are at most 2 edges long. The algorithm processes all edges 2 more times; the distances are minimized after the second iteration, so the third and fourth iterations don't update any distances.
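A compact C rendering of steps 1 to 3 follows. It is a sketch written for these notes: the Edge struct and the sample 5-vertex graph with its weights are illustrative assumptions, since the example graph from the original figures is not reproduced here.

    #include <stdio.h>
    #include <limits.h>

    struct Edge { int u, v, w; };   /* directed edge u -> v with weight w */

    /* Steps 1-3 of the algorithm above. Returns 1 on success,
       0 if a negative weight cycle is reachable from src. */
    int bellmanFord(struct Edge *edges, int E, int V, int src, int dist[]) {
        int i, j;
        for (i = 0; i < V; i++)             /* step 1: initialize distances      */
            dist[i] = INT_MAX;
        dist[src] = 0;
        for (i = 1; i <= V - 1; i++)        /* step 2: relax all edges V-1 times */
            for (j = 0; j < E; j++) {
                int u = edges[j].u, v = edges[j].v, w = edges[j].w;
                if (dist[u] != INT_MAX && dist[u] + w < dist[v])
                    dist[v] = dist[u] + w;
            }
        for (j = 0; j < E; j++) {           /* step 3: check for negative cycles */
            int u = edges[j].u, v = edges[j].v, w = edges[j].w;
            if (dist[u] != INT_MAX && dist[u] + w < dist[v])
                return 0;
        }
        return 1;
    }

    int main(void) {
        /* Hypothetical 5-vertex example; vertices 0..4 stand for A..E */
        struct Edge edges[] = {
            {0, 1, -1}, {0, 2, 4}, {1, 2, 3}, {1, 3, 2},
            {1, 4, 2}, {3, 2, 5}, {3, 1, 1}, {4, 3, -3}
        };
        int E = sizeof(edges) / sizeof(edges[0]), V = 5, dist[5], i;
        if (bellmanFord(edges, E, V, 0, dist))
            for (i = 0; i < V; i++)
                printf("dist[%c] = %d\n", 'A' + i, dist[i]);
        else
            printf("Graph contains a negative weight cycle\n");
        return 0;
    }

The dist[u] != INT_MAX guard matters: without it, relaxing an edge out of a still-unreached vertex would overflow INT_MAX and corrupt the distances.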
