Dynamic Programming. Design and Analysis of Algorithms (Entwurf und Analyse von Algorithmen). Irene Parada.


1 Design and Analysis of Algorithms (Entwurf und Analyse von Algorithmen): Dynamic Programming

2 Overview: Introduction; Example 1; When and how to apply this method; Example 2; Final remarks.

3 Introduction: when recursion is inefficient. Example: computing the Fibonacci numbers $f_n$ recursively: $f_n = f_{n-1} + f_{n-2}$, $f_1 = f_2 = 1$. The solution ($f_n$) is the combination of the solutions of subproblems ($f_{n-1}$ and $f_{n-2}$). We are solving the same subproblems again and again!

4 Introduction: bottom-up! Compute the values in increasing order of the index: $f_1 = 1$, $f_2 = 1$, $f_3 = 2$, $f_4 = 3$, ..., up to $f_n$. Advantage: each subproblem has to be calculated only once. Disadvantage: possibly additional storage requirement. But often the running time is asymptotically the same as that of the recursion, and old values can be overwritten.
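To make the contrast concrete, here is a small Python sketch (not part of the original slides) of the naive recursion next to the bottom-up computation; the bottom-up version also shows how old values can be overwritten, so only two values are kept at a time.

```python
def fib_naive(n):
    # Exponential time: the same subproblems are solved again and again.
    if n <= 2:
        return 1
    return fib_naive(n - 1) + fib_naive(n - 2)


def fib_bottom_up(n):
    # Linear time: each subproblem is computed once; old values are overwritten.
    a, b = 1, 1  # f_1, f_2
    for _ in range(n - 2):
        a, b = b, a + b
    return b


print(fib_naive(10), fib_bottom_up(10))  # 55 55
```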

5 Introduction: typical behaviour of subproblems: in Divide & Conquer the subproblems are essentially independent of each other, whereas Dynamic Programming is aimed at the situation where the same subproblems recur, i.e., the subproblems overlap.

6 Introduction: dynamic programming. Steps: 1. Characterization of the structure of an optimal solution. 2. Recursive definition of the value of an optimal solution. 3. Computation of the value of an optimal solution in a bottom-up fashion. 4. Construction of an optimal solution from computed information.

7 Example 1: matrix-chain multiplication. Setting: we are given a sequence (chain) $A_1, A_2, \ldots, A_n$ of $n$ matrices to be multiplied, and we want to compute the product $A_1 A_2 \cdots A_n$. The standard algorithm multiplies pairs of matrices, so the product must be fully parenthesized: a fully parenthesized product is either a single matrix or the product of two fully parenthesized matrix products. Cost of multiplying a $p \times q$ matrix with a $q \times r$ matrix (school method): $pqr$ scalar multiplications. Matrix multiplication is associative, so all parenthesizations yield the same result, BUT in general at different costs! For example, $A_1 (A_2 A_3) = (A_1 A_2) A_3$, yet the two orders may need very different numbers of scalar multiplications.
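A concrete illustration (the dimensions are chosen here for the example and are not taken from the slides): let $A_1$ be $10 \times 100$, $A_2$ be $100 \times 5$ and $A_3$ be $5 \times 50$. Then $(A_1 A_2) A_3$ costs $10 \cdot 100 \cdot 5 + 10 \cdot 5 \cdot 50 = 7500$ scalar multiplications, while $A_1 (A_2 A_3)$ costs $100 \cdot 5 \cdot 50 + 10 \cdot 100 \cdot 50 = 75000$, ten times as many.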

8 Example 1: matrix-chain multiplication. Matrix-chain multiplication problem: given a chain $A_1, \ldots, A_n$ of $n$ matrices, $A_i$ of dimension $p_{i-1} \times p_i$ for $i = 1, \ldots, n$, fully parenthesize the product $A_1 A_2 \cdots A_n$ so as to minimize the number of scalar multiplications needed. Note that we are not actually multiplying matrices! The goal is only to determine an order for multiplying the matrices that has the lowest cost.

9 Example 1: matrix-chain multiplication. Matrix-chain multiplication problem: given a chain $A_1, \ldots, A_n$ of $n$ matrices, $A_i$ of dim. $p_{i-1} \times p_i$ for all $i$, fully parenthesize $A_1 A_2 \cdots A_n$ minimizing the cost. How NOT to solve it: try all parenthesizations. Let $P(n)$ := number of alternative parenthesizations of a sequence of $n$ matrices. Then $P(n) = 1$ for $n = 1$ and $P(n) = \sum_{k=1}^{n-1} P(k) P(n-k)$ for $n \ge 2$. These are the Catalan numbers: $P(n) = C(n-1)$ with $C(n) = \frac{1}{n+1} \binom{2n}{n} = \Theta(4^n / n^{1.5})$.
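To see how quickly this blows up, here is a small Python sketch (my illustration, not part of the slides) that evaluates the recurrence for $P(n)$; the caching is only there so that the table of values can be printed quickly.

```python
from functools import lru_cache


@lru_cache(maxsize=None)
def num_parenthesizations(n):
    # P(1) = 1, P(n) = sum_{k=1}^{n-1} P(k) * P(n-k) for n >= 2
    if n == 1:
        return 1
    return sum(num_parenthesizations(k) * num_parenthesizations(n - k)
               for k in range(1, n))


print([num_parenthesizations(n) for n in range(1, 11)])
# [1, 1, 2, 5, 14, 42, 132, 429, 1430, 4862] -- the Catalan numbers C(0), C(1), ...
```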

10 Example 1: matrix-chain multiplication. Matrix-chain multiplication problem: given a chain $A_1, \ldots, A_n$ of $n$ matrices, $A_i$ of dim. $p_{i-1} \times p_i$ for all $i$, fully parenthesize $A_1 A_2 \cdots A_n$ minimizing the cost. How to solve it: using dynamic programming.

11 Example 1: matrix-chain multiplication. Matrix-chain multiplication problem: given a chain $A_1, \ldots, A_n$ of $n$ matrices, $A_i$ of dim. $p_{i-1} \times p_i$ for all $i$, fully parenthesize $A_1 A_2 \cdots A_n$ minimizing the cost. Step 1: structure of an optimal solution. Notation: $A_{i..j}$ with $i \le j$ denotes the matrix that results from evaluating the product $A_i A_{i+1} \cdots A_j$. Observation: if the problem is nontrivial, i.e., $i < j$, then we must split the product between $A_k$ and $A_{k+1}$ for some integer $k$ in the range $i \le k < j$, i.e., $(A_i \cdots A_k)(A_{k+1} \cdots A_j)$. Optimal substructure: in an optimal solution, the subchains are themselves optimally parenthesized!

12 Example 1: matrix-chain multiplication. Matrix-chain multiplication problem: given a chain $A_1, \ldots, A_n$ of $n$ matrices, $A_i$ of dim. $p_{i-1} \times p_i$ for all $i$, fully parenthesize $A_1 A_2 \cdots A_n$ minimizing the cost. Step 2: a recursive solution. $A_{i..j}$ := product $A_i A_{i+1} \cdots A_j$; $m[i, j]$ := the minimum cost of computing the matrix $A_{i..j}$; we are interested in $m[1, n]$. Then $m[i, j] = 0$ for $i = j$, and $m[i, j] = \min_{i \le k < j} \{ m[i, k] + m[k+1, j] + p_{i-1} p_k p_j \}$ for $i < j$, where $k$ ranges over the possible splits $(A_i \cdots A_k)(A_{k+1} \cdots A_j)$.

13 Example 1: matrix-chain multiplication. Matrix-chain multiplication problem: given a chain $A_1, \ldots, A_n$ of $n$ matrices, $A_i$ of dim. $p_{i-1} \times p_i$ for all $i$, fully parenthesize $A_1 A_2 \cdots A_n$ minimizing the cost. Step 2: a recursive solution. A direct recursive algorithm takes exponential time!
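The exponential blow-up disappears as soon as the recursion caches its results. The following top-down Python sketch (my own illustration; the slides' algorithm is the bottom-up one of Step 3) memoizes $m[i, j]$; the list p of dimensions is the assumed input, with $A_i$ of size p[i-1] x p[i].

```python
from functools import lru_cache


def matrix_chain_cost_memo(p):
    """Minimum number of scalar multiplications for A_1 ... A_n, A_i being p[i-1] x p[i]."""
    n = len(p) - 1

    @lru_cache(maxsize=None)
    def m(i, j):
        if i == j:
            return 0
        # Try every split point k and keep the cheapest one.
        return min(m(i, k) + m(k + 1, j) + p[i - 1] * p[k] * p[j]
                   for k in range(i, j))

    return m(1, n)


print(matrix_chain_cost_memo([10, 100, 5, 50]))  # 7500, as in the worked example above
```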

14 Example 1: matrix-chain multiplication. Matrix-chain multiplication problem: given a chain $A_1, \ldots, A_n$ of $n$ matrices, $A_i$ of dim. $p_{i-1} \times p_i$ for all $i$, fully parenthesize $A_1 A_2 \cdots A_n$ minimizing the cost. Step 3: bottom-up algorithm. $A_{i..j}$ := product $A_i A_{i+1} \cdots A_j$; $m[i, j]$ := the minimum cost of computing the matrix $A_{i..j}$; we are interested in $m[1, n]$. Recall that $m[i, j] = 0$ for $i = j$ and $m[i, j] = \min_{i \le k < j} \{ m[i, k] + m[k+1, j] + p_{i-1} p_k p_j \}$ for $i < j$: computing the product of $j - i + 1$ matrices depends only on the costs of computing products of fewer than $j - i + 1$ matrices.

15 Example 1: matrix-chain multiplication. Matrix-chain multiplication problem: given a chain $A_1, \ldots, A_n$ of $n$ matrices, $A_i$ of dim. $p_{i-1} \times p_i$ for all $i$, fully parenthesize $A_1 A_2 \cdots A_n$ minimizing the cost. Step 3: bottom-up algorithm. Order: fill in the table m by solving the problem on matrix chains of increasing length.
for i := 1 to n do m[i, i] := 0 od
for l := 2 to n do                         (all chain lengths, shortest first)
    for i := 1 to n - l + 1 do             (all possible starting points of the subchain)
        j := i + l - 1                     (the corresponding ending point of the subchain)
        m[i, j] := ∞                       (special value meaning "not known yet")
        for k := i to j - 1 do             (all possible splitting points of the subchain)
            hlp := m[i, k] + m[k + 1, j] + p_{i-1} p_k p_j
            if hlp < m[i, j] then m[i, j] := hlp fi
        od
    od
od

16 Example 1: matrix-chain multiplication. Matrix-chain multiplication problem: given a chain $A_1, \ldots, A_n$ of $n$ matrices, $A_i$ of dim. $p_{i-1} \times p_i$ for all $i$, fully parenthesize $A_1 A_2 \cdots A_n$ minimizing the cost. Step 4: constructing a solution. $s[i, j]$ := value of $k$ such that an optimal parenthesization of $A_i \cdots A_j$ splits the product between $A_k$ and $A_{k+1}$. In particular, the final matrix multiplication for computing $A_{1..n}$ optimally is $A_{1..s[1,n]} \cdot A_{s[1,n]+1..n}$.
for i := 1 to n do m[i, i] := 0 od
for l := 2 to n do
    for i := 1 to n - l + 1 do
        j := i + l - 1
        m[i, j] := ∞
        for k := i to j - 1 do
            hlp := m[i, k] + m[k + 1, j] + p_{i-1} p_k p_j
            if hlp < m[i, j] then m[i, j] := hlp ; s[i, j] := k fi
        od
    od
od

17 Example 1: matrix-chain multiplication. Matrix-chain multiplication problem: given a chain $A_1, \ldots, A_n$ of $n$ matrices, $A_i$ of dim. $p_{i-1} \times p_i$ for all $i$, fully parenthesize $A_1 A_2 \cdots A_n$ minimizing the cost. Step 4: constructing a solution (continued). The algorithm of the previous slide runs in $\Theta(n^3)$ time and uses $\Theta(n^2)$ space.

18 Example 1: matrix-chain multiplication. Matrix-chain multiplication problem: given a chain $A_1, \ldots, A_n$ of $n$ matrices, $A_i$ of dim. $p_{i-1} \times p_i$ for all $i$, fully parenthesize $A_1 A_2 \cdots A_n$ minimizing the cost. Step 4: constructing a solution. $s[i, j]$ := value of $k$ such that an optimal parenthesization of $A_i \cdots A_j$ splits the product between $A_k$ and $A_{k+1}$; the final matrix multiplication for computing $A_{1..n}$ optimally is $A_{1..s[1,n]} \cdot A_{s[1,n]+1..n}$.
FUNCTION PRINT-OPTIMAL-PARENS(s, i, j) : STRING
    x, y : STRING
    if i < j then
        x := PRINT-OPTIMAL-PARENS(s, i, s[i, j])
        y := PRINT-OPTIMAL-PARENS(s, s[i, j] + 1, j)
        return "(" x y ")"
    else
        return "A_i"
    fi
Call PRINT-OPTIMAL-PARENS(s, 1, n) and print the result; this takes $O(n)$ time.
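For reference, here is a runnable Python version of the whole procedure, sketched from the pseudocode above (the identifier names are mine): it fills the tables m and s bottom-up and then rebuilds the optimal parenthesization.

```python
import math


def matrix_chain_order(p):
    """Bottom-up DP for the chain A_1 ... A_n, where A_i has dimensions p[i-1] x p[i]."""
    n = len(p) - 1
    m = [[0] * (n + 1) for _ in range(n + 1)]   # m[i][j]: minimum cost of A_i..j
    s = [[0] * (n + 1) for _ in range(n + 1)]   # s[i][j]: optimal split point k
    for l in range(2, n + 1):                   # chain length
        for i in range(1, n - l + 2):           # starting point of the subchain
            j = i + l - 1                       # ending point of the subchain
            m[i][j] = math.inf                  # "not known yet"
            for k in range(i, j):               # splitting point
                cost = m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                if cost < m[i][j]:
                    m[i][j], s[i][j] = cost, k
    return m, s


def print_optimal_parens(s, i, j):
    """Rebuild the optimal parenthesization of A_i..A_j as a string in O(n) time."""
    if i == j:
        return f"A{i}"
    k = s[i][j]
    return "(" + print_optimal_parens(s, i, k) + print_optimal_parens(s, k + 1, j) + ")"


m, s = matrix_chain_order([10, 100, 5, 50])
print(m[1][3], print_optimal_parens(s, 1, 3))   # 7500 ((A1A2)A3)
```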

19 When and how to apply it? Typically to optimization problems. Two key characteristics, the first of which is optimal substructure: an optimal solution to the problem contains within it optimal solutions to subproblems. This is often shown with a cut-and-paste argument. Try to keep the subproblems simple. Variants differ in how many subproblems are used to solve the original problem, and in how many choices there are for which subproblem(s) to use. We might want to store the choice we made, for reconstructing the solution (step 4). Running time (informally): (# subproblems overall) × (# choices we look at for each subproblem).

20 When and how to apply it? Typically to optimization problems. Two key characteristics: Optimal substructure: an optimal solution to the problem contains within it optimal solutions to subproblems. Overlapping subproblems: a recursive algorithm revisits the same subproblems repeatedly. We want the total number of distinct subproblems to be polynomial in the input size.

21 Example 2: Longest common subsequence. A subsequence of a given sequence is just the given sequence with zero or more elements left out. Formally: given the sequences $X = (x_1, x_2, \ldots, x_m)$ and $Z = (z_1, z_2, \ldots, z_k)$, $k \le m$, $Z$ is a subsequence of $X$ if there exists a strictly increasing sequence $i_1, i_2, \ldots, i_k$ of indices of $X$ such that $z_j = x_{i_j}$ for all $j = 1..k$. Given two sequences $X$ and $Y$, we say that a sequence $Z$ is a common subsequence of $X$ and $Y$ if $Z$ is a subsequence of both $X$ and $Y$. Longest-common-subsequence (LCS) problem: given two sequences $X = (x_1, x_2, \ldots, x_m)$ and $Y = (y_1, y_2, \ldots, y_n)$, find a maximum-length common subsequence of $X$ and $Y$.

22 Example 2: Longest common subsequence. Longest-common-subsequence (LCS) problem: given two sequences $X = (x_1, x_2, \ldots, x_m)$ and $Y = (y_1, y_2, \ldots, y_n)$, find a maximum-length common subsequence of $X$ and $Y$. Step 1: structure of an optimal solution. The $i$-th prefix of $X$, for $i = 0..m$, is $X_i = (x_1, x_2, \ldots, x_i)$. Let $Z = (z_1, \ldots, z_k)$ be any LCS of $X$ and $Y$. If $x_m = y_n$, then $z_k = x_m = y_n$ and $Z_{k-1}$ is an LCS of $X_{m-1}$ and $Y_{n-1}$. If $x_m \ne y_n$: if $z_k \ne x_m$ then $Z$ is an LCS of $X_{m-1}$ and $Y$; if $z_k \ne y_n$ then $Z$ is an LCS of $X$ and $Y_{n-1}$. An LCS of two sequences contains an LCS of prefixes of the two sequences.

23 Example 2: Longest common subsequence. Longest-common-subsequence (LCS) problem: given two sequences $X = (x_1, x_2, \ldots, x_m)$ and $Y = (y_1, y_2, \ldots, y_n)$, find a maximum-length common subsequence of $X$ and $Y$. Step 2: a recursive solution. $l[i, j]$ with $0 \le i \le m$, $0 \le j \le n$ := length of an LCS of the sequences $X_i$ and $Y_j$. Then $l[i, j] = 0$ if $i = 0$ or $j = 0$; $l[i, j] = l[i-1, j-1] + 1$ if $i, j > 0$ and $x_i = y_j$; and $l[i, j] = \max\{l[i-1, j], l[i, j-1]\}$ if $i, j > 0$ and $x_i \ne y_j$.

24 Example 2: Longest common subsequence. Longest-common-subsequence (LCS) problem: given two sequences $X = (x_1, x_2, \ldots, x_m)$ and $Y = (y_1, y_2, \ldots, y_n)$, find a maximum-length common subsequence of $X$ and $Y$. Step 3: bottom-up algorithm. The computation of $l[i, j]$ requires only the values of $l[i-1, j]$, $l[i, j-1]$ and $l[i-1, j-1]$. Compute the table $l$ (an $(m+1) \times (n+1)$ matrix with rows $i = 0..m$ and columns $j = 0..n$) line by line.

25 Example 2: Longest common subsequence. Longest-common-subsequence (LCS) problem: given two sequences $X = (x_1, x_2, \ldots, x_m)$ and $Y = (y_1, y_2, \ldots, y_n)$, find a maximum-length common subsequence of $X$ and $Y$. Step 3: bottom-up algorithm. $l[i, j]$ := length of an LCS of the sequences $X_i$ and $Y_j$; compute the table $l$ line by line.
for i := 0 to m do l[i, 0] := 0 od          (initialization)
for j := 1 to n do l[0, j] := 0 od          (initialization)
for i := 1 to m do
    for j := 1 to n do
        if x_i = y_j then l[i, j] := l[i-1, j-1] + 1          (first case)
        else l[i, j] := max(l[i-1, j], l[i, j-1]) fi          (second case)
    od
od
$\Theta(mn)$ time, $O(mn)$ space.
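As a concrete counterpart to the pseudocode, here is a short Python sketch (mine, not the slides') of the bottom-up table computation; lcs_table returns the full table so that a solution can be reconstructed afterwards.

```python
def lcs_table(x, y):
    """Bottom-up LCS table; l[i][j] is the LCS length of the prefixes x[:i] and y[:j]."""
    m, n = len(x), len(y)
    l = [[0] * (n + 1) for _ in range(m + 1)]   # row 0 and column 0 stay 0 (initialization)
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if x[i - 1] == y[j - 1]:            # first case: the characters match
                l[i][j] = l[i - 1][j - 1] + 1
            else:                               # second case: drop one character
                l[i][j] = max(l[i - 1][j], l[i][j - 1])
    return l


print(lcs_table("ABCBDAB", "BDCABA")[-1][-1])   # 4
```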

26 Example 2: Longest common subsequence. Longest-common-subsequence (LCS) problem: given two sequences $X = (x_1, x_2, \ldots, x_m)$ and $Y = (y_1, y_2, \ldots, y_n)$, find a maximum-length common subsequence of $X$ and $Y$. Step 4: constructing a solution from the table $l$ computed by the algorithm of the previous slide. We could add a new matrix storing where we came from, but there is no need: walking back from $l[m, n]$, the first case ($x_i = y_j$) gives you one character of the sequence, and in every step either $i$ or $j$ or both decrease by one unit, so reconstructing the solution takes $O(m + n)$ time.
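A possible Python reconstruction along these lines (my sketch, reusing lcs_table from above): walk back from l[m][n] to the border and emit a character whenever the first case applies.

```python
def lcs_reconstruct(x, y):
    """Recover one LCS by walking back through the table; O(m + n) steps after the table is built."""
    l = lcs_table(x, y)
    i, j, out = len(x), len(y), []
    while i > 0 and j > 0:
        if x[i - 1] == y[j - 1]:                # first case: this character belongs to the LCS
            out.append(x[i - 1])
            i, j = i - 1, j - 1
        elif l[i - 1][j] >= l[i][j - 1]:        # otherwise move toward the larger subproblem value
            i -= 1
        else:
            j -= 1
    return "".join(reversed(out))


print(lcs_reconstruct("ABCBDAB", "BDCABA"))     # BCBA
```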

27 Example 2: Longest common subsequence. Longest-common-subsequence (LCS) problem: given two sequences $X = (x_1, x_2, \ldots, x_m)$ and $Y = (y_1, y_2, \ldots, y_n)$, find a maximum-length common subsequence of $X$ and $Y$. Remark: if we are only interested in the length of an optimal common subsequence, but not in an optimal subsequence itself, the matrix $l[i, j]$ does not need to be stored. It is enough to save only the last two lines! The memory required is then $O(\min\{m, n\})$.
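A sketch of this space-optimized variant (again my own illustration): only the previous and the current row are kept, and the shorter sequence is placed along the columns so each row has length $\min\{m, n\} + 1$.

```python
def lcs_length_low_memory(x, y):
    """LCS length using only O(min(m, n)) extra memory."""
    if len(y) > len(x):                     # make y the shorter sequence
        x, y = y, x
    prev = [0] * (len(y) + 1)               # line i-1 of the table
    for i in range(1, len(x) + 1):
        curr = [0] * (len(y) + 1)           # line i of the table
        for j in range(1, len(y) + 1):
            if x[i - 1] == y[j - 1]:
                curr[j] = prev[j - 1] + 1
            else:
                curr[j] = max(prev[j], curr[j - 1])
        prev = curr                         # the older line is discarded
    return prev[-1]


print(lcs_length_low_memory("ABCBDAB", "BDCABA"))  # 4
```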

28 Final remarks. Two key characteristics that a problem must have for dynamic programming to be a possible (but not necessarily the best!) solution technique are optimal substructure and overlapping subproblems. We explained the bottom-up approach to dynamic programming, but a top-down recursive approach that takes advantage of the overlapping subproblems (memoization) is also possible. Bibliography: Introduction to Algorithms by T. H. Cormen, C. E. Leiserson, R. L. Rivest (and C. Stein).
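To illustrate the memoization remark, here is a top-down Python counterpart of the LCS recurrence (my sketch, not from the slides); it evaluates the same recurrence as the bottom-up table, but only on the subproblems that are actually reached, at the price of recursion depth up to about m + n.

```python
from functools import lru_cache


def lcs_length_top_down(x, y):
    """Top-down LCS length with memoization: same recurrence, evaluated lazily."""
    @lru_cache(maxsize=None)
    def l(i, j):
        if i == 0 or j == 0:
            return 0
        if x[i - 1] == y[j - 1]:
            return l(i - 1, j - 1) + 1
        return max(l(i - 1, j), l(i, j - 1))

    return l(len(x), len(y))


print(lcs_length_top_down("ABCBDAB", "BDCABA"))  # 4
```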
