Dynamic Programming
Shabsi Walfish
NYU - Fundamental Algorithms, Summer 2006


Dynamic Programming

What is Dynamic Programming?
- A technique for avoiding redundant work in recursive algorithms
- Works best with optimization problems that have a nice underlying structure
- Can often be used to transform recursive code into iterative code

Toy Example: Fibonacci

F(n):
    if n == 1 or n == 2 then return 1
    else return F(n-1) + F(n-2)

What is the runtime of F? For a quick lower bound, change the last line to read "return F(n-2) + F(n-2)". It is then easy to see that the runtime is Ω(2^(n/2)): exponential!
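For comparison, the naive recursion is directly runnable; here is a Python rendering (a minimal sketch; the name fib is mine, not from the slides):

    def fib(n):
        # Naive recursion: the two recursive calls recompute the same
        # subproblems over and over, so the call tree grows exponentially.
        if n == 1 or n == 2:
            return 1
        return fib(n - 1) + fib(n - 2)

Already around n = 35 this makes millions of recursive calls.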

Fixing Fibonacci, Part 1

F(n):
    static int a[max]   // initialized to all 0
    if a[n] != 0 then return a[n]
    if n == 1 or n == 2 then a[n] = 1
    else a[n] = F(n-1) + F(n-2)
    return a[n]

- What is the new runtime of F? One addition operation per array entry, so the total runtime is O(n). Linear!
- Still uses recursion. Can we do better? How is the array actually filled in?
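The same memoization idea in runnable Python, using a dictionary as the cache instead of a static array (a sketch with my own naming, not the slides' code):

    memo = {1: 1, 2: 1}   # base cases pre-filled

    def fib_memo(n):
        # Each F(k) is computed at most once; every later request for it
        # is a table lookup, so the total work is O(n) additions.
        if n not in memo:
            memo[n] = fib_memo(n - 1) + fib_memo(n - 2)
        return memo[n]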

Fixing Fibonacci, Part 2

F(n):
    int a[max]   // Need not be initialized
    a[1] = a[2] = 1
    for j = 3 to n
        a[j] = a[j-1] + a[j-2]
    return a[n]

- Recursion has been eliminated and replaced with a simple loop that builds the array directly
- Recursive calls are replaced by table lookups
- Runtime remains O(n)

Fixing Fibonacci, Part 3

F(n):
    int a[max]   // Need not be initialized
    a[1] = a[2] = 1
    for j = 3 to n
        j1 = 1 + (j mod 2)
        j2 = 1 + ((j-1) mod 2)
        a[j1] = a[j1] + a[j2]
    return a[1 + (n mod 2)]

- The table has been compressed to constant space
- Runtime remains O(n)
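In Python, the same constant-space idea is usually written with two rolling variables rather than a parity-indexed array (an equivalent sketch; the phrasing is mine):

    def fib_iter(n):
        # prev and curr play the roles of a[j2] and a[j1]: they hold
        # F(j-2) and F(j-1), so only O(1) space is ever kept.
        if n == 1 or n == 2:
            return 1
        prev, curr = 1, 1
        for _ in range(3, n + 1):
            prev, curr = curr, prev + curr
        return curr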

Tackling Optimization Problems
- Instead of finding the optimal solution itself, first try to find only the cost of the optimal solution
- Find a recursive program (using a minimal number of parameters) to solve for the cost
- Apply dynamic programming, building a table to avoid excess work (as in the Fibonacci example)
- Convert to iterative code if desired
- Memoize the table entries in order to recover the optimal solution itself

The LCS Problem
- Given sequences X = {x_1, ..., x_m} and Y = {y_1, ..., y_n}, find the longest common subsequence
- A subsequence is formed by removing zero or more elements from a sequence
- A common subsequence is a subsequence shared by X and Y
- Example: X = {A,P,P,L,E,S}, Y = {P,E,A,R,S}. The LCS is {P,E,S}
- The LCS may not always be unique (e.g. Y = {P,E,A,R,L,S})
- Naïve solution: try all possible subsequences. What is the runtime? (Hint: not good!)

Recursive sol'n for length of LCS

X_i denotes the first i elements of X; similarly, Y_j denotes the first j elements of Y.

LCSLen(X, Y):
    m = len(X); n = len(Y)
    if m == 0 or n == 0 then return 0
    else if x_m == y_n then return LCSLen(X_{m-1}, Y_{n-1}) + 1
    else return max( LCSLen(X_{m-1}, Y_n), LCSLen(X_m, Y_{n-1}) )

- Easy to argue correctness
- Worst-case runtime is exponential (the analysis is similar to Fibonacci)
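Here is the recursion as runnable Python, passing the prefix lengths m and n explicitly instead of slicing the sequences (a sketch; the name and the defaulting trick are mine):

    def lcs_len_rec(x, y, m=None, n=None):
        # Direct transcription of LCSLen: exponential time without a cache,
        # since the same (m, n) subproblems are solved repeatedly.
        if m is None:
            m, n = len(x), len(y)
        if m == 0 or n == 0:
            return 0
        if x[m - 1] == y[n - 1]:
            return lcs_len_rec(x, y, m - 1, n - 1) + 1
        return max(lcs_len_rec(x, y, m - 1, n), lcs_len_rec(x, y, m, n - 1))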

A Dynamic Programming Fix

LCSLen(X, Y):
    static int c[maxlen, maxlen]   // Initialized to all -1
    m = len(X); n = len(Y)
    if c[m,n] != -1 then return c[m,n]
    if m == 0 or n == 0 then c[m,n] = 0
    else if x_m == y_n then c[m,n] = LCSLen(X_{m-1}, Y_{n-1}) + 1
    else c[m,n] = max( LCSLen(X_{m-1}, Y_n), LCSLen(X_m, Y_{n-1}) )
    return c[m,n]

- Worst-case runtime is now O(mn)
- Constant time per table entry, and the table is m by n
- How is the table actually being built?

Iterative sol'n for length of LCS

LCSLen(X, Y):
    static int c[maxlen, maxlen]   // Initialized to all 0
    m = len(X); n = len(Y)
    for i = 1 to m
        for j = 1 to n
            if x_i == y_j then c[i,j] = c[i-1,j-1] + 1
            else c[i,j] = max( c[i-1,j], c[i,j-1] )
    return c[m,n]

- Runtime is clearly still O(mn)
- No recursion
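The iterative table-filling algorithm as runnable Python, with 0-based strings but the same 1-based table layout (a sketch; lcs_length is my name for it):

    def lcs_length(x, y):
        # c[i][j] = length of an LCS of the first i chars of x and the
        # first j chars of y; row 0 and column 0 stay 0 as the base case.
        m, n = len(x), len(y)
        c = [[0] * (n + 1) for _ in range(m + 1)]
        for i in range(1, m + 1):
            for j in range(1, n + 1):
                if x[i - 1] == y[j - 1]:
                    c[i][j] = c[i - 1][j - 1] + 1
                else:
                    c[i][j] = max(c[i - 1][j], c[i][j - 1])
        return c[m][n]

For the earlier example, lcs_length("APPLES", "PEARS") returns 3.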

Finding the LCS: Memoize

LCSTable(X, Y):
    static int c[maxlen, maxlen]   // Initialized to all 0
    static int b[maxlen, maxlen]   // May be uninitialized
    m = len(X); n = len(Y)
    for i = 1 to m; for j = 1 to n
        if x_i == y_j then c[i,j] = c[i-1,j-1] + 1; b[i,j] = 1   // Case 1
        else if c[i-1,j] > c[i,j-1] then c[i,j] = c[i-1,j]; b[i,j] = 2   // Case 2
        else c[i,j] = c[i,j-1]; b[i,j] = 3   // Case 3

We need a second algorithm to recover the LCS from the table b[.,.] and print it out.

Printing out the LCS

PrintLCS(X, b, i, j):
    if i == 0 or j == 0 then return
    if b[i,j] == 1 then
        PrintLCS(X, b, i-1, j-1)
        print x_i
    else if b[i,j] == 2 then PrintLCS(X, b, i-1, j)
    else PrintLCS(X, b, i, j-1)

- Runtime is O(m + n), very efficient
- Notice that it was convenient to recover the LCS solution recursively
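Folding the last two slides together in Python: build c and b in one pass, then walk b back from (m, n). This sketch returns the LCS as a string instead of printing it, and uses an iterative walk equivalent to the recursive PrintLCS (names are mine):

    def lcs(x, y):
        # b[i][j] records which case produced c[i][j]:
        # 1 = match (diagonal), 2 = drop x_i (up), 3 = drop y_j (left).
        m, n = len(x), len(y)
        c = [[0] * (n + 1) for _ in range(m + 1)]
        b = [[0] * (n + 1) for _ in range(m + 1)]
        for i in range(1, m + 1):
            for j in range(1, n + 1):
                if x[i - 1] == y[j - 1]:
                    c[i][j], b[i][j] = c[i - 1][j - 1] + 1, 1
                elif c[i - 1][j] > c[i][j - 1]:
                    c[i][j], b[i][j] = c[i - 1][j], 2
                else:
                    c[i][j], b[i][j] = c[i][j - 1], 3
        out, i, j = [], m, n
        while i > 0 and j > 0:
            if b[i][j] == 1:
                out.append(x[i - 1]); i -= 1; j -= 1
            elif b[i][j] == 2:
                i -= 1
            else:
                j -= 1
        return "".join(reversed(out))

lcs("APPLES", "PEARS") returns "PES", matching the earlier example.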

Matrix Multiplication

Matrix multiplication basics:
- Matrix multiplication is not commutative
- A p x q matrix can only multiply with a q x r matrix
- Using the standard technique, it takes pqr scalar multiplications to compute the product

Matrix Chain Multiplication Problem: Given a chain of n matrices (A_1, ..., A_n) such that A_i is a p_{i-1} x p_i matrix, parenthesize the product A_1 A_2 ... A_n to minimize the number of scalar multiplications required to compute the product.

Ex: Is it faster to compute A_1 (A_2 A_3) or (A_1 A_2) A_3?
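As a concrete instance (dimensions of my own choosing, not from the slides): let A_1 be 10 x 100, A_2 be 100 x 5, and A_3 be 5 x 50. Then (A_1 A_2) A_3 costs 10*100*5 + 10*5*50 = 7,500 scalar multiplications, while A_1 (A_2 A_3) costs 100*5*50 + 10*100*50 = 75,000. The parenthesization alone makes a factor-of-ten difference.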

Naïve Solution
- Try all possible parenthesizations and compute the cost of each to find the minimum
- How many possible parenthesizations of n matrices are there? Call this P(n). Observe P(1) = 1, and for n > 1:

  P(n) = Σ_{k=1}^{n-1} P(k) P(n-k)

- Can show P(n) = Ω(2^n)! Not practical for large matrix chains (which is when it is most critical to find the best parenthesization)
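The recurrence is easy to check in runnable Python (a sketch; count_parens is my name, and the cache keeps the check itself from blowing up exponentially):

    from functools import lru_cache

    @lru_cache(maxsize=None)
    def count_parens(n):
        # P(n) from the recurrence above, with repeated subproblems cached.
        if n == 1:
            return 1
        return sum(count_parens(k) * count_parens(n - k) for k in range(1, n))

For example, count_parens(4) returns 5, the five ways to parenthesize a 4-matrix chain.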

Recursive sol'n for minimum cost of a matrix chain multiplication
- Let m[i,j] be the minimum cost of computing the product of the chain (A_i, ..., A_j)
- Observe that if i = j then m[i,j] = 0
- For i < j:

  m[i,j] = min_{i <= k < j} ( m[i,k] + m[k+1,j] + p_{i-1} p_k p_j )

- In other words, we try parenthesizing the chain at each possible location k, and find the cost of the multiplication using that parenthesization
- Directly implementing this recursive algorithm requires exponential time

Iterative sol'n for matrix chain order

Matrix-Chain-Order(p[1..n+1]):
    Initialize array m[n,n] to ∞
    for i = 1 to n
        m[i,i] = 0
    for L = 2 to n   // L is the chain length
        for i = 1 to n - L + 1
            j = i + L - 1
            for k = i to j-1
                q = m[i,k] + m[k+1,j] + p_{i-1} p_k p_j
                if q < m[i,j] then m[i,j] = q; s[i,j] = k   // Memoization

Runtime is now O(n^3), using O(n^2) space.
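A runnable Python version of Matrix-Chain-Order (a sketch; here p is a 0-based list p[0..n] with A_i of size p[i-1] x p[i], and the function name is mine):

    import math

    def matrix_chain_order(p):
        # m[i][j] = min scalar multiplications to compute A_i ... A_j;
        # s[i][j] = the split point k that achieves it. Entries use
        # indices 1..n; row and column 0 are unused padding.
        n = len(p) - 1
        m = [[0] * (n + 1) for _ in range(n + 1)]
        s = [[0] * (n + 1) for _ in range(n + 1)]
        for L in range(2, n + 1):              # chain length
            for i in range(1, n - L + 2):
                j = i + L - 1
                m[i][j] = math.inf
                for k in range(i, j):
                    q = m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                    if q < m[i][j]:
                        m[i][j], s[i][j] = q, k
        return m, s

With the dimensions from the earlier example, matrix_chain_order([10, 100, 5, 50])[0][1][3] evaluates to 7500.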

All Pairs Shortest Paths
- Given a graph with weighted edges, find the shortest path between every pair of nodes
- No negative-weight cycles allowed (negative edges are OK)
- Naïve solution: apply a Single Source Shortest Path algorithm |V| times
  - Dijkstra doesn't work with negative edge weights
  - Bellman-Ford is O(|V||E|) per source, yielding a worst-case runtime of O(|V|^4)
- Can we do better using dynamic programming? It is not obvious at all

Floyd-Warshall: Recursive APSP
- w[n,n] is the edge weight matrix for an n-vertex graph
- Only vertices numbered k or less will be allowed to become internal vertices for shortest paths

FW(w[n,n], k):
    int c[n,n]; int d[n,n]   // May be uninitialized
    if k == 0 then return w[.,.]
    d[.,.] = FW(w[.,.], k-1)   // Dynamic programming?
    for i = 1 to n; for j = 1 to n
        c[i,j] = min( d[i,j], d[i,k] + d[k,j] )
    return c[.,.]

One can prove that FW(w[n,n], n) is the solution to the APSP cost problem.

Removing Recursion from F-W

FW(w[n,n]):
    static int d[n,n,n]   // May be uninitialized
    copy w[.,.] into d[.,.,0]   // This is the base case
    for k = 1 to n
        for i = 1 to n; for j = 1 to n
            d[i,j,k] = min( d[i,j,k-1], d[i,k,k-1] + d[k,j,k-1] )
    return d[.,.,n]

- Once again, observe how the table is filled
- Entries at the k-th level depend only on the (k-1)-st level
- Careful attention to detail reveals that it is safe to overwrite previous levels during the loop, like this:

Removing Recursion from F-W

FW(w[n,n]):
    static int d[n,n]   // May be uninitialized
    copy w[.,.] into d[.,.]   // This is the base case
    for k = 1 to n
        for i = 1 to n; for j = 1 to n
            d[i,j] = min( d[i,j], d[i,k] + d[k,j] )
    return d[.,.]

- Runtime is still O(n^3); space is now O(n^2)
- Now we need to memoize to recover the actual shortest paths
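The space-optimized loop translates almost line for line into Python (a sketch; vertices are 0-based here, unlike the slides, and INF marks a missing edge):

    INF = float("inf")

    def floyd_warshall(w):
        # w[i][j] is the weight of edge i -> j, INF if absent, 0 if i == j.
        n = len(w)
        d = [row[:] for row in w]      # copy of w: the k = 0 base case
        for k in range(n):
            for i in range(n):
                for j in range(n):
                    if d[i][k] + d[k][j] < d[i][j]:
                        d[i][j] = d[i][k] + d[k][j]
        return d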

Memoized F-W

FWTable(w[n,n]):
    static int d[n,n]; b[n,n]   // Initialize b[n,n] to all -1
    copy w[.,.] into d[.,.]   // This is the base case
    for i = 1 to n; for j = 1 to n
        if w[i,j] < INF then b[i,j] = 0
    for k = 1 to n
        for i = 1 to n; for j = 1 to n
            if d[i,j] > d[i,k] + d[k,j] then
                d[i,j] = d[i,k] + d[k,j]; b[i,j] = k

We can recover the actual paths from the table b[.,.].

Printing out an F-W Path

PrintFW(b[n,n], i, j):
    if b[i,j] == -1 then print "No path"
    else if b[i,j] == 0 then print (i, j)
    else
        PrintFW(b[.,.], i, b[i,j])
        PrintFW(b[.,.], b[i,j], j)
    return

Notice the use of recursion in path recovery.
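Combining the last two slides in Python: build the b table alongside d, then recover a path recursively as PrintFW does. In this sketch (0-based vertices, my naming) b[i][j] is None for a direct edge, -1 for no path, and otherwise an intermediate vertex k, and the path comes back as a vertex list rather than printed edge pairs:

    def floyd_warshall_paths(w):
        n = len(w)
        d = [row[:] for row in w]
        # Direct edges get None (the slides' 0); unreachable pairs get -1.
        b = [[None if w[i][j] < INF else -1 for j in range(n)] for i in range(n)]
        for k in range(n):
            for i in range(n):
                for j in range(n):
                    if d[i][k] + d[k][j] < d[i][j]:
                        d[i][j] = d[i][k] + d[k][j]
                        b[i][j] = k            # best i -> j path goes through k
        return d, b

    def path(b, i, j):
        # Splice the i -> k and k -> j subpaths, exactly as PrintFW does.
        if b[i][j] == -1:
            return None                        # no path exists
        if b[i][j] is None:
            return [i, j]                      # direct edge
        k = b[i][j]
        return path(b, i, k)[:-1] + path(b, k, j)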