Dynamic Programming 1


Dynamic Programming
Jie Wang
University of Massachusetts Lowell, Department of Computer Science

I thank Prof. Zachary Kissel of Merrimack College for sharing his lecture notes with me; some of the examples presented here are borrowed from his notes.

J. Wang (UMass Lowell) Dynamic Programming 1 / 36

What is DP?

DP is a technique for solving a recurrence relation consisting of many repeating subproblems.

Example: Compute the Fibonacci sequence F(i), where

    F(i) = 1,                 if i = 1 or i = 2,
           F(i-1) + F(i-2),   if i > 2.

A direct implementation:

    Fib(n)
        if n == 1 or n == 2
            return 1
        else
            return Fib(n-1) + Fib(n-2)
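As a quick sanity check, the pseudocode translates directly into Python (a sketch; the function name is ours):

```python
def fib(n):
    """Naive recursive Fibonacci, mirroring the Fib(n) pseudocode above."""
    if n == 1 or n == 2:
        return 1
    # Both branches recompute overlapping subproblems, hence exponential time.
    return fib(n - 1) + fib(n - 2)
```

For example, fib(10) returns 55, but even fib(40) is noticeably slow because of the repeated subproblems analyzed on the next slide.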

Time Analysis

    T(n) = Θ(1),                     if n ∈ {1, 2},
           T(n-1) + T(n-2) + Θ(1),   otherwise.

Since T(n-1) ≥ T(n-2),

    T(n) ≥ 2T(n-2) + 1
         = 2(2T(n-4) + 1) + 1
         = ...
         = 2^k T(n-2k) + Σ_{i=0}^{k-1} 2^i
         ≥ 2^{n/2 - 1},

so T(n) = Ω(2^{n/2}), i.e., exponential in n.

We Can Do Better

This exponential blowup is due to significant re-computation of repeating subproblems. Note that the i-th Fibonacci number is the sum of the previous two Fibonacci numbers. If we save (memoize) the previous numbers in a table, we can cut the running time down to Θ(n).

    Fib(n)
        if memo[n] ≠ NIL
            return memo[n]
        else
            if n ≤ 2
                v = 1
            else
                v = Fib(n-1) + Fib(n-2)
            memo[n] = v
            return v
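A Python sketch of the memoized version, using a dictionary as the memo table (names are ours):

```python
memo = {}

def fib(n):
    """Memoized Fibonacci: each F(i) is computed once, giving O(n) total work."""
    if n in memo:
        return memo[n]
    if n <= 2:
        v = 1
    else:
        v = fib(n - 1) + fib(n - 2)
    memo[n] = v   # save the answer so it is never recomputed
    return v
```

With memoization, fib(50) = 12586269025 returns instantly, whereas the naive version is hopeless in practice at that size.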

DP for Optimization

DP is often used to solve optimization problems. To do so, first check whether we can devise a recurrence relation for the problem. DP then yields a faster algorithm (polynomial vs. exponential) if the following two conditions hold:

1. The number of distinct subproblems is polynomial (a.k.a. overlapping subproblems).
2. The optimal solution contains within it optimal solutions to subproblems (a.k.a. optimal substructure).

We can then use memoization (storing the values of previously solved subproblems) to achieve a poly-time algorithm.

Tricks to Obtain a Recurrence Relation

Formulation. Interpret and formulate the problem by specifying parameters, where the parameters represent the size of the problem (original and subproblems).

Localization. For a given subproblem (specified by parameters), figure out how it is affected by smaller subproblems (also specified by parameters). Express this dependency as a recurrence relation. (Note: this is often the hardest part of DP.)

Rod Cutting

Input: A rod of length n inches and a table of prices p[1..n], where p[i] is the price of a rod of length i inches.

Output: The maximum revenue r_n obtainable by cutting up the rod and selling the pieces.

Formulation and Localization

Formulation: Let r(n) denote the maximum revenue that can be obtained by cutting a rod of length n. There are n subproblems. We want to compute r(n).

Localization: A rod can be cut into two pieces of length i and length n-i.

    r(n) = max_{1 ≤ i ≤ n} { p[i] + r(n-i) },   if n > 0,
           0,                                   otherwise.

Memoization

Use an array memo[1..n] to record the results of subproblems.

    CutRod(p[1..n], n)
        if memo[n] ≠ NIL
            v = memo[n]
        elseif n == 0
            v = 0
        else
            v = 0
            for i = 1 to n
                v = max{v, p[i] + CutRod(p, n-i)}
        memo[n] = v
        return v

Solution: Compute CutRod(p, n). Running time: Θ(n²).
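A Python sketch of memoized rod cutting; the price table in the usage line below is just an example (a rod of length i sells for p[i], with p[0] unused):

```python
def cut_rod(p, n, memo=None):
    """Memoized rod cutting: returns the maximum revenue r(n).

    p[i] is the price of a rod of length i; p[0] is a placeholder.
    """
    if memo is None:
        memo = {}
    if n in memo:
        return memo[n]
    if n == 0:
        v = 0
    else:
        # Try every size i for the first piece, as in the recurrence for r(n).
        v = max(p[i] + cut_rod(p, n - i, memo) for i in range(1, n + 1))
    memo[n] = v
    return v
```

For example, with p = [0, 1, 5, 8, 9, 10, 17, 17, 20], cut_rod(p, 8) returns 22 (cut into pieces of lengths 2 and 6).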

Constructing Cuts

To construct the cuts we use an array c[0..n] so that c[j] holds the optimal size i of the first piece to cut off when solving a subproblem of size j.

    CutRod(p[1..n], n)
        if memo[n] ≠ NIL
            v = memo[n]
        elseif n == 0
            v = 0
        else
            v = 0
            for i = 1 to n
                if v < p[i] + CutRod(p, n-i)
                    v = p[i] + CutRod(p, n-i)
                    s = i
            memo[n] = v
            c[n] = s
        return v

Coin-Collecting Robot

Input: An m × n board with no more than one coin in each cell, and a coin-collecting robot in the upper left-hand corner of the board. The movement of the robot is subject to the following three rules:

1. The robot must stop in the bottom right-hand corner of the board.
2. The robot can move at most one cell right or one cell down in a single step.
3. Visiting a cell that contains a coin results in the robot collecting the coin.

Output: The maximum number of coins the robot can collect and the path required to collect them.

(These problems come from Introduction to the Design and Analysis of Algorithms by Anany Levitin.)

Formulation and Localization

Formulation: Let CC(i, j) denote the largest number of coins the robot can collect when it visits cell (i, j). There are mn subproblems. We want to compute CC(m, n).

Localization: The robot can reach cell (i, j) in one of two possible ways:

1. From cell (i-1, j) (the cell above).
2. From cell (i, j-1) (the cell to the left).

Let c_ij = 1 if there is a coin in cell (i, j) and 0 otherwise. Then for each (i, j) with i ∈ [0, m] and j ∈ [0, n],

    CC(i, j) = max{CC(i-1, j), CC(i, j-1)} + c_ij,   if i > 0 and j > 0,
               0,                                    if i = 0 or j = 0.

Memoization

Row-wise memoization using memo[1..m, 1..n]:

    CC(i, j, C[1..m, 1..n])
        if memo[i, j] ≠ NIL
            v = memo[i, j]
        elseif 1 ≤ i ≤ m and 1 ≤ j ≤ n
            v = max{CC(i-1, j, C), CC(i, j-1, C)} + C[i, j]
        elseif i == 0 and 1 ≤ j ≤ n
            v = 0
        elseif j == 0 and 1 ≤ i ≤ m
            v = 0
        memo[i, j] = v
        return v

Solution: Compute CC(m, n). Running time: Θ(mn).

Constructing the Path

To find the path we work backward from cell (m, n).

- If memo[i-1, j] > memo[i, j-1], then the path to (i, j) came from above.
- If memo[i-1, j] < memo[i, j-1], then the path to (i, j) came from the left.
- If memo[i-1, j] = memo[i, j-1], then either direction is optimal (and thus valid). When a tie occurs we simply select either one of them.

We can recover the path in Θ(n + m) time.
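The whole robot problem fits in a short Python sketch. Here the table is filled iteratively (the bottom-up counterpart of the row-wise memoization above), and the path is recovered backward from (m, n) using the tie-breaking rule just described. Names are ours; cells are 1-indexed to match the slides.

```python
def collect_coins(C):
    """Coin-collecting robot. C is an m-by-n list of lists of 0/1 coin values.

    Returns (max coins, path), where the path is a list of 1-indexed cells
    from (1, 1) to (m, n).
    """
    m, n = len(C), len(C[0])
    # Row 0 and column 0 hold the base-case value 0.
    memo = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            memo[i][j] = max(memo[i - 1][j], memo[i][j - 1]) + C[i - 1][j - 1]
    # Walk backward from (m, n); on a tie, arbitrarily prefer "from above".
    path, i, j = [], m, n
    while i >= 1 and j >= 1:
        path.append((i, j))
        if i == 1 and j == 1:
            break
        if i == 1:
            j -= 1
        elif j == 1:
            i -= 1
        elif memo[i - 1][j] >= memo[i][j - 1]:
            i -= 1
        else:
            j -= 1
    path.reverse()
    return memo[m][n], path
```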

Subproblem Graphs

Subproblems relate to each other and form an acyclic graph. A subproblem graph is a graph G = (V, E):

- V is the set of subproblems, one vertex per subproblem.
- E is a set of edges such that (u, v) ∈ E iff subproblem u depends on subproblem v.

We can think of a dynamic program with memoization as a depth-first search of the subproblem graph.

Subproblem graphs aid in determining the running time of dynamic programs:

- The size of V is the total number of subproblems.
- The time to compute a solution to a subproblem is proportional to the out-degree of the subproblem.
- The total running time of a dynamic program is proportional to the number of vertices and edges.

Bottom Up

The bottom-up approach is a common technique to remove recursion. (Note: unlike the memoization approach, there is no recursion in the bottom-up approach.)

In the bottom-up approach we use a table and fill in values in the order of subproblem dependency; i.e., we perform a topological sort of the subproblem graph according to the size of the subproblems and solve the smaller subproblems first.

Filling in tables dynamically was considered "programming" before the computer era, hence the name dynamic programming.

Fibonacci Bottom Up

    Fib(n)
        Allocate F[1..n]
        for i = 1 to n
            if i ≤ 2
                F[i] = 1
            else
                F[i] = F[i-1] + F[i-2]
        return F[n]

Rod Cutting Bottom Up

    CutRod(p[1..n], n)
        Allocate r[0..n]
        r[0] = 0
        for j = 1 to n
            v = 0
            for i = 1 to j
                v = max{v, p[i] + r[j-i]}
            r[j] = v
        return r

Rod Cutting Bottom Up with Path Construction

    Extended-CutRod(p[1..n], n)
        Allocate r[0..n] and c[0..n]
        r[0] = 0
        for j = 1 to n
            v = 0
            for i = 1 to j
                if v < p[i] + r[j-i]
                    v = p[i] + r[j-i]
                    c[j] = i
            r[j] = v
        return r and c
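In Python, the extended bottom-up version and a routine that reads the cuts back off the c table might look like this (a sketch; the helper name print_cuts is ours, and the price table in the test is just an example):

```python
def extended_cut_rod(p, n):
    """Bottom-up rod cutting with cut reconstruction.

    p[i] is the price of a rod of length i (p[0] unused).
    Returns (r, c): r[j] is the best revenue for length j, and
    c[j] is the size of the first piece to cut off a rod of length j.
    """
    r = [0] * (n + 1)
    c = [0] * (n + 1)
    for j in range(1, n + 1):
        v = 0
        for i in range(1, j + 1):
            if v < p[i] + r[j - i]:
                v = p[i] + r[j - i]
                c[j] = i
        r[j] = v
    return r, c

def print_cuts(c, n):
    """Read the optimal piece sizes off the c table."""
    pieces = []
    while n > 0:
        pieces.append(c[n])
        n -= c[n]
    return pieces
```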

Longest Common Subsequence (LCS)

Let X and Y be two sequences over the same alphabet. A sequence σ is a subsequence of X if there exists a strictly increasing sequence of indices of X, i_1, i_2, ..., i_k, such that σ_j = x_{i_j} for all j = 1, 2, ..., k.

A subsequence σ is common to X and Y if σ is a subsequence of X and σ is a subsequence of Y.

LCS is defined as follows:

Input: Two sequences X = x_1, x_2, ..., x_m and Y = y_1, y_2, ..., y_n.
Output: The length of the longest common subsequence.

Formulation and Localization

Formulation: Let LCS(i, j) denote the length of the longest common subsequence of the prefixes X_i = x_1, x_2, ..., x_i and Y_j = y_1, y_2, ..., y_j. We want to compute LCS(m, n). There are mn subproblems.

Localization: LCS(i, j) is

1. based on LCS(i-1, j-1) if x_i = y_j, in which case x_i should be in the longest common subsequence;
2. based on LCS(i-1, j) if x_i ≠ y_j and x_i is not in the longest common subsequence;
3. based on LCS(i, j-1) if x_i ≠ y_j and y_j is not in the longest common subsequence.

    LCS(i, j) = 0,                               if i = 0 or j = 0,
                LCS(i-1, j-1) + 1,               if i, j > 0 and x_i = y_j,
                max{LCS(i-1, j), LCS(i, j-1)},   if i, j > 0 and x_i ≠ y_j.

Memoization

    LCS(i, j, X[1..m], Y[1..n])
        if memo[i, j] ≠ NIL
            v = memo[i, j]
        elseif i == 0 or j == 0
            v = 0
        elseif X[i] == Y[j]
            v = LCS(i-1, j-1, X, Y) + 1
        else
            v = max{LCS(i-1, j, X, Y), LCS(i, j-1, X, Y)}
        memo[i, j] = v
        return v

Running time: Θ(mn).
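A direct Python transcription of the memoized LCS-length computation (a sketch; strings stand in for sequences, with 0-based string indexing shifted to match the 1-based prefixes):

```python
def lcs_length(X, Y, i=None, j=None, memo=None):
    """Memoized LCS length on the prefixes X[:i] and Y[:j]."""
    if i is None:
        i, j, memo = len(X), len(Y), {}
    if (i, j) in memo:
        return memo[(i, j)]
    if i == 0 or j == 0:
        v = 0
    elif X[i - 1] == Y[j - 1]:          # x_i == y_j
        v = lcs_length(X, Y, i - 1, j - 1, memo) + 1
    else:
        v = max(lcs_length(X, Y, i - 1, j, memo),
                lcs_length(X, Y, i, j - 1, memo))
    memo[(i, j)] = v
    return v
```

For example, lcs_length("ABCBDAB", "BDCABA") returns 4 (one LCS is "BCBA").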

Bottom Up for LCS Construction

    LCS-Length(X, Y)
        Initialization(X, Y, c, b)
        for i = 1 to m
            for j = 1 to n
                if x_i == y_j
                    c[i, j] = c[i-1, j-1] + 1
                    b[i, j] = 1
                elseif c[i-1, j] ≥ c[i, j-1]
                    c[i, j] = c[i-1, j]
                    b[i, j] = 2
                else
                    c[i, j] = c[i, j-1]
                    b[i, j] = 3
        return c and b

    Initialization(X, Y, c, b)
        m = X.length
        n = Y.length
        Allocate b[1..m, 1..n] and c[0..m, 0..n]
        for i = 1 to m
            c[i, 0] = 0
        for j = 0 to n
            c[0, j] = 0

LCS Construction

    Print-LCS(b, X, i, j)
        if i == 0 or j == 0
            return
        if b[i, j] == 1
            Print-LCS(b, X, i-1, j-1)
            print x_i
        elseif b[i, j] == 2
            Print-LCS(b, X, i-1, j)
        else
            Print-LCS(b, X, i, j-1)

Note: This method returns the rightmost LCS; that is, it finds the first LCS from the right.

Questions: (1) How to return the leftmost LCS? (2) How to return all LCSs?
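The bottom-up construction and the print-out can be combined in one Python sketch. Instead of storing the b table of arrows, the backward walk simply re-applies the same tie-breaking comparisons, which yields the same "rightmost" LCS:

```python
def lcs(X, Y):
    """Bottom-up LCS: fill the c table of lengths, then walk it backward
    to recover one longest common subsequence."""
    m, n = len(X), len(Y)
    c = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if X[i - 1] == Y[j - 1]:
                c[i][j] = c[i - 1][j - 1] + 1
            elif c[i - 1][j] >= c[i][j - 1]:
                c[i][j] = c[i - 1][j]
            else:
                c[i][j] = c[i][j - 1]
    # Retrace the same decisions from (m, n); no b table is needed.
    out, i, j = [], m, n
    while i > 0 and j > 0:
        if X[i - 1] == Y[j - 1]:
            out.append(X[i - 1])
            i, j = i - 1, j - 1
        elif c[i - 1][j] >= c[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return "".join(reversed(out))
```

On the classic example, lcs("ABCBDAB", "BDCABA") returns "BCBA".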

Resizing Images by Seam Carving

Imagine you want to resize an image by changing its width (with its height fixed) without making it look distorted. Working from the top row of pixels to the bottom, remove pixels that are neighbors of the pixels removed below them, subject to the constraint that their colors are similar to those of the neighbors in question. This top-row-to-bottom-row sequence of pixels is referred to as a seam.

This idea is called seam carving and comes from a SIGGRAPH '07 paper:

Avidan, Shai, and Ariel Shamir. "Seam carving for content-aware image resizing." ACM Transactions on Graphics (TOG), Vol. 26, No. 3. ACM, 2007.

A video can be found at https://www.youtube.com/watch?v=6ncijxtlugc

Definition

Suppose that we have:

- An m × n matrix A of RGB values. Each location (i, j) in the array is called a pixel. ("Pixel" is short for "picture element.")
- An m × n matrix D of real values called disruption values (also referred to as energy). Removing pixels with lower disruption values (i.e., lower energy) is better.

(Remarks: (1) The matrix D is computed from A using an energy measure, such as the dual-gradient energy measure (see the next slide). (2) No single measure works well for all images.)

Formalize the seam carving problem as follows:

Input: An m × n matrix A that represents the image and the corresponding matrix D representing the disruption measure.
Output: The total disruption measure of the least disruptive seam (and later the seam itself).

The Dual Gradient Energy Measure

Let (a, b) be a pixel with RGB value (r_{a,b}, g_{a,b}, b_{a,b}).

Let R_x(a, b), G_x(a, b), and B_x(a, b) denote, respectively, the absolute values of the differences of the red, green, and blue components between pixel (a+1, b) and pixel (a-1, b). For example, R_x(a, b) = |r_{a+1,b} - r_{a-1,b}|.

Let R_y(a, b), G_y(a, b), and B_y(a, b) denote, respectively, the absolute values of the differences of the red, green, and blue components between pixel (a, b+1) and pixel (a, b-1). For example, G_y(a, b) = |g_{a,b+1} - g_{a,b-1}|.

To calculate the energy of pixels at the border of the image, replace the missing pixel with the pixel from the opposite edge. For example, B_x(0, 0) = |b_{1,0} - b_{m-1,0}|.

Define D(a, b) = Δ_x²(a, b) + Δ_y²(a, b), where

    Δ_x²(a, b) = R_x²(a, b) + G_x²(a, b) + B_x²(a, b),
    Δ_y²(a, b) = R_y²(a, b) + G_y²(a, b) + B_y²(a, b).

Dual Gradient Energy Example

Let

    A = (255, 100, 55)   (255, 100, 150)   (255, 100, 255)
        (255, 150, 55)   (255, 150, 153)   (255, 155, 255)
        (255, 200, 55)   (255, 200, 150)   (255, 200, 255)

There are 8 border pixels and one non-border pixel, (1, 1).

    R_x(1, 1) = |255 - 255| = 0,
    G_x(1, 1) = |200 - 100| = 100,
    B_x(1, 1) = |150 - 150| = 0,
    Δ_x²(1, 1) = 100² = 10000.

Likewise, we can compute Δ_y²(1, 1) = 5² + 200² = 40025, so D(1, 1) = 10000 + 40025 = 50025.

For a border pixel, D(0, 0) = 0² + 50² + 0² + 0² + 0² + 105² = 2500 + 11025 = 13525.
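The energy measure is easy to check in Python. The sketch below (names are ours) computes D for an image stored as a list of rows of (r, g, b) tuples, using wraparound at the borders as described; on the 3 × 3 example above it reproduces D(1, 1) = 50025 and D(0, 0) = 13525.

```python
def dual_gradient_energy(A):
    """Dual-gradient disruption matrix D for an m-by-n image A of (r, g, b)
    tuples; border pixels wrap around to the opposite edge via % indexing."""
    m, n = len(A), len(A[0])
    D = [[0] * n for _ in range(m)]
    for a in range(m):
        for b in range(n):
            up, down = A[(a - 1) % m][b], A[(a + 1) % m][b]
            left, right = A[a][(b - 1) % n], A[a][(b + 1) % n]
            # Squaring makes the absolute value unnecessary.
            dx2 = sum((down[k] - up[k]) ** 2 for k in range(3))
            dy2 = sum((right[k] - left[k]) ** 2 for k in range(3))
            D[a][b] = dx2 + dy2
    return D
```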

Formulation and Localization

Formulation: Let S(i, j) denote the disruption of the optimal seam rooted at pixel (i, j). In the worst case an m × n image requires solving mn subproblems. We only need to compute each subseam once.

Localization: The minimum disruption value for any subseam rooted at (i, j) can be determined from one of its (at most) three lower neighbors in the row below the current row. Idea: use the minimum-disruption seam rooted at (i+1, k) for k = j-1, j, or j+1, and add the disruption value of the current pixel. There are four distinct cases we need to consider when trying to determine S(i, j).

Localization 1

The current pixel (i, j) is in the bottom row of A (i = m). This is the base case: S(i, j) is just the disruption value at location (i, j).

The current pixel (i, j) is in the rightmost column and therefore only has the following lower neighbors: below left, (i+1, j-1), or directly below, (i+1, j).

Localization 2

The current pixel (i, j) is in the leftmost column and therefore only has the following lower neighbors: below right, (i+1, j+1), or directly below, (i+1, j).

Localization 3

The current pixel (i, j) is somewhere between the leftmost and rightmost columns of a given row. It has three lower neighbors: below left (i+1, j-1), directly below (i+1, j), and below right (i+1, j+1).

Recurrence

The recurrence relation that handles all these cases is as follows:

    S(i, j) = d_ij,                                              if i = m,
              d_ij + min{S(i+1, j), S(i+1, j-1)},                if j = n and i < m,
              d_ij + min{S(i+1, j), S(i+1, j+1)},                if j = 1 and i < m,
              d_ij + min{S(i+1, j), S(i+1, j-1), S(i+1, j+1)},   otherwise.

Solution: Compute min_{1 ≤ j ≤ n} S(1, j).

We will assume that there exists a global memo pad memo[1..m, 1..n] with every entry initialized to NIL. We will require two algorithms: RSeam, which returns the disruption of the best seam rooted at location (i, j), and OSeam, which returns the best seam in the image (the solution).

Memoization

    RSeam(i, j, D[1..m, 1..n])
        if memo[i, j] ≠ NIL
            v = memo[i, j]
        elseif i == m
            v = D[i, j]
        elseif i < m and j == n
            v = D[i, j] + min{RSeam(i+1, j, D), RSeam(i+1, j-1, D)}
        elseif i < m and j == 1
            v = D[i, j] + min{RSeam(i+1, j, D), RSeam(i+1, j+1, D)}
        else
            v = D[i, j] + min{RSeam(i+1, j-1, D), RSeam(i+1, j, D), RSeam(i+1, j+1, D)}
        memo[i, j] = v
        return v

Running time: Θ(mn) in total, since each of the mn subproblems is solved only once.

Optimal Seam Construction

    OSeam(D[1..m, 1..n])
        min = ∞
        for j = 1 to n
            v = RSeam(1, j, D)
            if v < min
                min = v
        return min

Problems to think about: (1) How to remove a horizontal seam? (2) How to insert a vertical seam? (3) How to insert a horizontal seam?
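RSeam and OSeam, plus the seam recovery left implicit above, fit together in a short Python sketch (0-indexed; names are ours). Recovering the seam needs no extra table: once the memo pad is filled, greedily following the cheapest valid neighbor downward retraces an optimal seam.

```python
def best_seam(D):
    """Vertical seam carving on an m-by-n disruption matrix D (0-indexed).

    Returns (total disruption, seam), where the seam lists one column index
    per row, top to bottom.
    """
    m, n = len(D), len(D[0])
    memo = {}

    def rseam(i, j):
        # Disruption of the best seam rooted at (i, j), running down to row m-1.
        if (i, j) in memo:
            return memo[(i, j)]
        if i == m - 1:
            v = D[i][j]
        else:
            v = D[i][j] + min(rseam(i + 1, k)
                              for k in (j - 1, j, j + 1) if 0 <= k < n)
        memo[(i, j)] = v
        return v

    total = min(rseam(0, j) for j in range(n))
    # Retrace the seam by following the cheapest child in the memo pad.
    j = min(range(n), key=lambda k: memo[(0, k)])
    seam = [j]
    for i in range(1, m):
        j = min((k for k in (j - 1, j, j + 1) if 0 <= k < n),
                key=lambda k: memo[(i, k)])
        seam.append(j)
    return total, seam
```

On a matrix whose cheap cells run diagonally, e.g. [[1,9,9],[9,1,9],[9,9,1]], the seam follows the diagonal with total disruption 3.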