Unit-5 Dynamic Programming 2016

5. Dynamic Programming: Overview; Applications - shortest path in a graph, matrix multiplication, travelling salesman problem, Fibonacci series. (20%, 12)

Origin: Richard Bellman, 1957. "Programming" referred to a series of choices; "dynamic" means the choices are made on the fly, not all at the beginning.

Dynamic programming (usually referred to as DP) is a very powerful technique for solving a particular class of problems. The idea is simple: if you have already solved a problem for a given input, save the result for future reference, so that you avoid solving the same problem again. If the given problem can be broken up into smaller subproblems, and these smaller subproblems can in turn be divided into still smaller ones, and in this process you observe some overlapping subproblems, then dynamic programming can be applied. In addition, the optimal solutions to the subproblems must contribute to the optimal solution of the given problem; this is referred to as the optimal substructure property.

Dynamic programming is thus a method for solving complex problems by first breaking them up into subproblems. It can be used when a given problem splits into overlapping subproblems and the problem has optimal substructure. It often solves in time O(n^2) or O(n^3) problems for which a naive approach would take exponential time. Dynamic programming solves problems recursively and is applicable when the computations of the subproblems overlap. It is typically implemented using tabulation, but can also be implemented using memoization. There are two ways of doing this:

1.) Top-Down: Start solving the given problem by breaking it down. If you see that a subproblem has already been solved, just return the saved answer; if it has not been solved, solve it and save the answer. This is usually easy to think of and very intuitive, and is referred to as Memoization. With memoization you maintain a map of already solved subproblems. You work "top down" in the sense that you solve the "top" problem first, which typically recurses down to solve the subproblems.

2.) Bottom-Up: Analyze the problem, determine the order in which the subproblems are solved, and start solving from the trivial subproblems up towards the given problem. In this process,

it is guaranteed that the subproblems are solved before the problem itself. This is referred to as Tabulation. When you solve a dynamic programming problem using tabulation you solve the problem "bottom up", i.e., by solving all related subproblems first, typically by filling up an n-dimensional table. Based on the results in the table, the solution to the "top" / original problem is then computed.

Principle of Optimality (Optimal Substructure Property)

A problem is said to have optimal substructure if an optimal solution can be constructed efficiently from optimal solutions of its subproblems. This property is used to determine whether dynamic programming and greedy algorithms are applicable to a problem. Suppose that in solving a problem we have to make a sequence of decisions D1, D2, ..., Dn. If this sequence is optimal, then the last k decisions, 1 <= k <= n, must also be optimal.

Example (the shortest path problem): if i, i1, i2, ..., j is a shortest path from i to j, then i1, i2, ..., j must be a shortest path from i1 to j.

Difference Between Dynamic Programming and Divide and Conquer:

Dynamic programming is similar to the divide-and-conquer approach in that the solution of a large problem depends on previously obtained solutions to easier subproblems. The significant difference, however, is that dynamic programming permits subproblems to overlap; by overlap we mean that a subproblem can be used in the solutions of two different larger problems. In contrast, the divide-and-conquer approach creates subproblems that are completely separate and can be solved independently. If the problem to be solved is drawn as the root of a tree whose children are easier subproblems, then the leaves of the tree are trivial subproblems that can be solved directly; in D.P. these leaves are often the input to the algorithm. The primary difference between divide and conquer and D.P. is clear: subproblems in divide and conquer do not interact, while in D.P. they might.

To summarize: the subproblems of the divide-and-conquer approach are independent, while in dynamic programming they interact. A second difference is that while the divide-and-conquer approach is recursive and top-down, D.P. is best thought of as bottom-up.

Example: the following computational problems can be solved using the dynamic programming approach:
- Fibonacci number series
- Knapsack problem
- Tower of Hanoi
- All-pairs shortest paths (Floyd-Warshall)
- Shortest paths (Dijkstra)
- Project scheduling

Dynamic programming can be used in both a top-down and a bottom-up manner. Most of the time, looking up a previously computed solution is cheaper than re-computing it in terms of CPU cycles, which reduces the time complexity.

Application 1: Fibonacci Series

Consider the Fibonacci series: 0, 1, 1, 2, 3, 5, 8, 13, 21, ...

F(0) = 0; F(1) = 1; F(N) = F(N-1) + F(N-2)

Pseudo-code for a simple recursive function:

    fib(int n)
    {
        if (n == 0) return 0;
        if (n == 1) return 1;
        return fib(n-1) + fib(n-2);
    }

Using the above function, if we want to compute fib(6), we end up making a tree of calls which computes the function on the same value several times:

    fib(6) = fib(5) + fib(4)
           = (fib(4) + fib(3)) + (fib(3) + fib(2))
           = ((fib(3) + fib(2)) + (fib(2) + fib(1))) + ((fib(2) + fib(1)) + (fib(1) + fib(0)))
           = (((fib(2) + fib(1)) + (fib(1) + fib(0))) + ((fib(1) + fib(0)) + fib(1))) + (((fib(1) + fib(0)) + fib(1)) + (fib(1) + fib(0)))
           = ((((fib(1) + fib(0)) + fib(1)) + (fib(1) + fib(0))) + ((fib(1) + fib(0)) + fib(1))) + (((fib(1) + fib(0)) + fib(1)) + (fib(1) + fib(0)))

As you can see:
fib(5) has been computed once.
fib(4) has been computed 2 times.
fib(3) has been computed 3 times.
fib(2) has been computed 5 times.

Example: Write a function int fib(int n) that returns F(n). For example, if n = 0, then fib() should return 0; if n = 1, it should return 1; for n > 1, it should return F(n-1) + F(n-2).

Method 1 (Use recursion)

A simple method that is a direct recursive implementation of the mathematical recurrence relation given above.

    #include <stdio.h>

    int fib(int n)
    {
        if (n <= 1)
            return n;
        return fib(n-1) + fib(n-2);
    }

    int main()
    {
        int n = 9;
        printf("%d", fib(n));
        getchar();
        return 0;
    }

Time Complexity: T(n) = T(n-1) + T(n-2), which is exponential.
Extra Space: O(n) if we consider the function call stack size, otherwise O(1).

Recursion tree for fib(5):

                           fib(5)
                         /        \
                   fib(4)          fib(3)
                  /      \         /     \
            fib(3)     fib(2)   fib(2)  fib(1)
            /    \     /   \     /   \
       fib(2) fib(1) fib(1) fib(0) fib(1) fib(0)
       /    \
    fib(1) fib(0)

Method 2 (Use Dynamic Programming)

We can avoid the repeated work done in Method 1 by storing the Fibonacci numbers calculated so far.

    #include <stdio.h>

    int fib(int n)
    {
        /* Declare an array to store Fibonacci numbers
           (size n+2 so the assignment to f[1] is safe even when n = 0). */
        int f[n+2];
        int i;

        /* 0th and 1st numbers of the series are 0 and 1. */
        f[0] = 0;
        f[1] = 1;

        for (i = 2; i <= n; i++)
        {
            /* Add the previous 2 numbers in the series and store the result. */
            f[i] = f[i-1] + f[i-2];
        }

        return f[n];
    }

    int main()
    {

        int n = 9;
        printf("%d", fib(n));
        getchar();
        return 0;
    }

Time Complexity: O(n)
Extra Space: O(n)
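The notes describe the top-down (memoization) approach but only show the bottom-up table above. The following is a minimal C sketch of a memoized version, under the assumption of a fixed upper bound MAXN on n and a lookup array initialized to -1; the names memo and fib_memo are illustrative, not from the original notes.

    #include <stdio.h>

    #define MAXN 45              /* assumed bound; fib(45) still fits in an int */

    int memo[MAXN + 1];          /* memo[i] = -1 means fib(i) not computed yet  */

    int fib_memo(int n)
    {
        if (n <= 1)
            return n;                    /* base cases F(0) = 0, F(1) = 1       */
        if (memo[n] != -1)
            return memo[n];              /* already solved: return saved answer */
        memo[n] = fib_memo(n-1) + fib_memo(n-2);  /* solve once and save        */
        return memo[n];
    }

    int main(void)
    {
        int i, n = 9;
        for (i = 0; i <= MAXN; i++)
            memo[i] = -1;                /* mark all subproblems as unsolved    */
        printf("%d\n", fib_memo(n));     /* prints 34                           */
        return 0;
    }

The time and extra space are again O(n); the difference from Method 2 is only that the table is filled on demand by the recursion rather than in a fixed bottom-up order.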

Travelling Salesman Problem (TSP):

Given a set of cities and the distance between every pair of cities, the problem is to find the shortest possible route that visits every city exactly once and returns to the starting point. The TSP can be described as: find a tour of N cities (assuming all cities to be visited are reachable); the tour should (a) visit every city just once, (b) return to the starting point, and (c) be of minimum distance.

Algorithm:

Number the cities 1, 2, ..., N, assume we start at city 1, and let the distance between city i and city j be d(i,j). Consider subsets S of {2, ..., N} of cities and, for c in S, let D(S, c) be the minimum distance, starting at city 1, visiting all cities in S and finishing at city c.

First phase: if S = {c}, then D(S, c) = d(1,c). Otherwise:

    D(S, c) = min over x in S, x != c, of ( D(S \ {c}, x) + d(x,c) )

Second phase: the minimum distance for a complete tour of all cities is

    M = min over c in {2, ..., N} of ( D({2, ..., N}, c) + d(c,1) )

A tour n1, ..., nN is of minimum distance exactly when it satisfies M = D({2, ..., N}, nN) + d(nN, 1).

Pseudocode:

    function TSP(G, n)
        for k := 2 to n do
            C({1, k}, k) := d(1,k)
        end for
        for s := 3 to n do                                   // s = subset size
            for all subsets S of {1, 2, ..., n} with |S| = s and 1 in S do
                for all k in S, k != 1 do
                    C(S, k) := min over m != 1, m != k, m in S of [ C(S \ {k}, m) + d(m,k) ]
                end for
            end for
        end for
        opt := min over k != 1 of [ C({1, 2, 3, ..., n}, k) + d(k,1) ]
        return opt
    end

Example:

Distance matrix (the entry in row i, column j is the cost of the edge i -> j, as used in the calculation below):

          1    2    3    4
     1    0    1   15    6
     2    2    0    7    3
     3    9    6    0   12
     4   10    4    8    0

Function descriptions:
g(x, S)  - minimum cost of a path that starts at 1, ends at vertex x, and passes through the vertices in set S exactly once
c(x,y)   - cost of the edge that ends at x coming from y
p(x, S)  - the second-to-last vertex on the best path to x through set S; used for reconstructing the TSP tour at the end

k = 0, empty set:
Set {}:
    g(2, {}) = c(2,1) = 1
    g(3, {}) = c(3,1) = 15
    g(4, {}) = c(4,1) = 6

k = 1, consider sets of 1 element:
Set {2}:
    g(3, {2}) = c(3,2) + g(2, {}) = c(3,2) + c(2,1) = 7 + 1 = 8      p(3, {2}) = 2
    g(4, {2}) = c(4,2) + g(2, {}) = c(4,2) + c(2,1) = 3 + 1 = 4      p(4, {2}) = 2
Set {3}:
    g(2, {3}) = c(2,3) + g(3, {}) = c(2,3) + c(3,1) = 6 + 15 = 21    p(2, {3}) = 3
    g(4, {3}) = c(4,3) + g(3, {}) = c(4,3) + c(3,1) = 12 + 15 = 27   p(4, {3}) = 3
Set {4}:
    g(2, {4}) = c(2,4) + g(4, {}) = c(2,4) + c(4,1) = 4 + 6 = 10     p(2, {4}) = 4
    g(3, {4}) = c(3,4) + g(4, {}) = c(3,4) + c(4,1) = 8 + 6 = 14     p(3, {4}) = 4

k = 2, consider sets of 2 elements:
Set {2,3}:
    g(4, {2,3}) = min { c(4,2) + g(2, {3}), c(4,3) + g(3, {2}) } = min { 3+21, 12+8 } = min { 24, 20 } = 20
    p(4, {2,3}) = 3
Set {2,4}:

    g(3, {2,4}) = min { c(3,2) + g(2, {4}), c(3,4) + g(4, {2}) } = min { 7+10, 8+4 } = min { 17, 12 } = 12
    p(3, {2,4}) = 4
Set {3,4}:
    g(2, {3,4}) = min { c(2,3) + g(3, {4}), c(2,4) + g(4, {3}) } = min { 6+14, 4+27 } = min { 20, 31 } = 20
    p(2, {3,4}) = 3

Length of an optimal tour:
    f = g(1, {2,3,4}) = min { c(1,2) + g(2, {3,4}), c(1,3) + g(3, {2,4}), c(1,4) + g(4, {2,3}) }
                      = min { 2 + 20, 9 + 12, 10 + 20 } = min { 22, 21, 30 } = 21

p(1, {2,3,4}) = 3   (the city visited just before returning to 1)
p(3, {2,4})   = 4   (the city visited just before 3)
p(4, {2})     = 2   (the city visited just before 4)

Reading the tour off the p values, the optimal TSP tour is: 1 -> 2 -> 4 -> 3 -> 1.

The worst-case time complexity of this algorithm is O(2^n n^2) and the space complexity is O(2^n n).
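As an illustration, here is a small C sketch (not from the original notes) of the same dynamic programming recurrence using a bitmask over the cities, hard-coded to the 4-city cost matrix of the example above; the array names dist, g and p and the 0-based city numbering are assumptions made for this sketch.

    #include <stdio.h>

    #define N 4                          /* number of cities, numbered 0..3 here        */
    #define INF 1000000

    /* dist[i][j] = cost of travelling from city i+1 to city j+1 in the example above */
    int dist[N][N] = {
        { 0,  1, 15,  6},
        { 2,  0,  7,  3},
        { 9,  6,  0, 12},
        {10,  4,  8,  0}
    };

    int g[1 << N][N];                    /* g[S][x] = min cost path 0 -> ... -> x through set S */
    int p[1 << N][N];                    /* p[S][x] = second-to-last city on that best path     */

    int main(void)
    {
        int S, x, m, full = (1 << N) - 2;    /* full = all cities except city 0 (bit 0)         */
        for (S = 0; S < (1 << N); S++)
            for (x = 0; x < N; x++)
                g[S][x] = INF;
        for (x = 1; x < N; x++) {
            g[1 << x][x] = dist[0][x];       /* base case: go straight from city 0 to x         */
            p[1 << x][x] = 0;
        }
        for (S = 1; S <= full; S++) {        /* subsets in increasing bitmask order             */
            if (S & 1) continue;             /* city 0 never belongs to S                       */
            for (x = 1; x < N; x++) {
                if (!(S & (1 << x)) || g[S][x] == INF) continue;
                for (m = 1; m < N; m++) {    /* try extending the path from x to a new city m   */
                    if (S & (1 << m)) continue;
                    int cand = g[S][x] + dist[x][m];
                    if (cand < g[S | (1 << m)][m]) {
                        g[S | (1 << m)][m] = cand;
                        p[S | (1 << m)][m] = x;
                    }
                }
            }
        }
        int best = INF, last = 0;
        for (x = 1; x < N; x++)              /* close the tour by returning to city 0           */
            if (g[full][x] + dist[x][0] < best) {
                best = g[full][x] + dist[x][0];
                last = x;
            }
        printf("Length of an optimal tour: %d\n", best);   /* prints 21 for this matrix         */
        printf("Tour (in reverse): 1");
        for (S = full, x = last; x != 0; ) { /* walk the p table backwards to recover the tour  */
            printf(" <- %d", x + 1);
            int prev = p[S][x];
            S &= ~(1 << x);
            x = prev;
        }
        printf(" <- 1\n");
        return 0;
    }

Running the sketch prints the tour in reverse order (1 <- 3 <- 4 <- 2 <- 1), which corresponds to the forward tour 1 -> 2 -> 4 -> 3 -> 1 of length 21 found in the worked example.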

Matrix Chain Multiplication

Problem: Given a sequence of matrices, find the most efficient way to multiply these matrices together. The problem is not actually to perform the multiplications, but merely to decide in which order to perform them.

We have many options for multiplying a chain of matrices because matrix multiplication is associative. In other words, no matter how we parenthesize the product, the result will be the same. For example, if we had four matrices A, B, C and D, we would have:

    (ABC)D = (AB)(CD) = A(BCD) = ...

However, the order in which we parenthesize the product affects the number of simple arithmetic operations needed to compute the product, i.e., the efficiency. For example, suppose A is a 10 x 30 matrix, B is a 30 x 5 matrix, and C is a 5 x 60 matrix. Then:

    (AB)C = (10 x 30 x 5) + (10 x 5 x 60) = 1500 + 3000 = 4500 operations
    A(BC) = (30 x 5 x 60) + (10 x 30 x 60) = 9000 + 18000 = 27000 operations

Clearly the first parenthesization requires fewer operations.

Given an array p[] which represents the chain of matrices such that the i-th matrix Ai is of dimension p[i-1] x p[i], we need to write a function MatrixChainOrder() that returns the minimum number of multiplications needed to multiply the chain.

Examples:

Input: p[] = {40, 20, 30, 10, 30}
Output: 26000
There are 4 matrices of dimensions 40x20, 20x30, 30x10 and 10x30. Let the input matrices be A, B, C and D. The minimum number of multiplications is obtained by placing parentheses as (A(BC))D --> 20*30*10 + 40*20*10 + 40*10*30 = 26000.

Input: p[] = {10, 20, 30, 40, 30}
Output: 30000
There are 4 matrices of dimensions 10x20, 20x30, 30x40 and 40x30. Let the input matrices be A, B, C and D. The minimum number of multiplications is obtained by placing parentheses as ((AB)C)D --> 10*20*30 + 10*30*40 + 10*40*30 = 30000.

Input: p[] = {10, 20, 30}
Output: 6000
There are only two matrices, of dimensions 10x20 and 20x30, so there is only one way to multiply them, with cost 10*20*30 = 6000.

Dynamic Programming Approach

Step 1: The first step of the dynamic programming paradigm is to characterize the structure of an optimal solution. The chain matrix problem, like other dynamic programming problems, involves determining the optimal structure (in this case, a parenthesization). We would like to break the problem into subproblems whose solutions can be combined to obtain a solution to the global problem.

For convenience, let us adopt the notation A[i..j], where i <= j, for the result of evaluating the product Ai A(i+1) ... Aj. That is, A[i..j] = Ai A(i+1) ... Aj, where i <= j. It is easy to see that A[i..j] is a matrix of dimensions p[i-1] x p[j].

In parenthesizing the expression, we can consider the highest level of parenthesization. At this level we are simply multiplying two matrices together. That is, for some k, 1 <= k <= n-1,

    A[1..n] = A[1..k] A[k+1..n].

Therefore, the problem of determining the optimal sequence of multiplications is broken up into two questions:

Question 1: How do we decide where to split the chain? (What is k?)
Question 2: How do we parenthesize the subchains A[1..k] and A[k+1..n]?

The subchain problems can be solved by recursively applying the same scheme. To determine the best value of k, we consider all possible values of k and pick the best of them. Notice that this problem satisfies the principle of optimality, because once we decide where to break the sequence into a product, we must compute each subsequence optimally; that is, for the global problem to be solved optimally, the subproblems must be solved optimally as well. The key observation is that the parenthesization of the "prefix" subchain A[1..k] within the optimal parenthesization of A[1..n] must itself be an optimal parenthesization of A[1..k].

Step 2: The second step of the dynamic programming paradigm is to define the value of an optimal solution recursively in terms of the optimal solutions to subproblems. To help us keep track of solutions to subproblems, we will use a table and build it in a bottom-up manner. For 1 <= i <= j <= n, let m[i, j] be the minimum number of scalar multiplications needed to compute A[i..j]. The optimum cost can be described by the following recursive formulation.

Basis: If i = j, the problem is trivial; the subchain contains only one matrix, so the cost is 0 (there is nothing to multiply). Thus m[i, i] = 0 for i = 1, 2, ..., n.

Step: If i < j, we are asking about the product of the subchain A[i..j], and we take advantage of the structure of an optimal solution. We assume that the optimal parenthesization splits the product A[i..j], for some value of k with i <= k < j, as A[i..k] . A[k+1..j].

The optimum cost of computing A[i..k] is m[i, k], and the optimum cost of computing A[k+1..j] is m[k+1, j]. We may assume that these values have been computed previously and stored in our array. Since A[i..k] is a p[i-1] x p[k] matrix and A[k+1..j] is a p[k] x p[j] matrix, the time to multiply them is p[i-1] . p[k] . p[j]. This suggests the following recursive rule for computing m[i, j]:

    m[i, i] = 0
    m[i, j] = min over i <= k < j of ( m[i, k] + m[k+1, j] + p[i-1] . p[k] . p[j] )   for i < j

To keep track of optimal subsolutions, we store the value of k in a table s[i, j]. Recall that k is the place at which we split the product A[i..j] to get an optimal parenthesization; that is, s[i, j] = k such that m[i, j] = m[i, k] + m[k+1, j] + p[i-1] . p[k] . p[j].

Step 3: The third step of the dynamic programming paradigm is to compute the value of an optimal solution in a bottom-up fashion. It is straightforward to translate the above recurrence into a procedure. As remarked in the introduction, dynamic programming can be viewed as divide-and-conquer with a table; but in dynamic programming, as opposed to divide-and-conquer, we solve the subproblems sequentially. The trick is to solve them in the right order, so that whenever the solution to a subproblem is needed, it is already available in the table.

Consequently, in our problem the only tricky part is arranging the order in which to compute the values. In the process of computing m[i, j] we need to access the values m[i, k] and m[k+1, j] for each value of k lying between i and j. This suggests that we should organize our computation according to the number of matrices in the subchain. Let L = j - i + 1 denote the length of the subchain being multiplied. The subchains of length 1 (m[i, i]) are trivial. Then we build up by computing the subchains of length 2, 3, ..., n. The final answer is m[1, n].

Now set up the loop: observe that if a subchain of length L starts at position i, then j = i + L - 1. Since we would like to keep j in bounds, we want j <= n; this in turn means that we want i + L - 1 <= n, i.e., i <= n - L + 1. This gives us the closed interval for i: the loop for i runs from 1 to n - L + 1.

    Matrix-Chain(array p[0..n], int n) {
        Array s[1..n-1, 2..n];
        FOR i = 1 TO n DO                           // initialize
            m[i, i] = 0;
        FOR L = 2 TO n DO {                         // L = length of the subchain
            FOR i = 1 TO n - L + 1 DO {
                j = i + L - 1;
                m[i, j] = infinity;
                FOR k = i TO j - 1 DO {             // check all splits
                    q = m[i, k] + m[k + 1, j] + p[i-1] * p[k] * p[j];
                    IF (q < m[i, j]) {
                        m[i, j] = q;
                        s[i, j] = k;
                    }
                }
            }
        }
        return m[1, n] (final cost) and s (splitting markers);
    }
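For concreteness, the following is a minimal C translation of this procedure (not from the original notes). The function name MatrixChainOrder follows the problem statement above; a fixed maximum chain length MAX is an assumption made to keep the sketch simple.

    #include <stdio.h>
    #include <limits.h>

    #define MAX 20   /* assumed upper bound on the number of matrices for this sketch */

    /* p[0..n] holds the dimensions: matrix Ai is p[i-1] x p[i].
       Returns the minimum number of scalar multiplications needed to compute A[1..n]. */
    int MatrixChainOrder(int p[], int n)
    {
        int m[MAX][MAX];   /* m[i][j] = optimal cost of computing A[i..j]              */
        int s[MAX][MAX];   /* s[i][j] = best split point k of A[i..j] (for Step 4)     */
        int i, j, k, L, q;

        for (i = 1; i <= n; i++)
            m[i][i] = 0;                        /* a single matrix costs nothing       */

        for (L = 2; L <= n; L++) {              /* L = length of the subchain          */
            for (i = 1; i <= n - L + 1; i++) {
                j = i + L - 1;
                m[i][j] = INT_MAX;
                for (k = i; k <= j - 1; k++) {  /* check all splits                    */
                    q = m[i][k] + m[k+1][j] + p[i-1] * p[k] * p[j];
                    if (q < m[i][j]) {
                        m[i][j] = q;
                        s[i][j] = k;
                    }
                }
            }
        }
        return m[1][n];
    }

    int main(void)
    {
        int p[] = {40, 20, 30, 10, 30};         /* 4 matrices: 40x20, 20x30, 30x10, 10x30 */
        int n = sizeof(p) / sizeof(p[0]) - 1;
        printf("Minimum number of multiplications: %d\n", MatrixChainOrder(p, n));  /* 26000 */
        return 0;
    }

The s table is filled but not returned in this sketch; it is the split-marker table used in Step 4 below to recover the actual parenthesization.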

E.g., for n = 6 matrices A1, A2, A3, A4, A5, A6 with dimension sequence p = (30, 35, 15, 5, 10, 20, 25), the Matrix-Chain procedure fills an m-table; in the original figure the m-table is rotated so that the main diagonal runs horizontally, and only the main diagonal and the upper triangle are used.

Complexity Analysis

Clearly, the space complexity of this procedure is O(n^2), since the tables m and s require O(n^2) space. As far as the time complexity is concerned, a simple inspection of the for-loop structure gives the running time of the procedure: the for-loops are nested three deep, and each of them iterates at most n times (the indices L, i and k each take on at most n - 1 values). Therefore the running time of this procedure is O(n^3).

Extracting the Optimum Sequence

This is Step 4 of the dynamic programming paradigm, in which we construct an optimal solution from the computed information. The array s[i, j] can be used to extract the actual sequence. The basic idea is to keep a split marker in s[i, j] that indicates the best split, that is, which value of k leads to the minimum value of m[i, j]. s[i, j] = k tells us that the best way to multiply the subchain A[i..j] is to first multiply the subchain A[i..k], then multiply the subchain A[k+1..j], and finally multiply these two results together. Intuitively, s[i, j] tells us which multiplication to perform last. Note that we only need to store s[i, j] when we have at least two matrices, that is, when j > i.

The actual multiplication algorithm uses the s[i, j] values to determine how to split the current sequence. Assume that the matrices are stored in an array of matrices A[1..n], and that s[i, j] is global to this recursive procedure. The procedure returns a matrix.

    Mult(i, j) {
        if (i == j)
            return A[i];                // Basis
        else {
            k = s[i, j];

            X = Mult(i, k);             // X = A[i] ... A[k]
            Y = Mult(k + 1, j);         // Y = A[k+1] ... A[j]
            return X * Y;               // multiply matrices X and Y
        }
    }

Again, in the original figure the s-table is rotated so that the main diagonal runs horizontally, but in this table only the upper triangle is used (and not the main diagonal). In the example, the procedure computes the chain matrix product according to the parenthesization ((A1 (A2 A3)) ((A4 A5) A6)).

Shortest Path (Bellman-Ford):

Given a graph and a source vertex src in the graph, find the shortest paths from src to all vertices in the given graph. The graph may contain negative weight edges. Bellman-Ford is simpler than Dijkstra's algorithm and is well suited to distributed systems, but its time complexity is O(VE), which is more than Dijkstra's.

Algorithm. Following are the detailed steps.

Input: a graph and a source vertex src.
Output: shortest distances from src to all vertices. If there is a negative weight cycle, the shortest distances are not calculated and the negative weight cycle is reported.

1) Initialize the distances from the source to all vertices as infinite and the distance to the source itself as 0: create an array dist[] of size |V| with all values infinite except dist[src], which is 0.

2) Calculate the shortest distances. Repeat the following |V| - 1 times, where |V| is the number of vertices in the given graph:
   a) For each edge u-v: if dist[v] > dist[u] + weight of edge uv, then update dist[v] = dist[u] + weight of edge uv.

3) Report whether there is a negative weight cycle in the graph. For each edge u-v: if dist[v] > dist[u] + weight of edge uv, then the graph contains a negative weight cycle.

The idea of step 3 is that step 2 already guarantees the shortest distances if the graph does not contain a negative weight cycle. If we iterate through all edges one more time and still get a shorter path for some vertex, then there is a negative weight cycle.

How does this work? Like other dynamic programming problems, the algorithm calculates the shortest paths in a bottom-up manner. It first calculates the shortest distances for paths with at most one edge. Then it calculates the shortest paths with at most 2 edges, and so on. After the i-th iteration of the outer loop, the shortest paths with at most i edges are calculated. There can be at most |V| - 1 edges in any simple path, which is why the outer loop runs |V| - 1 times.

Example: Let the given source vertex be A. Initialize all distances as infinite, except the distance to the source itself, which is 0. The total number of vertices in the graph is 5, so all edges must be processed 4 times. Let all edges be processed in the following order: (B,E), (D,B), (B,D), (A,B), (A,C), (D,C), (B,C), (E,D).

We get the following distances when all edges are processed the first time: the first row shows the initial distances; the second row shows the distances after edges (B,E), (D,B), (B,D) and (A,B) are processed; the third row shows the distances after (A,C) is processed; the fourth row shows the distances after (D,C), (B,C) and (E,D) are processed. The first iteration is guaranteed to give all shortest paths which are at most 1 edge long.

We get the following distances when all edges are processed a second time (the last row shows the final values).

The second iteration is guaranteed to give all shortest paths which are at most 2 edges long. The algorithm then processes all edges 2 more times. In this example the distances are already minimized after the second iteration, so the third and fourth iterations do not update any distances.
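A minimal C sketch of the algorithm described above (not from the original notes): the struct Edge type, the 0-based vertex labels 0..4 standing for A..E, and the edge weights in main() are illustrative assumptions, since the example graph's weights are not reproduced in the text.

    #include <stdio.h>
    #include <limits.h>

    struct Edge { int u, v, w; };    /* directed edge u -> v with weight w */

    /* Bellman-Ford: fills dist[] with shortest distances from src.
       Returns 1 if a negative weight cycle is reachable from src, 0 otherwise. */
    int bellman_ford(struct Edge edges[], int E, int V, int src, int dist[])
    {
        int i, j;

        /* Step 1: initialize all distances as infinite, except the source. */
        for (i = 0; i < V; i++)
            dist[i] = INT_MAX;
        dist[src] = 0;

        /* Step 2: relax every edge |V| - 1 times. */
        for (i = 1; i <= V - 1; i++) {
            for (j = 0; j < E; j++) {
                int u = edges[j].u, v = edges[j].v, w = edges[j].w;
                if (dist[u] != INT_MAX && dist[u] + w < dist[v])
                    dist[v] = dist[u] + w;
            }
        }

        /* Step 3: one more pass; any further improvement means a negative cycle. */
        for (j = 0; j < E; j++) {
            int u = edges[j].u, v = edges[j].v, w = edges[j].w;
            if (dist[u] != INT_MAX && dist[u] + w < dist[v])
                return 1;
        }
        return 0;
    }

    int main(void)
    {
        /* A small illustrative graph with the edge set from the example
           (vertices 0..4 standing for A..E); the weights are assumed values. */
        struct Edge edges[] = {
            {0, 1, -1}, {0, 2, 4}, {1, 2, 3}, {1, 3, 2},
            {1, 4, 2}, {3, 2, 5}, {3, 1, 1}, {4, 3, -3}
        };
        int E = sizeof(edges) / sizeof(edges[0]);
        int V = 5, src = 0, dist[5], i;

        if (bellman_ford(edges, E, V, src, dist))
            printf("Graph contains a negative weight cycle\n");
        else
            for (i = 0; i < V; i++)
                printf("distance from %d to %d = %d\n", src, i, dist[i]);
        return 0;
    }

The check dist[u] != INT_MAX before relaxing an edge prevents overflow when the tail vertex has not yet been reached; otherwise the code follows steps 1-3 exactly as listed above.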