Chapter 11. Dynamic Programming


Summary of Chapter 11 from the book Programming Challenges: The Programming Contest Training Manual, by Steven S. Skiena and Miguel A. Revilla. Springer-Verlag New York, Inc., 2003.

Dynamic Programming:

Dynamic programming is both a mathematical optimization method and a computer programming method. In both contexts it refers to simplifying a complicated problem by breaking it down into simpler sub-problems in a recursive manner. In computer science, a problem that can be broken down recursively in this way is said to have optimal substructure.

There are two key attributes that a problem must have in order for dynamic programming to be applicable: optimal substructure, and overlapping sub-problems which are only slightly smaller. When the overlapping sub-problems are, say, half the size of the original problem, the strategy is called "divide and conquer" rather than "dynamic programming". This is why mergesort, quicksort, and finding all matches of a regular expression are not classified as dynamic programming problems.

Optimal substructure means that the solution to a given optimization problem can be obtained by combining optimal solutions to its sub-problems. Consequently, the first step towards devising a dynamic programming solution is to check whether the problem exhibits such optimal substructure. Such optimal substructures are usually described by means of recursion. For example, given a graph G = (V, E), the shortest path p from a vertex u to a vertex v exhibits optimal substructure: take any intermediate vertex w on this shortest path p. If p is truly the shortest path, then the path p1 from u to w and the path p2 from w to v are themselves shortest paths between the corresponding vertices (by the simple cut-and-paste argument described in CLRS). Hence, one can easily formulate the solution for finding shortest paths in a recursive manner, which is what the Bellman-Ford algorithm does.

As an example of a problem that is unlikely to exhibit optimal substructure, consider the problem of finding the cheapest airline ticket from Buenos Aires to Moscow. Even if that ticket involves stops in Miami and then London, we can't conclude that the cheapest ticket from Miami to Moscow stops in London, because the price at which an airline sells a multi-flight trip is usually not the sum of the prices at which it would sell the individual flights in the trip.

Overlapping sub-problems means that the space of sub-problems must be small; that is, any recursive algorithm solving the problem should solve the same sub-problems over and over, rather than generating new sub-problems. For example, consider the recursive formulation for generating the Fibonacci series: F(i) = F(i-1) + F(i-2), with base case F(1) = F(2) = 1. Then F(5) = F(4) + F(3), and F(4) = F(3) + F(2). Now F(3) is being solved in the recursive sub-trees of both F(5) and F(4). Even though the total number of sub-problems is actually small (only five of them), we end up solving the same problems over and over if we adopt a naive recursive solution such as this. Dynamic programming takes account of this fact and solves each sub-problem only once. Note that the sub-problems must be only "slightly" smaller (typically smaller by a constant additive factor) than the larger problem; when they are a multiplicative factor smaller, the problem is no longer classified as dynamic programming.

Dynamic programming can be carried out in either of two ways:

Top-down approach: This is the direct fall-out of the recursive formulation of any problem. If the solution to a problem can be formulated recursively using the solutions to its sub-problems, and if its sub-problems overlap, then one can easily memoize, that is, store the solutions to the sub-problems in a table. Whenever we attempt to solve a new sub-problem, we first check the table to see if it is already solved. If a solution has been recorded, we can use it directly; otherwise we solve the sub-problem and add its solution to the table.

Bottom-up approach: This is the more interesting case. Once we formulate the solution to a problem recursively in terms of its sub-problems, we can try reformulating the problem bottom-up.

In the bottom-up approach we try solving the smaller sub-problems first, and then use their solutions to build up solutions to bigger sub-problems. This is also usually done in tabular form, iteratively generating solutions to bigger and bigger sub-problems from the solutions to smaller ones. For example, if we already know the Fibonacci values F(41) and F(40), we can directly calculate the value of F(42).

Fibonacci sequence:

Suppose we have a simple map object, m, which maps each value of Fibonacci that has already been calculated to its result, and we modify our function to use it and update it. The resulting function requires only O(n) time instead of exponential time:

    var m := map(0 -> 0, 1 -> 1)
    function fib(n)
        if map m does not contain key n
            m[n] := fib(n-1) + fib(n-2)
        return m[n]

This technique of saving values that have already been calculated is called memoization; this is the top-down approach, since we first break the problem into sub-problems and then calculate and store values. The same idea in C++:

    #include <iostream>
    #include <map>
    using namespace std;

    map<int,int> mymap;  // memo table: n -> fib(n)

    int fib(int n)
    {
        if (n == 0 || n == 1)
            return n;
        map<int,int>::iterator it = mymap.find(n);
        if (it != mymap.end())              // already computed?
            return it->second;
        int f = fib(n-1) + fib(n-2);
        mymap.insert(pair<int,int>(n, f));  // store for later reuse
        return f;
    }

    int main()
    {
        int n = 8;
        cout << "fib(" << n << ") is " << fib(n) << endl;
        return 0;
    }

Sample output: fib(8) is 21

In the bottom-up approach we calculate the smaller values of Fibonacci first, then build larger values from them. This method also uses O(n) time, since it contains a loop that repeats n-1 times; however, it only takes constant, O(1), space, in contrast to the top-down approach, which requires O(n) space to store the map.

    function fib(n)
        var previousfib := 0, currentfib := 1
        if n = 0
            return 0
        else if n = 1
            return 1
        repeat n-1 times
            var newfib := previousfib + currentfib
            previousfib := currentfib
            currentfib := newfib
        return currentfib

Measuring potential partners:

In this example David Smith investigated the problem of finding the best potential partner from a fixed number of potential partners, assuming that each potential partner has a beauty score. Helen of Troy is fabled to have had "a face that could launch a thousand ships". There is an old joke that if one helen is the beauty needed to launch a thousand ships, then one millihelen is the beauty required to launch just one ship! Suppose we use this scale to measure each potential partner's score, from 0 millihelens up to a maximum of 1000 millihelens, with all values equally likely. We must now search for a rule which will make sure that the average score of the partner we choose is as large as possible.

The obvious way to look at this would be to think about what you would do when you met the first potential partner: you would compare this partner's score with what you might expect to get later on if you rejected them. Unfortunately, working out what you might expect later on is a complicated mixture of all the possible decisions you could make, and it becomes too much to work out. Instead, a mathematical way of thinking about it is to look at what you should do at the end, if you get to that stage. So you think about the best decision with the last potential partner (which you must choose), then the last but one, and so on. This way of tackling the problem backwards is dynamic programming.

The word "programming" in the name has nothing to do with writing computer programs. It is used to describe a set of rules which anyone can follow to solve a problem; they do not have to be written in a computer language.

The dynamic programming approach to the potential partner problem starts by thinking about what happens when faced with the last partner. If you need to make a decision about potential partner number N, then you must accept their score (which we'll call X_N) and live happily ever after!

When you encounter potential partner number N-1, all that you know are the value X_{N-1} and what you expect to get if you wait. You expect to get the average value of X_N, which is 500, because X_N varies uniformly from 0 to 1000. Common sense says that you take the better of these, so your rule will be: if X_{N-1} is more than 500, accept that potential partner; if not, go on to potential partner number N.

When you encounter potential partner number N-2, you know X_{N-2} and the average value of the score you will get by waiting. Half the time, waiting will mean that you accept potential partner number N-1, whose score will be between 500 and 1000, averaging 750; the other half of the time, you will pass over that potential partner and you will expect a score of 500. So waiting will give you an average score of 625, and taking the better of the two gives the rule: if X_{N-2} is more than 625, accept that potential partner; if not, go on to potential partner number N-1.

This is much simpler than starting with potential partner number 1, trying to think of all the possible sequences of decisions, and working forwards. For each potential partner that you meet, the best set of decisions afterwards gives a critical value for comparison: if the potential partner does better than it, choose that partner; if not, go on, even though the future is not certain. Each critical value is obtained from the previous one by the same averaging argument as above; a small program that computes the critical values for N = 10 is sketched below.
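
The critical values come from the averaging argument that produced 500 and 625: if E(k) is the expected score when k potential partners remain to be seen, then E(1) = 500 and E(k+1) = (E(k)^2 + 1000^2) / 2000, and you accept a partner exactly when their score beats the expected value of waiting. The short C++ sketch below computes the critical values for N = 10; the program and its variable names are ours, not from the original article.

    #include <cstdio>

    int main()
    {
        const double MAX_SCORE = 1000.0;   // scores are uniform on [0, 1000] millihelens
        const int N = 10;                  // number of potential partners
        double expected = MAX_SCORE / 2;   // E(1): expected score of the single last partner

        // k = number of partners still to come after the current one
        for (int k = 1; k < N; ++k) {
            printf("partner %2d of %d: accept if score > %.1f\n", N - k, N, expected);
            // If you wait, you accept a later score above 'expected', else keep waiting:
            // E(k+1) = (1000^2 + E(k)^2) / 2000
            expected = (MAX_SCORE * MAX_SCORE + expected * expected) / (2 * MAX_SCORE);
        }
        printf("partner %2d of %d: must accept\n", N, N);
        return 0;
    }

The thresholds it prints rise from 500 for the next-to-last partner to roughly 850 for the very first one.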

Longest Common Subsequence:

The Longest Common Subsequence (LCS) problem is as follows. We are given two strings: string S of length n, and string T of length m. Our goal is to find their longest common subsequence: the longest sequence of characters that appears left-to-right (but not necessarily in a contiguous block) in both strings. For example, consider:

    S = ABAZDC
    T = BACBAD

In this case, the LCS has length 4 and is the string ABAD. Another way to look at it is that we are finding a 1-1 matching between some of the letters in S and some of the letters in T such that none of the edges in the matching cross each other. This type of problem comes up all the time in genomics: given two DNA fragments, the LCS gives information about what they have in common and the best way to line them up.

Let's now solve the LCS problem using dynamic programming. As sub-problems we will look at the LCS of a prefix of S and a prefix of T, running over all pairs of prefixes. For simplicity, let's worry first about finding the length of the LCS, and then we can modify the algorithm to produce the actual sequence itself. So, here is the question: if LCS[i, j] is the length of the LCS of S[1..i] with T[1..j], how can we solve for LCS[i, j] in terms of the LCSs of smaller problems?

Case 1: what if S[i] ≠ T[j]? Then the desired subsequence has to ignore one of S[i] or T[j], so we have:

    LCS[i, j] = max(LCS[i-1, j], LCS[i, j-1]).

Case 2: what if S[i] = T[j]? Then the LCS of S[1..i] and T[1..j] might as well match them up: if some common subsequence instead matched S[i] to an earlier location in T, it could just as well be matched to T[j]. So, in this case we have:

    LCS[i, j] = 1 + LCS[i-1, j-1].

So we can just do two loops (over values of i and j), filling in the table using these rules, with the base case LCS[i, j] = 0 whenever i = 0 or j = 0. Here is the table of LCS lengths for the example above, with S along the leftmost column and T along the top row:

        B  A  C  B  A  D
    A   0  1  1  1  1  1
    B   1  1  1  2  2  2
    A   1  2  2  2  3  3
    Z   1  2  2  2  3  3
    D   1  2  2  2  3  4
    C   1  2  3  3  3  4

We just fill out this matrix row by row, doing a constant amount of work per entry, so this takes O(mn) time overall. The final answer (the length of the LCS of S and T) is in the lower-right corner.
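
Spelled out in C++, the two loops look as follows; this is a minimal sketch of the table-filling pass (lcsLength is our own name for it):

    #include <algorithm>
    #include <iostream>
    #include <string>
    #include <vector>
    using namespace std;

    // Length of the longest common subsequence of S and T,
    // filled in row by row exactly as in the table above.
    int lcsLength(const string& S, const string& T)
    {
        int n = S.size(), m = T.size();
        // LCS[i][j] = LCS length of S[1..i] and T[1..j]; row and column 0 are the base cases
        vector< vector<int> > LCS(n + 1, vector<int>(m + 1, 0));
        for (int i = 1; i <= n; ++i)
            for (int j = 1; j <= m; ++j)
                if (S[i-1] == T[j-1])                       // Case 2: match them up
                    LCS[i][j] = 1 + LCS[i-1][j-1];
                else                                        // Case 1: ignore one of the two
                    LCS[i][j] = max(LCS[i-1][j], LCS[i][j-1]);
        return LCS[n][m];
    }

    int main()
    {
        cout << lcsLength("ABAZDC", "BACBAD") << endl;  // prints 4
        return 0;
    }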

How can we now find the sequence? To find the sequence, we just walk backwards through the matrix, starting at the lower-right corner. If either the cell directly above or the cell directly to the left contains a value equal to the value in the current cell, then move to that cell (if both do, choose either one). If both such cells have values strictly less than the value in the current cell, then move diagonally up-left (this corresponds to applying Case 2; the sequence must have had a common element there) and output the associated character. This outputs the characters of the LCS in reverse order. For instance, running on the matrix above, this outputs DABA.

Here is another example of how to find the sequence. Compare the sequence XMJYAUZ with MZJAWXU. The table of LCS lengths is:

        M  Z  J  A  W  X  U
    X   0  0  0  0  0  1  1
    M   1  1  1  1  1  1  1
    J   1  1  2  2  2  2  2
    Y   1  1  2  2  2  2  2
    A   1  1  2  3  3  3  3
    U   1  1  2  3  3  3  4
    Z   1  2  2  3  3  3  4

Starting from cell (7, 7):

    Go to cell (6, 7) without any decrease in value, so Z is not in the sequence.
    Go to cell (5, 6) with a decrease in value, so U is in the sequence.
    Go to cell (4, 5) with a decrease in value, so A is in the sequence.
    Go to cell (3, 5) without any decrease in value, so Y is not in the sequence.
    Go to cell (2, 4) with a decrease in value, so J is in the sequence.
    Go to cell (1, 3) with a decrease in value, so M is in the sequence.

So the longest common subsequence is MJAU. A direct implementation of this backward walk is sketched below.
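
The walk can be coded directly on top of the filled table; a minimal sketch follows (lcsString is our own name for it, and on ties it prefers moving up, which is one of the allowed choices):

    #include <algorithm>
    #include <iostream>
    #include <string>
    #include <vector>
    using namespace std;

    // Recover one LCS of S and T by walking back from the lower-right corner.
    string lcsString(const string& S, const string& T)
    {
        int n = S.size(), m = T.size();
        vector< vector<int> > L(n + 1, vector<int>(m + 1, 0));
        for (int i = 1; i <= n; ++i)
            for (int j = 1; j <= m; ++j)
                L[i][j] = (S[i-1] == T[j-1]) ? 1 + L[i-1][j-1]
                                             : max(L[i-1][j], L[i][j-1]);
        string out;
        int i = n, j = m;
        while (i > 0 && j > 0) {
            if (L[i-1][j] == L[i][j])      i--;   // value above is equal: move up
            else if (L[i][j-1] == L[i][j]) j--;   // value to the left is equal: move left
            else { out += S[i-1]; i--; j--; }     // strictly less both ways: move diagonally
        }
        reverse(out.begin(), out.end());          // characters came out in reverse order
        return out;
    }

    int main()
    {
        cout << lcsString("ABAZDC", "BACBAD") << endl;    // prints ABAD
        cout << lcsString("XMJYAUZ", "MZJAWXU") << endl;  // prints MJAU
        return 0;
    }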

We have been looking at what is called bottom-up dynamic programming. Here is another way of thinking about dynamic programming that leads to basically the same algorithm, but viewed from the other direction. It is called top-down dynamic programming, and this view is often called memoizing. For example, for the LCS problem, using the analysis we had at the beginning, we might have produced the following exponential-time recursive program:

    LCS(S,n,T,m)
    {
        if (n==0 || m==0) return 0;
        if (S[n] == T[m])
            result = 1 + LCS(S,n-1,T,m-1);  // no harm in matching up
        else
            result = max( LCS(S,n-1,T,m), LCS(S,n,T,m-1) );
        return result;
    }

This algorithm runs in exponential time. In fact, if S and T use completely disjoint sets of characters (so that we never have S[n]==T[m]), then the number of times that LCS(S,1,T,1) is recursively called equals the number of paths from (n, m) down to (1, 1) in which each step decrements one of the two arguments, which is the binomial coefficient C(n+m-2, n-1), an exponentially large quantity. In the memoized version, we store results in a matrix so that any given set of arguments to LCS only produces new work (new recursive calls) once. The memoized version begins by initializing array[i][j] to unknown for all i, j, and then proceeds as follows:

    LCS(S,n,T,m)
    {
        if (n==0 || m==0) return 0;
        if (array[n][m] != unknown) return array[n][m];   // memo table hit
        if (S[n] == T[m])
            result = 1 + LCS(S,n-1,T,m-1);
        else
            result = max( LCS(S,n-1,T,m), LCS(S,n,T,m-1) );
        array[n][m] = result;                             // store for next time
        return result;
    }

Edit Distance:

When a spell checker encounters a possible misspelling, it looks in its dictionary for other words that are close by. What is the appropriate notion of closeness in this case?

A natural measure of the distance between two strings is the extent to which they can be aligned, or matched up. Technically, an alignment is simply a way of writing the strings one above the other. For instance, here are two possible alignments of SNOXY and SUNNY:

    S  -  N  O  X  Y        -  S  N  O  X  -  Y
    S  U  N  N  -  Y        S  U  N  -  -  N  Y
         cost 3                   cost 5

The "-" indicates a gap; any number of these can be placed in either string. The cost of an alignment is the number of columns in which the letters differ, and the edit distance between two strings is the cost of their best possible alignment. Do you see that there is no better alignment of SNOXY and SUNNY than the one shown here with a cost of 3?

Edit distance is so named because it can also be thought of as the minimum number of edits (insertions, deletions, and substitutions of characters) needed to transform the first string into the second. For instance, the alignment shown on the left corresponds to three edits: insert U, substitute O by N, and delete X. There are three natural types of changes:

Substitution: change a single character in pattern s to a different character in text t, such as changing "shot" to "spot".

Insertion: insert a single character into pattern s to help it match text t, such as changing "ago" to "agog".

Deletion: delete a single character from pattern s to help it match text t, such as changing "hour" to "our".

Properly posing the question of string similarity requires us to set the cost of each of these string transformation operations. Setting each operation to cost one step defines the edit distance between two strings; other cost values also yield interesting results.

But how can we compute the edit distance? We can define a recursive algorithm using the observation that the last characters of the two strings must either be matched, substituted, inserted, or deleted. Chopping off the characters involved in the last edit operation leaves a pair of smaller strings: there are three pairs of shorter strings after the last operation, corresponding to the strings remaining after a match/substitution, an insertion, or a deletion. If we knew the cost of editing the three pairs of smaller strings, we could decide which option leads to the best solution and choose that option accordingly. We can learn this cost through the magic of recursion.

In general, there are so many possible alignments between two strings that it would be terribly inefficient to search through all of them for the best one. Instead we turn to dynamic programming. When solving a problem by dynamic programming, the most crucial question is: what are the sub-problems? They must be chosen so as to have the following property: there is an ordering on the sub-problems, and a relation that shows how to solve a sub-problem given the answers to "smaller" sub-problems, that is, sub-problems that appear earlier in the ordering. It is then an easy matter to write down the algorithm: iteratively solve one sub-problem after the other, in order of increasing size.

Our goal is to find the edit distance between two strings x[1..m] and y[1..n]. What is a good sub-problem? Well, it should go part of the way toward solving the whole problem, so how about looking at the edit distance between some prefix of the first string, x[1..i], and some prefix of the second, y[1..j]? Call this sub-problem E(i, j). Our final objective, then, is to compute E(m, n). For example, with x = EXPONENTIAL and y = POLYNOMIAL, sub-problem E(4, 3) is the edit distance between the prefixes EXPO and POL.

For this to work, we need to somehow express E(i, j) in terms of smaller sub-problems. Let's see: what do we know about the best alignment between x[1..i] and y[1..j]? Well, its rightmost column can only be one of three things:

    x[i]        -         x[i]
     -    or   y[j]  or   y[j]

The first case incurs a cost of 1 for this particular column, and it remains to align x[1..i-1] with y[1..j]; but this is exactly the sub-problem E(i-1, j). In the second case, also with cost 1, we still need to align x[1..i] with y[1..j-1]; this is again another sub-problem, E(i, j-1). And in the final case, which costs either 1 (if x[i] ≠ y[j]) or 0 (if x[i] = y[j]), what's left is the sub-problem E(i-1, j-1). In short, we have expressed E(i, j) in terms of the three smaller sub-problems E(i-1, j), E(i, j-1), and E(i-1, j-1). We have no idea which of them is the right one, so we need to try them all and pick the best:

    E(i, j) = min{ 1 + E(i-1, j),  1 + E(i, j-1),  diff(i, j) + E(i-1, j-1) }

where for convenience diff(i, j) is defined to be 0 if x[i] = y[j] and 1 otherwise.

For instance, in computing the edit distance between EXPONENTIAL and POLYNOMIAL, sub-problem E(4, 3) corresponds to the prefixes EXPO and POL. The rightmost column of their best alignment must be one of the following:

    O        -        O
    -   or   L   or   L

Thus E(4, 3) = min{ 1 + E(3, 3), 1 + E(4, 2), 1 + E(3, 2) }.

The answers to all the sub-problems E(i, j) form a two-dimensional table, with the letters of x = EXPONENTIAL indexing the rows and the letters of y = POLYNOMIAL indexing the columns. In what order should these sub-problems be solved? Any order is fine, as long as E(i-1, j), E(i, j-1), and E(i-1, j-1) are handled before E(i, j). For instance, we could fill in the table one row at a time, from top row to bottom row, moving left to right across each row; or, alternatively, we could fill it in column by column. Either method ensures that by the time we get around to computing a particular table entry, all the other entries we need are already filled in.

With both the sub-problems and the ordering specified, we are almost done. There just remain the base cases of the dynamic programming, the very smallest sub-problems. In the present situation, these are E(0, .) and E(., 0), both of which are easily solved. E(0, j) is the edit distance between the 0-length prefix of x, namely the empty string, and the first j letters of y: clearly j. And similarly, E(i, 0) = i.

At this point, the algorithm for edit distance basically writes itself:

    for i = 0, 1, 2, ..., m:   E(i, 0) = i
    for j = 1, 2, ..., n:      E(0, j) = j
    for i = 1, 2, ..., m:
        for j = 1, 2, ..., n:
            E(i, j) = min{ E(i-1, j) + 1, E(i, j-1) + 1, E(i-1, j-1) + diff(i, j) }
    return E(m, n)

This procedure fills in the table row by row, and left to right within each row. Each entry takes constant time to fill in, so the overall running time is just the size of the table, O(mn). And in our example, the edit distance between EXPONENTIAL and POLYNOMIAL turns out to be 6.
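
Translated into C++, the pseudocode becomes the following minimal sketch (editDistance is our own name for it):

    #include <algorithm>
    #include <iostream>
    #include <string>
    #include <vector>
    using namespace std;

    // Edit distance between x and y, filling the (m+1) x (n+1) table row by row.
    int editDistance(const string& x, const string& y)
    {
        int m = x.size(), n = y.size();
        vector< vector<int> > E(m + 1, vector<int>(n + 1));
        for (int i = 0; i <= m; ++i) E[i][0] = i;   // base cases: delete i characters
        for (int j = 0; j <= n; ++j) E[0][j] = j;   // base cases: insert j characters
        for (int i = 1; i <= m; ++i)
            for (int j = 1; j <= n; ++j) {
                int diff = (x[i-1] == y[j-1]) ? 0 : 1;
                E[i][j] = min(min(E[i-1][j] + 1, E[i][j-1] + 1),
                              E[i-1][j-1] + diff);
            }
        return E[m][n];
    }

    int main()
    {
        cout << editDistance("EXPONENTIAL", "POLYNOMIAL") << endl;  // prints 6
        return 0;
    }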

The underlying diagram

Every dynamic program has an underlying dag structure: think of each node as representing a sub-problem, and each edge as a precedence constraint on the order in which the sub-problems can be tackled. Having nodes u1, ..., uk point to v means that sub-problem v can only be solved once the answers to u1, ..., uk are known.

In our present edit distance application, the nodes of the underlying diagram correspond to sub-problems, or equivalently to positions (i, j) in the table. Its edges are the precedence constraints, of the form (i-1, j) -> (i, j), (i, j-1) -> (i, j), and (i-1, j-1) -> (i, j). In fact, we can take things a little further and put weights on the edges so that the edit distances are given by shortest paths in the diagram. To see this, set all edge lengths to 1, except for the diagonal edges {(i-1, j-1) -> (i, j) : x[i] = y[j]}, whose length is 0. The final answer is then simply the distance between nodes s = (0, 0) and t = (m, n). On a shortest path, each move down is a deletion, each move right is an insertion, and each diagonal move is either a match or a substitution. One shortest path yields the alignment we found earlier:

    E  X  P  O  N  E  N  -  T  I  A  L
    -  -  P  O  L  Y  N  O  M  I  A  L

By altering the weights on this diagram, we can allow generalized forms of edit distance, in which insertions, deletions, and substitutions have different associated costs.

Knapsack:

During a robbery, a burglar finds much more loot than he had expected and has to decide what to take. His bag (or "knapsack") will hold a total weight of at most W kilograms. There are n items to pick from, of weights w1, ..., wn and dollar values v1, ..., vn. What is the most valuable combination of items he can fit into his bag? For instance, take W = 10 and

    Item  Weight  Value
      1      2      $9
      2      3     $14
      3      4     $16
      4      6     $30

There are two versions of this problem. If there are unlimited quantities of each item available, the optimal choice is to pick item 4 and two of item 1 (total: $48). On the other hand, if there is one of each item (the burglar has broken into an art gallery, say), then the optimal knapsack contains items 3 and 4 (total: $46). As we shall see, neither version of this problem is likely to have a polynomial-time algorithm. However, using dynamic programming they can both be solved in O(nW) time, which is reasonable when W is small, but is not polynomial, since the input size is proportional to log W rather than W.

Knapsack with repetition

Let's start with the version that allows repetition. As always, the main question in dynamic programming is: what are the sub-problems? In this case we can shrink the original problem in two ways: we can either look at smaller knapsack capacities w ≤ W, or we can look at fewer items (for instance, items 1, 2, ..., j, for j ≤ n). The first restriction calls for smaller capacities. Accordingly, define

    K(w) = maximum value achievable with a knapsack of capacity w.

Can we express this in terms of smaller sub-problems? Well, if the optimal solution to K(w) includes item i, then removing this item from the knapsack leaves an optimal solution to K(w - wi). In other words, K(w) is simply K(w - wi) + vi for some i. We don't know which i, so we need to try all possibilities:

    K(w) = max{ K(w - wi) + vi : wi ≤ w },

where as usual our convention is that the maximum over an empty set is 0. We're done! The algorithm now writes itself, and it is characteristically simple and elegant:

    K(0) = 0
    for w = 1 to W:
        K(w) = max{ K(w - wi) + vi : wi ≤ w }
    return K(W)

To find the most valuable combination of items that can fit into a knapsack of capacity 10, we follow the steps below:

    K(0) = 0
    K(1) = 0
    K(2) = max{K(0) + 9} = 9
    K(3) = max{K(1) + 9, K(0) + 14} = max{9, 14} = 14
    K(4) = max{K(2) + 9, K(1) + 14, K(0) + 16} = max{18, 14, 16} = 18
    K(5) = max{K(3) + 9, K(2) + 14, K(1) + 16} = max{23, 23, 16} = 23
    K(6) = max{K(4) + 9, K(3) + 14, K(2) + 16, K(0) + 30} = max{27, 28, 25, 30} = 30
    K(7) = max{K(5) + 9, K(4) + 14, K(3) + 16, K(1) + 30} = max{32, 32, 30, 30} = 32
    K(8) = max{K(6) + 9, K(5) + 14, K(4) + 16, K(2) + 30} = max{39, 37, 34, 39} = 39
    K(9) = max{K(7) + 9, K(6) + 14, K(5) + 16, K(3) + 30} = max{41, 44, 39, 44} = 44
    K(10) = max{K(8) + 9, K(7) + 14, K(6) + 16, K(4) + 30} = max{48, 46, 46, 48} = 48
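
A minimal C++ sketch of this computation, hard-coding the four items from the table above:

    #include <iostream>
    #include <vector>
    using namespace std;

    int main()
    {
        const int W = 10;
        const int w[] = {2, 3, 4, 6};     // item weights
        const int v[] = {9, 14, 16, 30};  // item values
        const int n = 4;

        vector<int> K(W + 1, 0);          // K[u] = best value with capacity u
        for (int u = 1; u <= W; ++u)
            for (int i = 0; i < n; ++i)   // try ending with one copy of item i
                if (w[i] <= u && K[u - w[i]] + v[i] > K[u])
                    K[u] = K[u - w[i]] + v[i];

        cout << K[W] << endl;             // prints 48
        return 0;
    }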

Knapsack without repetition

On to the second variant: what if repetitions are not allowed? Our earlier sub-problems now become completely useless. For instance, knowing that the value K(w - wn) is very high doesn't help us, because we don't know whether or not item n already got used up in this partial solution. We must therefore refine our concept of a sub-problem to carry additional information about the items being used. We add a second parameter, 0 ≤ j ≤ n:

    K(w, j) = maximum value achievable using a knapsack of capacity w and items 1, ..., j.

The answer we seek is K(W, n). How can we express a sub-problem K(w, j) in terms of smaller sub-problems? Quite simply: either item j is needed to achieve the optimal value, or it isn't:

    K(w, j) = max{ K(w - wj, j - 1) + vj,  K(w, j - 1) }

(the first case is invoked only if wj ≤ w). In other words, we can express K(w, j) in terms of sub-problems K(., j - 1). The algorithm then consists of filling out a two-dimensional table, with W + 1 rows and n + 1 columns:

    Initialize all K(0, j) = 0 and all K(w, 0) = 0
    for j = 1 to n:
        for w = 1 to W:
            if wj > w:
                K(w, j) = K(w, j - 1)
            else:
                K(w, j) = max{ K(w, j - 1), K(w - wj, j - 1) + vj }
    return K(W, n)
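
And the corresponding C++ sketch for the version without repetition, again with the four items hard-coded (it prints 46, the value of items 3 and 4):

    #include <algorithm>
    #include <iostream>
    #include <vector>
    using namespace std;

    int main()
    {
        const int W = 10;
        const int w[] = {2, 3, 4, 6};     // item weights
        const int v[] = {9, 14, 16, 30};  // item values
        const int n = 4;

        // K[j][u] = best value using items 1..j with capacity u
        vector< vector<int> > K(n + 1, vector<int>(W + 1, 0));
        for (int j = 1; j <= n; ++j)
            for (int u = 1; u <= W; ++u) {
                K[j][u] = K[j-1][u];                  // skip item j
                if (w[j-1] <= u)                      // or take it (at most once)
                    K[j][u] = max(K[j][u], K[j-1][u - w[j-1]] + v[j-1]);
            }
        cout << K[n][W] << endl;  // prints 46
        return 0;
    }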


Problems

Distinct Subsequences

PC/UVa IDs: /10069
Popularity: B
Success rate: average
Level: 3

A subsequence of a given sequence S consists of S with zero or more elements deleted. Formally, a sequence Z = z1 z2 ... zk is a subsequence of X = x1 x2 ... xm if there exists a strictly increasing sequence <i1, i2, ..., ik> of indices of X such that for all j = 1, 2, ..., k, we have x_{i_j} = z_j. For example, Z = bcdb is a subsequence of X = abcbdab with corresponding index sequence <2, 3, 5, 7>.

Your job is to write a program that counts the number of occurrences of Z in X as a subsequence such that each has a distinct index sequence.

Input

The first line of the input contains an integer N indicating the number of test cases to follow. The first line of each test case contains a string X, composed entirely of lowercase alphabetic characters and having length no greater than 10,000. The second line contains another string Z having length no greater than 100 and also composed of only lowercase alphabetic characters. Be assured that neither Z nor any prefix or suffix of Z will have more than distinct occurrences in X as a subsequence.

Output

For each test case, output the number of distinct occurrences of Z in X as a subsequence. Output for each input set must be on a separate line.

Sample Input

2
babgbag
bag
rabbbit
rabbit

Sample Output

5
3
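
The counting recurrence is close to LCS: if count[j] is the number of distinct index sequences matching Z[1..j] within the prefix of X processed so far, each new character of X updates the table. The sketch below is our own (distinctSubsequences is a hypothetical name), and it uses long long, which is only safe while the counts stay small; the judge's data may require arbitrary-precision arithmetic.

    #include <iostream>
    #include <string>
    #include <vector>
    using namespace std;

    // Number of distinct occurrences of Z in X as a subsequence.
    long long distinctSubsequences(const string& X, const string& Z)
    {
        int n = X.size(), m = Z.size();
        // count[j] = ways to match Z[1..j] in the prefix of X seen so far
        vector<long long> count(m + 1, 0);
        count[0] = 1;  // the empty pattern matches exactly once
        for (int i = 1; i <= n; ++i)
            for (int j = m; j >= 1; --j)   // backwards, so X[i] is used at most once per match
                if (X[i-1] == Z[j-1])
                    count[j] += count[j-1];
        return count[m];
    }

    int main()
    {
        cout << distinctSubsequences("babgbag", "bag") << endl;    // prints 5
        cout << distinctSubsequences("rabbbit", "rabbit") << endl; // prints 3
        return 0;
    }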

Cutting Sticks

PC/UVa IDs: /10003
Popularity: B
Success rate: average
Level: 2

You have to cut a wood stick into several pieces. The most affordable company, Analog Cutting Machinery (ACM), charges money according to the length of the stick being cut. Their cutting saw allows them to make only one cut at a time.

It is easy to see that different cutting orders can lead to different prices. For example, consider a stick of length 10 m that has to be cut at 2, 4, and 7 m from one end. There are several choices. One can cut first at 2, then at 4, then at 7. This leads to a price of 10 + 8 + 6 = 24, because the first stick was of 10 m, the resulting stick of 8 m, and the last one of 6 m. Another choice could cut at 4, then at 2, then at 7. This would lead to a price of 10 + 4 + 6 = 20, which is better for us.

Your boss demands that you write a program to find the minimum possible cutting cost for any given stick.

Input

The input will consist of several input cases. The first line of each test case will contain a positive number l that represents the length of the stick to be cut. You can assume l < 1,000. The next line will contain the number n (n < 50) of cuts to be made. The next line consists of n positive numbers ci (0 < ci < l) representing the places where the cuts must be made, given in strictly increasing order. An input case with l = 0 represents the end of input.

Output

Print the cost of the minimum cost solution to cut each stick in the format shown below.

Sample Output

The minimum cutting is 200.
The minimum cutting is 22.
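
This is a classic interval dynamic program: let cost(i, j) be the cheapest way to fully cut the segment between cut positions i and j, where the two ends of the stick are added as positions; choosing the first cut m inside a segment costs the segment's length plus the cost of the two resulting sub-segments. The sketch below is our own code with hypothetical names, hard-coding the length-10 example from the statement:

    #include <algorithm>
    #include <cstring>
    #include <iostream>
    using namespace std;

    int main()
    {
        // Stick of length 10 with cuts at 2, 4, 7 (the example above).
        int p[] = {0, 2, 4, 7, 10};   // cut positions, with both ends added
        int k = 5;                    // number of positions

        static int cost[52][52];      // cost[i][j] = min cost to fully cut segment (p[i], p[j])
        memset(cost, 0, sizeof cost); // adjacent positions need no cut
        for (int len = 2; len < k; ++len)          // segments spanning 'len' position gaps
            for (int i = 0; i + len < k; ++i) {
                int j = i + len;
                cost[i][j] = 1 << 30;
                for (int m = i + 1; m < j; ++m)    // choose the first cut inside (p[i], p[j])
                    cost[i][j] = min(cost[i][j], cost[i][m] + cost[m][j] + p[j] - p[i]);
            }
        cout << "The minimum cutting is " << cost[0][k-1] << "." << endl;  // prints 20
        return 0;
    }

Each of the O(n^2) segments tries O(n) first cuts, so the algorithm runs in O(n^3) time, comfortably fast for n < 50.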

Chopsticks

PC/UVa IDs: /10271
Popularity: B
Success rate: average
Level: 3

In China, people use pairs of chopsticks to eat food, but Mr. L is a bit different. He uses a set of three chopsticks: one pair, plus an extra long chopstick to get large items by stabbing the food. The lengths of the two shorter, standard chopsticks should be as close as possible, but the length of the extra one is not important, so long as it is the longest. For a set of chopsticks with lengths A, B, C (A ≤ B ≤ C), the function (A - B)^2 defines the badness of the set.

Mr. L has invited K people to his birthday party, and he is eager to introduce his way of using chopsticks. He must prepare K + 8 sets of chopsticks (for himself, his wife, his little son, little daughter, his mother, father, mother-in-law, father-in-law, and K other guests). But Mr. L's chopsticks are of many different lengths! Given these lengths, he must find a way of composing the K + 8 sets such that the total badness of the sets is minimized.

Input

The first line in the input contains a single integer T indicating the number of test cases (1 ≤ T ≤ 20). Each test case begins with two integers K and N (0 ≤ K ≤ 1,000, 3K + 24 ≤ N ≤ 5,000) giving the number of guests and the number of chopsticks. Then follow N positive integers Li, in non-decreasing order, indicating the lengths of the chopsticks (1 ≤ Li ≤ 32,000).

Output

For each test case in the input, print a line containing the minimal total badness of all the sets.

Sample Output

23

Note: A possible collection of the nine chopstick sets for the sample input is (8, 10, 16), (19, 22, 27), (61, 63, 75), (71, 72, 88), (81, 81, 84), (96, 98, 103), (128, 129, 148), (134, 134, 139), and (157, 157, 160).
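
This problem yields to a pairing DP once the lengths are scanned from longest to shortest. The sketch below is our own formulation, not from the book: it assumes (without proof here) that some optimal solution pairs sticks of adjacent lengths, and it uses the condition i >= 3j to guarantee that every completed set can still claim a no-shorter extra stick. minBadness is a hypothetical helper name.

    #include <algorithm>
    #include <iostream>
    #include <vector>
    using namespace std;

    // Minimal total badness for 'sets' sets chosen from lengths L (given non-decreasing).
    int minBadness(vector<int> L, int sets)
    {
        reverse(L.begin(), L.end());     // L[0] is now the longest stick
        int n = L.size();
        const int INF = 1 << 30;
        // dp[i][j] = min badness using the i longest sticks to form j complete sets,
        // valid only while i >= 3j (each set also consumes one longer extra stick)
        vector< vector<int> > dp(n + 1, vector<int>(sets + 1, INF));
        for (int i = 0; i <= n; ++i) dp[i][0] = 0;
        for (int i = 2; i <= n; ++i)
            for (int j = 1; 3 * j <= i && j <= sets; ++j) {
                dp[i][j] = dp[i-1][j];           // i-th longest stick left unpaired
                int d = L[i-2] - L[i-1];         // or pair the sticks at positions i-1 and i
                if (dp[i-2][j-1] != INF)
                    dp[i][j] = min(dp[i][j], dp[i-2][j-1] + d * d);
            }
        return dp[n][sets];
    }

    int main()
    {
        // Tiny demo with one set: (1, 2, 5) has badness (1 - 2)^2 = 1.
        int a[] = {1, 2, 5};
        cout << minBadness(vector<int>(a, a + 3), 1) << endl;  // prints 1
        return 0;
    }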

Adventures in Moving: Part IV

PC/UVa IDs: /10201
Popularity: A
Success rate: low
Level: 3

You are considering renting a moving truck to help you move from Waterloo to the big city. Gas prices being so high these days, you want to know how much the gas for this beast will set you back. The truck consumes a full liter of gas for each kilometer it travels. It has a 200-liter gas tank. When you rent the truck in Waterloo, the tank is half-full. When you return it in the big city, the tank must be at least half-full, or you'll get gouged even more for gas by the rental company. You would like to spend as little as possible on gas, but you don't want to run out along the way.

Input

The input begins with a single positive integer on a line by itself indicating the number of test cases, followed by a blank line. Each test case is composed only of integers. The first integer is the distance in kilometers from Waterloo to the big city, at most 10,000. Next comes a set of up to 100 gas station specifications, describing all the gas stations along your route, in non-decreasing order by distance. Each specification consists of the distance in kilometers of the gas station from Waterloo, and the price of a liter of gas at the gas station, in tenths of a cent, at most 2,000. There is a blank line between each two consecutive inputs.

Output

For each test case, output the minimum amount of money that you can spend on gas to get from Waterloo to the big city. If it is not possible to get from Waterloo to the big city subject to the constraints above, print "Impossible". The output of each two consecutive cases will be separated by a blank line.
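
One workable state space, our own sketch rather than anything from the book, is (station, liters in tank): at each station you may buy any whole number of liters that fits, and driving a kilometer burns one liter. minGasCost and its parameters are hypothetical names; Waterloo is treated as kilometer 0 with the free half tank already on board, and stations are assumed to lie within the total distance.

    #include <algorithm>
    #include <iostream>
    #include <vector>
    using namespace std;

    const long long INF = 1LL << 60;

    // dist[i], price[i] describe stations sorted by distance; D = total distance.
    // Returns the minimal cost, or -1 if impossible. Tank holds 200 L, 1 L per km,
    // start with 100 L at km 0, and must finish with at least 100 L at km D.
    long long minGasCost(int D, const vector<int>& dist, const vector<int>& price)
    {
        int n = dist.size();
        vector<long long> cost(201, INF);   // cost[f] = cheapest way to be here with f liters
        cost[100] = 0;                      // leave Waterloo with the free half tank
        int pos = 0;
        for (int i = 0; i <= n; ++i) {
            int here = (i < n) ? dist[i] : D;
            int gap = here - pos;           // driving this leg burns 'gap' liters
            vector<long long> next(201, INF);
            for (int f = gap; f <= 200; ++f)
                if (cost[f] < INF)
                    next[f - gap] = min(next[f - gap], cost[f]);
            cost = next;
            pos = here;
            if (i < n)                      // buy liters one at a time at this station
                for (int f = 1; f <= 200; ++f)
                    if (cost[f-1] < INF)
                        cost[f] = min(cost[f], cost[f-1] + price[i]);
        }
        long long best = INF;
        for (int f = 100; f <= 200; ++f) best = min(best, cost[f]);
        return best == INF ? -1 : best;
    }

    int main()
    {
        // Hypothetical demo: 100 km trip, one station at km 50 selling at 10 per liter.
        vector<int> d(1, 50), p(1, 10);
        cout << minGasCost(100, d, p) << endl;  // prints 1000 (buy 100 L at the station)
        return 0;
    }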


CMPS 102 Solutions to Homework 7

CMPS 102 Solutions to Homework 7 CMPS 102 Solutions to Homework 7 Kuzmin, Cormen, Brown, lbrown@soe.ucsc.edu November 17, 2005 Problem 1. 15.4-1 p.355 LCS Determine an LCS of x = (1, 0, 0, 1, 0, 1, 0, 1) and y = (0, 1, 0, 1, 1, 0, 1,

More information

Elements of Dynamic Programming. COSC 3101A - Design and Analysis of Algorithms 8. Discovering Optimal Substructure. Optimal Substructure - Examples

Elements of Dynamic Programming. COSC 3101A - Design and Analysis of Algorithms 8. Discovering Optimal Substructure. Optimal Substructure - Examples Elements of Dynamic Programming COSC 3A - Design and Analysis of Algorithms 8 Elements of DP Memoization Longest Common Subsequence Greedy Algorithms Many of these slides are taken from Monica Nicolescu,

More information

CS Algorithms and Complexity

CS Algorithms and Complexity CS 350 - Algorithms and Complexity Dynamic Programming Sean Anderson 2/20/18 Portland State University Table of contents 1. Homework 3 Solutions 2. Dynamic Programming 3. Problem of the Day 4. Application

More information

Presentation for use with the textbook, Algorithm Design and Applications, by M. T. Goodrich and R. Tamassia, Wiley, Dynamic Programming

Presentation for use with the textbook, Algorithm Design and Applications, by M. T. Goodrich and R. Tamassia, Wiley, Dynamic Programming Presentation for use with the textbook, Algorithm Design and Applications, by M. T. Goodrich and R. Tamassia, Wiley, 25 Dynamic Programming Terrible Fibonacci Computation Fibonacci sequence: f = f(n) 2

More information

1 Computing alignments in only linear space

1 Computing alignments in only linear space 1 Computing alignments in only linear space One of the defects of dynamic programming for all the problems we have discussed is that the dynamic programming tables use Θ(nm) space when the input strings

More information

Last week: Breadth-First Search

Last week: Breadth-First Search 1 Last week: Breadth-First Search Set L i = [] for i=1,,n L 0 = {w}, where w is the start node For i = 0,, n-1: For u in L i : For each v which is a neighbor of u: If v isn t yet visited: - mark v as visited,

More information

CS125 : Introduction to Computer Science. Lecture Notes #38 and #39 Quicksort. c 2005, 2003, 2002, 2000 Jason Zych

CS125 : Introduction to Computer Science. Lecture Notes #38 and #39 Quicksort. c 2005, 2003, 2002, 2000 Jason Zych CS125 : Introduction to Computer Science Lecture Notes #38 and #39 Quicksort c 2005, 2003, 2002, 2000 Jason Zych 1 Lectures 38 and 39 : Quicksort Quicksort is the best sorting algorithm known which is

More information

From NP to P Musings on a Programming Contest Problem

From NP to P Musings on a Programming Contest Problem From NP to P Musings on a Programming Contest Problem Edward Corwin Antonette Logar Mathematics and CS SDSM&T Rapid City, SD 57701 edward.corwin@sdsmt.edu ABSTRACT A classic analysis of algorithms problem

More information

Algorithm Design and Analysis

Algorithm Design and Analysis Algorithm Design and Analysis LECTURE 16 Dynamic Programming Least Common Subsequence Saving space Adam Smith Least Common Subsequence A.k.a. sequence alignment edit distance Longest Common Subsequence

More information

Theorem 2.9: nearest addition algorithm

Theorem 2.9: nearest addition algorithm There are severe limits on our ability to compute near-optimal tours It is NP-complete to decide whether a given undirected =(,)has a Hamiltonian cycle An approximation algorithm for the TSP can be used

More information

Chapter 15 Introduction to Linear Programming

Chapter 15 Introduction to Linear Programming Chapter 15 Introduction to Linear Programming An Introduction to Optimization Spring, 2015 Wei-Ta Chu 1 Brief History of Linear Programming The goal of linear programming is to determine the values of

More information

Analysis of Algorithms

Analysis of Algorithms Algorithm An algorithm is a procedure or formula for solving a problem, based on conducting a sequence of specified actions. A computer program can be viewed as an elaborate algorithm. In mathematics and

More information

Introduction to Algorithms I

Introduction to Algorithms I Summer School on Algorithms and Optimization Organized by: ACM Unit, ISI and IEEE CEDA. Tutorial II Date: 05.07.017 Introduction to Algorithms I (Q1) A binary tree is a rooted tree in which each node has

More information

DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING QUESTION BANK UNIT-III. SUB NAME: DESIGN AND ANALYSIS OF ALGORITHMS SEM/YEAR: III/ II PART A (2 Marks)

DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING QUESTION BANK UNIT-III. SUB NAME: DESIGN AND ANALYSIS OF ALGORITHMS SEM/YEAR: III/ II PART A (2 Marks) DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING QUESTION BANK UNIT-III SUB CODE: CS2251 DEPT: CSE SUB NAME: DESIGN AND ANALYSIS OF ALGORITHMS SEM/YEAR: III/ II PART A (2 Marks) 1. Write any four examples

More information

Dynamic Programming. CIS 110, Fall University of Pennsylvania

Dynamic Programming. CIS 110, Fall University of Pennsylvania Dynamic Programming CIS 110, Fall 2012 University of Pennsylvania Dynamic Programming Dynamic programming records saves computation for reuse later. Programming: in the optimization sense ( Linear Programming

More information

Dynamic Programming. See p of the text

Dynamic Programming. See p of the text Dynamic Programming See p. 329-333 of the text Clicker Q: There are some situations in which recursion can be massively inefficient. For example, the standard Fibonacci recursion Fib(n) = Fib(n-1) + Fib(n-2)

More information

Solution to Problem 1 of HW 2. Finding the L1 and L2 edges of the graph used in the UD problem, using a suffix array instead of a suffix tree.

Solution to Problem 1 of HW 2. Finding the L1 and L2 edges of the graph used in the UD problem, using a suffix array instead of a suffix tree. Solution to Problem 1 of HW 2. Finding the L1 and L2 edges of the graph used in the UD problem, using a suffix array instead of a suffix tree. The basic approach is the same as when using a suffix tree,

More information

Introduction to Algorithms / Algorithms I Lecturer: Michael Dinitz Topic: Shortest Paths Date: 10/13/15

Introduction to Algorithms / Algorithms I Lecturer: Michael Dinitz Topic: Shortest Paths Date: 10/13/15 600.363 Introduction to Algorithms / 600.463 Algorithms I Lecturer: Michael Dinitz Topic: Shortest Paths Date: 10/13/15 14.1 Introduction Today we re going to talk about algorithms for computing shortest

More information

Unit #2: Recursion, Induction, and Loop Invariants

Unit #2: Recursion, Induction, and Loop Invariants Unit #2: Recursion, Induction, and Loop Invariants CPSC 221: Algorithms and Data Structures Will Evans 2012W1 Unit Outline Thinking Recursively Recursion Examples Analyzing Recursion: Induction and Recurrences

More information

4 Dynamic Programming

4 Dynamic Programming 4 Dynamic Programming Dynamic Programming is a form of recursion. In Computer Science, you have probably heard the tradeoff between Time and Space. There is a trade off between the space complexity and

More information

Algorithms for Integer Programming

Algorithms for Integer Programming Algorithms for Integer Programming Laura Galli November 9, 2016 Unlike linear programming problems, integer programming problems are very difficult to solve. In fact, no efficient general algorithm is

More information

Chapter 17. Dynamic Programming

Chapter 17. Dynamic Programming Chapter 17 Dynamic Programming An interesting question is, Where did the name, dynamic programming, come from? The 1950s were not good years for mathematical research. We had a very interesting gentleman

More information