15.4 Longest common subsequence


Biological applications often need to compare the DNA of two (or more) different organisms. A strand of DNA consists of a string of molecules called bases, where the possible bases are adenine, guanine, cytosine, and thymine. We express a strand of DNA as a string over the alphabet {A, C, G, T}. E.g., the DNA of two organisms may be

$S_1$ = ACCGGTCGAGTGCGCGGAAGCCGGCCGAA
$S_2$ = GTCGTTCGGAATGCCGTTGCTCTGTAAA

By comparing two strands of DNA we determine how similar they are, as some measure of how closely related the two organisms are. We can define similarity in many different ways. E.g., we can say that two DNA strands are similar if one is a substring of the other; neither $S_1$ nor $S_2$ is a substring of the other. Alternatively, we could say that two strands are similar if the number of changes needed to turn one into the other is small.

Yet another way to measure the similarity of $S_1$ and $S_2$ is by finding a third strand $S_3$ in which the bases of $S_3$ appear in each of $S_1$ and $S_2$; these bases must appear in the same order, but not necessarily consecutively. The longer the strand $S_3$ we can find, the more similar $S_1$ and $S_2$ are. In our example, the longest such strand is

$S_1$ = ACCGGTCGAGTGCGCGGAAGCCGGCCGAA
$S_2$ = GTCGTTCGGAATGCCGTTGCTCTGTAAA
$S_3$ = GTCGTCGGAAGCCGGCCGAA

We formalize this notion of similarity as the longest-common-subsequence problem. A subsequence is just the given sequence with zero or more elements left out. Formally, given a sequence $X = \langle x_1, x_2, \ldots, x_m \rangle$, another sequence $Z = \langle z_1, z_2, \ldots, z_k \rangle$ is a subsequence of $X$ if there exists a strictly increasing sequence $\langle i_1, i_2, \ldots, i_k \rangle$ of indices of $X$ such that for all $j = 1, 2, \ldots, k$ we have $x_{i_j} = z_j$. For example, $Z = \langle B, C, D, B \rangle$ is a subsequence of $X = \langle A, B, C, B, D, A, B \rangle$ with corresponding index sequence $\langle 2, 3, 5, 7 \rangle$.
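The definition translates directly into a single linear scan of $X$, advancing through $Z$ whenever the current element matches. A minimal Python sketch (the helper name is_subsequence is ours, not from the slides):

def is_subsequence(z, x):
    """Return True if z is a subsequence of x (same order, not
    necessarily consecutive)."""
    j = 0                                # next element of z still to be matched
    for item in x:
        if j < len(z) and item == z[j]:
            j += 1                       # matched z[j]; look for the next one
    return j == len(z)

# Z = <B, C, D, B> is a subsequence of X = <A, B, C, B, D, A, B>:
print(is_subsequence("BCDB", "ABCBDAB"))  # True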

A sequence $Z$ is a common subsequence of $X$ and $Y$ if $Z$ is a subsequence of both $X$ and $Y$. For example, if $X = \langle A, B, C, B, D, A, B \rangle$ and $Y = \langle B, D, C, A, B, A \rangle$, the sequence $\langle B, C, A \rangle$ is a common subsequence of both $X$ and $Y$. It is not a longest common subsequence (LCS) of $X$ and $Y$: the sequence $\langle B, C, B, A \rangle$ is also common to both $X$ and $Y$ and has length 4. This sequence is an LCS of $X$ and $Y$, as is $\langle B, D, A, B \rangle$; $X$ and $Y$ have no common subsequence of length 5 or greater.

In the longest-common-subsequence problem, we are given $X = \langle x_1, x_2, \ldots, x_m \rangle$ and $Y = \langle y_1, y_2, \ldots, y_n \rangle$ and wish to find a maximum-length common subsequence of $X$ and $Y$.

Step 1: Characterizing a longest common subsequence

In a brute-force approach, we would enumerate all subsequences of $X$ and check each of them to see whether it is also a subsequence of $Y$, keeping track of the longest subsequence we find. Each subsequence of $X$ corresponds to a subset of the indices $\{1, 2, \ldots, m\}$ of $X$. Because $X$ has $2^m$ subsequences, this approach requires exponential time, making it impractical for long sequences.
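To make the exponential blow-up concrete, here is a Python sketch of the brute-force approach; the function names are ours, and the iterator trick inside is just one way to test the subsequence property:

from itertools import combinations

def lcs_brute_force(x, y):
    """Exponential-time LCS for illustration only: enumerate the 2^m
    subsequences of x, longest first, and return the first one that
    is also a subsequence of y."""
    def is_subseq(z, s):
        it = iter(s)
        return all(c in it for c in z)   # each c must appear in s, in order
    for length in range(len(x), -1, -1):
        for idx in combinations(range(len(x)), length):
            cand = "".join(x[i] for i in idx)
            if is_subseq(cand, y):
                return cand
    return ""

print(lcs_brute_force("ABCBDAB", "BDCABA"))  # BCBA, one of the two LCSs above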

The LCS problem has an optimal-substructure property, however, as the following theorem shows. The natural classes of subproblems correspond to pairs of prefixes of the two input sequences. Precisely, given a sequence $X = \langle x_1, x_2, \ldots, x_m \rangle$, we define the $i$th prefix of $X$, for $i = 0, 1, \ldots, m$, as $X_i = \langle x_1, x_2, \ldots, x_i \rangle$. For example, if $X = \langle A, B, C, B, D, A, B \rangle$, then $X_4 = \langle A, B, C, B \rangle$ and $X_0$ is the empty sequence.

Theorem 15.1 (Optimal substructure of an LCS)
Let $X = \langle x_1, x_2, \ldots, x_m \rangle$ and $Y = \langle y_1, y_2, \ldots, y_n \rangle$ be sequences, and let $Z = \langle z_1, z_2, \ldots, z_k \rangle$ be any LCS of $X$ and $Y$.
1. If $x_m = y_n$, then $z_k = x_m = y_n$ and $Z_{k-1}$ is an LCS of $X_{m-1}$ and $Y_{n-1}$.
2. If $x_m \neq y_n$, then $z_k \neq x_m$ implies that $Z$ is an LCS of $X_{m-1}$ and $Y$.
3. If $x_m \neq y_n$, then $z_k \neq y_n$ implies that $Z$ is an LCS of $X$ and $Y_{n-1}$.

Proof (1) If $z_k \neq x_m$, then we could append $x_m = y_n$ to $Z$ to obtain a common subsequence of $X$ and $Y$ of length $k+1$, contradicting the supposition that $Z$ is an LCS of $X$ and $Y$. Thus, we must have $z_k = x_m = y_n$. Now, the prefix $Z_{k-1}$ is a length-$(k-1)$ common subsequence of $X_{m-1}$ and $Y_{n-1}$. We wish to show that it is an LCS. Suppose for the purpose of contradiction that there exists a common subsequence $W$ of $X_{m-1}$ and $Y_{n-1}$ with length greater than $k-1$. Then, appending $x_m = y_n$ to $W$ produces a common subsequence of $X$ and $Y$ whose length is greater than $k$, which is a contradiction.

(2) If $z_k \neq x_m$, then $Z$ is a common subsequence of $X_{m-1}$ and $Y$. If there were a common subsequence $W$ of $X_{m-1}$ and $Y$ with length greater than $k$, then $W$ would also be a common subsequence of $X$ and $Y$, contradicting the assumption that $Z$ is an LCS of $X$ and $Y$.

(3) The proof is symmetric to (2). ∎

Theorem 15.1 tells us that an LCS of two sequences contains within it an LCS of prefixes of the two sequences. Thus, the LCS problem has an optimal-substructure property.

Step 2: A recursive solution

We examine either one or two subproblems when finding an LCS of $X$ and $Y$. If $x_m = y_n$, we find an LCS of $X_{m-1}$ and $Y_{n-1}$; appending $x_m = y_n$ to this LCS yields an LCS of $X$ and $Y$. If $x_m \neq y_n$, then we (1) find an LCS of $X_{m-1}$ and $Y$ and (2) find an LCS of $X$ and $Y_{n-1}$. Whichever of these two LCSs is longer is an LCS of $X$ and $Y$. These cases exhaust all possibilities, and we know that one of the optimal subproblem solutions must appear within an LCS of $X$ and $Y$.

To find an LCS of $X$ and $Y$, we may need to find the LCSs of $X_{m-1}$ and $Y$ and of $X$ and $Y_{n-1}$. Each of these subproblems has the subsubproblem of finding an LCS of $X_{m-1}$ and $Y_{n-1}$, and many other subproblems share subsubproblems. As in matrix-chain multiplication, the recursive solution to the LCS problem involves a recurrence for the value of an optimal solution. Let us define $c[i, j]$ to be the length of an LCS of the sequences $X_i$ and $Y_j$. If either $i = 0$ or $j = 0$, one of the sequences has length 0, and so the LCS has length 0.

The optimal substructure of the LCS problem gives the recurrence

$$c[i, j] = \begin{cases} 0 & \text{if } i = 0 \text{ or } j = 0, \\ c[i-1, j-1] + 1 & \text{if } i, j > 0 \text{ and } x_i = y_j, \\ \max(c[i, j-1],\, c[i-1, j]) & \text{if } i, j > 0 \text{ and } x_i \neq y_j. \end{cases}$$

Observe that a condition in the problem restricts which subproblems we may consider. When $x_i = y_j$, we consider finding an LCS of $X_{i-1}$ and $Y_{j-1}$; otherwise, we instead consider the two subproblems of finding an LCS of $X_i$ and $Y_{j-1}$ and of $X_{i-1}$ and $Y_j$. In the previous dynamic-programming algorithms for rod cutting and matrix-chain multiplication we ruled out no subproblems due to conditions in the problem.

Step 3: Computing the length of an LCS

Since the LCS problem has only $\Theta(mn)$ distinct subproblems, we can use dynamic programming to compute the solutions bottom up. LCS-LENGTH stores the $c[i, j]$ values in a table $c[0..m, 0..n]$, and it computes the entries in row-major order; i.e., the procedure fills in the first row of $c$ from left to right, then the second row, and so on. The procedure also maintains the table $b[1..m, 1..n]$. Intuitively, $b[i, j]$ points to the table entry corresponding to the optimal subproblem solution chosen when computing $c[i, j]$, and $c[m, n]$ contains the length of an LCS of $X$ and $Y$.

LCS-LENGTH(X, Y)
 1. m = X.length
 2. n = Y.length
 3. let b[1..m, 1..n] and c[0..m, 0..n] be new tables
 4. for i = 1 to m
 5.     c[i, 0] = 0
 6. for j = 0 to n
 7.     c[0, j] = 0
 8. for i = 1 to m
 9.     for j = 1 to n
10.         if x_i == y_j
11.             c[i, j] = c[i-1, j-1] + 1
12.             b[i, j] = "↖"
13.         elseif c[i-1, j] ≥ c[i, j-1]
14.             c[i, j] = c[i-1, j]
15.             b[i, j] = "↑"
16.         else c[i, j] = c[i, j-1]
17.             b[i, j] = "←"
18. return c and b

Running time: $\Theta(mn)$, since each table entry takes $\Theta(1)$ time to compute.

[Figure: the $b$ and $c$ tables computed by LCS-LENGTH on $X = \langle A, B, C, B, D, A, B \rangle$ and $Y = \langle B, D, C, A, B, A \rangle$.]
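A direct Python rendering of LCS-LENGTH may be useful as a reference; it is a sketch that mirrors the pseudocode, using 0-based strings and arrow characters for the entries of table $b$:

def lcs_length(x, y):
    """Bottom-up LCS: returns (c, b), where c[i][j] is the LCS length
    of x[:i] and y[:j] and b holds the arrows for reconstruction."""
    m, n = len(x), len(y)
    c = [[0] * (n + 1) for _ in range(m + 1)]   # row 0 and column 0 stay 0
    b = [[None] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if x[i - 1] == y[j - 1]:            # x_i == y_j
                c[i][j] = c[i - 1][j - 1] + 1
                b[i][j] = "↖"
            elif c[i - 1][j] >= c[i][j - 1]:
                c[i][j] = c[i - 1][j]
                b[i][j] = "↑"
            else:
                c[i][j] = c[i][j - 1]
                b[i][j] = "←"
    return c, b

c, b = lcs_length("ABCBDAB", "BDCABA")
print(c[7][6])  # 4, the length of an LCS of the example sequences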

Step 4: Constructing an LCS

The table $b$ returned by LCS-LENGTH enables us to quickly construct an LCS of $X$ and $Y$. We simply begin at $b[m, n]$ and trace through the table by following the arrows. Whenever we encounter a "↖" in entry $b[i, j]$, it implies that $x_i = y_j$ is an element of the LCS that LCS-LENGTH found. With this method, we encounter the elements of this LCS in reverse order. A recursive procedure prints out an LCS of $X$ and $Y$ in the proper, forward order.

[Figure: the square in row $i$ and column $j$ contains the value of $c[i, j]$ and the appropriate arrow for the value of $b[i, j]$.] The entry 4 in $c[7, 6]$, the lower right-hand corner of the table, is the length of an LCS $\langle B, C, B, A \rangle$ of $X$ and $Y$. For $i, j > 0$, entry $c[i, j]$ depends only on whether $x_i = y_j$ and the values in entries $c[i-1, j]$, $c[i, j-1]$, and $c[i-1, j-1]$, which are computed before $c[i, j]$. To reconstruct the elements of an LCS, follow the $b[i, j]$ arrows from the lower right-hand corner; each "↖" on the traced path corresponds to an entry (highlighted in the figure) for which $x_i = y_j$ is a member of an LCS.
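The recursive procedure referred to above is PRINT-LCS in the textbook. A Python sketch of it, reusing the tables produced by the lcs_length sketch earlier:

def print_lcs(b, x, i, j):
    """Print an LCS of x and y in forward order by following the
    arrows in b from entry (i, j) toward the upper-left corner."""
    if i == 0 or j == 0:
        return
    if b[i][j] == "↖":
        print_lcs(b, x, i - 1, j - 1)
        print(x[i - 1], end="")      # x_i belongs to the LCS
    elif b[i][j] == "↑":
        print_lcs(b, x, i - 1, j)
    else:
        print_lcs(b, x, i, j - 1)

# Initial call, using c and b from lcs_length("ABCBDAB", "BDCABA"):
print_lcs(b, "ABCBDAB", 7, 6)        # prints BCBA
print()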

Improving the code

Each $c[i, j]$ entry depends on only 3 other table entries: $c[i-1, j]$, $c[i, j-1]$, and $c[i-1, j-1]$. Given the value of $c[i, j]$, we can determine in $O(1)$ time which of these three values was used to compute $c[i, j]$, without inspecting table $b$. Thus, we can reconstruct an LCS in $O(m + n)$ time using the $c$ table alone. The auxiliary space requirement for computing an LCS does not asymptotically decrease, however, since we need $\Theta(mn)$ space for the $c$ table anyway.

We can, however, reduce the asymptotic space requirements for LCS-LENGTH, since it needs only two rows of table $c$ at a time: the row being computed and the previous row. This improvement works only if we need just the length of an LCS; if we need to reconstruct the elements of an LCS, the smaller table does not keep enough information to retrace our steps in $O(m + n)$ time.
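A sketch of the two-row idea in Python, assuming only the length is needed (the function name is ours):

def lcs_length_two_rows(x, y):
    """Length of an LCS of x and y using only two rows of the c table."""
    m, n = len(x), len(y)
    prev = [0] * (n + 1)        # row i-1 of c; row 0 is all zeros
    for i in range(1, m + 1):
        curr = [0] * (n + 1)    # row i; entry 0 stays 0
        for j in range(1, n + 1):
            if x[i - 1] == y[j - 1]:
                curr[j] = prev[j - 1] + 1
            else:
                curr[j] = max(prev[j], curr[j - 1])
        prev = curr             # discard row i-1; keep row i
    return prev[n]

print(lcs_length_two_rows("ABCBDAB", "BDCABA"))  # prints 4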

15.5 Optimal binary search trees

Suppose we are designing a program to translate text. We perform lookup operations by building a BST with the words as keys and their translations as satellite data. We can ensure an $O(\lg n)$ search time per occurrence by using a red-black tree or any other balanced BST. However, a frequently used word may then appear far from the root while a rarely used word appears near the root. We want frequent words to be placed nearer the root. How do we organize a BST so as to minimize the number of nodes visited in all searches, given that we know how often each word occurs?

What we need is an optimal binary search tree. Formally, given a sequence $K = \langle k_1, k_2, \ldots, k_n \rangle$ of $n$ distinct sorted keys ($k_1 < k_2 < \cdots < k_n$), we wish to build a BST from these keys. For each key $k_i$, we have a probability $p_i$ that a search will be for $k_i$. Some searches may be for values not in $K$, so we also have $n + 1$ dummy keys $d_0, d_1, \ldots, d_n$ representing values not in $K$. In particular, $d_0$ represents all values less than $k_1$ and $d_n$ represents all values greater than $k_n$.

For $i = 1, 2, \ldots, n-1$, the dummy key $d_i$ represents all values between $k_i$ and $k_{i+1}$. For each dummy key $d_i$, we have a probability $q_i$ that a search will correspond to $d_i$.

Each key $k_i$ is an internal node, and each dummy key $d_i$ is a leaf. Every search is either successful (finding some key $k_i$) or unsuccessful (finding some dummy key $d_i$), and so we have

$$\sum_{i=1}^{n} p_i + \sum_{i=0}^{n} q_i = 1.$$

Because we have probabilities of searches for each key and each dummy key, we can determine the expected cost of a search in a given BST.

Let us assume that the actual cost of a search equals the number of nodes examined, i.e., the depth of the node found by the search in $T$, plus 1. Then the expected cost of a search in $T$ is

$$\mathrm{E}[\text{search cost in } T] = \sum_{i=1}^{n} (\mathrm{depth}_T(k_i) + 1) \cdot p_i + \sum_{i=0}^{n} (\mathrm{depth}_T(d_i) + 1) \cdot q_i = 1 + \sum_{i=1}^{n} \mathrm{depth}_T(k_i) \cdot p_i + \sum_{i=0}^{n} \mathrm{depth}_T(d_i) \cdot q_i,$$

where $\mathrm{depth}_T$ denotes a node's depth in tree $T$.

[Figure: a BST over $k_1, \ldots, k_5$ with $k_2$ at the root; its expected search cost is computed below.]

Node   Depth  Probability  Contribution
k_1    1      0.15         0.30
k_2    0      0.10         0.10
k_3    2      0.05         0.15
k_4    1      0.10         0.20
k_5    2      0.20         0.60
d_0    2      0.05         0.15
d_1    2      0.10         0.30
d_2    3      0.05         0.20
d_3    3      0.05         0.20
d_4    3      0.05         0.20
d_5    3      0.10         0.40
Total                      2.80
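As a sanity check, the expected-cost formula is straightforward to evaluate in code. The sketch below hard-codes the depths and probabilities of the example tree as reconstructed in the table above, so those specific values are an assumption carried over from the textbook example:

# (depth, probability) pairs: keys k_1..k_5, then dummy keys d_0..d_5.
nodes = [(1, 0.15), (0, 0.10), (2, 0.05), (1, 0.10), (2, 0.20),            # k_1..k_5
         (2, 0.05), (2, 0.10), (3, 0.05), (3, 0.05), (3, 0.05), (3, 0.10)] # d_0..d_5

# Each node contributes (depth + 1) times its probability.
expected_cost = sum((depth + 1) * prob for depth, prob in nodes)
print(round(expected_cost, 2))  # 2.8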

For a given set of probabilities, we wish to construct a BST whose expected search cost is smallest. We call such a tree an optimal binary search tree. An optimal BST for the probabilities given above has expected cost 2.75. An optimal BST is not necessarily a tree whose overall height is smallest. Nor can we necessarily construct an optimal BST by always putting the key with the greatest probability at the root: in the example, $k_5$ has the greatest search probability, yet the lowest expected cost of any BST with $k_5$ at the root is 2.85.

[Figure: two BSTs for the example probabilities, one with expected cost 2.80 and an optimal one with expected cost 2.75.]

Step 1: The structure of an optimal BST

Consider any subtree of a BST. It must contain keys in a contiguous range $k_i, \ldots, k_j$, for some $1 \le i \le j \le n$. In addition, a subtree that contains keys $k_i, \ldots, k_j$ must also have as its leaves the dummy keys $d_{i-1}, \ldots, d_j$. If an optimal BST $T$ has a subtree $T'$ containing keys $k_i, \ldots, k_j$, then this subtree $T'$ must be optimal as well for the subproblem with keys $k_i, \ldots, k_j$ and dummy keys $d_{i-1}, \ldots, d_j$.

Given keys $k_i, \ldots, k_j$, one of them, say $k_r$, is the root of an optimal subtree containing these keys. The left subtree of the root $k_r$ contains the keys $k_i, \ldots, k_{r-1}$ (and dummy keys $d_{i-1}, \ldots, d_{r-1}$), and the right subtree contains the keys $k_{r+1}, \ldots, k_j$ (and dummy keys $d_r, \ldots, d_j$). As long as we examine all candidate roots $k_r$, where $i \le r \le j$, and determine all optimal BSTs containing $k_i, \ldots, k_{r-1}$ and those containing $k_{r+1}, \ldots, k_j$, we are guaranteed to find an optimal BST.

Suppose that in a subtree with keys $k_i, \ldots, k_j$, we select $k_i$ as the root. $k_i$'s left subtree then contains the keys $k_i, \ldots, k_{i-1}$; interpret this sequence as containing no keys. Subtrees, however, also contain dummy keys. Adopt the convention that a subtree containing keys $k_i, \ldots, k_{i-1}$ has no actual keys but does contain the single dummy key $d_{i-1}$. Symmetrically, if we select $k_j$ as the root, then $k_j$'s right subtree contains no actual keys, but it does contain the dummy key $d_j$.

Step 2: A recursive solution

We pick our subproblem domain as finding an optimal BST containing the keys $k_i, \ldots, k_j$, where $i \ge 1$, $j \le n$, and $j \ge i - 1$. Let us define $e[i, j]$ as the expected cost of searching an optimal BST containing the keys $k_i, \ldots, k_j$. Ultimately, we wish to compute $e[1, n]$. The easy case occurs when $j = i - 1$: then we have just the dummy key $d_{i-1}$, and the expected search cost is $e[i, i-1] = q_{i-1}$.

When $j \ge i$, we need to select a root $k_r$ from among $k_i, \ldots, k_j$ and make an optimal BST with keys $k_i, \ldots, k_{r-1}$ as its left subtree and an optimal BST with keys $k_{r+1}, \ldots, k_j$ as its right subtree. What happens to the expected search cost of a subtree when it becomes a subtree of a node? The depth of each node increases by 1, and the expected search cost of the subtree increases by the sum of all the probabilities in it. For a subtree with keys $k_i, \ldots, k_j$, let us denote this sum of probabilities as

$$w(i, j) = \sum_{l=i}^{j} p_l + \sum_{l=i-1}^{j} q_l.$$

Thus, if $k_r$ is the root of an optimal subtree containing keys $k_i, \ldots, k_j$, we have

$$e[i, j] = p_r + (e[i, r-1] + w(i, r-1)) + (e[r+1, j] + w(r+1, j)).$$

Noting that $w(i, j) = w(i, r-1) + p_r + w(r+1, j)$, we rewrite this as

$$e[i, j] = e[i, r-1] + e[r+1, j] + w(i, j).$$

We choose the root that gives the lowest expected search cost:

$$e[i, j] = \begin{cases} q_{i-1} & \text{if } j = i - 1, \\ \min_{i \le r \le j} \{ e[i, r-1] + e[r+1, j] + w(i, j) \} & \text{if } i \le j. \end{cases}$$
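Before building the bottom-up tables of Step 3, the recurrence can be evaluated directly top-down with memoization. A Python sketch, where the 1-based probability lists (with an unused placeholder at p[0]) are our convention, not the text's:

from functools import lru_cache

def optimal_bst_cost(p, q):
    """Expected search cost e[1, n] of an optimal BST, computed
    top-down from the recurrence. p[1..n] are key probabilities
    (p[0] is a placeholder); q[0..n] are dummy-key probabilities."""
    n = len(p) - 1

    @lru_cache(maxsize=None)
    def w(i, j):                       # total probability weight of k_i..k_j
        return sum(p[i:j + 1]) + sum(q[i - 1:j + 1])

    @lru_cache(maxsize=None)
    def e(i, j):
        if j == i - 1:                 # empty key range: just the dummy key d_{i-1}
            return q[i - 1]
        return min(e(i, r - 1) + e(r + 1, j) + w(i, j) for r in range(i, j + 1))

    return e(1, n)

# The example probabilities from the slides:
p = [0, 0.15, 0.10, 0.05, 0.10, 0.20]
q = [0.05, 0.10, 0.05, 0.05, 0.05, 0.10]
print(round(optimal_bst_cost(p, q), 2))  # 2.75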

The $e[i, j]$ values give the expected search costs in optimal BSTs. To help us keep track of the structure of optimal BSTs, we define $root[i, j]$, for $1 \le i \le j \le n$, to be the index $r$ for which $k_r$ is the root of an optimal BST containing keys $k_i, \ldots, k_j$. Although we will see how to compute the values of $root[i, j]$, we leave the construction of an optimal binary search tree from these values as an exercise.

Step 3: Computing the expected search cost of an optimal BST

We store the $e[i, j]$ values in a table $e[1..n+1, 0..n]$. The first index needs to run to $n + 1$ because, to have a subtree containing only the dummy key $d_n$, we need to compute and store $e[n+1, n]$. The second index needs to start from 0 because, to have a subtree containing only the dummy key $d_0$, we need to compute and store $e[1, 0]$.

We use only the entries $e[i, j]$ for which $j \ge i - 1$. We also use a table $root[i, j]$ for recording the root of the subtree containing keys $k_i, \ldots, k_j$; this table uses only the entries $1 \le i \le j \le n$. We also store the $w(i, j)$ values in a table $w[1..n+1, 0..n]$. For the base case, we compute $w[i, i-1] = q_{i-1}$. For $j \ge i$, we compute $w[i, j] = w[i, j-1] + p_j + q_j$. Thus, we can compute the $\Theta(n^2)$ values of $w[i, j]$ in $\Theta(1)$ time each.

OPTIMAL-BST(p, q, n)
 1. let e[1..n+1, 0..n], w[1..n+1, 0..n], and root[1..n, 1..n] be new tables
 2. for i = 1 to n + 1
 3.     e[i, i-1] = q_{i-1}
 4.     w[i, i-1] = q_{i-1}
 5. for l = 1 to n
 6.     for i = 1 to n - l + 1
 7.         j = i + l - 1
 8.         e[i, j] = ∞
 9.         w[i, j] = w[i, j-1] + p_j + q_j
10.         for r = i to j
11.             t = e[i, r-1] + e[r+1, j] + w[i, j]
12.             if t < e[i, j]
13.                 e[i, j] = t
14.                 root[i, j] = r
15. return e and root
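A Python rendering of OPTIMAL-BST, again a sketch that mirrors the pseudocode; the tables are padded with an index-0 row/column so the 1-based indices carry over verbatim:

import math

def optimal_bst(p, q, n):
    """Bottom-up OPTIMAL-BST with p[1..n] and q[0..n] as in the text.
    Returns tables e and root; e[1][n] is the optimal expected cost."""
    e = [[0.0] * (n + 1) for _ in range(n + 2)]     # e[1..n+1][0..n]
    w = [[0.0] * (n + 1) for _ in range(n + 2)]
    root = [[0] * (n + 1) for _ in range(n + 1)]    # root[1..n][1..n]
    for i in range(1, n + 2):
        e[i][i - 1] = q[i - 1]
        w[i][i - 1] = q[i - 1]
    for l in range(1, n + 1):                # l = number of keys in the range
        for i in range(1, n - l + 2):
            j = i + l - 1
            e[i][j] = math.inf
            w[i][j] = w[i][j - 1] + p[j] + q[j]
            for r in range(i, j + 1):        # try each k_r as the root
                t = e[i][r - 1] + e[r + 1][j] + w[i][j]
                if t < e[i][j]:
                    e[i][j] = t
                    root[i][j] = r
    return e, root

p = [0, 0.15, 0.10, 0.05, 0.10, 0.20]
q = [0.05, 0.10, 0.05, 0.05, 0.05, 0.10]
e, root = optimal_bst(p, q, 5)
print(round(e[1][5], 2), root[1][5])  # 2.75, with k_2 as the root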

[Figure: the tables $e$, $w$, and $root$ computed by OPTIMAL-BST for the example key distribution.]

The OPTIMAL-BST procedure takes $\Theta(n^3)$ time, just like MATRIX-CHAIN-ORDER. Its running time is $O(n^3)$, since its for loops are nested three deep and each loop index takes on at most $n$ values. The loop indices in OPTIMAL-BST do not have exactly the same bounds as those in MATRIX-CHAIN-ORDER, but they are within at most 1 in all directions. Thus, like MATRIX-CHAIN-ORDER, the OPTIMAL-BST procedure also takes $\Omega(n^3)$ time.

16 Greedy Algorithms

Optimization algorithms typically go through a sequence of steps, with a set of choices at each step. For many optimization problems, using dynamic programming to determine the best choices is overkill; simpler, more efficient algorithms will do. A greedy algorithm always makes the choice that looks best at the moment. That is, it makes a locally optimal choice in the hope that this choice will lead to a globally optimal solution.

An activity-selection problem

Suppose we have a set $S = \{a_1, a_2, \ldots, a_n\}$ of $n$ proposed activities that wish to use a resource (e.g., a lecture hall), which can serve only one activity at a time. Each activity $a_i$ has a start time $s_i$ and a finish time $f_i$, where $0 \le s_i < f_i < \infty$. If selected, activity $a_i$ takes place during the half-open time interval $[s_i, f_i)$.

Activities $a_i$ and $a_j$ are compatible if the intervals $[s_i, f_i)$ and $[s_j, f_j)$ do not overlap; i.e., $a_i$ and $a_j$ are compatible if $s_i \ge f_j$ or $s_j \ge f_i$. We wish to select a maximum-size subset of mutually compatible activities. We assume that the activities are sorted in monotonically increasing order of finish time: $f_1 \le f_2 \le \cdots \le f_n$.

Consider, e.g., the following set of activities:

 i    1  2  3  4  5  6  7  8  9  10 11
 s_i  1  3  0  5  3  5  6  8  8  2  12
 f_i  4  5  6  7  9  9  10 11 12 14 16

For this example, the subset $\{a_3, a_9, a_{11}\}$ consists of mutually compatible activities. It is not a maximum subset, however, since the subset $\{a_1, a_4, a_8, a_{11}\}$ is larger. In fact, $\{a_1, a_4, a_8, a_{11}\}$ is a largest subset of mutually compatible activities; another largest subset is $\{a_2, a_4, a_9, a_{11}\}$.

The optimal substructure of the activity-selection problem

Let $S_{ij}$ be the set of activities that start after activity $a_i$ finishes and that finish before activity $a_j$ starts. We wish to find a maximum set of mutually compatible activities in $S_{ij}$. Suppose that such a maximum set is $A_{ij}$, which includes some activity $a_k$. By including $a_k$ in an optimal solution, we are left with two subproblems: finding mutually compatible activities in the set $S_{ik}$ and finding mutually compatible activities in the set $S_{kj}$.

Let $A_{ik} = A_{ij} \cap S_{ik}$ and $A_{kj} = A_{ij} \cap S_{kj}$, so that $A_{ik}$ contains the activities in $A_{ij}$ that finish before $a_k$ starts and $A_{kj}$ contains the activities in $A_{ij}$ that start after $a_k$ finishes. Thus, $A_{ij} = A_{ik} \cup \{a_k\} \cup A_{kj}$, and so the maximum-size set $A_{ij}$ in $S_{ij}$ consists of $|A_{ij}| = |A_{ik}| + |A_{kj}| + 1$ activities. The usual cut-and-paste argument shows that the optimal solution $A_{ij}$ must also include optimal solutions for $S_{ik}$ and $S_{kj}$.

This suggests that we might solve the activity-selection problem by dynamic programming. If we denote the size of an optimal solution for the set $S_{ij}$ by $c[i, j]$, then we would have the recurrence

$$c[i, j] = c[i, k] + c[k, j] + 1.$$

Of course, if we did not know that an optimal solution for the set $S_{ij}$ includes activity $a_k$, we would have to examine all activities in $S_{ij}$ to find which one to choose, so that

$$c[i, j] = \begin{cases} 0 & \text{if } S_{ij} = \emptyset, \\ \max_{a_k \in S_{ij}} \{ c[i, k] + c[k, j] + 1 \} & \text{if } S_{ij} \neq \emptyset. \end{cases}$$

Making the greedy choice

For the activity-selection problem, we need consider only the greedy choice: we choose an activity that leaves the resource available for as many other activities as possible. Now, of the activities we end up choosing, one of them must be the first one to finish. So choose the activity in $S$ with the earliest finish time, since that leaves the resource available for as many of the activities that follow it as possible. Since the activities are sorted in monotonically increasing order by finish time, the greedy choice is activity $a_1$.

If we make the greedy choice, we only have to find activities that start after $a_1$ finishes. Since $s_1 < f_1$ and $f_1$ is the earliest finish time of any activity, no activity can have a finish time $\le s_1$. Thus, all activities that are compatible with activity $a_1$ must start after $a_1$ finishes. Let $S_1 = \{a_i \in S : s_i \ge f_1\}$ be the set of activities that start after $a_1$ finishes. Optimal substructure: if $a_1$ is in the optimal solution, then an optimal solution to the original problem consists of $a_1$ and all the activities in an optimal solution to the subproblem $S_1$.

Theorem 16.1
Consider any nonempty subproblem $S_k$, and let $a_m$ be an activity in $S_k$ with the earliest finish time. Then $a_m$ is included in some maximum-size subset of mutually compatible activities of $S_k$.

Proof Let $A_k$ be a maximum-size subset of mutually compatible activities in $S_k$, and let $a_j$ be the activity in $A_k$ with the earliest finish time. If $a_j = a_m$, we are done, since $a_m$ is then in a maximum-size subset of mutually compatible activities of $S_k$. If $a_j \neq a_m$, let $A_k' = (A_k - \{a_j\}) \cup \{a_m\}$. The activities in $A_k'$ are disjoint, because the activities in $A_k$ are disjoint, $a_j$ is the first activity in $A_k$ to finish, and $f_m \le f_j$. Since $|A_k'| = |A_k|$, we conclude that $A_k'$ is a maximum-size subset of mutually compatible activities of $S_k$, and it includes $a_m$. ∎

We can repeatedly choose the activity that finishes first, keep only the activities compatible with this activity, and repeat until no activities remain. Moreover, because we always choose the activity with the earliest finish time, the finish times of the activities we choose must strictly increase. We can thus consider each activity just once overall, in monotonically increasing order of finish times.

A recursive greedy algorithm

RECURSIVE-ACTIVITY-SELECTOR(s, f, k, n)
1. m = k + 1
2. while m ≤ n and s[m] < f[k]   // find the first activity in S_k to finish
3.     m = m + 1
4. if m ≤ n
5.     return {a_m} ∪ RECURSIVE-ACTIVITY-SELECTOR(s, f, m, n)
6. else return ∅

The initial call is RECURSIVE-ACTIVITY-SELECTOR(s, f, 0, n), using a fictitious activity $a_0$ with $f_0 = 0$.
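A Python sketch of the recursive selector; the sentinel activity $a_0$ with $f_0 = 0$ follows the initial call described above, and the lists are padded so the pseudocode's 1-based indexing carries over:

def recursive_activity_selector(s, f, k, n):
    """Return the indices of a maximum-size set of mutually compatible
    activities in S_k; s[0] and f[0] belong to the sentinel activity a_0."""
    m = k + 1
    while m <= n and s[m] < f[k]:    # skip activities starting before a_k finishes
        m = m + 1
    if m <= n:
        return [m] + recursive_activity_selector(s, f, m, n)
    return []

# The example activities; index 0 is the sentinel (f_0 = 0).
s = [0, 1, 3, 0, 5, 3, 5, 6, 8, 8, 2, 12]
f = [0, 4, 5, 6, 7, 9, 9, 10, 11, 12, 14, 16]
print(recursive_activity_selector(s, f, 0, 11))  # [1, 4, 8, 11]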

[Figure: the operation of RECURSIVE-ACTIVITY-SELECTOR on the example activities.]

An iterative greedy algorithm

GREEDY-ACTIVITY-SELECTOR(s, f)
1. n = s.length
2. A = {a_1}
3. k = 1
4. for m = 2 to n
5.     if s[m] ≥ f[k]
6.         A = A ∪ {a_m}
7.         k = m
8. return A
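And the iterative version in Python, on the same example data (index 0 of each list is padding for 1-based indexing):

def greedy_activity_selector(s, f):
    """Iterative greedy activity selection. Activities 1..n are given by
    start times s[1..n] and finish times f[1..n], sorted by finish time."""
    n = len(s) - 1          # index 0 is padding
    A = [1]                 # the greedy choice: activity a_1
    k = 1                   # index of the most recently added activity
    for m in range(2, n + 1):
        if s[m] >= f[k]:    # a_m starts after a_k finishes: compatible
            A.append(m)
            k = m
    return A

s = [None, 1, 3, 0, 5, 3, 5, 6, 8, 8, 2, 12]
f = [None, 4, 5, 6, 7, 9, 9, 10, 11, 12, 14, 16]
print(greedy_activity_selector(s, f))  # [1, 4, 8, 11], a maximum-size set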

The set $A$ returned by the call GREEDY-ACTIVITY-SELECTOR(s, f) is precisely the set returned by the call RECURSIVE-ACTIVITY-SELECTOR(s, f, 0, n). Both the recursive and the iterative algorithm schedule a set of $n$ activities in $\Theta(n)$ time, assuming that the activities were already sorted initially by their finish times.
