16 Greedy Algorithms

Optimization algorithms typically go through a sequence of steps, with a set of choices at each step. For many optimization problems, using dynamic programming to determine the best choices is overkill; simpler, more efficient algorithms will do. A greedy algorithm always makes the choice that looks best at the moment. That is, it makes a locally optimal choice in the hope that this choice will lead to a globally optimal solution.

MAT AADS, Fall Oct

An activity-selection problem

Suppose we have a set S = {a_1, a_2, ..., a_n} of n proposed activities that wish to use a resource (e.g., a lecture hall), which can serve only one activity at a time. Each activity a_i has a start time s_i and a finish time f_i, where 0 <= s_i < f_i < infinity. If selected, activity a_i takes place during the half-open time interval [s_i, f_i).

Activities a_i and a_j are compatible if the intervals [s_i, f_i) and [s_j, f_j) do not overlap. I.e., a_i and a_j are compatible if s_i >= f_j or s_j >= f_i. We wish to select a maximum-size subset of mutually compatible activities. We assume that the activities are sorted in monotonically increasing order of finish time: f_1 <= f_2 <= ... <= f_n.

Consider, e.g., the following set of activities:

   i  |  1  2  3  4  5  6  7  8  9 10 11
  s_i |  1  3  0  5  3  5  6  8  8  2 12
  f_i |  4  5  6  7  9  9 10 11 12 14 16

For this example, the subset {a_3, a_9, a_11} consists of mutually compatible activities. It is not a maximum subset, however, since the subset {a_1, a_4, a_8, a_11} is larger. In fact, it is a largest subset of mutually compatible activities; another largest subset is {a_2, a_4, a_9, a_11}.

The optimal substructure of the activity-selection problem

Let S_ij be the set of activities that start after activity a_i finishes and that finish before activity a_j starts. We wish to find a maximum set of mutually compatible activities in S_ij. Suppose that such a maximum set is A_ij, which includes some activity a_k. By including a_k in an optimal solution, we are left with two subproblems: finding mutually compatible activities in the set S_ik and finding mutually compatible activities in the set S_kj.

Let A_ik = A_ij ∩ S_ik and A_kj = A_ij ∩ S_kj, so that A_ik contains the activities in A_ij that finish before a_k starts and A_kj contains the activities in A_ij that start after a_k finishes. Thus, A_ij = A_ik ∪ {a_k} ∪ A_kj, and so the maximum-size set A_ij in S_ij consists of |A_ij| = |A_ik| + |A_kj| + 1 activities. The usual cut-and-paste argument shows that the optimal solution A_ij must also include optimal solutions for S_ik and S_kj.

This suggests that we might solve the activity-selection problem by dynamic programming. If we denote the size of an optimal solution for the set S_ij by c[i, j], then we would have the recurrence

  c[i, j] = c[i, k] + c[k, j] + 1.

Of course, if we did not know that an optimal solution for the set S_ij includes activity a_k, we would have to examine all activities in S_ij to find which one to choose, so that

  c[i, j] = 0                                              if S_ij is empty
  c[i, j] = max { c[i, k] + c[k, j] + 1 : a_k in S_ij }    if S_ij is nonempty.

Making the greedy choice

For the activity-selection problem, we need consider only one choice: the greedy choice. We choose an activity that leaves the resource available for as many other activities as possible. Now, of the activities we end up choosing, one of them must be the first one to finish. Choose the activity in S with the earliest finish time, since that leaves the resource available for as many of the activities that follow it as possible. Activities are sorted in monotonically increasing order by finish time; the greedy choice is activity a_1.

If we make the greedy choice, we only have to find activities that start after a_1 finishes. Since s_1 < f_1 and f_1 is the earliest finish time of any activity, no activity can have a finish time f_i <= s_1. Thus, all activities that are compatible with activity a_1 must start after a_1 finishes. Let S_1 = {a_i in S : s_i >= f_1} be the set of activities that start after a_1 finishes. Optimal substructure: if a_1 is in the optimal solution, then an optimal solution to the original problem consists of a_1 and all the activities in an optimal solution to the subproblem S_1.

Theorem 16.1 Consider any nonempty subproblem S_k, and let a_m be an activity in S_k with the earliest finish time. Then a_m is included in some maximum-size subset of mutually compatible activities of S_k.

Proof Let A_k be a maximum-size subset of mutually compatible activities in S_k, and let a_j be the activity in A_k with the earliest finish time. If a_j = a_m, we are done, since a_m is in a maximum-size subset of mutually compatible activities of S_k. If a_j != a_m, let the set A'_k = (A_k \ {a_j}) ∪ {a_m}. The activities in A'_k are disjoint because the activities in A_k are disjoint, a_j is the first activity in A_k to finish, and f_m <= f_j. Since |A'_k| = |A_k|, we conclude that A'_k is a maximum-size subset of mutually compatible activities of S_k, and it includes a_m.

We can repeatedly choose the activity that finishes first, keep only the activities compatible with this activity, and repeat until no activities remain. Moreover, because we always choose the activity with the earliest finish time, the finish times of the activities we choose must strictly increase. We can consider each activity just once overall, in monotonically increasing order of finish times.

A recursive greedy algorithm

RECURSIVE-ACTIVITY-SELECTOR(s, f, k, n)
1. m = k + 1
2. while m <= n and s[m] < f[k]   // find the first activity in S_k to finish
3.     m = m + 1
4. if m <= n
5.     return {a_m} ∪ RECURSIVE-ACTIVITY-SELECTOR(s, f, m, n)
6. else return the empty set

An iterative greedy algorithm

GREEDY-ACTIVITY-SELECTOR(s, f)
1. n = s.length
2. A = {a_1}
3. k = 1
4. for m = 2 to n
5.     if s[m] >= f[k]
6.         A = A ∪ {a_m}
7.         k = m
8. return A
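The iterative selector can be sketched in Python as follows (a minimal sketch using 0-based indices; it assumes the activities are already sorted by finish time, and the example data is the activity table from earlier in these notes):

```python
def greedy_activity_selector(s, f):
    """Select a maximum-size set of mutually compatible activities.

    s[i], f[i] are the start/finish times of activity i; the lists must
    already be sorted by monotonically increasing finish time.
    Returns the 0-based indices of the chosen activities.
    """
    n = len(s)
    if n == 0:
        return []
    selected = [0]          # greedy choice: the first activity to finish
    k = 0                   # index of the last activity added
    for m in range(1, n):
        if s[m] >= f[k]:    # a_m is compatible with everything chosen so far
            selected.append(m)
            k = m
    return selected

# Example activities; 0-based indices 0, 3, 7, 10 correspond to a_1, a_4, a_8, a_11
s = [1, 3, 0, 5, 3, 5, 6, 8, 8, 2, 12]
f = [4, 5, 6, 7, 9, 9, 10, 11, 12, 14, 16]
print(greedy_activity_selector(s, f))  # [0, 3, 7, 10]
```

The single pass over the sorted activities makes the Theta(n) running time of the iterative version immediate.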

The set A returned by the call GREEDY-ACTIVITY-SELECTOR(s, f) is precisely the set returned by the call RECURSIVE-ACTIVITY-SELECTOR(s, f, 0, n). Both the recursive version and the iterative algorithm schedule a set of n activities in Theta(n) time, assuming that the activities were already sorted initially by their finish times.

Huffman codes

Huffman codes compress data very effectively: savings of 20% to 90% are typical, depending on the characteristics of the data being compressed. We consider the data to be a sequence of characters. Huffman's greedy algorithm uses a table giving how often each character occurs (i.e., its frequency) to build up an optimal way of representing each character as a binary string.

Suppose we have a 100,000-character data file that we wish to store compactly. We observe that the characters in the file occur with the frequencies given in the following table:

                               a    b    c    d    e    f
  Frequency (in thousands)    45   13   12   16    9    5
  Fixed-length codeword      000  001  010  011  100  101
  Variable-length codeword     0  101  100  111 1101 1100

That is, only 6 different characters appear, and the character a occurs 45,000 times. Here, we consider the problem of designing a binary character code (or code for short) in which each character is represented by a unique binary string, which we call a codeword.

A fixed-length code requires 3 bits to represent 6 characters: a = 000, b = 001, ..., f = 101. We then need 300,000 bits to code the entire file. A variable-length code gives frequent characters short codewords and infrequent characters long codewords. Here the 1-bit string 0 represents a, and the 4-bit string 1100 represents f. This code requires (45*1 + 13*3 + 12*3 + 16*3 + 9*4 + 5*4) * 1,000 = 224,000 bits (a savings of about 25%).

Prefix codes

We consider only codes in which no codeword is also a prefix of some other codeword. A prefix code can always achieve the optimal data compression among any character code, and so we can restrict our attention to prefix codes. Encoding is always simple for any binary character code; we just concatenate the codewords representing each character of the file. E.g., with the variable-length prefix code above, we code the 3-character file abc as 0.101.100 = 0101100.

Prefix codes simplify decoding. Since no codeword is a prefix of any other, the codeword that begins an encoded file is unambiguous. We can simply identify the initial codeword, translate it back to the original character, and repeat the decoding process on the remainder of the encoded file. In our example, the string 001011101 parses uniquely as 0.0.101.1101, which decodes to aabe.

The decoding process needs a convenient representation for the prefix code so that we can easily pick off the initial codeword. A binary tree whose leaves are the given characters provides one such representation. We interpret the binary codeword for a character as the simple path from the root to that character, where 0 means "go to the left child" and 1 means "go to the right child". Note that the trees are not BSTs: the leaves need not appear in sorted order, and internal nodes do not contain character keys.

The trees corresponding to the fixed-length code a = 000, ..., f = 101 and the optimal prefix code a = 0, b = 101, ..., f = 1100.
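Decoding by walking the code tree can be sketched as follows (a minimal Python sketch; here the tree is built from the example codeword table rather than by Huffman's algorithm, and the dict-of-dicts trie representation is my own choice):

```python
def build_code_tree(code):
    """Build a binary trie from a {char: codeword} table.
    Nodes are dicts keyed by '0'/'1'; a leaf stores its character under '$'."""
    root = {}
    for ch, word in code.items():
        node = root
        for bit in word:
            node = node.setdefault(bit, {})
        node['$'] = ch
    return root

def decode(bits, root):
    """Walk the tree from the root; each time a leaf is reached,
    one codeword has been consumed, so emit its character and restart."""
    out, node = [], root
    for bit in bits:
        node = node[bit]
        if '$' in node:
            out.append(node['$'])
            node = root
    return ''.join(out)

code = {'a': '0', 'b': '101', 'c': '100', 'd': '111', 'e': '1101', 'f': '1100'}
tree = build_code_tree(code)
print(decode('001011101', tree))   # aabe
print(decode('0101100', tree))     # abc
```

The prefix property is what makes the leaf test unambiguous: no internal node of the trie ever carries a character.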

An optimal code for a file is always represented by a full binary tree, in which every nonleaf node has two children. The fixed-length code in our example is not optimal, since its tree is not a full binary tree: it contains codewords beginning 10..., but none beginning 11... Since we can now restrict our attention to full binary trees, we can say that if C is the alphabet from which the characters are drawn and all character frequencies are positive, then the tree for an optimal prefix code has exactly |C| leaves, one for each letter of the alphabet, and exactly |C| - 1 internal nodes.

Given a tree T corresponding to a prefix code, we can easily compute the number of bits required to encode a file. For each character c in the alphabet C, let the attribute c.freq denote the frequency of c in the file, and let d_T(c) denote the depth of c's leaf in the tree; d_T(c) is also the length of the codeword for character c. The number of bits required to encode a file is thus

  B(T) = sum over c in C of c.freq * d_T(c),

which we define as the cost of the tree T.

Constructing a Huffman code

Let C be a set of n characters, and let each character c in C be an object with an attribute c.freq. The algorithm builds the tree T corresponding to the optimal code bottom-up. It begins with |C| leaves and performs |C| - 1 merging operations to create the final tree. We use a min-priority queue Q, keyed on freq, to identify the two least-frequent objects to merge. The result of a merge is a new object whose frequency is the sum of the frequencies of the two objects merged.

HUFFMAN(C)
1. n = |C|
2. Q = C
3. for i = 1 to n - 1
4.     allocate a new node z
5.     z.left = x = EXTRACT-MIN(Q)
6.     z.right = y = EXTRACT-MIN(Q)
7.     z.freq = x.freq + y.freq
8.     INSERT(Q, z)
9. return EXTRACT-MIN(Q)   // return the root of the tree
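HUFFMAN can be sketched in Python with heapq as the min-priority queue (a sketch, not the book's pseudocode verbatim: a running counter breaks frequency ties so the heap never compares tree nodes, and internal nodes are plain (left, right) tuples):

```python
import heapq
from itertools import count

def huffman(freq):
    """Build a Huffman tree from {char: frequency} and return {char: codeword}."""
    tick = count()                       # tie-breaker for equal frequencies
    heap = [(f, next(tick), ch) for ch, f in freq.items()]
    heapq.heapify(heap)                  # Q = C
    while len(heap) > 1:                 # n - 1 merging operations
        fx, _, x = heapq.heappop(heap)   # x = EXTRACT-MIN(Q)
        fy, _, y = heapq.heappop(heap)   # y = EXTRACT-MIN(Q)
        heapq.heappush(heap, (fx + fy, next(tick), (x, y)))
    _, _, root = heap[0]

    codes = {}
    def walk(node, word):
        if isinstance(node, tuple):      # internal node
            walk(node[0], word + '0')    # 0 = go to the left child
            walk(node[1], word + '1')    # 1 = go to the right child
        else:
            codes[node] = word or '0'    # a lone character gets codeword '0'
    walk(root, '')
    return codes

freq = {'a': 45, 'b': 13, 'c': 12, 'd': 16, 'e': 9, 'f': 5}
codes = huffman(freq)
cost = sum(freq[ch] * len(w) for ch, w in codes.items())
print(cost)  # 224, i.e., 224,000 bits for the 100,000-character example file
```

On the example frequencies this reproduces the cost B(T) = 224 (in thousands of bits) computed earlier, with a getting a 1-bit codeword.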

To analyze the running time of HUFFMAN, let Q be implemented as a binary min-heap. For a set C of n characters, we can initialize Q (line 2) in O(n) time using BUILD-MIN-HEAP. The for loop executes exactly n - 1 times, and since each heap operation requires time O(lg n), the loop contributes O(n lg n) to the running time. Thus, the total running time of HUFFMAN on a set of n characters is O(n lg n). The running time reduces to O(n lg lg n) by replacing the binary min-heap with a van Emde Boas tree.

Correctness of Huffman's algorithm

The problem of determining an optimal prefix code exhibits the greedy-choice and optimal-substructure properties.

Lemma 16.2 Let C be an alphabet in which each character c in C has frequency c.freq. Let x and y be two characters in C having the lowest frequencies. Then there exists an optimal prefix code for C in which the codewords for x and y have the same length and differ only in the last bit.

Lemma 16.3 Let C, c.freq, x, and y be as in Lemma 16.2. Let C' = (C \ {x, y}) ∪ {z}. Define freq for C' as for C, except that z.freq = x.freq + y.freq. Let T' be any tree representing an optimal prefix code for the alphabet C'. Then the tree T, obtained from T' by replacing the leaf node for z with an internal node having x and y as children, represents an optimal prefix code for the alphabet C.

Theorem 16.4 Procedure HUFFMAN produces an optimal prefix code.

17 Amortized Analysis

In amortized analysis, we average the time required to perform a sequence of data-structure operations over all the operations performed. Thus, we can show that the average cost of an operation is small, even if a single operation within the sequence might be expensive. Amortized analysis differs from average-case analysis in that probability is not involved: amortized analysis guarantees the average performance of each operation in the worst case.

Aggregate analysis

We show that for all n, a sequence of n operations takes worst-case time T(n) in total. In the worst case, the average cost, or amortized cost, per operation is therefore T(n)/n. This amortized cost applies to each operation, even when there are several types of operations in the sequence. The other two methods we shall study may assign different amortized costs to different types of operations.

Stack operations

The fundamental stack operations PUSH(S, x) and POP(S) each take O(1) time. Let us consider the cost of each to be 1. The total cost of a sequence of n PUSH and POP operations is therefore n, and the actual running time for n operations is therefore Theta(n). Now we add the stack operation MULTIPOP(S, k), which removes the top k objects of stack S, popping the entire stack if the stack contains fewer than k objects. The operation STACK-EMPTY returns TRUE if there are no objects currently on the stack, and FALSE otherwise.

MULTIPOP(S, k)
1. while not STACK-EMPTY(S) and k > 0
2.     POP(S)
3.     k = k - 1

The total cost of MULTIPOP is min(s, k), where s is the stack size, and the actual running time is a linear function of this cost.
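A Python sketch of the stack with MULTIPOP (the class and method names are mine, mirroring the pseudocode; each pop inside multipop costs one unit, so the actual cost is min(s, k)):

```python
class Stack:
    def __init__(self):
        self._items = []

    def push(self, x):          # actual cost 1
        self._items.append(x)

    def pop(self):              # actual cost 1
        return self._items.pop()

    def stack_empty(self):
        return not self._items

    def multipop(self, k):
        """Pop the top k objects, or the whole stack if it holds fewer.
        Actual cost: min(s, k), where s is the current stack size."""
        popped = []
        while not self.stack_empty() and k > 0:
            popped.append(self.pop())
            k -= 1
        return popped

S = Stack()
for x in range(5):
    S.push(x)
print(S.multipop(3))   # [4, 3, 2]
print(S.multipop(10))  # [1, 0] -- the stack held only 2 objects
```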

Let us analyze a sequence of n PUSH, POP, and MULTIPOP operations on an initially empty stack. The worst-case cost of a MULTIPOP operation is O(n), since the stack size is at most n. The worst-case time of any stack operation is therefore O(n), and hence a sequence of n operations costs O(n^2). This analysis is correct, but the result is not tight. Using aggregate analysis, we can obtain a better upper bound that considers the entire sequence of n operations.

We can pop each object from the stack at most once for each time we have pushed it onto the stack. The number of times that POP can be called on a nonempty stack, including calls within MULTIPOP, is at most the number of PUSH operations, which is at most n. Any sequence of n PUSH, POP, and MULTIPOP operations takes a total of O(n) time. The average cost of an operation is O(n)/n = O(1).

Incrementing a binary counter

Consider the problem of implementing a k-bit binary counter that counts upward from 0. We use an array A[0 .. k - 1] of bits, where A.length = k, as the counter. A binary number x that is stored in the counter has its lowest-order bit in A[0] and its highest-order bit in A[k - 1], so that

  x = sum for i = 0 to k - 1 of A[i] * 2^i.

Initially, x = 0: A[i] = 0 for i = 0, 1, ..., k - 1. To add 1 (modulo 2^k) to the value in the counter, we use the following procedure:

INCREMENT(A)
1. i = 0
2. while i < A.length and A[i] == 1
3.     A[i] = 0
4.     i = i + 1
5. if i < A.length
6.     A[i] = 1

At the start of each iteration of the while loop (lines 2-4), we wish to add a 1 into position i. If A[i] = 1, then adding 1 flips the bit to 0 in position i and yields a carry of 1, to be added into position i + 1 on the next iteration of the loop. Otherwise, the loop ends, and then, if i < k, we know that A[i] = 0, so that line 6 adds a 1 into position i, flipping the 0 to a 1. The cost of each INCREMENT operation is linear in the number of bits flipped.
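INCREMENT can be sketched directly in Python, returning the number of bits flipped so we can track the actual cost:

```python
def increment(A):
    """Add 1 (mod 2^k) to the k-bit counter A, where A[0] is the
    lowest-order bit. Returns the number of bits flipped (actual cost)."""
    i = 0
    while i < len(A) and A[i] == 1:
        A[i] = 0              # flip a 1 to 0; carry into position i + 1
        i += 1
    flips = i
    if i < len(A):
        A[i] = 1              # line 6: set the first 0 bit
        flips += 1
    return flips

A = [0] * 8                   # k = 8, counter value 0
for _ in range(5):
    increment(A)
print(A)                      # [1, 0, 1, 0, 0, 0, 0, 0], i.e., 5 in binary
```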

A cursory analysis yields a bound that is correct but not tight. A single execution of INCREMENT takes time Theta(k) in the worst case, when array A contains all 1s. Thus, a sequence of n INCREMENT operations on an initially zero counter takes time O(nk). We can tighten the analysis to yield a worst-case cost of O(n) by observing that not all bits flip each time INCREMENT is called. A[0] does flip each time INCREMENT is called. The next bit up, A[1], flips only every other time: a sequence of n INCREMENT operations on an initially zero counter causes A[1] to flip floor(n/2) times. Similarly, bit A[2] flips only every fourth time, or floor(n/4) times in a sequence of n INCREMENT operations. In general, bit A[i] flips floor(n/2^i) times in a sequence of n INCREMENT operations on an initially zero counter. For i >= k, bit A[i] does not exist, and so it cannot flip.

The total number of flips in the sequence is thus

  sum for i = 0 to k - 1 of floor(n/2^i)  <  n * sum for i = 0 to infinity of 1/2^i  =  2n,

by the infinite decreasing geometric series sum of x^i = 1/(1 - x), where |x| < 1. The worst-case time for a sequence of n INCREMENT operations on an initially zero counter is therefore O(n). The average cost of each operation, and therefore the amortized cost per operation, is O(n)/n = O(1).

The accounting method

We assign differing charges to different operations, some charged more or less than they actually cost. When an operation's amortized cost exceeds its actual cost, we assign the difference to specific objects in the data structure as credit. Credit can help pay for later operations. We can view the amortized cost of an operation as being split between its actual cost and credit that is either deposited or used up. The amortized costs of different operations may differ.
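The 2n bound from the aggregate analysis can be checked empirically (a sketch that inlines an INCREMENT like the pseudocode above and totals the bit flips):

```python
def total_flips(n, k):
    """Run n INCREMENTs on an initially zero k-bit counter and
    return the total number of bit flips (the total actual cost)."""
    A, total = [0] * k, 0
    for _ in range(n):
        i = 0
        while i < k and A[i] == 1:
            A[i] = 0
            i += 1
        total += i              # resets performed by this INCREMENT
        if i < k:
            A[i] = 1
            total += 1          # plus at most one bit set to 1
    return total

n, k = 1000, 16
print(total_flips(n, k))        # 1994, comfortably below 2n = 2000
```

The exact value matches the sum floor(n/2^0) + floor(n/2^1) + ... = 1000 + 500 + 250 + 125 + 62 + 31 + 15 + 7 + 3 + 1 = 1994 < 2n.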

We want to show that in the worst case the average cost per operation is small by analyzing with amortized costs. We must ensure that the total amortized cost of a sequence of operations provides an upper bound on the total actual cost of the sequence. Moreover, this relationship must hold for all sequences of operations. Let c_i be the actual cost of the i-th operation and c'_i its amortized cost; we require

  sum for i = 1 to n of c'_i  >=  sum for i = 1 to n of c_i

for all sequences of n operations.

The total credit stored in the data structure is the difference between the total amortized cost and the total actual cost,

  sum of c'_i - sum of c_i.

The total credit associated with the data structure must be nonnegative at all times. If we ever were to allow the total credit to become negative, then the total amortized cost would not be an upper bound on the total actual cost. We must take care that the total credit in the data structure never becomes negative.

Stack operations

Recall the actual costs of the operations: PUSH: 1, POP: 1, and MULTIPOP: min(s, k), where k is the argument of MULTIPOP and s is the stack size. Let us assign the following amortized costs: PUSH: 2, POP: 0, and MULTIPOP: 0. The amortized cost of MULTIPOP is a constant, whereas the actual cost is variable. In general, the amortized costs of the operations under consideration may differ from each other, and they may even differ asymptotically.

We can pay for any sequence of stack operations by charging the amortized costs. We start with an empty stack. When we push an object on the stack, we use 1 unit of cost to pay the actual cost of the push and are left with a credit of 1 unit. The unit stored serves as prepayment for the cost of popping the object from the stack. When we execute a POP operation, we charge the operation nothing and pay its actual cost using the credit stored in the stack.

We can also charge MULTIPOP operations nothing. To pop the 1st element, we take 1 unit of credit and use it to pay the actual cost of a POP operation. To pop a 2nd element, we again have a unit of credit to pay for the POP operation, and so on. Thus, we have always charged enough up front to pay for MULTIPOP operations. For any sequence of n operations, the total amortized cost is an upper bound on the total actual cost. Since the total amortized cost is O(n), so is the total actual cost.

Incrementing a binary counter

The running time is proportional to the number of bits flipped, which we use as our cost. Let us charge an amortized cost of 2 units to set a bit to 1. We use 1 unit to pay for the setting of the bit, and we place the other unit on the bit as credit to be used later when we flip the bit back to 0. At any point in time, every 1 in the counter has a unit of credit on it, and thus we can charge nothing to reset a bit to 0.
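The credit invariant, that every 1-bit carries exactly one unit of credit, can be checked mechanically: charge an amortized 2 per INCREMENT, pay the actual flip costs out of the bank, and verify the bank equals the number of 1s and never goes negative (a sketch; k is kept large enough that the counter does not overflow):

```python
def accounting_counter(n, k=16):
    """Charge an amortized 2 per INCREMENT; pay the actual flip costs
    from the accumulated credit and check it stays nonnegative."""
    A, credit = [0] * k, 0
    for _ in range(n):
        credit += 2                    # amortized charge for this INCREMENT
        i = 0
        while i < k and A[i] == 1:
            A[i] = 0
            credit -= 1                # each reset is paid by stored credit
            i += 1
        if i < k:
            A[i] = 1
            credit -= 1                # 1 unit pays for setting the bit;
                                       # the other unit stays on the bit
        assert credit == sum(A) >= 0   # every 1-bit holds one unit of credit
    return credit

print(accounting_counter(1000))        # 6 -- the popcount of 1000 = 0b1111101000
```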

The amortized cost of INCREMENT: the cost of resetting the bits within the while loop is paid for by the units of credit on the bits that are reset. The INCREMENT procedure sets at most one bit to 1 (line 6), and therefore the amortized cost of an INCREMENT operation is at most 2 units. The number of 1s in the counter never becomes negative, and thus the amount of credit stays nonnegative at all times. For n INCREMENT operations, the total amortized cost is O(n), which bounds the total actual cost.

The potential method

We represent the prepaid work as "potential energy", or just "potential", which can be released to pay for future operations. We associate the potential with the data structure as a whole rather than with specific objects within it. We will perform n operations, starting with an initial data structure D_0. Let c_i be the actual cost of the i-th operation and D_i the data structure that results after applying the i-th operation to data structure D_{i-1}.

A potential function Phi maps each data structure D_i to a real number Phi(D_i), the potential of D_i. The amortized cost c'_i of the i-th operation with respect to potential function Phi is defined by

  c'_i = c_i + Phi(D_i) - Phi(D_{i-1}),

the actual cost plus the change in potential. The total amortized cost of the n operations is

  sum of c'_i = sum of (c_i + Phi(D_i) - Phi(D_{i-1})) = sum of c_i + Phi(D_n) - Phi(D_0);

the second equality follows because the Phi(D_i) terms telescope.

If we can define a potential function Phi so that Phi(D_n) >= Phi(D_0), then the total amortized cost gives an upper bound on the total actual cost. In practice, we do not always know how many operations might be performed. Therefore, if we require that Phi(D_i) >= Phi(D_0) for all i, then we guarantee, as in the accounting method, that we pay in advance. We usually just define Phi(D_0) to be 0 and then show that Phi(D_i) >= 0 for all i.

If the potential difference Phi(D_i) - Phi(D_{i-1}) of the i-th operation is positive, then the amortized cost c'_i represents an overcharge to the i-th operation, and the potential of the data structure increases. If the potential difference is negative, then the amortized cost represents an undercharge to the i-th operation, and the decrease in the potential pays for the actual cost of the operation. Different potential functions may yield different amortized costs, yet still be upper bounds on the actual costs.

Stack operations

We define the potential function Phi on a stack to be the number of objects in the stack. For the empty stack D_0, we have Phi(D_0) = 0. Since the number of objects in the stack is never negative, the stack D_i that results after the i-th operation has nonnegative potential, and thus

  Phi(D_i) >= 0 = Phi(D_0).

The total amortized cost of n operations with respect to Phi therefore represents an upper bound on the actual cost.

If the i-th operation on a stack containing s objects is a PUSH operation, then the potential difference is

  Phi(D_i) - Phi(D_{i-1}) = (s + 1) - s = 1.

The amortized cost of this PUSH operation is

  c'_i = c_i + Phi(D_i) - Phi(D_{i-1}) = 1 + 1 = 2.

Suppose that the i-th operation on the stack is MULTIPOP(S, k), which causes k' = min(s, k) objects to be popped off the stack. The actual cost of the operation is k', and the potential difference is

  Phi(D_i) - Phi(D_{i-1}) = -k'.

Thus, the amortized cost of the MULTIPOP operation is

  c'_i = c_i + Phi(D_i) - Phi(D_{i-1}) = k' - k' = 0.

Similarly, the amortized cost of an ordinary POP operation is 0. The amortized cost of each of the three operations is O(1), and thus the total amortized cost of a sequence of n operations is O(n). Since Phi(D_i) >= Phi(D_0), the total amortized cost of n operations is an upper bound on the total actual cost. The worst-case cost of n operations is therefore O(n).
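These amortized costs can be checked mechanically with Phi = stack size (a sketch of mine using a plain Python list as the stack):

```python
def amortized(actual_cost, phi_before, phi_after):
    """c'_i = c_i + Phi(D_i) - Phi(D_{i-1})"""
    return actual_cost + phi_after - phi_before

S = []
phi = len(S)                    # Phi(D_0) = 0

# PUSH: actual cost 1, potential rises by 1 -> amortized cost 2
S.append('x')
assert amortized(1, phi, len(S)) == 2
phi = len(S)

# MULTIPOP(S, k): actual cost k' = min(s, k), potential falls by k'
# -> amortized cost 0
S.append('y')
S.append('z')
phi = len(S)                    # s = 3
k = 5
kprime = min(len(S), k)         # k' = 3
del S[-kprime:]                 # pop k' objects
assert amortized(kprime, phi, len(S)) == 0

print("PUSH amortized = 2, MULTIPOP amortized = 0: checks pass")
```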

Incrementing a binary counter

We define the potential of the counter after the i-th INCREMENT to be b_i, the number of 1s in the counter after the i-th operation. Suppose that the i-th INCREMENT operation resets t_i bits. The actual cost of the operation is therefore at most t_i + 1, since in addition to resetting t_i bits, it sets at most one bit to 1. If b_i = 0, then the i-th operation resets all k bits, and so b_{i-1} = t_i = k. If b_i > 0, then b_i = b_{i-1} - t_i + 1. In either case, b_i <= b_{i-1} - t_i + 1, and the potential difference is

  Phi(D_i) - Phi(D_{i-1}) <= (b_{i-1} - t_i + 1) - b_{i-1} = 1 - t_i.

The amortized cost is therefore

  c'_i = c_i + Phi(D_i) - Phi(D_{i-1}) <= (t_i + 1) + (1 - t_i) = 2.

If the counter starts at zero, then Phi(D_0) = 0. Since Phi(D_i) >= 0 for all i, the total amortized cost of a sequence of n INCREMENT operations is an upper bound on the total actual cost, and so the worst-case cost of n INCREMENT operations is O(n).

The potential method gives us a way to analyze the counter even when it does not start at zero. The counter starts with b_0 1s, and after n INCREMENT operations it has b_n 1s, where 0 <= b_0, b_n <= k. We rewrite the total actual cost in terms of the amortized cost as

  sum of c_i = sum of c'_i - Phi(D_n) + Phi(D_0).

We have c'_i <= 2 for all 1 <= i <= n. Since Phi(D_0) = b_0 and Phi(D_n) = b_n, the total actual cost of n INCREMENT operations is

  sum of c_i <= 2n - b_n + b_0.

Note that since b_0 <= k, as long as k = O(n), the total actual cost is O(n). I.e., if we execute at least n = Omega(k) INCREMENT operations, the total actual cost is O(n), no matter what initial value the counter contains.
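The bound sum of c_i <= 2n - b_n + b_0 can be checked for an arbitrary starting value (a sketch reusing a flip-counting INCREMENT; the starting value 0xBEEF is an arbitrary choice of mine, small enough that a 16-bit counter does not wrap during the run):

```python
def run_counter(start, n, k=16):
    """Run n INCREMENTs from an arbitrary start value.
    Return (total actual cost, b_0, b_n): total flips, initial 1s, final 1s."""
    A = [(start >> i) & 1 for i in range(k)]   # A[0] = lowest-order bit
    b0, total = sum(A), 0
    for _ in range(n):
        i = 0
        while i < k and A[i] == 1:
            A[i] = 0
            i += 1
        total += i
        if i < k:
            A[i] = 1
            total += 1
    return total, b0, sum(A)

n = 500
cost, b0, bn = run_counter(0xBEEF, n)
assert cost <= 2 * n - bn + b0             # total actual cost <= 2n - b_n + b_0
print(cost, b0, bn)
```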


More information

CSci 231 Final Review

CSci 231 Final Review CSci 231 Final Review Here is a list of topics for the final. Generally you are responsible for anything discussed in class (except topics that appear italicized), and anything appearing on the homeworks.

More information

We assume uniform hashing (UH):

We assume uniform hashing (UH): We assume uniform hashing (UH): the probe sequence of each key is equally likely to be any of the! permutations of 0,1,, 1 UH generalizes the notion of SUH that produces not just a single number, but a

More information

G205 Fundamentals of Computer Engineering. CLASS 21, Mon. Nov Stefano Basagni Fall 2004 M-W, 1:30pm-3:10pm

G205 Fundamentals of Computer Engineering. CLASS 21, Mon. Nov Stefano Basagni Fall 2004 M-W, 1:30pm-3:10pm G205 Fundamentals of Computer Engineering CLASS 21, Mon. Nov. 22 2004 Stefano Basagni Fall 2004 M-W, 1:30pm-3:10pm Greedy Algorithms, 1 Algorithms for Optimization Problems Sequence of steps Choices at

More information

III Data Structures. Dynamic sets

III Data Structures. Dynamic sets III Data Structures Elementary Data Structures Hash Tables Binary Search Trees Red-Black Trees Dynamic sets Sets are fundamental to computer science Algorithms may require several different types of operations

More information

Heap-on-Top Priority Queues. March Abstract. We introduce the heap-on-top (hot) priority queue data structure that combines the

Heap-on-Top Priority Queues. March Abstract. We introduce the heap-on-top (hot) priority queue data structure that combines the Heap-on-Top Priority Queues Boris V. Cherkassky Central Economics and Mathematics Institute Krasikova St. 32 117418, Moscow, Russia cher@cemi.msk.su Andrew V. Goldberg NEC Research Institute 4 Independence

More information

TU/e Algorithms (2IL15) Lecture 2. Algorithms (2IL15) Lecture 2 THE GREEDY METHOD

TU/e Algorithms (2IL15) Lecture 2. Algorithms (2IL15) Lecture 2 THE GREEDY METHOD Algorithms (2IL15) Lecture 2 THE GREEDY METHOD x y v w 1 Optimization problems for each instance there are (possibly) multiple valid solutions goal is to find an optimal solution minimization problem:

More information

February 24, :52 World Scientific Book - 9in x 6in soltys alg. Chapter 3. Greedy Algorithms

February 24, :52 World Scientific Book - 9in x 6in soltys alg. Chapter 3. Greedy Algorithms Chapter 3 Greedy Algorithms Greedy algorithms are algorithms prone to instant gratification. Without looking too far ahead, at each step they make a locally optimum choice, with the hope that it will lead

More information

COMP251: Amortized Analysis

COMP251: Amortized Analysis COMP251: Amortized Analysis Jérôme Waldispühl School of Computer Science McGill University Based on (Cormen et al., 2009) Overview Analyze a sequence of operations on a data structure. We will talk about

More information

So the actual cost is 2 Handout 3: Problem Set 1 Solutions the mark counter reaches c, a cascading cut is performed and the mark counter is reset to 0

So the actual cost is 2 Handout 3: Problem Set 1 Solutions the mark counter reaches c, a cascading cut is performed and the mark counter is reset to 0 Massachusetts Institute of Technology Handout 3 6854/18415: Advanced Algorithms September 14, 1999 David Karger Problem Set 1 Solutions Problem 1 Suppose that we have a chain of n 1 nodes in a Fibonacci

More information

Greedy Algorithms. Alexandra Stefan

Greedy Algorithms. Alexandra Stefan Greedy Algorithms Alexandra Stefan 1 Greedy Method for Optimization Problems Greedy: take the action that is best now (out of the current options) it may cause you to miss the optimal solution You build

More information

logn D. Θ C. Θ n 2 ( ) ( ) f n B. nlogn Ο n2 n 2 D. Ο & % ( C. Θ # ( D. Θ n ( ) Ω f ( n)

logn D. Θ C. Θ n 2 ( ) ( ) f n B. nlogn Ο n2 n 2 D. Ο & % ( C. Θ # ( D. Θ n ( ) Ω f ( n) CSE 0 Test Your name as it appears on your UTA ID Card Fall 0 Multiple Choice:. Write the letter of your answer on the line ) to the LEFT of each problem.. CIRCLED ANSWERS DO NOT COUNT.. points each. The

More information

CSC 373 Lecture # 3 Instructor: Milad Eftekhar

CSC 373 Lecture # 3 Instructor: Milad Eftekhar Huffman encoding: Assume a context is available (a document, a signal, etc.). These contexts are formed by some symbols (words in a document, discrete samples from a signal, etc). Each symbols s i is occurred

More information

CSE 431/531: Algorithm Analysis and Design (Spring 2018) Greedy Algorithms. Lecturer: Shi Li

CSE 431/531: Algorithm Analysis and Design (Spring 2018) Greedy Algorithms. Lecturer: Shi Li CSE 431/531: Algorithm Analysis and Design (Spring 2018) Greedy Algorithms Lecturer: Shi Li Department of Computer Science and Engineering University at Buffalo Main Goal of Algorithm Design Design fast

More information

Design and Analysis of Algorithms

Design and Analysis of Algorithms Design and Analysis of Algorithms CSE 5311 Lecture 16 Greedy algorithms Junzhou Huang, Ph.D. Department of Computer Science and Engineering CSE5311 Design and Analysis of Algorithms 1 Overview A greedy

More information

Lecture 5. Treaps Find, insert, delete, split, and join in treaps Randomized search trees Randomized search tree time costs

Lecture 5. Treaps Find, insert, delete, split, and join in treaps Randomized search trees Randomized search tree time costs Lecture 5 Treaps Find, insert, delete, split, and join in treaps Randomized search trees Randomized search tree time costs Reading: Randomized Search Trees by Aragon & Seidel, Algorithmica 1996, http://sims.berkeley.edu/~aragon/pubs/rst96.pdf;

More information

ECE608 - Chapter 16 answers

ECE608 - Chapter 16 answers ¼ À ÈÌ Ê ½ ÈÊÇ Ä ÅË ½µ ½ º½¹ ¾µ ½ º½¹ µ ½ º¾¹½ µ ½ º¾¹¾ µ ½ º¾¹ µ ½ º ¹ µ ½ º ¹ µ ½ ¹½ ½ ECE68 - Chapter 6 answers () CLR 6.-4 Let S be the set of n activities. The obvious solution of using Greedy-Activity-

More information

Scribe: Virginia Williams, Sam Kim (2016), Mary Wootters (2017) Date: May 22, 2017

Scribe: Virginia Williams, Sam Kim (2016), Mary Wootters (2017) Date: May 22, 2017 CS6 Lecture 4 Greedy Algorithms Scribe: Virginia Williams, Sam Kim (26), Mary Wootters (27) Date: May 22, 27 Greedy Algorithms Suppose we want to solve a problem, and we re able to come up with some recursive

More information

Efficient Sequential Algorithms, Comp309. Motivation. Longest Common Subsequence. Part 3. String Algorithms

Efficient Sequential Algorithms, Comp309. Motivation. Longest Common Subsequence. Part 3. String Algorithms Efficient Sequential Algorithms, Comp39 Part 3. String Algorithms University of Liverpool References: T. H. Cormen, C. E. Leiserson, R. L. Rivest Introduction to Algorithms, Second Edition. MIT Press (21).

More information

DATA STRUCTURES. amortized analysis binomial heaps Fibonacci heaps union-find. Lecture slides by Kevin Wayne. Last updated on Apr 8, :13 AM

DATA STRUCTURES. amortized analysis binomial heaps Fibonacci heaps union-find. Lecture slides by Kevin Wayne. Last updated on Apr 8, :13 AM DATA STRUCTURES amortized analysis binomial heaps Fibonacci heaps union-find Lecture slides by Kevin Wayne http://www.cs.princeton.edu/~wayne/kleinberg-tardos Last updated on Apr 8, 2013 6:13 AM Data structures

More information

Huffman Coding. Version of October 13, Version of October 13, 2014 Huffman Coding 1 / 27

Huffman Coding. Version of October 13, Version of October 13, 2014 Huffman Coding 1 / 27 Huffman Coding Version of October 13, 2014 Version of October 13, 2014 Huffman Coding 1 / 27 Outline Outline Coding and Decoding The optimal source coding problem Huffman coding: A greedy algorithm Correctness

More information

/633 Introduction to Algorithms Lecturer: Michael Dinitz Topic: Priority Queues / Heaps Date: 9/27/17

/633 Introduction to Algorithms Lecturer: Michael Dinitz Topic: Priority Queues / Heaps Date: 9/27/17 01.433/33 Introduction to Algorithms Lecturer: Michael Dinitz Topic: Priority Queues / Heaps Date: 9/2/1.1 Introduction In this lecture we ll talk about a useful abstraction, priority queues, which are

More information

Analysis of Algorithms

Analysis of Algorithms Algorithm An algorithm is a procedure or formula for solving a problem, based on conducting a sequence of specified actions. A computer program can be viewed as an elaborate algorithm. In mathematics and

More information

CSE100. Advanced Data Structures. Lecture 8. (Based on Paul Kube course materials)

CSE100. Advanced Data Structures. Lecture 8. (Based on Paul Kube course materials) CSE100 Advanced Data Structures Lecture 8 (Based on Paul Kube course materials) CSE 100 Treaps Find, insert, delete, split, and join in treaps Randomized search trees Randomized search tree time costs

More information

Graduate Analysis of Algorithms Prof. Karen Daniels

Graduate Analysis of Algorithms Prof. Karen Daniels UMass Lowell Computer Science 9.503 Graduate Analysis of Algorithms Prof. Karen Daniels Fall, 203 Lecture 3 Monday, 9/23/3 Amortized Analysis Stack (computational geometry application) Dynamic Table Binary

More information

Introduction to Algorithms April 21, 2004 Massachusetts Institute of Technology. Quiz 2 Solutions

Introduction to Algorithms April 21, 2004 Massachusetts Institute of Technology. Quiz 2 Solutions Introduction to Algorithms April 21, 2004 Massachusetts Institute of Technology 6.046J/18.410J Professors Erik Demaine and Shafi Goldwasser Quiz 2 Solutions Quiz 2 Solutions Do not open this quiz booklet

More information

Advanced Algorithms and Data Structures

Advanced Algorithms and Data Structures Advanced Algorithms and Data Structures Prof. Tapio Elomaa tapio.elomaa@tut.fi Course Prerequisites A seven credit unit course Replaced OHJ-2156 Analysis of Algorithms We take things a bit further than

More information

Department of Computer Applications. MCA 312: Design and Analysis of Algorithms. [Part I : Medium Answer Type Questions] UNIT I

Department of Computer Applications. MCA 312: Design and Analysis of Algorithms. [Part I : Medium Answer Type Questions] UNIT I MCA 312: Design and Analysis of Algorithms [Part I : Medium Answer Type Questions] UNIT I 1) What is an Algorithm? What is the need to study Algorithms? 2) Define: a) Time Efficiency b) Space Efficiency

More information

CS583 Lecture 12. Previously. Jana Kosecka. Amortized/Accounting Analysis Disjoint Sets. Dynamic Programming Greedy Algorithms

CS583 Lecture 12. Previously. Jana Kosecka. Amortized/Accounting Analysis Disjoint Sets. Dynamic Programming Greedy Algorithms CS583 Lecture 12 Jana Kosecka Amortized/Accounting Analysis Disjoint Sets Dynamic Programming Greedy Algorithms Slight digression Amortized analysis Disjoint sets Previously 1 Amortized Analysis Amortized

More information

( ) n 3. n 2 ( ) D. Ο

( ) n 3. n 2 ( ) D. Ο CSE 0 Name Test Summer 0 Last Digits of Mav ID # Multiple Choice. Write your answer to the LEFT of each problem. points each. The time to multiply two n n matrices is: A. Θ( n) B. Θ( max( m,n, p) ) C.

More information

Lecture 3 February 9, 2010

Lecture 3 February 9, 2010 6.851: Advanced Data Structures Spring 2010 Dr. André Schulz Lecture 3 February 9, 2010 Scribe: Jacob Steinhardt and Greg Brockman 1 Overview In the last lecture we continued to study binary search trees

More information

10/24/ Rotations. 2. // s left subtree s right subtree 3. if // link s parent to elseif == else 11. // put x on s left

10/24/ Rotations. 2. // s left subtree s right subtree 3. if // link s parent to elseif == else 11. // put x on s left 13.2 Rotations MAT-72006 AA+DS, Fall 2013 24-Oct-13 368 LEFT-ROTATE(, ) 1. // set 2. // s left subtree s right subtree 3. if 4. 5. // link s parent to 6. if == 7. 8. elseif == 9. 10. else 11. // put x

More information

Computational Optimization ISE 407. Lecture 16. Dr. Ted Ralphs

Computational Optimization ISE 407. Lecture 16. Dr. Ted Ralphs Computational Optimization ISE 407 Lecture 16 Dr. Ted Ralphs ISE 407 Lecture 16 1 References for Today s Lecture Required reading Sections 6.5-6.7 References CLRS Chapter 22 R. Sedgewick, Algorithms in

More information

Chapter 5 Lempel-Ziv Codes To set the stage for Lempel-Ziv codes, suppose we wish to nd the best block code for compressing a datavector X. Then we ha

Chapter 5 Lempel-Ziv Codes To set the stage for Lempel-Ziv codes, suppose we wish to nd the best block code for compressing a datavector X. Then we ha Chapter 5 Lempel-Ziv Codes To set the stage for Lempel-Ziv codes, suppose we wish to nd the best block code for compressing a datavector X. Then we have to take into account the complexity of the code.

More information

CSE100. Advanced Data Structures. Lecture 12. (Based on Paul Kube course materials)

CSE100. Advanced Data Structures. Lecture 12. (Based on Paul Kube course materials) CSE100 Advanced Data Structures Lecture 12 (Based on Paul Kube course materials) CSE 100 Coding and decoding with a Huffman coding tree Huffman coding tree implementation issues Priority queues and priority

More information

CSC 373: Algorithm Design and Analysis Lecture 4

CSC 373: Algorithm Design and Analysis Lecture 4 CSC 373: Algorithm Design and Analysis Lecture 4 Allan Borodin January 14, 2013 1 / 16 Lecture 4: Outline (for this lecture and next lecture) Some concluding comments on optimality of EST Greedy Interval

More information

Lecture Notes for Advanced Algorithms

Lecture Notes for Advanced Algorithms Lecture Notes for Advanced Algorithms Prof. Bernard Moret September 29, 2011 Notes prepared by Blanc, Eberle, and Jonnalagedda. 1 Average Case Analysis 1.1 Reminders on quicksort and tree sort We start

More information

9/29/2016. Chapter 4 Trees. Introduction. Terminology. Terminology. Terminology. Terminology

9/29/2016. Chapter 4 Trees. Introduction. Terminology. Terminology. Terminology. Terminology Introduction Chapter 4 Trees for large input, even linear access time may be prohibitive we need data structures that exhibit average running times closer to O(log N) binary search tree 2 Terminology recursive

More information

Problem Strategies. 320 Greedy Strategies 6

Problem Strategies. 320 Greedy Strategies 6 Problem Strategies Weighted interval scheduling: 2 subproblems (include the interval or don t) Have to check out all the possibilities in either case, so lots of subproblem overlap dynamic programming:

More information

CS2223: Algorithms Sorting Algorithms, Heap Sort, Linear-time sort, Median and Order Statistics

CS2223: Algorithms Sorting Algorithms, Heap Sort, Linear-time sort, Median and Order Statistics CS2223: Algorithms Sorting Algorithms, Heap Sort, Linear-time sort, Median and Order Statistics 1 Sorting 1.1 Problem Statement You are given a sequence of n numbers < a 1, a 2,..., a n >. You need to

More information

CS301 - Data Structures Glossary By

CS301 - Data Structures Glossary By CS301 - Data Structures Glossary By Abstract Data Type : A set of data values and associated operations that are precisely specified independent of any particular implementation. Also known as ADT Algorithm

More information

Outline for Today. Why could we construct them in time O(n)? Analyzing data structures over the long term.

Outline for Today. Why could we construct them in time O(n)? Analyzing data structures over the long term. Amortized Analysis Outline for Today Euler Tour Trees A quick bug fix from last time. Cartesian Trees Revisited Why could we construct them in time O(n)? Amortized Analysis Analyzing data structures over

More information

Analysis of Algorithms Prof. Karen Daniels

Analysis of Algorithms Prof. Karen Daniels UMass Lowell Computer Science 91.503 Analysis of Algorithms Prof. Karen Daniels Spring, 2010 Lecture 2 Tuesday, 2/2/10 Design Patterns for Optimization Problems Greedy Algorithms Algorithmic Paradigm Context

More information

Computer Science 210 Data Structures Siena College Fall Topic Notes: Priority Queues and Heaps

Computer Science 210 Data Structures Siena College Fall Topic Notes: Priority Queues and Heaps Computer Science 0 Data Structures Siena College Fall 08 Topic Notes: Priority Queues and Heaps Heaps and Priority Queues From here, we will look at some ways that trees are used in other structures. First,

More information

D. Θ nlogn ( ) D. Ο. ). Which of the following is not necessarily true? . Which of the following cannot be shown as an improvement? D.

D. Θ nlogn ( ) D. Ο. ). Which of the following is not necessarily true? . Which of the following cannot be shown as an improvement? D. CSE 0 Name Test Fall 00 Last Digits of Mav ID # Multiple Choice. Write your answer to the LEFT of each problem. points each. The time to convert an array, with priorities stored at subscripts through n,

More information

Amortized Analysis. Andreas Klappenecker. [partially based on the slides of Prof. Welch]

Amortized Analysis. Andreas Klappenecker. [partially based on the slides of Prof. Welch] Amortized Analysis Andreas Klappenecker [partially based on the slides of Prof. Welch] 1 People who analyze algorithms have double happiness. First of all they experience the sheer beauty of elegant mathematical

More information

CS 6783 (Applied Algorithms) Lecture 5

CS 6783 (Applied Algorithms) Lecture 5 CS 6783 (Applied Algorithms) Lecture 5 Antonina Kolokolova January 19, 2012 1 Minimum Spanning Trees An undirected graph G is a pair (V, E); V is a set (of vertices or nodes); E is a set of (undirected)

More information

Lecture: Analysis of Algorithms (CS )

Lecture: Analysis of Algorithms (CS ) Lecture: Analysis of Algorithms (CS483-001) Amarda Shehu Spring 2017 1 The Fractional Knapsack Problem Huffman Coding 2 Sample Problems to Illustrate The Fractional Knapsack Problem Variable-length (Huffman)

More information

Thus, it is reasonable to compare binary search trees and binary heaps as is shown in Table 1.

Thus, it is reasonable to compare binary search trees and binary heaps as is shown in Table 1. 7.2 Binary Min-Heaps A heap is a tree-based structure, but it doesn t use the binary-search differentiation between the left and right sub-trees to create a linear ordering. Instead, a binary heap only

More information

n 2 ( ) ( ) Ο f ( n) ( ) Ω B. n logn Ο

n 2 ( ) ( ) Ο f ( n) ( ) Ω B. n logn Ο CSE 220 Name Test Fall 20 Last 4 Digits of Mav ID # Multiple Choice. Write your answer to the LEFT of each problem. 4 points each. The time to compute the sum of the n elements of an integer array is in:

More information

Main approach: always make the choice that looks best at the moment.

Main approach: always make the choice that looks best at the moment. Greedy algorithms Main approach: always make the choice that looks best at the moment. - More efficient than dynamic programming - Always make the choice that looks best at the moment (just one choice;

More information

CS 337 Project 1: Minimum-Weight Binary Search Trees

CS 337 Project 1: Minimum-Weight Binary Search Trees CS 337 Project 1: Minimum-Weight Binary Search Trees September 6, 2006 1 Preliminaries Let X denote a set of keys drawn from some totally ordered universe U, such as the set of all integers, or the set

More information

FINALTERM EXAMINATION Fall 2009 CS301- Data Structures Question No: 1 ( Marks: 1 ) - Please choose one The data of the problem is of 2GB and the hard

FINALTERM EXAMINATION Fall 2009 CS301- Data Structures Question No: 1 ( Marks: 1 ) - Please choose one The data of the problem is of 2GB and the hard FINALTERM EXAMINATION Fall 2009 CS301- Data Structures Question No: 1 The data of the problem is of 2GB and the hard disk is of 1GB capacity, to solve this problem we should Use better data structures

More information

Figure 4.1: The evolution of a rooted tree.

Figure 4.1: The evolution of a rooted tree. 106 CHAPTER 4. INDUCTION, RECURSION AND RECURRENCES 4.6 Rooted Trees 4.6.1 The idea of a rooted tree We talked about how a tree diagram helps us visualize merge sort or other divide and conquer algorithms.

More information

The divide-and-conquer paradigm involves three steps at each level of the recursion: Divide the problem into a number of subproblems.

The divide-and-conquer paradigm involves three steps at each level of the recursion: Divide the problem into a number of subproblems. 2.3 Designing algorithms There are many ways to design algorithms. Insertion sort uses an incremental approach: having sorted the subarray A[1 j - 1], we insert the single element A[j] into its proper

More information

Greedy Algorithms and Huffman Coding

Greedy Algorithms and Huffman Coding Greedy Algorithms and Huffman Coding Henry Z. Lo June 10, 2014 1 Greedy Algorithms 1.1 Change making problem Problem 1. You have quarters, dimes, nickels, and pennies. amount, n, provide the least number

More information

Greedy Algorithms. Mohan Kumar CSE5311 Fall The Greedy Principle

Greedy Algorithms. Mohan Kumar CSE5311 Fall The Greedy Principle Greedy lgorithms Mohan Kumar CSE@UT CSE53 Fall 5 8/3/5 CSE 53 Fall 5 The Greedy Principle The problem: We are required to find a feasible solution that either maximizes or minimizes a given objective solution.

More information

Horn Formulae. CS124 Course Notes 8 Spring 2018

Horn Formulae. CS124 Course Notes 8 Spring 2018 CS124 Course Notes 8 Spring 2018 In today s lecture we will be looking a bit more closely at the Greedy approach to designing algorithms. As we will see, sometimes it works, and sometimes even when it

More information

Final Examination CSE 100 UCSD (Practice)

Final Examination CSE 100 UCSD (Practice) Final Examination UCSD (Practice) RULES: 1. Don t start the exam until the instructor says to. 2. This is a closed-book, closed-notes, no-calculator exam. Don t refer to any materials other than the exam

More information

Lecture 9 March 4, 2010

Lecture 9 March 4, 2010 6.851: Advanced Data Structures Spring 010 Dr. André Schulz Lecture 9 March 4, 010 1 Overview Last lecture we defined the Least Common Ancestor (LCA) and Range Min Query (RMQ) problems. Recall that an

More information

O(n): printing a list of n items to the screen, looking at each item once.

O(n): printing a list of n items to the screen, looking at each item once. UNIT IV Sorting: O notation efficiency of sorting bubble sort quick sort selection sort heap sort insertion sort shell sort merge sort radix sort. O NOTATION BIG OH (O) NOTATION Big oh : the function f(n)=o(g(n))

More information

Fundamentals of Multimedia. Lecture 5 Lossless Data Compression Variable Length Coding

Fundamentals of Multimedia. Lecture 5 Lossless Data Compression Variable Length Coding Fundamentals of Multimedia Lecture 5 Lossless Data Compression Variable Length Coding Mahmoud El-Gayyar elgayyar@ci.suez.edu.eg Mahmoud El-Gayyar / Fundamentals of Multimedia 1 Data Compression Compression

More information

4.1 Interval Scheduling

4.1 Interval Scheduling 41 Interval Scheduling Interval Scheduling Interval scheduling Job j starts at s j and finishes at f j Two jobs compatible if they don't overlap Goal: find maximum subset of mutually compatible jobs a

More information

Self-Adjusting Heaps

Self-Adjusting Heaps Heaps-0 Self-Adjusting Heaps No explicit structure. Adjust the structure in a simple, uniform way, so that the efficiency of future operations is improved. Amortized Time Complexity Total time for operations

More information

COMP 251 Winter 2017 Online quizzes with answers

COMP 251 Winter 2017 Online quizzes with answers COMP 251 Winter 2017 Online quizzes with answers Open Addressing (2) Which of the following assertions are true about open address tables? A. You cannot store more records than the total number of slots

More information

Design and Analysis of Algorithms 演算法設計與分析. Lecture 7 April 6, 2016 洪國寶

Design and Analysis of Algorithms 演算法設計與分析. Lecture 7 April 6, 2016 洪國寶 Design and Analysis of Algorithms 演算法設計與分析 Lecture 7 April 6, 2016 洪國寶 1 Course information (5/5) Grading (Tentative) Homework 25% (You may collaborate when solving the homework, however when writing up

More information

Lecture: Analysis of Algorithms (CS )

Lecture: Analysis of Algorithms (CS ) Lecture: Analysis of Algorithms (CS583-002) Amarda Shehu Fall 2017 1 Binary Search Trees Traversals, Querying, Insertion, and Deletion Sorting with BSTs 2 Example: Red-black Trees Height of a Red-black

More information

looking ahead to see the optimum

looking ahead to see the optimum ! Make choice based on immediate rewards rather than looking ahead to see the optimum! In many cases this is effective as the look ahead variation can require exponential time as the number of possible

More information

Lecturers: Sanjam Garg and Prasad Raghavendra March 20, Midterm 2 Solutions

Lecturers: Sanjam Garg and Prasad Raghavendra March 20, Midterm 2 Solutions U.C. Berkeley CS70 : Algorithms Midterm 2 Solutions Lecturers: Sanjam Garg and Prasad aghavra March 20, 207 Midterm 2 Solutions. (0 points) True/False Clearly put your answers in the answer box in front

More information

LECTURE NOTES OF ALGORITHMS: DESIGN TECHNIQUES AND ANALYSIS

LECTURE NOTES OF ALGORITHMS: DESIGN TECHNIQUES AND ANALYSIS Department of Computer Science University of Babylon LECTURE NOTES OF ALGORITHMS: DESIGN TECHNIQUES AND ANALYSIS By Faculty of Science for Women( SCIW), University of Babylon, Iraq Samaher@uobabylon.edu.iq

More information