Unit 3 Chapter 10 ABSTRACT ALGORITHMS 2: GREEDY METHODS


DESIGN AND ANALYSIS OF ALGORITHMS

WHAT IS GREEDY METHOD?
There are n inputs, each with some value, and an objective function. The method builds an optimal solution to the problem by considering the inputs one at a time, checking whether each input can be included in the set of values that gives an optimal solution, and then checking that the resulting solution is still feasible. Not all n inputs need be included; only those needed to form the optimal solution are. Each input may consume some resource, which is generally available in limited quantity. We may say that a feasible solution satisfies the basic or essential requirements of the problem, while an optimal solution satisfies the more specific and desirable requirements as well. The flow of data is shown in the figure.

The strategy adopted for achieving the optimal solution is called the Greedy method. The word "greedy" refers to allocating the maximum possible amount of some limited resource to the first element that enters the optimal solution. The feasibility of the solution is expressed in terms of obeying the constraints on the resource; thus the Greedy method depends on a local (short-range) maximum. The abstract algorithm can be represented as the following algorithm. The three functions SELECT(), FEASIBLE(), and UNION() do the detailed work of this abstract algorithm, and their details will obviously vary from one problem to another:
SELECT() selects an element from a[] that has the potential to satisfy the optimality criterion or selection policy.
FEASIBLE() checks whether the selected element x satisfies the feasibility criterion.
UNION() integrates the element x into the solution.
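The abstract schema can be sketched in Python. This is only an illustrative sketch: the names SELECT, FEASIBLE and UNION come from the text, while the budget toy example below is invented for demonstration.

```python
def greedy(a, select, feasible, union):
    """Abstract greedy schema: repeatedly SELECT a candidate,
    test it with FEASIBLE, and UNION it into the solution."""
    solution = []
    candidates = list(a)
    while candidates:
        x = select(candidates)       # pick the locally best candidate
        candidates.remove(x)
        if feasible(solution, x):    # does x keep the solution feasible?
            solution = union(solution, x)
    return solution

# Invented toy instance: pick numbers, largest first, while the
# running sum (the "limited resource") stays within a budget of 10.
result = greedy(
    [7, 5, 3, 2],
    select=max,                                   # greedy selection policy
    feasible=lambda s, x: sum(s) + x <= 10,       # resource constraint
    union=lambda s, x: s + [x],
)
# result == [7, 3]: 5 and 2 are rejected because they break the budget
```

The three callables are exactly the plug-in points the text describes; each concrete problem (knapsack, job sequencing, MST) supplies its own versions.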

KNAPSACK PROBLEM
There are n objects numbered i, 0 <= i <= n-1, having weights w_i and contributions to profit p_i. There is a sack with a capacity of M. If a fraction x_i of object i is put in the sack, it increases the weight by w_i·x_i and contributes p_i·x_i to the profit. Find the filling that maximizes the profit. From the problem statement, the task is to maximize Σ p_i·x_i subject to Σ w_i·x_i <= M, with 0 <= x_i <= 1. We can assume that the p_i and w_i are positive numbers. To understand the problem better, consider a numerical example: n = 3, M = 20, P = {25, 24, 15}, W = {18, 15, 10}. Some of the feasible solutions are shown in the following table.

These solutions were obtained by considering different greedy strategies.
1) The first solution is just a random solution.
2) The selection policy is to include the object with the maximum profit first, fitting the largest x_i possible under the feasibility constraints. Thus, at each stage of the algorithm, the largest immediate increase in profit is obtained. This policy leads to solution no. 2.
3) Another selection policy is to include the object i with the lowest w_i first. This minimizes the weight added to the sack at each stage, so the sack fills up only slowly. This leads to solution no. 3.
4) We find that neither of the above policies leads to an optimum solution. We should instead select, at each stage, the object with the maximum p/w ratio. It can be proven that this selection policy leads to the optimal solution, which is solution no. 4.
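The p/w-ratio policy for the fractional knapsack can be sketched in Python, run here on the text's own example (n = 3, M = 20). The function name and variable layout are our own:

```python
def fractional_knapsack(weights, profits, capacity):
    """Greedy fractional knapsack: take objects in non-increasing
    profit/weight order, splitting the last object if needed."""
    order = sorted(range(len(weights)),
                   key=lambda i: profits[i] / weights[i], reverse=True)
    x = [0.0] * len(weights)          # fraction of each object taken
    room = capacity
    for i in order:
        if room <= 0:
            break
        take = min(weights[i], room)  # whole object, or just what fits
        x[i] = take / weights[i]
        room -= take
    total = sum(p * xi for p, xi in zip(profits, x))
    return x, total

# The text's example: P = {25, 24, 15}, W = {18, 15, 10}, M = 20.
x, profit = fractional_knapsack([18, 15, 10], [25, 24, 15], 20)
# Ratios are 25/18, 24/15, 15/10, so object 2 is taken whole (weight 15),
# then half of object 3 fills the remaining 5 units: profit 24 + 7.5 = 31.5.
```

This reproduces the optimal solution no. 4 from the table: x = (0, 1, 1/2) with profit 31.5.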

PROOF SKETCH OF GREEDY ALGORITHM
Consider an optimal load. If we remove a weight w of some item i from the sack, then the remaining load must be an optimal selection from the other (n - 1) items and the remaining weight w_i - w of item i; if it were not, the original load was not the most valuable. First, we show that as much as possible of the highest profit/weight item must be included in the optimum solution. Let (w_h, p_h) be the weight and profit of the item h with the highest profit/weight ratio. Let L(j) be the weight of item j contained in the sack L. If some part of item h is left out while L(j) > 0 for some j ≠ h, then replacing that much of j with h yields a value at least as high: L(j)·p(j)/w(j) <= L(j)·p(h)/w(h), since p(j)/w(j) <= p(h)/w(h) holds by the definition of h. There is a variation on this problem, known as the 0/1 knapsack problem. In this case each x_i can only be 1 or 0; an object is either wholly present or completely absent (it cannot be broken). Again, we want to maximize the total profit; note that the greedy p/w rule no longer guarantees an optimal solution in this case.

JOB SEQUENCING WITH DEADLINES
Given a set of n jobs, each having a deadline (an integer) d[i] and a profit p[i] associated with it. For a job i, the profit is earned if and only if the job is completed by its deadline. Each job takes one unit of time on a machine (processor) and only one machine is available. We want to maximize the total profit. The jobs are arranged in an array in decreasing order of profit. A feasible solution to this problem is a subset J of the n jobs such that each job in J can be completed by its deadline. The value of a feasible solution is Σ p[i] over all i ∈ J. An optimal solution is a feasible solution with maximum value. For example, let n = 4, profit vector P = {30, 35, 20, 25}, and deadline vector D = {2, 1, 2, 1}. Assumption: the execution time of each job is one time unit. The feasible solutions and their values are listed in the table. It is seen that solution no. 3 is optimal, where only jobs 1 and 2 are processed and the profit is maximum. Note that the jobs must be done in the order shown, first 2 and then 1, in order to meet the deadlines. To use the greedy strategy, we must decide the selection criterion. As a first attempt we can use Σ_{i ∈ J} p[i] as our optimization measure: we select the next job that increases this measure the most, provided that the resulting J is a feasible solution.

Thus we should consider the jobs in non-increasing order of profit p; this is why, in the above example, the jobs were given in that order. Considering the jobs in decreasing order of p, the 1st can be in J because J = {1} is a feasible solution. Then the 2nd can be in J, because J = {1, 2} is feasible: the deadlines can be met. Now job 3 cannot be in J, as {1, 2, 3} is not a feasible solution: the largest deadline is 2, and since each included job takes 1 time unit, we can have at most 2 jobs in J. So we have J = {1, 2}. In what order should the jobs be executed? Earliest deadline first, that is, in non-decreasing order of the deadline d. Thus the jobs {1, 2} should be done in the order 2, 1. The greedy method given above always obtains an optimal solution to the job sequencing problem. We give a high-level algorithm for this method as follows:
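The greedy procedure can be sketched in Python (an illustrative sketch, not the C implementation the text refers to). Job indices here are 0-based, so the text's job 1 is index 0; each accepted job is placed in the latest free slot at or before its deadline.

```python
def job_sequencing(profits, deadlines):
    """Greedy job sequencing: consider jobs in non-increasing profit
    order; place each in the latest free unit-time slot not after its
    deadline, skipping jobs that cannot be fitted."""
    n = len(profits)
    jobs = sorted(range(n), key=lambda i: profits[i], reverse=True)
    max_d = max(deadlines)
    slot = [None] * (max_d + 1)       # slot[t] holds the job run in (t-1, t]
    for i in jobs:
        for t in range(min(deadlines[i], max_d), 0, -1):
            if slot[t] is None:       # latest free slot before the deadline
                slot[t] = i
                break
    schedule = [j for j in slot[1:] if j is not None]
    return schedule, sum(profits[j] for j in schedule)

# The text's example: P = {30, 35, 20, 25}, D = {2, 1, 2, 1}.
schedule, total = job_sequencing([30, 35, 20, 25], [2, 1, 2, 1])
# schedule == [1, 0]: job index 1 (text's job 2) runs first, then index 0
# (text's job 1), matching the execution order 2, 1 with total profit 65.
```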

We can keep the jobs of the set J in an array j[] sorted in non-decreasing order of deadlines, which makes the feasibility check easier to perform. We assume that the job numbers are given in non-increasing order of profit p, so their deadlines are stored in d[] in that order. In the C language implementation, a fictitious job no. 0 with d[0] = 0 and j[0] = 0 is used to simplify the insertion of a job i into J in deadline order. The jobs in J with subscripts larger than that of the new job i have to be moved; only the jobs after the insertion point of i, say k, need be checked for their deadlines. Note that the actual profit values do not enter into this algorithm at all.

EXAMPLE: MINIMUM SPANNING TREES
Let G = <V, E> be an undirected, connected graph with vertices V and edges E. A subgraph T = <V', E'> of G is a spanning tree of G if T is a tree containing all the vertices of G. An undirected graph with |V| = n may have as many as n(n - 1)/2 edges, but a spanning tree T has exactly (n - 1) edges. For example, if the nodes or vertices represent cities and the edges represent the roads between them, then each city could be connected by an independent road to every other city, leading to n(n - 1)/2 roads. That would give us a graph (see figure). But we can travel from any city to any other if, at a minimum, there are (n - 1) roads chosen so that every city is reachable. That gives us a spanning tree, and there can be several possibilities.

In a practical situation, the edges have a weight or cost associated with them. For example, for roads between cities, the weight of an edge is the length of the road. If we are planning roads between cities, we would be interested in a set of roads with minimum total length; minimizing this total gives a minimum spanning tree of the problem graph. The figure shows a weighted graph and one of its minimum spanning trees. A greedy method to obtain the minimum spanning tree constructs the tree edge by edge, where each next edge is chosen according to some optimization criterion. An obvious criterion is to choose the edge that adds the minimum weight to the total weight of the edges selected so far. There are two ways in which this criterion can be realized:

1) The set of edges selected so far always forms a tree. The next edge to be added is such that it not only adds a minimum weight, but also forms a tree with the previous edges. It can be shown that this results in a minimum cost tree; this is Prim's algorithm.
2) The edges are considered in non-decreasing order of weights. The set T of edges at each stage is such that it is possible to complete T into a tree; thus T need not be a tree at all stages of the algorithm. This also results in a minimum cost tree; this is Kruskal's algorithm.

PRIM'S ALGORITHM
This algorithm starts with a tree consisting of only one edge, the minimum-weight edge. Then edges (j, q) are added one by one, such that node j is already included, node q is NOT included, and the weight WT(j, q) is minimum among all edges (x, y) for which x is in the tree and y is not. See the figure above. To execute this algorithm efficiently, we keep a node index NEAR(j) for each node j not yet included in the tree: NEAR(j) = 0 if the node is already included in the tree; otherwise NEAR(j) is a node in the tree such that WT(j, NEAR(j)) is minimum among all possible choices for NEAR(j).

The time complexity of Prim's algorithm can be seen to be O(n²).
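Prim's algorithm with the NEAR bookkeeping can be sketched in Python (illustrative only; it works on an adjacency matrix, uses -1 rather than 0 as the "already included" marker so that node 0 can be used, and the 4-node example graph below is invented, not taken from the slides):

```python
INF = float('inf')

def prim(n, wt):
    """Prim's algorithm on a symmetric adjacency matrix wt, with
    wt[i][j] = INF when there is no edge. Uses the NEAR(j) array from
    the text (-1 marks an in-tree node). Runs in O(n^2)."""
    # Start from the minimum-weight edge (k, l).
    k, l = min(((i, j) for i in range(n) for j in range(i + 1, n)),
               key=lambda e: wt[e[0]][e[1]])
    tree, cost = [(k, l)], wt[k][l]
    near = [0] * n
    for j in range(n):                # NEAR(j): closer of k, l to node j
        near[j] = l if wt[j][l] < wt[j][k] else k
    near[k] = near[l] = -1            # -1 marks "already in the tree"
    for _ in range(n - 2):
        j = min((v for v in range(n) if near[v] != -1),
                key=lambda v: wt[v][near[v]])
        tree.append((j, near[j]))     # cheapest edge leaving the tree
        cost += wt[j][near[j]]
        near[j] = -1
        for v in range(n):            # update NEAR for the new tree node
            if near[v] != -1 and wt[v][j] < wt[v][near[v]]:
                near[v] = j
    return tree, cost

# Invented 4-node example: edges 0-1:1, 1-2:2, 2-3:3, 0-3:4, 0-2:5.
wt = [[INF, 1, 5, 4],
      [1, INF, 2, INF],
      [5, 2, INF, 3],
      [4, INF, 3, INF]]
tree, cost = prim(4, wt)
# The MST uses edges of weight 1, 2 and 3, so cost == 6.
```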

KRUSKAL'S ALGORITHM
This algorithm starts with a list of edges sorted in non-decreasing order of weights, which is the order in which they are considered for processing. The algorithm keeps including edges in the partial tree until it finds an edge that would create a cycle; it discards that edge, considers the next one, and repeats until all the edges are exhausted. In the top-level algorithm KRUSKAL() (in the next slide), E is the set of edges, already sorted in ascending order of weights, and T is the resulting minimum spanning tree.

Excluding the time to sort the edges, the algorithm has a worst-case time complexity of O(e log e), where e is the number of edges in the graph. The algorithm is implemented here in two versions, which differ in how the data is stored and how the edges are sorted according to their weights.

1ST VERSION OF KRUSKAL
The three operations marked in bold letters in the KRUSKAL algorithm have to be implemented as separate algorithms. In this implementation the edges are stored in a linked list, and are sorted by insertion sort on the arrival of each edge, by the function sinsert(). Traversing the linked list from the beginning is then equivalent to selecting the lowest-weight edge at each stage. Checking that a particular edge does not create a cycle is done by the function nocycle(). An edge is added to the tree by the function addtree(). The data file, whose name is fixed as prim.dat, is read in by read_in(). The data structures used are (see Figs. 1 and 2):

EDGE <start node no.> <end node no.> <weight> <next edge ptr.>
EDGEL <edge ptr.> <next edge-list elem.>
NODE <number> <ptr to edge-list> <marker flag>

The nodes are stored in an array (the maximum size is presently defined as 100, which can be changed). The user is required to use node numbers 1, 2, 3, ..., n - 1. See Fig. A. The tree is defined by the union of the edges linked to the elp pointer in the NODE structure. Initially this tree is empty. As edges are added, edge-list elements are added to the corresponding nodes, see Fig. B. We add an edge pointer to both the start and end nodes of an edge, so at any stage the number of edge pointers in the tree is exactly double the number of edges added so far. The most interesting function is nocycle(). To find out whether an edge would create a cycle, it proceeds as follows: start with the s node of the test edge. If there is no edge connected to that node, return 1 (no cycle). Otherwise, mark all the nodes connected to this node. Then check each node sequentially; if it is already marked, mark all the nodes connected to it. Whenever any node gets marked in a pass, we start from the first node again. Eventually we reach a stage where no further nodes are marked. Then we check whether the e node of the test edge is marked. If it is, the addition of this edge would create a cycle, and we return 0.
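The reachability idea behind nocycle() can be sketched more directly with a breadth-first search: adding edge (s, e) creates a cycle exactly when e is already reachable from s in the partial tree. This is illustrative Python, not the C linked-list version described above:

```python
from collections import defaultdict, deque

def nocycle(adj, s, e):
    """Return True (safe to add) when edge (s, e) creates no cycle,
    i.e. when e is not yet reachable from s in the partial tree.
    This mirrors the marking procedure in the text as a BFS."""
    seen, frontier = {s}, deque([s])
    while frontier:
        u = frontier.popleft()
        for v in adj[u]:
            if v not in seen:         # "mark" a newly reached node
                seen.add(v)
                frontier.append(v)
    return e not in seen

# Partial tree with edges (1, 2) and (2, 3):
adj = defaultdict(list)
for u, v in [(1, 2), (2, 3)]:
    adj[u].append(v)
    adj[v].append(u)
# nocycle(adj, 1, 3) is False: 3 is already reachable from 1.
# nocycle(adj, 1, 4) is True: node 4 is in a different component.
```

Each call still costs time proportional to the size of the component, which is exactly the inefficiency that the Union-Find structure, discussed below, removes.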

Figure A. Figure B.

These results are much better than the previous version, though not comparable to Prim's algorithm. The culprit is the cycle-testing function nocycle(); if it can be made more efficient, Kruskal's algorithm can compete with Prim's. The theory of the Union-Find data structure, presently the most efficient known approach to the cycle-testing problem in implementing Kruskal's algorithm, is taken up next. The "constant" column in the table above refers to the constants c1, c2 and k in the corresponding time complexity formulas c1·O(n²), c2·O(e), and k·O(e log e) respectively. Thus we see that Prim's algorithm is good when there are more edges per node, while Kruskal's algorithm is good when the graph is less dense, that is, characterised by few edges per node.

UNION-FIND DATA-STRUCTURE
A tree is an acyclic data structure, so obviously while building the MST we have to check for cycles. At each step of Kruskal's algorithm, the composite partial solution (V, A) is a forest, where V is the set of nodes and A is the set of edges constituting the partial solution. We note that:
1) If nodes u and v are in the same tree, then adding the edge (u, v) to A creates a cycle.
2) If nodes u and v are not in the same tree, then adding the edge (u, v) to A does not create a cycle.
The obvious problem that arises is how to test whether u and v are in the same tree. This question can be answered at two levels of implementation:
Higher level: we can use a disjoint-set data structure. Vertices in one tree are considered to be in the same set. Then the test is FindSet(u) = FindSet(v).
Lower level: obviously we have to implement FindSet(); the Union-Find data structure does this. We need the disjoint-set abstraction to implement Union-Find. A disjoint-set data structure represents a collection of sets that are disjoint; that is, no element is found in more than one set. The collection of disjoint sets is called a partition, because the elements are partitioned among the sets. Moreover, we work with a universe of elements: the universe is made up of all the elements that can be a member of a set, and every element is a member of exactly one set. For example, suppose the elements in our universe are corporations that still exist today or were acquired by other corporations, and our sets are corporations that still exist under their own name. For instance, Tata, TCS and VSNL are all members of the "Tata" set.

We will limit ourselves to two operations. The first is the union operation, in which we merge two sets into one. The second is the find query, in which we ask a question like, "What corporation does TCS belong to today?" More generally, a find query takes an element and tells us which set it is in. We will not support operations that break a set up into two or more sets (not quickly, anyway). Data structures designed to support these operations are called partition or union/find data structures. Union/find data structures start off with every element in a separate set, see Fig. 1. The query "find(TataSteel)" returns "TataSteel". Suppose we take the union of TataMotors and TataSteel and call the resulting corporation TataMotors. Similarly, we unite Tata with VSNL (the result being called VSNL) and TataElexi with TCS (the result being called TataElexi), see the figure.

The query "find(TataSteel)" now returns "TataElexi". When Tata takes over TataElexi, everything will be in one set and no further mergers will be possible. The obvious data structure for disjoint sets would have the following structure:
1) Each set references a list of the elements in that set.
2) Each element references the set that contains it.
With this data structure, find operations take O(1) time; hence we say that list-based disjoint sets use the quick-find algorithm. However, union operations are slow, because when two sets are united, we must walk through one set and relabel all its elements so that they reference the other set. Space limitations prevent us from analyzing this algorithm in detail (see Goodrich and Tamassia). Instead, let's move on to the less obvious but far superior quick-union algorithm.

TREE-BASED DISJOINT SETS AND THE QUICK-UNION ALGORITHM
In tree-based disjoint sets, union operations take O(1) time at the expense of slower find operations. However, for any sequence of union and find operations, the quick-union algorithm is faster overall than the quick-find algorithm. To support fast unions, each set is maintained as a general tree. The quick-union data structure comprises a forest in which each element is initially the root of its own tree; trees are then merged by union operations. The quick-union data structure is simpler than the general tree structures you may have studied so far, because there are no child or sibling references. Every node knows only its parent, and you can only walk up the tree. Fortunately, the true identity of each set is recorded at its root. Union is a simple O(1)-time operation: we simply make the root of one set a child of the root of the other set. For example, when we form the union of TataElexi and TataMotors, as shown in Fig. 1, TataElexi becomes a set containing four members.

However, finding the set to which a given element belongs is not a constant-time operation. The find operation is performed by following the chain of parent references from an element to the root of its tree. For example, find(TataSteel) follows the trail of references until it reaches TataElexi. The cost of this operation is proportional to the element's depth in the tree. To prevent elements from getting too deep, we unite sets intelligently: at each root, we record the size of its tree, and when we unite two trees, we make the smaller one a subtree of the larger one (breaking ties arbitrarily). This strategy is called union-by-size.

IMPLEMENTING QUICK-UNION WITH AN ARRAY
Suppose the elements are non-negative integers, indexed from zero. We use an array to record the parent of each element. If an element has no parent, we record instead the size of the corresponding tree; to distinguish it from a parent reference, we record the size s as the negative number -s. Initially, every element is the root of its own tree, so we set every array element to -1 (see figure). The forest illustrated on the left is represented by the array on the right (see figure below). This is not only a simple way to implement tree-based disjoint sets, it is also fast (in terms of the constant hidden in the asymptotic notation).

Let root1 and root2 be two elements that are roots of their respective trees. The algorithm for union is given as the following algorithm. The find() method is equally simple, but we need one more trick to obtain the best possible speed. Suppose a sequence of union operations creates a tall tree, and we perform find() repeatedly on its deepest leaf. Each time we perform find(), we walk up the tree from leaf to root, perhaps at considerable expense. When we perform find() the first time, why not move the leaf up the tree so that it becomes a child of the root? Then the next time we execute find() on the same leaf, it runs much more quickly. Furthermore, why not do the same for every node we encounter as we walk up to the root? In the example given here (see figure), find(7) walks up the tree from 7, discovers that 0 is the root, and then makes 0 the parent of 4 and 7, so that future find operations on 4, 7, or their children will be faster. This technique is called path compression.

Let x be an element whose tree we wish to identify. The algorithm for find(), presented next, returns the identity of the element at the root of the tree. Recall that elements are numbered starting from zero.
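The array-based union and find, with union-by-size and path compression, can be sketched in Python. This mirrors the negative-size convention described above, though the exact function signatures are our own; note that union() must be passed two distinct roots, as in the text:

```python
def find(parent, x):
    """Return the root of x's tree, compressing the path as we go."""
    root = x
    while parent[root] >= 0:          # walk up to the root
        root = parent[root]
    while parent[x] >= 0:             # second pass: repoint nodes to root
        parent[x], x = root, parent[x]
    return root

def union(parent, root1, root2):
    """Union by size: sizes are stored at roots as negative numbers,
    so the 'more negative' root owns the larger tree."""
    if parent[root1] <= parent[root2]:    # root1's tree is at least as big
        parent[root1] += parent[root2]    # combined size, still negative
        parent[root2] = root1
    else:
        parent[root2] += parent[root1]
        parent[root1] = root2

parent = [-1] * 6                         # six singleton sets
union(parent, find(parent, 0), find(parent, 1))
union(parent, find(parent, 2), find(parent, 3))
union(parent, find(parent, 0), find(parent, 2))
# {0,1,2,3} now form one tree of size 4, so its root stores -4;
# elements 4 and 5 are still singletons.
```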

What if we wanted some control over the names of the sets? The solution is to maintain a separate table that maps root elements to set names, and perhaps vice versa (depending on the application's needs). For example, the table might map 0 to Tata. The union() method must then be modified so that when it unites two sets, it assigns the union an appropriate name. Often, however, we don't care about the name of a set at all; we only want to know whether two elements x and y are in the same set. For that we need only execute find(x) and find(y) and check whether the two roots are the same.

COMPLEXITY ANALYSIS OF QUICK-UNION
Union operations obviously take Θ(1) time. A single find operation can take time as large as Θ(log u), where u is the number of union operations that took place prior to the find. However, the worst-case average running time of union/find operations is better than this because of path compression. The average running time of find and union operations in the quick-union data structure is so close to a constant that, although it is slightly slower in a rigorous asymptotic sense, the difference is hardly worth mentioning. A sequence of f find and u union operations (in any order and possibly interleaved) takes Θ(u + f·α(f + u, u)) time in the worst case. Here α is an extremely slow-growing function known as the inverse Ackermann function; its value is never larger than 4 for any values of f and u we could ever use. Hence, for all practical purposes, we may think of quick-union as having find and union operations that run, on average, in constant time.

USING UNION-FIND IN KRUSKAL'S ALGORITHM
The UNION-FIND data structure supports three operations on collections of disjoint sets. Let n be the size of the universe:
Create-Set(u): create a set containing the single element u; time complexity O(1).
Find-Set(u): find the set containing the element u; time complexity O(log n).
Union(u, v): merge the sets respectively containing u and v into a common set; time complexity O(log n).
From now on we shall treat UNION-FIND as a black box. Now we shall see how the data structures discussed above are applied to obtain a more efficient Kruskal's algorithm; see the following algorithm.
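Kruskal's algorithm with union-find cycle testing can be sketched in Python (illustrative only; the union-find is inlined using the negative-size array convention from the text, and the small example graph is invented):

```python
def kruskal(n, edges):
    """Kruskal's algorithm with union-find cycle testing.
    edges is a list of (weight, u, v) tuples; nodes are 0..n-1."""
    parent = [-1] * n                 # quick-union array, sizes as negatives

    def find(x):
        while parent[x] >= 0:         # walk up to the root (no compression,
            x = parent[x]             # to keep the sketch short)
        return x

    tree, cost = [], 0
    for w, u, v in sorted(edges):     # non-decreasing weight order
        ru, rv = find(u), find(v)
        if ru != rv:                  # equal roots would mean a cycle
            if parent[ru] <= parent[rv]:   # union by size
                parent[ru] += parent[rv]
                parent[rv] = ru
            else:
                parent[rv] += parent[ru]
                parent[ru] = rv
            tree.append((u, v))
            cost += w
            if len(tree) == n - 1:    # spanning tree complete
                break
    return tree, cost

# Invented 4-node example: edges 0-1:1, 1-2:2, 2-3:3, 0-3:4, 0-2:5.
edges = [(1, 0, 1), (2, 1, 2), (3, 2, 3), (4, 0, 3), (5, 0, 2)]
tree, cost = kruskal(4, edges)
# The three cheapest edges are all safe, so cost == 6.
```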

EXAMPLE [SHORTEST PATH]
The problem of the roads between cities that we considered while introducing the minimum spanning tree can be modified as follows. The weights of the edges now represent the lengths of the roads, or the cost of travel on them. A traveller will be interested in the following:
1) Is there a path from city A to city B?
2) If there is more than one path from A to B, which is the shortest or least-cost path?
Comparing this problem with the minimum spanning tree problem, we realize that here we are interested in the length of a particular path, and not the total length of the selected edges. In general we shall have to deal with a directed graph. There is a graph G = <V, E>, a weighting function c(e) for the edges in E, and a source node v0. The problem is to determine the shortest paths from v0 to all the remaining nodes of G. Assume that all the weights are positive. In passing, one might note that we are considering the cost of travel and not the distance: cost need not be directly proportional to distance. For example, consider the graph given by the table. It can be seen immediately that since no edge has node 6 as its destination, no path to node 6 is available from any other node. A greedy strategy for this problem requires a multi-stage solution with an optimization measure. The paths can be built up edge by edge, with the length of the path built so far as the optimization measure: each individual path should be of minimum length. A greedy way to generate these paths is in non-decreasing order of path length. For example, for the graph given (see figure), starting from node 1, the shortest path is to node 3 (cost = 10), so the path {1, 3} is the first to be generated. The next node selected is node 4, giving the path 1, 3, 4 of length 25, and so on.

The critical point to note is this: when a node enters the set of nodes already considered, it is possible that a new path, shorter than the previous one, is found. For example, while finding the path from node 4 to node 5, the direct path length is 35, but when node 2 enters the set of nodes being considered, the length reduces to 30. The algorithm by E. W. Dijkstra solves this problem efficiently. (Work out the complete example from the video; the video link is given at the end of the slides.)

DIJKSTRA'S SHORTEST PATH ALGORITHM
The algorithm described on the next page is due to E. W. Dijkstra. It is a well-known and popular algorithm for solving shortest path problems; in fact, it is implemented in high-speed computer network routers and switches. The original algorithm calculates only the path lengths, though the actual paths can be obtained by simple changes to the algorithm.

Informal proof of correctness: a close examination of the inner loop of the algorithm, the one that examines edge (v, w) and updates dist, reveals that it examines each edge in the graph exactly once. So at any stage of the algorithm we can talk about the subgraph G' = (V, E') examined so far: it consists of all the nodes and edges processed so far, and each pass through the inner loop adds one edge to E'. We shall show that the following property of dist[] is an invariant of the inner loop of the algorithm: dist[w] is the minimum distance from s to w in G', and if vk is the kth vertex marked, then v1 through vk are the k closest vertices to s in G, and the algorithm has found the shortest paths to them.

We shall prove this by induction on the number of edges in E'. At the very beginning, before examining any edges, |E'| = 0, so the correct minimum distances are dist[s] = 0 and dist[w] = ∞ for every other w, as initialized in the algorithm. Now s is marked first, with the distance to it equal to 0, as desired. Now consider examining the next edge (v, w), making E' equal to E' ∪ {(v, w)}. Adding this edge means there is a new way to get from s to w in G': from s to v and then to w. The shortest way to do this is dist[v] + weight(v, w), by induction. Also by induction, the shortest path to w not using edge (v, w) has length dist[w]. The algorithm then replaces dist[w] by the minimum of its old value and the possible new value. Thus dist[w] is still the shortest distance to w in G'. See the figure. We still have to show that all the other dist[u] values remain correct. For this we need the heap property, which guarantees that (v, w) is an edge out of the node v that is, among the nodes in the heap, closest to the source s. To complete the induction we have to show that adding the edge (v, w) to the graph G' does not change the distance of any other vertex u from s. dist[u] could change only if the shortest path from s to u had previously gone through w. This is impossible, since we just decided that v is closer to s than w (the shortest path to w is via v), so the heap would not yet have marked w and examined edges out of it. This algorithm has a time complexity of O(n²), which can easily be derived by counting the number of times each loop is executed.
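Dijkstra's algorithm can be sketched in Python with a binary heap (illustrative; the array-scanning version in the text gives the O(n²) bound, while this heap-based sketch runs in O(E log V), and the small directed graph below is invented):

```python
import heapq

def dijkstra(n, adj, s):
    """Single-source shortest path lengths from s. adj[u] is a list of
    (v, weight) pairs; unreachable nodes keep distance infinity."""
    dist = [float('inf')] * n
    dist[s] = 0
    heap = [(0, s)]
    while heap:
        d, u = heapq.heappop(heap)    # closest unprocessed node
        if d > dist[u]:
            continue                  # stale entry, already improved
        for v, w in adj[u]:
            if d + w < dist[v]:       # relax edge (u, v)
                dist[v] = d + w
                heapq.heappush(heap, (dist[v], v))
    return dist

# Invented directed graph on nodes 0..3:
adj = [
    [(1, 10), (2, 5)],   # edges out of node 0
    [(3, 1)],            # edges out of node 1
    [(1, 3), (3, 20)],   # edges out of node 2
    [],                  # node 3 has no outgoing edges
]
dist = dijkstra(4, adj, 0)
# dist == [0, 8, 5, 9]: the path 0 -> 2 -> 1 (cost 8) beats the
# direct edge 0 -> 1 (cost 10), illustrating the "shorter path appears
# later" effect discussed in the text.
```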

OPTIMAL MERGE PATTERNS
One of the usual methods of creating a large sorted file is to prepare a number of smaller files, possibly from different sources, sort them, and then merge them into a single large file. In fact this procedure can be executed recursively. A question arises regarding the optimal order in which the files should be merged, so that the total time to produce the final file is minimized. If we have two files f1 and f2 with sizes s1 and s2 respectively, then the time to merge them is O(s1 + s2). Given n files f1, f2, f3, ..., fn, what is the minimum time needed to merge all n files, and how is it achieved? If we could merge all n files together in a single operation, the time taken would be expected to be O(s1 + s2 + ... + sn), but multi-way merging adds its own overhead and takes more time than that. It is usual to merge two files at a time.

EXAMPLE
It may not be very obvious that the total merging time depends on the order in which the merging is done, assuming that two files are merged at a time. For example, say we have 5 files (f1, f2, f3, f4, f5) with lengths (20, 30, 10, 5, 30). If we merge them in some arbitrary order, say the order in which the files are given to us, we get the following result:

On the other hand, if we do optimal merging using the greedy strategy, the total time may reduce. We take the following steps:
1) Sort the array of file lengths. For our example, the sorted length array is (5, 10, 20, 30, 30).
2) Sort the file numbers in the same order, i.e. (f4, f3, f1, f2, f5). Now do the two-way merging as before.
Just by changing the order of merging we did get a time reduction. Note that in the second case, we took up the files in ascending order keyed on file length. That gives us our first observation.
Observation 1: we should have a fast sorting method available.
Though we did get a merging speed advantage in our rather naive approach of re-arranging the files, we did not get anywhere near the theoretical minimum time of 95. Actually, we missed one important point: it is not enough to use the files in their original sorted length order; the ascending order should be decided dynamically. More specifically, after we obtained m2 = m1 + f1 with length 35, we should have considered merging f2 + f5 to get m3 with length 60. Let us try this and see what happens.

We did get some reduction in time. This gives us our second observation.
Observation 2: we need a fast sorting method that works dynamically while the merging is in progress. The array of file lengths goes on changing as the merge steps are done. It is not correct to sort only once at the beginning of the merge; we have to re-sort the length array after each merge step and then merge the smallest two files. That gives us the required optimal merge pattern. The most convenient way of achieving the required dynamic sort is a minheap (a heap with the minimum element at the root), i.e. a priority queue. We mainly need the initialize(), deletemin() and insert() functions, respectively to insert the initial file length values into the heap, to get the two currently minimal file lengths, and to re-insert their sum back into the heap.

We obtained the following results. We now formalize the problem a bit and show that we have indeed followed the greedy approach.

GREEDY OPTIMAL MERGE PATTERNS
Input: n sorted arrays of lengths L[1], L[2], ..., L[n].
Problem: to merge the arrays pairwise as fast as possible; the question is which pair to merge each time.
Method (the greedy method): the selection policy, which chooses the best pair of arrays to merge next, is to select the two shortest remaining arrays.
Implementation: we need a data structure to store the lengths of the arrays, to find the two shortest arrays at any time, to delete those lengths, and to insert a new length (for the newly merged array). The data structure has to support deletemin and insert; clearly, a minheap is ideal.
Time complexity: the algorithm iterates (n - 1) times. In every iteration two deletemins and one insert are performed, and these 3 operations take O(log n) time per iteration. Thus the total time is O(n log n) for the loop plus O(n) for the initial heap construction, i.e. O(n log n) overall.
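The minheap-based optimal merge can be sketched with Python's standard heapq module, run on the text's 5-file example (20, 30, 10, 5, 30); the function name is our own:

```python
import heapq

def optimal_merge_cost(lengths):
    """Greedy optimal merge pattern: repeatedly merge the two
    shortest files, using a min-heap as the dynamic sorter."""
    heap = list(lengths)
    heapq.heapify(heap)               # O(n) initial heap construction
    total = 0
    while len(heap) > 1:
        a = heapq.heappop(heap)       # the two currently shortest files
        b = heapq.heappop(heap)
        total += a + b                # a two-way merge costs the length sum
        heapq.heappush(heap, a + b)   # the merged file goes back in
    return total

# The text's example: five files of lengths (20, 30, 10, 5, 30).
cost = optimal_merge_cost([20, 30, 10, 5, 30])
# Merges: 5+10=15, 15+20=35, 30+30=60, 35+60=95; total cost 205.
```

Note that the final merged file always has length 95 (the sum of all files); what the greedy order minimizes is the accumulated cost over all intermediate merges.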

THE GREEDY ALGORITHM

1) Store the file names in a minheap, keyed by their lengths.
2) Repeat the following until only one file remains:
a) Extract the two smallest elements fi and fj.
b) Merge fi and fj and insert the new file, with its length, into the minheap.

To show that we have followed the Greedy method, we first note its characteristics:

Optimal substructure: An optimal solution contains within it optimal solutions to subproblems.
Greedy choice property: A globally optimal solution can be arrived at by making a locally optimal choice.
Greedy algorithm: Makes the choice that looks best at each point in the algorithm's execution.

Then we note the problem solution design:
1) Cast the problem as one where you make choices and each choice results in a smaller subproblem to solve.
2) Prove that the greedy choice property holds.
3) Demonstrate that the solution to the subproblem can be combined with the greedy choice to get an optimal solution for the original problem.
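The two steps above can be sketched with named files, so that the algorithm reports which pair it merges at each step. This is a sketch, not the text's own pseudocode; the function name and the m1, m2, ... labels for merged files are illustrative, following the convention used in the example:

```python
import heapq

def optimal_merge_pattern(files):
    """files: dict of file name -> length. Returns the list of merges performed,
    each as (left, right, merged_name, merged_length)."""
    heap = [(length, name) for name, length in files.items()]
    heapq.heapify(heap)                   # step 1: minheap keyed by length
    merges = []
    count = 0
    while len(heap) > 1:                  # step 2: repeat until one file remains
        la, fa = heapq.heappop(heap)      # a) extract the two smallest
        lb, fb = heapq.heappop(heap)
        count += 1
        merged = f"m{count}"
        merges.append((fa, fb, merged, la + lb))
        heapq.heappush(heap, (la + lb, merged))  # b) insert the merged file
    return merges

for step in optimal_merge_pattern({"f1": 20, "f2": 30, "f3": 10, "f4": 5, "f5": 30}):
    print(step)
```

On the running example this produces exactly the merge sequence discussed earlier: f4+f3 = m1 (15), m1+f1 = m2 (35), f2+f5 = m3 (60), and finally m2+m3 = m4 (95), for a total cost of 205.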

Please see the previous figure for a merge tree for our example. The total merging cost for a merge tree T is the sum of its internal nodes; equivalently, it is the sum over all leaves (files) of file length times leaf depth, since a file at depth d is moved once for each of the d merges above it.

Optimal Substructure: If we look at any subtree T' of the merging tree T, it represents the cost of merging the files that are at its leaves. The cost of the optimal merge tree includes the cost C(T') of this subtree plus the cost of merging the files in T' with the rest of the files in T. The subtree T' must itself be optimal in the optimal solution, or else we could replace it and construct an even better solution.

GREEDY CHOICE PROPERTY

Claim: There exists an optimal merging pattern in which the two shortest files are merged together. If this is true, we can greedily pick the two shortest files, merge them together, and still reach an optimal solution.

Proof: Suppose the two shortest files fi and fj are not merged together in some optimal merging pattern. Find the two deepest nodes that are merged together and swap them with fi and fj. The cost of the resulting merge pattern is at most that of the merging pattern we had before: the cost is the sum of (length x depth) over all files, and the swap moves the two smallest lengths to the greatest depths, which cannot increase that sum. We have now constructed an optimal merging pattern that merges the two smallest files. The Greedy algorithm is already given above.
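The two cost formulas (sum of internal nodes, and sum of length times depth over the leaves) can be checked against the example's optimal merge tree. The nested-tuple encoding of the tree below is just an illustration, not a structure the text defines:

```python
# The optimal merge tree for (5, 10, 20, 30, 30):
# root 95 = (35, 60); 35 = (15, 20); 15 = (5, 10); 60 = (30, 30).
# Leaves are file lengths; internal nodes are pairs.
tree = (((5, 10), 20), (30, 30))

def weight(t):
    """Total length of the files at the leaves of subtree t."""
    return t if isinstance(t, int) else weight(t[0]) + weight(t[1])

def internal_sum(t):
    """Sum of the internal node values (each internal node = weight of its subtree)."""
    if isinstance(t, int):
        return 0
    return weight(t) + internal_sum(t[0]) + internal_sum(t[1])

def depth_cost(t, depth=0):
    """Sum over leaves of (file length x depth of the leaf)."""
    if isinstance(t, int):
        return t * depth
    return depth_cost(t[0], depth + 1) + depth_cost(t[1], depth + 1)

print(internal_sum(tree), depth_cost(tree))  # both are 205
```

Both formulas give 205, the same total cost the dynamic greedy merging produced.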

REFERENCE VIDEOS

Websites for algorithm learning
Permutation using recursion
Knapsack problem: ...ithms/design_and_analysis_of_algorithms_fractional_knapsack.htm
Shortest Path (Dijkstra Algorithm)

THANK YOU
Dr. Milan Vachhani, Assistant Professor, Sunshine Group of Institutions, Rajkot


More information

Algorithms and Data Structures: Minimum Spanning Trees (Kruskal) ADS: lecture 16 slide 1

Algorithms and Data Structures: Minimum Spanning Trees (Kruskal) ADS: lecture 16 slide 1 Algorithms and Data Structures: Minimum Spanning Trees (Kruskal) ADS: lecture 16 slide 1 Minimum Spanning Tree Problem Given: Undirected connected weighted graph (G, W ) Output: An MST of G We have already

More information

Algorithms Dr. Haim Levkowitz

Algorithms Dr. Haim Levkowitz 91.503 Algorithms Dr. Haim Levkowitz Fall 2007 Lecture 4 Tuesday, 25 Sep 2007 Design Patterns for Optimization Problems Greedy Algorithms 1 Greedy Algorithms 2 What is Greedy Algorithm? Similar to dynamic

More information

Lecture 13. Reading: Weiss, Ch. 9, Ch 8 CSE 100, UCSD: LEC 13. Page 1 of 29

Lecture 13. Reading: Weiss, Ch. 9, Ch 8 CSE 100, UCSD: LEC 13. Page 1 of 29 Lecture 13 Connectedness in graphs Spanning trees in graphs Finding a minimal spanning tree Time costs of graph problems and NP-completeness Finding a minimal spanning tree: Prim s and Kruskal s algorithms

More information

Algorithms (VI) Greedy Algorithms. Guoqiang Li. School of Software, Shanghai Jiao Tong University

Algorithms (VI) Greedy Algorithms. Guoqiang Li. School of Software, Shanghai Jiao Tong University Algorithms (VI) Greedy Algorithms Guoqiang Li School of Software, Shanghai Jiao Tong University Review of the Previous Lecture Lengths on Edges BFS treats all edges as having the same length. It is rarely

More information

Midterm 1 Solutions. (i) False. One possible counterexample is the following: n n 3

Midterm 1 Solutions. (i) False. One possible counterexample is the following: n n 3 CS 170 Efficient Algorithms & Intractable Problems, Spring 2006 Midterm 1 Solutions Note: These solutions are not necessarily model answers. Rather, they are designed to be tutorial in nature, and sometimes

More information

Union Find and Greedy Algorithms. CSE 101: Design and Analysis of Algorithms Lecture 8

Union Find and Greedy Algorithms. CSE 101: Design and Analysis of Algorithms Lecture 8 Union Find and Greedy Algorithms CSE 101: Design and Analysis of Algorithms Lecture 8 CSE 101: Design and analysis of algorithms Union find Reading: Section 5.1 Greedy algorithms Reading: Kleinberg and

More information

Parallel Graph Algorithms

Parallel Graph Algorithms Parallel Graph Algorithms Design and Analysis of Parallel Algorithms 5DV050 Spring 202 Part I Introduction Overview Graphsdenitions, properties, representation Minimal spanning tree Prim's algorithm Shortest

More information

Greedy Algorithms. Previous Examples: Huffman coding, Minimum Spanning Tree Algorithms

Greedy Algorithms. Previous Examples: Huffman coding, Minimum Spanning Tree Algorithms Greedy Algorithms A greedy algorithm is one where you take the step that seems the best at the time while executing the algorithm. Previous Examples: Huffman coding, Minimum Spanning Tree Algorithms Coin

More information

Lecture 8: The Traveling Salesman Problem

Lecture 8: The Traveling Salesman Problem Lecture 8: The Traveling Salesman Problem Let G = (V, E) be an undirected graph. A Hamiltonian cycle of G is a cycle that visits every vertex v V exactly once. Instead of Hamiltonian cycle, we sometimes

More information

1 Shortest Paths. 1.1 Breadth First Search (BFS) CS 124 Section #3 Shortest Paths and MSTs 2/13/2018

1 Shortest Paths. 1.1 Breadth First Search (BFS) CS 124 Section #3 Shortest Paths and MSTs 2/13/2018 CS 4 Section # Shortest Paths and MSTs //08 Shortest Paths There are types of shortest paths problems: Single source single destination Single source to all destinations All pairs shortest path In today

More information

DHANALAKSHMI COLLEGE OF ENGINEERING, CHENNAI. Department of Computer Science and Engineering CS6301 PROGRAMMING DATA STRUCTURES II

DHANALAKSHMI COLLEGE OF ENGINEERING, CHENNAI. Department of Computer Science and Engineering CS6301 PROGRAMMING DATA STRUCTURES II DHANALAKSHMI COLLEGE OF ENGINEERING, CHENNAI Department of Computer Science and Engineering CS6301 PROGRAMMING DATA STRUCTURES II Anna University 2 & 16 Mark Questions & Answers Year / Semester: II / III

More information

Undirected Graphs. DSA - lecture 6 - T.U.Cluj-Napoca - M. Joldos 1

Undirected Graphs. DSA - lecture 6 - T.U.Cluj-Napoca - M. Joldos 1 Undirected Graphs Terminology. Free Trees. Representations. Minimum Spanning Trees (algorithms: Prim, Kruskal). Graph Traversals (dfs, bfs). Articulation points & Biconnected Components. Graph Matching

More information