Algorithms and Data Structures


IARCS Instructional Course in Computer Science
Sacred Hearts College, Tirupattur
December 11-12, 2003

Venkatesh Raman
The Institute of Mathematical Sciences
C. I. T. Campus, Chennai
vraman@imsc.res.in

1 Introduction to Algorithm Design and Analysis

1.1 Introduction

An algorithm is a recipe or a systematic method to solve a problem. It takes some inputs (some algorithms may not take any input at all), performs a well defined sequence of steps, and produces some output. Once we design an algorithm, we need to know how well it performs on any input. In particular, we would like to know whether there are better algorithms for the problem. An answer to this first demands a way to analyse an algorithm in a machine-independent way. Algorithm design and analysis form a central theme in computer science. We illustrate various tools required for algorithm design and analysis through some examples.

1.2 Select

Consider the problem of finding the smallest element in a given list A of n integers. Assuming that the elements are stored in an array A, the pseudocode Select(A[1..n]) below returns the smallest element in the array locations 1 to n.

function Select(A[1..n])
begin
  Min := A[1]
  for i = 2 to n do
    if A[i] < Min then Min := A[i]
  endfor
  return (Min)
end

It is easy to see that finally the variable Min will contain the smallest element in the list. Now let us compute the number of steps taken by this algorithm on a list of n elements in the worst case. The algorithm mainly performs two operations: a comparison (of the type A[i] < Min) and a move (of the type Min := A[i]). It is easy to see that the algorithm performs n-1 comparisons and at most n moves.

Remarks:

1. If we replace Min := A[i] by Min := i and A[i] < Min by A[i] < A[Min], after changing the initial statement to Min := 1 and finally returning A[Min], the algorithm will still be correct. Now we save on moves of the kind Min := A[i], which may be expensive in reality if the data stored in each location is huge.

2. Someone tells you that he has an algorithm to select the smallest element that always uses at most n-2 comparisons. Can we believe him? No. Here is why.
Take any input of n integers and run his algorithm. Draw a graph with n vertices representing the given n array locations as follows. Whenever the algorithm detects that A[i] < A[j] for some pair of locations i and j, draw a directed edge from i to j. At the end of the execution of the algorithm on the given input, we will have a directed graph with at most n-2 edges. Since the graph has at most n-2 edges, the underlying undirected graph is disconnected. Due to the transitivity of the ordering of the integers, no connected component of the graph has a directed cycle. Hence each component has a sink vertex, a vertex with outdegree 0. Each such vertex corresponds to a location holding the smallest element in its component. Note also that the algorithm has detected no relation between the elements in these sink locations (otherwise there would be edges between them). Let x and y be integers in two such locations. If the algorithm outputs neither x nor y as the answer, then the algorithm is obviously wrong, as these are the smallest elements in their components. If the algorithm outputs x as the answer, we decrease the value of y to some arbitrarily small number. Even now the execution of the algorithm would follow the same path with the same output x, which is a wrong answer. In a similar way, we can prove the algorithm wrong even if it outputs y as the answer.

[Footnote 1: A pseudocode gives a language-independent description of an algorithm. It captures the essence of the method without worrying about the syntax and declarations of a particular language, and can easily be converted into a program in any language of your choice.]

An alternate way to see this is that an element has to win a comparison (i.e. must be larger than the other element) before it can be ruled out as the smallest. In each comparison at most one element can win, and before we have the final answer, n-1 elements must have won comparisons. So n-1 comparisons are necessary to find the smallest element. This fact also indicates that proving a lower bound for a problem is usually a nontrivial task. This problem is one of the very few problems for which such an exact lower bound is known.

1.3 Selectionsort

Now suppose we want to sort the elements of the array A in increasing order. Consider the following pseudocode which performs that task.
The algorithm first finds the smallest element and places it in the first location. Then it repeatedly selects the smallest element from the remaining elements and places it in the first remaining location. This process is repeated n-1 times, at which point the list is sorted.

Procedure Selectsort(A[1..n])
begin
  for i = 1 to n-1 do
    Min := i
    for j = i+1 to n do
      if A[j] < A[Min] then Min := j
    endfor
    swap(A[i], A[Min])
  endfor
end

Here swap(x, y) is a procedure which simply swaps the two elements x and y, and which can be implemented as follows.

Procedure swap(x, y)
begin
  temp := x
  x := y
  y := temp
end

Note that the inner for loop of the above sorting procedure is simply the Select procedure outlined above (in the index-returning form of Remark 1). So the code can be written compactly as

Procedure Selectsort(A[1..n])
begin
  for i = 1 to n-1 do
    Min := Select(A[i..n])
    swap(A[i], A[Min])
  endfor
end

Now let us analyse the complexity of the algorithm in the worst case. It is easy to see that the number of comparisons performed by the above sorting algorithm is the sum of (n-i) over i = 1 to n-1, as a call to Select(A[i..n]) takes n-i comparisons. Hence the number of comparisons made by the above algorithm is n(n-1)/2. It is also easy to see that the number of moves made by the algorithm is O(n). [Footnote 2: A function f(n) is O(g(n)) if there exist constants c and N such that f(n) <= c g(n) for all n >= N.] Thus Selectsort is an O(n^2) algorithm.

1.4 Merge

In this section, we look at another related problem: merging two sorted lists to produce a sorted list. You are given two arrays A[1..n] and B[1..n] of size n each, each containing a sequence of n integers sorted in increasing order. The problem is to merge them to produce a sorted list of size 2n. Of course, we can use the Selectsort procedure above to sort the entire sequence of 2n elements in O(n^2) steps. But the goal is to do better using the fact that the sequence of elements in each array is in sorted order. The following procedure merges the two arrays A[1..n] and B[1..n] and produces the sorted list in the array C[1..2n].

Procedure Merge(A[1..n], B[1..n], C[1..2n])
begin
  i := 1; j := 1; k := 1
  while i <= n and j <= n do
    if A[i] < B[j] then
      C[k] := A[i]; i := i+1
    else
      C[k] := B[j]; j := j+1
    endif
    k := k+1
  endwhile
  if i > n then
    while j <= n do
      C[k] := B[j]; k := k+1; j := j+1

    endwhile
  else
    while i <= n do
      C[k] := A[i]; k := k+1; i := i+1
    endwhile
  endif
end

It is easy to see that finally the array C[1..2n] contains the sorted list of elements from A and B. It is also easy to see that the algorithm performs at most 2n-1 comparisons and 2n moves. Can you find out when the algorithm will actually perform 2n-1 comparisons? Recall that initially we were toying with an O(n^2) algorithm (Selectsort); now we have managed to complete the task in O(n) time, simply using the fact that each list is already sorted.

1.5 Mergesort

The following algorithm gives another method to sort an array of n integers. This method uses the Merge routine described above. The algorithm calls itself several times (with different arguments) and hence it is a recursive algorithm. Contrast this with Selectsort, which called some other algorithm. Recursion is a powerful tool both in algorithm design and in algorithm description. As we will see later, the following recursive algorithm is significantly better than the Selectsort algorithm described earlier.

Procedure Mergesort(A[1..n])
begin
  if n <> 1 then
    Mergesort(A[1..n div 2])
    Mergesort(A[n div 2 + 1..n])
    Merge(A[1..n div 2], A[n div 2 + 1..n], B[1..n])
    for i = 1 to n do A[i] := B[i]
  endif
end

The algorithm basically divides the given array into two halves, recursively sorts the two halves, and then uses the Merge routine to complete the sort. The Merge routine actually produces the merged list in a different array B. Hence, in the last statement, we copy the elements of B back into the array A. This method is also an example of the classic divide and conquer algorithm design technique. The idea here is to divide the problem into parts, solve each part separately, and then combine the solutions of the parts. Note that though the actual execution of the recursive calls goes through many more recursive calls, we don't have to worry about them in the description. Thus a recursive program usually has a compact description.
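As an illustration, the Merge and Mergesort routines above might be rendered in Python as follows. This is a sketch, not the author's code: it uses 0-based indexing and returns a new sorted list rather than sorting the array in place.

```python
def merge(a, b):
    """Merge two sorted lists a and b into one sorted list."""
    c = []
    i = j = 0
    while i < len(a) and j < len(b):
        if a[i] < b[j]:
            c.append(a[i]); i += 1
        else:
            c.append(b[j]); j += 1
    # One list is exhausted; copy the remainder of the other.
    c.extend(a[i:])
    c.extend(b[j:])
    return c

def mergesort(a):
    """Sort a list by the divide and conquer scheme of Section 1.5."""
    if len(a) <= 1:          # tail condition
        return a[:]
    mid = len(a) // 2
    return merge(mergesort(a[:mid]), mergesort(a[mid:]))
```

For example, mergesort([5, 2, 8, 1]) returns [1, 2, 5, 8]. Note how the tail condition len(a) <= 1 plays the role of the n <> 1 test in the pseudocode.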
Note also that the first step in the program is the end (or tail) condition. It is important to have a tail condition in a recursive program; otherwise the program can get into an infinite loop. Now let us get into the analysis of the algorithm. Again, for simplicity, let us count the number of comparisons made by the algorithm in the worst case. Note that the recursion introduces some complication, as we cannot do the analysis as we did so far. But it is easy

to write what is called a recurrence relation for the number of comparisons performed. Let T(n) be the number of comparisons made by the algorithm. Then it is easy to see that

  T(1) = 0  and
  T(n) = T(ceil(n/2)) + T(floor(n/2)) + n - 1,  for n >= 2,

since the Merge routine performs at most n-1 comparisons to merge two lists, one of size ceil(n/2) and the other of size floor(n/2). (Note that, the way we described it, the Merge routine merges two lists each of size n using at most 2n-1 comparisons. But it can easily be modified to merge a list of size m and one of size n using at most m + n - 1 comparisons.)

Now, how do we solve such a recurrence relation? The study of recurrence relations forms a major part of the analysis of algorithms. Note that, for now, we are not interested in solving the recurrence relation exactly. We will be happy with a tight enough estimate which will help us compare with Selectsort.

To solve the recurrence relation, let us first assume that n = 2^k for some integer k, i.e. n is an exact power of 2. Then, happily, we will have

  T(n) = 2 T(n/2) + n - 1,  for n >= 2.

Now expand the recurrence relation, using the fact that T(n/2) is 0 if n = 2, and 2 T(n/4) + n/2 - 1 otherwise. If we keep expanding the relation all the way until the term on the right hand side becomes T(1), for which we know the answer, we will have

  T(n) = (n - 1) + (n - 2) + (n - 4) + ... + (n - 2^(k-1)).

Since k = log_2 n, we have T(n) = nk - (2^k - 1) = n log n - n + 1.

What if n is not a power of 2? Let 2^k < n < 2^(k+1). Using our Select routine, first find the largest element in the given array. Then add 2^(k+1) - n dummy elements, each equal to a number larger than that largest element, to make the number of elements in the array 2^(k+1). Now we apply the Mergesort routine to this array. Clearly, the first n elements of the sorted list form the required sorted output.
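The closed form can be checked mechanically against the recurrence. The snippet below (illustrative only) evaluates T(n) directly from the recurrence and compares it with n log n - n + 1 at powers of two:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def T(n):
    """Worst-case Mergesort comparisons: T(1)=0, T(n)=T(ceil(n/2))+T(floor(n/2))+n-1."""
    if n == 1:
        return 0
    return T((n + 1) // 2) + T(n // 2) + n - 1

# At n = 2^k the recurrence matches the closed form n*k - n + 1 exactly.
for k in range(1, 11):
    n = 2 ** k
    assert T(n) == n * k - n + 1
```

The memoisation via lru_cache is only to keep the check fast; it does not affect the values computed.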
Now, from our earlier analysis, we have that the algorithm performs at most 2^(k+1) (k+1) - 2^(k+1) + 1 + (n - 1) comparisons (the n-1 comparisons are to find the largest element in the list initially). Since k < log n and 2^(k+1) < 2n, the algorithm performs at most 2n(log n + 1) - 2n + n comparisons, which is less than 2n(log n + 1). So Mergesort is an O(n log n) algorithm.

Recall that Selectsort performed n(n-1)/2 comparisons in the worst case. It is easy to see that Mergesort outperforms Selectsort for n >= 32. You should note that Selectsort performs significantly fewer moves than Mergesort. Also, Mergesort uses extra storage space of up to n locations (for the B array), whereas Selectsort uses very few extra variables. Despite all this, Mergesort performs significantly better than Selectsort. It is also known that any algorithm that sorts by comparisons must perform at least n log n comparisons in the worst case. So Mergesort is, in some sense, an optimal sorting algorithm.

[Footnote 3: Hereafter we will omit the base 2 when we talk about log n and assume that all logarithms are to base 2.]

Exercise 1:

1. Given an array of n integers, suppose you want to find the largest as well as the smallest element in the array. Give an algorithm (a simple pseudocode) to perform this task. How many comparisons does your algorithm use in the worst case? [There is a simple divide and conquer algorithm that performs ceil(3n/2) - 2 comparisons in the worst case. But as a warm up, you may want to start with a simpler algorithm which could perform more comparisons.]

2. Given a sorted array A of integers and an integer x, suppose you want to search whether x is in the given array.

(a) One method (which does not use the fact that the given array is sorted) is to simply scan the entire array, checking whether any element is equal to x. Write a simple pseudocode for this method. How many comparisons does your algorithm use?

(b) Another popular method, called binary search, works as follows. Compare x with the middle element A[n div 2]. If x <= A[n div 2] then recursively search in the first half of the array; otherwise recursively search for x in the second half of the array. Write a proper pseudocode (with tail conditions etc.) for this method, and find the number of comparisons performed by this algorithm. Can you modify the algorithm into a non-recursive one?

1.6 Summary

In this section, we have dealt with various algorithm design and analysis tools through some simple but important data processing problems: selecting the smallest element from a list, sorting a list and merging two sorted lists. Through these examples, we introduced the big Oh (O) notation, recursive algorithms, the divide and conquer algorithm design technique, recurrence relations, algorithm analysis and lower bounds.

2 Introduction to Data Structures

The way in which the data used by an algorithm is organized has an effect on how the algorithm performs. In some algorithms, a certain kind of operation on the input is used more frequently.
So a clever method of organizing the data to support those operations efficiently will result in an overall improvement of the algorithm. What is the efficient way to organize the data to support a given set of operations? This is the question addressed in the study of Data Structures.

2.1 Arrays and Linked Lists

Throughout the last section, we assumed that the input is an array, where the elements are in contiguous locations and any location can be accessed in constant time. Sometimes it may be an inconvenience that the elements have to be in contiguous locations. Consider the binary search example in the last section. Suppose you discover that the given element x is not in the sorted array, and you want to insert it into the array. Suppose you also find that x has to be in the fourth location to maintain the sorted order. Then you have to move all elements from the fourth location onwards one position to the right to make room for x. It

would have been nice if we could simply grab an arbitrary empty cell of the array, insert the new element, and somehow logically specify that that element comes after the third location and before the present fourth location. A linked list has the provision to specify next and previous elements in such a way.

A linked list is a data structure in which the objects are arranged in a linear order. Unlike an array, where the linear order is determined by the array indices, the order in a linked list is determined by a pointer in each object. So each element x of a linked list has two fields, a data field x.key and a pointer field x.next. The pointer field points to the location containing (i.e. gives the address of) the next element in the linear order. The next pointer of the last element is NIL. An attribute head[L] points to the first element of the list L. If head[L] = NIL then the list is empty.

The following pseudocode gives a procedure to search for an element x in a given linked list L.

Procedure listsearch(L, x)
begin
  p := head[L]
  while p <> NIL and p.key <> x do
    p := p.next
  endwhile
  if p <> NIL then return (p)
  else return ('x not found')
end

(Note that the test p <> NIL comes first, so that p.key is never examined when p is NIL.)

The following pseudocode gives a procedure to insert an element x, not already present, into a sorted list of elements arranged in a linked list.

Procedure sortedlistinsert(L, x)
begin
  p := head[L]
  if p = NIL or p.key >= x then
    new(cell)
    cell.key := x; cell.next := p
    head[L] := cell
  else
    while p <> NIL and p.key < x do
      q := p
      p := p.next
    endwhile
    new(cell)
    cell.key := x; cell.next := p
    q.next := cell
  endif
end

It is easy to see that the insert procedure takes only constant time after the position to insert at is found by the initial scan (the while loop). Coming back to the binary search example, we observed that if the sorted list is arranged in a linked list, then insertion is easy once the position to insert at is found. But note that doing a binary search in a linked list is significantly harder. This is because the access

to any element in the linked list is always through the header head[L]. So if we have to search as well as insert, then a more sophisticated data structure is required to perform these operations in O(log n) time. We will see more about this later.

With each element x in the list, we could also have another pointer x.prev pointing to the location of the previous element. Such a list is called a doubly linked list. If we have previous pointers available, we need not have used the variable q in the above code; we could have obtained the final q using p.prev.

Graph Representations

In this section we will give two representations of a graph and discuss their advantages and disadvantages for various problems.

Adjacency Matrix: The standard way to represent a graph G is by its adjacency matrix A. If G has n vertices 1 to n, then A is an n by n matrix with A[i, j] = 1 if the vertices i and j are adjacent and 0 otherwise. If there are m edges in the graph, then the adjacency matrix will have 1 in 2m locations. An adjacency matrix can be represented easily as a two dimensional array: as an array [1..n] of arrays [1..n] of integers, or equivalently as an array [1..n, 1..n] of integers. We can access the entry for (i, j) by calling A[i, j].

Adjacency List: In an adjacency list, we simply have an array A[1..n] of linked lists. The linked list A[i] contains the vertices adjacent to the vertex i, arranged in some order (say, increasing order). Essentially, each linked list in an adjacency list corresponds to the list of entries having 1 in the corresponding row of the adjacency matrix.

There are several graph operations for which an adjacency matrix representation of a graph is better, and there are some for which an adjacency list representation is better. For example, suppose we want to know whether vertex i is adjacent to vertex j. In an adjacency matrix, this can be performed in constant time by simply probing A[i, j].
In an adjacency list, however, we have to scan the i-th list to see whether vertex j is present, or scan the j-th list to see whether vertex i is present. This could potentially take O(n) time (if, for example, the graph is n/2-regular). On the other hand, suppose you want to find all the neighbours of a vertex in a graph. In an adjacency matrix this takes O(n) time, whereas in an adjacency list it takes time proportional to the degree of that vertex. This could amount to a big difference in a sparse graph (where there are very few edges), especially when the neighbours of all vertices have to be traversed. We will see more examples of the use of an adjacency list over an adjacency matrix later, as well as in the exercises below.

Exercise 2.1:

1. It is well known that a graph is Eulerian if and only if it is connected and every vertex has even degree. Give an algorithm to verify whether a given graph on n vertices and m edges is Eulerian. How much time does your algorithm take in the worst case if (a) the graph is given by its adjacency matrix and (b) the graph is given by its adjacency list?

2. A directed graph can be represented by an adjacency matrix A in a similar way:

A[i, j] = 1 if there is an edge from i to j, A[i, j] = -1 if there is an edge from j to i, and A[i, j] = 0 if there is no edge between i and j.

Tournaments form a special class of directed graphs in which there is a directed edge between every pair of vertices. So we can simplify the definition of the adjacency matrix of a tournament: A[i, j] = 1 if there is an edge from i to j, and A[i, j] = 0 if there is an edge from j to i.

In a very similar way, we can also represent a tournament by an adjacency list: an array of linked lists, where list i contains all vertices j to which there is a directed edge from i. Given a tournament, your problem is to find a hamiltonian path (a directed path passing through all vertices exactly once) in the tournament by the following three algorithms. For each of the three algorithms, write a pseudocode and analyse the running time of your algorithm assuming (a) the tournament is represented by an adjacency matrix as above and (b) the tournament is represented by an adjacency list.

(a) (By induction and linear search) Remove any vertex x of the tournament. Find a hamiltonian path P in the resulting tournament recursively. Find, in P, two consecutive vertices i and j such that there is an edge from i to x and from x to j, by finding the edge direction between x and every vertex of P. Insert x in P between i and j to obtain a hamiltonian path in the original graph.

(b) (By induction and binary search) The algorithm is the same as above, except that to find the consecutive vertices i and j, try to do a binary search.

(c) (By the divide and conquer technique) It is known that every tournament on n vertices has a mediocre vertex, a vertex whose outdegree and indegree are both at least n/4. Find a mediocre vertex v in the given tournament (use any naive and simple method for this). Find the set of vertices I from which there is an edge to v and the set of vertices O to which there is an edge from v. I and O individually induce tournaments.
Find hamiltonian paths P and Q recursively in I and in O respectively. Insert v between P and Q to obtain a hamiltonian path in the given tournament. How much time does your algorithm take if a mediocre vertex can be found in a tournament in O(n) time?

2.2 Trees

In a linked list, we saw that though the actual memory locations may be scattered, pointers can be used to specify a linear order among them. It turns out that using pointers one can represent structures that form more than a linear order. A binary tree, or more specifically a rooted binary tree, is another fundamental and useful data structure. A binary tree T has a root, root[T]. Each node or element x in the tree has several fields, including the data field key[x], along with several pointers. The pointer p[x] refers to the parent of the node x (p[x] is NIL for the root node), l[x] is a pointer to the left child of the node x, and similarly r[x] is a pointer to the right child of the node x. If node x

has no left child, then l[x] is NIL, and similarly for the right child. If root[T] is NIL then the tree is empty. There are also several ways of representing a rooted tree in which each node can have more than two children. Since we will not be needing them in these lectures, we won't deal with them here.

2.3 Stacks and Queues

From the fundamental data structures arrays and linked lists, one can build more sophisticated data structures to represent the input, based on the operations to be performed on it. Stacks and queues are data structures representing a set of elements where elements can be inserted or deleted, but the location at which an element is inserted or deleted is prespecified.

Stack: In a stack, the element to be deleted is the one most recently inserted. Insertion into a stack is often called a push operation and deletion from a stack is called a pop operation. Essentially, the push and pop operations are performed at one end of the stack. The stack implements the last-in-first-out or LIFO policy.

Queue: In a queue, the element to be deleted is the one which has been in the set for the longest time. Insertion into a queue is often called an enqueue operation and deletion from a queue is called a dequeue operation. The enqueue operation is performed at one end of the queue, and the dequeue operation is performed at the other end. The queue implements the first-in-first-out or FIFO policy.

Implementation of Queues and Stacks: It is easy to implement queues and stacks using arrays or linked lists. To implement a stack, we use a special variable to keep track of the top of the list. During a push operation, the top pointer is incremented and the new element is added. Similarly, during a pop operation, the element pointed to by the top pointer is deleted and the top pointer is decremented. Similarly, to implement a queue, we need to keep track of the head and tail of the list.
During an enqueue operation, the tail pointer is advanced and the new element is inserted at the tail. During a dequeue operation, the element pointed to by the head pointer is deleted and the head pointer is advanced.

2.4 Binary Search Trees and Heaps

The structures we have defined so far simply give an implementation of a specific arrangement of memory locations. In a typical usage of these structures, there is also some order among the data fields of the elements in those locations. Binary search trees and heaps are binary trees where the data fields of the nodes satisfy some specific properties.

Binary Search Trees

A binary search tree is a binary tree where each node x has a key key[x] with the property that key[x] is larger than or equal to key[y] for all nodes y in the left subtree of x, and less than or equal to key[z] for all nodes z in the right subtree of x. There is a pointer to the root of the binary search tree.

Suppose all the given n integers are arranged in a binary search tree. Then there is a clever way to search for an element y. We compare y with key[root]. If y = key[root] we stop. Otherwise, if y < key[root], then we can abandon searching the right subtree of the root and search only in the left subtree, and similarly if y > key[root]. This procedure is applied recursively to the appropriate subtree of the root, and the entire search involves traversing a root-to-leaf path. If we reach a NIL pointer without finding y, then y is not in the list.

So the time taken by the above algorithm is proportional to the length of a root-to-leaf path, which in the worst case is the height of the tree. If the binary tree is a complete binary tree, where the tree is filled on all levels except possibly the lowest, which is filled from the left up to a point, then it is easy to see that such a binary tree on n nodes has height O(log n). So if the given n elements are arranged in a binary search tree on a complete binary tree, then searching for an element y in the list takes O(log n) time.

In fact, if we reach a NIL pointer without finding y, the NIL pointer specifies the location where y would have to be if it were in the list. So we can simply insert y in that location (updating the appropriate pointers), as we would insert an element in a linked list. So insert also takes O(log n) time in such a binary search tree. There is one catch. Suppose we start from a binary search tree where the underlying binary tree is complete, and keep inserting elements. After a few insertions, the tree will no longer remain complete, and so the height may no longer be O(log n). The solution is to keep the tree balanced after every insertion.
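The search-then-insert behaviour described above can be sketched in Python as follows. This is an illustrative sketch only: the node fields mirror key[x], l[x] and r[x] from the text, the parent pointer p[x] is omitted for brevity, and duplicate keys are simply ignored.

```python
class Node:
    def __init__(self, key):
        self.key = key
        self.left = None    # l[x]
        self.right = None   # r[x]

def search(root, y):
    """Follow a root-to-leaf path; return the node holding y, or None."""
    while root is not None and root.key != y:
        root = root.left if y < root.key else root.right
    return root

def insert(root, y):
    """Insert y at the NIL position where the search for y falls off the tree."""
    if root is None:
        return Node(y)
    if y < root.key:
        root.left = insert(root.left, y)
    elif y > root.key:
        root.right = insert(root.right, y)
    return root
```

Both operations traverse a single root-to-leaf path, so their cost is proportional to the height of the tree, O(log n) when the tree is kept balanced.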
There are several balanced binary search trees in the literature; implementations of insert or delete on these trees involve additional operations to keep the tree balanced after a simple insert or delete.

Heaps

A (min) heap is a complete binary tree (where all levels of the tree are filled, except possibly the bottom one, which is filled from the left up to a point) where each node x has a key key[x] with the property that key[x] is less than or equal to the key values of its children. There is a pointer to the root, as well as to the first empty (NIL) location at the bottom level of the heap. It follows that the root of the heap contains the smallest element of the list. The key values in the tree are said to satisfy the heap order. Since a heap is stored in a complete binary tree, the key values of all the nodes can be stored in a simple array by reading the key values level by level.

To insert an element into a heap, insert it at the first empty location at the bottom level. Compare it with its parent and swap the two if the heap property is violated. Then this procedure is applied recursively at the parent, until the heap property with the node's parent is satisfied. The execution simply involves traversing (part of) a leaf-to-root path, and hence takes O(log n) time, since the height of the tree is O(log n).

To delete the minimum element from the heap, simply delete the root and replace it with the element at the last nonempty node of the bottom level; the pointer to the first empty location at the bottom level is updated to point to this vacated node. Now there may be a heap order violation at the root. Again, this is fixed by traversing down the tree towards a leaf. This process also takes O(log n) time. Thus the heap is an efficient data structure to support a priority queue, where elements are inserted and only the smallest element is deleted.
We will come across graph problems where the only operations performed on the data list are to insert an integer into the list and to delete the smallest element in the list. For such applications, a heap is a useful data structure to represent the data list.
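The two heap operations can be sketched on the array representation as follows. This is an illustrative sketch using Python's 0-based indexing, so the children of position i are at 2i+1 and 2i+2 and the parent is at (i-1) div 2, rather than the 1-based positions 2x, 2x+1 and floor(x/2) used in the exercises below.

```python
def heap_insert(h, x):
    """Append x at the first empty slot, then sift it up towards the root."""
    h.append(x)
    i = len(h) - 1
    while i > 0 and h[(i - 1) // 2] > h[i]:
        h[i], h[(i - 1) // 2] = h[(i - 1) // 2], h[i]
        i = (i - 1) // 2

def heap_delete_min(h):
    """Remove the root, move the last element there, then sift it down."""
    smallest = h[0]
    h[0] = h[-1]
    h.pop()
    i, n = 0, len(h)
    while True:
        c = 2 * i + 1                      # left child
        if c >= n:
            break
        if c + 1 < n and h[c + 1] < h[c]:  # pick the smaller child
            c += 1
        if h[i] <= h[c]:                   # heap order restored
            break
        h[i], h[c] = h[c], h[i]
        i = c
    return smallest
```

Each operation walks one root-to-leaf (or leaf-to-root) path, hence O(log n) per insert or delete-min, matching the analysis above.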

Exercise 2.2:

1. Given a binary tree, there are various ways in which we can traverse all the nodes of the tree. A preorder traversal of a binary tree visits the root, followed by a preorder traversal of the left subtree of the root, followed by a preorder traversal of the right subtree. An inorder traversal of a binary tree recursively performs an inorder traversal of the left subtree of the root, then visits the root, followed by an inorder traversal of the right subtree of the root. A postorder traversal of a binary tree recursively performs a postorder traversal of the left subtree of the root, followed by a postorder traversal of the right subtree of the root, followed by the root. Write pseudocodes for each of the three traversals of a binary tree (with proper end conditions etc.), and analyse their running times.

2. Given n integers arranged in a binary search tree, how can we produce the integers in sorted order in O(n) time?

3. Given an array of n integers, it can be considered as the key values of a complete binary tree, read level by level from left to right. Such an arrangement in a complete binary tree can be made into a heap as follows. Starting from the bottom level and moving up to the root level, make the subtree rooted at every node in that level a heap. The subtree rooted at a node x in a level can be made into a heap as follows: both subtrees of x already form heaps by themselves. If key[x] is smaller than or equal to the key values of its children, then the subtree rooted at x is already a heap. Otherwise, swap key[x] with the key value of the smaller child, and continue this process down from the node containing the smaller child. Write a proper pseudocode for this algorithm. Prove that this algorithm takes O(n) time in the worst case. (Note that in the array representation of the complete binary tree, the left child of the node at position x is at position 2x and the right child at position 2x + 1. Similarly, the parent of the node at position x is at position floor(x/2).)

4.
Give an O(log n) algorithm to perform the following operations on a heap: (a) delete an arbitrary element specified by its location in the tree; (b) replace the value of an element by a smaller value.

5. Given an array of n integers, give an algorithm to sort them into increasing order using the heap data structure in O(n log n) time. (Hint: in the first step, convert the array into a representation of a heap in O(n) time using the previous exercise. The smallest element will be in the first location. Then, if you perform a suitable heap operation n times, you get a sorted list.)

2.5 Summary

In this section, we had an introduction to fundamental data structures: arrays, linked lists, trees, stacks, queues, heaps and binary search trees. In graph representations, we saw an application of arrays versus linked lists. We will see applications of the other data structures in subsequent sections.

3 Graph Search Algorithms

In this section, we will look at algorithms for several fundamental problems on graphs. In the initial step of these algorithms, the vertices and edges of the graph are traversed in a specific way. There are two basic methods to systematically visit the vertices and edges of a graph: breadth-first search and depth-first search. We begin with a description of these methods.

3.1 Breadth First Search

Here, we visit the vertices of the graph in a breadthwise fashion. The idea is to start from a vertex, then visit all its neighbours, then the neighbours of its neighbours, and so on in a systematic way. A queue is an ideal data structure in which to record the neighbours of a vertex (at the other end) once the vertex is visited. See [3] for a more detailed description. Note that, as BFS is described there, the search visits only the vertices reachable from the starting vertex s. If the graph is disconnected, one can pick an arbitrary vertex from another component (i.e. a vertex which is still coloured white at the end of BFS(s)) and grow a BFS starting at that vertex. If this process is repeated until no vertex is coloured white, the search will have visited all the vertices and edges of the graph.

Applications

The following properties of the BFS algorithm are useful.

1. For vertices v reachable from s, d[v] gives the length of the shortest path from s to v. Furthermore, a shortest path from s to v can be obtained by extending a shortest path from s to π[v] using the edge (π[v], v).
Though this result is quite intuitive from the way the BFS algorithm works, it can be proved rigorously using induction.

2. For a graph G = (V, E) with source s, the predecessor graph Gπ = (Vπ, Eπ) defined by Vπ = {v ∈ V : π[v] ≠ NIL} ∪ {s} and Eπ = {(π[v], v) ∈ E : v ∈ Vπ − {s}} is a tree, called the breadth-first tree. Furthermore, for all vertices v ∈ Vπ (the vertices reachable from s), the unique simple path from s to v in Gπ is a shortest path from s to v in G.

3. If (i, j) ∈ E is not an edge in the breadth-first tree, then either d[i] = d[j], or d[i] = d[j] + 1, or d[i] = d[j] − 1.

Connectivity: To verify whether the given graph G is connected, simply grow a BFS starting at any vertex s. If any vertex is still coloured white after BFS(s), then G is disconnected; otherwise G is connected. Note that by growing a BFS starting at white vertices, we can
also obtain the connected components of the graph. Thus in O(n + m) time it can be verified whether a given graph on n vertices and m edges is connected, and if it is not, the connected components of the graph can be obtained in the same time.

Shortest Paths: As we have observed in properties 1 and 2 above, the length of the shortest path 4 from a given vertex s can be obtained simply from a BFS. In addition, BFS also outputs a breadth-first tree, from which one can obtain a shortest path from s to every vertex reachable from s.

Spanning Forest: To find a spanning tree of a connected graph, simply perform a breadth-first search starting at any vertex. The breadth-first tree gives a spanning tree of the graph if the graph is connected. If the graph is disconnected, we obtain a breadth-first forest, which is a spanning forest.

Exercise 3.1:

1. What is the running time of BFS if its input graph is represented by an adjacency matrix and the algorithm is modified to handle this form of input?

2. Give an efficient algorithm to determine if an undirected graph is bipartite. (Hint: Property 3 discussed above may be useful here.)

3. Give an efficient algorithm to find the shortest cycle in the graph. How much time does your algorithm take?

3.2 Depth First Search

Here the idea is to visit the vertices in a depthwise fashion. Starting from a vertex, we first visit ONE of its neighbours, then one of ITS neighbours, and so on until the current vertex has no unvisited neighbours. At that point, we backtrack and continue visiting the next unvisited neighbour of an earlier vertex, and so on. A stack (last in, first out) is a good data structure in which to record the neighbours of the visited vertices in this procedure. See [3] for more details and some applications of DFS. One of the early applications of DFS is to obtain the strongly connected components of a graph. The reader is directed to the books in the References section for details.

Exercise 3.2:

1.
Give an algorithm that determines whether or not a given undirected graph on n vertices contains a cycle. Your algorithm should run in O(n) time (independent of the number of edges in the graph).

2. Another way to perform topological sorting on a directed acyclic graph on n vertices and m edges is to repeatedly find a vertex of in-degree 0, output it, and remove it and all of its outgoing edges from the graph. Explain how to implement this idea so that the algorithm runs in O(n + m) time. What happens to this algorithm if the graph has cycles?

4 Here, we assume that the edges of the graph are unweighted. The shortest path problem on weighted graphs is discussed in the next section.
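The two search procedures of sections 3.1 and 3.2 can be sketched as follows. This is a Python sketch rather than the cited pseudocode; the graph is assumed to be a dict mapping each vertex to a list of its neighbours, and the function names are ours.

```python
from collections import deque

def bfs(adj, s):
    # Breadth first search from s. Returns d (number of edges on a shortest
    # path from s) and pi (the predecessor in the breadth-first tree).
    d, pi = {s: 0}, {s: None}
    queue = deque([s])            # neighbours are recorded at the other end
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in d:        # v is still "white" (unvisited)
                d[v] = d[u] + 1
                pi[v] = u
                queue.append(v)
    return d, pi

def dfs(adj, s):
    # Depth first search from s using an explicit stack (last in, first out).
    # Returns the set of vertices reachable from s.
    visited = set()
    stack = [s]
    while stack:
        u = stack.pop()
        if u not in visited:
            visited.add(u)
            for v in adj[u]:
                if v not in visited:
                    stack.append(v)
    return visited
```

After bfs(adj, s), d[v] is the shortest-path length of property 1 above, and following pi pointers back to s recovers a shortest path; vertices absent from d are exactly those not reachable from s.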

3.3 Summary

In this section, we looked at the basic search procedures on a graph: Breadth First Search (BFS) and Depth First Search (DFS). We saw applications of the data structures stacks and queues in these procedures. As applications of BFS and DFS, we described algorithms to test whether a given graph is connected, and if so to find a spanning tree of the graph; to find the connected components and/or a spanning forest of the graph if it is disconnected; to find the length of the shortest path from a source vertex to every vertex reachable from it; and to perform a topological sort of a directed acyclic graph. Using BFS and DFS, we observed that these problems can be solved in O(m + n) time on a graph with n vertices and m edges.

4 Efficient Algorithms on Weighted Graphs

Graph theory, and graph algorithms in particular, have several real-world applications. In modeling some of these applications as graphs, the edges of the graph typically carry real weights - these could denote distances between two points or some cost. The goal is to find some optimum structure (with respect to the weights) in the weighted graph. For example, in the last section we saw that in a connected graph, a spanning tree as well as the shortest paths to all vertices from a source can be found in O(m + n) time. However, when the edges have weights, the shortest path problem becomes a little more complicated. Similarly, instead of an arbitrary spanning tree, a spanning tree of minimum weight (the total weight of all edges in the tree) is of interest in some applications. We will look at algorithms for these problems in this section. Though we will not specifically discuss the representation of the graph, it suffices to represent it by an adjacency list where the weight of an edge (i, j) is kept with the vertex j in the adjacency list of i, as well as with the vertex i in the adjacency list of j.
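The representation just described can be sketched as follows; this is a Python sketch (a dict of lists of (neighbour, weight) pairs) and the helper name is ours.

```python
def add_edge(adj, i, j, w):
    # Undirected weighted edge (i, j): the weight w is kept with vertex j
    # in the adjacency list of i, and with vertex i in the list of j.
    adj.setdefault(i, []).append((j, w))
    adj.setdefault(j, []).append((i, w))

adj = {}
add_edge(adj, 'a', 'b', 4.0)
add_edge(adj, 'b', 'c', 2.5)
```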
4.1 Minimum Spanning Tree

Given an undirected graph with weights on the edges, the problem here is to find, among all spanning trees, one of minimum (total) weight. Note that there may be several spanning trees with the same weight. We give two classical algorithms, both following a greedy approach, to find a minimum spanning tree. The greedy heuristic is a well-known approach to optimization problems: it advocates making the choice that is best at the moment - i.e. a locally optimum choice. Such a strategy is not, in general, guaranteed to find a globally optimum solution. For the minimum spanning tree problem, however, we can prove that certain greedy heuristics do yield a spanning tree of minimum weight.

Kruskal's algorithm

An obvious greedy strategy is to sort the edges by their weights and to construct a spanning tree by considering edges in that order, adding those that do not form a cycle with the already constructed graph (a forest). This is the essence of Kruskal's algorithm. This algorithm actually produces a minimum spanning tree for the following reason (which can be easily proved rigorously). Let G = (V, E) be a connected undirected graph with each edge having a real number as its weight. Let A be a subset of E that is contained in some minimum spanning tree for
G. Let C be a connected component (a tree) in the forest GA = (V, A), and let (u, v) be an edge of minimum weight among those connecting C to some other component in GA. Then A ∪ {(u, v)} is contained in some minimum spanning tree for G.

The first, sorting, step can be performed in O(m log m) time using any of the efficient sorting algorithms described in the first section. After that, at any stage we have a forest of edges spread over several components. When a new edge is considered, it is added to the forest only if it joins vertices from two different components of the forest. There are efficient data structures to maintain such a forest in which one can check whether or not an edge lies inside one component in O(log m) time. Thus the overall algorithm can be implemented in O(m log m) time.

Prim's algorithm

Kruskal's algorithm constructs a spanning tree starting from a forest of n trees (single vertices), by adding at every iteration a minimum weight edge connecting two components; at any stage of Kruskal's algorithm, there is a forest. Prim's algorithm, on the other hand, always maintains a single connected component (a tree) C of edges, and grows the tree by adding a minimum weight edge connecting C to the rest of the graph. Initially C is a single vertex s from which the tree is grown. The correctness of Prim's algorithm also follows from the reason given above for the correctness of Kruskal's algorithm.

To implement Prim's algorithm, at every stage we need to find a minimum weight edge connecting C to the rest of the graph. Initially this is the edge of minimum weight among those incident on the starting vertex s. But at any intermediate stage C has several vertices, and each of those vertices may have several edges incident with it. Suppose, for every vertex u not in C, we keep track of the weight d[u] (and also the other end point) of the minimum weight edge connecting u to a vertex in C (d[u] will be ∞ if u is not adjacent to any vertex in C).
Then the vertex p (and the corresponding edge) with the minimum d[p] among all vertices not in C is the next vertex (and edge) to be included in C by Prim's algorithm. Once the vertex p is included in C, the d[] values of some vertices (those adjacent to p) need to be updated (more specifically, decreased). Thus we need a data structure to maintain a set of at most n values supporting, at any stage, the operations delete the minimum element (deletemin) and decrease the value of an element (decreasekey). It is easy to see that on a graph with n vertices and m edges, overall there will be n − 1 deletemin operations and at most m decreasekey operations (why?). A heap is an appropriate data structure for these operations. As we have already seen, a deletemin or a decreasekey operation can be performed in O(log n) time in a heap. Thus Prim's algorithm can be implemented to take O(m log n) time. There are more sophisticated variations of the heap (the Fibonacci heap) that take O(m) time to perform a sequence of m decreasekey operations, while taking O(n log n) time for n deletemin operations. Thus Prim's algorithm can be made to run in O(m + n log n) time. Note that for a dense graph where m is, say, n²/4, the former implementation takes O(n² log n) time whereas the latter takes O(n²) time. Notice the role played by data structures in the implementation of algorithms: both implementations above implement the same algorithm, but improvements are obtained by using more efficient data structures.
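Prim's algorithm with a binary heap can be sketched as follows. This is a Python sketch using the standard heapq module, which offers no decreasekey; as a standard substitute, the sketch pushes a fresh heap entry on each improvement and silently discards stale entries when they are popped, which keeps the O(m log n) bound. The function name is ours.

```python
import heapq

def prim(adj, s):
    # Prim's algorithm starting from vertex s on an adjacency-list graph
    # (adj[u] is a list of (neighbour, weight) pairs).
    # Returns the tree edges (parent, child, weight) and the total weight.
    in_tree = set()
    tree_edges = []
    total = 0
    heap = [(0, s, None)]                 # (edge weight, vertex, parent)
    while heap:
        w, u, p = heapq.heappop(heap)     # deletemin
        if u in in_tree:
            continue                      # stale entry: u was reached cheaper
        in_tree.add(u)
        if p is not None:
            tree_edges.append((p, u, w))
            total += w
        for v, wv in adj[u]:
            if v not in in_tree:
                # In place of decreasekey: push a new, possibly better entry.
                heapq.heappush(heap, (wv, v, u))
    return tree_edges, total
```

Changing the key pushed for v from the single-edge weight wv to a path length (the distance to u plus wv) would turn this into the shortest-path computation discussed in the next section.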

4.2 Shortest Paths

As we saw in the last section, the breadth-first search method gives the shortest paths from a source vertex in an unweighted graph (we could view the weight of each edge as one unit). There, the length of a path is simply the number of edges on the path. In a weighted graph, however, the length of a path is the sum of the weights of the edges on the path. It turns out that a small modification of Prim's minimum spanning tree algorithm, started at the vertex s, actually constructs the shortest path tree from s (if the edge weights are non-negative): for every vertex u outside the tree, instead of the weight of the cheapest single edge connecting u to the tree, we maintain the length of the shortest known path from s to u. The unique path from s to a vertex v in the tree so constructed is a shortest path from s to v. This is essentially Dijkstra's algorithm to find the shortest paths from a source. There are several variants of the problem. If some edge weights are negative, there is an algorithm due to Bellman and Ford that finds the shortest paths from a single source. Sometimes we may be interested in the shortest paths between every pair of vertices in the graph. This can be solved by applying the single source shortest path algorithm once for each vertex; but it can usually be solved faster, and its structure is of interest in its own right. See the references for more details of the algorithms for these variants.

4.3 Summary

In this section, we looked at algorithms for shortest paths and minimum spanning trees in weighted graphs. Along the way we also saw the role of data structures in designing efficient algorithms; in particular, we saw an application of heaps. We also came across the greedy heuristic - an algorithmic technique useful for designing exact or approximate solutions to optimization problems.

5 NP-completeness and Coping with it

Each of the graph (and other) problems we have seen so far has a very efficient algorithm - an algorithm whose running time is a polynomial in the size of the input. Are there problems that don't have such an efficient algorithm?
For example, we saw in the second section that we can test whether a graph on n vertices and m edges is Eulerian in O(m + n) time. How about checking whether a given graph is Hamiltonian, i.e. whether the graph has a simple path which visits all vertices exactly once? The lack of a nice characterization of Hamiltonian graphs (as there is for Eulerian graphs) apart, there is no known efficient algorithm for testing whether a graph is Hamiltonian. All known algorithms are essentially brute force, checking all possible paths in the graph, and take roughly O(nⁿ) time. This essentially means that solving this problem for a graph on even 50 vertices would take several centuries on today's powerful computers. Hence exponential time algorithms are considered infeasible or impractical in Computer Science. In practice, it also turns out that problems having polynomial time algorithms have runtimes of very low degree (< 4). So one is always interested in designing polynomial time algorithms. Can we show that the Hamiltonicity problem cannot have a polynomial time algorithm? Answering such questions takes us to the arena of lower bounds, which we briefly alluded to in the first section. Unfortunately, proving that a problem does not have an efficient algorithm is more difficult than coming up with an efficient algorithm. Fortunately, some evidence that the Hamiltonicity problem cannot have a polynomial
time algorithm is known. In 1971, Cook [2] identified a class of decision problems 5 called NP-complete problems. The class NP consists of decision problems for which there is a polynomial time algorithm to verify a YES answer, given a witness/proof for the answer. The NP-complete problems are those NP problems for which, if there is a polynomial time algorithm to compute the answer, then all problems in the class NP have polynomial time algorithms. Cook identified the first NP-complete problem - satisfiability of Boolean CNF formulas - proving its NP-completeness from first principles. Karp [9] in 1972 proved several graph problems NP-complete by reducing from the satisfiability problem, as well as from other problems he had already proved NP-complete. Now there is a list of thousands of NP-complete problems. A polynomial time algorithm for any one of them would imply a polynomial time algorithm for all NP problems. Similarly, a proof that any one of the NP-complete problems is not polynomial time solvable would imply that none of the NP-complete problems is polynomial time solvable. Whether all NP problems have polynomial time algorithms is an important open problem in the area of Algorithms and Complexity. Proving a problem NP-complete is considered evidence that the problem is not efficiently solvable. It is known that the decision version of the longest path problem (given an integer k, is there a simple path of length at least k in the graph?), along with the Independent Set problem (given an integer k, is there an independent set of size k?), the Clique problem (is there a clique of size k in the graph?) and the Dominating Set problem, is NP-complete. Since the Hamiltonian path problem is just this decision problem with k = n − 1, it follows that there is no efficient algorithm to find the longest path in a graph (unless all NP problems are polynomial time solvable). Contrast this to the problem of finding the shortest path in the graph (Lecture 2).
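The brute-force approach mentioned above - checking all possible paths - can be sketched as follows; this Python sketch (the function name is ours) tries every ordering of the vertices, and is clearly infeasible beyond very small n.

```python
from itertools import permutations

def has_hamiltonian_path(adj):
    # Try every ordering of the vertices and check whether consecutive
    # vertices are adjacent: a simple path visiting all vertices exactly
    # once exists iff some ordering passes.  Roughly n * n! work.
    vertices = list(adj)
    return any(all(order[i + 1] in adj[order[i]]
                   for i in range(len(order) - 1))
               for order in permutations(vertices))
```

Even at a billion orderings per second, 20 vertices already mean about 20! ≈ 2.4 × 10¹⁸ orderings - years of computation - which is the infeasibility discussed above.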
An Example Reduction: Proving a problem NP-complete involves first proving that the problem is in NP, and then efficiently (in polynomial time) reducing an instance of a known NP-complete problem to an instance of this problem in such a way that the original instance has a YES answer if and only if the constructed instance has a YES answer. Suppose the Clique problem - given a graph and an integer k, does it have a clique of size at least k? - has already been proved NP-complete. Using that result, we will show that the Vertex Cover problem - given a graph and an integer k, does it have a vertex cover (a set S of vertices such that every edge in the graph has at least one of its end points in S) of size at most k? - is NP-complete. The fact that the Vertex Cover problem is in NP is easy to show: to verify that the given graph has a vertex cover of size at most k, the witness is a set S of at most k vertices, and the verification algorithm simply goes through all edges to make sure that at least one end point of each is in S. To prove it NP-complete, we reduce the Clique problem to it. Given a graph G on n vertices (in which we are interested in solving the clique problem), construct its complement Gᶜ. G has a clique of size at least k if and only if Gᶜ has an independent set of size at least k. Gᶜ has an independent set of size at least k if and only if Gᶜ has a vertex cover of size at most n − k (since the complement of an independent set is a vertex cover). Thus G has a clique of size at least k if and only if Gᶜ has a vertex cover of size at most n − k. This shows that the Vertex Cover problem is NP-complete. Essentially we have shown that if the Vertex Cover problem has a polynomial time algorithm, then we can use it as a subroutine to solve the Clique problem, which has been proved NP-complete.

5 For some technical reason, this entire NP-completeness theory deals only with decision problems - problems for which only a YES or NO answer is required.
This is really not a constraint, as most optimization problems can be solved by calling an appropriate decision version a few times.
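The reduction can be made concrete as follows. In this Python sketch (the function names are ours), the Clique instance (G, k) is mapped to the Vertex Cover instance (Gᶜ, n − k); the brute-force vertex cover checker is only there to illustrate the equivalence on tiny graphs, not to suggest an efficient algorithm.

```python
from itertools import combinations

def complement(vertices, edges):
    # Edge set of the complement graph (edges given as frozensets).
    all_pairs = {frozenset(p) for p in combinations(vertices, 2)}
    return all_pairs - edges

def has_vertex_cover(vertices, edges, k):
    # Brute force: is there a set S of at most k vertices meeting every edge?
    return any(all(S & e for e in edges)
               for r in range(k + 1)
               for S in map(set, combinations(vertices, r)))

def has_clique(vertices, edges, k):
    # The reduction: G has a clique of size >= k  iff  the complement of G
    # has a vertex cover of size <= n - k.
    es = {frozenset(e) for e in edges}
    return has_vertex_cover(vertices, complement(vertices, es),
                            len(vertices) - k)
```

The reduction itself (building the complement) is clearly polynomial time; only the checkers are exponential.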


More information

Lecture Summary CSC 263H. August 5, 2016

Lecture Summary CSC 263H. August 5, 2016 Lecture Summary CSC 263H August 5, 2016 This document is a very brief overview of what we did in each lecture, it is by no means a replacement for attending lecture or doing the readings. 1. Week 1 2.

More information

Scribes: Romil Verma, Juliana Cook (2015), Virginia Williams, Date: May 1, 2017 and Seth Hildick-Smith (2016), G. Valiant (2017), M.

Scribes: Romil Verma, Juliana Cook (2015), Virginia Williams, Date: May 1, 2017 and Seth Hildick-Smith (2016), G. Valiant (2017), M. Lecture 9 Graphs Scribes: Romil Verma, Juliana Cook (2015), Virginia Williams, Date: May 1, 2017 and Seth Hildick-Smith (2016), G. Valiant (2017), M. Wootters (2017) 1 Graphs A graph is a set of vertices

More information

FINAL EXAM SOLUTIONS

FINAL EXAM SOLUTIONS COMP/MATH 3804 Design and Analysis of Algorithms I Fall 2015 FINAL EXAM SOLUTIONS Question 1 (12%). Modify Euclid s algorithm as follows. function Newclid(a,b) if a

More information

Undirected Graphs. DSA - lecture 6 - T.U.Cluj-Napoca - M. Joldos 1

Undirected Graphs. DSA - lecture 6 - T.U.Cluj-Napoca - M. Joldos 1 Undirected Graphs Terminology. Free Trees. Representations. Minimum Spanning Trees (algorithms: Prim, Kruskal). Graph Traversals (dfs, bfs). Articulation points & Biconnected Components. Graph Matching

More information

Week 12: Minimum Spanning trees and Shortest Paths

Week 12: Minimum Spanning trees and Shortest Paths Agenda: Week 12: Minimum Spanning trees and Shortest Paths Kruskal s Algorithm Single-source shortest paths Dijkstra s algorithm for non-negatively weighted case Reading: Textbook : 61-7, 80-87, 9-601

More information

Treewidth and graph minors

Treewidth and graph minors Treewidth and graph minors Lectures 9 and 10, December 29, 2011, January 5, 2012 We shall touch upon the theory of Graph Minors by Robertson and Seymour. This theory gives a very general condition under

More information

Solutions to Exam Data structures (X and NV)

Solutions to Exam Data structures (X and NV) Solutions to Exam Data structures X and NV 2005102. 1. a Insert the keys 9, 6, 2,, 97, 1 into a binary search tree BST. Draw the final tree. See Figure 1. b Add NIL nodes to the tree of 1a and color it

More information

CS301 - Data Structures Glossary By

CS301 - Data Structures Glossary By CS301 - Data Structures Glossary By Abstract Data Type : A set of data values and associated operations that are precisely specified independent of any particular implementation. Also known as ADT Algorithm

More information

CS8391-DATA STRUCTURES QUESTION BANK UNIT I

CS8391-DATA STRUCTURES QUESTION BANK UNIT I CS8391-DATA STRUCTURES QUESTION BANK UNIT I 2MARKS 1.Define data structure. The data structure can be defined as the collection of elements and all the possible operations which are required for those

More information

Theorem 2.9: nearest addition algorithm

Theorem 2.9: nearest addition algorithm There are severe limits on our ability to compute near-optimal tours It is NP-complete to decide whether a given undirected =(,)has a Hamiltonian cycle An approximation algorithm for the TSP can be used

More information

CS 341: Algorithms. Douglas R. Stinson. David R. Cheriton School of Computer Science University of Waterloo. February 26, 2019

CS 341: Algorithms. Douglas R. Stinson. David R. Cheriton School of Computer Science University of Waterloo. February 26, 2019 CS 341: Algorithms Douglas R. Stinson David R. Cheriton School of Computer Science University of Waterloo February 26, 2019 D.R. Stinson (SCS) CS 341 February 26, 2019 1 / 296 1 Course Information 2 Introduction

More information

End-Term Examination Second Semester [MCA] MAY-JUNE 2006

End-Term Examination Second Semester [MCA] MAY-JUNE 2006 (Please write your Roll No. immediately) Roll No. Paper Code: MCA-102 End-Term Examination Second Semester [MCA] MAY-JUNE 2006 Subject: Data Structure Time: 3 Hours Maximum Marks: 60 Note: Question 1.

More information

DESIGN AND ANALYSIS OF ALGORITHMS

DESIGN AND ANALYSIS OF ALGORITHMS DESIGN AND ANALYSIS OF ALGORITHMS QUESTION BANK Module 1 OBJECTIVE: Algorithms play the central role in both the science and the practice of computing. There are compelling reasons to study algorithms.

More information

DATA STRUCTURES AND ALGORITHMS

DATA STRUCTURES AND ALGORITHMS DATA STRUCTURES AND ALGORITHMS UNIT 1 - LINEAR DATASTRUCTURES 1. Write down the definition of data structures? A data structure is a mathematical or logical way of organizing data in the memory that consider

More information

& ( D. " mnp ' ( ) n 3. n 2. ( ) C. " n

& ( D.  mnp ' ( ) n 3. n 2. ( ) C.  n CSE Name Test Summer Last Digits of Mav ID # Multiple Choice. Write your answer to the LEFT of each problem. points each. The time to multiply two n " n matrices is: A. " n C. "% n B. " max( m,n, p). The

More information

Analysis of Algorithms, I

Analysis of Algorithms, I Analysis of Algorithms, I CSOR W4231.002 Eleni Drinea Computer Science Department Columbia University Thursday, March 8, 2016 Outline 1 Recap Single-source shortest paths in graphs with real edge weights:

More information

Department of Computer Science and Technology

Department of Computer Science and Technology UNIT : Stack & Queue Short Questions 1 1 1 1 1 1 1 1 20) 2 What is the difference between Data and Information? Define Data, Information, and Data Structure. List the primitive data structure. List the

More information

UNIT IV -NON-LINEAR DATA STRUCTURES 4.1 Trees TREE: A tree is a finite set of one or more nodes such that there is a specially designated node called the Root, and zero or more non empty sub trees T1,

More information

Basic Data Structures (Version 7) Name:

Basic Data Structures (Version 7) Name: Prerequisite Concepts for Analysis of Algorithms Basic Data Structures (Version 7) Name: Email: Concept: mathematics notation 1. log 2 n is: Code: 21481 (A) o(log 10 n) (B) ω(log 10 n) (C) Θ(log 10 n)

More information

DDS Dynamic Search Trees

DDS Dynamic Search Trees DDS Dynamic Search Trees 1 Data structures l A data structure models some abstract object. It implements a number of operations on this object, which usually can be classified into l creation and deletion

More information

CSC 373 Lecture # 3 Instructor: Milad Eftekhar

CSC 373 Lecture # 3 Instructor: Milad Eftekhar Huffman encoding: Assume a context is available (a document, a signal, etc.). These contexts are formed by some symbols (words in a document, discrete samples from a signal, etc). Each symbols s i is occurred

More information

Figure 1: A directed graph.

Figure 1: A directed graph. 1 Graphs A graph is a data structure that expresses relationships between objects. The objects are called nodes and the relationships are called edges. For example, social networks can be represented as

More information

8. Write an example for expression tree. [A/M 10] (A+B)*((C-D)/(E^F))

8. Write an example for expression tree. [A/M 10] (A+B)*((C-D)/(E^F)) DHANALAKSHMI COLLEGE OF ENGINEERING DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING EC6301 OBJECT ORIENTED PROGRAMMING AND DATA STRUCTURES UNIT IV NONLINEAR DATA STRUCTURES Part A 1. Define Tree [N/D 08]

More information

END-TERM EXAMINATION

END-TERM EXAMINATION (Please Write your Exam Roll No. immediately) Exam. Roll No... END-TERM EXAMINATION Paper Code : MCA-205 DECEMBER 2006 Subject: Design and analysis of algorithm Time: 3 Hours Maximum Marks: 60 Note: Attempt

More information

Test 1 Last 4 Digits of Mav ID # Multiple Choice. Write your answer to the LEFT of each problem. 2 points each t 1

Test 1 Last 4 Digits of Mav ID # Multiple Choice. Write your answer to the LEFT of each problem. 2 points each t 1 CSE 0 Name Test Fall 00 Last Digits of Mav ID # Multiple Choice. Write your answer to the LEFT of each problem. points each t. What is the value of k? k=0 A. k B. t C. t+ D. t+ +. Suppose that you have

More information

Optimization I : Brute force and Greedy strategy

Optimization I : Brute force and Greedy strategy Chapter 3 Optimization I : Brute force and Greedy strategy A generic definition of an optimization problem involves a set of constraints that defines a subset in some underlying space (like the Euclidean

More information

Analysis of Algorithms

Analysis of Algorithms Algorithm An algorithm is a procedure or formula for solving a problem, based on conducting a sequence of specified actions. A computer program can be viewed as an elaborate algorithm. In mathematics and

More information

Computer Science 385 Analysis of Algorithms Siena College Spring Topic Notes: Introduction and Overview

Computer Science 385 Analysis of Algorithms Siena College Spring Topic Notes: Introduction and Overview Computer Science 385 Analysis of Algorithms Siena College Spring 2011 Topic Notes: Introduction and Overview Welcome to Analysis of Algorithms! What is an Algorithm? A possible definition: a step-by-step

More information

Theory of Computing. Lecture 4/5 MAS 714 Hartmut Klauck

Theory of Computing. Lecture 4/5 MAS 714 Hartmut Klauck Theory of Computing Lecture 4/5 MAS 714 Hartmut Klauck How fast can we sort? There are deterministic algorithms that sort in worst case time O(n log n) Do better algorithms exist? Example [Andersson et

More information

( ) ( ) C. " 1 n. ( ) $ f n. ( ) B. " log( n! ) ( ) and that you already know ( ) ( ) " % g( n) ( ) " #&

( ) ( ) C.  1 n. ( ) $ f n. ( ) B.  log( n! ) ( ) and that you already know ( ) ( )  % g( n) ( )  #& CSE 0 Name Test Summer 008 Last 4 Digits of Mav ID # Multiple Choice. Write your answer to the LEFT of each problem. points each. The time for the following code is in which set? for (i=0; i

More information

Selection, Bubble, Insertion, Merge, Heap, Quick Bucket, Radix

Selection, Bubble, Insertion, Merge, Heap, Quick Bucket, Radix Spring 2010 Review Topics Big O Notation Heaps Sorting Selection, Bubble, Insertion, Merge, Heap, Quick Bucket, Radix Hashtables Tree Balancing: AVL trees and DSW algorithm Graphs: Basic terminology and

More information

Sankalchand Patel College of Engineering - Visnagar Department of Computer Engineering and Information Technology. Assignment

Sankalchand Patel College of Engineering - Visnagar Department of Computer Engineering and Information Technology. Assignment Class: V - CE Sankalchand Patel College of Engineering - Visnagar Department of Computer Engineering and Information Technology Sub: Design and Analysis of Algorithms Analysis of Algorithm: Assignment

More information

Algorithm Design and Analysis

Algorithm Design and Analysis Algorithm Design and Analysis LECTURE 4 Graphs Definitions Traversals Adam Smith 9/8/10 Exercise How can you simulate an array with two unbounded stacks and a small amount of memory? (Hint: think of a

More information

Minimum Spanning Trees

Minimum Spanning Trees Minimum Spanning Trees Problem A town has a set of houses and a set of roads. A road connects 2 and only 2 houses. A road connecting houses u and v has a repair cost w(u, v). Goal: Repair enough (and no

More information

Department of Computer Applications. MCA 312: Design and Analysis of Algorithms. [Part I : Medium Answer Type Questions] UNIT I

Department of Computer Applications. MCA 312: Design and Analysis of Algorithms. [Part I : Medium Answer Type Questions] UNIT I MCA 312: Design and Analysis of Algorithms [Part I : Medium Answer Type Questions] UNIT I 1) What is an Algorithm? What is the need to study Algorithms? 2) Define: a) Time Efficiency b) Space Efficiency

More information

) $ f ( n) " %( g( n)

) $ f ( n)  %( g( n) CSE 0 Name Test Spring 008 Last Digits of Mav ID # Multiple Choice. Write your answer to the LEFT of each problem. points each. The time to compute the sum of the n elements of an integer array is: # A.

More information

FINALTERM EXAMINATION Fall 2009 CS301- Data Structures Question No: 1 ( Marks: 1 ) - Please choose one The data of the problem is of 2GB and the hard

FINALTERM EXAMINATION Fall 2009 CS301- Data Structures Question No: 1 ( Marks: 1 ) - Please choose one The data of the problem is of 2GB and the hard FINALTERM EXAMINATION Fall 2009 CS301- Data Structures Question No: 1 The data of the problem is of 2GB and the hard disk is of 1GB capacity, to solve this problem we should Use better data structures

More information

Graph Algorithms. Chapter 22. CPTR 430 Algorithms Graph Algorithms 1

Graph Algorithms. Chapter 22. CPTR 430 Algorithms Graph Algorithms 1 Graph Algorithms Chapter 22 CPTR 430 Algorithms Graph Algorithms Why Study Graph Algorithms? Mathematical graphs seem to be relatively specialized and abstract Why spend so much time and effort on algorithms

More information

Dynamic-Programming algorithms for shortest path problems: Bellman-Ford (for singlesource) and Floyd-Warshall (for all-pairs).

Dynamic-Programming algorithms for shortest path problems: Bellman-Ford (for singlesource) and Floyd-Warshall (for all-pairs). Lecture 13 Graph Algorithms I 13.1 Overview This is the first of several lectures on graph algorithms. We will see how simple algorithms like depth-first-search can be used in clever ways (for a problem

More information

CS 8391 DATA STRUCTURES

CS 8391 DATA STRUCTURES DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING QUESTION BANK CS 8391 DATA STRUCTURES UNIT- I PART A 1. Define: data structure. A data structure is a way of storing and organizing data in the memory for

More information

A6-R3: DATA STRUCTURE THROUGH C LANGUAGE

A6-R3: DATA STRUCTURE THROUGH C LANGUAGE A6-R3: DATA STRUCTURE THROUGH C LANGUAGE NOTE: 1. There are TWO PARTS in this Module/Paper. PART ONE contains FOUR questions and PART TWO contains FIVE questions. 2. PART ONE is to be answered in the TEAR-OFF

More information

Algorithms: Lecture 10. Chalmers University of Technology

Algorithms: Lecture 10. Chalmers University of Technology Algorithms: Lecture 10 Chalmers University of Technology Today s Topics Basic Definitions Path, Cycle, Tree, Connectivity, etc. Graph Traversal Depth First Search Breadth First Search Testing Bipartatiness

More information

Dijkstra s Algorithm Last time we saw two methods to solve the all-pairs shortest path problem: Min-plus matrix powering in O(n 3 log n) time and the

Dijkstra s Algorithm Last time we saw two methods to solve the all-pairs shortest path problem: Min-plus matrix powering in O(n 3 log n) time and the Dijkstra s Algorithm Last time we saw two methods to solve the all-pairs shortest path problem: Min-plus matrix powering in O(n 3 log n) time and the Floyd-Warshall algorithm in O(n 3 ) time. Neither of

More information

CHENNAI MATHEMATICAL INSTITUTE M.Sc. / Ph.D. Programme in Computer Science

CHENNAI MATHEMATICAL INSTITUTE M.Sc. / Ph.D. Programme in Computer Science CHENNAI MATHEMATICAL INSTITUTE M.Sc. / Ph.D. Programme in Computer Science Entrance Examination, 5 May 23 This question paper has 4 printed sides. Part A has questions of 3 marks each. Part B has 7 questions

More information

Name: Lirong TAN 1. (15 pts) (a) Define what is a shortest s-t path in a weighted, connected graph G.

Name: Lirong TAN 1. (15 pts) (a) Define what is a shortest s-t path in a weighted, connected graph G. 1. (15 pts) (a) Define what is a shortest s-t path in a weighted, connected graph G. A shortest s-t path is a path from vertex to vertex, whose sum of edge weights is minimized. (b) Give the pseudocode

More information

Chapter 3 Trees. Theorem A graph T is a tree if, and only if, every two distinct vertices of T are joined by a unique path.

Chapter 3 Trees. Theorem A graph T is a tree if, and only if, every two distinct vertices of T are joined by a unique path. Chapter 3 Trees Section 3. Fundamental Properties of Trees Suppose your city is planning to construct a rapid rail system. They want to construct the most economical system possible that will meet the

More information

CIS 121 Data Structures and Algorithms Midterm 3 Review Solution Sketches Fall 2018

CIS 121 Data Structures and Algorithms Midterm 3 Review Solution Sketches Fall 2018 CIS 121 Data Structures and Algorithms Midterm 3 Review Solution Sketches Fall 2018 Q1: Prove or disprove: You are given a connected undirected graph G = (V, E) with a weight function w defined over its

More information

CSE373: Data Structures & Algorithms Lecture 28: Final review and class wrap-up. Nicki Dell Spring 2014

CSE373: Data Structures & Algorithms Lecture 28: Final review and class wrap-up. Nicki Dell Spring 2014 CSE373: Data Structures & Algorithms Lecture 28: Final review and class wrap-up Nicki Dell Spring 2014 Final Exam As also indicated on the web page: Next Tuesday, 2:30-4:20 in this room Cumulative but

More information

Graph Algorithms. Definition

Graph Algorithms. Definition Graph Algorithms Many problems in CS can be modeled as graph problems. Algorithms for solving graph problems are fundamental to the field of algorithm design. Definition A graph G = (V, E) consists of

More information

Trees. 3. (Minimally Connected) G is connected and deleting any of its edges gives rise to a disconnected graph.

Trees. 3. (Minimally Connected) G is connected and deleting any of its edges gives rise to a disconnected graph. Trees 1 Introduction Trees are very special kind of (undirected) graphs. Formally speaking, a tree is a connected graph that is acyclic. 1 This definition has some drawbacks: given a graph it is not trivial

More information

Course Review. Cpt S 223 Fall 2009

Course Review. Cpt S 223 Fall 2009 Course Review Cpt S 223 Fall 2009 1 Final Exam When: Tuesday (12/15) 8-10am Where: in class Closed book, closed notes Comprehensive Material for preparation: Lecture slides & class notes Homeworks & program

More information

Tutorial. Question There are no forward edges. 4. For each back edge u, v, we have 0 d[v] d[u].

Tutorial. Question There are no forward edges. 4. For each back edge u, v, we have 0 d[v] d[u]. Tutorial Question 1 A depth-first forest classifies the edges of a graph into tree, back, forward, and cross edges. A breadth-first tree can also be used to classify the edges reachable from the source

More information

Solving problems on graph algorithms

Solving problems on graph algorithms Solving problems on graph algorithms Workshop Organized by: ACM Unit, Indian Statistical Institute, Kolkata. Tutorial-3 Date: 06.07.2017 Let G = (V, E) be an undirected graph. For a vertex v V, G {v} is

More information

Module 2: Classical Algorithm Design Techniques

Module 2: Classical Algorithm Design Techniques Module 2: Classical Algorithm Design Techniques Dr. Natarajan Meghanathan Associate Professor of Computer Science Jackson State University Jackson, MS 39217 E-mail: natarajan.meghanathan@jsums.edu Module

More information

CS8391-DATA STRUCTURES

CS8391-DATA STRUCTURES ST.JOSEPH COLLEGE OF ENGINEERING DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERI NG CS8391-DATA STRUCTURES QUESTION BANK UNIT I 2MARKS 1.Explain the term data structure. The data structure can be defined

More information

Introduction to Graph Theory

Introduction to Graph Theory Introduction to Graph Theory Tandy Warnow January 20, 2017 Graphs Tandy Warnow Graphs A graph G = (V, E) is an object that contains a vertex set V and an edge set E. We also write V (G) to denote the vertex

More information

looking ahead to see the optimum

looking ahead to see the optimum ! Make choice based on immediate rewards rather than looking ahead to see the optimum! In many cases this is effective as the look ahead variation can require exponential time as the number of possible

More information

Elementary Graph Algorithms. Ref: Chapter 22 of the text by Cormen et al. Representing a graph:

Elementary Graph Algorithms. Ref: Chapter 22 of the text by Cormen et al. Representing a graph: Elementary Graph Algorithms Ref: Chapter 22 of the text by Cormen et al. Representing a graph: Graph G(V, E): V set of nodes (vertices); E set of edges. Notation: n = V and m = E. (Vertices are numbered

More information

CS2 Algorithms and Data Structures Note 10. Depth-First Search and Topological Sorting

CS2 Algorithms and Data Structures Note 10. Depth-First Search and Topological Sorting CS2 Algorithms and Data Structures Note 10 Depth-First Search and Topological Sorting In this lecture, we will analyse the running time of DFS and discuss a few applications. 10.1 A recursive implementation

More information

ROOT: A node which doesn't have a parent. In the above tree. The Root is A.

ROOT: A node which doesn't have a parent. In the above tree. The Root is A. TREE: A tree is a finite set of one or more nodes such that there is a specially designated node called the Root, and zero or more non empty sub trees T 1, T 2...T k, each of whose roots are connected

More information