CS126 Final Exam Review Fall 2007

1 Asymptotic Analysis (Big-O)

Definition. f(n) is O(g(n)) if there exist constants c, n0 > 0 such that

    f(n) <= c * g(n)    for all n >= n0

We have not proved any theorems dealing with Big-O, so whenever we attempt to prove or disprove anything to do with Big-O, we must use the definition. Note the following derivations:

    f(n) <= c * g(n)                     for all n >= n0
    f(n) / g(n) <= c   (for g(n) > 0),   for all n >= n0
    lim (n -> infinity) f(n) / g(n) <= c

Any polynomial a_k n^k + a_(k-1) n^(k-1) + ... + a_1 n + a_0 is O(n^k).

1.1 General Approaches to Big-O Proofs

Never state that a proof is "obvious by the graph." We are looking for mathematical proofs, not plausibility arguments. You must use the definition in any proofs about Big-O.

1.1.1 Proving f(n) is O(g(n))

Prove the inequality by finding a suitable c and n0.

1.1.2 Proving f(n) is not O(g(n))

Always use a proof by contradiction. You can use the limit derivation above: show that the left-hand-side limit is unbounded, so it cannot be bounded by any constant c.

1.1.3 Inequalities that can be stated without proof

    c * log(n) <= n <= n^k <= k^n    for any constants c, k where k > 1 (and n sufficiently large)

Problem 1. Prove 3 sin^2(n*pi) is O(log(n))

Problem 2. Prove n^2 is not O(2n)

Problem 3. Prove (n^2 - 1)^(1/2) is O(3n)

Problem 4. Use the definition of Big-O to prove the following statement: If f(x) is O(h(x)) and g(x) is O(h(x)), then a*f(x) + b*g(x) is O(h(x)) also, for any constants a > 0, b > 0
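As a worked illustration of the technique in 1.1.1 (this example is not one of the assigned problems), here is a sketch of a proof that 5n^2 + 3n is O(n^2):

```latex
% Claim: $5n^2 + 3n$ is $O(n^2)$.
\[
5n^2 + 3n \;\le\; 5n^2 + 3n^2 \;=\; 8n^2 \qquad \text{for all } n \ge 1,
\]
% so taking $c = 8$ and $n_0 = 1$ satisfies the definition:
\[
5n^2 + 3n \;\le\; c \cdot n^2 \quad \text{for all } n \ge n_0.
\]
```

The key move is bounding the lower-order term 3n by 3n^2 (valid once n >= 1), which lets every term be absorbed into a single constant multiple of n^2.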
1.2 Determining Big-O Runtimes of Code

You can always determine the Big-O runtime of a line of code by summing the total number of times that line is run. The Big-O runtime of a method is the maximum runtime over all the lines of code in the method. The line that runs the maximum number of times is typically in the inner-most loop.

When we have nested loops that are independent of each other, we can multiply together the Big-O runtimes of the two loops to determine the number of times the code inside the loops will be run. If the inner loop's runtime depends on the outer loop, then we cannot simply multiply the two runtimes together; we must sum the total number of times that we reach the inside of the inner loop.

Example 1.

    for (int i = 0; i < n*n; i++) {
        for (int j = i; j < n; j++) {
            // (***) Determine the runtime of this line
        }
    }

In this case, it would be a mistake to say that the outer loop has runtime O(n^2) and the inner loop has runtime O(n), so the total runtime is O(n^3). The problem with this reasoning is that the number of times the inner loop runs depends on the iteration of the outer loop: note that we refer to i in the inner loop's initialization. Thus we must sum the total runtime of the line inside the inner loop. This is done as follows:

    Value of i              0        1        ...   n-1   n    ...   n^2 - 1
    Values of j             0..n-1   1..n-1   ...   n-1   --   ...   --
    # of times (***) runs   n        n-1      ...   1     0    ...   0

It would seem, then, that since (***) is not run when i = n, ..., n^2 - 1, the runtime must be

    n + (n-1) + ... + 1 + 0 + ... + 0 = n(n+1)/2

which is O(n^2). However, when i = n, ..., n^2 - 1, j is still assigned to equal i, and the comparison j < n is still made (in the inner for-loop's header). These operations are O(1), and are performed whenever i = n, ..., n^2 - 1, which is n^2 - n times. Thus, the total runtime is
    O(n^2) + (n^2 - n) * O(1) = O(n^2) + O(n^2) = O(n^2)

Some examples to remember (let c be a constant, and assume there is no way to exit the loop early):

    for (int i = 0; i < n;   i++)    runs O(n) times
    for (int i = 0; i < n;   i += c) runs O(n) times
    for (int i = 0; i < n*n; i += c) runs O(n^2) times
    for (int i = 1; i < n;   i *= c) runs O(log(n)) times (for c > 1)
    for (int i = n; i > 1;   i /= c) runs O(log(n)) times (for c > 1)

Note that the multiplicative loop must start at i = 1: starting at i = 0 would never terminate, since 0 * c = 0.

1.3 Code Analysis

Problem 5. Find the run-time complexity on input of size n:

    for (double i = 1; i <= n; i *= 4.0/3.0) {
        System.out.println(i);
    }
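The logarithmic loop counts above can be checked empirically. The sketch below (an illustration, not part of the original review sheet) counts iterations of the multiplicative and divisive loops directly:

```java
// Empirically counting loop iterations to check the Big-O rules above.
public class LoopCounts {
    // Counts iterations of: for (int i = 1; i < n; i = i * c)
    static int multiplicativeCount(int n, int c) {
        int count = 0;
        for (int i = 1; i < n; i = i * c) {
            count++;
        }
        return count;
    }

    // Counts iterations of: for (int i = n; i > 1; i = i / c)
    static int divisiveCount(int n, int c) {
        int count = 0;
        for (int i = n; i > 1; i = i / c) {
            count++;
        }
        return count;
    }

    public static void main(String[] args) {
        // For n = 1024 and c = 2, both loops run log2(1024) = 10 times.
        System.out.println(multiplicativeCount(1024, 2)); // prints 10
        System.out.println(divisiveCount(1024, 2));       // prints 10
    }
}
```

Doubling n adds only one more iteration to each loop, which is exactly the O(log(n)) behaviour claimed above.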
Problem 6. Find the run-time complexity on input of size n:

    for (int i = n; i > 1; i = i - i/2) {
        for (int j = 1; j < Math.pow(n, 2); j = j + j) {
            System.out.println(i);
        }
    }

Problem 7. Find the run-time complexity on input of size n:

    for (int i = 1; i < n; i = i*2) {
        for (int j = 0; j < Math.pow(n, 3); j = j + i) {
            System.out.println(i);
        }
    }

Problem 8. Find the auxiliary space and run-time complexity on input of size n:

    int num1 = 0;
    int num2 = 0;
    int[] array = new int[n];
    for (int i = 1; i <= n; i++) {
        if (i == 1) {
            for (int j = i; j <= n; j = j*3) {
                System.out.println(j);
            }
        } else {
            System.out.println(i);
            num1++;
            num2++;
        }
    }
2 Recursion

2.1 Recursive Definitions

For a recursive definition, we need a base case (or cases) to define the smallest possible instance, and a recursive case (or cases) to define all other possible instances.

Problem 9. Give a recursive definition of an ordered sublist.

Note. <4, <5, <1, <2, <9, <>>>>>> is not ordered. <1, <2, <4, <5, <9, <>>>>>> is ordered.

2.2 Recursive Programs

Problem 10. Provide a postcondition for the following recursive method.

    // pre: q is not null
    // post: ???
    public static void mystery(QueueInterface q) {
        if (q.isEmpty()) return;
        Object temp = q.dequeue();
        mystery(q);
        q.enqueue(temp);
    }
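The base-case / recursive-case structure can be seen in a minimal example (a hypothetical illustration, not one of the assigned problems):

```java
// A minimal illustration of the base-case / recursive-case structure.
public class RecursionDemo {
    // Sums a[i..a.length-1] recursively.
    static int sumFrom(int[] a, int i) {
        if (i == a.length) {                 // base case: empty suffix sums to 0
            return 0;
        }
        return a[i] + sumFrom(a, i + 1);     // recursive case: a strictly smaller instance
    }

    public static void main(String[] args) {
        System.out.println(sumFrom(new int[]{1, 2, 3, 4}, 0)); // prints 10
    }
}
```

Every recursive definition and program follows this shape: the base case handles the smallest instance directly, and the recursive case reduces the problem to a strictly smaller instance so the recursion is guaranteed to terminate.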
3 Table ADT

3.1 Properties

An ADT Table consists of KeyedItems. A KeyedItem consists of a Comparable object and an Object. The Comparable object is the KeyedItem's key, and the Object is the KeyedItem's item, or value. Comparable objects have a compareTo method.

In general, tables cannot be traversed. However, a table may be traversed using an iterator, if one exists or if you know the implementation of the table. In this latter case it is necessary to write an iterator method as part of the table's implementation.

Problem 11. Implement the following method, which merges the contents of two tables into one:

    // pre: t1, t2 != null
    // post: All KeyedItems from t2 that are not already in t1 are added
    //       to t1. t2 remains unchanged.
    public static void mergeTables(TableInterface t1, TableInterface t2) {
        Iterator iterator = t2.getIterator();
        // your code goes here
    }
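The KeyedItem described above might be sketched as follows. This is an illustration only: the class name matches the description, but the constructor and accessor names are assumptions, not necessarily the course's exact code.

```java
// A minimal sketch of a KeyedItem: a Comparable key paired with a value.
// Accessor names (getKey, getValue) are assumed for illustration.
public class KeyedItem {
    private Comparable key;   // the key used for ordering and lookup
    private Object value;     // the associated item

    public KeyedItem(Comparable key, Object value) {
        this.key = key;
        this.value = value;
    }

    public Comparable getKey() { return key; }
    public Object getValue()   { return value; }

    public static void main(String[] args) {
        KeyedItem item = new KeyedItem("apple", 31);
        System.out.println(item.getKey() + " -> " + item.getValue()); // prints "apple -> 31"
    }
}
```

Because the key is Comparable, a table implementation can call key.compareTo(otherKey) to keep its KeyedItems ordered, which is what makes search-tree-based table implementations possible.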
4 Trees and Searching

4.1 Binary Tree ADT

Note. A binary tree's left subtree is empty if leftSubtree().isEmpty() is true, not if leftSubtree() == null. The same applies to the right subtree.

Problem 12. Given a binary tree (BinaryTreeInterface), write a recursive method to count its number of empty subtrees.

4.2 Tree Traversals

It is important to be familiar with the four different types of traversals that were shown in lectures and tutorials.

Problem 13. State the level-order and the pre-order traversal for the following tree:

Problem 14. Using the following post- and in-order traversals, construct the binary tree that they represent.

    In-order traversal:   1 2 4 8 16 32 64 128
    Post-order traversal: 1 4 2 64 32 16 128 8
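Two of the traversal orders can be illustrated on a small hand-built tree. The Node class below is a hypothetical stand-in for the course's BinaryTreeInterface, used only so the example is self-contained:

```java
// Pre-order and in-order traversals on a small hand-built binary tree.
public class TraversalDemo {
    static class Node {
        int value;
        Node left, right;
        Node(int value, Node left, Node right) {
            this.value = value; this.left = left; this.right = right;
        }
    }

    // Pre-order: visit the node, then the left subtree, then the right subtree.
    static String preOrder(Node t) {
        if (t == null) return "";
        return t.value + " " + preOrder(t.left) + preOrder(t.right);
    }

    // In-order: visit the left subtree, then the node, then the right subtree.
    static String inOrder(Node t) {
        if (t == null) return "";
        return inOrder(t.left) + t.value + " " + inOrder(t.right);
    }

    // Builds:    2
    //           / \
    //          1   3
    static Node buildSample() {
        return new Node(2, new Node(1, null, null), new Node(3, null, null));
    }

    public static void main(String[] args) {
        System.out.println(preOrder(buildSample()).trim()); // prints "2 1 3"
        System.out.println(inOrder(buildSample()).trim());  // prints "1 2 3"
    }
}
```

Note that the two methods differ only in where the node's own value is emitted relative to the recursive calls; post-order (value last) follows the same pattern.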
4.3 Binary Search

Suppose that we want to improve on the binary search algorithm. Instead of repeatedly dividing our array in halves, we will divide it into thirds and call the method recursively on the appropriate third. We will call this method triSearch. Implement the triSearch method below. What is the Big-O runtime for this method? Is it more efficient than binary search?

Problem 15. Implement the following method triSearch:

    // pre: 0 <= lo <= hi <= data.length, key != null, data != null,
    //      data is filled with non-null values
    // post: returns the ideal position of key in data from index lo
    //       to index hi
    public int triSearch(int lo, int hi, Comparable key, Comparable[] data) {
        // your code goes here
    }
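For comparison while working on triSearch, here is a sketch of the standard halving version of binary search (an illustration over a plain int array, not the assigned Comparable-based solution):

```java
// Standard binary search over a sorted int array, for comparison with triSearch.
public class BinarySearchDemo {
    // Returns an index of key in data[lo..hi), or -1 if key is absent.
    static int binarySearch(int[] data, int lo, int hi, int key) {
        if (lo >= hi) return -1;               // empty range: not found
        int mid = lo + (hi - lo) / 2;          // split the range in half
        if (data[mid] == key) return mid;
        if (data[mid] < key) return binarySearch(data, mid + 1, hi, key);
        return binarySearch(data, lo, mid, key);
    }

    public static void main(String[] args) {
        int[] data = {1, 3, 5, 7, 9};
        System.out.println(binarySearch(data, 0, data.length, 7)); // prints 3
        System.out.println(binarySearch(data, 0, data.length, 4)); // prints -1
    }
}
```

Splitting into thirds changes the base of the logarithm: triSearch makes O(log_3(n)) recursive calls instead of O(log_2(n)). Since log_3(n) and log_2(n) differ only by a constant factor, both methods are O(log(n)), so triSearch is not asymptotically more efficient.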
4.4 Binary Search Trees

Recall that Binary Search Trees are special Binary Trees such that for every subtree, the following property holds: every key in the left subtree is less than the key at the root, and every key in the right subtree is greater than the key at the root.

Consider the following Binary Search Tree:

Problem 16. Show the tree after 5, 15, 20 and 25 are inserted.

Problem 17. Show the tree after 8 and 10 are removed from the resulting tree above.

5 Sorting

5.1 Selection Sort

Suppose we have the following array of ints:

Problem 18. What will the array look like after one pass of Selection Sort, if we choose to select the smallest item each time instead of the largest?

Problem 19. After 2 passes?

Problem 20. After 6 passes?

Problem 21. What is the best-case run-time for Selection Sort?

Problem 22. What is the worst-case run-time?
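The smallest-item-first variant of Selection Sort asked about in Problems 18-20 can be sketched as follows (an illustration on a made-up array, since the original array is not shown here):

```java
// Selection Sort, selecting the smallest remaining item on each pass.
import java.util.Arrays;

public class SelectionSortDemo {
    static void selectionSort(int[] a) {
        for (int i = 0; i < a.length - 1; i++) {
            int min = i;                          // index of smallest item so far
            for (int j = i + 1; j < a.length; j++) {
                if (a[j] < a[min]) min = j;       // scan the unsorted suffix
            }
            int tmp = a[i]; a[i] = a[min]; a[min] = tmp; // one pass = one swap
        }
    }

    // Convenience helper: returns a sorted copy, leaving the input unchanged.
    static int[] sorted(int[] a) {
        int[] b = a.clone();
        selectionSort(b);
        return b;
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(sorted(new int[]{5, 2, 9, 1}))); // prints [1, 2, 5, 9]
    }
}
```

Each pass scans the entire unsorted suffix regardless of the input's order, which is why Selection Sort performs the same number of comparisons in every case.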
5.2 Insertion Sort

Problem 23. Give an array of length n which will cause Insertion Sort to run in O(n) time, assuming that elements are to be sorted in ascending order.

Problem 24. Trace the Insertion Sort algorithm on the following array of length 4, showing the state of the array after each insertion. Also count the number of positions of the array that are considered sorted.

    Iteration | a[0] a[1] a[2] a[3] | Amount Sorted
    ----------+---------------------+--------------
        0     |  3    2    4    1   |      1
        1     |                     |
        2     |                     |
        3     |                     |

5.3 Mergesort and Quicksort

Consider the following input array:

Problem 25. Using Mergesort: what does the array look like just before the final call to merge?

Problem 26. Using Quicksort: what does the array look like just after the first call to partition, using the first element as the pivot?

5.3.1 Efficiency

We know that Mergesort and Quicksort both run in O(n log n) time in the best/average case. Assume that Quicksort is implemented such that the pivot element is the first element in the array.

Problem 27. When would Quicksort be preferred over Mergesort?

Problem 28. When would Mergesort be preferred over Quicksort?

5.4 Sorting Demos

See the links for the last tutorial for the PowerPoint slide demonstrations of the different sorting algorithms.
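For tracing Problem 24 by hand, it helps to have the Insertion Sort loop in front of you. The sketch below is an illustration (not the course's exact implementation):

```java
// Insertion Sort: grows a sorted prefix one item per iteration.
import java.util.Arrays;

public class InsertionSortDemo {
    static void insertionSort(int[] a) {
        for (int i = 1; i < a.length; i++) {
            int key = a[i];                 // next item to insert
            int j = i - 1;
            while (j >= 0 && a[j] > key) {  // shift larger items one slot right
                a[j + 1] = a[j];
                j--;
            }
            a[j + 1] = key;                 // drop key into its sorted position
        }
    }

    // Convenience helper: returns a sorted copy, leaving the input unchanged.
    static int[] sorted(int[] a) {
        int[] b = a.clone();
        insertionSort(b);
        return b;
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(sorted(new int[]{3, 2, 4, 1}))); // prints [1, 2, 3, 4]
    }
}
```

After iteration i, the prefix a[0..i] is sorted, which is the "Amount Sorted" column in the Problem 24 trace. Note also that the inner while loop does no shifting when each new key is already in place, which is the situation Problem 23 asks you to construct.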