⟨pre-cond⟩ & code_alg ⇒ ⟨post-cond⟩
Part I: Iterative Algorithms & Loop Invariants

Chapter 1: Measures of Progress and Loop Invariants
Section 1.1: The Steps to Develop an Iterative Algorithm

1. Loop Invariants

(a) What is a loop invariant? A picture of what must be true at the top of the loop.

(b) What is the formal statement that an algorithm is correct for a problem? For every legal input instance, the required output is produced. If the input instance does not meet the preconditions, then all bets are off. Formally, we express this as

⟨pre-cond⟩ & code_alg ⇒ ⟨post-cond⟩

This correctness is only with respect to these specifications.

(c) What are the formal mathematical statements involving loop invariants that must be proved in order to prove that if your program exits then it establishes the post condition?

i. ⟨pre-cond⟩ & code_pre-loop ⇒ ⟨loop-invariant⟩
ii. ⟨loop-invariant⟩ & not ⟨exit-cond⟩ & code_loop ⇒ ⟨loop-invariant⟩
iii. ⟨loop-invariant⟩ & ⟨exit-cond⟩ & code_post-loop ⇒ ⟨post-cond⟩

(d) What is the loop invariant for the GCD algorithm? On input a, b, the algorithm maintains two variables x and y such that GCD(x, y) = GCD(a, b).

(e) What is the loop invariant for a dynamic programming algorithm?

(f) What is the loop invariant for binary search? If the thing being searched for is anywhere, then it is in this narrowed sublist.

(g) What is the loop invariant for insertion sort?

2. Loop Invariants: In high school you would have learned the algorithm Gaussian Elimination for solving systems of linear equations and for inverting a matrix. Here we are to think of this algorithm as an iterative algorithm with a loop invariant. (Don't panic. No linear algebra or memory of the algorithm is needed.) The input is an n × n matrix M that is invertible and the output is its inverse N, i.e. the n × n matrix such that M · N = I. Here I is the identity matrix, i.e. that for which M · I = M. As a huge hint, the loop invariant is that the algorithm maintains matrices M′ and N for which M′ = N · M.
(I recall drawing these two matrices next to each other with a vertical line between them.)

(a) Initializing:
i. For the iterative algorithm meta-algorithm, what is the formal statement that needs to be proved about the code before the loop? You must prove that the initial code establishes the loop invariant. The formal statement that must be true is

⟨pre-cond⟩ & code_pre-loop ⇒ ⟨loop-invariant⟩

ii. Give the code that goes before the loop for this matrix inverting algorithm. M′ = M and N = I.
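As a concrete sketch of this plan (not from the original notes), the following Python code row-reduces a working copy M′ of M toward the identity while applying every row operation to N as well, and asserts the hinted invariant, with primes restored: N · M = M′. Exact Fractions are used so the invariant check is exact; the function name `invert` and the pivoting details are illustrative assumptions.

```python
from fractions import Fraction

def invert(M):
    """Gaussian elimination as an iterative algorithm with the loop invariant
    N * M == Mp, where Mp (the working copy M') is row-reduced toward I and
    every row operation applied to Mp is also applied to N (which starts as I)."""
    n = len(M)
    M = [[Fraction(x) for x in row] for row in M]
    Mp = [row[:] for row in M]                                         # M' starts as M
    N = [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]  # N starts as I

    def mul(A, B):
        return [[sum(A[i][k] * B[k][j] for k in range(n))
                 for j in range(n)] for i in range(n)]

    for col in range(n):
        piv = next(r for r in range(col, n) if Mp[r][col] != 0)  # invertible => exists
        Mp[col], Mp[piv] = Mp[piv], Mp[col]                      # row swap on M' ...
        N[col], N[piv] = N[piv], N[col]                          # ... and the same on N
        f = Mp[col][col]
        Mp[col] = [x / f for x in Mp[col]]                       # scale pivot row to 1
        N[col] = [x / f for x in N[col]]
        for r in range(n):
            if r != col and Mp[r][col] != 0:
                c = Mp[r][col]                  # subtract c times the pivot row
                Mp[r] = [a - c * b for a, b in zip(Mp[r], Mp[col])]
                N[r] = [a - c * b for a, b in zip(N[r], N[col])]
        assert mul(N, M) == Mp                                   # the loop invariant
    return N                                       # exit: M' = I, so N * M = I
```

On exit M′ = I, so the invariant gives N · M = I, i.e. the returned N is the inverse of M.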

iii. Prove this formal statement for this matrix inverting algorithm. The loop invariant M′ = N · M follows trivially because M′ = M = I · M = N · M.

(b) The main loop:
i. For the meta-algorithm, what are the two things required of each iteration? Make progress and maintain the loop invariant.
ii. For the iterative algorithm meta-algorithm, what is the formal statement that needs to be proved about the code within the loop? You must maintain the loop invariant. The formal statement is

⟨loop-invariant⟩ & not ⟨exit-cond⟩ & code_loop ⇒ ⟨loop-invariant⟩

iii. Give a few sentences giving the intuition about what the code within the loop for this matrix inverting algorithm must accomplish and what needs to be proved about it. You do not need to specify any details of the linear algebra. The code in the loop must take two matrices M′ and N and produce two new matrices M′′ and N′. We must prove that if the loop invariant M′ = N · M is true for M′ and N, then it is true for M′′ and N′, namely M′′ = N′ · M. You do not need to get this, but M′′ is produced from M′ by adding a multiple of one row to another row in such a way that M′′ has one more zero element than M′. N′ is produced by doing the same row operation on N. Also, progress must be made. You do not need to get this, but the measure of progress will be the number of zero elements of M′. This number will go up by at least one each iteration.

(c) Ending:
i. For the meta-algorithm, what is the formal statement that needs to be proved about the code after the loop? In this step, you must ensure that once the loop has exited you will be able to solve the problem.

⟨loop-invariant⟩ & ⟨exit-cond⟩ & code_post-loop ⇒ ⟨post-cond⟩

ii. What should the exit condition be for this matrix inverting algorithm? Exit when M′ = I (or when M′ has an all-zero row).
iii. Give the code that goes after the loop for this matrix inverting algorithm. Set N′ = N and return it.
iv. Prove this formal statement for this matrix inverting algorithm.
By the loop invariant M′ = N · M, by the exit condition M′ = I, and by the post-loop code N′ = N, it follows that N′ · M = N · M = M′ = I. Hence, as required, the inverse N′ of M is returned.

Section 1.2: More About The Steps
Section 1.3: Different Types of Iterative Algorithms
Section 1.4: Typical Errors
End of Chapter Questions

3. Sums

algorithm Eg(n)
⟨pre-cond⟩: n is an integer.
⟨post-cond⟩: Prints Hi's.
begin
  s = 0
  i = 0
  loop
    ⟨loop-invariant⟩
    exit when s ≥ n
    i = i + 1
    s = s + 1/i
    put "Hi"
  end loop
end algorithm

(a) Give a good loop invariant. s = Σ_{j=1}^{i} 1/j. This is initially true with s = i = 0, and it is maintained because i′ = i + 1 and s′ = s + 1/i′ = [Σ_{j=1}^{i} 1/j] + 1/i′ = Σ_{j=1}^{i′} 1/j.

(b) How is the value of s not a reasonable measure of progress for Eg? How is it reasonable? If you can think of one, what would be a better measure of progress? Officially, we said that the measure of progress should change by at least one each step. This one increases by only 1/i. The increasing itself is not a problem. Increasing by 1/2^i would be a problem, but increasing by 1/i turns out not to be. We will in fact use s to prove the program halts. If you want an officially good measure of progress, note that because s ≈ log_e i, we have i ≈ e^s, and hence M(i) = e^n − e^s = e^n − i decreases from e^n to 0 by one each iteration.

(c) Give the theta of the running time of Eg. The size of the instance is S = log n, so n = 2^S. When the program exits, s = Σ_{j=1}^{i} 1/j = log_e i + Θ(1) ≥ n. Hence, Time = i = e^{n±Θ(1)} = e^{2^S±Θ(1)} = 2^{2^{Θ(S)}}, which is double exponential in the size of the input.

4. (25 marks) The Carnival Coin Game: You are running a game booth at your local village carnival. In your game you lay out an array of $1 coins on a table. For a $5 charge, anyone can challenge you to a game. In this game, you and your customer alternately pick coins from the table, either 1 or 2 at a time. If your customer can make you pick up the last coin, he walks away with all the coins. You graciously allow your customer to go first. Being an enterprising sort, you want to arrange the game so that you will always win. To do this, you write an iterative algorithm CarniCoin(n) that tells you how many coins to pick up on every turn. The algorithm takes one parameter n, the number of coins on the table at the beginning of the game.
The algorithm is:

while there is more than one coin left on the table
  The customer picks up 1 or 2 coins.
  You do the opposite: if the customer picked up 1, you pick up 2, and vice versa.
end while

(a) (5 marks) What is the postcondition for your algorithm? One coin remains on the table, and it is the customer's turn.

(b) (5 marks) What is the loop invariant for your algorithm?

The number i of coins on the table satisfies i mod 3 = 1.

(c) (5 marks) What precondition must you place on n in order to guarantee that you will win every time? n > 0 and n mod 3 = 1.

(d) (10 marks) Prove the correctness of your algorithm. Be sure to explicitly include all steps required to prove correctness. The algorithm is

i = n
while i > 1
  i = i − 3
end while

The loop invariant is maintained because 3 coins are picked up on each iteration, so the number of coins on the table, mod 3, is maintained. The loop invariant is true on entry by virtue of the precondition. As a measure of progress, we can use the number of coins remaining on the table, which, as shown above, decreases by 3 on every iteration and will therefore meet the exit condition in finite time. On exit, i ≤ 1 and i mod 3 = 1. Since the loop only iterates when i > 1 and i decreases by 3, on exit i > 1 − 3 = −2. Thus i = 1 and the postcondition is satisfied.

Chapter 2: Examples Using More of The Input Loop Invariant
Section 2.1: Colouring the Plane
Section 2.2: Deterministic Finite Automaton
Section 2.3: More of the Input vs More of the Output

5. Loop Invariants (32 marks) Longest Contiguous Increasing Subsequence (LCIS): The input consists of a sequence A[1..n] of integers and we want to find a longest contiguous subsequence A[k1..k2] such that the elements are strictly increasing. For example, the optimal solution for [5, 3, 1, 3, 7, 7, 9, 8] is [1, 3, 7].

(a) (5 marks) Provide a tight lower bound on the running time for this problem. Prove that your answer is indeed a tight lower bound. The problem takes at least Ω(n) time because the input needs to be read. Let A be an arbitrary algorithm. Give it the input consisting of all zeros. If it does not read some integer, change this integer to a 1 (if it is the first integer, change it to −1). The algorithm answers the same longest length in both cases; however, in the first case the longest increasing subsequence has length one and in the second, two. Therefore, it is wrong in at least one of these cases.
(b) (5 marks) Specify an appropriate loop-invariant, measure of progress and exit condition for an iterative solution to this problem. No explanations are necessary.

LI: We maintain an overall optimal LCIS found so far and the LCIS that we are currently working on: A[k1..k2] is an LCIS of A[1..i], and A[p..i] is the LCIS of A[1..i] that ends at i. Note the first LI is More of the Input, namely if we consider the prefix A[1..i] to be the entire input then we have the solution. Most people missed the second one, but it is needed if the LI is to be maintained.

Measure of Progress: the number i of elements read. Many gave that i increases. This is progress, but not a measure.

Exit Condition: When all the elements have been read, because i = n.

(c) (10 marks) Provide concise pseudo-code for an algorithm LCIS(A, n) that returns the indices k1, k2 of the LCIS of A[1..n] and uses the loop-invariant, measure of progress and exit condition you specified above. Assume n ≥ 1.

I thought more about why LIs are important to me. It is a life philosophy. It is about feeling grounded. Most of the code I marked today makes me feel ungrounded. It cycles, but I don't know what the variables mean, how they fit together, where the algorithm is going, or how to start thinking about it. Loop invariants are about starting my day at home, where I know what is true and what things mean. Loop invariants are also about returning full circle back to my safe home at the end of my day.

algorithm [k1, k2] = LCIS(A, n)
⟨pre-cond⟩: A[1..n] is an array of integers.
⟨post-cond⟩: A[k1..k2] is an LCIS of A[1..n].
begin
  k1 = k2 = p = i = 1
  loop
    ⟨loop-invariant⟩: A[k1..k2] is an LCIS of A[1..i] and A[p..i] is the LCIS of A[1..i] that ends at i.
    exit when i = n
    i = i + 1
    if( A[i−1] ≥ A[i] ) then
      p = i
    end if
    if( (i − p) > (k2 − k1) ) then
      k1 = p
      k2 = i
    end if
  end loop
  return [k1, k2]
end algorithm

(d) (6 marks) Provide an informal argument for how your code is able to maintain the loop invariant. A formal proof is not required.

⟨loop-invariant⟩ & not ⟨exit-cond⟩ & code_loop ⇒ ⟨loop-invariant⟩

Suppose at the beginning of an iteration the LI is true for the current values, i.e. A[k1..k2] is an LCIS of A[1..i] and A[p..i] is the LCIS of A[1..i] that ends at i. The first block of code either extends the LCIS that we are currently working on from A[p..i] to A[p..i+1] or shrinks it to contain only the one element A[i+1], depending on whether or not A[i+1] is bigger than A[i]. This is achieved by moving its end point from i to i′ = i + 1 and either leaving its beginning at p′ = p or moving it to p′ = i′. Either way, the LI is maintained that A[p′..i′] is the LCIS of A[1..i′] that ends at i′.
The second block of code updates the overall LCIS found so far, simply by moving it to the current one if the current one has grown to be larger. Either way, the LI is maintained that A[k1′..k2′] is an LCIS of A[1..i′].

(e) (6 marks) What are the other key steps in proving your algorithm correct? List and provide a concise proof for each.

Establishing the LI: ⟨pre-cond⟩ & code_pre-loop ⇒ ⟨loop-invariant⟩. On entry, k1 = k2 = p = i = 1. Thus the first part of the loop-invariant requires that A[1] be an LCIS of A[1], which is trivially true. The second part of the loop invariant requires that A[1] is the LCIS of A[1] that ends at location 1, which is also trivially true.

Making Progress: i increments by 1 on each iteration. The program exits when i = n.

Establishing the Post-Condition: ⟨loop-invariant⟩ & ⟨exit-cond⟩ & code_post-loop ⇒ ⟨post-cond⟩. When i = n, the first LI gives that A[k1..k2] is an LCIS of A[1..n], which is the post condition.
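The pseudo-code above transcribes almost line for line into Python; this is a sketch (not from the original notes) that keeps 1-based indices so the code mirrors the invariant:

```python
def lcis(A):
    """Return 1-based indices (k1, k2) such that A[k1..k2] is a longest
    contiguous strictly increasing subsequence of A (len(A) >= 1).
    Loop invariant: A[k1..k2] is an LCIS of A[1..i], and A[p..i] is the
    LCIS of A[1..i] that ends at position i."""
    n = len(A)
    k1 = k2 = p = i = 1
    while i < n:                    # exit condition: i = n
        i += 1
        if A[i - 2] >= A[i - 1]:    # A[i-1] >= A[i] in 1-based terms: run is broken
            p = i
        if i - p > k2 - k1:         # the current run is the longest seen so far
            k1, k2 = p, i
    return k1, k2
```

For the example above, `lcis([5, 3, 1, 3, 7, 7, 9, 8])` returns (3, 5), the positions of [1, 3, 7].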

6. Suppose you start with the number 1 and it costs you one to add two numbers together that you already have. (Numbers can be reused.) Once you have produced a number, you can use it many times. Given an integer k as input, design an algorithm to produce k as cheaply as possible. What does it cost? The space requirement is how many numbers you need to keep at any given time. Try to get an algorithm with minimum space. Hint 1: Think of the input k as an n = log2(k) bit binary string. (Consider reading it forward and backward.) Hint 2: Try More of the Input, More of the Output, and Narrowing the Search Space. One of these three works fairly easily. (10 bonus points if you can do it in time T(k) = n + Θ(n / log n).)

You definitely don't want to produce 1, 2, 3, 4, ..., k by adding 1 each time; this would cost k − 1 ≈ 2^n. The More of the Input technique has the loop invariant state that if we consider the first i bits of the input instance to be the entire input instance, then for this we have a solution. As an example, suppose i = 4. Then the prefix consists of the first four bits of k; call its value k′. The loop invariant says that somehow we have produced that value. Each step of this algorithm is to extend the prefix by one bit and then to produce the solution for the longer instance from the solution for the shorter instance. If the next bit is a zero, then including it doubles the value of the instance, namely k′′ = 2k′ = k′ + k′. The loop invariant says that we have produced k′. We can produce k′′ simply by adding this k′ to itself. On the other hand, if the next bit is a 1, then the longer instance has value k′′ = k′ + k′ + 1. Given that we have k′ and we keep 1 around, we can easily produce k′′. This algorithm does at most two additions for each bit of k. The space is only two because it only needs to keep 1 and the current k′. This same algorithm could be thought of recursively.
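This More of the Input algorithm can be sketched in a few lines of Python (an illustration, not part of the original notes), scanning the bits of k from high to low while keeping only the number 1 and the current prefix value, and counting additions:

```python
def build(k):
    """Produce k from 1, where each addition of two numbers already on hand
    costs one. Returns (value produced, number of additions used)."""
    have = 1                      # the value of the 1-bit prefix of k
    cost = 0
    for bit in bin(k)[3:]:        # remaining bits of k, high to low
        have += have              # extend the prefix by a 0: double (one addition)
        cost += 1
        if bit == '1':
            have += 1             # the bit was a 1: add the 1 we kept around
            cost += 1
    return have, cost
```

For every k ≥ 1 this uses at most two additions per bit after the leading 1, matching the 2n bound discussed below.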
If k is even, then recursively produce k/2 and then add this to itself to give k. If k is odd, then recursively produce ⌊k/2⌋ and then compute k = ⌊k/2⌋ + ⌊k/2⌋ + 1.

Another useful technique is to simply try to build big numbers as quickly as possible. From 1, produce 2, 4, 8, ... by adding the last number to itself. Then, to produce an arbitrary k, look at the value of k in binary. For example, k = 180 is 10110100 in binary because 180 = 128 + 32 + 16 + 4. This tells us how to combine the powers of 2 that we have produced to give us k. The time of this algorithm is still at most two times the number of bits in k. However, it requires log k space.

As said, these algorithms use the same amount of time, namely T(n) = 2 log2 k = 2n. No algorithm can do better than n = log2 k additions, because no algorithm can more than double the largest value, and hence after i iterations the largest value is at most 2^i. If you care about the extra factor of 2 between the upper and the lower bound, let's try a little harder. Let b be a parameter that we will later set to (1/2) log n. The algorithm first produces 1, 2, 3, 4, ..., 2^b by adding 1 each time. This takes time 2^b. All of the intermediate values are kept. Note that, thinking of each of these values as a binary string, we have produced every possible b-bit string. The next stage of the algorithm breaks the string of n bits representing k into n/b blocks of length b. The algorithm is the same as the More of the Input algorithm given above, except that each iteration considers not just the next bit but the next block. The loop invariant says that we have produced the value k′ formed from the first i blocks of the instance. To produce the value k′′ formed from the first i + 1 blocks, we must first multiply k′ by 2^b to append b zeros and then add on the next block. Multiplying by two just requires adding the number to itself. Hence, multiplying by 2^b just requires repeating this b times. Adding on the next block just requires one more addition.
This is because in the first stage of the algorithm we produced every possible b-bit string; we need only add on the correct one. There are n/b iterations in this second stage of the algorithm and each iteration requires b + 1 additions. The total time of the algorithm is T(n) = 2^b + (n/b)(b + 1) = n + n/b + 2^b. Setting b = (1/2) log n gives T(n) = n + 2n/log n + √n = n + Θ(n / log n).

7. Let G = (V, E) be a directed weighted graph with possibly negative weights, and k ≤ n some integer. Find an algorithm that finds, for each pair of nodes u, v ∈ V, a path from u to v with the smallest total

weight from amongst those paths that contain exactly k edges. Here a path may visit a node more than once. Do the best you can, but ideally the running time is O(n³ log n). A hint is that a smallest weighted path with i + j edges must be composed of a smallest weighted path with i edges and one with j edges.

In order to solve the problem on input instance k, the More of the Input technique suggests solving it first for smaller integers. Suppose for each pair of nodes u, v ∈ V we already have that w_{u,v,i} is the weight of the smallest weighted path from u to v containing exactly i edges, and that w_{u,v,j} is the same for those containing exactly j edges. Then a smallest weighted path from u to v containing exactly i + j edges must be composed of a smallest weighted path with i edges from u to some node b and then a smallest weighted path with j edges from b on to v. We need only try all nodes b and see which one works best. This gives the recurrence w_{u,v,i+j} = min_b (w_{u,b,i} + w_{b,v,j}). Doing this for each pair u and v takes O(n³) time. We can then build up paths with k edges in the same way as done in the previous exercise, via repeated doubling over the bits of k. The total time is then O(n³ log n).

Chapter 3: Abstract Data Types
Section 3.1: Specifications and Hints at Implementations
Section 3.2: Link List Implementation
Section 3.3: Merging With a Queue
Section 3.4: Parsing With a Stack

Chapter 4: Narrowing the Search Space: Binary Search
Section 4.1: Binary Search Trees
Section 4.2: Magic Sevens
End of Chapter Questions

8. (For the Engineers, By Jeff) Imagine a wire that over time carries a 0/1 signal. A front-end processor has sampled it at a rate of 1 GHz (i.e. every nanosecond) and given you an array A of length n. A[i] = 0 if there is a 0 on the wire at time i nsecs and A[i] = 1 if there is a 1 on the wire at that time. A transition (or "edge") is defined to be a place in time where the signal transitions from one value to another. This may be of positive polarity, i.e. from 0 to 1, or of negative polarity, meaning from 1 to 0. The time of the transition is the average of these two indexes. For example, if A[7] = 0 and A[8] = 1, then we time-stamp this transition with the time t = 7.5 nsecs. Our goal, given such an array A, is to find the time-stamp of one such transition (if it exists). If there are many such transitions in A, then we can return any one of them.

(a) Give a lower bound argument showing that any algorithm that solves this problem, on the worst case input, requires n time to find the answer. Imagine this as a game in which some adversary gives you an algorithm and you must find an input array A for which this algorithm queries A n times before finding a transition. Your solution will describe how you will use the actions of the algorithm to produce the input. (We did this in class to prove that adding two n-bit numbers requires Ω(n) time.) Be clear how you handle the following circular argument: you don't know what input instance to give the algorithm until you know what the algorithm does, but you don't know what the algorithm does until you give it an input instance and run it.

The input instance we consider will either be all zeros or have exactly one one. We run the algorithm without actually giving it a fixed input instance. Each time the algorithm queries A at some index i, we respond by telling it that A[i] = 0. With this information, the algorithm can continue on just as if it did have an actual input instance. If the algorithm stops before reading each array location, then it can't know whether or not there is a transition, because we will either give it the all-zero instance or the instance with a one in one of the locations it has not read.

(b) Now let us add the additional constraint that at the beginning and end of the time interval the wire has a different signal, i.e. A[1] ≠ A[n]. Note that this assumption ensures that there is at least one transition, though there could still be many transitions. Design a sublinear time recursive algorithm which, given the 0/1 array A, returns the time t of a transition. Be very clear about your pre and post conditions and that you are giving your friend a smaller instance that meets the precondition. Though you must be clear what the algorithm is, actual code is not needed. Give and solve the recurrence relation for its running time.

The precondition is that we are given an array A and two indexes p and q for which A[p] ≠ A[q]. We query A[m] for m = ⌊(p + q)/2⌋. Then we recurse on A[p..m] if A[p] ≠ A[m], and recurse on A[m..q] if A[m] ≠ A[q]. By the precondition on our instance, one of these must be true (otherwise A[p] = A[m] and A[m] = A[q], giving A[p] = A[q]). When q = p + 1, we have a transition A[p] ≠ A[p + 1] at t = p + 0.5. The recurrence relation for the running time is T(n) = T(n/2) + 1 = Θ(log n). It is not needed, but the master theorem can be used as follows. The time for the top level is f(n) = 1 = n^0, hence c = 0. The time for the base cases is Θ(n^{log a / log b}) = Θ(n^{log 1 / log 2}) = Θ(n^0) = 1. Because the top and bottom levels are the same, we know that T(n) = Θ(log(n) · f(n)) = Θ(log n).

Chapter 5: Iterative Sorting Algorithms
Section 5.1: Bucket Sort by Hand
Section 5.2: Counting Sort (A Stable Sort)
Section 5.3: Radix Sort

9. (10 marks) RadixSort sorts d-digit numbers:

loop i = 1 to d
  stable sort cards with respect to the i-th least significant digit.
end loop

At every iteration the following assertion is true: "The input is stable-sorted with respect to the i-th least-significant digit." Is this a good loop-invariant? Why or why not? No, this is not a good loop invariant.
It is true, but it is not enough to know that the input is stable-sorted with respect to JUST the i-th least-significant digit. A good loop invariant here is of the More of the Input type: if you consider each number to be just its i least-significant digits, then the list is sorted stably.

Section 5.4: Radix/Counting Sort

Chapter 6: Euclid's GCD Algorithm

10. (Amortized Analysis) Suppose we have a binary counter such that the cost to increment the counter is equal to the number of bits that need to be flipped. For example, incrementing from 100,111,111 to 101,000,000 costs 7. Consider a sequence of n increments increasing the counter from zero to n. Some of these increments, like that from 101,111,110 to 101,111,111, cost only one. Others, like that from 011...1 = 2^⌈log2 n⌉ − 1 to 100...0, cost O(log n). This is the worst case. Given that we generally focus on the worst case, one would say that operations cost O(log n). It might be more fair, however, to consider the average case. If a sequence of n operations requires at most T(n) time, then we say that each of the operations has amortized cost T(n)/n.

(a) Suppose the counter begins at zero and we increment it n times. Show that the amortized cost per increment is just O(1). Hint: Let T_{x,j} = 1 if incrementing value x requires flipping the j-th bit. Note that Σ_{x=1}^{n} Σ_j T_{x,j} = Σ_j Σ_{x=1}^{n} T_{x,j}.

The j-th bit flips every 2^j iterations, for a total of n/2^j flips. The total cost is T(n) = Σ_{x=1}^{n} Σ_j T_{x,j} = Σ_j Σ_{x=1}^{n} T_{x,j} = Σ_{j=0}^{log n} n/2^j. This sum is geometric and hence is dominated by its largest term. The largest term comes from the first bit, which is flipped each iteration for a total of n times. This gives that T(n) = O(n) and the amortized cost per operation is T(n)/n = O(1).

(b) Suppose that we want to be able to both increment and decrement the counter. Starting with a counter of zero, give a sequence of n operations, where each is either an increment or a decrement operation, that gives the highest amortized cost per operation (within Θ). (Don't have the counter go negative.) Compute this amortized cost.

First increment the counter from zero to 011...1 = 2^⌈log2(n/2)⌉ − 1 ≈ n/2. Then have n/2 operations alternating between incrementing and decrementing, so that the counter alternates between 011...1 and 100...0. Each of these costs about log2(n/2). Hence, the total cost is T(n) = Θ(n/2) + (n/2) log2(n/2) = Θ(n log n), giving an amortized cost of Θ(log n).

(c) To fix the problem from part (b), consider the following redundant ternary number system. A number is represented by a sequence of trits X = x_{n−1} ... x_1 x_0, where each x_i is 0, +1, or −1. The value of the number is Σ_{i=0}^{n−1} x_i · 2^i. For example, X = (1)(0)(1) is 5 because 4 + 0 + 1 = 5. However, X = (1)(1)(−1) is also 5 because 4 + 2 − 1 = 5. Also X = (1)(−1)(0)(−1)(1)(1)(1) = 64 − 32 − 8 + 4 + 2 + 1 = 31. The process of incrementing a ternary number is analogous to that operation on binary numbers. One is added to the low-order trit. If the result is 2, then it is changed to 0 and a carry is propagated to the next trit. This process is repeated until no carry results. For example, X = (1)(−1)(0)(−1)(1)(1)(1) = 31 increments to X + 1 = (1)(−1)(0)(0)(0)(0)(0) = 64 − 32 = 32. Note this is a change of 32 − 31 = 1. Decrementing a number is similar. One is subtracted from the low-order trit. If it becomes −2, then it is replaced by 0 and a borrow is propagated.
For example, X + 1 = (1)(−1)(0)(0)(0)(0)(0) decrements to (1)(−1)(0)(0)(0)(0)(−1) = 64 − 32 − 1 = 31. Note that this increment followed by a decrement resulted in a different representation of the number 31 than the one we started with. The cost of an increment or a decrement is the number of trits that change in the process. Our goal is to prove that for any sequence of n operations, where each is either an increment or a decrement operation, starting with a counter of zero, the amortized cost per operation is at most O(1). Let T(t) be the total number of trit changes that have occurred during the first t operations. Let Z(t) be the number of non-zero trits in our counter after the t operations. For example, if the resulting counter is X = (1)(−1)(0)(−1)(0)(1)(1), then Z(t) = 5. We will define P(t) = T(t) + Z(t) to be our measure of progress, or potential function, at time t. Bound the maximum amount that P(t) can change in any one increment or decrement operation.

We will prove that each operation increases P(t) by at most 2. Consider any one operation. Any trit change that causes a carry increases T(t) by one, because another trit has changed, and decreases Z(t) by one, because this trit changes from a non-zero to a zero trit. The net effect is that P(t) = T(t) + Z(t) does not change at all. Only the last and final trit change does not cause a carry. It increases T(t) by 1 and can increase Z(t) by at most 1, because of a possible change from a zero to a non-zero trit. Hence, the total change in P(t) resulting from one operation is at most 2.

(d) Use induction (or loop invariants) to bound P(t). In doing so, prove that for any sequence of n operations, where each is either an increment or a decrement operation, starting with a counter of zero, the amortized cost per operation is at most O(1).

The induction hypothesis, or loop invariant, is that P(t) ≤ 2t. Note that P(0) = 0 because no changes have been made yet and all trits are zero.
This is maintained because P(t) increases by at most 2 each operation, namely P(t + 1) ≤ P(t) + 2 ≤ 2t + 2 = 2(t + 1). Z(n) is not negative. Hence, T(n) = P(n) − Z(n) ≤ P(n) ≤ 2n and the amortized cost is at most T(n)/n ≤ 2.
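The redundant ternary counter and its trit-change cost can be simulated directly; this Python sketch (my illustration, with assumed names `step` and `total_cost`) lets one check the T(n) ≤ 2n bound from the potential argument on concrete operation sequences:

```python
def step(trits, delta):
    """Apply one increment (delta=+1) or decrement (delta=-1) to the counter.
    trits[0] is the low-order trit. Returns the number of trits changed."""
    changes, i = 0, 0
    while True:
        if i == len(trits):
            trits.append(0)
        trits[i] += delta
        changes += 1
        if trits[i] == 2 * delta:   # reached +2 or -2: reset to 0, carry/borrow
            trits[i] = 0
            i += 1
        else:
            return changes

def total_cost(ops):
    """Total trit changes over a sequence of +1/-1 operations starting from zero."""
    trits = []
    return sum(step(trits, d) for d in ops)
```

Even the alternating sequence that was worst for the plain binary counter in part (b) now costs at most 2 per operation, e.g. `total_cost([1] * 500 + [-1, 1] * 250)` is at most 2000.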

11. Shifted Streams: You receive two streams of data, A[1], A[2], A[3], ... and B[1], B[2], B[3], .... Once a number arrives you cannot request it again. You must output a stream of data containing the values A[1] + B[1], A[2] + B[2], A[3] + B[3], .... Once you output a value you do not need to store it. There are three things that make this more challenging. 1) The B data stream is shifted in time by some amount s. 2) You have only a limited amount of memory M[0], M[1], M[2], M[3], ..., M[m], of which you want to use as little as possible. 3) You have only a limited amount of time between data numbers arriving.

(a) Give the loop invariant. (I give the first point.)
i. The numbers A[1], A[2], A[3], ..., A[t] have arrived. The numbers B[1], B[2], B[3], ..., B[t−s] have arrived.
ii. What values have been outputted? The numbers A[1] + B[1], A[2] + B[2], A[3] + B[3], ..., A[t−s] + B[t−s] have been outputted.
iii. Which values do you want to be remembering in memory? (Use as little memory as possible.) The last s values of A are saved. Specifically, A[t−s+1], A[t−s+2], ..., A[t].
iv. In which memory cell do you save each such value? (You want to move things as little as possible.) The A data stream is stored cyclically in s memory cells so that the last s values are saved. Specifically, for i = t−s+1, ..., t, value A[i] is stored in M[i mod s].

(b) I taught a few types of loop invariants. Which type is this? Why? Include the definition. More of the Input, because if you consider a prefix of the input stream as the input, we have produced the output. Similarly, it could be More of the Output.

(c) What code establishes this loop invariant? For i = 1, ..., s, value A[i] is stored in M[i mod s]. With t = s, the loop invariant is true.

(d) What code maintains this loop invariant? At time t, the values A[t] and B[t−s] arrive. Get the value A[t−s] from memory cell M[(t−s) mod s] = M[t mod s]. Output the value A[t−s] + B[t−s]. Store the value A[t] in M[t mod s].
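Assuming s ≥ 1, the establishing and maintenance steps above can be sketched as a Python generator (an illustration; the name `stream_adder` and the iterator interface are my assumptions):

```python
def stream_adder(A, B, s):
    """A and B are iterators over the two streams; B runs s ticks behind A.
    Yields A[i] + B[i], keeping a cyclic buffer of only s cells for A."""
    M = [None] * s                 # M[i mod s] holds A[i] for the last s values
    t = 0
    for a in A:                    # at time t, A[t] arrives ...
        t += 1
        if t > s:
            b = next(B)            # ... and so does B[t-s]
            yield M[t % s] + b     # M[t % s] still holds A[t-s]
        M[t % s] = a               # overwrite the freed cell with A[t]
```

For example, with s = 3, `list(stream_adder(iter([1, 2, 3, 4, 5, 6, 7]), iter([10, 11, 12, 13]), 3))` yields [11, 13, 15, 17], i.e. A[i] + B[i] for i = 1..4.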
(e) In order to prove that the loop invariant has been maintained, what is the formal statement of what you prove? (No need to prove it.)

⟨loop-invariant⟩ & not ⟨exit-cond⟩ & code_loop ⇒ ⟨loop-invariant⟩

Part II: Recursion

Chapter 7: Abstractions, Techniques, and Theory
Section 7.1: Thinking about Recursion
Section 7.2: Looking Forwards vs Backwards
Section 7.3: With a Little Help from Your Friends
Section 7.4: The Towers of Hanoi
Section 7.5: Check List for Recursive Algorithms
Section 7.6: The Stack Frame
Section 7.7: Proving Correctness with Strong Induction
End of Chapter Questions

12. (15 marks) Recursion: Define a Recursive Integer List (RIL) to be a list of a finite number of objects, where each object is either an integer or is itself an RIL. For example, L = [3, 2, [4, 3], [[3, 9], [], 8], 7] is an RIL. You are allowed the following operations:

(a) If L is an RIL and i is an integer, then L[i] is the i-th object in the list L. For our above example, L[1] = 3, L[2] = 2, L[3] = [4, 3], L[4] = [[3, 9], [], 8], L[5] = 7, and L[6] returns an error. Moreover, L[4][1] = [3, 9] and L[4][1][2] = 9.

(b) Similarly, if L is an RIL, then L[i..j] is the RIL consisting of the i-th through the j-th objects in the list L. For our above example, L[2..4] = [2, [4, 3], [[3, 9], [], 8]]. Moreover, L[2..2] and L[2..1] return the RILs [2] and [].

(c) If L is an RIL, then |L| returns the number of objects in the list L. For our above example, |L| = 5. Moreover, |L[4][2]| = 0.

(d) If Obj is an object, then (Obj == Integer) returns true if the object is an integer and (Obj == RIL) returns true if it is an RIL. In our example, (L[1] == Integer) is true and (L[3] == Integer) is false.

Note that 2 is an integer, while [2] is an RIL containing one object which happens to be the integer 2. Your task is to take as input an RIL L and return the sum of all integers anywhere in L. In our example, Sum(L) should return 3 + 2 + 4 + 3 + 3 + 9 + 8 + 7 = 39. You may use a loop if you really must, but it is better if you don't.

algorithm Sum(L)
⟨pre-cond⟩: L is an RIL.
⟨post-cond⟩: Returns the sum of all integers in L.
begin
  if( |L| = 0 ) return( 0 )
  else
    if( L[1] == Integer ) then first = L[1]
    else first = Sum(L[1])
    end if
    return( first + Sum(L[2..|L|]) )
  end if
end algorithm

algorithm Sum(Obj)
⟨pre-cond⟩: Obj is either an integer or is an RIL.
⟨post-cond⟩: Returns the sum of all integers in Obj.
begin
  if( Obj == Integer ) return( Obj )
  else
    L = Obj
    sum = 0
    loop i = 1 ... |L|
      sum = sum + Sum(L[i])
    end loop
    return( sum )
  end if
end algorithm

algorithm Sum(L)
⟨pre-cond⟩: L is an RIL.
post cond : Returns the sum of all integers in L begin sum = 0 loop i = 1... L if( L[i] == Integer ) sum = sum + L[i] else sum = sum + Sum(L[i]) endif end loop return(sum) end algorithm 11
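The third version above translates almost line for line into runnable Python. This sketch (mine, not the text's) represents an RIL as a Python list whose elements are ints or nested lists, and uses 0-indexed iteration in place of the 1-indexed RIL operations:

```python
def ril_sum(L):
    # <pre-cond>: L is an RIL, represented as a Python list whose
    #             elements are ints or nested lists.
    # <post-cond>: returns the sum of all integers anywhere in L.
    total = 0
    for obj in L:
        if isinstance(obj, int):
            total += obj
        else:
            total += ril_sum(obj)  # a friend handles the smaller RIL
    return total
```

On the example RIL from the question, `ril_sum([3, 2, [4, 3], [[3, 9], [], 8], 7])` returns 39.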

13. (13 marks) Recursion

algorithm A(a, b)
  pre-cond: a and b are integers.
  post-cond: Returns ??
begin
  if( a = b ) then return( 0 )
  elseif( a < b ) then return( −A(b, a) )
  else return( 1 + A(a − 1, b) )
  end if
end algorithm

(a) Circle the output of this program.
  i. a + b   ii. a − b   iii. max(a, b)   iv. Does not halt.
Answer: ii. a − b.

(b) Prove either that this program halts or that it does not.
Consider the size of the input to be a − b if a ≥ b, and b − a + 1 otherwise. Every friend gets a smaller instance, and if the size is zero then a base case has been reached.

(c) If the program halts, prove that its output is what you claim.
Consider instance ⟨a, b⟩. By way of strong induction, assume that A returns a − b for every smaller instance. If a = b, then the result is 0, which is a − b. If a < b, then the result is −A(b, a) = −[b − a] = a − b. Otherwise, the result is 1 + A(a − 1, b) = 1 + [(a − 1) − b] = a − b. Either way, our result is a − b.

algorithm A(a, b)
  pre-cond: a and b are integers.
  post-cond: Returns ??
begin
  if( a = b ) then return( 0 )
  elseif( a < b ) then return( −A(b, a) )
  else return( A(a − 1, b − 1) )
  end if
end algorithm

(a) Circle the output of this program.
  i. a + b   ii. a − b   iii. max(a, b)   iv. Does not halt.
Answer: iv. Does not halt.

(b) Prove either that this program halts or that it does not.
On instance ⟨a, b⟩ = ⟨2, 1⟩, the subinstances will be ⟨1, 0⟩, ⟨0, −1⟩, ⟨−1, −2⟩, ..., so there is an infinite path down the recursion tree.

(c) If the program halts, prove that its output is what you claim. (It does not halt, so there is nothing to prove.)

Chapter: 8: Some Simple Examples of Recursive Algorithms
Section: 8.1: Sorting and Selecting Algorithms
Section: 8.2: Operations on Integers

14. Recursion: Suppose you are given two n-digit numbers (base 10), X = x_1 x_2 ... x_n and Y = y_1 y_2 ... y_n, and you want to multiply them.

(a) In one sentence, explain what the running time of the kindergarten algorithm is and why. It adds X together Y times, and Y can be as large as 10^n.

(b) In one sentence, explain what the running time of the high school algorithm is and why. For each i, j ∈ [n] it computes x_i · y_j. Hence the time is O(n²).
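The n² digit products of the high school algorithm can be made explicit in a short sketch (mine, not the text's; it assumes the digits are given most-significant first, as equal-length Python lists, and returns the product as an integer rather than as a digit list):

```python
def school_multiply(X, Y):
    # High school algorithm: every pair of digits x_i, y_j is multiplied
    # exactly once, so an n-digit by n-digit product costs Theta(n^2)
    # digit multiplications.
    n = len(X)  # assumption: len(X) == len(Y) == n
    total = 0
    for i, x in enumerate(X):          # place value of x is 10^(n-1-i)
        for j, y in enumerate(Y):      # place value of y is 10^(n-1-j)
            total += x * y * 10 ** (n - 1 - i) * 10 ** (n - 1 - j)
    return total
```

For example, `school_multiply([1, 2], [3, 4])` performs the four digit products of 12 × 34 and returns 408.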

(c) In three short sentences, describe the slow recursive algorithm for multiplying. (The one whose time is the same as the high school algorithm's.) Break X and Y into pieces X = X_1 X_2 and Y = Y_1 Y_2. Get four friends to multiply X_1 · Y_1, X_1 · Y_2, X_2 · Y_1, and X_2 · Y_2. Shift their answers and add.

(d) In one sentence, what is the key trick used to reduce the time? Reduce the number of friends from four to three.

(e) Give the recurrence relation for this recursive algorithm. Solve it. T(n) = 3T(n/2) + O(n) = O(n^c), where c = log(a)/log(b) = log(3)/log(2) ≈ 1.58.

(f) Give all the key ideas in thinking abstractly about a recursive program. If your input is sufficiently small, solve it yourself. Otherwise, construct instances for your friends that are smaller than your instance and that meet the preconditions. Without worrying how, assume that they can solve their instances. Combine the solutions to their instances to obtain a solution to your instance.

15. GCD and Finite Fields.

(a) (2 marks) Given integers a and b, what does the generalized GCD algorithm return, and with what post-condition? It returns integers s, t, and g with a · s + b · t = g = gcd(a, b).

(b) (2 marks) How is this used to find the inverse of u mod p, where p is a prime? Let a = u and b = p. This gives u · s + p · t = gcd(u, p) = 1, or u · s = 1 (mod p). Hence s is the inverse.

(c) (2 marks) Why are the integers mod n not a field when n is not prime? If a · b = n, then a · b = 0 (mod n). In a field this should only be true if a or b is zero.

16. Finite Fields.

(a) Find the inverse of 20 in Z_67. To show your work, make a table with columns u, v, r, s, and t and a row for each stack frame. Sorry for not doing the table, but the inverse is −10 = 57 because 20 · (−10) = −200 = −3 · 67 + 1.

(b) Given as input I = ⟨a, b, p⟩, compute a^b mod p. The algorithm is in the notes. Do not copy the algorithm. The section reference (and necessary changes) is sufficient. What is the number of bit operations needed as a function of n = |I| = log₂(a) + log₂(b) + log₂(p)?
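The missing table for question 16(a) can be reproduced mechanically. Here is a sketch (mine, not the notes') of the generalized GCD algorithm, matching the post-condition a · s + b · t = g = gcd(a, b), together with the inverse it yields:

```python
def extended_gcd(a, b):
    # Returns (g, s, t) with a*s + b*t == g == gcd(a, b).
    if b == 0:
        return (a, 1, 0)
    g, s, t = extended_gcd(b, a % b)
    # The friend guarantees b*s + (a mod b)*t = g, and
    # a mod b = a - (a // b) * b, so rearranging gives:
    return (g, t, s - (a // b) * t)

def inverse_mod(u, p):
    # u*s + p*t = 1 implies u*s = 1 (mod p), so s is the inverse.
    g, s, t = extended_gcd(u, p)
    return s % p
```

On the instance from 16(a), `extended_gcd(20, 67)` yields s = −10, and `inverse_mod(20, 67)` returns 57, agreeing with the answer above.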
The algorithm is in Section 8.2, Operations on Integers. It requires Θ(log b) multiplications. All multiplications are mod p and hence require Θ(log p) bit operations, for a total of T(n) = Θ(log b) · Θ(log p) = Θ(n²).

(c) Given as input I = ⟨a, c, p⟩, solve a^b = c (mod p) for b. This is called discrete log. What is the best algorithm that you can come up with in 15 minutes? (Do not cheat and spend more time than this.) What is the number of bit operations needed as a function of n = |I| = log₂(a) + log₂(c) + log₂(p)?
The best known algorithm requires Θ(b) = Θ(2^n) multiplications. The hardness of discrete log is the basis of various cryptographic systems.

Exercise: In the friends level of abstracting recursion, you can give your friend any legal instance that is smaller than yours according to some measure, as long as you solve on your own any instance that is sufficiently small. For which of these algorithms has this been done? If so, what is your measure of the size of the instance? On input instance ⟨n, m⟩, either bound the depth to which the algorithm

recurses as a function of n and m, or prove that there is at least one path down the recursion tree that is infinite.

algorithm R_a(n, m)
  pre-cond: n and m are ints.
  post-cond: Says "Hi".
begin
  if( n ≤ 0 and m ≤ 0 )
    Print("Hi")
  else
    R_a(n − 1, m)
    R_a(n, m − 1)
    R_a(n − 2, m − 2)
  end if
end algorithm

Change the base-case condition in R_a to each of the following.
(b) if( n · m ≤ 0 )
(c) if( n · m ≤ 10 )

(a) This does not halt: following only the first friend, m never gets smaller, so the base case is never reached.

(b) If n = m and they are initially odd, then this does not halt: following only the third friend, n = m = ..., 1, −1, −3, −5, ..., and the product n · m stays positive.

(c) Halts. Size(n, m) = n + m gets smaller by at least one and at most 4 with each friend. When 0 ≤ Size(n, m) ≤ 3, at most one of n and m is negative, so we can't get a negative times a negative giving a large positive product, and n · m is at most 10. Initially the size is n + m, so it will reach 0 in at most n + m steps. Hence this is a bound on the depth. Note the measure is n + m — not n · m, not n, and not m.

Section: 8.3: Ackermann's Function
End of Chapter Questions

17. Quick Sort:
(a) [4 apples] Describe the quick sort algorithm using the friends paradigm. Assume that you have pivot available as a subroutine.
(b) [2 apples] What is the time complexity of the algorithm when the pivot is selected randomly? Give your best approximation of its recurrence relation. What does it evaluate to?

18. You have a bag of n nuts and a bag of n bolts. The nuts and bolts are all of different sizes, but for each nut there is a matching bolt (and vice versa). What you want to do is to match them. Unfortunately, you can't compare a nut to a nut or a bolt to a bolt (your eyes are too weak). Instead, what you can do is to compare a nut to a bolt and see if the bolt is too small, too big, or just right. The problem can be solved by comparing each nut to each bolt, but this would require O(n²) comparisons.
(a) I want you to describe a simple algorithm which uses randomness and recursion.
It should be very similar to Quick Sort, which partitions with a pivot. Be sure to mention all the key requirements of the friends analogy described in class. (Hint: use the fact that every nut matches some bolt.) (Nine sentences.)
Choose a random nut. Compare it to each bolt, partitioning the bolts into those that are smaller, a perfect match, and those that are larger. At least one bolt is a perfect match. Compare this bolt to each nut, partitioning the nuts into those that are smaller and those that are larger. Match this nut and bolt. The nuts and bolts that are smaller than this pair meet the precondition, namely you have a bag of nuts and a bag of bolts and for each nut there is a matching bolt (and

vice versa). This subinstance is also smaller. Hence, get a friend to match them. Similarly for those that are larger.

(b) Explain intuitively what the recurrence relation for the expected running time of this algorithm is. Explain intuitively what the solution of this recurrence relation is.
We expect each of the two subinstances to be about half the size, and we expect the larger of the two to be of size about 3n/4. If this were always the case, the recurrence relation would be T(n) = T(3n/4) + T(n/4) + n. This evaluates to T(n) = Θ(n log n), because there will be Θ(log n) levels of recursion and the total work in each level will be Θ(n).

(c) Explain intuitively what the worst-case running time of this algorithm is and why.
In the worst case, the subinstances are always of size n − 1 and zero. This gives T(n) = T(n − 1) + n = Θ(n²).

19. For many computational problems there is a trade-off between the time and the amount of memory needed. There are algorithms that use minimal time but lots of memory, and algorithms that use minimal memory but lots of time. One way of modeling this is by a game of pebbling the nodes of a directed acyclic graph (DAG). Each node represents a value that needs to be computed. Having a pebble on a node represents the fact that that value is stored in memory. Note that the game has one pebble for each of the memory registers available to the algorithm. Pebbles can be removed from a node and placed elsewhere. This corresponds to replacing the value in a register with a newly computed value. However, there is a restriction. If there are edges from each of the nodes In(v) = {u_1, u_2, ..., u_d} to v, then there needs to be a pebble on each of the nodes u_1, u_2, ..., u_d before you can put a pebble on v. This models the fact that you need the values associated with u_1, u_2, ..., u_d in order to be able to compute the value associated with v. In contrast, a pebble can be placed at any time on a leaf of the DAG, because these nodes have no incoming edges.
These correspond to the easy-to-compute values. The input to the pebbling problem specifies a DAG and one of its nodes. The goal is to place a pebble on this node, i.e. to get a pebble to the top.

[Figure: a node v with in-edges from both children u_1 and u_2 (d = 2); with pebbles on both children, you can place a pebble on the parent v.]

You will be required to solve the problem corresponding to pebbling the following DAG. The DAG in consideration will have a rectangle of nodes, w wide and h high. Here h is much smaller than w, but w is much smaller than 2^h. For each node v (not on the bottom row), there will be d nodes In(v) = {u_1, u_2, ..., u_d} on the previous row with directed edges to v. Here d is some small value. You are to write both an iterative and a recursive algorithm for this problem.

(a) Iterative:
i. Describe an extremely simple iterative algorithm that puts a pebble on the specified node as quickly as possible. The algorithm may use as many pebbles (memory) as needed, but does not redo any work. Being an iterative algorithm, what is the algorithm's loop invariant?
This algorithm starts by placing a pebble on each of the nodes on the bottom row. This can be done because each such node has in-degree zero. The loop invariant is that after r iterations, there is a pebble on each of the nodes on the r-th row from the bottom. The step of the next iteration is to place a pebble on each of the nodes on the (r+1)-st row from the bottom. This can be done because each such node v has pebbles on all of its in-neighbours In(v) = {u_1, u_2, ..., u_d} on the previous row. The step ends by removing the pebbles from the r-th row so that they can be reused. Note that progress has been made while maintaining the loop invariant. The exit condition is when r = h, at which time the task of placing a pebble on the top row has been accomplished.
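This row-by-row iterative strategy can be sketched directly in Python (a sketch of mine, not the text's; the grid representation and the `in_nodes` interface are my assumptions). The `assert` checks the legality rule: a node may be pebbled only when all of its in-neighbours on the previous row hold pebbles.

```python
def pebble_top_row(h, w, in_nodes):
    # in_nodes(r, c) lists the columns on row r-1 that have edges
    # into node (r, c). Rows are counted from 0 at the bottom.
    # Loop invariant: at the top of the loop, every node on row r-1
    # holds a pebble.
    pebbled = {(0, c) for c in range(w)}  # leaves: pebbles are free
    for r in range(1, h):
        for c in range(w):
            # legal move: all in-neighbours on the previous row are pebbled
            assert all((r - 1, u) in pebbled for u in in_nodes(r, c))
            pebbled.add((r, c))
        # reuse row r-1's pebbles, so at most 2w are ever in play
        pebbled -= {(r - 1, c) for c in range(w)}
    return pebbled  # exactly the top row
```

Each of the h·w nodes is pebbled once, and at most two rows of pebbles exist at any moment — matching the O(hw) time and 2w pebble counts given in part ii below.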

ii. How much time do you need for this iterative algorithm? Given that you are able to reuse pebbles, how many do you really need?
The time required is O(hw) and the number of pebbles is 2w.

(b) Recursive:
i. Suppose your task is to place a pebble on some node v which is r rows from the bottom. Describe a recursive backtracking algorithm to accomplish this task that re-does work as necessary, but uses as few pebbles as possible. Being a recursive algorithm, describe this algorithm using the friends paradigm.
My task is to place a pebble on some node v which is r rows from the bottom. For each of the d nodes u_i ∈ In(v) with an edge to v, I ask a friend to place a pebble on node u_i. Once there is a pebble on each node in In(v), I place a pebble on node v.

ii. Let Time(r) and Pebbles(r) be the time and the number of pebbles used by your algorithm to go from there being no pebbles on the DAG to placing a pebble on one node r rows from the bottom. Give and solve a recurrence relation for each of these. The hardest part of this pebble question is the recurrence relation for the number of pebbles. Remember to reuse the pebbles.
The total time used to complete my task is Time(r) = d · Time(r − 1) + 1, because each of my d friends uses Time(r − 1) time and I move one pebble. This evaluates to Time(r) ≈ d · Time(r − 1) = Θ(d^(r−1)). The number of pebbles I will need is Pebbles(r) = Pebbles(r − 1) + (d − 1). Note that I must leave a pebble on node u_i before moving on to node u_(i+1), but the rest of the pebbles can be reused. The time at which I need the most pebbles is when pebbling the d-th node in In(v). At this point I need to have d − 1 pebbles on the previous nodes in In(v), plus the Pebbles(r − 1) pebbles that I allow this last friend to use, for a total of Pebbles(r) = Pebbles(r − 1) + (d − 1) pebbles. This evaluates to Pebbles(r) = (d − 1) · (r − 1) + 1.

(c) Conclusion: Compare and contrast the time and the space used by these two algorithms. Remember that w is much bigger than h and d.
The iterative algorithm uses time w·h, which is the number of nodes, while the recursive algorithm uses exponential time. On the other hand, the iterative algorithm uses Θ(w) pebbles, which is more than the Θ(dh) used by the recursive algorithm, because we were given that w is much bigger than h and d.

(d) (This question was not asked for, but I am including the answer for the students' information.) Once you learn recursive backtracking and dynamic programming, explain how your two pebbling algorithms correspond to these concepts.
A recursive backtracking algorithm has an instance to solve and gets its friends to solve subinstances for it, who in turn get their friends to solve their subinstances for them. This sets up an entire recursion tree of subinstances to be solved. Some of the subinstances that are solved by different friends' friends' friends are in fact the exact same subinstance. Hence, the tree of subinstances can be collapsed into a DAG. Each node represents a subinstance that needs to be solved. If the subinstance associated with node v asks friends to solve the subinstances associated with nodes In(v) = {u_1, u_2, ..., u_d}, then there are edges from each of these nodes to v, specifying that they each need to be solved before v can be solved. The leaves of the DAG correspond to the easy-to-compute base cases. The goal is to pebble the node corresponding to the initial instance of the problem. The recursive backtracking algorithm gets friends to solve subinstances even if they have been solved before. This corresponds to the recursive algorithm above. It uses exponential time and memory proportional to the depth of the recursion. The dynamic programming algorithm assigns one friend to each subinstance and solves each once, starting with the base cases and ending with the original instance. This corresponds to the iterative algorithm above. It uses time and memory proportional to the number of subinstances.
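The two recurrences from part (b)(ii) can be sanity-checked numerically. This sketch is mine, with the base case Time(1) = Pebbles(1) = 1 (a bottom-row node costs one step and one pebble) being my reading of the set-up:

```python
def time_and_pebbles(r, d):
    # Time(r) = d*Time(r-1) + 1 and Pebbles(r) = Pebbles(r-1) + (d-1),
    # with a bottom-row node (r = 1) costing one step and one pebble.
    if r == 1:
        return 1, 1
    t, p = time_and_pebbles(r - 1, d)
    return d * t + 1, p + (d - 1)
```

For every r and d this agrees with the closed form Pebbles(r) = (d − 1)(r − 1) + 1 from the answer above, while Time(r) = 1 + d + ... + d^(r−1) grows like Θ(d^(r−1)) — linear pebbles, exponential time.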

[Figure: a node v with in-edges from u_1 and u_2 (d = 2).]

20. Recall the pebbling game from an earlier question. You can freely put a pebble on a leaf. If there are edges from each of the nodes In(v) = {u_1, u_2, ..., u_d} to v, then there needs to be a pebble on each of the nodes u_1, u_2, ..., u_d before you can put a pebble on v. Consider a rooted tree such that each node (except the leaves) has d children pointing at it. The height is h. The goal is to get a pebble to the top.

[Figure: with pebbles on all d = 3 children u_1, u_2, u_3, you can place a pebble on the parent v.]

(a) Give the Theta of the total number n of nodes in the tree. Show your math and give the intuition.
There are d^i nodes at level i. n = Σ_{i=0}^{h} d^i = Θ(d^h), because the sum is dominated by its biggest term.

(b) What is the connection between pebbles and memory? What does a node in the graph represent? What does an edge in the graph represent?
Each node represents a value (the solution to a subinstance) that needs to be computed. Having a pebble on a node represents the fact that that value is stored in memory. If there are edges from each of the nodes In(v) = {u_1, u_2, ..., u_d} to v, then there needs to be a pebble on each of the nodes u_1, u_2, ..., u_d before you can put a pebble on v. This models the fact that you need the values associated with u_1, u_2, ..., u_d in order to be able to compute the value associated with v.

(c) What is the minimum amount of time needed to get a pebble to the root, even if you have lots of pebbles? Why?
The time needed is n = Θ(d^h), because each node will need to be pebbled.

(d) Briefly describe the recursive algorithm for placing a pebble on the root using as few pebbles as possible. Describe this algorithm using the friends paradigm.
My task is to place a pebble on some node v which is r rows from the bottom. For each of the d nodes u_i ∈ In(v) with an edge to v, I ask a friend to place a pebble on node u_i. Once there is a pebble on each node in In(v), I place a pebble on node v.
(e) What is the number of pebbles for this recursive algorithm to get a pebble to the root? Give and solve the recurrence relation.
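The geometric-sum claim in part (a) of question 20 is easy to check numerically; this one-liner (mine, not the text's) counts the nodes level by level:

```python
def tree_nodes(d, h):
    # A complete d-ary tree of height h has d^i nodes on level i,
    # so n = sum over i = 0..h of d^i, dominated by the top term d^h.
    return sum(d ** i for i in range(h + 1))
```

For d ≥ 2 the total lies between d^h and 2·d^h, confirming n = Θ(d^h).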


CSE 3101 Design and Analysis of Algorithms — Solutions for Practice Test for Unit 2: Recursion (Jeff Edmonds).

More information

DIVIDE & CONQUER. Problem of size n. Solution to sub problem 1

DIVIDE & CONQUER. Problem of size n. Solution to sub problem 1 DIVIDE & CONQUER Definition: Divide & conquer is a general algorithm design strategy with a general plan as follows: 1. DIVIDE: A problem s instance is divided into several smaller instances of the same

More information

Assignment 1. Stefano Guerra. October 11, The following observation follows directly from the definition of in order and pre order traversal:

Assignment 1. Stefano Guerra. October 11, The following observation follows directly from the definition of in order and pre order traversal: Assignment 1 Stefano Guerra October 11, 2016 1 Problem 1 Describe a recursive algorithm to reconstruct an arbitrary binary tree, given its preorder and inorder node sequences as input. First, recall that

More information

Design and Analysis of Algorithms

Design and Analysis of Algorithms Design and Analysis of Algorithms CSE 5311 Lecture 8 Sorting in Linear Time Junzhou Huang, Ph.D. Department of Computer Science and Engineering CSE5311 Design and Analysis of Algorithms 1 Sorting So Far

More information

FINAL EXAMINATION. COMP-250: Introduction to Computer Science - Fall 2010

FINAL EXAMINATION. COMP-250: Introduction to Computer Science - Fall 2010 STUDENT NAME: STUDENT ID: McGill University Faculty of Science School of Computer Science FINAL EXAMINATION COMP-250: Introduction to Computer Science - Fall 2010 December 20, 2010 2:00-5:00 Examiner:

More information

1. [1 pt] What is the solution to the recurrence T(n) = 2T(n-1) + 1, T(1) = 1

1. [1 pt] What is the solution to the recurrence T(n) = 2T(n-1) + 1, T(1) = 1 Asymptotics, Recurrence and Basic Algorithms 1. [1 pt] What is the solution to the recurrence T(n) = 2T(n-1) + 1, T(1) = 1 2. O(n) 2. [1 pt] What is the solution to the recurrence T(n) = T(n/2) + n, T(1)

More information

CSC 505, Spring 2005 Week 6 Lectures page 1 of 9

CSC 505, Spring 2005 Week 6 Lectures page 1 of 9 CSC 505, Spring 2005 Week 6 Lectures page 1 of 9 Objectives: learn general strategies for problems about order statistics learn how to find the median (or k-th largest) in linear average-case number of

More information

CS1800 Discrete Structures Fall 2016 Profs. Aslam, Gold, Ossowski, Pavlu, & Sprague December 16, CS1800 Discrete Structures Final

CS1800 Discrete Structures Fall 2016 Profs. Aslam, Gold, Ossowski, Pavlu, & Sprague December 16, CS1800 Discrete Structures Final CS1800 Discrete Structures Fall 2016 Profs. Aslam, Gold, Ossowski, Pavlu, & Sprague December 16, 2016 Instructions: CS1800 Discrete Structures Final 1. The exam is closed book and closed notes. You may

More information

CPSC 536N: Randomized Algorithms Term 2. Lecture 5

CPSC 536N: Randomized Algorithms Term 2. Lecture 5 CPSC 536N: Randomized Algorithms 2011-12 Term 2 Prof. Nick Harvey Lecture 5 University of British Columbia In this lecture we continue to discuss applications of randomized algorithms in computer networking.

More information

FINAL EXAM SOLUTIONS

FINAL EXAM SOLUTIONS COMP/MATH 3804 Design and Analysis of Algorithms I Fall 2015 FINAL EXAM SOLUTIONS Question 1 (12%). Modify Euclid s algorithm as follows. function Newclid(a,b) if a

More information

Jana Kosecka. Linear Time Sorting, Median, Order Statistics. Many slides here are based on E. Demaine, D. Luebke slides

Jana Kosecka. Linear Time Sorting, Median, Order Statistics. Many slides here are based on E. Demaine, D. Luebke slides Jana Kosecka Linear Time Sorting, Median, Order Statistics Many slides here are based on E. Demaine, D. Luebke slides Insertion sort: Easy to code Fast on small inputs (less than ~50 elements) Fast on

More information

COMP 250 Fall recurrences 2 Oct. 13, 2017

COMP 250 Fall recurrences 2 Oct. 13, 2017 COMP 250 Fall 2017 15 - recurrences 2 Oct. 13, 2017 Here we examine the recurrences for mergesort and quicksort. Mergesort Recall the mergesort algorithm: we divide the list of things to be sorted into

More information

Unit 6 Chapter 15 EXAMPLES OF COMPLEXITY CALCULATION

Unit 6 Chapter 15 EXAMPLES OF COMPLEXITY CALCULATION DESIGN AND ANALYSIS OF ALGORITHMS Unit 6 Chapter 15 EXAMPLES OF COMPLEXITY CALCULATION http://milanvachhani.blogspot.in EXAMPLES FROM THE SORTING WORLD Sorting provides a good set of examples for analyzing

More information

EECS 2011M: Fundamentals of Data Structures

EECS 2011M: Fundamentals of Data Structures M: Fundamentals of Data Structures Instructor: Suprakash Datta Office : LAS 3043 Course page: http://www.eecs.yorku.ca/course/2011m Also on Moodle Note: Some slides in this lecture are adopted from James

More information

MergeSort, Recurrences, Asymptotic Analysis Scribe: Michael P. Kim Date: April 1, 2015

MergeSort, Recurrences, Asymptotic Analysis Scribe: Michael P. Kim Date: April 1, 2015 CS161, Lecture 2 MergeSort, Recurrences, Asymptotic Analysis Scribe: Michael P. Kim Date: April 1, 2015 1 Introduction Today, we will introduce a fundamental algorithm design paradigm, Divide-And-Conquer,

More information

University of Waterloo CS240 Winter 2018 Assignment 2. Due Date: Wednesday, Jan. 31st (Part 1) resp. Feb. 7th (Part 2), at 5pm

University of Waterloo CS240 Winter 2018 Assignment 2. Due Date: Wednesday, Jan. 31st (Part 1) resp. Feb. 7th (Part 2), at 5pm University of Waterloo CS240 Winter 2018 Assignment 2 version: 2018-02-04 15:38 Due Date: Wednesday, Jan. 31st (Part 1) resp. Feb. 7th (Part 2), at 5pm Please read the guidelines on submissions: http://www.student.cs.uwaterloo.ca/~cs240/

More information

Sorting is a problem for which we can prove a non-trivial lower bound.

Sorting is a problem for which we can prove a non-trivial lower bound. Sorting The sorting problem is defined as follows: Sorting: Given a list a with n elements possessing a total order, return a list with the same elements in non-decreasing order. Remember that total order

More information

CS:3330 (22c:31) Algorithms

CS:3330 (22c:31) Algorithms What s an Algorithm? CS:3330 (22c:31) Algorithms Introduction Computer Science is about problem solving using computers. Software is a solution to some problems. Algorithm is a design inside a software.

More information

Here is a recursive algorithm that solves this problem, given a pointer to the root of T : MaxWtSubtree [r]

Here is a recursive algorithm that solves this problem, given a pointer to the root of T : MaxWtSubtree [r] CSE 101 Final Exam Topics: Order, Recurrence Relations, Analyzing Programs, Divide-and-Conquer, Back-tracking, Dynamic Programming, Greedy Algorithms and Correctness Proofs, Data Structures (Heap, Binary

More information

Algorithms and Data Structures

Algorithms and Data Structures Algorithms and Data Structures Spring 2019 Alexis Maciel Department of Computer Science Clarkson University Copyright c 2019 Alexis Maciel ii Contents 1 Analysis of Algorithms 1 1.1 Introduction.................................

More information

1 (15 points) LexicoSort

1 (15 points) LexicoSort CS161 Homework 2 Due: 22 April 2016, 12 noon Submit on Gradescope Handed out: 15 April 2016 Instructions: Please answer the following questions to the best of your ability. If you are asked to show your

More information

/633 Introduction to Algorithms Lecturer: Michael Dinitz Topic: Priority Queues / Heaps Date: 9/27/17

/633 Introduction to Algorithms Lecturer: Michael Dinitz Topic: Priority Queues / Heaps Date: 9/27/17 01.433/33 Introduction to Algorithms Lecturer: Michael Dinitz Topic: Priority Queues / Heaps Date: 9/2/1.1 Introduction In this lecture we ll talk about a useful abstraction, priority queues, which are

More information

SEARCHING, SORTING, AND ASYMPTOTIC COMPLEXITY. Lecture 11 CS2110 Spring 2016

SEARCHING, SORTING, AND ASYMPTOTIC COMPLEXITY. Lecture 11 CS2110 Spring 2016 1 SEARCHING, SORTING, AND ASYMPTOTIC COMPLEXITY Lecture 11 CS2110 Spring 2016 Time spent on A2 2 Histogram: [inclusive:exclusive) [0:1): 0 [1:2): 24 ***** [2:3): 84 ***************** [3:4): 123 *************************

More information

Data Structures and Algorithms CSE 465

Data Structures and Algorithms CSE 465 Data Structures and Algorithms CSE 465 LECTURE 4 More Divide and Conquer Binary Search Exponentiation Multiplication Sofya Raskhodnikova and Adam Smith Review questions How long does Merge Sort take on

More information

CSE373: Data Structures and Algorithms Lecture 4: Asymptotic Analysis. Aaron Bauer Winter 2014

CSE373: Data Structures and Algorithms Lecture 4: Asymptotic Analysis. Aaron Bauer Winter 2014 CSE373: Data Structures and Algorithms Lecture 4: Asymptotic Analysis Aaron Bauer Winter 2014 Previously, on CSE 373 We want to analyze algorithms for efficiency (in time and space) And do so generally

More information

Chapter 8 Sort in Linear Time

Chapter 8 Sort in Linear Time Chapter 8 Sort in Linear Time We have so far discussed several sorting algorithms that sort a list of n numbers in O(nlog n) time. Both the space hungry merge sort and the structurely interesting heapsort

More information

Problem Set 6 Due: 11:59 Sunday, April 29

Problem Set 6 Due: 11:59 Sunday, April 29 CS230 Data Structures Handout # 36 Prof. Lyn Turbak Monday, April 23 Wellesley College Problem Set 6 Due: 11:59 Sunday, April 29 Reading: You are expected to read and understand all of the following handouts,

More information

1 The range query problem

1 The range query problem CS268: Geometric Algorithms Handout #12 Design and Analysis Original Handout #12 Stanford University Thursday, 19 May 1994 Original Lecture #12: Thursday, May 19, 1994 Topics: Range Searching with Partition

More information

CPSC 311 Lecture Notes. Sorting and Order Statistics (Chapters 6-9)

CPSC 311 Lecture Notes. Sorting and Order Statistics (Chapters 6-9) CPSC 311 Lecture Notes Sorting and Order Statistics (Chapters 6-9) Acknowledgement: These notes are compiled by Nancy Amato at Texas A&M University. Parts of these course notes are based on notes from

More information

Analysis of Algorithms

Analysis of Algorithms Analysis of Algorithms Concept Exam Code: 16 All questions are weighted equally. Assume worst case behavior and sufficiently large input sizes unless otherwise specified. Strong induction Consider this

More information

Lecture 7 Quicksort : Principles of Imperative Computation (Spring 2018) Frank Pfenning

Lecture 7 Quicksort : Principles of Imperative Computation (Spring 2018) Frank Pfenning Lecture 7 Quicksort 15-122: Principles of Imperative Computation (Spring 2018) Frank Pfenning In this lecture we consider two related algorithms for sorting that achieve a much better running time than

More information

Lecture 5: Sorting Part A

Lecture 5: Sorting Part A Lecture 5: Sorting Part A Heapsort Running time O(n lg n), like merge sort Sorts in place (as insertion sort), only constant number of array elements are stored outside the input array at any time Combines

More information

! Addition! Multiplication! Bigger Example - RSA cryptography

! Addition! Multiplication! Bigger Example - RSA cryptography ! Addition! Multiplication! Bigger Example - RSA cryptography Modular Arithmetic Modular Exponentiation Primality Testing (Fermat s little theorem) Probabilistic algorithm Euclid s Algorithm for gcd (greatest

More information

1 Minimum Cut Problem

1 Minimum Cut Problem CS 6 Lecture 6 Min Cut and Karger s Algorithm Scribes: Peng Hui How, Virginia Williams (05) Date: November 7, 07 Anthony Kim (06), Mary Wootters (07) Adapted from Virginia Williams lecture notes Minimum

More information

Reading 8 : Recursion

Reading 8 : Recursion CS/Math 40: Introduction to Discrete Mathematics Fall 015 Instructors: Beck Hasti, Gautam Prakriya Reading 8 : Recursion 8.1 Recursion Recursion in computer science and mathematics refers to the idea of

More information

ECE 250 Algorithms and Data Structures

ECE 250 Algorithms and Data Structures ECE 250 Algorithms and Data Structures Sections 001 and 002 FINAL EXAMINATION Douglas Wilhelm Harder dwharder@uwaterloo.ca EIT 4018 x37023 2014-04-16T09:00P2H30M Rooms: PAC 7, 8 If you are writing a supplemental

More information

SORTING, SETS, AND SELECTION

SORTING, SETS, AND SELECTION CHAPTER 11 SORTING, SETS, AND SELECTION ACKNOWLEDGEMENT: THESE SLIDES ARE ADAPTED FROM SLIDES PROVIDED WITH DATA STRUCTURES AND ALGORITHMS IN C++, GOODRICH, TAMASSIA AND MOUNT (WILEY 2004) AND SLIDES FROM

More information

Computer Science 385 Analysis of Algorithms Siena College Spring Topic Notes: Divide and Conquer

Computer Science 385 Analysis of Algorithms Siena College Spring Topic Notes: Divide and Conquer Computer Science 385 Analysis of Algorithms Siena College Spring 2011 Topic Notes: Divide and Conquer Divide and-conquer is a very common and very powerful algorithm design technique. The general idea:

More information

Assertions & Verification & Example Loop Invariants Example Exam Questions

Assertions & Verification & Example Loop Invariants Example Exam Questions 2014 November 27 1. Assertions & Verification & Example Loop Invariants Example Exam Questions 2. A B C Give a general template for refining an operation into a sequence and state what questions a designer

More information

Limitations of Algorithmic Solvability In this Chapter we investigate the power of algorithms to solve problems Some can be solved algorithmically and

Limitations of Algorithmic Solvability In this Chapter we investigate the power of algorithms to solve problems Some can be solved algorithmically and Computer Language Theory Chapter 4: Decidability 1 Limitations of Algorithmic Solvability In this Chapter we investigate the power of algorithms to solve problems Some can be solved algorithmically and

More information

Sorting: Given a list A with n elements possessing a total order, return a list with the same elements in non-decreasing order.

Sorting: Given a list A with n elements possessing a total order, return a list with the same elements in non-decreasing order. Sorting The sorting problem is defined as follows: Sorting: Given a list A with n elements possessing a total order, return a list with the same elements in non-decreasing order. Remember that total order

More information

Your favorite blog : (popularly known as VIJAY JOTANI S BLOG..now in facebook.join ON FB VIJAY

Your favorite blog :  (popularly known as VIJAY JOTANI S BLOG..now in facebook.join ON FB VIJAY Course Code : BCS-042 Course Title : Introduction to Algorithm Design Assignment Number : BCA(IV)-042/Assign/14-15 Maximum Marks : 80 Weightage : 25% Last Date of Submission : 15th October, 2014 (For July

More information

Hi everyone. Starting this week I'm going to make a couple tweaks to how section is run. The first thing is that I'm going to go over all the slides

Hi everyone. Starting this week I'm going to make a couple tweaks to how section is run. The first thing is that I'm going to go over all the slides Hi everyone. Starting this week I'm going to make a couple tweaks to how section is run. The first thing is that I'm going to go over all the slides for both problems first, and let you guys code them

More information

Comparison Sorts. Chapter 9.4, 12.1, 12.2

Comparison Sorts. Chapter 9.4, 12.1, 12.2 Comparison Sorts Chapter 9.4, 12.1, 12.2 Sorting We have seen the advantage of sorted data representations for a number of applications Sparse vectors Maps Dictionaries Here we consider the problem of

More information

CPSC 320 Midterm 2 Thursday March 13th, 2014

CPSC 320 Midterm 2 Thursday March 13th, 2014 CPSC 320 Midterm 2 Thursday March 13th, 2014 [12] 1. Answer each question with True or False, and then justify your answer briefly. [2] (a) The Master theorem can be applied to the recurrence relation

More information

Solutions to Problem Set 1

Solutions to Problem Set 1 CSCI-GA.3520-001 Honors Analysis of Algorithms Solutions to Problem Set 1 Problem 1 An O(n) algorithm that finds the kth integer in an array a = (a 1,..., a n ) of n distinct integers. Basic Idea Using

More information

Week - 01 Lecture - 03 Euclid's Algorithm for gcd. Let us continue with our running example of gcd to explore more issues involved with program.

Week - 01 Lecture - 03 Euclid's Algorithm for gcd. Let us continue with our running example of gcd to explore more issues involved with program. Programming, Data Structures and Algorithms in Python Prof. Madhavan Mukund Department of Computer Science and Engineering Indian Institute of Technology, Madras Week - 01 Lecture - 03 Euclid's Algorithm

More information

We will give examples for each of the following commonly used algorithm design techniques:

We will give examples for each of the following commonly used algorithm design techniques: Review This set of notes provides a quick review about what should have been learned in the prerequisite courses. The review is helpful to those who have come from a different background; or to those who

More information

O(n): printing a list of n items to the screen, looking at each item once.

O(n): printing a list of n items to the screen, looking at each item once. UNIT IV Sorting: O notation efficiency of sorting bubble sort quick sort selection sort heap sort insertion sort shell sort merge sort radix sort. O NOTATION BIG OH (O) NOTATION Big oh : the function f(n)=o(g(n))

More information

CSE373: Data Structure & Algorithms Lecture 21: More Comparison Sorting. Aaron Bauer Winter 2014

CSE373: Data Structure & Algorithms Lecture 21: More Comparison Sorting. Aaron Bauer Winter 2014 CSE373: Data Structure & Algorithms Lecture 21: More Comparison Sorting Aaron Bauer Winter 2014 The main problem, stated carefully For now, assume we have n comparable elements in an array and we want

More information

Total Points: 60. Duration: 1hr

Total Points: 60. Duration: 1hr CS800 : Algorithms Fall 201 Nov 22, 201 Quiz 2 Practice Total Points: 0. Duration: 1hr 1. (,10) points Binary Heap. (a) The following is a sequence of elements presented to you (in order from left to right):

More information

Section 05: Solutions

Section 05: Solutions Section 05: Solutions 1. Memory and B-Tree (a) Based on your understanding of how computers access and store memory, why might it be faster to access all the elements of an array-based queue than to access

More information

We will show that the height of a RB tree on n vertices is approximately 2*log n. In class I presented a simple structural proof of this claim:

We will show that the height of a RB tree on n vertices is approximately 2*log n. In class I presented a simple structural proof of this claim: We have seen that the insert operation on a RB takes an amount of time proportional to the number of the levels of the tree (since the additional operations required to do any rebalancing require constant

More information

Lecture Notes for Advanced Algorithms

Lecture Notes for Advanced Algorithms Lecture Notes for Advanced Algorithms Prof. Bernard Moret September 29, 2011 Notes prepared by Blanc, Eberle, and Jonnalagedda. 1 Average Case Analysis 1.1 Reminders on quicksort and tree sort We start

More information