CSE 101 Final Exam

Topics: Order, Recurrence Relations, Analyzing Programs, Divide-and-Conquer, Back-tracking, Dynamic Programming, Greedy Algorithms and Correctness Proofs, Data Structures (Heap, Binary Search Tree, Stacks, Graphs, Lists), Using Data Structures in Algorithms.

Time: 3 Hours. Some problems have multiple parts; do all parts. EXPLAIN ALL ANSWERS, with at least a few lines or sentences of precise English.

Order Notation

For each of the following, answer True or False and give a brief explanation (1 or 2 lines or sentences). Each is worth 4 points: 2 points for the correct answer and 2 points for the explanation or proof.

1. 1000n + 12,500 ∈ O(n log n)? True: 1000n + 12,500 ≤ 13,500n ≤ 13,500 n log n for n ≥ 2.

2. n² + (n−1)² + (n−2)² + ... + 1² ∈ O(n²)? False: n/2 of the terms are larger than n²/4, so the sum is Ω(n³), hence not upper bounded by O(n²).

3. 2^(3n) ∈ O(2^n)? False: 2^(3n)/2^n = 2^(2n), which goes to infinity with n, so we cannot have 2^(3n) ≤ C·2^n for any constant C.

4. If n ≥ 16, an O(n log n)-time algorithm is always at least four times faster than an O(n²)-time algorithm. False: since O-notation hides multiplicative constants, no exact numerical conclusion can be drawn for specific values of n. Also, O gives only an upper bound, so the O(n²) algorithm could in fact also be O(n). (Either answer is enough.)

5. If f and g are any positive, non-decreasing functions, then (f(n) + g(n))² ∈ Θ(f(n)² + g(n)²). (Prove or give a counter-example.) True: (f(n) + g(n))² = f(n)² + 2f(n)g(n) + g(n)² ≥ f(n)² + g(n)², since f and g are positive. On the other hand, 2f(n)g(n) ≤ 2·max(f(n), g(n))² ≤ 2f(n)² + 2g(n)², since the max squared is one of those two terms. So all terms are upper bounded by O(f(n)² + g(n)²).

Divide and Conquer

The maximum weight sub-tree problem is as follows. You are given a balanced binary tree T of size n, where each node i ∈ T has a (not necessarily positive) weight w(i).
(Every node in T has pointers to its left child, right child, and parent. A NIL child pointer means the node has no child on that side; a NIL parent pointer means the node is the root. You are given a pointer to the root r of T.) A rooted sub-tree of T is a connected sub-graph of T containing the root r. (So a rooted sub-tree is not necessarily the entire sub-tree rooted at a node; however, it cannot contain the children of a node without containing the node itself.) You wish to find the maximum possible value of the sum of weights of nodes in a rooted sub-tree S of T, i.e., Σ_{i ∈ S} w(i).
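In runnable form, the natural recursion for this problem might look like the following Python sketch (the Node class and its field names are illustrative assumptions of this sketch, not part of the problem statement; None plays the role of NIL):

```python
# Sketch of a recursive solution to the maximum weight rooted sub-tree
# problem. A child's best subtree is included only when its value is
# positive; the root itself must always be included.

class Node:
    def __init__(self, weight, left=None, right=None):
        self.weight = weight
        self.left = left
        self.right = right

def max_wt_subtree(r):
    """Maximum total weight of a rooted sub-tree of the tree rooted at r."""
    if r is None:
        return 0
    # Clamp at 0: taking nothing from a child is always an option.
    a = max(0, max_wt_subtree(r.left))
    b = max(0, max_wt_subtree(r.right))
    return r.weight + a + b
```

The clamp at 0 is exactly the "take a child's whole best sub-tree or nothing from that side" choice; since a rooted sub-tree must contain the root, a tree whose root has weight −7 and no profitable children still returns −7.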

Here is a recursive algorithm that solves this problem, given a pointer to the root of T:

MaxWtSubtree[r]
1. IF r = NIL return 0.
2. A ← max(0, MaxWtSubtree[r.leftChild])
3. B ← max(0, MaxWtSubtree[r.rightChild])
4. Return w[r] + A + B.

(a) Give a recurrence and a time analysis for this algorithm in the case when T is a complete binary tree of height h and size n = 2^h − 1. (10 pts.)

We call ourselves recursively on the left and right subtrees, and then do a constant amount of additional work. If both sub-trees have size at most n/2, as in the balanced case, T(n) ≤ 2T(n/2) + O(1). Since a = 2 > b^d = 2^0 = 1, this is the bottom-heavy case of the Master Theorem, so the total time is O(n^(log_2 2)) = O(n).

(b) Prove that the same worst-case bound holds if T is any tree of size n. (10 pts.)

More generally, if the left subtree has size L and the right subtree has size R, then n = L + R + 1 and T(n) ≤ T(L) + T(R) + O(1). We can prove by strong induction that T(n) is linear. Let c be greater than both the hidden constant in the O(1) above and the time the algorithm takes on inputs of size 1. In the base case, if the tree has size n = 1, the algorithm takes time at most c·n = c. Assume the algorithm takes time at most c·n' for all 1 ≤ n' < n. Then on an input of size n, it takes time T(L) + T(R) + c', where c' ≤ c is the hidden constant. By the inductive hypothesis, T(L) ≤ cL and T(R) ≤ cR, so the total time is at most T(n) ≤ T(L) + T(R) + c ≤ cL + cR + c = c(L + R + 1) = cn.

Alternatively, observe that the algorithm is called at most once per node of the tree (plus at most two NIL calls per node), and each call's non-recursive part takes constant time, so the total is O(n).

Monotone matchings

All the remaining questions concern variations of the following problem. Let G be a bipartite graph, with L = {u_1, ..., u_l} the set of nodes on the left, R = {v_1, ..., v_r} the set of nodes on the right, E the set of edges, each with one endpoint in L and the other in R, and m = |E| the number of edges.
A matching in G is a set of edges M ⊆ E such that no two edges in M share an endpoint (neither the one in L nor the one in R). A matching M is monotone if for every two edges (u_{i1}, v_{j1}) and (u_{i2}, v_{j2}) in M, if i1 < i2 then j1 < j2. That is, if the nodes are put in order on the two sides, all the edges in the matching could be drawn without crossings. The problem is: given a bipartite graph G, find the largest monotone matching in G. Assume l ≤ r. Then a monotone matching M is perfect if it has size l, i.e., |M| = l.

Greedy Algorithms and Data Structures

Part 1: 10 points

Below is a greedy strategy for the largest monotone matching problem. Give a counter-example where it fails to produce the optimal solution. (Hint: since below you will show that the algorithm works when the maximum monotone matching is perfect, your example shouldn't have a perfect monotone matching.)

Candidate Strategy A: For each i = 1 to l, if u_i has at least one undeleted neighbor v_j, match it to the undeleted neighbor with the smallest index j. Then delete u_i and v_1, ..., v_j, and repeat.

Say that G has three nodes u_1, u_2, u_3 on the left and three nodes v_1, v_2, v_3 on the right. If u_1 is connected only to v_3, u_2 only to v_1, and u_3 only to v_2, then the best monotone matching uses the last two edges. But the greedy algorithm above picks the first edge, deleting all of v_1, v_2, v_3 and leaving itself no further edges to choose from.

Part 2: 5 pts

Illustrate the above strategy on the following graph with a perfect matching: L = {u_1, u_2, u_3}, R = {v_1, v_2, v_3, v_4, v_5}, and E = {(u_1, v_2), (u_1, v_3), (u_1, v_4), (u_2, v_1), (u_2, v_2), (u_2, v_3), (u_2, v_5), (u_3, v_1), (u_3, v_5)}.

Skipped.

Part 3: 10 points

Prove that, if G has a perfect monotone matching, then Candidate Strategy A finds one. (Hint: use one of the following two methods. Let Strategy A match u_i with v_{j_i} (unless u_i can't be matched). Let OPT be a perfect matching that matches each u_i with v_{k_i}. (Note: all left nodes are matched by OPT, since it is perfect.) Transformation method: prove by induction on t that there is a perfect monotone matching OPT_t that matches each u_i with v_{j_i} for 1 ≤ i ≤ t.)
We show by induction that, if there is a perfect monotone matching, then there is one that matches the first i left nodes exactly as the greedy algorithm does. In the base case, i = 0, there is nothing to prove. Assume OPT_i is a perfect monotone matching that matches the first i left nodes as the greedy algorithm does. Say that OPT_i matches u_i to some v_j (as does the greedy algorithm) and matches u_{i+1} to some v_k. We must have k > j for OPT_i to be monotone. So at the start of the (i+1)-st iteration of the greedy algorithm, v_k has not been deleted.

The greedy algorithm then matches u_{i+1} to its first neighbor that has not been deleted, if any. Since v_k is a neighbor that hasn't been deleted, the algorithm matches u_{i+1} either to v_k or to some v_{k'} with k' < k. In the first case, we can let OPT_{i+1} = OPT_i. In the second, we can replace the edge from u_{i+1} to v_k in OPT_i with the edge from u_{i+1} to v_{k'} to construct OPT_{i+1}. In either case, OPT_{i+1} is a perfect monotone matching, because the match for u_{i+1} has index greater than j, hence comes after any right node used for nodes before u_{i+1}, and has index at most k, hence comes before any right node used for nodes after u_{i+1}. By induction, the claim holds for every i. In particular, for i = l, there is a perfect monotone matching that matches every left node exactly as the greedy algorithm does, so the greedy algorithm's output is a perfect monotone matching.

Part 4: 10 points

Describe an efficient algorithm that carries out the strategy. Your description should specify which data structures you use and any pre-processing steps. Assume the graph is given in adjacency-list format. Give a time analysis in terms of l, r, and m.

Assume the graph is in adjacency-list format. We simply keep track of the index k of the last right node v_k that has been matched. For each i, we run through u_i's list of neighbors and find the smallest k' > k such that v_{k'} ∈ N(u_i); this takes time proportional to the degree of u_i. If no such k' exists, we leave u_i unmatched. Otherwise, we match u_i to v_{k'} and set k ← k'. Since the time for each left node is O(1 + deg(u_i)), the total time is O(l + m).

Back-tracking and Dynamic Programming

The following recursive algorithm for the maximum monotone matching problem finds the maximum matching whether or not it is perfect. It branches on whether a node on the left is matched or unmatched. By the analysis of greedy algorithm A above, we can see that when a node is matched, it should always be matched to its smallest available neighbor.
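Part 4's single-counter implementation of Candidate Strategy A can be sketched in runnable Python (the function name, the adjacency-list representation, and 1-based node indices are choices of this sketch, not fixed by the problem):

```python
# Sketch of Candidate Strategy A with the single-counter implementation:
# adj[i-1] lists the (1-based) indices j of the neighbors v_j of u_i.
# Each neighbor list is scanned once, so the total time is O(l + m).

def greedy_monotone_matching(adj):
    matching = []   # (i, j) pairs, built in increasing order of i
    k = 0           # index of the last right node matched (0 = none yet)
    for i, neighbors in enumerate(adj, start=1):
        # "Undeleted" neighbors are exactly those with index > k.
        candidates = [j for j in neighbors if j > k]
        if candidates:
            k = min(candidates)       # smallest undeleted neighbor
            matching.append((i, k))
    return matching
```

On the Part 1 counter-example (u_1 adjacent only to v_3, u_2 to v_1, u_3 to v_2) this returns just [(1, 3)], of size 1, while the optimal monotone matching has size 2; on the Part 2 graph it finds a perfect matching of size 3, as Part 3 guarantees.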
The backtracking algorithm just returns the size of the maximum monotone matching, not the matching itself.

BTMMM(G = (L = {u_1, ..., u_l}, R = {v_1, ..., v_r}, E): bipartite graph)
1. IF |L| = 0 return 0.
2. Unmatched ← BTMMM(G − {u_1}).
3. IF N(u_1) = ∅ return Unmatched.
4. Let J be the first neighbor of u_1, i.e., the smallest index such that v_J ∈ N(u_1).
5. Matched ← 1 + BTMMM(G − {u_1, v_1, ..., v_J}).
6. Return max(Matched, Unmatched).
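A hypothetical Python transcription of BTMMM (a sketch: deletions are represented by the index cutoffs I and K rather than by rebuilding G, and adj[i-1] lists the 1-based neighbor indices of u_i):

```python
# Sketch of the BTMMM backtracking recursion. The subproblem (I, K) uses
# the remaining left nodes u_{I+1}, ..., u_l and right nodes v_K, ..., v_r.

def btmmm(adj, I=0, K=1):
    """Size of the maximum monotone matching on the remaining subgraph."""
    if I == len(adj):
        return 0
    # Branch 1: leave the next left node unmatched.
    unmatched = btmmm(adj, I + 1, K)
    # Its smallest undeleted neighbor, if any.
    candidates = [j for j in adj[I] if j >= K]
    if not candidates:
        return unmatched
    J = min(candidates)
    # Branch 2: match it to v_J, deleting v_K, ..., v_J.
    matched = 1 + btmmm(adj, I + 1, J + 1)
    return max(matched, unmatched)
```

On the greedy counter-example from Part 1 this correctly returns 2, the size the greedy strategy missed.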

Part 1: 5 points

Illustrate the above algorithm on your counter-example graph for the greedy strategy (as a tree of recursive calls and answers).

Skipped.

Part 2: 5 points

Give an upper bound on the number of recursive calls the above algorithm makes in the worst case. (Be sure to explain your answer.)

There are two ways to give this bound. We make two recursive calls at each step. In terms of the number of nodes n = l + r, one call is on a graph of size n − 1 and the other on a graph of size at most n − 2. This gives an upper bound of O(Fib_n), the n-th Fibonacci number. On the other hand, both recursive calls reduce |L| by 1, so the depth of the binary recursion tree is at most l, which gives an upper bound of O(2^l). Both bounds are correct, and neither is always better than the other; we could combine them by taking the minimum of the two.

Part 3: 10 points

Give a dynamic programming version of the recurrence.

At any point in the recursion, the remaining nodes on the left are u_I, ..., u_l and the remaining nodes on the right are of the form v_K, ..., v_r. So we create a matrix MM[I, K] to store the answer for each 1 ≤ I ≤ l + 1 and each 1 ≤ K ≤ r + 1. I increases as we make recursive calls, so we fill in this matrix in decreasing order of I. The base cases are I = l + 1, i.e., the left side is empty.

DPMMM(G = (L = {u_1, ..., u_l}, R = {v_1, ..., v_r}, E): bipartite graph)
1. Initialize MM[1...l + 1, 1...r + 1].
2. FOR K = 1 to r + 1 do: MM[l + 1, K] ← 0.
3. FOR I = l downto 1 do: FOR K = 1 to r + 1 do:
4.   Unmatched ← MM[I + 1, K].
5.   IF u_I has no neighbor v_H with H ≥ K
6.   THEN MM[I, K] ← Unmatched
7.   ELSE let J be the smallest such H,
8.     Matched ← 1 + MM[I + 1, J + 1],
9.     MM[I, K] ← max(Matched, Unmatched).
10. Return MM[1, 1].

Part 4: 5 points

Give a time analysis of this dynamic programming algorithm.

In the most naive version of the algorithm above, for each node u_I we search through all of its neighbors for each of the r + 1 values of K. So the cost is Σ_I O(deg(u_I) · (r + 1)) = O(r·m).
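The DPMMM pseudocode can be transcribed into runnable Python as a sketch (the function name and the 1-based adjacency-list representation are assumptions of this sketch; the table uses one extra row and column so the indices match the pseudocode directly):

```python
# Sketch of DPMMM. MM[I][K] holds the size of the maximum monotone matching
# using left nodes u_I, ..., u_l and right nodes v_K, ..., v_r; row l+1 is
# the all-zero base case (empty left side). adj[i-1] lists the neighbor
# indices of u_i. This naive table fill runs in O(r * m) time.

def dpmmm(adj, r):
    l = len(adj)
    MM = [[0] * (r + 2) for _ in range(l + 2)]
    for I in range(l, 0, -1):
        for K in range(1, r + 2):
            unmatched = MM[I + 1][K]
            candidates = [H for H in adj[I - 1] if H >= K]
            if not candidates:
                MM[I][K] = unmatched
            else:
                J = min(candidates)   # smallest neighbor with index >= K
                MM[I][K] = max(1 + MM[I + 1][J + 1], unmatched)
    return MM[1][1]
```

Matching u_I to v_J removes u_I and all of v_K, ..., v_J, which is why the "matched" branch reads from MM[I + 1][J + 1].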

But if we sort the neighbors of u_I before the loop over K, and keep a counter marking where we left off in the search for a neighbor H ≥ K, we can update this counter in constant time each time we increment K (it either stays the same or advances to the next entry of the sorted list). Then the total time is Σ_I O(deg(u_I) log deg(u_I) + r) ≤ Σ_I O(deg(u_I) log r + r) = O(m log r + r·l).

Part 5: 5 points

Show the array or matrix that your dynamic programming algorithm produces on the example graph from Part 2 of the greedy algorithm.

Skipped.
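The sorted-neighbor counter from Part 4's improved analysis might look like this in Python (a hypothetical refinement of the naive table fill, running in O(m log r + r·l)):

```python
# Sketch of DPMMM with the counter optimization: each row sorts its neighbor
# list once, then a pointer p sweeps forward as K increases, so locating the
# smallest neighbor H >= K costs amortized O(1) per value of K.

def dpmmm_fast(adj, r):
    l = len(adj)
    MM = [[0] * (r + 2) for _ in range(l + 2)]
    for I in range(l, 0, -1):
        nbrs = sorted(adj[I - 1])    # O(deg log deg) preprocessing per row
        p = 0                        # pointer into nbrs: first entry >= K
        for K in range(1, r + 2):
            while p < len(nbrs) and nbrs[p] < K:
                p += 1               # advance past neighbors below K
            unmatched = MM[I + 1][K]
            if p == len(nbrs):
                MM[I][K] = unmatched
            else:
                J = nbrs[p]          # smallest neighbor with index >= K
                MM[I][K] = max(1 + MM[I + 1][J + 1], unmatched)
    return MM[1][1]
```

The pointer only moves forward within a row, so the inner while loop does at most deg(u_I) total work per row, as the analysis above requires.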