CSE373 Fall 2013, Final Examination December 10, 2013 Please do not turn the page until the bell rings.

CSE373 Fall 2013, Final Examination
December 10, 2013

Please do not turn the page until the bell rings.

Rules: The exam is closed-book, closed-note, closed-calculator, closed-electronics. Please stop promptly at 4:20.

There are 114 points total, distributed unevenly among 10 questions (many with multiple parts):

Question:  1   2   3   4   5   6   7   8   9   10
Max:       28  8   11  15  7   7   12  8   9   9
Earned:

Advice:
- Read questions carefully. Understand a question before you start writing.
- Write down thoughts and intermediate steps so you can get partial credit, but clearly circle your final answer.
- The questions are not necessarily in order of difficulty. Skip around. Make sure you get to all the problems.
- If you have questions, ask.
- Relax. You are here to learn.

1. (28 points) Don't miss part (b) of this question.

(a) Fill in this table with the worst-case asymptotic running time of each operation when using the data structure listed. Assume the following:
- Items are comparable (given two items, one is less than, equal to, or greater than the other) in O(1) time.
- For insertions, it is the client's responsibility not to insert an item if there is already an equal item in the data structure (so the operations do not need to check this).
- For insertions, assume the data structure has enough room (do not include any resizing costs).
- For deletions, assume we do not use lazy deletion.

The operations are:
- insert: take an item and add it to the structure
- lookup: take an item and return whether it is in the structure
- delete: take an item and remove it from the structure, if present in it
- getmin: return the smallest item in the structure
- getmax: return the largest item in the structure

The data structures are: a sorted array; an unsorted array; an array kept organized as a min-heap; an AVL tree; a hashtable with chaining for collision resolution.

(b) The entries in the table above are worst-case. For each one, circle the entry if we expect an asymptotically better run-time in practice. The correct answer for this question circles two entries.

(a)
                              insert    lookup    delete    getmin    getmax
sorted array                  O(n)      O(log n)  O(n)      O(1)      O(1)
unsorted array                O(1)      O(n)      O(n)      O(n)      O(n)
array kept as a min-heap      O(log n)  O(n)      O(n)      O(1)      O(n)
AVL tree                      O(log n)  O(log n)  O(log n)  O(log n)  O(log n)
hashtable with chaining       O(1)      O(n)      O(n)      O(n)      O(n)

For hashtable getmin and getmax, we accepted O(n + tableSize) since that is even more correct, but deducted 1 point total for O(tableSize).

(b) Circle these two: hashtable lookup and hashtable delete.

2. (8 points) Suppose there is a class C that has a constructor that takes an argument of type A and stores the content of the A object for later use by other methods. For each definition of class A below, answer "yes copy" if the constructor in C would need to make a deep copy of the argument to preserve the abstraction defined by C, or "no copy" if it is not necessary.

(a) public class B {
        public int x;
        public int y;
        public B(int _x, int _y) { x = _x; y = _y; }
    }
    public class A {
        public B a;
        public B b;
        public A(B _a, B _b) { a = _a; b = _b; }
    }

(b) public class B {
        public final int x;
        public final int y;
        public B(int _x, int _y) { x = _x; y = _y; }
    }
    public class A {
        public final B a;
        public final B b;
        public A(B _a, B _b) { a = _a; b = _b; }
    }

(c) public class B {
        public final int x;
        public final int y;
        public B(int _x, int _y) { x = _x; y = _y; }
    }
    public class A {
        public B a;
        public B b;
        public A(B _a, B _b) { a = _a; b = _b; }
    }

(d) public class B {
        public int x;
        public int y;
        public B(int _x, int _y) { x = _x; y = _y; }
    }
    public class A {
        public final B a;
        public final B b;
        public A(B _a, B _b) { a = _a; b = _b; }
    }

(a) yes copy; (b) no copy; (c) yes copy; (d) yes copy

Note: For (c), we need to copy, but a shallow copy suffices (the B objects themselves are immutable). We gave credit if this was explained.
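To see why (a) needs a deep copy, here is a hedged sketch of what a deep-copying constructor in C might look like for the fully mutable variant. Classes A and B come from part (a); the class name C's internals, the field `stored`, and the method `sumOfXs` are our illustrative assumptions, not part of the exam.

```java
// Classes A and B as defined in part (a): both fully mutable.
class B {
    public int x, y;
    public B(int _x, int _y) { x = _x; y = _y; }
}

class A {
    public B a, b;
    public A(B _a, B _b) { a = _a; b = _b; }
}

// Sketch of a client class C that deep-copies its argument, so later
// mutations by the caller cannot change the state C has stored.
public class C {
    private final A stored;

    public C(A arg) {
        // Deep copy: a fresh A holding fresh B objects with the same values.
        stored = new A(new B(arg.a.x, arg.a.y), new B(arg.b.x, arg.b.y));
    }

    public int sumOfXs() { return stored.a.x + stored.b.x; }

    public static void main(String[] args) {
        A orig = new A(new B(1, 2), new B(3, 4));
        C c = new C(orig);
        orig.a.x = 100;                   // caller mutates after handing A to C
        System.out.println(c.sumOfXs());  // prints 4: the deep copy is unaffected
    }
}
```

Without the deep copy (i.e., `stored = arg;`), the mutation in `main` would leak into C and break its abstraction, which is exactly why (a) is "yes copy".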

3. (11 points) For each of the six questions in parts (a)-(c), answer in terms of big-oh and the number of vertices in the graph, |V|.

(a) Suppose a graph has no edges.
    i. What is the asymptotic space cost of storing the graph as an adjacency list?
    ii. What is the asymptotic space cost of storing the graph as an adjacency matrix?
(b) Suppose a graph has every possible edge.
    i. What is the asymptotic space cost of storing the graph as an adjacency list?
    ii. What is the asymptotic space cost of storing the graph as an adjacency matrix?
(c) Suppose an undirected graph has one node A that is connected to every other node and the graph has no other edges.
    i. What is the asymptotic space cost of storing the graph as an adjacency list?
    ii. What is the asymptotic space cost of storing the graph as an adjacency matrix?
(d) Is an adjacency list faster or slower than an adjacency matrix for answering queries of the form, "is edge (u, v) in the graph?"
(e) Is an adjacency list faster or slower than an adjacency matrix for answering queries of the form, "are there any directed edges with u as the source node?"

(a) i. O(|V|)    ii. O(|V|^2)
(b) i. O(|V|^2)  ii. O(|V|^2)
(c) i. O(|V|)    ii. O(|V|^2)
(d) slower
(e) faster

4. (15 points)

(a) Draw a weighted undirected graph with exactly 3 nodes that has exactly 0 minimum spanning trees.
(b) Draw a weighted undirected graph with exactly 3 nodes that has exactly 1 minimum spanning tree.
(c) Draw a weighted undirected graph with exactly 3 nodes that has exactly 2 minimum spanning trees.
(d) Draw a weighted undirected graph with exactly 3 nodes that has exactly 3 minimum spanning trees.
(e) Argue in roughly 2-3 English sentences that no weighted undirected graph with 3 nodes can have more than 3 minimum spanning trees.

(a) Any unconnected, weighted, undirected graph with 3 nodes.
(b) The graph needs either 2 or 3 non-self edges; if it has 3 non-self edges, the most expensive edge must be strictly more expensive than the other two (for example, all three weights distinct).
(c) The graph needs 3 non-self edges, with exactly 2 of them having the same weight and the third edge having a lower weight.
(d) The graph needs 3 non-self edges, all with the same weight.
(e) A 3-node graph can have at most 3 spanning trees because a spanning tree needs 2 edges (the number of nodes minus 1). If the nodes are A, B, and C, the only possible spanning trees are {(A,B), (A,C)}, {(B,C), (A,C)}, and {(A,B), (B,C)}.

5. (7 points) Complete this Java method so that it properly implements insertion sort. Make sure the result is in-place (do not use another array).

void insertionsort(int[] array) {
    for (int i = 1; i < array.length; i++) {
        // YOUR CODE HERE
    } // end of the for-loop
}

Solution:

void insertionsort(int[] array) {
    for (int i = 1; i < array.length; i++) {
        int tmp = array[i];
        int j;
        for (j = i; j > 0 && tmp < array[j-1]; j--)
            array[j] = array[j-1];
        array[j] = tmp;
    }
}

6. (7 points) Complete this Java method so that it properly implements selection sort. Make sure the result is in-place (do not use another array). Notice there are two places to add code.

void selectionsort(int[] array) {
    for (int i = 0; i < array.length; i++) {
        int index_of_least = i;
        int j;
        for (j = i+1; j < array.length; j++) {
            // YOUR CODE HERE #1
        } // end of the inner for-loop
        // YOUR CODE HERE #2
    } // end of the outer for-loop
}

Solution:

void selectionsort(int[] array) {
    for (int i = 0; i < array.length; i++) {
        int index_of_least = i;
        int j;
        for (j = i+1; j < array.length; j++) {
            if (array[j] < array[index_of_least])
                index_of_least = j;
        }
        int tmp = array[i];
        array[i] = array[index_of_least];
        array[index_of_least] = tmp;
    }
}

Note: it is also okay to shift everything to the right to make room instead of swapping after the inner loop. This is more work and unnecessary, but it keeps the sort stable and does not change the asymptotic complexity.
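A minimal harness exercising the two sample solutions above; the wrapper class SortDemo, its static methods, and the sample data are our additions for illustration, not part of the exam.

```java
public class SortDemo {
    // Insertion sort from question 5, unchanged apart from being static.
    static void insertionsort(int[] array) {
        for (int i = 1; i < array.length; i++) {
            int tmp = array[i];
            int j;
            for (j = i; j > 0 && tmp < array[j - 1]; j--)
                array[j] = array[j - 1];
            array[j] = tmp;
        }
    }

    // Selection sort from question 6, unchanged apart from being static.
    static void selectionsort(int[] array) {
        for (int i = 0; i < array.length; i++) {
            int index_of_least = i;
            for (int j = i + 1; j < array.length; j++) {
                if (array[j] < array[index_of_least])
                    index_of_least = j;
            }
            int tmp = array[i];
            array[i] = array[index_of_least];
            array[index_of_least] = tmp;
        }
    }

    public static void main(String[] args) {
        int[] a = {31, 41, 59, 26, 53, 58, 97};
        int[] b = a.clone();
        insertionsort(a);
        selectionsort(b);
        System.out.println(java.util.Arrays.toString(a)); // prints [26, 31, 41, 53, 58, 59, 97]
        System.out.println(java.util.Arrays.equals(a, b)); // prints true
    }
}
```

Both sorts run in-place, matching the requirement in the questions; only the comparison pattern differs (insertion sort shifts, selection sort swaps).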

7. (12 points) Sorting short answer. In all questions about quick-sort, assume n is large enough that any use of a cut-off is not relevant.

(a) What is the worst-case asymptotic running time of heap-sort?
(b) What is the worst-case asymptotic running time of merge-sort?
(c) What is the worst-case asymptotic running time of quick-sort?
(d) Can heap-sort be done in-place?
(e) Can merge-sort be done in-place?
(f) Can quick-sort be done in-place?
(g) Consider one item in an array that is sorted with merge-sort. In asymptotic (big-oh) terms, how many times can that one item move to a different location?
(h) What is the asymptotic running time of quick-sort if the array is already sorted (or almost sorted) and the pivot-selection strategy picks the leftmost element in the range-to-be-sorted?
(i) What is the asymptotic running time of quick-sort if the array is already sorted (or almost sorted) and the pivot-selection strategy picks the rightmost element in the range-to-be-sorted?
(j) What is the asymptotic running time of quick-sort if the array is already sorted (or almost sorted) and the pivot-selection strategy picks the middle element in the range-to-be-sorted?
(k) Before starting a comparison sort for an array with n elements, how many possible orders are there for the elements in the final result?
(l) If at some point during a comparison sort, the algorithm has enough information to narrow the final result down to k possibilities, what is the least number of possibilities that remain after performing an additional comparison?

(a) O(n log n)  (b) O(n log n)  (c) O(n^2)  (d) yes  (e) no  (f) yes
(g) O(log n)  (h) O(n^2)  (i) O(n^2)  (j) O(n log n)  (k) n!  (l) k/2

8. (8 points) Suppose we use radix sort to sort the numbers below, using a radix of 10. Show the state of the sort after each of the first two passes, not after the sorting is complete. If multiple numbers are in a bucket, put the numbers that come first closer to the top.

Numbers to sort (in their initial order): 13, 105, 1492, 411, 1776, 1, 6, 2014, 99, 98, 96, 11, 21

After first pass (bucketed by ones digit):
bucket 0:
bucket 1: 411, 1, 11, 21
bucket 2: 1492
bucket 3: 13
bucket 4: 2014
bucket 5: 105
bucket 6: 1776, 6, 96
bucket 7:
bucket 8: 98
bucket 9: 99

After second pass (bucketed by tens digit):
bucket 0: 1, 105, 6
bucket 1: 411, 11, 13, 2014
bucket 2: 21
bucket 3:
bucket 4:
bucket 5:
bucket 6:
bucket 7: 1776
bucket 8:
bucket 9: 1492, 96, 98, 99
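The passes above can be reproduced with a short LSD radix-sort sketch (radix 10). The class name RadixDemo and the helper radixPass are our names for illustration, not from the exam.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of least-significant-digit radix sort with radix 10. Each pass
// distributes the numbers into 10 buckets by one digit, then reads the
// buckets back in order, preserving relative order within each bucket.
public class RadixDemo {
    static int[] radixPass(int[] nums, int divisor) {
        List<List<Integer>> buckets = new ArrayList<>();
        for (int b = 0; b < 10; b++)
            buckets.add(new ArrayList<>());
        for (int n : nums)
            buckets.get((n / divisor) % 10).add(n);  // stable: append in order seen
        int[] out = new int[nums.length];
        int i = 0;
        for (List<Integer> bucket : buckets)
            for (int n : bucket)
                out[i++] = n;
        return out;
    }

    public static void main(String[] args) {
        int[] nums = {13, 105, 1492, 411, 1776, 1, 6, 2014, 99, 98, 96, 11, 21};
        // Four passes cover the 4-digit maximum (2014): ones, tens,
        // hundreds, thousands. After the last pass the array is sorted.
        for (int divisor = 1; divisor <= 1000; divisor *= 10) {
            nums = radixPass(nums, divisor);
            System.out.println("after divisor-" + divisor + " pass: "
                               + java.util.Arrays.toString(nums));
        }
    }
}
```

The first two printed lines correspond to reading the pass-1 and pass-2 bucket tables above top to bottom, bucket 0 through bucket 9.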

9. (9 points)

(a) Suppose you have a (large) linked list and you want to perform some operation f on every element and some operation g on every element. Your first version of the code first does all the f operations in one list traversal and then all the g operations in a second traversal. Your second version does one traversal, doing f and then g for each element. Why might the second version be faster?
    i. temporal locality
    ii. spatial locality
    iii. it is faster asymptotically
    iv. none of the above

(b) Suppose you have a large array of n data items that are in a random order and you want to return a sample of 1% of them. Your first version of the code returns (a copy of) every 100th item (elements at index 0, 100, 200, etc.). Your second version returns (a copy of) the first 1% of the items (elements at index 0 up to n/100). Why might the second version be faster?
    i. temporal locality
    ii. spatial locality
    iii. it is faster asymptotically
    iv. none of the above

(c) Suppose you have a large linked list of n integers and you want to print them in reverse order (the numbers closer to the end of the list first). The first version of your code follows this algorithm:
    - Traverse the list from the beginning to determine what n is.
    - For i = n, n-1, n-2, ..., 1, traverse the list from the beginning to the i-th element and print it.
The second version of your code looks like this, calling printreverse on the first node in the list:

class ListNode {
    int x;
    ListNode next;
    void printreverse() {
        if (next != null)
            next.printreverse();
        System.out.println(x);
    }
}

Why might the second version be faster?
    i. temporal locality
    ii. spatial locality
    iii. it is faster asymptotically
    iv. none of the above

(a) i. temporal locality
(b) ii. spatial locality
(c) iii. it is faster asymptotically

10. (9 points)

(a) If you take a correct parallel program and comment out the lines that call a join method, which one of the following is most likely to happen?
    i. It could slow down because we will never have two threads execute at the same time.
    ii. It could slow down because the operating system will only assign one processor to run the program.
    iii. It could produce the wrong answer because the code that is supposed to combine answers from other threads will never run.
    iv. It could produce the wrong answer because the code that is supposed to combine answers from other threads will run before the locations it reads hold the correct data.

(b) For each of the following, indicate whether it can be computed with a parallel map operation, a parallel reduce operation, or neither:
    i. Replace each string in an array with an all-capitals version of the same string
    ii. The number of strings in an array that are not the empty string
    iii. The median of an array of numbers
    iv. The largest index in an array of numbers that holds a negative number

(c) According to Amdahl's Law, if 25% of a program's execution time is spent on operations that cannot be parallelized, then what is the maximum speed-up parallelism can bring to the overall program execution time?

(a) iv
(b) i. map  ii. reduce  iii. neither  iv. reduce
(c) 4
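For part (c), the bound can be checked numerically from Amdahl's Law: with serial fraction S and P processors, the overall speed-up is 1 / (S + (1 - S) / P). The class name AmdahlDemo and the sample processor counts are our illustrative choices.

```java
public class AmdahlDemo {
    // Amdahl's Law: speed-up with serial fraction S on P processors.
    static double speedup(double serialFraction, double processors) {
        return 1.0 / (serialFraction + (1.0 - serialFraction) / processors);
    }

    public static void main(String[] args) {
        double s = 0.25; // 25% of the work cannot be parallelized
        for (double p : new double[]{1, 4, 16, 256, 1e9})
            System.out.printf("P = %.0f: speedup = %.4f%n", p, speedup(s, p));
        // As P grows, the speed-up approaches 1 / 0.25 = 4 but never exceeds it,
        // which is the maximum speed-up asked for in part (c).
    }
}
```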