
Questions

1. Use mathematical induction to prove that, for any $n \geq 1$,
   $$\sum_{i=0}^{n-1} x^i = \frac{x^n - 1}{x - 1}.$$
   (Recall that I proved it in a different way back in lecture 2.)

2. Use mathematical induction to prove that, for all $n \geq 1$,
   $$1 + 3 + 5 + \cdots + (2n - 1) = n^2.$$

3. (a) Suppose you wish to count down the numbers from a given n down to 1. You can use a
       while loop to do this:

           countdown(n){
               while (n > 0) {
                   print n
                   n--
               }
           }

       Write a recursive version of this algorithm.

   (b) Now write a recursive countup(n) algorithm which counts up from 1 to n.

4. Write a recursive algorithm that prints out a consecutive sequence of elements in an array.
   The method has three parameters: the array name, and the first and last indices to print, i.e.

       displayArray( a, first, last )

   This should be easy, but there is one little gotcha there, so see if you can avoid it.

5. Here is an alternative reverse method for the SLinkedList<E> class which reverses the order
   of nodes in a singly linked list. (See the online code from the linked list exercises.) The
   method calls a recursive helper method reverseRecursiveHelper which does most of the work.
   Write this helper method. If your solution is different from the given one, then you should
   add your method to the SLinkedList class and test it!

       public void reverseRecursive(){
           SNode<E> oldHead = head;
           reverseRecursiveHelper(head);
           head = tail;      // swap head and tail
           tail = oldHead;
       }

6. Consider a simplified version of the game 20 questions, in which one person has a number in
   mind from 0 to $2^{20} - 1$. You can easily find that number with your twenty questions, e.g. by
   asking for the i-th bit of the number. (This is essentially binary search.)

   Here is a slightly different game. Suppose I am thinking of a positive integer n, but it can be
   any positive integer. It is easy for you to figure out the number using n questions, namely
   question i is "is it the number i?". Give a faster algorithm, namely one that runs in time
   proportional to $\log n$.

7. Suppose that you are given an array of n different numbers that strictly increase from index 0
   to index m, and strictly decrease from index m to index n-1, where n is known but m is
   unknown. Note that there is a unique largest number in such a list, and it is at index m.
   Here are a few examples.

                                     m     n
                                    ---   ---
       [ 3, 5, 17, 18, 21, 6 ]       4     6
       [ -3, 5 ]                     1     2
       [ 12, 7, 4, 2, -5 ]           0     5

   Provide the missing pseudocode below of a recursive algorithm that returns the index m of the
   largest number in the array, in time proportional to $\log n$. The algorithm is initially
   called with low = 0, high = n-1.

       findm( a, low, high ){        // array is a[ ], assume low <= high
           if (low == high)
               return low
           else{
               // ADD YOUR CODE HERE (AND ONLY HERE)
           }
       }

8. Recall the mergesort algorithm, which recursively partitions a list of size n into two
   sub-lists of half the size, sorts the two sub-lists, and then merges them. Show the order of
   the list elements after all merges of lists of size 1 to lists of size 2 have been completed,
   and then after all merges of lists of size 2 to lists of size 4 have been completed:

       ( 6, 5, 2, 8, 4, 3, 7, 1 )   original list, n=8
       (  ,  ,  ,  ,  ,  ,  ,   )   merges from n=1 to n=2 completed
       (  ,  ,  ,  ,  ,  ,  ,   )   merges from n=2 to n=4 completed
       ( 1, 2, 3, 4, 5, 6, 7, 8 )   final sorted list

9. I claimed in the lecture that the recursive method for power(x,n) takes about the same amount
   of time to run as the iterative method, even though the recursive method uses $O(\log_2 n)$
   multiplies and the iterative method uses $O(n)$ multiplies. The reason is that one cannot just
   consider the number of multiplications; one also needs to consider the number of digits being
   multiplied. Calculate a ballpark estimate of the amount of time it takes to compute $x^n$ using
   the recursive versus the iterative method. Assume that x is represented with M digits, and
   assume that the time it takes to multiply two numbers with $M_1$ and $M_2$ digits is
   $c M_1 M_2$, where c is some constant.

Answers

1. The base case is easy. Substitute $n = 0$ (an empty sum) and we get $0 = 0$, which is true.
   For the induction step, we hypothesize that
   $$\sum_{i=0}^{k-1} x^i = \frac{x^k - 1}{x - 1}$$
   for $k \geq 0$, and we want to show it follows from this hypothesis that
   $$\sum_{i=0}^{k} x^i = \frac{x^{k+1} - 1}{x - 1}.$$
   Take the left side of the last equation, and rewrite it:
   $$\sum_{i=0}^{k} x^i = \sum_{i=0}^{k-1} x^i + x^k = \frac{x^k - 1}{x - 1} + x^k \qquad \text{by the induction hypothesis}$$
   $$= \frac{x^k - 1}{x - 1} + x^k \left( \frac{x - 1}{x - 1} \right) = \frac{x^{k+1} - 1}{x - 1},$$
   which is what we wanted to show.

2. The base case of $n = 1$ is obvious, since there is only a single term on the left-hand side,
   i.e. $1 = 1^2$. The induction hypothesis is the statement $P(k)$:
   $$P(k): \quad 1 + 3 + 5 + \cdots + (2k - 1) = k^2.$$
   To prove the induction step, we show that if $P(k)$ is true, then $P(k+1)$ must also be true.
   As usual, we take the left side of the equation of $P(k+1)$:
   $$\sum_{i=1}^{k+1} (2i - 1) = 2(k+1) - 1 + \sum_{i=1}^{k} (2i - 1)$$
   $$= 2(k+1) - 1 + k^2 \qquad \text{by the induction hypothesis}$$
   $$= 2k + 1 + k^2 = (k+1)^2.$$
   Thus, the induction step is also proved, and so we're done.
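   As a quick numerical sanity check of these two identities (not part of the original solutions),
   the short Java sketch below evaluates both sides for a few values of n. The class name
   InductionCheck and the choice x = 2 are my own, purely for illustration.

       public class InductionCheck {
           public static void main(String[] args) {
               double x = 2.0;                          // any x != 1 works for the geometric sum
               for (int n = 1; n <= 6; n++) {
                   // question 1, left side: sum_{i=0}^{n-1} x^i
                   double sum = 0;
                   for (int i = 0; i < n; i++)
                       sum += Math.pow(x, i);
                   // question 1, right side: (x^n - 1)/(x - 1)
                   double closedForm = (Math.pow(x, n) - 1) / (x - 1);
                   System.out.println("geometric sum, n=" + n + ": " + sum + " vs " + closedForm);

                   // question 2: 1 + 3 + ... + (2n-1) versus n^2
                   int oddSum = 0;
                   for (int i = 1; i <= n; i++)
                       oddSum += 2 * i - 1;
                   System.out.println("odd sum,       n=" + n + ": " + oddSum + " vs " + (n * n));
               }
           }
       }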

3. (a)     countdown(n){
               if (n > 0) {
                   print n
                   countdown(n - 1)
               }
           }

       Did you remember the base case?

   (b)     countup(n){
               if (n > 0) {
                   countup(n - 1)
                   print n
               }
           }

       Did you get the order of the instructions correct?

4.     displayArray( a, first, last ){
           print( a[first] )
           if (first < last)                 // Did you include this condition?
               displayArray(a, first+1, last)
       }

   Alternatively, you could do it like this:

       displayArray( a, first, last ){
           if (first < last)
               displayArray(a, first, last-1)
           print( a[last] )
       }

5. The following recursively reverses the list from head.next, and then cleans up the references
   that involve the original head. Also note the base case, namely that the list has just one
   element.

       private void reverseRecursiveHelper(SNode<E> head){
           if (head.next != null){
               reverseRecursiveHelper(head.next);
               head.next.next = head;
               head.next = null;
           }
       }
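   If you want to run and test the pseudocode from answers 3 and 4 directly, one possible
   translation into plain Java is sketched below. The class name RecursionDemos and the use of an
   int[] array are my own choices, not part of the original exercises.

       public class RecursionDemos {

           // Question 3(a): print n, n-1, ..., 1
           static void countdown(int n) {
               if (n > 0) {
                   System.out.println(n);
                   countdown(n - 1);
               }
           }

           // Question 3(b): print 1, 2, ..., n (recurse first, then print)
           static void countup(int n) {
               if (n > 0) {
                   countup(n - 1);
                   System.out.println(n);
               }
           }

           // Question 4: print a[first..last]; assumes first <= last
           static void displayArray(int[] a, int first, int last) {
               System.out.println(a[first]);
               if (first < last)
                   displayArray(a, first + 1, last);
           }

           public static void main(String[] args) {
               countdown(3);
               countup(3);
               displayArray(new int[]{10, 20, 30, 40}, 1, 3);
           }
       }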

6. The idea of the algorithm is to guess increasing powers of 2 until the power of 2 is bigger
   than the number. Then do a binary search (backwards). Note that the guess is doubled before
   the comparison, so that when the loop exits we have guess/2 <= n < guess.

       guess = 1
       answer = false
       while (answer == false){
           guess = guess*2
           answer = (guess > n)       // i.e. evaluate the boolean expression
       }
       // To reach here, we must have guess/2 <= n < guess
       binarySearch(n, [guess/2, guess] )

   Both the while loop and the binary search take time proportional to $\log n$.

7. Here are two different solutions.

       // -------- SOLUTION 1 --------
       if (high-low == 1){
           if (a[low] < a[high])
               return high
           else
               return low
       }
       else{
           mid = (low + high)/2
           if (a[mid-1] < a[mid])
               return findm(a, mid, high)
           else
               return findm(a, low, mid)
       }

       // -------- SOLUTION 2 --------
       mid = (low + high)/2
       if (a[mid] < a[mid+1])
           return findm(a, mid+1, high)
       else
           return findm(a, low, mid)
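   For concreteness, here is one way answer 6 might be turned into runnable Java, with the other
   player's number simulated by a hidden value `secret` and each comparison standing in for one
   yes/no question. The class GuessNumber and the method find are my own sketch, not code from
   the course.

       public class GuessNumber {

           // Returns the hidden positive integer `secret` using O(log secret) comparisons.
           static int find(int secret) {
               // Phase 1: double a guess until it exceeds the secret.
               int guess = 1;
               while (guess <= secret)
                   guess = guess * 2;
               // Now guess/2 <= secret < guess.
               // Phase 2: ordinary binary search on the range [guess/2, guess).
               int low = guess / 2, high = guess - 1;
               while (low < high) {
                   int mid = (low + high) / 2;
                   if (mid < secret)          // question: "is your number bigger than mid?"
                       low = mid + 1;
                   else
                       high = mid;
               }
               return low;
           }

           public static void main(String[] args) {
               System.out.println(find(1));     // 1
               System.out.println(find(37));    // 37
               System.out.println(find(1024));  // 1024
           }
       }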

8. Below I have grouped the elements into four lists of size two, and then these four lists of
   size two are merged into two lists of size four.

       ( 5,6, 2,8, 3,4, 1,7 )    after merging lists of size 1 to lists of size 2
       ( 2,5,6,8, 1,3,4,7 )      after merging lists of size 2 to lists of size 4

9. First consider the iterative method. It computes $x \cdot x$ (giving a result with 2M digits),
   then multiplies the result by x (giving a result with 3M digits), then multiplies the result
   by x again, etc. The first multiply takes time $cM^2$. The second multiply ($x \cdot x^2$)
   takes time $cM(2M)$. The third multiply ($x \cdot x^3$) takes time $cM(3M)$. The last, or
   $(n-1)$-th, multiply ($x \cdot x^{n-1}$) takes time $cM((n-1)M)$. So the total time taken is
   $$cM \cdot M + cM(2M) + cM(3M) + \cdots + cM((n-1)M),$$
   which is
   $$cM^2 (1 + 2 + 3 + \cdots + (n-1)),$$
   or
   $$cM^2 \, \frac{n(n-1)}{2}.$$

   Next consider the recursive method. The main point to make here is that
   $x^n = x^{n/2} \cdot x^{n/2}$, and notice that $x^{n/2}$ has $\frac{n}{2}M$ digits. So
   multiplying $x^{n/2}$ by itself takes time $c(\frac{n}{2}M)(\frac{n}{2}M)$, which is roughly
   half the time it takes for the iterative method. But this ignores all the work that was needed
   to compute $x^{n/2}$ itself. In terms of $O(\cdot)$ ideas, you can already see that the
   recursive method for computing $x^n$ is no faster than the iterative method. And the recursive
   one might even be worse. (In fact, the two are the same. But I won't torture you with the
   proof of that. I'll consider myself satisfied if you get to here!)
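   For reference, the two power methods that answer 9 compares might look like the following in
   Java. The class and method names are mine, and BigInteger is used only so that the number of
   digits actually grows as the answer describes; this is a sketch of the cost comparison, not
   code from the course.

       import java.math.BigInteger;

       public class Power {

           // Iterative method: n-1 multiplications, but the left operand keeps growing.
           static BigInteger powerIterative(BigInteger x, int n) {
               BigInteger result = x;
               for (int i = 1; i < n; i++)
                   result = result.multiply(x);      // i-th multiply: (about iM digits) x (M digits)
               return result;
           }

           // Recursive method: O(log n) multiplications, but on very large operands.
           static BigInteger powerRecursive(BigInteger x, int n) {
               if (n == 1)
                   return x;
               BigInteger half = powerRecursive(x, n / 2);   // x^(n/2), about (n/2)M digits
               BigInteger result = half.multiply(half);      // x^(n/2) * x^(n/2)
               if (n % 2 == 1)
                   result = result.multiply(x);              // extra factor of x when n is odd
               return result;
           }

           public static void main(String[] args) {
               BigInteger x = BigInteger.valueOf(123456789);
               System.out.println(powerIterative(x, 13).equals(powerRecursive(x, 13)));  // true
           }
       }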